Doctoral theses, year of publication 2020 (234 results)
This study examines vacation photographs on Facebook and describes the socio-technical media practices that unfold around these photographs within the social media platform. Photographic practices are defined by active actions and social modes of use. Vacation photographs, for example, contribute to the structuring of travel routes and expectations, as genre-specific motifs and framings are reproduced and repeated with the help of media. Practices of showing, sharing, and communicating are also integrated into Facebook's user interfaces through social plug-ins (like/share buttons) and tagging functions. In this way, user activities and technical processes become interlinked. Using the automatic generation of vacation photographs on geotag pages as an example, the study shows that social tagging contributes to the emergence and negotiation of geographic spaces and notions of place. Through the technical structuring of photographs on tagging pages, genre-specific motifs, photographic trends, and aesthetics become especially visible. Their visualization, however, is also shaped by the algorithmic prioritization of individual content. Vacation photographs are thereby used for photographic profiling, since they enable the algorithmic capture and analysis of user information. The study shows that the use of image-recognition methods and photographic data analysis contributes to optimized information extraction and to a standardization of photographs.
Feminist Solidarities after Modulation produces an intersectional analysis of transnational feminist movements and their contemporary digital frameworks of identity and solidarity. Engaging media theory, critical race theory, and Black feminist theory, as well as contemporary feminist movements, this book argues that digital feminist interventions map themselves onto, and make use of, the multiplicity and ambiguity of digital spaces to question presentist and fixed notions of the internet as a white space and of technologies in general as objective or universal. Understanding these frameworks as colonial constructions of the human, the book traces identity to a socio-material condition that emerges with the modernity/colonialism binary. In the colonial moment, race and gender become the reasons for, as well as the effects of, technologies of identification, and thus need to be understood as and through technologies. What Deleuze has called modulation is accordingly not a merely present-day modality of control but is placed into a longer genealogy of imperial division, one that stands in opposition to feminist, queer, and anti-racist activism insisting on non-modular solidarities across seeming difference. At its heart, Feminist Solidarities after Modulation provides an analysis of contemporary digital feminist solidarities, which not only work at revealing the material histories and affective "leakages" of modular governance, but also challenge them by concentrating on forms of political togetherness that exceed a reductive or essentialist understanding of identity, solidarity, and difference.
It has frequently been observed that single emotional events are not only more efficiently processed, but also better remembered, and form longer-lasting memory traces than neutral material. However, when emotional information is perceived as part of a complex event, such as in the context of, or in relation to, other events and/or source details, the modulatory effects of emotion are less clear. The present work investigates how emotional contextual source information modulates the initial encoding and subsequent long-term retrieval of associated neutral material (item memory) and contextual source details (contextual source memory). To do so, a two-task experiment was used, consisting of an incidental encoding task, in which neutral objects were displayed over different contextual background scenes varying in emotional content (unpleasant, pleasant, and neutral), and a delayed retrieval task (1 week), in which previously encoded objects and new ones were presented. In a series of studies, behavioral indices (Studies 2, 3, and 5), event-related potentials (ERPs; Studies 1-4), and functional magnetic resonance imaging (Study 5) were used to investigate whether emotional contexts can rapidly tune the visual processing of associated neutral information (Study 1) and modulate long-term item memory (Study 2), how different recognition memory processes (familiarity vs. recollection) contribute to these emotion effects on item and contextual source memory (Study 3), whether the emotional effects on item memory can also be observed during spontaneous retrieval (Study 4), and which brain regions underpin the modulatory effects of emotional contexts on item and contextual source memory (Study 5). In Study 1, it was observed that emotional contexts, by means of emotional associative learning, can rapidly alter the processing of associated neutral information. Neutral items associated with emotional contexts (i.e.,
emotional associates), compared to neutral ones, showed enhanced perceptual and more elaborate processing after a single pairing, as indexed by larger amplitudes in the P100 and LPP components, respectively. Study 2 showed that emotional contexts produce longer-lasting memory effects, as evidenced by better item memory performance and larger ERP old/new differences for emotional associates. In Study 3, a mnemonic differentiation between item and contextual source memory was observed, which was modulated by emotion. Item memory was driven by familiarity, independently of the emotional contexts during encoding, whereas contextual source memory was driven by recollection and was better for emotional material. As in Study 2, enhancing effects of emotional contexts on item memory were observed in ERPs associated with recollection processes. Likewise, for contextual source memory, a pronounced recollection-related ERP enhancement was observed exclusively for emotional contexts. Study 4 showed that the long-term recollection enhancement of emotional contexts on item memory can be observed even when retrieval is not explicitly attempted, as measured with ERPs, suggesting that the emotion-enhancing effects on memory are not tied to the task performed during recognition, but to the motivational relevance of the triggering event. In Study 5, it was observed that the enhancing effects of emotional contexts on item and contextual source memory involve stronger engagement of brain regions associated with memory recollection, including areas of the medial temporal lobe, posterior parietal cortex, and prefrontal cortex.
Taken together, these findings suggest that emotional contexts rapidly modulate both the initial processing of associated neutral information and the subsequent long-term item and contextual source memories. The enhanced memory effects of emotional contexts are strongly supported by recollection rather than familiarity processes, and are triggered both when retrieval is explicitly attempted and when it occurs spontaneously. These results provide new insights into the modulatory role of emotional information on the visual processing and long-term recognition memory of complex events. The present findings are integrated into current theoretical models, and future research directions are discussed.
In the dissertation „Von der Studienaufnahme bis zum Studienabbruch" ("From Enrollment to Dropout"), the author seeks both to embed the phenomenon of student dropout in a theory of action and to examine it longitudinally, thereby making an important contribution to higher-education research. The overarching research questions are: "How can the decision process leading to dropout be described in action-theoretical terms?" and "To what extent can dropout be explained empirically by changes in key influencing factors, such as the student frame?" To answer these two questions, the author draws on the integrative theory of action and the model of frame selection by Hartmut Esser and Clemens Kroneberg. On this basis, the theoretical part of the thesis develops a model of reframing during the initial study phase, which describes the process of deciding to drop out or to remain enrolled. The choice of theory is justified by the current state of research on educational decisions in the sociology of education. Within the derived model, dropping out constitutes a further general educational decision, one that can be caused by a reframing of students' interpretive frame (the so-called frame) during the initial study phase. The model focuses on the decision process itself, describing how, and through which factors, the original frame under which the decision to enroll was made changes during the initial study phase, leading to a repeated educational decision. With the empirical part of the thesis, the author pursues two goals. First, the theoretical assumptions of the reframing model are tested using repeated surveys of students.
Second, the author uses these data to examine the dropout decision longitudinally for the first time in the German context. The empirical investigations comprise four sub-studies and follow the theoretical model chronologically. Sub-study I first tests the measurement quality of the operationalization of the student frame and then examines the determinants of the initial student frame at enrollment. Sub-study II investigates the match between initial expectations and the study conditions actually encountered during the initial study phase, explaining this match through individual and institutional factors. Sub-study III focuses on the extent and determinants of changes in the frame during the initial study phase. Finally, Sub-study IV offers, for the first time in German higher-education research, a longitudinal perspective focused on the temporal change of the student frame to explain dropout and retention. In the conclusion, the author discusses implications for the further development of the proposed theoretical model, for future research on dropout, and for university practice in designing the initial study phase and preventing dropout. For university practice in particular, she identifies five major themes: the extent of students' unmet expectations; the heterogeneity of student frames at enrollment; the large share of students who have changed their degree program at least once; the phase-specific change of the student frame and its significance for stabilizing or destabilizing the decision to study; and the importance of the content and quality of the initial study phase and of the degree program itself.
Finally, the author also argues for a destigmatization of dropping out.
From self-help books and nootropics, to self-tracking and home health tests, to tinkering with technology and biological particles – biohacking brings biology, medicine, and the material foundation of life into the sphere of »do-it-yourself«. This trend has the potential to fundamentally change people's relationship with their bodies and biology, but it also creates new cultural narratives of responsibility, authority, and differentiation. Covering a broad range of examples, this book explores practices and representations of biohacking in popular culture, discussing their ambiguous position between empowerment and requirement, promise and prescription.
What happens when distinct linguistic consciousnesses, beyond being separated by era, geographic area of origin, social differentiation, or other linguistic dimensions, also belong to different semiotic domains? This is what occurs every time we communicate online: digital interaction is the hybrid communicative sphere par excellence, in which the mixing of different languages overlaps with the mixing of different codes. Starting from the premise that new expressive needs and new communicative situations drive linguistic innovation, it is worth considering the prominence of the visual, and more generally multimodal, repertoire in the spontaneous use of new media, and noting that the particular strategies of meaning construction currently at work can no longer disregard these further dimensions. Their weight in the digital use of language must be kept in mind in order to face, without prejudice, all the novelties connected with it. A centrally important role in approaching verbal language on the internet is played by the indexical function of language, which, combined with a shared archive of world knowledge, triggers a new type of inferentiality in the recipient. Conversation on social networks enables actions that are not necessarily present in face-to-face exchange but are peculiar to Facebook, Twitter, G+, Instagram, Flickr, and social networks in general: the sharing of multimedia material of various kinds, the option of recalling messages relating to a specific topic, and the possibility of glossing them. Multimedia material thus becomes at once an integral part of communication and an expressive modality, the focus of discourse and a shared metaphorical language.
This research investigates how different, and apparently distant, fields of research can interact productively with the scientific landscape of the language, image, and communication sciences, arriving at an updated model of the linguistic hybridization that characterizes online communication.
The goal of regenerative medicine is to guide biological systems towards natural healing outcomes using a combination of niche-specific cells, bioactive molecules, and biomaterials. In this regard, mimicking the extracellular matrix (ECM) surrounding cells and tissues in vivo is an effective strategy to modulate cell behavior. Cellular function and phenotype are directed by the biochemical and biophysical signals present in the complex 3D network of ECMs, composed mainly of glycoproteins and hydrophilic proteoglycans. While cellular modulation in response to biophysical cues emulating ECM features has been investigated widely, the influence of the biochemical display of ECM glycoproteins mimicking their presentation in vivo is not well characterized. It remains a significant challenge to build artificial biointerfaces using ECM glycoproteins that precisely match their presentation in nature in terms of morphology, orientation, and conformation. This challenge becomes clear when one understands how ECM glycoproteins self-assemble in the body. Glycoproteins produced inside the cell are secreted into the extracellular space, where they are bound to the cell membrane or to other glycoproteins by specific interactions. This leads to elevated local concentration and 2D spatial confinement, resulting in self-assembly through the reciprocal interactions arising from the molecular complementarity encoded in the glycoprotein domains. In this thesis, the air-water (A-W) interface is presented as a suitable platform on which self-assembly parameters of ECM glycoproteins such as pH, temperature, and ionic strength can be controlled to simulate in vivo conditions (Langmuir technique), resulting in the formation of glycoprotein layers with defined characteristics.
The layer can be further compressed with surface barriers to enhance glycoprotein-glycoprotein contacts, and defined layers of glycoproteins can be immobilized on substrates by a horizontal lift-and-touch approach, the Langmuir-Schäfer (LS) method. Here, the benefit of the Langmuir and LS methods in achieving ECM glycoprotein biointerfaces with controlled network morphology and ligand density on substrates is highlighted and contrasted with the commonly used (glyco)protein solution deposition (SO) method. In general, (glyco)protein layer formation by SO is rather uncontrolled and strongly influenced by (glyco)protein-substrate interactions, resulting in multilayers and aggregates on substrates, whereas the LS method yields (glyco)protein layers with a more homogeneous presentation. To realize defined ECM layers on substrates, ECM glycoproteins with the ability to self-assemble were selected: collagen IV (Col-IV) and fibronectin (FN). A highly packed FN layer with uniform presentation of ligands was deposited on polydimethylsiloxane (PDMS) by the LS method, while a heterogeneous layer with prominent visible aggregates was formed on PDMS by SO. Mesenchymal stem cells (MSCs) on PDMS equipped with FN by LS exhibited more homogeneous and elevated vinculin expression and weaker stress-fiber formation than on PDMS equipped with FN by SO; these divergent responses could be attributed to the differences in glycoprotein presentation at the interface. Col-IV is a scaffolding component of the specialized ECM called the basement membrane (BM) and has the propensity to form 2D networks by cell-associated self-polymerization. Col-IV behaves as a thin, disordered network at the A-W interface. As the Col-IV layer was compressed at the A-W interface using trough barriers, there was negligible change in thickness (layer thickness ~ 50 nm) or in the orientation of the molecules.
The pre-formed organization of Col-IV was transferred by the LS method in a controlled fashion onto substrates meeting the wettability criterion (CA ≤ 80°). MSC adhesion (24 h) on PET substrates coated with Col-IV LS films transferred at surface pressures of 10, 15, and 20 mN·m⁻¹ was (12269.0 ± 5856.4) cells for LS10, (16744.2 ± 1280.1) cells for LS15, and (19688.3 ± 1934.0) cells for LS20, respectively. Remarkably, by selecting the areal density of Col-IV on the Langmuir trough, the number of MSCs adhering to PET increases linearly with the Col-IV ligand density. Further, FN has the ability to self-stabilize and form 2D networks (even without compression) while preserving its native β-sheet structure at the A-W interface on a defined subphase (pH = 2). This makes it possible to form such layers in any vessel (even in standard six-well culture plates), and the cohesive FN layers can be deposited by LS transfer without the need for expensive Langmuir-Blodgett instrumentation. Multilayers of FN can be immobilized on substrates by this approach as easily as with the layer-by-layer method, without the need for a secondary adlayer or an activated bare substrate. Thus, this facile glycoprotein coating strategy is accessible to many researchers for realizing defined FN films on substrates for cell culture. In conclusion, the Langmuir and LS methods can create biomimetic glycoprotein biointerfaces on substrates while controlling aspects of presentation such as network morphology and ligand density. These methods will be used to produce artificial BM mimics and interstitial ECM mimics composed of more than one ECM glycoprotein layer, serving as artificial niches instructing stem cells for future cell-replacement therapies.
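The claimed linearity between transfer surface pressure (a proxy for Col-IV ligand density) and adhering cell number can be checked against the mean counts quoted above with a simple least-squares fit. This is only an illustrative consistency check on the reported means (errors ignored), not part of the thesis analysis.

```python
# Least-squares fit of mean MSC adhesion counts vs. LS transfer surface
# pressure, using the three mean values reported in the abstract.
pressures = [10.0, 15.0, 20.0]          # surface pressure, mN/m
cells = [12269.0, 16744.2, 19688.3]     # mean adherent MSCs after 24 h

n = len(pressures)
mx, my = sum(pressures) / n, sum(cells) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(pressures, cells))
sxx = sum((x - mx) ** 2 for x in pressures)
syy = sum((y - my) ** 2 for y in cells)

slope = sxy / sxx                        # cells per (mN/m)
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5             # Pearson correlation coefficient

print(f"slope = {slope:.1f} cells per mN/m, r = {r:.3f}")
```

With only three points this is merely a sanity check, but the correlation coefficient of roughly 0.99 is consistent with the linear trend stated in the abstract.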
The Southern Central Andes (33°-36°S) are an excellent natural laboratory for studying orogenic deformation processes, where boundary conditions, such as the geometry of the subducted plate, impose an important control on the evolution of the orogen. At the same time, the South American plate presents a series of heterogeneities that additionally control the mode of deformation. This thesis aims to test the control of this latter factor on the construction of the Cenozoic Andean orogenic system.
From the integration of surface and subsurface information in the southern area (34-36°S), the evolution of Andean deformation over the steeply dipping subduction segment was studied. A structural model was developed, evaluating the stress state from the Miocene to the present day and its influence on the migration of magmatic fluids and hydrocarbons. Based on these data, together with data generated by other researchers in the northern zone of the study area (33-34°S), geodynamic numerical modeling was performed to test the hypothesis of a decisive role of upper-plate heterogeneities in the Andean evolution. Geodynamic codes (LAPEX-2D and ASPECT), which simulate the behavior of materials with elasto-visco-plastic rheologies under deformation, were used. The model results suggest that upper-plate contractional deformation is significantly controlled by the strength of the lithosphere, which is defined by the composition of the upper and lower crust and by the proportion of lithospheric mantle, which in turn is determined by previous tectonic events. In addition, previous regional tectono-magmatic events also defined the composition of the crust and its geometry, another factor that controls the localization of deformation. Accordingly, with a more felsic lower-crustal composition, deformation follows a pure-shear mode, while more mafic compositions induce a simple-shear deformation mode. It was also observed that the initial lithospheric thickness may fundamentally control the location of deformation, with zones of thin lithosphere prone to concentrating it. Finally, it was found that an asymmetric lithosphere-asthenosphere boundary, resulting from corner flow in the mantle wedge of the eastward-directed subduction zone, tends to generate east-vergent detachments.
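The link between lower-crustal composition and lithospheric strength can be illustrated with a toy yield-strength calculation (the minimum of a frictional and a power-law creep strength at each depth). All parameter values below, including the friction factor, the linear geotherm, and the creep parameters for a "wet quartzite"-like felsic and a "diabase"-like mafic lower crust, are assumed order-of-magnitude textbook values chosen for illustration; they are not the rheologies used in the thesis' LAPEX-2D/ASPECT models.

```python
# Toy yield-strength comparison of a felsic vs. a mafic lower crust.
# All parameters are illustrative assumptions, not thesis values.
import math

RHO = 2800.0          # crustal density, kg/m^3 (assumed)
G = 9.81              # gravity, m/s^2
STRAIN_RATE = 1e-15   # background strain rate, 1/s (assumed)
R = 8.314             # gas constant, J/(mol K)

def brittle_strength(z_m):
    """Byerlee-type frictional strength ~ beta * rho * g * z (beta ~ 3, thrust regime)."""
    return 3.0 * RHO * G * z_m  # Pa

def ductile_strength(z_m, A, n, Q, geotherm=20e-3):
    """Power-law dislocation creep: sigma = (edot/A)^(1/n) * exp(Q/(n*R*T))."""
    T = 273.0 + geotherm * z_m  # linear geotherm, 20 K/km (assumed)
    return (STRAIN_RATE / A) ** (1.0 / n) * math.exp(Q / (n * R * T))  # Pa

# Assumed creep parameters (A in Pa^-n s^-1, Q in J/mol):
felsic = dict(A=1e-28, n=4.0, Q=223e3)   # "wet quartzite"-like
mafic = dict(A=1e-26, n=3.4, Q=485e3)    # "diabase"-like

for z_km in (20, 30, 40):
    z = z_km * 1e3
    sf = min(brittle_strength(z), ductile_strength(z, **felsic))
    sm = min(brittle_strength(z), ductile_strength(z, **mafic))
    print(f"{z_km} km: felsic {sf/1e6:8.1f} MPa, mafic {sm/1e6:8.1f} MPa")
```

Under these assumptions the felsic lower crust creeps at a few tens of MPa or less, while the mafic case remains in the strong, friction-controlled regime, which is the sense in which composition sets where deformation localizes.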
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing from cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation, or "re-membrance", in repatriation, through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a "(web)site of memory" where some of the research findings are made available to a wider audience. This blog also hosts important sound material, which appears in the thesis as interventions by external contributors. Through QR codes, the written and digital versions are linked with each other to problematize the idea of a written monograph and to bring a polyphonic perspective to these diverse, yet connected, histories.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Owing to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and of the chloroplast gene expression machinery, making most of them essential for the viability of the organism. The regulation of these genes is dominated by translational adjustments. The powerful technique of ribosome profiling has been used successfully to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastid translation and ribosomal pausing sites have been addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and in the light. Chloroplast isolation was found to be unsuitable for the unbiased analysis of translation in the chloroplast, but adequate for identifying potential co-translational import. Affinity purification was performed for the small and large ribosomal subunits independently. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study to use enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts for studying ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, the contents of the photosynthetic complexes are adjusted for efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to the acclimation process has remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed on a genome-wide scale for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet psbA translation was increased two-fold in high light, from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light likewise produced increased translation only of psbA. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition were observed.
The political legacy of the Martinican poet, novelist, and philosopher Édouard Glissant (1928–2011) is the subject of an ongoing debate among postcolonial literary scholars. Responding to an influential view shaping this debate, namely that Glissant's work can be divided into an early political and a late apolitical phase, this dissertation claims that this division rests on a narrow conception of 'engaged political writing', one that prevents a more comprehensive view of the changing political strategies Glissant pursued throughout his life from emerging. Proceeding from this conceptual basis, the dissertation re-reads the dimensions of Glissant's work that have hitherto been dismissed as apolitical, literary, or poetic, with the aim of conceptualising the politics of relation as an integral part of his overall poetic project. In methodological terms, the dissertation therefore proposes a relational reading of Glissant's life-work across literary genres and epochs, as well as across the conventional divisions between political thought, writing, and activism. This perspective is informed by Glissant's philosophy of relation and draws on a conception of political practice that includes both explicit engagements with established political systems and institutions and literary and cultural interventions geared towards their transformation and the creation of alternatives to them. Theoretically, the work thus combines a poststructuralist lens on the conceptual difference between 'politics' and 'the political' with arguments for an inherent political quality of literature, and with perspectives from the Afro-Caribbean radical tradition, in which writers and intellectuals have historically sought to combine discursive interventions with organisational action.
Applying this theoretical angle to the analysis of Glissant's politics of relation results in an interdisciplinary research framework designed to explore the synergies between postcolonial political and literary studies.
In order to describe Glissant's politics of relation comprehensively, without recourse to evolutionary or digressive models, the concept of intellectual marronage is proposed as a framework to map the strategies making up Glissant's political archive. Drawing on a variety of historical, political-theoretical, and literary sources, intellectual marronage is understood as a mode of radical resistance to the neocolonial subjugation for which the plantation system stands historically and metaphorically, as an inherently innovative political practice invested in the creation of communities marked by relational ontologies, and as a commitment to fostering an imagination of the world and the human that differs fundamentally from the Enlightenment paradigm. This specific conception of intellectual marronage forms the basis on which three key strategies that consistently shape Glissant's political practice are identified and mapped. They revolve around Glissant's engagement with history (chapter 2), his commitment to fostering an imagination of the Tout-Monde (whole-world) as a political point of reference (chapter 3), and the continuous exploration of alternative forms of community on the levels of the island, the archipelago, and the Tout-Monde (chapter 4). Together these strategies constitute Glissant's personal politics of relation. Its abstract characteristics can be put into productive conversation with related theoretical traditions invested in exploring the political potentials of fugitivity (chapter 5), as well as with the work of other postcolonial actors whose holistic practice warrants description as a politics of relation (chapter 6).
Matthias Walden
(2020)
Matthias Walden (1927–1984) was one of the representatives of a political new beginning in German journalism after 1945. At the core of his political thinking stood the defense of liberal democracy, whose ideal substance Walden saw endangered both by personnel continuities between National Socialism and the Federal Republic of Germany and by the Neue Ostpolitik, which he perceived as ingratiation, as well as by the social protests of the 1960s and 1970s.
As a prominent editorial writer, he became an intellectual driving force, above all for the Axel Springer publishing house. Walden was convinced that dictatorships and totalitarian visions of society only ever looked as if they were made for eternity. During the Cold War, it was precisely the inhumanity of the communist regimes that gave him the certainty that they would one day disappear.
With his intellectual biography of Matthias Walden, Nils Lange presents the first comprehensive study of this combative journalist, tracing the origins of Walden's thinking in both politics and the history of ideas.
This thesis examines the special public interest in criminal prosecution (besonderes öffentliches Interesse an der Strafverfolgung) in its entirety. The first part investigates the formal aspects surrounding the special public interest, centring on the particularly problematic questions of whether it constitutes a procedural prerequisite and whether it is subject to judicial review. It is shown that the special public interest must actually exist and that its existence must be fully reviewable by the court seised of the case. The second part turns to the substantive interpretation of the special public interest. After a brief survey of the current state of research, the question is discussed whether the special public interest is to be understood as an intensified form of the general public interest, which is answered in the negative. The author then develops his own interpretive approach, which conceives of the special public interest as the result of a balancing of interests.
In Germany, the task of press self-regulation is performed by the German Press Council (Deutscher Presserat), which has faced continuous criticism since its foundation. This thesis asks whether the German Press Council's work to date meets the standards of successful self-regulation. Eleven proposed reforms are then examined for their legal feasibility and their effects on press oversight in Germany. In addition, the British model of press self-regulation is compared with the German one in order to identify differences and commonalities and to derive suggestions and improvements for the future. Because of the similar structures at their inception and the largely parallel existence of press councils in both countries, the British model is particularly well suited to a comparison with the German Press Council.
Die Auswirkungen der reformierten Psychotherapierichtlinie auf die ambulante Patientenversorgung
(2020)
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) due to their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intra-molecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their functions belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions. Optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the redoxome in plants. This study focusses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, confirmation of candidates at the single-protein level was carried out by a differential labelling approach: thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Amongst those were fructose 1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results showing redox-regulated proteins in Arabidopsis leaves, roots, mitochondria and specifically S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment experiments are effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, as did csp41b. Yet in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
A distance-education or e-learning platform should be able to provide a virtual laboratory that lets participants gain hands-on experience and practice their skills remotely. This is especially true in cybersecurity e-learning, where participants need to be able to attack or defend an IT system. For a hands-on exercise, the virtual laboratory environment must resemble the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory; a node is usually represented by a Virtual Machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning because a VM needs a significant and fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability can be increased by using the available resources more efficiently and by providing more resources; increasing scalability means increasing the number of simultaneous users.
In this thesis, we propose two approaches to using the available resources more efficiently. The first is to replace virtual machines (VMs) with containers wherever possible. The second is to share the load with the user's on-premise machine, which then represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources: using public cloud services, and gathering resources from the crowd, which we refer to as the Crowdresourcing Virtual Laboratory (CRVL).
In the CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on VMs. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, and remove them from, the CRVL system automatically. A team placement algorithm is also investigated to optimize resource usage while giving the best service to the users. Because the CRVL system does not manage the contributor's host machine, it must ensure that the VM integration will not harm that system and that the training material is stored securely on the contributor's side, so that no one can take the training material away without permission. We investigate ways to handle this kind of threat.
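The team placement mentioned above can be sketched as a greedy first-fit heuristic: larger teams are placed first so they are not squeezed out by fragmentation. This is an illustrative prototype, not the algorithm investigated in the thesis; the team names, team sizes and host capacities are invented.

```python
def place_teams(teams, hosts):
    """Greedy first-fit placement: assign each team (name, vm_count) to the
    first contributed host with enough free VM slots."""
    free = dict(hosts)                         # host -> remaining VM slots
    placement = {}
    # place large teams first so they are not blocked by fragmentation
    for name, size in sorted(teams, key=lambda t: -t[1]):
        for host, slots in free.items():
            if slots >= size:
                placement[name] = host
                free[host] -= size
                break
        else:
            placement[name] = None             # no single host can take the team
    return placement

teams = [("red", 4), ("blue", 2), ("green", 3)]
hosts = {"contrib-a": 5, "contrib-b": 4}
print(place_teams(teams, hosts))
```

A real placement would also weigh network locality and contributor trust; the sketch only captures the capacity constraint.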
We propose three approaches to harden a VM against a malicious host admin. To verify the integrity of a VM before integrating it into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, the host admin could access the VM's data in Random Access Memory (RAM) via live memory dumping or Spectre and Meltdown attacks. To make it harder for a malicious host admin to obtain sensitive data from RAM, we propose a method that continually moves the sensitive data around in RAM. We also propose monitoring the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. The use case for our approach is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning, which we use as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
Subsea permafrost is perennially cryotic earth material that lies offshore. Most submarine permafrost is relict terrestrial permafrost beneath the Arctic shelf seas, was inundated after the last glaciation, and has been warming and thawing ever since. It is a reservoir and confining layer for gas hydrates and has the potential to release greenhouse gases and affect global climate change. Furthermore, subsea permafrost thaw destabilizes coastal infrastructure. While numerous studies focus on its distribution and rate of thaw over glacial timescales, these studies have not been brought together and examined in their entirety to assess rates of thaw beneath the Arctic Ocean. In addition, there is still a large gap in our understanding of sub-aquatic permafrost processes on finer spatial and temporal scales. The degradation rate of subsea permafrost is influenced by the initial conditions upon submergence. Terrestrial permafrost that has already undergone warming, partial thawing or loss of ground ice may react differently to inundation by seawater compared to previously undisturbed ice-rich permafrost. Heat conduction models are sufficient to model the thaw of thick subsea permafrost from the bottom, but few studies have included salt diffusion for top-down chemical degradation in shallow waters characterized by mean annual cryotic conditions on the seabed. Simulating salt transport is critical for assessing degradation rates for recently inundated permafrost, which may accelerate in response to warming shelf waters, a lengthening open water season, and faster coastal erosion rates. In the nearshore zone, degradation rates are also controlled by seasonal processes like bedfast ice, brine injection, seasonal freezing under floating ice conditions and warm freshwater discharge from large rivers. The interplay of all these variables is complex and needs further research. 
To fill this knowledge gap, this thesis investigates sub-aquatic permafrost along the southern coast of the Bykovsky Peninsula in eastern Siberia. Sediment cores and ground temperature profiles were collected at a freshwater thermokarst lake and two thermokarst lagoons in 2017. At this site, the coastline is retreating, and seawater is inundating various types of permafrost: sections of ice-rich Pleistocene permafrost (Yedoma) cliffs at the coastline alternate with lagoons and lower elevation previously thawed and refrozen permafrost basins (Alases). Electrical resistivity surveys with floating electrodes were carried out to map ice-bearing permafrost and taliks (unfrozen zones in the permafrost, usually formed beneath lakes) along the diverse coastline and in the lagoons. Combined with the borehole data, the electrical resistivity results permit estimation of contemporary ice-bearing permafrost characteristics, distribution, and occasionally, thickness. To conceptualize possible geomorphological and marine evolutionary pathways to the formation of the observed layering, numerical models were applied. The developed model incorporates salt diffusion and seasonal dynamics at the seabed, including bedfast ice. Even along coastlines with mean annual non-cryotic boundary conditions like the Bykovsky Peninsula, the modelling results show that salt diffusion minimizes seasonal freezing of the seabed, leading to faster degradation rates compared to models without salt diffusion. Seasonal processes are also important for thermokarst lake to lagoon transitions because lagoons can generate cold hypersaline conditions underneath the ice cover. My research suggests that ice-bearing permafrost can form in a coastal lagoon environment, even under floating ice. Alas basins, however, may degrade more than twice as fast as Yedoma permafrost in the first several decades of inundation. 
In addition to a lower ice content compared to Yedoma permafrost, Alas basins may be pre-conditioned with salt from adjacent lagoons. Considering the widespread distribution of thermokarst in the Arctic, its integration into geophysical models and offshore surveys is important to quantify and understand subsea permafrost degradation and aggradation. Through numerical modelling, fieldwork, and a circum-Arctic review of subsea permafrost literature, this thesis provides new insights into sub-aquatic permafrost evolution in saline coastal environments.
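The interplay of heat conduction and salt diffusion described above can be illustrated with a minimal one-dimensional finite-difference sketch: seawater at the seabed warms the sediment and salt diffuses downwards, depressing the freezing point so that cryotic sediment near the top becomes unfrozen. All parameter values (diffusivities, salinities, the freezing-point coefficient) are rough textbook numbers, not the model or site data of this thesis.

```python
import numpy as np

def simulate(nz=100, dz=0.1, dt=3600.0, years=5,
             kappa=1e-6, d_salt=1e-9, t_sea=-1.0, s_sea=35.0):
    """Explicit 1-D heat conduction + salt diffusion below the seabed.
    Returns temperature, salinity, and the depth of the thawed zone,
    i.e. where T exceeds the salinity-depressed freezing point."""
    T = np.full(nz, -5.0)            # initial permafrost temperature [deg C]
    S = np.zeros(nz)                 # initial pore-water salinity [g/kg]
    for _ in range(int(years * 365 * 24)):
        T[0], S[0] = t_sea, s_sea    # seabed boundary condition
        T[1:-1] += kappa * dt / dz**2 * np.diff(T, 2)
        S[1:-1] += d_salt * dt / dz**2 * np.diff(S, 2)
    t_freeze = -0.054 * S            # freezing-point depression [deg C]
    thawed = T > t_freeze            # cryotic but unfrozen where salt intruded
    return T, S, float(np.sum(thawed) * dz)

T, S, thaw_depth = simulate()
print(f"thaw front after 5 model years: {thaw_depth:.1f} m")
```

Running the same sketch with `s_sea=0` yields no thaw at all, which mirrors the thesis' point that omitting salt transport underestimates top-down degradation under mean annual cryotic seabed temperatures.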
This study analysed the direct relationship between an occupation-oriented social group work programme and the outcome of occupational reintegration among rehabilitation patients facing particular occupational problems. It was funded by the Deutsche Rentenversicherung Bund as a research project from 1 January 2013 to 31 December 2015 and carried out at the Chair of Rehabilitation Sciences at the University of Potsdam.
The research question was: Can an intensive social-work group intervention within inpatient medical rehabilitation strengthen patients' social competencies and social support to such an extent that long-term improvements in occupational reintegration result compared with conventional treatment?
The study comprised a qualitative and a quantitative survey with an intervention in between. It included 352 patients aged between 18 and 65 with cardiovascular diagnoses, whose conditions are frequently accompanied by complex problem situations and a poor socio-medical prognosis.
The group intervention was evaluated in a cluster-randomised controlled study design in order to provide empirical evidence of whether the intervention achieves greater effects than regular social-work treatment. The intervention groups took part in the group programme; the control groups received the regular social-work treatment.
In this sample, no evidence could be found that participation in the social-work group programme improved occupational reintegration, health-related work ability, quality of life, or social support. The return-to-work rate was 43.7%, and a quarter of the study group was unemployed after one year. The group intervention must therefore be regarded as equivalent to the conventional setting of social work.
The study concludes that social-work support for occupational reintegration after a cardiovascular illness should extend over a longer period, in particular through services close to patients' homes at a later point in time, when their health is more stable. The surveys suggested that closer cooperation between social work and psychology might yield better results. There were also indications of the influential role of relatives, who could support the reintegration process if they were involved in social counselling. The fit of the group interventions studied should be improved through targeted social diagnostics.
Die Ordnung der Religionen
(2020)
From 1614 to 1626, the Roman nobleman and humanist Pietro Della Valle travelled through the Ottoman Empire, Persia and India. In a time of upheaval, he sought new alliances for Rome: against the Reformers, he wanted to restore unity with the Oriental Christians; against the Ottomans, he sought to forge an alliance with the Shiite Shah Abbas I. His travel account, the "Viaggi" (3 parts, 1650–63), documents his ambitions and contains extensive commentary on many religions of Asia that stood at the centre of interest then, as they do today.
In the form of a history of concepts and ideas, I examine what role religion plays in the conflicts of the time as portrayed in the "Viaggi", and how Della Valle engages with Asia's great religious diversity. What hopes and fears does he associate with the various religions? What strategies does he pursue with regard to them? Where does he draw boundaries, and where does he build bridges?
Gold at the nanoscale
(2020)
In this cumulative dissertation, I present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have emerged as promising components for light-based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more applications.
This work exhibits the articles I authored or co-authored in my time as PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles since optical excitation always generates heat. This heat can induce a change in the optical properties, but also mechanical changes up to melting can occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles' breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles' temperature. Particle melting was investigated with surface enhanced Raman spectroscopy and SEM imaging, demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. In this way, readers without specialist knowledge also get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems' permittivities are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
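The anticrossing of the hybrid light–matter modes mentioned above can be illustrated with the standard two-level coupled-oscillator model: diagonalising a 2×2 Hamiltonian yields the upper and lower polariton branches, which repel each other by at least 2g. The energies and coupling strength below are illustrative values in eV, not measured ones from the articles.

```python
import numpy as np

def polariton_branches(e_plasmon, e_exciton, g):
    """Eigenenergies of the 2x2 coupled-oscillator Hamiltonian:
    E± = (Ep + Ex)/2 ± sqrt(g^2 + (Ep - Ex)^2 / 4)."""
    delta = e_plasmon - e_exciton                  # detuning
    mean = 0.5 * (e_plasmon + e_exciton)
    split = np.sqrt(g**2 + 0.25 * delta**2)
    return mean + split, mean - split              # upper, lower branch

e_x, g = 2.1, 0.05                                 # exciton at 2.1 eV
for e_p in (1.9, 2.0, 2.1, 2.2, 2.3):              # tune the plasmon across it
    up, lo = polariton_branches(e_p, e_x, g)
    print(f"E_pl={e_p:.2f} eV -> E+={up:.3f}, E-={lo:.3f}, split={up-lo:.3f} eV")
```

The minimum splitting, reached at zero detuning, equals the vacuum Rabi splitting 2g; tracing it while tuning the plasmon resonance is what an anticrossing measurement maps out.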
Recent advances in microscopy have led to an improved visualization of different cell processes. Yet, this also leads to a higher demand for tools which can process images in an automated and quantitative fashion. Here, we present two applications that were developed to quantify different processes in eukaryotic cells that rely on the organization and dynamics of the cytoskeleton. In plant cells, microtubules and actin filaments form the backbone of the cytoskeleton. These structures support cytoplasmic streaming, cell wall organization and trafficking of cellular material to and from the plasma membrane. To better understand the underlying mechanisms of cytoskeletal organization, dynamics and coordination, frameworks for their quantification are needed. While this is fairly well established for microtubules, the actin cytoskeleton has remained difficult to study due to its highly dynamic behaviour. One aim of this thesis was therefore to provide an automated framework for quantifying and describing actin organization and dynamics. We used the framework to represent actin structures as networks and examined transport efficiency in Arabidopsis thaliana hypocotyl cells. Furthermore, we applied the framework to determine the growth mode of cotton fibers and compared the actin organization in wild-type and mutant cells of rice. Finally, we developed a graphical user interface for easy usage. Microtubules and the actin cytoskeleton also play a major role in the morphogenesis of epidermal leaf pavement cells. These cells have highly complex and interdigitated shapes which are hard to describe in a quantitative way. While the relationship between microtubules, the actin cytoskeleton and shape formation is the subject of many studies, it is still not clear whether and how the cytoskeletal components predefine indentations and protrusions in pavement cell shapes.
To understand the underlying cell processes which coordinate cell morphogenesis, a quantitative shape descriptor is needed. Therefore, the second aim of this thesis was the development of a network-based shape descriptor which captures global and local shape features, facilitates shape comparison, and can be used to evaluate shape complexity. We demonstrated that our framework can be used to describe and compare shapes from various domains. In addition, we showed that the framework accurately detects local shape features of pavement cells and outperforms competing approaches. In the third part of the thesis, we extended the shape description framework to describe pavement cell shape features at the tissue level by proposing different network representations of the underlying imaging data.
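As a toy illustration of turning a shape into a network (not the descriptor developed in the thesis), contour points can become nodes, joined by "shortcut" edges wherever two points lie much closer in space than along the contour; such edges flag lobes and indentations, so their count is a crude complexity proxy. The contours and the threshold ratio below are invented examples.

```python
import numpy as np

def shortcut_edges(points, ratio=0.4):
    """Edges (i, j) between contour samples whose straight-line (chord)
    distance is much shorter than their distance along the closed contour."""
    n = len(points)
    # cumulative arc length along the closed contour
    seg = np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    total = arc[-1]
    edges = []
    for i in range(n):
        for j in range(i + 2, n):
            chord = np.linalg.norm(points[i] - points[j])
            along = min(arc[j] - arc[i], total - (arc[j] - arc[i]))
            if chord < ratio * along:
                edges.append((i, j))
    return edges

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
star = np.c_[(1 + 0.6 * np.cos(5 * theta)) * np.cos(theta),
             (1 + 0.6 * np.cos(5 * theta)) * np.sin(theta)]
print("circle shortcuts:", len(shortcut_edges(circle)))
print("star shortcuts:  ", len(shortcut_edges(star)))
```

A convex circle produces no shortcut edges, while a lobed star produces many, illustrating how a network view separates simple from interdigitated outlines.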
To uncover their aesthetic and structural similarities to television programming, Christian Richter analyses in detail the medial stagings of Netflix and YouTube. The keywords "flow", "seriality", "liveness" and "address" serve as central points of orientation. Answers are supplied by established television theories as well as by multifaceted and trivial examples, ranging from the ZDF-Fernsehgarten and old horror films, via the Super Bowl and lonely train journeys through Norway, to BibisBeautyPalace and House of Cards. What finally emerges is a state of TELEVISION that can be understood as a new version of it.
Research on novel and advanced biomaterials is an indispensable step towards their applications in desirable fields such as tissue engineering, regenerative medicine, cell culture, or biotechnology. The work presented here focuses on such a promising material: polyelectrolyte multilayer (PEM) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It serves as a mimic of the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make the HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessment of multi-fractional diffusion in multilayers (first part) and at control of local molecular transport into or from the multilayers by laser light trigger (second part).
The mechanism of loading and release is governed by the interaction of bioactives with the multilayer constituents and by diffusion overall. The diffusion of a molecule in HA/PLL multilayers shows multiple fractions with different diffusion rates. Approaches that can assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach to evaluating data acquired by the fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are first simulated and then compared with the acquired data while the model parameters are optimized until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are used as test samples to validate the analysis results. The diffusion of the protein cytochrome c in HA/PLL multilayers is evaluated as well.
This tool significantly broadens the possibilities of analysis of spatiotemporal FRAP data, which originate from multi-fractional diffusion, while striving to be widely applicable. This tool has the potential to elucidate the mechanisms of molecular transport and empower rational engineering of the drug release systems.
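The simulate-and-match idea can be caricatured with a two-fraction recovery model and a brute-force parameter scan: candidate recovery curves are generated and compared against a "measured" curve, and the best-matching parameters win. The single-exponential recovery per fraction and all parameter values are simplifying assumptions for illustration, not the actual simulation engine of the tool.

```python
import numpy as np

def recovery(t, frac_fast, tau_fast, tau_slow, immobile=0.1):
    """Normalized FRAP recovery of a mobile pool split into a fast and a
    slow fraction, plus an immobile fraction that never recovers."""
    mobile = 1.0 - immobile
    fast = frac_fast * (1 - np.exp(-t / tau_fast))
    slow = (1 - frac_fast) * (1 - np.exp(-t / tau_slow))
    return mobile * (fast + slow)

t = np.linspace(0, 100, 200)
measured = recovery(t, 0.6, 2.0, 30.0)       # synthetic "measured" curve

best, best_err = None, np.inf
for ff in np.linspace(0.1, 0.9, 17):         # brute-force parameter scan
    for tf in (0.5, 1.0, 2.0, 4.0):
        for ts in (10.0, 30.0, 60.0):
            err = np.sum((recovery(t, ff, tf, ts) - measured) ** 2)
            if err < best_err:
                best, best_err = (ff, tf, ts), err
print("recovered parameters:", best)
```

A real implementation would simulate spatially resolved recovery and use a proper optimizer, but the scan shows how multi-fractional mobility parameters can be recovered by matching simulations to data.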
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. This release system comprises layers of various functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is an HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls local molecular transport through the PEM–solution interface. The possibility of stimulating this molecular transport by near-infrared (NIR) laser irradiation is explored.
Of several tested approaches for depositing hybrids onto the PEM surface, a drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while the sandwich remains stable. Further, the gold nanorods were shown to effectively absorb light in the tissue- and cell-friendly NIR spectral region while transducing the energy of the light into heat; rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir within a few seconds with a partition constant of 2.4, while it is spontaneously released in a slower, sustained manner. Local laser irradiation of a sandwich containing fluorescein isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the well-known photoresponsivity of the hybrids in an innovative setting. The results of this research are a step towards a spatially controlled, on-demand drug release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate the molecular dynamics in ECM and to foster engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to the cells.
Bank filtration is an effective water treatment technique and is widely adopted in Europe along major rivers. It is the process in which surface water penetrates the riverbed, flows through the aquifer, and is then extracted by near-bank production wells. As the water flows along this subsurface passage, its quality is improved by a series of beneficial processes. Long-term riverbank filtration also produces colmation layers on the riverbed. The colmation layer may act as a bioactive zone governed by biochemical and physical processes owing to its enrichment in microbes and organic matter. Its low permeability, however, may strongly limit surface water infiltration and further lead to a decreasing recovery ratio of the production wells. The removal of the colmation layer is therefore a trade-off between treatment capacity and treatment efficiency. The goal of this Ph.D. thesis is to examine the temporal and spatial changes of water quality and quantity along the flow path of a hydrogeologically heterogeneous riverbank filtration site adjacent to an artificially reconstructed (bottom excavation and bank reconstruction) canal in Potsdam, Germany.
To quantify the changes in infiltration rate, travel time distribution, and thermal field brought about by the canal reconstruction, a three-dimensional flow and heat transport model was created. This model has two scenarios: 1) 'with' canal reconstruction and 2) 'without' canal reconstruction. Overall, the model calibration results for both water heads and temperatures matched those observed in the field study. In comparison to the model without reconstruction, the reconstruction model led to more water being infiltrated into the aquifer along that section, on average 521 m³/d, which corresponded to around 9% of the total pumping rate. The subsurface travel-time distribution shifted substantially towards shorter travel times: flow paths with travel times <200 days increased by ~10% and those with <300 days by 15%. Furthermore, the thermal distribution in the aquifer showed that the seasonal variation in the scenario with reconstruction reaches deeper and propagates further laterally.
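For orientation, the seepage velocity and travel time behind such figures follow from Darcy's law: the Darcy flux is conductivity times head gradient, and dividing by the effective porosity gives the pore velocity along the flow path. The numbers below are generic sandy-aquifer values for illustration, not the calibrated parameters of this site.

```python
# Back-of-envelope bank-filtration travel time via Darcy's law.
# All values are illustrative textbook numbers, not site data.
K = 1e-4          # hydraulic conductivity [m/s]
gradient = 0.005  # head gradient canal -> production well [-]
porosity = 0.3    # effective porosity [-]
distance = 100.0  # flow-path length from bank to well [m]

q = K * gradient                       # Darcy flux [m/s]
v = q / porosity                       # seepage (pore) velocity [m/s]
travel_days = distance / v / 86400.0   # advective travel time [d]
print(f"seepage velocity: {v * 86400:.3f} m/d, travel time: {travel_days:.0f} d")
```

Changing the gradient or the flow-path length by the amounts a reconstruction induces shifts the travel time proportionally, which is the effect the 3-D model quantifies in full.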
By scatter plotting δ18O versus δ2H, the infiltrated river water could be differentiated from water flowing in the deep aquifer, which may contain remnant landside groundwater from further north. In addition, the increase of the river water contribution due to decolmation could be shown in a Piper plot. Geological heterogeneity caused substantial spatial differences in redox zonation among different flow paths, both horizontally and vertically. According to the Wilcoxon rank test, the reconstruction changed the redox potential differently across observation wells; however, given the small absolute concentration levels, the changes are relatively minor. The treatment efficiency for both organic and inorganic matter is consistent after the reconstruction, except for ammonium. The inconsistent results for ammonium could be explained by changes in the Cation Exchange Capacity (CEC) of the newly paved riverbed. Because the bed is new, it was not yet capable of retaining the newly produced ammonium by sorption, which led to the breakthrough of the ammonium plume. By estimation, the peak of the ammonium plume will reach the most distant observation well before February 2024, while the peak concentration will be further dampened by sorption and diluted by the subsequent low-ammonium flow. The consistent DOC and SUVA levels suggest that there was no clear preference in organic matter removal along the flow path.
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2Ddt of the mean squared displacement (MSD), with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). The behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
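The first hallmark property can be checked numerically. The following is a generic sketch (not taken from the thesis; all parameter values are illustrative) that simulates an ensemble of two-dimensional Brownian trajectories and recovers the diffusion coefficient from the linear MSD law ⟨x²(t)⟩ = 2Ddt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of 2D Brownian trajectories: x(t+dt) = x(t) + sqrt(2*D*dt)*xi
D, d, dt, n_steps, n_traj = 0.5, 2, 0.01, 1000, 2000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps, d))
paths = np.cumsum(steps, axis=1)

# Ensemble-averaged MSD, summed over both spatial dimensions
t = dt * np.arange(1, n_steps + 1)
msd = np.mean(np.sum(paths**2, axis=2), axis=0)

# Fit msd = 2*d*D_est*t: the fitted D_est should be close to the input D
D_est = np.polyfit(t, msd, 1)[0] / (2 * d)
print(f"D_est = {D_est:.3f}")
```

With a few thousand trajectories the fitted coefficient agrees with the input D to within a few percent; the second hallmark, Gaussianity of the displacements, can be checked on the same ensemble.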
This dissertation explores the field of random diffusivity models in depth. Starting from a chronological description of the main approaches used to describe BNG and ANG diffusion, different mathematical methodologies are defined for the solution and study of these models. The processes reported in this work can be classified into three subcategories, (i) randomly-scaled Gaussian processes, (ii) superstatistical models, and (iii) diffusing-diffusivity models, all belonging to the more general class of random diffusivity models. The study then focuses mainly on BNG diffusion, which is by now well established and relatively well understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the scenarios known so far for the study of this class of processes.
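The superstatistical subcategory can be illustrated with a standard textbook construction (a generic example, not a specific model from the thesis): if each trajectory's diffusivity is drawn from an exponential distribution, the ensemble MSD stays linear while the displacement distribution becomes a Laplace distribution, i.e. Brownian yet non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)

# Superstatistics: each trajectory i gets a fixed random diffusivity D_i
# drawn from an exponential distribution with mean D0.  Conditionally on
# D_i the displacement is Gaussian, but the ensemble mixture is Laplace.
D0, t, n_traj = 1.0, 1.0, 200_000
D_i = rng.exponential(D0, size=n_traj)
x = rng.normal(0.0, np.sqrt(2 * D_i * t))  # one displacement per trajectory

msd = np.mean(x**2)                      # theory: 2*D0*t  (Brownian scaling)
kurt = np.mean(x**4) / np.mean(x**2)**2  # theory: 3 for Gaussian, 6 for Laplace
print(f"MSD = {msd:.3f}, kurtosis = {kurt:.2f}")
```

The kurtosis near 6 instead of the Gaussian value 3 is the non-Gaussian fingerprint, even though the MSD is indistinguishable from ordinary Brownian motion.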
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the moment-generating function is first provided to obtain standard statistical properties of the models. The discussion then moves to power spectral analysis and first-passage statistics for particular random diffusivity models. The results obtained with the random diffusivity approach are compared with those for standard Brownian motion, which also outlines a deeper physical understanding of the systems described by random diffusivity models.
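The Brownian baseline for the power spectral analysis can be sketched numerically: a single-trajectory periodogram of Brownian motion exhibits the well-known ~1/f² scaling against which random-diffusivity spectra are compared. The following is a generic numerical sketch with illustrative parameters, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# One long Brownian trajectory, x(t+dt) = x(t) + sqrt(2*D*dt)*xi
D, dt, n = 1.0, 1.0, 2**16
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

# Periodogram estimate of the single-trajectory power spectral density
f = np.fft.rfftfreq(n, dt)[1:]
S = (dt / n) * np.abs(np.fft.rfft(x))[1:] ** 2

# Log-log slope over a mid-frequency band; for Brownian motion the
# expected scaling S(f) ~ 1/f^2 gives a slope close to -2
band = (f > 1e-3) & (f < 1e-1)
slope = np.polyfit(np.log(f[band]), np.log(S[band]), 1)[0]
print(f"spectral slope = {slope:.2f}")
```

Deviations from this slope, or from the Brownian first-passage statistics, are the kind of signatures the dissertation uses to distinguish random diffusivity processes from standard Brownian motion.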
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
Organizations continue to assemble and rely upon teams of remote workers as an essential element of their business strategy; however, knowledge processing is particularly difficult in such isolated, largely digitally mediated settings. The great challenge for a knowledge-based organization lies not in how individuals should interact using technology but in how to achieve effective cooperation and knowledge exchange. To date, more attention has been paid to technology and to the difficulties machines have processing natural language, and less to the human aspect: the influence of our own individual cognitive abilities and preferences on the processing of information when interacting online. This thesis draws on four scientific domains involved in interpreting and processing massive, unstructured data (knowledge management, linguistics, cognitive science, and artificial intelligence) to build a model that offers a reliable way to address the ambiguous nature of language and improve workers' digitally mediated interactions. Human communication can be discouragingly imprecise and is characterized by strong linguistic ambiguity, which represents an enormous challenge for the computer analysis of natural language. In this thesis, I propose and develop a new data-interpretation layer for the processing of natural language based on the cognitive preferences of the conversants themselves. Such a semantic analysis merges information derived from the content, from the associated social and individual contexts, and from the social dynamics that emerge online. At the same time, assessment taxonomies are used to analyze online comportment at the individual and community level in order to identify characteristics leading to greater effectiveness of communication.
Measurement patterns for identifying effective methods of individual interaction with regard to individual cognitive and learning preferences are also evaluated; a novel Cyber-Cognitive Identity (CCI), a perceptual profile of an individual's cognitive and learning styles, is proposed. Accommodating such cognitive preferences can greatly facilitate knowledge management in the geographically dispersed, collaborative digital environment. Use of the CCI is proposed for cognitively labeled Latent Dirichlet Allocation (CLLDA), a novel method for automatically labeling and clustering knowledge that relies not solely on probabilistic methods but on a fusion of machine learning algorithms and the cognitive identities of the individuals interacting in a digitally mediated environment. Advantages include greater perspicuity of dynamic and meaningful cognitive rules, leading to greater tagging accuracy, and higher content portability at the sentence, document, and corpus level in digital communication.
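For orientation, the probabilistic step that CLLDA builds on is standard Latent Dirichlet Allocation. The sketch below shows only that baseline step with a toy corpus; the documents, topic count, and parameters are illustrative assumptions, and the cognitive-labeling extension that constitutes the thesis's contribution is not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for digitally mediated workplace communication.
docs = [
    "share the project knowledge in the team wiki",
    "update the wiki page with the meeting notes",
    "train the model on the labeled corpus",
    "the corpus needs more labeled training data",
]

# Bag-of-words counts, then a 2-topic LDA fit (topic count is arbitrary).
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Most probable topic per document; CLLDA would additionally condition
# this labeling on the authors' cognitive identities.
topic_per_doc = lda.transform(X).argmax(axis=1)
print(topic_per_doc)
```

In CLLDA, as described above, this purely probabilistic assignment would be fused with the CCI profiles of the communicating individuals to improve tagging accuracy.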
Glycosylphosphatidylinositols (GPIs) are highly complex glycolipids that serve as membrane anchors for a large variety of eukaryotic proteins. They are covalently attached to a group of peripheral proteins called GPI-anchored proteins (GPI-APs) through a post-translational modification in the endoplasmic reticulum. The GPI anchor is a unique structure composed of a glycan with a phospholipid tail at one end and, at the other, a phosphoethanolamine linker to which the protein attaches. The glycan part of the GPI comprises a conserved pseudopentasaccharide core that can branch out to carry additional glycosyl or phosphoethanolamine units. GPI-APs are involved in a diverse range of cellular processes, among them signal transduction, protein trafficking, and pathogenesis by protozoan parasites such as the malaria-causing parasite Plasmodium falciparum. GPIs can also exist freely on the membrane surface without an attached protein, as found in parasites like Toxoplasma gondii, the causative agent of toxoplasmosis. These molecules are both structurally and functionally diverse; however, their structure-function relationship is still poorly understood, mainly because no clear picture exists of how the protein and the glycan arrange with respect to the lipid layer. Direct experimental evidence is scarce, and as a result inconclusive pictures have emerged, especially regarding the orientation of GPIs and GPI-APs on membrane surfaces and the role of GPIs in membrane organization. Computational modelling through molecular dynamics simulations appears to be a useful method to make progress here. In this thesis, we attempt to explore characteristics of GPI anchors and GPI-APs embedded in lipid bilayers by constructing molecular models at two different resolutions, all-atom and coarse-grained.
First, we show how to construct a modular molecular model of GPIs and GPI-anchored proteins that can be readily extended to a broad variety of systems, addressing the micro-heterogeneity of GPIs. We do so by creating a hybrid link to which GPIs of diverse branching and lipid tails of varying saturation, with their respective optimized force fields GLYCAM06 and Lipid14, can be attached. Using microsecond simulations, we demonstrate that the GPI prefers to "flop down" on the membrane, strongly interacting with the lipid head groups, rather than standing upright like a "lollipop". Secondly, we extend the model of the GPI core to carry out a systematic study of the structural aspects of GPIs carrying different side chains (parasitic and human GPI variants) inserted in lipid bilayers. Our results demonstrate the importance of the side-branch residues, as these are the most accessible, and thereby recognizable, epitopes. This finding qualitatively agrees with experimental observations that highlight the role of the side branches in the immunogenicity of GPIs and its specificity. The overall flop-down orientation of the GPIs with respect to the bilayer surface presents the side-chain residues to the solvent. Upon attaching the green fluorescent protein (GFP) to the GPI, the protein is seen to lie in close proximity to the bilayer, interacting with both the lipid heads and the glycan part of the GPI. However, the orientation of GFP is sensitive to the type of GPI it is attached to. Finally, we construct a coarse-grained model of the GPI and GPI-anchored GFP using a modified version of the MARTINI force field, with which the accessible timescale is extended by at least an order of magnitude compared to the atomistic system.
This study provides a theoretical perspective on the conformational behavior of the GPI core and some of its branched variants in the presence of lipid bilayers, and draws comparisons with experimental observations. Our modular atomistic model of the GPI can be further employed to study GPIs of variable branching and thereby aid in designing future experiments, especially in the area of vaccines and drug therapies. Our coarse-grained model can be used to study dynamic aspects of GPIs and GPI-APs with respect to plasma membrane organization. Furthermore, the backmapping technique of converting a coarse-grained trajectory back to the atomistic model would enable in-depth structural analysis with ample conformational sampling.
Due to continuously intensifying human use of the marine environment, cetaceans, which range worldwide, face an increasing number of threats. Besides whaling, overfishing, and by-catch, new technical developments increase water and noise pollution, which can negatively affect marine species. Cetaceans are especially prone to these influences: being at the top of the food chain, they accumulate toxins and contaminants, and they are extremely noise-sensitive owing to their highly developed hearing and echolocation ability. As a result, several cetacean species were driven to extinction during the last century or are now classified as critically endangered. This work focuses on two odontocetes. It applies and compares different molecular methods for inferring population status and adaptation, with implications for conservation. The globally distributed sperm whale (Physeter macrocephalus) shows a matrilineal population structure with predominantly male dispersal. A recently stranded group of male sperm whales provided a unique opportunity to investigate male grouping for the first time. Based on the mitochondrial control region, I was able to infer that male bachelor groups comprise multiple matrilines, hence derive from different social groups, and that they represent the genetic variability of the entire North Atlantic. The harbour porpoise (Phocoena phocoena) occurs only in the northern hemisphere. Being small and occurring mostly in coastal habitats, it is especially prone to human disturbance. Since some subspecies and subpopulations are critically endangered, it is important to generate and provide genetic markers with high resolution to facilitate population assignment and subsequent protection measures. Here, I provide the first harbour porpoise whole genome, in high quality and including a draft annotation.
Using it for mapping ddRAD-seq data, I identified genome-wide SNPs and, together with a fragment of the mitochondrial control region, inferred the population structure across the North Atlantic distribution range. The Belt Sea harbours a distinct subpopulation as opposed to the rest of the North Atlantic, with a transition zone in the Kattegat. Within the North Atlantic, I detected subtle genetic differentiation between the western (Canada-Iceland) and eastern (North Sea) regions, with support for a German North Sea breeding ground around the Isle of Sylt. Furthermore, I detected six outlier loci that show isolation by distance across the investigated sampling areas. By employing different markers, I could show that single-marker systems as well as genome-wide data can unravel new information about population affinities of odontocetes. Genome-wide data can facilitate the investigation of adaptations and the evolutionary history of the species and its populations. Moreover, they facilitate population genetic investigations at high resolution, allowing the detection of subtle population structuring, which is especially important for highly mobile cetaceans.
This thesis investigates how the permafrost microbiota responds to global warming. In detail, the constraints on methane production in thawing permafrost were linked to methanogenic activity, abundance, and composition. Furthermore, this thesis offers new insights into microbial adaptations to the changing environmental conditions during global warming. This was assessed by investigating the potentially ecologically relevant functions encoded by plasmid DNA within the permafrost microbiota. Permafrost of both interglacial and glacial origin spanning the Holocene to the late Pleistocene, including the Eemian, was studied during long-term thaw incubations. Furthermore, several permafrost cores of different stratigraphy, soil type, and vegetation cover were used to target the main constraints on methane production during short-term thaw simulations. Short- and long-term incubations simulating thaw, with and without the addition of substrate, were combined with activity measurements as well as amplicon and metagenomic sequencing of permanently frozen permafrost and the seasonally thawed active layer. Combined, this allowed the following questions to be addressed: (i) What constrains methane production when permafrost thaws, and how is this linked to methanogenic activity, abundance, and composition? (ii) How does the methanogenic community composition change under long-term thawing conditions? (iii) Which potentially ecologically relevant functions are encoded by plasmid DNA in active layer soils?
The major outcomes of this thesis are as follows. (i) Methane production from permafrost after long-term thaw simulation was found to be constrained mainly by the abundance of methanogens and the archaeal community composition. Deposits formed during periods of warmer temperatures and increased precipitation (here represented by late Pleistocene deposits of both interstadial and interglacial periods) were found to respond most strongly to thawing conditions and to contain an archaeal community dominated by methanogenic archaea (40% to 100% of all detected archaea). Methanogenic population size and carbon density were identified as the main predictors of potential methane production in thawing permafrost in short-term incubations when substrate was sufficiently available.
(ii) Besides determining methanogenic activity after long-term thaw, the paleoenvironmental conditions were also found to influence the response of the methanogenic community composition. Substantial shifts in methanogenic community structure and a drop in diversity were observed in deposits formed during warmer periods, but not in deposits from stadials, when colder and drier conditions prevailed. Overall, a shift towards a dominance of hydrogenotrophic methanogens was observed in all samples except the oldest interglacial deposits from the Eemian, which displayed a potential dominance of acetoclastic methanogens. The Eemian, which is discussed as an analogue to current climate conditions, contained highly active methanogenic communities. However, all potential limitations of methane production after permafrost thaw, namely methanogenic community structure, methanogenic population size, and the substrate pool, might be overcome once permafrost has thawed over the long term. (iii) Enrichments with soil from the seasonally thawed active layer revealed that its plasmid DNA (the 'metaplasmidome') carries stress-response genes. In particular, it encoded antibiotic resistance genes, heavy-metal resistance genes, cold-shock proteins, and genes for UV protection, functions that are directly involved in the adaptation of microbial communities to stresses in polar environments. It was further found that metaplasmidomes from the Siberian active layer originate mainly from Gammaproteobacteria. By applying enrichment cultures followed by plasmid DNA extraction, it was possible to obtain a greater average contig length and a significantly higher recovery of plasmid sequences than by extracting plasmid sequences from metagenomes. The approach of analyzing metaplasmidomes established in this thesis is therefore suitable for studying the ecological role of plasmids in polar environments in general.
This thesis emphasizes that including microbial community dynamics has the potential to improve permafrost-carbon projections. Microbially mediated methane release from permafrost environments may significantly impact future climate change. This thesis identified drivers of methanogenic composition, abundance, and activity in thawing permafrost landscapes. Finally, it underlines the importance of studying how the currently warming Arctic affects microbial communities in order to gain more insight into microbial response and adaptation strategies.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with the default setup reproduces the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag were performed. Reducing either the non-orographic or the orographic gravity wave drag leads to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January, with the non-orographic gravity wave drag having the stronger effect on the stratosphere. A third experiment, combining reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON represents downward coupling realistically. This coupling is intensified and more realistic in the experiments with reduced gravity wave drag, in particular with reduced non-orographic drag. Tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as the current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including tropical phenomena such as the QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. On the other hand, the stratospheric reaction to ENSO events in ICON is realistic: both ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming that has been discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to understanding the divergent conclusions between model and observational studies.
Philosophische Tugenden
(2020)
What constitutes good philosophizing? And why is John Stuart Mill in particular an exceptionally good philosopher? In this volume, Joachim Toenges-Hinn combines the metaphilosophical search for what makes good philosophy with a historical study of the philosopher John Stuart Mill. Mill thereby serves both as the originator and as the embodiment of the pursuit of two philosophical virtues, which Toenges-Hinn derives from Mill's philosophical work and then defends systematically. These virtues, termed the "Bentham ideal" and the "Coleridge ideal", are as central to his investigation as the significance of experiments in living for philosophical biographies.
Over the last two decades, secondary plant metabolites and their health-promoting properties have been studied extensively from a nutritional-physiology perspective, and specific positive effects in the human organism have in part been described very precisely. Among the carotenoids, the secondary plant metabolite lutein has moved into the focus of research, particularly for the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, enters the human organism through plant foods, in particular green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and plays an important role in maintaining the function of the photoreceptor cells. With aging, a decrease in macular pigment density and a degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, together with the altered metabolic state of the aging organism, can lead to age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts aim at finding preventive measures; the supplementation of lutein-containing preparations offers one starting point. Dietary supplements with lutein in various forms of application are already on the market. A limiting factor is the stability and bioavailability of lutein, which is in part expensive to obtain and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, in a dietary supplement would be advantageous. In addition to their naturally higher stability, lutein esters can be used sustainably and cost-effectively.
In this work, physicochemical and nutritionally relevant aspects of the product development process of a dietary supplement containing lutein esters in a colloidal formulation were investigated. The hitherto unique application of lutein esters in an oral spray is intended to ease and improve uptake of the active ingredient, particularly for older people. Based on the results and their nutritional assessment, recommendations were to be derived, among other things, for the composition of a miniemulsion (an emulsion with particle sizes <1.0 µm). The bioavailability of lutein esters from the developed colloidal formulations was estimated by means of in vitro studies of resorption and absorption availability.
In physical investigations, the basic constituents of the formulations were first specified. In initial active-ingredient-free model emulsions, selected oils as carrier phase, as well as emulsifiers and solubilizers (peptizers), were physically tested for their suitability to provide a miniemulsion. The best stability and optimal miniemulsion properties were obtained with MCT oil (medium-chain triglycerides) or rapeseed oil as the carrier phase, together with the emulsifier Tween® 80 (Tween 80) alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
The physical investigations of the model emulsions yielded pre-emulsions as prototypes. These contained the active ingredient lutein in various forms: pre-emulsions with lutein, with lutein esters, and with both lutein and lutein esters were designed, each containing the emulsifier Tween 80 alone or in combination with Biozate 1. In producing the pre-emulsions, ultrasonic emulsification followed by high-pressure homogenization yielded the desired miniemulsions, and both emulsifiers provided optimal stabilization. The active ingredients were then characterized physicochemically. Lutein esters from oleoresin in particular proved stable under various storage conditions. Likewise, lutein and lutein esters remained stable during short-term treatment under specific mechanical, thermal, acidic, and basic conditions; the addition of Biozate 1 provided additional protection only for lutein. Under prolonged physicochemical treatment, the active ingredients incorporated in the miniemulsions underwent moderate degradation, with a marked sensitivity to basic conditions. For the formulation of the dietary supplement, the recommendation was therefore to design a miniemulsion with a slightly acidic pH, protecting the active ingredient through the controlled addition of further ingredients.
In the further development process of the dietary supplement, final formulations with lutein esters as the active ingredient were drawn up. Using the emulsifier Biozate 1 alone proved unsuitable. The remaining final formulations contained, in the oil phase, the active ingredient together with MCT oil or rapeseed oil as well as α-tocopherol for stabilization. The aqueous phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives comprised ascorbic acid and potassium sorbate for microbiological protection, and xylitol and orange flavouring for sensory effects. The base formulation and the emulsification procedure applied yielded stable miniemulsions. Long-term storage trials of the final formulations at 4 °C further showed that the required amount of lutein esters in the product was maintained. Analogous investigations of a commercial lutein-containing preparation, by contrast, confirmed an instability of lutein occurring already during short-term storage.
Finally, the bioavailability of lutein esters from the pre-emulsions and final formulations was tested in in vitro resorption and absorption studies. After treatment in an established in vitro digestion model, only a slight resorption availability of the lutein esters could be established; micellarization of the active ingredient from the designed formulations was limited. Enzymatic cleavage of the lutein esters to free lutein was observed only to a limited extent, and the specificity and activity of the corresponding hydrolytic lipases towards lutein esters must be rated as extremely low. In subsequent cell-culture experiments with the Caco-2 cell line, no cytotoxic effects of the relevant ingredients of the pre-emulsions were observed. By contrast, a sensitivity towards the final formulations was observed, which should be considered in the context of irritation of the mucous membranes of the gastrointestinal tract; a less complex formulation could possibly minimize these limitations. Concluding absorption studies showed that, in principle, a small uptake of primarily lutein, but also of lutein monoesters, into enterocytes from miniemulsions can occur. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased cellular uptake from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into enterocytes appears to occur via passive diffusion, although active transport cannot be ruled out. Lutein diesters, by contrast, cannot reach the enterocytes via micellarization and simple diffusion because of their molecular size.
Their uptake into the epithelial cells of the small intestine requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective uptake of lutein esters into the cells and represents a restriction of their bioavailability compared to free lutein.
In summary, the physicochemically stable lutein esters showed low bioavailability from colloidal formulations. Nevertheless, their use as a source of the secondary plant metabolite lutein in a dietary supplement can be recommended. In combination with the intake of lutein-rich plant foods, a contribution to improving lutein status can be achieved despite the expected low bioavailability of the lutein esters from the supplement. Relevant publications have shown clear correlations between the intake of preparations containing lutein esters and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed critically against its instability and cost. As a result of this work, the commercial product Vita Culus® was designed. As an outlook, human intervention studies with the supplement should enable a final assessment of the bioavailability of lutein esters from the preparation.
The East Asian monsoons characterize the modern-day Asian climate, yet their geological history and driving mechanisms remain controversial. The southeasterly summer monsoon provides moisture, whereas the northwesterly winter monsoon sweeps up dust from the arid Asian interior to form the Chinese Loess Plateau. The onset of this loess accumulation, and therefore of the monsoons, was thought to be 8 million years ago (Ma). However, in recent years these loess records have been extended further back in time to the Eocene (56-34 Ma), a period characterized by significant changes in both the regional geography and global climate. Yet the extent to which these reconfigurations drive atmospheric circulation and whether the loess-like deposits are monsoonal remains debated. In this thesis, I study the terrestrial deposits of the Xining Basin previously identified as Eocene loess, to derive the paleoenvironmental evolution of the region and identify the geological processes that have shaped the Asian climate.
I review dust deposits in the geological record and conclude that these are commonly represented by a mix of both windblown and water-laid sediments, in contrast to the pure windblown material known as loess. Yet by using a combination of quartz surface morphologies, provenance characteristics and distinguishing grain-size distributions, windblown dust can be identified and quantified in a variety of settings. This has important implications for tracking aridification and dust-fluxes throughout the geological record.
Past reversals of Earth’s magnetic field are recorded in the deposits of the Xining Basin and I use these together with a dated volcanic ash layer to accurately constrain the age to the Eocene period. A combination of pollen assemblages, low dust abundances and other geochemical data indicates that the early Eocene was relatively humid suggesting an intensified summer monsoon due to the warmer greenhouse climate at this time. A subsequent shift from predominantly freshwater to salt lakes reflects a long-term aridification trend possibly driven by global cooling and the continuous uplift of the Tibetan Plateau. Superimposed on this aridification are wetter intervals reflected in more abundant lake deposits which correlate with highstands of the inland proto-Paratethys Sea. This sea covered the Eurasian continent and thereby provided additional moisture to the winter-time westerlies during the middle to late Eocene.
The long-term aridification culminated in an abrupt shift at 40 Ma, reflected by the onset of windblown dust, an increase in steppe-desert pollen, the occurrence of high-latitude orbital cycles and northwesterly winds identified in deflated salt deposits. Together, these indicate the onset of a Siberian high atmospheric pressure system driving the East Asian winter monsoon and dust storms, triggered by a major sea retreat from the Asian interior. These results therefore show that the proto-Paratethys Sea, though less well recognized than the Tibetan Plateau and global climate, has been a major driver in setting up the modern-day climate of Asia.
Organizations incorporate the institutional demands from their environment in order to be deemed legitimate and survive. Yet, complexifying societies promulgate multiple and sometimes inconsistent institutional prescriptions. When these prescriptions collide, organizations are said to face “institutional complexity”. How does an organization then incorporate incompatible demands? What are the consequences of institutional complexity for an organization? The literature provides contradictory conceptual and empirical insights on the matter. A central assumption, however, remains that internal incompatibilities generate tensions that, under certain conditions, can escalate into intractable conflicts, resulting in dysfunctionality and loss of legitimacy. The present research is an inquiry into what happens inside an organization when it incorporates complex institutional demands.
To answer this question, I focus on how individuals inside an organization interpret a complex institutional prescription. I examine how members of the French Development Agency interpret ‘results-based management’, a central but complex concept of organizing in the field of development aid. I use an inductive mixed methods design to systematically explore how different interpretations of results-based management relate to one another and to the organizational context in which they are embedded.
The results reveal that results-based management is a contested concept in the French Development Agency. I find multiple interpretations of the concept, which are attached to partly incompatible rationales about “who we are” and “what we do as an organization”. These rationales nevertheless coexist as balanced forces, without escalating into open conflict. The analysis points to four reasons for this peaceful coexistence of diverging rationales inside one and the same organization: 1) individuals’ capacity to manipulate different interpretations of a complex institutional demand, 2) the nature of interpretations, which makes them more or less prone to conflict, 3) the balanced distribution of rationales across the organizational sub-contexts and 4) the shared rules of interpretation provided by the larger socio-cultural context.
This research shows that an organization that incorporates institutional complexity comes to represent different, partly incompatible things to its members without being at war with itself. In doing so, it contributes to our knowledge of institutional complexity and organizational hybridity. It also advances our understanding of internal organizational legitimacy and of the translation of managerial concepts in organizations.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
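The ensemble idea mentioned above can be sketched structurally in a few lines. The toy classifier below combines the votes of several crude word-frequency scorers; the training snippets, vocabulary, and scoring scheme are all invented for illustration and merely stand in for the fine-grained neural classifiers used in the thesis.

```python
from collections import Counter

# Toy training data; real systems train transformer-based classifiers
# on large labeled corpora. This is only a structural sketch.
TRAIN = [("you are an idiot", 1), ("great article thanks", 0),
         ("i will hurt you", 1), ("interesting point about taxes", 0)]

def train_counts(data):
    """Word counts per class (0 = acceptable, 1 = toxic)."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Crude add-one-smoothed word-frequency score (not a full naive Bayes)."""
    def score(label):
        total = sum(counts[label].values()) + 1
        return sum((counts[label][w] + 1) / total for w in text.split())
    return 1 if score(1) > score(0) else 0

def ensemble_predict(models, text):
    """Majority vote over several models, as in ensemble learning."""
    votes = [predict(m, text) for m in models]
    return 1 if sum(votes) > len(votes) / 2 else 0

# In practice each ensemble member would be trained on a different
# augmented sample; here the members are trivially similar.
models = [train_counts(TRAIN), train_counts(TRAIN[::-1]), train_counts(TRAIN)]
print(ensemble_predict(models, "you idiot"))  # 1 = toxic in this toy setup
```

Attribution-based explanation methods, as used in the thesis to establish trust, would then report which input words contributed most to each vote.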
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on later phases of the process, in practice, experts spend more time in earlier phases, preparing data to make them consistent with system requirements or to improve model accuracy. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches to perform specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entity exist in a database, for numerous reasons ranging from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select the wrong hotel, or parcel delivery, where a parcel can get delivered to the wrong address. Identifying the variety of possible data issues to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well established and covers both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place, and to apply duplicate detection to datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. Data preparation steps are selected only for attributes where they have positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold: first, we consider domain-specific approaches to improve the quality of address values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate one per address attribute, e.g., city or country.
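The selection logic for preparation steps can be illustrated with a minimal sketch: a generic step (here, lowercasing and stripping special characters) is retained only if it raises the mean pair similarity, loosely mirroring the attribute-wise selection described above. The record values, the token-based Jaccard measure, and the retention rule are simplifying assumptions, not the thesis' actual pipeline.

```python
import re

def normalize(value):
    """Generic preparation step: lowercase and strip special characters."""
    return re.sub(r"[^a-z0-9 ]", "", value.lower())

def jaccard(a, b):
    """Token-based Jaccard similarity between two attribute values."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def prepared_similarity(pairs):
    """Keep the preparation step only if it raises the mean pair similarity.

    Returns the better mean similarity and whether preparation was kept."""
    raw = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
    prep = sum(jaccard(normalize(a), normalize(b)) for a, b in pairs) / len(pairs)
    return max(raw, prep), prep > raw

# Hypothetical record pairs for one attribute (e.g., an address field).
pairs = [("Main St. 5, Berlin", "main st 5 berlin"),
         ("Hotel Adlon", "Hotel  Adlon")]
score, kept = prepared_similarity(pairs)
print(kept)  # True: stripping punctuation raised the mean similarity here
```

A classifier would then threshold the resulting pair similarities to decide which record pairs are duplicates.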
To facilitate duplicate detection in applications where gold-standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered efficiently by state-of-the-art algorithms without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then to be applied to unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open-source code to enable the repeatability of our research results and the application of our approaches to other datasets.
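A matching dependency can be paraphrased as: if two records are sufficiently similar on a set of determinant attributes, they must also be similar on the dependent attribute. The sketch below checks one such dependency on two hypothetical records; the attribute names, the similarity measure (difflib's ratio), and the thresholds are illustrative and not taken from MDedup.

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Character-based similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def md_holds(r1, r2, lhs, rhs, threshold=0.9):
    """Evaluate a matching dependency (MD) on one record pair.

    If the records are similar on all left-hand-side attributes (above the
    threshold), they must also be similar on the right-hand-side attribute;
    otherwise the MD is trivially satisfied."""
    if all(sim(r1[a], r2[a]) >= threshold for a in lhs):
        return sim(r1[rhs], r2[rhs]) >= threshold
    return True  # premise not met: nothing to check

# Hypothetical records; attribute names are illustrative only.
r1 = {"zip": "14482", "phone": "0331 123456", "name": "Hotel Adlon"}
r2 = {"zip": "14482", "phone": "0331 123456", "name": "Hotel Adlon Berlin"}
print(md_holds(r1, r2, lhs=["zip", "phone"], rhs="name", threshold=0.7))
```

Record pairs that satisfy the premise of a useful MD are the duplicate candidates; a pair that satisfies the premise but violates the conclusion points to either a data error or an unreliable dependency.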
To anticipate the future of present-day reef ecosystem turnover under environmental stresses such as global warming and ocean acidification, analogue studies from the geologic past are needed. As a critical time of reef ecosystem innovation, the Permian-Triassic transition witnessed the most severe demise of Phanerozoic reef builders and the establishment of modern-style symbiotic relationships among reef-building organisms. As the initial stage of this transition, the Middle Permian (Capitanian) mass extinction caused a reef eclipse in the early Late Permian, which led to a gap in our understanding of the post-extinction Wuchiapingian reef ecosystem, shortly before the radiation of Changhsingian reefs. This thesis presents detailed biostratigraphic, sedimentological, and palaeoecological studies of Wuchiapingian reef recovery following the Middle Permian (Capitanian) mass extinction, based on the only recorded Wuchiapingian reef setting, which crops out at the Tieqiao section in South China.
Conodont biostratigraphic zonations were revised from the Early Permian (Artinskian) to the Late Permian (Wuchiapingian) at the Tieqiao section. Twenty main and seven subordinate conodont zones were determined at the Tieqiao section, including two conodont zones below and above the Tieqiao reef complex. The age of the Tieqiao reef was constrained to the early to middle Wuchiapingian.
After constraining the reef age, detailed two-dimensional outcrop mapping combined with lithofacies analysis was carried out on the Wuchiapingian Tieqiao section to investigate the reef growth pattern stratigraphically as well as the lateral changes of reef geometry at the outcrop scale. Semi-quantitative studies of the reef-building organisms were used to determine their evolutionary pattern during the reef recovery. Six reef growth cycles were identified within six transgressive-regressive cycles in the Tieqiao section. The reefs developed within the upper part of each regressive phase and were dominated by different biotas. The timing of initial reef recovery after the Middle Permian (Capitanian) mass extinction was revised to the Clarkina leveni conodont zone, earlier than previously understood. Metazoans such as sponges were not major components of the Wuchiapingian reefs until the 5th and 6th cycles. Thus, the recovery of the metazoan reef ecosystem after the Middle Permian (Capitanian) mass extinction was clearly delayed. In addition, although the importance of metazoan reef builders such as sponges did increase during the recovery, encrusting organisms such as Archaeolithoporella and Tubiphytes, combined with microbial carbonate precipitation, still played significant roles in the reef-building process and reef recovery after the mass extinction.
Based on the results of the outcrop mapping and sedimentological studies, a quantitative composition analysis of the Tieqiao reef complex was applied to selected thin sections to further investigate the function of the reef-building components and the reef evolution after the Middle Permian (Capitanian) mass extinction. Data sets of skeletal grains and whole-rock components were analyzed. The results show eleven biocommunity clusters and eight rock-composition clusters dominated by different skeletal grains and rock components. Sponges, Archaeolithoporella and Tubiphytes were the most ecologically important components of the Wuchiapingian Tieqiao reef, while clotted micrites and syndepositional cements were additional important rock components of the reef cores. Sponges were important throughout the reef recovery. Tubiphytes were broadly distributed across different environments and played a key role in the initial reef communities. Archaeolithoporella was concentrated in the shallower part of the reef cycles (i.e., the upper part of the reef core) and was functionally significant for the enlargement of reef volume.
In general, the reef recovery after the Middle Permian (Capitanian) mass extinction shows some similarities with the reef recovery following the end-Permian mass extinction: a delayed recovery of metazoan reefs and a stepwise recovery pattern controlled by both ecological and environmental factors. The importance of encrusting organisms and microbial carbonates is also similar to most other post-extinction reef ecosystems. These findings can help extend our understanding of reef ecosystem evolution under environmental perturbations or stresses.
One result of the intercultural relations in Southeast Asia are the still-existing Portuguese-based creole languages Papia Kristang and Macaísta, which have become the mother tongues of generations of people in Malacca and Macau. Which factors drive the language change of these idioms, and how can it be recognized? This volume deals not only with the language dynamics of the Portuguese-based creole languages of Southeast Asia, but also with other central questions of variational linguistics. It builds on the results of an empirical data collection that documents, in particular, the changes in language use. In addition, the author presents new results on language identifications that are relevant not only for creole studies but also, across disciplines, for general linguistics.
Cardiac valves are essential for the continuous and unidirectional flow of blood throughout the body. During embryonic development, their formation is strictly connected to the mechanical forces exerted by blood flow. The endocardium that lines the interior of the heart is a specialized endothelial tissue and is highly sensitive to fluid shear stress. Endocardial cells harbor a signal transduction machinery required for the translation of these forces into biochemical signaling, which strongly impacts cardiac morphogenesis and physiology. To date, we lack a solid understanding of the mechanisms by which endocardial cells sense the dynamic mechanical stimuli and how they trigger different cellular responses. In the zebrafish embryo, endocardial cells at the atrioventricular canal respond to blood flow by rearranging from a monolayer to a double layer, composed of a luminal cell population subjected to blood flow and an abluminal one that is not exposed to it. These early morphological changes lead to the formation of an immature valve leaflet. While previous studies mainly focused on genes that are positively regulated by shear stress, the mechanisms regulating cell behaviors and fates in cells that lack the stimulus of blood flow are largely unknown. One key discovery of my work is that the flow-sensitive Notch receptor and Krüppel-like factor (Klf) 2, one of the best characterized flow-regulated transcription factors, are activated by shear stress but function in two parallel signal transduction pathways. Each of these two pathways is essential for the rearrangement of atrioventricular cells into an immature double-layered valve leaflet. A second key discovery of my study is the finding that both Notch and Klf2 signaling negatively regulate the expression of the angiogenesis receptor Vegfr3/Flt4, which becomes restricted to abluminal endocardial cells of the valve leaflet.
Within these cells, Flt4 downregulates the expression of the cell adhesion proteins Alcam and VE-cadherin. A loss of Flt4 causes abluminal endocardial cells to ectopically express Notch, which is normally restricted to luminal cells, and impairs valve morphology. My study suggests that abluminal endocardial cells that do not experience mechanical stimuli lose Notch expression, and this triggers expression of Flt4. In turn, Flt4 negatively regulates Notch on the abluminal side of the valve leaflet. These antagonistic signaling activities and fine-tuned gene regulatory mechanisms ultimately shape the cardiac valve leaflets by inducing unique differences in the fates of endocardial cells.
Potato is the fourth most important food crop in the world. Especially in tropical and subtropical potato production, drought is a yield-limiting factor, and potato is sensitive to water stress. Potato yield losses under water stress can be reduced by using tolerant varieties and adjusted agronomic practices. Direct selection for yield under water-stressed conditions requires long selection cycles; thus, the identification of markers for marker-assisted selection may speed up breeding. The objective of this thesis is to identify morphological markers for drought tolerance by continuously monitoring plant growth and canopy temperature with an automatic phenotyping system.
The phenotyping was performed in drought-stress experiments conducted in population A with 64 genotypes and population B with 21 genotypes in the screenhouse in 2015 and 2016 (population A) and in 2017 and 2018 (population B). Drought tolerance was quantified as the deviation of the relative tuber starch yield from the experimental median (DRYM) and from the parent median (DRYMp). Relative tuber starch yield is the starch yield under drought stress relative to the average starch yield of the respective cultivar under control conditions in the same experiment. The specific DRYM value was calculated from the yield data of the same experiment, whereas the global DRYM was calculated from yield data combined over years of the respective population or across multiple experiments, including the VALDIS and TROST experiments (2011-2016).
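The DRYM definition above amounts to a few lines of arithmetic: a genotype's stress yield is divided by its mean control yield, and the experimental median of these relative yields is subtracted. The yield values below are invented for illustration; the thesis does not publish raw yield tables in this abstract.

```python
from statistics import median

def drym(stress_yield, control_yields, population_relative_yields):
    """Deviation of the relative starch yield from the median (DRYM).

    relative yield = starch yield under drought stress / mean control-condition
    starch yield of the same cultivar; DRYM = relative yield minus the median
    relative yield of the experiment."""
    relative = stress_yield / (sum(control_yields) / len(control_yields))
    return relative - median(population_relative_yields)

# Hypothetical numbers: one genotype's stress yield and two control yields,
# plus the relative yields of all genotypes in the same experiment.
rel_yields = [0.55, 0.60, 0.62, 0.70, 0.75]
print(round(drym(6.0, [8.0, 8.0], rel_yields), 3))  # → 0.13
```

A positive DRYM indicates a genotype that retains more of its yield under drought than the median genotype of the experiment.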
Analysis of variance found a significant effect of genotype on DRYM, indicating that the tolerance variation required for marker identification was present in both populations.
Canopy growth was monitored continuously six times a day over five to ten weeks by a laser scanner system and yielded information on leaf area, plant height and leaf angle for population A and additionally on leaf inclination and light penetration depth for population B. Canopy temperature was measured 48 times a day over six to seven weeks by infrared thermometry in population B. From the continuous IRT surface temperature data set, the canopy temperature for each plant was selected by matching the time stamp of the IRT data with laser scanner data.
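Matching the continuous IRT readings (48 per day) to the laser-scanner time stamps (6 per day) can be sketched as a nearest-timestamp lookup. The timestamps below are hypothetical minutes-since-midnight values; the thesis may have used exact time-stamp matching rather than nearest-neighbor alignment.

```python
from bisect import bisect_left

def match_nearest(irt_times, scan_time):
    """Pick the IRT reading whose timestamp is closest to a laser-scan
    timestamp -- one way to align the two continuous data streams."""
    i = bisect_left(irt_times, scan_time)
    # Only the neighbors around the insertion point can be closest.
    candidates = irt_times[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda t: abs(t - scan_time))

# Hypothetical sorted IRT timestamps and one laser-scan timestamp.
irt = [0, 30, 60, 90, 120, 150]
print(match_nearest(irt, 70))  # → 60, the closest IRT reading
```

The selected IRT reading then supplies the canopy temperature associated with that plant's scan.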
Mean, maximum, range and growth rate values were calculated from continuous laser scanner measurements of respective canopy parameters. Among the canopy parameters, the maximum and mean values in long-term stress conditions showed better correlation with DRYM values calculated in the same experiment than growth rate and diurnal range values. Therefore, drought tolerance index prediction was done from maximum and mean values of canopy parameters.
The tolerance index under specific experimental conditions was predicted linearly by simple regression models from individual canopy parameters under long-term stress in population A (2016) and population B (2017 and 2018). Among the canopy parameters, maximum light penetration depth (2017), mean leaf angle (2016, 2017, and 2018), mean leaf inclination or mean canopy temperature depression (2017 and 2018), and maximum plant height (2017) were selected as tolerance predictors. However, no single parameter was sufficient to predict DRYM. Therefore, several independent parameters were integrated into a multiple regression model.
In the multiple regression models, experiment-specific DRYM values in population A were predicted from mean leaf angle (2016). In population B, specific tolerance could be predicted from maximum light penetration depth and mean leaf inclination (2017), and from mean leaf inclination (2018) or mean canopy temperature depression and mean leaf angle (2018).
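A multiple regression of DRYM on two canopy parameters can be sketched with ordinary least squares via the normal equations. The predictor values and responses below are invented for illustration, and the thesis presumably used standard statistical software rather than this hand-rolled solver.

```python
def fit_multiple_regression(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination; each row of X is [1, x1, x2, ...]."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination
        pivot = A[col][col]
        for row in range(col + 1, n):
            f = A[row][col] / pivot
            A[row] = [a - f * p for a, p in zip(A[row], A[col])]
            b[row] -= f * b[col]
    coeffs = [0.0] * n                        # back substitution
    for i in reversed(range(n)):
        rest = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (b[i] - rest) / A[i][i]
    return coeffs

# Hypothetical data: DRYM predicted from mean leaf inclination (degrees)
# and mean canopy temperature depression (K); all values invented.
X = [[1, 30, 1.2], [1, 35, 1.5], [1, 40, 1.1], [1, 45, 1.8], [1, 50, 2.0]]
y = [-0.05, -0.02, 0.00, 0.03, 0.05]
b0, b1, b2 = fit_multiple_regression(X, y)
print(round(b0 + b1 * 42 + b2 * 1.6, 3))  # predicted DRYM for a new plant
```

The fitted coefficients play the role of the tolerance predictors selected in the thesis; model quality would then be judged by the explained variation, as in the populations above.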
In the data combined over seasons of population A, the multiple linear regression model selected maximum plant height and mean leaf angle as tolerance predictors. In population B, mean leaf inclination was selected as the tolerance predictor. In population A, however, the variation explained by the final model was too low.
Furthermore, the average tolerances relative to the parent median (2011-2018) across FGH plants or all plants (FGH and field) were predicted from maximum plant height (population A) and from maximum plant height and mean leaf inclination (population B). Altogether, canopy parameters could be used as markers for drought tolerance. Breeding potato for water-stress tolerance could therefore be sped up by using leaf inclination, light penetration depth, plant height and canopy temperature depression as markers, especially under long-term stress conditions.
The current thesis is focused on the properties of graphene supported by metallic substrates and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodical background of experimental techniques used in the current thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not reported previously. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, the calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to a gold monolayer. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but they can be also generalized for graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which have been predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of this thesis, where the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate; moreover, the characteristic moiré pattern observed in graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data indicates that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as induced intrinsic spin-orbit interaction, hybridization with substrate states and corrugation of the graphene lattice. The main origin of the band gap was attributed to A-B sublattice symmetry breaking, and this conclusion found support in a careful analysis of the interference effects in photoemission, which provided a band gap estimate of ~140 meV.
While the previous chapters were focused on adjusting the properties of graphene by proximity to heavy metals, graphene on its own is a great object to study various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, studies about their observation in photoemission experiments started to appear only recently and they are very scarce.
For a comprehensive study of scattering resonances graphene was selected as a versatile model system with adjustable properties. After the theoretical and historical introduction to the topic of scattering resonances follows a detailed description of the unusual features observed in the photoemission spectra obtained in this work and finally the equivalence between these features and scattering resonances is proven. The obtained photoemission results are in a good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either with a superlattice of Ir nanoclusters or with atomic hydrogen. These effects were attributed to strong local variations of the work function and/or destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be exploited in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
The impact that catalysis has on the global economy and environment is substantial, since 85% of all industrial chemical processes are catalytic. Among those, 80% are heterogeneously catalyzed, 17% make use of homogeneous catalysts, and 3% are biocatalytic. Especially in the pharmaceutical and agrochemical industries, a significant part of these processes involves chiral compounds. Obtaining enantiomerically pure compounds is necessary and is usually accomplished by asymmetric synthesis and catalysis, as well as chiral separation. The efficiency of these processes may be vastly improved if the chiral selectors are positioned on a porous solid support, thereby increasing the surface area available for chiral recognition. Similarly, the majority of commercial catalysts are also supported, usually consisting of metal nanoparticles (NPs) dispersed on a highly porous oxide or nanoporous carbon material.
Porous carbons are materials with exceptional thermal and chemical stability that are also electrically conductive. Their stability at extreme pH and temperatures, the possibility to tailor their pore architecture and chemical functionalization, and their electrical conductivity have already established these materials in the fields of separation and catalysis. However, their heterogeneous chemical structure with abundant defects makes it challenging to develop reliable models for the investigation of structure-performance relationships. Therefore, there is a need to expand the fundamental understanding of these robust materials under experimental conditions to allow for their further optimization for particular applications. This thesis contributes to our knowledge of carbons through different aspects and in different applications.
On the one hand, a rather exotic novel application was investigated through attempts to synthesize porous carbon materials with an enantioselective surface. Chapter 4.1 describes an approach for obtaining mesoporous carbons with an enantioselective surface by direct carbonization of a chiral precursor. Two enantiomers of chiral ionic liquids (CILs) based on the amino acid tyrosine were used as carbon precursors, and ordered mesoporous silica SBA-15 served as a hard template for obtaining porosity. The chiral recognition of the prepared carbons was tested in solution by isothermal titration calorimetry with enantiomers of phenylalanine as probes, as well as by chiral vapor adsorption with 2-butanol enantiomers. Measurements in both solution and the gas phase revealed differences in the affinity of the carbons towards the two enantiomers.
In Chapter 4.2, the atomic efficiency of the CIL precursors was increased, and the porosity was developed independently of the chiral surface through the formation of stable composites of pristine carbon and a CIL-derived coating. After the same set of experiments for the investigation of chirality, the enantiomeric ratios of these composites were even higher than those in the previous chapter.
On the other hand, the structure‒activity relationship of carbons as supports for gold nanoparticles was studied in a rather traditional catalytic model reaction at the interface between gas, liquid, and solid. In Chapter 5.1 it was shown, on a series of catalysts with different porosities, that the kinetics of the ᴅ-glucose oxidation reaction can be enhanced by increasing the local concentration of the reactants around the active phase of the catalyst. A large number of uniform narrow mesopores connected to the surface of the Au catalyst supported on ordered mesoporous carbon led to water confinement, which increased the solubility of oxygen in the proximity of the catalyst and thereby increased its apparent catalytic activity.
After increasing the oxygen concentration in the internal area of the catalyst, in Chapter 5.2 the oxygen concentration was increased in the external environment of the catalyst by introducing perfluorinated compounds, less cohesive liquids that serve as efficient solvents for oxygen, near the active phase of the catalyst. This was achieved by the formation of catalyst-particle-stabilized emulsions of perfluorocarbon in aqueous ᴅ-glucose solution, which further promoted the catalytic activity of the gold-on-carbon catalyst.
The findings reported within this thesis are an important step in the understanding of the structure-related properties of carbon materials.
This book endeavours to understand the seemingly direct link between utopianism and the USA, discussing novels that have never been brought together in this combination before, even though they all revolve around intentional communities: Imlay’s The Emigrants (1793), Hawthorne’s The Blithedale Romance (1852), Howland’s Papa's Own Girl (1874), Griggs’s Imperium in Imperio (1899), and Du Bois’s The Quest of the Silver Fleece (1911). They relate nation and utopia not by describing perfect societies, but by writing about attempts to immediately live radically different lives. Signposting the respective communal history, the readings provide a literary perspective for communal studies, and add a deeply necessary historicization to strictly literary approaches to US utopianism and to studies that focus on Pilgrims/Puritans/Founding Fathers as utopian practitioners. This book therefore highlights how the authors evaluated the USA’s utopian potential and traces the nineteenth-century development of the utopian imagination from various perspectives.
The development of methods such as super-resolution microscopy (Nobel Prize in Chemistry, 2014) and multi-scale computer modelling (Nobel Prize in Chemistry, 2013) has provided scientists with powerful tools to study microscopic systems. Sub-micron particles or even fluorescently labelled single molecules can now be tracked for long times in a variety of systems such as living cells, biological membranes, and colloidal solutions, at spatial and temporal resolutions previously inaccessible. Parallel to such single-particle tracking experiments, super-computing techniques enable simulations of large atomistic or coarse-grained systems such as biologically relevant membranes or proteins from picoseconds to seconds, generating large volumes of data. These have led to an unprecedented rise in the number of reported cases of anomalous diffusion, wherein the characteristic features of Brownian motion—namely the linear growth of the mean squared displacement with time and the Gaussian form of the probability density function (PDF) to find a particle at a given position at some fixed time—are routinely violated. This presents a big challenge in identifying the underlying stochastic process and in estimating the corresponding parameters of the process to completely describe the observed behaviour. Finding the correct physical mechanism which leads to the observed dynamics is of paramount importance, for example, to understand the first-arrival time of transcription factors which govern gene regulation, or the survival probability of a pathogen in a biological cell after drug administration. Statistical physics provides useful methods that can be applied to extract such vital information. This cumulative dissertation, based on five publications, focuses on the development, implementation and application of such tools, with special emphasis on Bayesian inference and large deviation theory.
Together with the implementation of Bayesian model comparison and parameter estimation methods for models of diffusion, complementary tools based on different observables and on large deviation theory are developed to classify stochastic processes and gather pivotal information. Bayesian analysis of data from micron-sized particles traced in mucin hydrogels at different pH conditions unveiled several interesting features; we gained insights into, for example, how, in going from basic to acidic pH, the hydrogel becomes more heterogeneous and phase separation can set in, leading to the observed non-ergodicity (non-equivalence of time and ensemble averages) and non-Gaussian PDF. With the analysis based on large deviation theory we could detect, for instance, non-Gaussianity in the seemingly Brownian diffusion of beads in aqueous solution, anisotropic motion of the beads in mucin at neutral pH conditions, and short-time correlations in climate data. Thus, through the application of the developed methods to biological and meteorological datasets, crucial information is garnered about the underlying stochastic processes, and significant insights are obtained into the physical nature of these systems.
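The basic diagnostic behind such analyses — whether the mean squared displacement grows linearly in the lag time — can be sketched in a few lines of Python. This is a minimal illustration on a synthetic Brownian trajectory, not the thesis's actual Bayesian or large-deviation pipeline; all numbers are made up:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged mean squared displacement of a 1D trajectory x
    for the given lag times (in frames)."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(0)
# Synthetic Brownian motion: cumulative sum of Gaussian increments.
x = np.cumsum(rng.normal(0.0, 1.0, 100_000))

lags = np.array([1, 2, 4, 8, 16, 32, 64])
msd = time_averaged_msd(x, lags)

# For Brownian motion MSD ~ 2*D*t, so the log-log slope (the anomalous
# exponent alpha) should be close to 1; alpha != 1 signals anomalous diffusion.
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"estimated anomalous exponent alpha = {alpha:.2f}")
```

In practice one would compare such exponents (and the shape of the displacement PDF) across models, which is where the Bayesian model comparison described above comes in.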
Lattice dynamics (2020)
In this thesis I summarize my contribution to the research field of ultrafast structural dynamics in condensed matter. It consists of 17 publications that cover the complex interplay between electron, magnon, and phonon subsystems in solid materials and the resulting lattice dynamics after ultrafast photoexcitation. The investigation of such dynamics is necessary for the physical understanding of the processes in materials that might become important in the future as functional materials for technological applications, for example in data storage, information processing, sensors, or energy harvesting.
In this work I present ultrafast x-ray diffraction (UXRD) experiments based on the optical pump – x-ray probe technique, revealing the time-resolved lattice strain. To study these dynamics, the samples (mainly thin-film heterostructures) are excited by femtosecond near-infrared or visible light pulses. The induced strain dynamics caused by stresses of the excited subsystems are measured in a pump-probe scheme with x-ray diffraction (XRD) as a probe. The UXRD setups used during my thesis are a laser-driven table-top x-ray source and large-scale synchrotron facilities with dedicated time-resolved diffraction setups. The UXRD experiments provide quantitative access to heat reservoirs in nanometric layers and monitor the transient responses of these layers with coupled electron, magnon, and phonon subsystems. In contrast to optical probes, UXRD provides access to material-specific information that is unavailable to optical light, which detects multiple indistinguishable layers within its penetration depth.
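The conversion from a measured transient Bragg peak shift to lattice strain follows from differentiating Bragg's law at fixed wavelength; a minimal sketch, with hypothetical numbers rather than values from the experiments described here:

```python
import numpy as np

def strain_from_bragg_shift(theta_bragg_deg, delta_theta_deg):
    """Lattice strain from a transient Bragg peak shift.
    Differentiating Bragg's law (2 d sin(theta) = n lambda) at fixed
    wavelength gives  delta_d/d = -cot(theta) * delta_theta.
    Angles in degrees; delta_theta is the measured peak shift."""
    theta = np.deg2rad(theta_bragg_deg)
    dtheta = np.deg2rad(delta_theta_deg)
    return -dtheta / np.tan(theta)

# Hypothetical numbers: a Bragg angle of 22 deg and a -0.01 deg peak shift
# (a shift to smaller angles corresponds to expansion).
eta = strain_from_bragg_shift(22.0, -0.01)
print(f"strain = {eta:.2e}")
```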
In addition, UXRD facilitates a layer-specific probe of layers buried in opaque heterostructures to study the energy flow. I extended this UXRD technique to obtain the driving stress profile by measuring the strain dynamics in the unexcited buried layer after excitation of the adjacent absorbing layers with femtosecond laser pulses. This enables the study of negative thermal expansion (NTE) in magnetic materials, which occurs due to the loss of magnetic order. Part of this work is the investigation of stress profiles that are the source of coherent acoustic phonon wave packets (hypersound waves). The spatiotemporal shape of these stress profiles depends on the energy distribution profile and the ability of the involved subsystems to produce stress. The evaluation of the UXRD data of rare-earth metals yields a stress profile that closely matches the optical penetration profile: in the paramagnetic (PM) phase the photoexcitation results in a quasi-instantaneous expansive stress of the metallic layer, whereas in the antiferromagnetic (AFM) phase a quasi-instantaneous contractive stress and a second contractive stress contribution, rising on a 10 ps time scale, add to the PM contribution. These two time scales are characteristic of the magnetic contribution and are in agreement with related studies of the magnetization dynamics of rare-earth materials.
Several publications in this thesis demonstrate the scientific progress in the field of active strain control to drive a second excitation or engineer an ultrafast switch. These applications of ultrafast dynamics are necessary to enable control of functional material properties via strain on ultrafast time scales.
For this thesis I implemented upgrades of the existing laser-driven table-top UXRD setup in order to enhance the x-ray flux enough to resolve single-digit-nanometer-thick layers. Furthermore, I developed and built a new in-situ time-resolved magneto-optic Kerr effect (MOKE) and optical reflectivity setup at the laser-driven table-top UXRD setup to measure the dynamics of the lattice, electrons, and magnons under the same excitation conditions.
In 1960, Yamabe claimed to have proved the following statement: on every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in the proof, and as a consequence many mathematicians worked on what became known as the Yamabe problem. In the 1980s, the work of Trudinger, Aubin and Schoen showed that the statement is in fact true. This has many benefits; for example, when analysing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
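In standard notation (not spelled out in the abstract), writing the conformally equivalent metric as $\tilde{g} = u^{4/(n-2)} g$ with a positive smooth function $u$, the Yamabe equation on the Riemannian manifold $(M,g)$ reads

```latex
-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\, u = \tilde{R}\, u^{\frac{n+2}{n-2}}, \qquad u > 0,
```

where $\Delta_g$ is the Laplace–Beltrami operator of $g$, $R_g$ the scalar curvature of $g$, and $\tilde{R}$ the (constant) scalar curvature of $\tilde{g}$; sign conventions for $\Delta_g$ vary across the literature.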
The question then arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The goal of this thesis is to investigate this problem.
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. To keep the prerequisites for treating the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the framework of a Cauchy problem. To this end, the inverse function theorem for Banach spaces is applied, so that existence statements for semilinear wave equations can be derived from already existing existence results for linear wave equations. It is proved that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First, it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation possesses an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is non-positive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. Furthermore, it is shown that, if the H2-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, the solution is positive on this time interval; here, too, the constant scalar curvature of the conformally equivalent metric is assumed to be non-positive. If, in addition, the scalar curvature with respect to the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I x Σ for a bounded open interval I. Finally, for M = R x Σ, an example of the non-existence of a global positive solution is given.
Galaxies are gravitationally bound systems of stars, gas, dust and - probably - dark matter. They are the building blocks of the Universe. The morphology of galaxies is diverse: some galaxies have structures such as spirals, bulges, bars, rings, lenses or inner disks, among others. The main processes that characterise galaxy evolution can be separated into fast violent events that dominated evolution at earlier times and slower processes, which constitute a phase called secular evolution, that became dominant at later times. Internal processes of secular evolution include the gradual rearrangement of matter and angular momentum, the build-up and dissolution of substructures, and the feeding of supermassive black holes and their feedback. Galaxy bulges – bright central components in disc galaxies – are, on the one hand, relics of galaxy formation and evolution. For instance, the presence of a classical bulge suggests a relatively violent history. In contrast, the presence of a disc-like bulge indicates the occurrence of secular evolution processes in the main disc. Galaxy bars – elongated central stellar structures – are, on the other hand, the engines of secular evolution. Studying internal properties of both bars and bulges is key to comprehending some of the processes through which secular evolution takes place. The main objectives of this thesis are (1) to improve the classification of bulges by combining photometric and spectroscopic approaches for a large sample of galaxies, (2) to quantify star formation in bars and verify dependencies on galaxy properties and (3) to analyse stellar populations in bars to aid in understanding the formation and evolution of bars. Integral field spectroscopy is fundamental to the work presented in this thesis, which consists of three different projects as part of three different galaxy surveys: the CALIFA survey, the CARS survey and the TIMER project.
The first part of this thesis constitutes an investigation of the nature of bulges in disc galaxies. We analyse 45 galaxies from the integral-field spectroscopic survey CALIFA by performing 2D image decompositions, growth curve measurements and spectral template fitting to derive stellar kinematics from CALIFA data cubes. From the obtained results, we present a recipe to classify bulges that combines four different parameters from photometry and kinematics: the bulge Sersic index nb, the concentration index C20;50, the Kormendy relation and the inner slope of the radial velocity dispersion profile ∇σ. The results of the different approaches are in good agreement and allow a safe classification for approximately 95% of the galaxies. We also find that our new ‘inner’ concentration index performs considerably better than the traditionally used C50;90 and, in combination with the Kormendy relation, provides a very robust indication of the physical nature of the bulge. In the second part, we study star formation within bars using VLT/MUSE observations of 16 nearby (0.01 < z < 0.06) barred active-galactic-nuclei (AGN)-host galaxies from the CARS survey. We derive spatially resolved star formation rates (SFR) from Hα emission line fluxes and perform a detailed multi-component photometric decomposition on images derived from the data cubes. We find a clear separation into eight star-forming (SF) and eight non-SF bars, which we interpret as an indication of a fast quenching process. We further report a correlation between the SFR in the bar and the shape of the bar surface brightness profile: only the flattest bars (nbar < 0.4) are SF. Both parameters are found to be uncorrelated with Hubble type. Additionally, owing to the high spatial resolution of the MUSE data cubes, for the first time we are able to dissect the SFR within the bar and analyse trends parallel and perpendicular to the bar major axis.
Star formation is 1.75 times stronger on the leading edge of a rotating bar than on the trailing edge and is radially decreasing. Moreover, from testing an AGN feeding scenario, we report that the SFR of the bar is uncorrelated with AGN luminosity. Lastly, we present a detailed analysis of star formation histories and chemical enrichment of stellar populations (SP) in galaxy bars. We use MUSE observations of nine very nearby barred galaxies from the TIMER project to derive spatially resolved maps of stellar ages and metallicities, [α/Fe] abundances, star formation histories, as well as Hα as a tracer of star formation. Using these maps, we explore in detail variations of SP perpendicular to the bar major axes. We find observational evidence for a separation of SP, supposedly caused by an evolving bar. Specifically, intermediate-age stars (∼ 2-6 Gyr) get trapped on more elongated orbits forming a thinner bar, while old stars (> 8 Gyr) form a rounder and thicker bar. This evidence is further strengthened by very similar results obtained from barred galaxies in the cosmological zoom-in simulations of the Auriga project. In addition, we find imprints of typical star formation patterns in barred galaxies on the youngest populations (< 2 Gyr), which continuously become more dominant from the major axis towards the sides of the bar. The effect is slightly stronger on the leading side. Furthermore, we find that bars are on average more metal-rich and less α-enhanced than the inner parts of the discs that surround them. We interpret this result as an indication of a more prolonged or continuous formation of the stars that shape the bar, as compared to shorter formation episodes in the disc within the bar region.
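The abstracts above do not state the exact calibration used to turn Hα emission line fluxes into star formation rates; a commonly used conversion is the Kennicutt (1998) relation, sketched here with purely hypothetical numbers:

```python
import numpy as np

def sfr_from_halpha(flux_cgs, distance_mpc):
    """Star formation rate from an (extinction-corrected) H-alpha flux.
    Uses the Kennicutt (1998) calibration:
        SFR [Msun/yr] = 7.9e-42 * L(H-alpha) [erg/s],
    with L = 4 * pi * d_L^2 * F.
    flux_cgs: erg s^-1 cm^-2; distance_mpc: luminosity distance in Mpc."""
    cm_per_mpc = 3.0857e24
    luminosity = 4.0 * np.pi * (distance_mpc * cm_per_mpc) ** 2 * flux_cgs
    return 7.9e-42 * luminosity

# Hypothetical example: an integrated bar flux of 1e-13 erg/s/cm^2 at 50 Mpc.
sfr = sfr_from_halpha(1e-13, 50.0)
print(f"SFR = {sfr:.2f} Msun/yr")
```

Applied pixel by pixel to a MUSE Hα flux map, the same conversion yields the kind of spatially resolved SFR maps described above.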
The steadily rising number of investor-State arbitration proceedings within the EU has triggered an extensive backlash and an increased questioning of the international investment law regime by different Member States as well as the EU Commission. This has resulted in the EU's assertion of control over the intra-EU investment regime by promoting the termination of bilateral intra-EU investment treaties (intra-EU BITs) and by opposing the jurisdiction of arbitral tribunals in intra-EU investor-State arbitration proceedings. Against the backdrop of the landmark Achmea decision of the European Court of Justice, the book offers an in-depth analysis of the interplay of international investment law and the law of the European Union with regard to intra-EU investments, i.e. investments undertaken by an investor from one EU Member State within the territory of another EU Member State. It specifically analyses the conflict between the two investment protection regimes applicable within the EU, with a particular emphasis on the compatibility of the international legal instruments with the law of the European Union. The book thereby addresses the more general question of the relationship between EU law and international law and offers a conceptual framework of intra-European investment protection based on the analysis of all intra-EU BITs, the Energy Charter Treaty and EU law, as well as the arbitral practice in over 180 intra-EU investor-State arbitration proceedings. Finally, the book develops possible solutions to reconcile the international legal standards of protection with the regionalized transnational law of the European Union.
After wavering obiter dicta, the "tax claim theory" (Steueranspruchstheorie) once again reflects the settled case law of the 1st Criminal Senate of the BGH (most recently: BGH, Beschl. v. 1.4.2020 – 1 StR 5/20, BeckRS 2020, 23245). For the procedural proof of intent to evade taxes it is thus required that the taxpayer also knew, in legal terms, the tax claim that was violated. The present analysis of criminal-court and fiscal-court case law on proving intent under § 370 AO shows that the judiciary handles this concept better than is often assumed.
In this respect, it is shown that the case law operates with a sufficiently established canon of indicators that legitimately permit the inference of intentional conduct. At the same time, the study concludes that the case law also consistently takes exculpatory circumstances into account and, given corresponding evidence, finds an intent-excluding mistake. As a further result, the study also provides a guideline for trial judges in determining intent in criminal tax proceedings.
The separation of advertising and programming is considered the "Magna Carta" of media law. In media practice, however, different rules seem to apply: advertising is to be embedded in the editorial programme as inconspicuously as possible in order to communicate successfully with potential buyers. Ever new programme-integrated advertising formats are emerging that design programmes to showcase branded products. This contradiction between law and practice forms the background of this work. Its focus is an examination of the legal concept of surreptitious advertising (Schleichwerbung) in national media law. The Interstate Broadcasting Treaty (Rundfunkstaatsvertrag) constitutes the decisive legal basis for defining this prohibited programme-integrated form of advertising on television and on the internet. Surreptitious advertising plays a particularly large role in connection with influencer marketing, and the work therefore clarifies what advertisers must observe in social media. Furthermore, the currency and adequacy of today's rules are analysed. The work was awarded the Wolf-Rüdiger-Bub Prize of the Verein der Freunde und Förderer der Juristischen Fakultät der Universität Potsdam.
In recent years, a substantial number of psycholinguistic studies and of studies on acquired language impairments have investigated the case of morphologically complex words. These have provided evidence for what is known as ‘morphological decomposition’, i.e. a mechanism that decomposes complex words into their constituent morphemes during online processing. This is believed to be a fundamental, possibly universal mechanism of morphological processing, operating irrespective of a word’s specific properties.
However, current accounts of morphological decomposition are mostly based on evidence from suffixed words and compound words, while prefixed words have been comparatively neglected. At the same time, it has been consistently observed that, across languages, prefixed words are less widespread than suffixed words. This cross-linguistic preference for suffixing morphology has been claimed to be grounded in language processing and language learning mechanisms. This would predict differences in how prefixed words are processed, and therefore also in how they are affected in language impairments, challenging the predictions of the major accounts of morphological decomposition.
Against this background, the present thesis aims at reducing the gap between the accounts of morphological decomposition and the accounts of the suffixing preference by providing a thorough empirical investigation of prefixed words. Prefixed words are examined in three different domains: (i) visual word processing in native speakers; (ii) visual word processing in non-native speakers; (iii) acquired morphological impairments. The processing studies employ the masked priming paradigm, tapping into early stages of visual word recognition. The studies on morphological impairments, in contrast, investigate the errors produced in reading-aloud tasks.
As for native processing, the present work first focuses on derivation (Publication I), specifically investigating whether German prefixed derived words, both lexically restricted (e.g. inaktiv ‘inactive’) and unrestricted (e.g. unsauber ‘unclean’) can be efficiently decomposed. I then present a second study (Publication II) on a Bantu language, Setswana, which offers the unique opportunity of testing inflectional prefixes, and directly comparing priming with prefixed inflected primes (e.g. dikgeleke ‘experts’) to priming with prefixed derived primes (e.g. bokgeleke ‘talent’). With regard to non-native processing (Publication I), the priming effects obtained from the lexically restricted and unrestricted prefixed derivations in native speakers are additionally compared to the priming effects obtained in a group of non-native speakers of German. Finally, in the two studies on acquired morphological impairments, the thesis investigates whether prefixed derived words yield different error patterns than suffixed derived words (Publication III and IV).
For native speakers, the results show evidence for morphological decomposition of both types of prefixed words, i.e. lexically unrestricted and restricted derivations, as well as of prefixed inflected words. Furthermore, non-native speakers are also found to efficiently decompose prefixed derived words, with parallel results to the group of native speakers. I therefore conclude that, for the early stages of visual word recognition, the relative position of stem and affix in prefixed versus suffixed words does not affect how efficiently complex words are decomposed, either in native or in non-native processing. In the studies on acquired language impairments, instead, prefixes are consistently found to be more impaired than suffixes. This is explained in terms of a learnability disadvantage for prefixed words, which may cause weaker representations of the information encoded in affixes when these precede the stem (prefixes) as compared to when they follow it (suffixes). Based on the impairment profiles of the individual participants and on the nature of the task, this dissociation is assumed to emerge from later processing stages than those that are tapped into by masked priming. I therefore conclude that the different characteristics of prefixed and suffixed words do come into play at later processing stages, during which the lexical-semantic information contained in the different constituent morphemes is processed.
The findings presented in the four manuscripts significantly contribute to our current understanding of the mechanisms involved in processing prefixed words. Crucially, the thesis constrains the processing disadvantage for prefixed words to later processing stages, thereby suggesting that theories trying to establish links between language universals and processing mechanisms should more carefully consider the different stages involved in language processing and what factors are relevant for each specific stage.
Using biochemical and biotechnological approaches, the aim of this work was to understand the mechanism of protein-glucan interactions in the regulation and control of starch degradation. Although starch degradation starts with a phosphorylation process, the mechanisms by which this process controls and adjusts starch degradation are not yet fully understood. Phosphorylation is performed mainly by two dikinase enzymes, α-glucan, water dikinase (GWD) and phosphoglucan, water dikinase (PWD). GWD and PWD phosphorylate the starch granule surface and thereby stimulate starch degradation by hydrolytic enzymes. Despite these important roles of GWD and PWD, the biochemical processes by which these enzymes regulate and adjust the rate of phosphate incorporation into starch during degradation have so far not been understood. Recently, some proteins were found to be associated with the starch granule. Two of these proteins are named Early Starvation Protein 1 (ESV1) and its homologue Like-Early Starvation Protein 1 (LESV). Both were supposed to be involved in the control of starch degradation, but until now their function has not been clearly known. To understand how ESV1- and LESV-glucan interactions are regulated and affect starch breakdown, the influence of the ESV1 and LESV proteins on the phosphorylating enzymes GWD and PWD and on the hydrolysing enzymes ISA, BAM, and AMY was analysed. In addition, the analysis determined the location of LESV and ESV1 in the chloroplast stroma of Arabidopsis. Mass spectrometry data identified the ESV1 and LESV proteins as products of the At1g42430 and At3g55760 genes, with predicted masses of ~50 kDa and ~66 kDa, respectively. The ChloroP program predicted that ESV1 lacks a chloroplast transit peptide, but predicted the first 56 amino acids of the N-terminal region of LESV as a chloroplast transit peptide. Usually, the transit peptide is processed during transport of the proteins into plastids.
Given that this processing is critical, two forms each of ESV1 and LESV were generated and purified: a full-length form and a truncated form lacking the transit peptide, namely (ESV1 and tESV1) and (LESV and tLESV), respectively. Both protein forms were included in the analysis assays, but only slight differences in glucan binding and protein action between ESV1 and tESV1 were observed, while no differences in glucan binding or in the effect on GWD and PWD action were observed between LESV and tLESV. The results revealed that the presence of the N-terminus does not massively alter the action of ESV1 or LESV. Therefore, only the data for the ESV1 and tLESV forms were used to explain the function of both proteins.
The analysis of the results revealed that the LESV and ESV1 proteins bind strongly to the starch granule surface. Furthermore, not all of either protein was released after incubation with starches when the granules were washed with 2% [w/v] SDS, indicating binding to deeper layers of the granule surface. This finding is supported by the binding of both proteins to starches after removal of the free glucan chains from the surface by the action of ISA and BAM. Although both proteins are capable of binding to the starch structure, only LESV showed binding to amylose, while for ESV1 no binding was observed. The alteration of glucan structures at the starch granule surface is essential for the incorporation of phosphate into the starch granule, as the phosphorylation of starch by GWD and PWD increased after removal of the free glucan chains by ISA. Furthermore, PWD was shown to be able to phosphorylate starch without prephosphorylation by GWD.
Biochemical studies on the protein-glucan interactions of LESV or ESV1 with different types of starch revealed a potentially important mechanism for regulating and adjusting the phosphorylation process: the binding of LESV and ESV1 alters the glucan structures of starches and thereby modulates the action of the dikinase enzymes (GWD and PWD), making them better able to control the rate of starch degradation. ESV1 showed an antagonistic effect on PWD action, as PWD action decreased without prephosphorylation by GWD and increased after prephosphorylation by GWD (Chapter 4); however, in the presence of ESV1, whether alone or together with LESV, PWD showed a significant reduction in its action with or without prephosphorylation by GWD (Chapter 5). The presence of LESV and ESV1 together had the same effect on the phosphorylation process as each one alone, so it is difficult to distinguish their specific functions. Moreover, no interactions were detected between LESV and ESV1, between either of them and GWD or PWD, or between GWD and PWD, indicating that these proteins act independently. It was also observed that the alteration of the starch structure by LESV and ESV1 plays a role in adjusting starch degradation rates not only by affecting the dikinases but also by affecting some of the hydrolysing enzymes, since the presence of LESV and ESV1 was found to reduce the action of BAM without abolishing it.
This thesis is focused on a better understanding of the formation mechanism of bulk birefringence gratings (BBG) and surface relief gratings (SRG) in photo-sensitive polymer films. A new set-up is developed that enables the in situ investigation of how the polymer film is structured during irradiation with modulated light. The novel aspect of the equipment is that it combines several techniques, namely a diffraction efficiency (DE) set-up, an atomic force microscope (AFM) and an optical set-up for controlled illumination of the sample. This enables the simultaneous acquisition and differentiation of both gratings (BBG and SRG) while changing the irradiation conditions in a desired way.
The dissertation is based on five publications. The first publication (I) is focused on the description of the set-up and interpretation of the measured data. A fine structure within the 1st-order diffraction spot is observed, which is a result of the inhomogeneity of the inscribed gratings.
In the second publication (II) the interplay of BBG and SRG in the DE is discussed. It was found that, depending on the polarization of a weak probe beam, the diffraction components of the SRG and BBG interfere either constructively or destructively in the DE, altering the intensity distribution within the diffracted spot.
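The constructive/destructive interplay described in (II) can be illustrated with a toy scalar model, assuming the two gratings contribute coherent first-order amplitudes whose relative phase depends on the probe polarization; the amplitude values below are hypothetical, not measured quantities from the thesis:

```python
import numpy as np

# Hypothetical scalar first-order diffraction amplitudes of the surface relief
# grating (SRG) and the bulk birefringence grating (BBG); in the real experiment
# the relative phase is set by the probe-beam polarization.
a_srg = 0.6
a_bbg = 0.4

def diffraction_efficiency(phi):
    """Combined first-order DE when both contributions add coherently
    with relative phase phi (radians)."""
    return abs(a_srg + a_bbg * np.exp(1j * phi)) ** 2

de_constructive = diffraction_efficiency(0.0)    # in-phase: amplitudes add
de_destructive = diffraction_efficiency(np.pi)   # out-of-phase: amplitudes subtract

print(de_constructive)  # (0.6 + 0.4)^2 = 1.0
print(de_destructive)   # (0.6 - 0.4)^2 = 0.04
```

The same probe thus reads out a very different total DE depending on whether the two grating contributions reinforce or cancel, which is the qualitative effect reported in (II).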
The third (III) and fourth (IV) publications describe the light-induced reconfiguration of surface structures. Special attention is paid to the conditions influencing the erasure of topography and bulk gratings, which can be achieved via thermal treatment or illumination of the polymer film. By translating the interference pattern (IP) in a controlled way, the optical erasure speed is significantly increased. Additionally, a dynamically reconfigurable surface is generated, which can move surface-attached objects through continuous translation of the interference pattern during irradiation of the polymer film.
The fifth publication (V) deals with polymer deformation under irradiation with an SP-IP, the only IP that generates a topography grating with half the period of the IP on the photo-sensitive polymer film. This mechanism can be used, e.g., to generate an SRG below the diffraction limit of light. It also represents an easy way of changing the period of the surface grating by a small change in the polarization angle of the interfering beams, without adjusting the optical path of the two beams. Additionally, complex surface gratings formed in mixed polarization and intensity interference patterns are shown.
I J. Jelken, C. Henkel and S. Santer, Applied Physics B, 125 (2019), 218
II J. Jelken, C. Henkel and S. Santer, Applied Physics Letters, 116 (2020), 051601
III J. Jelken and S. Santer, RSC Advances, 9 (2019), 20295
IV J. Jelken, M. Brinkjans, C. Henkel and S. Santer, SPIE Proceedings, 11367 (2020), 1136710
V J. Jelken, C. Henkel and S. Santer, Formation of Half-Period Surface Relief Gratings in Azobenzene Containing Polymer Films (submitted to Applied Physics B)
With his September 2015 speech “Breaking the tragedy of the horizon”, the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 presents a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. Therefore, the subsequent part of this dissertation is concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate the impact of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
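Stock-market responses to announcements, as studied in Chapter 3, are conventionally measured with market-model event studies. The sketch below uses entirely synthetic data and an assumed announcement effect, not the thesis's specification or results; it only illustrates the abnormal-return logic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for a market index and one carbon-intensive stock;
# all numbers are invented for illustration.
n_days = 250
event_day = 200                    # day of a hypothetical climate-policy announcement
r_market = rng.normal(0.0, 0.01, n_days)
r_stock = 0.0002 + 1.2 * r_market + rng.normal(0.0, 0.008, n_days)
r_stock[event_day] -= 0.05         # assumed negative announcement effect

# Estimation window: fit the market model r_i = alpha + beta * r_m.
est = slice(0, 150)
beta, alpha = np.polyfit(r_market[est], r_stock[est], 1)

# Abnormal return = actual return minus market-model prediction;
# cumulative abnormal return (CAR) over a +-2 day event window.
ar = r_stock - (alpha + beta * r_market)
car = ar[event_day - 2:event_day + 3].sum()
print(beta, ar[event_day], car)
```

A significantly negative abnormal return around the announcement day is the kind of evidence that transition risks are being priced into carbon-intensive assets.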
While costs of climate action have been explored at great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches on how to integrate transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
Perovskite solar cells have become one of the most studied systems in the quest for new, cheap and efficient solar cell materials. Within a decade, device efficiencies have risen to >25% in single-junction and >29% in tandem devices on top of silicon. This rapid improvement was in many ways fortunate, as, e.g., the energy levels of commonly used halide perovskites are compatible with already existing materials from other photovoltaic technologies such as dye-sensitized or organic solar cells. Despite this rapid success, fundamental working principles must be understood to allow concerted further improvements. This thesis focuses on a comprehensive understanding of recombination processes in functioning devices.
First, the impact of the energy level alignment between the perovskite and fullerene-based electron transport layers is investigated. This controversial topic is comprehensively addressed, and recombination is mitigated by reducing the energy difference between the perovskite conduction band minimum and the LUMO of the fullerene. Additionally, an insulating blocking layer is introduced, which is even more effective in reducing this recombination without compromising carrier collection and thus efficiency. Despite the rapid efficiency development (certified efficiencies have broken through the 20% ceiling) and the thousands of researchers working on perovskite-based optoelectronic devices, reliable protocols on how to reach these efficiencies are lacking. Having established robust methods for >20% devices, while keeping track of possible pitfalls, a detailed description of the fabrication of perovskite solar cells at the highest efficiency level (>20%) is provided. The fabrication of low-temperature p-i-n structured devices is described, commenting on important factors such as practical experience, processing atmosphere and temperature, material purity and solution age. Analogous to reliable fabrication methods, a method to identify recombination losses is needed to further improve efficiencies. Thus, absolute photoluminescence is identified as a direct way to quantify the quasi-Fermi level splitting of the perovskite absorber (1.21 eV) and the interfacial recombination losses imposed by the transport layers, which reduce it to ~1.1 eV. By implementing very thin interlayers at both the p- and n-interface (PFN-P2 and LiF, respectively), these losses are suppressed, enabling a V_OC of up to 1.17 V. By optimizing the device dimensions and the bandgap, 20% devices with 1 cm² active area are demonstrated. Another important consideration is the solar cells’ stability when subjected to field-relevant stressors during operation.
In particular, these are heat, light, bias or a combination thereof. Perovskite layers, especially those incorporating organic cations, have been shown to degrade when subjected to these stressors. Keeping in mind that several interlayers have been successfully used to mitigate recombination losses, a family of perfluorinated self-assembled monolayers (X-PFCn, where X denotes I/Br and n = 7-12) is introduced as interlayers at the n-interface. Indeed, they reduce interfacial recombination losses, enabling device efficiencies up to 21.3%. Even more importantly, they improve the stability of the devices. The solar cells with IPFC10 are stable over 3000 h of ambient storage and withstand a harsh 250 h of maximum power point operation at 85 °C without appreciable efficiency losses. To advance further and improve device efficiencies, a sound understanding of the photophysics of a device is imperative. Many experimental observations in recent years have, however, drawn an inconclusive picture, often suffering from technical or physical impediments, disguising, e.g., capacitive discharge as recombination dynamics. To circumvent these obstacles, fully operational, highly efficient perovskite solar cells are investigated with a combination of multiple optical and optoelectronic probes, allowing a conclusive picture of the recombination dynamics in operation to be drawn. Supported by drift-diffusion simulations, the device recombination dynamics can be fully described by a combination of first-, second- and third-order recombination, and JV curves as well as luminescence efficiencies over multiple illumination intensities are well described within the model. On this basis, steady-state carrier densities, effective recombination constants, densities of states and effective masses are calculated, putting the devices at the brink of the radiative regime. Moreover, a comprehensive review of recombination in state-of-the-art devices is given, highlighting the importance of interfaces in nonradiative recombination.
Different strategies to assess these are discussed, before emphasizing successful strategies to reduce interfacial recombination and pointing towards the necessary steps to further improve device efficiency and stability. Overall, the main findings represent an advancement in understanding loss mechanisms in highly efficient solar cells. Different reliable optoelectronic techniques are used and interfacial losses are found to be of grave importance for both efficiency and stability. Addressing the interfaces, several interlayers are introduced, which mitigate recombination losses and degradation.
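One standard relation underlying photoluminescence-based loss quantification of this kind links the quasi-Fermi level splitting to the external photoluminescence quantum yield, Δμ = Δμ_rad + kT·ln(PLQY). A quick numerical check (the PLQY values below are hypothetical, not measured values from the thesis) shows that a PLQY of order 1% corresponds to a nonradiative loss of roughly 0.1 eV, consistent in scale with the reduction from 1.21 eV to ~1.1 eV quoted above:

```python
import math

kT = 0.0257  # thermal energy at room temperature, eV

def nonradiative_qfls_loss(plqy):
    """Nonradiative loss of quasi-Fermi level splitting in eV,
    from delta_mu = delta_mu_rad + kT * ln(PLQY)."""
    return -kT * math.log(plqy)

# Hypothetical external PL quantum yields:
for plqy in (1.0, 1e-2, 1e-4):
    print(plqy, round(nonradiative_qfls_loss(plqy), 3))
```

Each decade of lost PLQY costs kT·ln(10) ≈ 59 meV of splitting, which is why suppressing nonradiative interfacial recombination translates so directly into V_OC gains.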
Ammonia is a chemical of fundamental importance for nature's vital nitrogen cycle. It is crucial for the growth of living organisms and serves as a food and energy source. Traditionally, industrial ammonia production is dominated by the Haber-Bosch process (HBP), which is based on the direct conversion of N2 and H2 gas at high temperature and high pressure (~500 °C, 150-300 bar). However, it is not an ideal route because of its thermodynamic and kinetic limitations and the need for energy-intensive production of hydrogen gas by reforming processes. These drawbacks of the HBP motivate the search for an alternative technique that performs efficient ammonia synthesis via electrochemical catalytic processes, in particular via water electrolysis, using water as the hydrogen source and thereby avoiding gas reforming.
In this study, the interface effects between imidazolium-based ionic liquids and the surface of porous carbon materials were investigated, with special interest in their nitrogen absorption capability. As a further step, the possibility of establishing this interface as the catalytically active area for the electrochemical reduction of N2 to NH3 was evaluated. This particular combination was chosen because porous carbon materials and ionic liquids (ILs) are of significant importance in many scientific fields, including catalysis and electrocatalysis, owing to their special structural and physicochemical properties. First, the effects of confining the ionic liquid EmimOAc (1-ethyl-3-methylimidazolium acetate) in carbon pores were investigated. Salt-templated porous carbons with different porosities (microporous and mesoporous) and nitrogen species were used as model structures for comparing IL confinement at different loadings. The nitrogen uptake of EmimOAc can be increased by about a factor of 10 through confinement in the pores of carbon materials compared to the bulk form. The greatest improvement in nitrogen absorption was observed for IL confinement in micropores and in nitrogen-doped carbon materials, as a consequence of the maximized structural changes of the IL. Furthermore, the possible use of such interfaces between EmimOAc and porous carbon for the catalytic activation of dinitrogen during the NRR, which is kinetically challenging due to the limited gas absorption in the electrolyte, was examined. An electrocatalytic NRR system based on the conversion of water and nitrogen gas to ammonia at ambient conditions (1 bar, 25 °C) was operated under an applied electric potential in a single-chamber electrochemical cell, consisting of the EmimOAc electrolyte combined with a porous-carbon working electrode and without a traditional electrocatalyst. Under a potential of -3 V vs.
SCE for 45 minutes, an NH3 production rate of 498.37 μg h⁻¹ cm⁻² and a faradaic efficiency (FE) of 12.14% were achieved. The experimental observations show that an electric double layer, which serves as the catalytically active area, forms between the microporous carbon material and the ions of the EmimOAc electrolyte at a sufficiently high applied electric potential. Compared with typical NRR systems reported in the literature, the presented electrochemical ammonia synthesis approach provides a significantly higher ammonia production rate and a chance to avoid the possible kinetic limitations of the NRR. The favorable operating conditions, ammonia production rate and faradaic efficiency, achieved without any synthetic electrocatalyst, can be attributed to the electrocatalytic activation of nitrogen in the double layer formed between the carbon and the IL ions.
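The two figures of merit reported above are computed in the standard way: the faradaic efficiency as FE = z·F·n(NH3)/Q with z = 3 electrons per NH3 molecule, and the production rate as mass per time per electrode area. A minimal sketch with hypothetical inputs (not the reported measurements):

```python
# Faradaic efficiency and production rate for electrochemical NH3 synthesis.
# All example numbers are hypothetical, not the thesis's measured values.
F = 96485.0      # Faraday constant, C/mol
M_NH3 = 17.031   # molar mass of NH3, g/mol
z = 3            # electrons transferred per NH3 molecule

def faradaic_efficiency(mass_nh3_ug, charge_C):
    """Fraction of the passed charge that went into forming NH3."""
    n = mass_nh3_ug * 1e-6 / M_NH3   # mol of NH3 produced
    return z * F * n / charge_C

def production_rate(mass_nh3_ug, t_h, area_cm2):
    """NH3 production rate in ug h^-1 cm^-2."""
    return mass_nh3_ug / (t_h * area_cm2)

# Example: 400 ug NH3 produced in 0.75 h on a 1 cm^2 electrode with 50 C passed.
fe = faradaic_efficiency(400.0, 50.0)
rate = production_rate(400.0, 0.75, 1.0)
print(round(fe, 3), round(rate, 1))
```

With these inputs the FE comes out near 14% and the rate above 500 μg h⁻¹ cm⁻², i.e. the same order as the values reported in the thesis.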
As part of the research initiative “BonaRes – Soil as a Sustainable Resource for the Bioeconomy”, funded by the German Federal Ministry of Education and Research, the subproject “I4S – integrated system for site-specific soil fertility management” is devoted to developing an integrated system for site-specific management of soil fertility. To this end, a measuring platform for determining relevant soil properties and for the quantitative analysis of selected macro- and micronutrients is planned. In the first phase of the project, the main focus is on calibrating and validating the various sensors for the soil matrix, on field sampling, and on planning and constructing the measuring platform. In the second phase, the various soil sensors are to be installed on this platform, and models and decision algorithms are to be developed for controlling fertilization and thereby improving soil functions.
The aim of the present work is the fundamental investigation and development of a robust online analysis based on energy-dispersive X-ray fluorescence spectroscopy (EDXRF) for quantifying selected macro- and micronutrients in soils, enabling cost-effective, area-wide mapping of arable land. For the development of an online method, a state-of-the-art X-ray fluorescence measuring head was put into operation and its instrument parameters were optimized for the soil matrix. Analytical figures of merit such as precision and detection limits were determined for a selection of nutrient elements from aluminum to zinc. To obtain a calibration matched as closely as possible to the matrix, both certified reference materials (CRM) and arable soils were used for calibration. Since one of the greatest drawbacks of X-ray fluorescence analysis is its susceptibility to matrix effects, the chemometric multivariate method of partial least squares regression (PLSR) was employed alongside classical univariate data evaluation. PLSR offers the advantage of compensating for matrix effects, yielding more robust calibration models. In addition, a principal component analysis (PCA) was performed to identify similarities and outliers within the sample set. It was shown that a classification of the soils according to their texture (sand, silt, loam, clay) is possible.
Building on the results for ideal soil samples (air-dried samples with grain sizes < 0.5 mm pressed into pellets), sample preparation was progressively reduced in the course of this work and the influence of various parameters was investigated. Such influencing factors include the density and homogeneity of the sample, as well as grain-size effects and moisture. The calibration models were compared with one another on the basis of the RMSE (root mean square error) and with consideration of the residuals. To assess the quality of the models, they were validated with a test set; for this purpose, 662 soil samples from 15 different sites in Germany were available. Since the results on pressed pellets met the requirements for a later online analysis for the elements Al, Si, K, Ca, Ti, Mn, Fe and Zn, calibration models were subsequently built with loose soil samples. Here too, good results were achieved, with adequate detection limits and a low mean deviation in the prediction of unknown test samples. The predictive ability of the multivariate PLSR proved superior to univariate data evaluation, particularly for the elements Mn and Zn.
The investigated influence of moisture and grain size on the quantification of element contents was especially pronounced for lighter elements. Ultimately, a multivariate calibration taking these factors into account could be established for the elements Al to Zn, so that use on soils in the field should be possible, albeit with a higher measurement uncertainty. For later sampling in the field, the difference between static and dynamic measurements was also examined, showing that both variants can be used. Finally, the sensor employed here was compared with a commercially available handheld instrument with respect to its quantification potential; based on its results, the sensor shows great potential as an online sensor for the measuring platform. The results under laboratory conditions show that a robust analysis of arable soils is possible when the influencing factors are taken into account.
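Why a multivariate calibration can compensate matrix effects where a univariate one cannot may be sketched with synthetic data. Ordinary least squares over several channels stands in here for PLSR (which is preferred when the channels are many and collinear), and all signal values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic XRF-like data: 80 soil samples, 5 spectral channels.
# Channel 0 carries the analyte signal but is disturbed by a matrix effect
# that channel 3 tracks; a multivariate model can correct for it.
n = 80
conc = rng.uniform(10, 100, n)       # "true" element concentration
matrix = rng.uniform(0, 1, n)        # hidden matrix effect
X = rng.normal(0, 1, (n, 5))         # background noise in all channels
X[:, 0] += 0.5 * conc + 20 * matrix  # analyte channel, contaminated
X[:, 3] += 30 * matrix               # channel sensitive to the matrix effect

train, test = slice(0, 60), slice(60, None)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Univariate calibration: concentration vs. the analyte channel only.
b1, b0 = np.polyfit(X[train, 0], conc[train], 1)
rmse_uni = rmse(conc[test], b0 + b1 * X[test, 0])

# Multivariate calibration on all channels (OLS with intercept).
A = np.column_stack([X[train], np.ones(60)])
coef, *_ = np.linalg.lstsq(A, conc[train], rcond=None)
pred = np.column_stack([X[test], np.ones(n - 60)]) @ coef
rmse_multi = rmse(conc[test], pred)

print(rmse_uni, rmse_multi)  # multivariate RMSE is clearly smaller
```

The multivariate model uses channel 3 to subtract the matrix contribution from channel 0, which is the same correction principle that makes PLSR calibration models more robust against matrix effects.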
In this dissertation we introduce a concept of light-driven active and passive manipulation of colloids trapped at a solid/liquid interface. The motion is induced by a light-driven diffusioosmotic (LDDO) flow generated upon irradiation with light of an appropriate wavelength. The flow originates from an osmotic pressure gradient resulting from a concentration gradient, at the solid/liquid interface, of the photosensitive surfactant present in the colloidal dispersion. The photosensitive surfactant consists of a cationic head group and a hydrophobic tail into which an azobenzene group is integrated. Azobenzene is known to undergo reversible photo-isomerization from a stable trans to a metastable cis state under irradiation with UV light; exposure to light of longer wavelength results in back-isomerization from the cis to the trans state. The two isomers have different molecular properties: the trans isomer has a rod-like structure and low polarity (zero dipole moment), whereas the cis isomer is bent and has a dipole moment of ~3 Debye. Being integrated in the hydrophobic tail of the surfactant molecule, the azobenzene state determines the hydrophobicity of the whole molecule: in the trans state the surfactant is more hydrophobic than in the cis state. In this way many properties of the surfactant, such as the CMC, the solubility and the interaction potential with a solid surface, can be altered by light. When a solution containing such a surfactant is irradiated with focused light, a concentration gradient of the two isomers forms near the boundary of the irradiated area close to the solid surface, resulting in an osmotic pressure gradient. The generated diffusioosmotic (DO) flow carries the particles passively along.
A local LDDO flow can be generated around and by each particle when mesoporous silica colloids are dispersed in the surfactant solution. This is because the porous particles act as a sink/source that absorbs the azobenzene-containing surfactant in the trans state and expels it in the cis state. The DO flows generated at each particle interact, resulting in aggregation or separation depending on the initial state of the surfactant molecules. The kinetics of aggregation and separation can be controlled and manipulated by altering parameters such as the wavelength and intensity of the applied light, as well as the surfactant and particle concentrations. Using two wavelengths simultaneously allows for dynamic gathering and separation, creating fascinating patterns such as a 2D disk of well-separated particles, or establishing complex collective behaviour of the particle ensemble, as described in this thesis.
The mechanism of local LDDO is also used to generate self-propelled motion. This is possible when half of the porous particle is covered by a metal layer, essentially blocking the pores on one side. The LDDO flow generated on the uncapped side pushes the particle forward, resulting in superdiffusive motion. The system of porous particles and azobenzene-containing surfactant molecules can be utilized for various applications such as drug delivery, cargo transportation, self-assembly, micromotors/micromachines or micropatterning.
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava and commonly erupt at many active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, such lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity as the complex interplay of cooling, degassing, and solidification of dome lavas regularly causes gas pressurizations on the dome or the underlying volcano conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusions and employ photogrammetric monitoring by Structure-from-Motion (SfM) and Particle Image Velocimetry (PIV) techniques. I primarily use aerial photography data obtained by helicopter, airplane, Unoccupied Aircraft Systems (UAS) or ground-based time-lapse cameras. Firstly, by combining a long time series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, the impact of the extrusion on dome morphology and loading stress is further evaluated, and an impact on the growth direction is identified, bearing important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion data in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I provide insight into rock properties relevant for hazard assessment inferred purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions in which they form.
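At the core of PIV processing lies locating the cross-correlation peak between interrogation windows of successive frames to obtain a displacement field. A single-window sketch with a synthetic image is shown below; real PIV processing adds window tiling, overlap and subpixel peak fitting:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal single-window PIV sketch: estimate the pixel shift between two
# frames by locating the peak of their cross-correlation (via FFT).
frame1 = rng.random((64, 64))                 # synthetic textured surface
shift = (3, 5)                                # imposed displacement in pixels
frame2 = np.roll(frame1, shift, axis=(0, 1))  # second frame: shifted copy

# Cross-correlation computed with the Fourier shift theorem.
xcorr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
print(dy, dx)  # recovers the imposed shift (3, 5)
```

Applied to tiles of sequential orthophotos or timelapse frames, this correlation step yields the surface motion fields used to quantify dome growth and lava flow movement.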
These findings demonstrate the capabilities of photogrammetric data analyses to successfully monitor lava dome growth and evolution while highlighting the advantages of complementary modelling methods to explain the observed phenomena. The results presented herein further bear important new insights and implications for the hazards posed by lava domes.
In contrast to its predominantly terrestrial setting today, Central Asia also witnessed marine environments and conditions in its geologic past. A vast, shallow sea, known as the proto-Paratethys, extended across Eurasia from the Mediterranean Tethys to the Tarim Basin in western China during Cretaceous to Paleogene times. This sea formed about 160 million years ago (during Jurassic times) when the waters of the Tethys Ocean flooded into Eurasia. It drastically retreated to the west and became isolated as the Paratethys during the Late Eocene-Oligocene (ca. 34 Ma).
Well-constrained timing and paleogeography for the Cretaceous-Paleogene proto-Paratethys sea incursions in Central Asia are essential to properly understand and distinguish the controlling mechanisms and their link to Asian paleoenvironmental and paleoclimatic change. The Cretaceous-Paleogene tectonic evolution of the Pamir and Tibet and their far-field effects play a significant role in the sedimentological and structural evolution of the Central Asian basins and in the evolution of the proto-Paratethys sea fluctuations. Comparing the records of the sea incursions to tectonic and eustatic events is of paramount importance for revealing the controlling mechanisms behind the sea incursions. However, due to inaccuracies in the dating of rocks (mostly continental rocks and marine rocks with benthic microfossils providing low-resolution biostratigraphic constraints) and conflicting results, there has been no consensus on the timing of the sea incursions, and the interpretation of their records has been in question. Here, we present a new chronostratigraphic framework based on biostratigraphy and magnetostratigraphy, as well as a detailed paleoenvironmental analysis, for the Cretaceous and Paleogene proto-Paratethys Sea incursions in the Tajik and Tarim basins in Central Asia. This enables us to identify the major drivers of marine fluctuations and their potential consequences for regional and global climate, particularly Asian aridification and global carbon cycle perturbations such as the Paleocene-Eocene Thermal Maximum (PETM). To estimate the paleogeographic evolution of the proto-Paratethys Sea, the refined age constraints and detailed paleoenvironmental interpretations are combined with successive paleogeographic maps. Regional coastlines and depositional environments during the Cretaceous-Paleogene sea advances and retreats were drawn based on the results of this thesis and integrated with the existing literature to generate new paleogeographic maps.
Before its final westward retreat in the Eocene, a total of six Cretaceous and Paleogene major sea incursions have been distinguished from the sedimentary records of the Tajik and Tarim basins in Central Asia. All have been studied and documented here.
We identify the presence of marine conditions already in the Early Cretaceous in the western Tajik Basin, followed by the Cenomanian (ca. 100 Ma) and Santonian (ca. 86 Ma) major marine incursions far into the eastern Tajik and Tarim basins, separated by a Turonian-Coniacian (ca. 92-86 Ma) regression. Basin-wide tectonic subsidence analyses imply that the Early Cretaceous invasion of the sea into the Tajik Basin is related to increased Pamir tectonism (at ca. 130-90 Ma) in a retro-arc basin setting inferred to be linked to collision and subduction. This tectonic event mainly governed the Cenomanian (ca. 100 Ma) sea incursion, in conjunction with a coeval global eustatic high resulting in the maximum geographic extent of the sea. The following Turonian-Coniacian (ca. 92-86 Ma) major regression, driven by eustasy, coincides with a sharp slowdown in tectonic subsidence related to a regime change in Pamir tectonism from compression to extension. The Santonian (ca. 86 Ma) major sea incursion was more likely controlled dominantly by eustasy, as also evidenced by the coeval fluctuations in the west Siberian Basin. During the early Maastrichtian, the disappearance of mollusk-rich limestones and the dominance of bryozoan-rich and echinoderm-rich limestones in the Tajik Basin provide the first evidence for the global Late Cretaceous cooling event in Central Asia.
Following the last Cretaceous sea incursion, a major regional restriction event, marked by the exceptionally thick (≤ 400 m) shelf evaporites is assigned a Danian-Selandian age (ca. 63-59 Ma). This is followed by the largest recorded proto-Paratethys sea incursion with a transgression estimated as early Thanetian (ca. 59-57 Ma) and a regression within the Ypresian (ca. 53-52 Ma). The transgression of the next incursion is now constrained as early Lutetian (ca. 47-46 Ma), whereas its regression is constrained as late Lutetian (ca. 41 Ma) and is associated with a drastic increase in both tectonic subsidence and basin infilling. The age of the final and least pronounced sea incursion restricted to the westernmost margin of the Tarim Basin is assigned as Bartonian–Priabonian (ca. 39.7-36.7 Ma). We interpret the long-term westward retreat of the proto-Paratethys Sea starting at ca. 41 Ma to be associated with far-field tectonic effects of the Indo-Asia collision and Pamir/Tibetan plateau uplift. Short-term eustatic sea level transgressions are superimposed on this long-term regression and seem coeval with the transgression events in the other northern Peri-Tethyan sedimentary provinces for the 1st and 2nd Paleogene sea incursions. However, the last Paleogene sea incursion is interpreted as related to tectonism. The transgressive and regressive intervals of the proto-Paratethys Sea correlate well with the reported humid and arid phases, respectively in the Qaidam and Xining basins, thus demonstrating the role of the proto-Paratethys Sea as an important moisture source for the Asian interior and its regression as a contributor to Asian aridification.
We lastly study the mechanics, relative contribution and preservation efficiency of ancient epicontinental seas as carbon sinks with new and existing data, using organic-rich (sapropel) deposits dated to the PETM from the extensive epicontinental proto-Paratethys and West Siberian seas. We estimate ca. 1390±230 Gt of organic C burial in the proto-Paratethys and West Siberian seas alone, a substantial amount compared to the previously estimated global total excess organic C burial (ca. 1700-2900 Gt). We also speculate that enhanced organic carbon burial later over much of the proto-Paratethys (and later Paratethys) basin, during the deposition of the Kuma Formation and Maikop series, respectively, may have contributed substantially to the drawdown of atmospheric carbon dioxide before and during the EOT cooling and glaciation of Antarctica. For past periods with smaller epicontinental seas, the effectiveness of this negative carbon cycle feedback was arguably diminished, and the same likely applies to the present day.
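The relative contribution quoted above follows directly from the stated numbers; depending on where the global total falls within its range, these two seas account for roughly half to four fifths of the excess burial:

```python
# Share of the estimated global excess organic-carbon burial during the PETM
# accounted for by the proto-Paratethys and West Siberian seas
# (values taken from the text above; simple arithmetic, no new data).
burial = 1390.0                            # Gt C, this study (central estimate)
global_low, global_high = 1700.0, 2900.0   # Gt C, prior global estimates

share_high = burial / global_low   # if the global total is at the low end
share_low = burial / global_high   # if the global total is at the high end
print(round(share_low, 2), round(share_high, 2))  # 0.48 0.82
```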
Carbonates play a key role in the chemistry and dynamics of our planet. They are directly connected to the CO2 budget of our atmosphere and have a great impact on the deep carbon cycle. Moreover, recent studies have shown that carbonates are stable along the geothermal gradient down to the conditions of Earth's lower mantle, changing their crystal structure and related properties. Subducted carbonates may also react with silicates to form new phases. These reactions redistribute elements such as calcium (Ca), magnesium (Mg), iron (Fe) and carbon in the form of carbon dioxide (CO2), but also the trace elements carried by the carbonates. The trace elements of most interest are strontium (Sr) and the rare earth elements (REE), which have been found to be important constituents of the primitive lower mantle and of mineral inclusions found in super-deep diamonds. However, the stability of carbonates in the presence of mantle silicates at relevant temperatures is far from well understood. Likewise, very little is known about the distribution of trace elements between carbonates and mantle silicates. To shed light on these processes, we studied reactions between Sr- and REE-containing CaCO3 and Mg/Fe-bearing silicates of the system (Mg,Fe)2SiO4 - (Mg,Fe)SiO3 at high pressure and high temperature, using synchrotron-radiation-based μ-X-ray diffraction (μ-XRD) and μ-X-ray fluorescence (μ-XRF) with μm resolution in a laser-heated diamond anvil cell. X-ray diffraction is used to derive the structural changes of the phase reactions, whereas X-ray fluorescence gives information on the chemical changes in the sample. In-situ experiments at high pressure and high temperature were performed at beamline P02.2 at PETRA III (Hamburg, Germany) and at beamline ID27 at ESRF (Grenoble, France).
In addition to μ-XRD and μ-XRF, ex-situ measurements were made on the recovered sample material using transmission electron microscopy (TEM) and provided further insights into the reaction kinetics of carbonate-silicate reactions.
Our investigations show that CaCO3 is unstable in the presence of mantle silicates above 1700 K, and a reaction takes place in which magnesite plus CaSiO3-perovskite are formed. In addition, we observed that a high iron content in the carbonate-silicate system favours dolomite formation during the reaction. The subduction of natural carbonates carrying significant amounts of Sr motivates a comprehensive investigation of the stability not only of CaCO3 phases in contact with mantle silicates but also of SrCO3 (and of Sr-bearing CaCO3). We found that SrCO3 reacts with (Mg,Fe)SiO3-perovskite to form magnesite, and we found evidence for the formation of SrSiO3-perovskite.
To complement our study on the stability of SrCO3 at conditions of the Earth's lower mantle, we performed powder X-ray diffraction and single crystal X-ray diffraction experiments at ambient temperature and up to 49 GPa. We observed a transformation from SrCO3-I into a new high-pressure phase SrCO3-II at around 26 GPa with Pmmn crystal structure and a bulk modulus of 103(10) GPa. This information is essential to fully understand the phase behaviour and stability of carbonates in the Earth's lower mantle and to elucidate the possibility of introducing Sr into mantle silicates by carbonate-silicate reactions.
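A bulk modulus such as the 103(10) GPa reported here is typically extracted by fitting a pressure-volume equation of state to the diffraction data. A minimal sketch of a third-order Birch-Murnaghan pressure evaluation follows; K0 is taken from the text, while the normalized V0 and K0' = 4 are illustrative assumptions, not fitted values:

```python
def birch_murnaghan(V, V0=1.0, K0=103.0, K0p=4.0):
    """Third-order Birch-Murnaghan pressure (GPa) at volume V.

    K0 is the bulk modulus reported for SrCO3-II in the text;
    V0 (normalized) and K0p = 4 are illustrative assumptions.
    """
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))

# At V = V0 the pressure vanishes; compression gives positive pressure.
print(birch_murnaghan(1.0))        # 0.0
print(birch_murnaghan(0.9) > 0.0)  # True
```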
Simultaneous recording of μ-XRD and μ-XRF in the μm range over the heated areas provides spatial information not only about phase reactions but also about the elemental redistribution during the reactions. A comparison of the spatial intensity distribution of the XRF signal before and after heating indicates a change in the elemental distribution of Sr, and an increase in Sr concentration was found around the newly formed SrSiO3-perovskite. With the help of additional TEM analyses on the quenched sample material, the elemental redistribution was studied at a sub-micrometer scale. Contrary to expectations from the combined μ-XRD and μ-XRF measurements, we found that La and Eu were not incorporated into the silicate phases; instead, they tend to form either isolated oxide phases (e.g. Eu2O3, La2O3) or hydroxyl-bastnäsite (La(CO3)(OH)). In addition, we observed the transformation from (Mg,Fe)SiO3-perovskite to low-pressure clinoenstatite during pressure release. The monoclinic structure (P21/c) of this phase allows the incorporation of Ca, as shown by additional EDX analyses, and, to a minor extent, of Sr.
Based on our experiments, we conclude that in-situ detection of the trace elements at high pressure and high temperature remains challenging. However, our first findings imply that silicates may incorporate the trace elements provided by the carbonates and indicate that carbonates may have a major effect on the trace element contents of mantle phases.
The transfer of particulate organic carbon from continents to the ocean is an important component of the global carbon cycle. Transfer to and burial of photosynthetically fixed biospheric organic carbon in marine sediments can effectively sequester atmospheric carbon dioxide over geological timescales. The exhumation and erosion of fossil organic carbon contained in sedimentary rocks, i.e. petrogenic carbon, can result in remineralization, releasing carbon to the atmosphere. In contrast, eroded petrogenic organic carbon that gets transferred back to the ocean and reburied does not affect atmospheric carbon content.
Mountain ranges play a key role in this transfer since they can source vast amounts of sediment, including particulate organic carbon. Globally, the export of both biospheric and petrogenic organic carbon has been linked to sediment export. Additionally, short transfer times from mountains to the ocean and high sediment concentrations have been shown to increase the likelihood of organic carbon burial. While the importance of mountain ranges in the organic carbon cycle is now widely recognized, the processes acting within mountain ranges to influence the storage, cycling and mobilization of organic carbon, as well as carbon fluxes from mountain ranges, remain poorly constrained.
In this thesis, I employ different methods to assess the nature and fate of particulate organic carbon in mountain belts, ranging from the molecular to the regional landscape scale. These studies are located along the Trans-Himalayan Kali Gandaki River in Central Nepal. This river traverses all major geological and climatic zones of the Himalaya, from the dry northern Tibetan plateau through the steep, high-relief, monsoon-dominated High Himalaya to the lower relief and abundant vegetation of the Lesser Himalayan region.
First, I document how biospheric organic matter accumulated during the Holocene in the headwaters of the Kali Gandaki River valley, by combining compound-specific isotope measurements with different dating methods and grain size data, and investigate the stability of this organic carbon reservoir on millennial timescales. I show that around 1.6 ka an eco-geomorphic tipping point occurred, leading to a destabilization of the landscape that resulted in today's high erosion rates and the excavation of the aged organic carbon reservoir. This study highlights the climatic and geomorphic controls on biospheric organic carbon storage and release from mountain ranges.
Second, I systematically investigate the spatial variation of particulate organic carbon fluxes across the Himalaya along the Kali Gandaki River, using bulk stable and radioactive isotopes combined with a new Bayesian modeling approach. The detailed dataset allows the distinction of aged and modern biospheric organic carbon as well as petrogenic organic carbon across the Himalayan mountain range and the investigation of the role of climatic and geomorphic factors in their riverine export. The data suggest a decoupling of the particulate organic carbon from the sediment yield along the Kali Gandaki River, partially driven by climatic and geomorphic processes. In contrast to the suspended sediment, a large part of the particulate organic carbon exported by the river originates from the Tibetan part of the catchment and is dominated by petrogenic organic carbon derived from Jurassic shales with only minor contributions of modern and aged biospheric organic carbon. These findings emphasize the importance of organic carbon source distribution and erosion mechanisms in determining the organic carbon export from mountain ranges.
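The Bayesian source-apportionment idea can be illustrated with a toy two-endmember mixing calculation. This is a sketch only, not the model used in the thesis; the endmember and sample values are hypothetical placeholders:

```python
import math

# Hypothetical two-endmember mixing sketch (not the thesis model): the fraction
# f of petrogenic organic carbon in a river sediment sample is inferred from a
# bulk radiocarbon signature. All values below are illustrative placeholders.
F_PETRO, F_BIO = 0.0, 1.0      # fraction-modern carbon (F14C) of each endmember
F_SAMPLE, SIGMA = 0.25, 0.05   # measured bulk F14C and its assumed uncertainty

def posterior_mean(n_grid=1001):
    """Grid-based posterior mean of the petrogenic fraction f (flat prior)."""
    num = den = 0.0
    for i in range(n_grid):
        f = i / (n_grid - 1)
        predicted = f * F_PETRO + (1.0 - f) * F_BIO   # linear mixing model
        likelihood = math.exp(-0.5 * ((F_SAMPLE - predicted) / SIGMA) ** 2)
        num += f * likelihood
        den += likelihood
    return num / den

print(round(posterior_mean(), 3))   # close to 0.75 for these placeholder values
```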
In a third step, I explore the potential of ultra-high resolution mass spectrometry for particulate organic carbon transport studies. I have generated a novel and unprecedented high-resolution molecular dataset, which contains up to 10³ molecular formulas of the lipid fraction of particulate organic matter for modern and aged biospheric carbon, petrogenic organic carbon and river sediments. First, I test if this dataset can be used to better resolve different organic carbon sources and to identify new geochemical tracers. Using multivariate statistics, I identify up to 10² characteristic molecular formulas for the major organic carbon sources in the upper part of the Kali Gandaki catchment, and trace their transfer from the surrounding landscape into the river sediment. Second, I test the potential of the molecular dataset to trace molecular transformations along source-to-sink pathways. I identify changes in molecular metrics derived from the dataset, which are characteristic of transformation processes during incorporation of litter into soil, the aging of soil material, and the mobilization of the organic carbon into the river. These two studies demonstrate that high-resolution molecular datasets open a promising analytical window on particulate organic carbon and can provide novel insights into the composition, sourcing and transformation of riverine particulate organic carbon.
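Molecular formulas from such datasets are commonly summarized by elemental-ratio metrics (e.g. H/C and O/C, as plotted in a van Krevelen diagram). A minimal sketch, with an illustrative parser and an example compound not taken from the thesis:

```python
import re

def elemental_ratios(formula):
    """Return (H/C, O/C) for a simple molecular formula string like 'C16H32O2'.

    Minimal illustrative parser: one- or two-letter element symbols only,
    no parentheses, charges or isotope labels.
    """
    counts = {}
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + (int(n) if n else 1)
    c = counts["C"]
    return counts.get("H", 0) / c, counts.get("O", 0) / c

# Palmitic acid, a common lipid marker, as an illustrative example:
hc, oc = elemental_ratios("C16H32O2")
print(hc, oc)   # 2.0 0.125
```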
Collectively, these studies advance our understanding of the processes contributing to the storage and mobilization of organic carbon in the Central Himalaya, the mountain belt that dominates global erosional fluxes. They do so by identifying the major sources of particulate organic carbon to the Trans-Himalayan Kali Gandaki River, by elucidating their sensitivity to climate and geomorphic processes, and by identifying some of the transformations of this material on the molecular scale. As a result, the thesis demonstrates that the amount and composition of organic carbon routed from mountain belts is a function of the dynamic interactions of geologic, biologic, geomorphic and climatic processes within the mountain belt. This understanding will ultimately help in answering whether the build-up and erosion of mountain ranges over geological time represents a net carbon source or sink to the atmosphere. Beyond this, the thesis contributes to our technical ability to characterize organic matter and attribute it to sources by scoping the potential of high-end molecular analysis.
With populations growing worldwide and climate change threatening food production, there is an urgent need to find ways to ensure food security. Increasing the carbon fixation rate in plants is a promising approach to boost crop yields. The carbon-fixing enzyme Rubisco catalyzes, besides the carboxylation reaction, an oxygenation reaction that generates glycolate-2P, which needs to be recycled via a metabolic route termed photorespiration. Photorespiration dissipates energy and, most importantly, releases previously fixed CO2, thus significantly lowering carbon fixation rate and yield. Engineering plants to omit photorespiratory CO2 release is the goal of the FutureAgriculture consortium, and this thesis is part of this collaboration. The consortium aims to establish alternative glycolate-2P recycling routes that do not release CO2. Ultimately, these are expected to increase carbon fixation rates and crop yields. Natural and novel reactions, the latter requiring enzyme engineering, were considered in the pathway design process. Here I describe the engineering of two pathways, the arabinose-5P and the erythrulose shunt. They were designed to recycle glycolate-2P via glycolaldehyde into a sugar phosphate and thereby reassimilate glycolate-2P into the Calvin cycle. I used Escherichia coli gene deletion strains to validate and characterize the activity of both synthetic shunts. The strains' auxotrophies can be alleviated by the activity of the synthetic route, thus providing a direct way to select for pathway activity. I introduced all pathway components into these dedicated selection strains and discovered inhibitions, limitations and metabolic cross-talk interfering with pathway activity. After resolving these issues, I was able to show the in vivo activity of all pathway components and combine them into functional modules. Specifically, I demonstrate the activity of a new-to-nature module of glycolate reduction to glycolaldehyde.
Also, I successfully show a new glycolaldehyde assimilation route via arabinose-5P to ribulose-5P. In addition, all necessary enzymes for glycolaldehyde assimilation via L-erythrulose were shown to be active and an L-threitol assimilation route via L-erythrulose was established in E. coli. On their own, these findings demonstrate the power of using an easily engineerable microbe to test novel pathways; combined, they will form the basis for implementing photorespiration bypasses in plants.
The aim of this dissertation was to determine whether sustainability consciousness influences the consumption of luxury goods and whether various moderators affect this relationship. Based on the consciousness-for-sustainable-consumption model developed by Balderjahn et al. (2013), sustainability consciousness was represented by ecological, social and economic sustainability, supplemented by consciousness of animal welfare and of local production. Striving for social recognition and prestige, materialism, hedonism and traditionalism served as moderators. A predictor analysis was conducted to uncover possible relationships between the different dimensions of sustainability and luxury consumption. Moderator analyses additionally revealed whether the various moderators influenced the individual relationships. The study showed that environmental consciousness, consciousness of frugal consumption, consciousness of debt-free consumption as part of economic sustainability, and animal welfare consciousness each influence luxury consumption. In addition, a total of seven effects of the various moderator variables on the different relationships between the sustainability dimensions and luxury consumption were identified.
Giant unilamellar vesicles are an important tool in today's experimental efforts to understand the structure and behaviour of biological cells. Their simple structure allows the isolation of the physical elastic properties of the lipid membrane. A central physical property is the bending energy of the membrane, since the many different shapes of giant vesicles can be obtained by finding the minimum of the bending energy. In the spontaneous curvature model, the bending energy is a function of the bending rigidity as well as the mean curvature and an additional parameter called the spontaneous curvature, which describes an intrinsic preference of the lipid bilayer to bend towards one side or the other. The spontaneous and mean curvature are local properties of the membrane. Additional constraints arise from the conservation of the membrane surface area and the enclosed volume, which are global properties.
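Written out, the energy functional of the spontaneous curvature model has the standard Helfrich form, with κ the bending rigidity, M the local mean curvature and m the spontaneous curvature:

```latex
E_{\mathrm{be}} = 2\kappa \oint \left( M - m \right)^{2} \, \mathrm{d}A ,
```

with the minimization carried out under the constraints of fixed membrane area and enclosed volume, typically enforced by Lagrange multipliers (a membrane tension and a pressure difference).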
In this thesis the spontaneous curvature model is used to explain the experimental observation of a periodic shape oscillation of a giant unilamellar vesicle that was filled with a protein complex that periodically binds to and unbinds from the membrane.
By assuming that the binding of the proteins to the membrane induces a change in the spontaneous curvature, the experimentally observed shapes could be explained successfully. This involves the numerical solution of the differential equations obtained from the minimization of the bending energy respecting the area and volume constraints, the so-called shape equations. Vice versa, this approach can be used to estimate the spontaneous curvature from experimentally measurable quantities.
The second topic of this thesis is the analysis of concentration gradients in rigid conic membrane compartments. Gradients of an ideal gas due to gravity and gradients generated by the directed stochastic movement of molecular motors along a microtubule were considered. The free energy and the bending energy could be calculated analytically for the ideal gas. In the case of the non-equilibrium system with molecular motors, the characteristic length of the density profile, the jam length, and its dependence on the opening angle of the conic compartment were calculated in the mean-field limit. The mean-field results agree qualitatively with stochastic particle simulations.
This thesis investigates the extent to which physics experiments evoke a flow experience in learners. Flow experience is regarded as a source of motivation and as a path to enjoyment and happiness. Particularly in view of the frequently cited shortage of skilled workers in scientific and technical professions, increasing motivation in science subjects is important: despite performance gains in international comparative tests, considerably fewer students in Germany want to enter such a profession than in other industrialized countries. It is therefore important to get students interested in science and technology subjects as early as possible and to generate flow experience especially in physics lessons, a subject that is often outright disliked.
Within this thesis, the flow experience of university students is investigated in classical laboratory experiments and in FELS (Forschend-Entdeckendes Lernen mit dem Smartphone, i.e. inquiry-based learning with the smartphone) as learning environments. FELS is a learning environment adapted to the students' everyday world, in which they use smartphones to investigate their own surroundings experimentally.
The results show that both classical laboratory experiments and smartphone-based experiments carried out in everyday settings generate flow experience. However, the smartphone-based experiments cause hardly any feelings of stress.
The results obtained in this work provide a first approach that should be extended by follow-up studies.
In recent years, people have realised the non-renewability of our modern society, which relies on spending huge amounts of energy mostly produced from fossil fuels such as oil and coal, and the shift towards more sustainable energy sources has started. However, sustainable sources of energy, such as wind, solar and hydro energy, primarily produce electrical energy, which cannot simply be poured into a canister like many fossil fuels, creating the need for rechargeable batteries. Modern Li-ion batteries, however, are made with toxic heavy metals, and sustainable alternatives are needed. Here we show that naturally abundant catecholic and guaiacyl groups can be utilised to replace heavy metals in Li-ion batteries.
First, vanillin, a naturally occurring food additive that can be sustainably synthesised from the industrial biowaste lignin, was utilised to synthesise materials that showed extraordinary performance as cathodes in Li-ion batteries. Furthermore, the behaviour of catecholic and guaiacyl groups in the Li-ion system was compared, confirming the usability of guaiacyl-containing biopolymers as cathodes in Li-ion batteries. Lastly, the naturally occurring polyphenol tannic acid was incorporated into a fully bioderived hybrid material that shows performance comparable to commercial Li-ion batteries and good stability.
This thesis presents an important advancement in the understanding of biowaste-derived cathode materials for Li-ion batteries. Further research should be conducted to better understand the behaviour of guaiacyl groups during Li-ion battery cycling. Lastly, the challenges of incorporating lignin, an industrial biowaste, have to be addressed so that lignin can be employed as a cathode material in Li-ion batteries.
After endosymbiosis, chloroplasts lost most of their genome. Many former endosymbiotic genes are now nucleus-encoded, and their products are re-imported post-translationally. Consequently, photosynthetic complexes are built of nucleus- and plastid-encoded subunits in a well-defined stoichiometry. In Chlamydomonas, the translation of chloroplast-encoded photosynthetic core subunits is feedback-regulated by the assembly state of the complexes they reside in. This process is called Control by Epistasy of Synthesis (CES) and enables the efficient production of photosynthetic core subunits in stoichiometric amounts. In chloroplasts of embryophytes, only Rubisco subunits have been shown to be feedback-regulated. This raises the question of whether there is additional CES regulation in embryophytes. To answer this question broadly, I analyzed chloroplast gene expression in tobacco and Arabidopsis mutants with assembly defects for each photosynthetic complex. My results (i) confirmed CES within Rubisco and hint at potential translational feedback regulation in the synthesis of (ii) cytochrome b6f (Cyt b6f) and (iii) photosystem II (PSII) subunits. This work suggests a CES network in PSII that links psbD, psbA, psbB, psbE, and potentially psbH expression by a feedback mechanism that at least partially differs from that described in Chlamydomonas. Intriguingly, in the Cyt b6f complex, a positive feedback regulation that coordinates the synthesis of PetA and PetB was observed, which had not previously been reported in Chlamydomonas. No evidence for CES interactions was found in the expression of NDH and ATP synthase subunits of embryophytes. Altogether, this work provides solid evidence for novel assembly-dependent feedback regulation mechanisms controlling the expression of photosynthetic genes in chloroplasts of embryophytes.
In order to obtain a comprehensive inventory of the rbcL and psbA RNA-binding proteomes (including factors that regulate their expression, especially factors involved in CES), an aptamer-based affinity purification method was adapted and refined for the specific purification of these transcripts from tobacco chloroplasts. To this end, three different aptamers (MS2, Sephadex, and streptavidin binding) were stably introduced into the 3' UTRs of psbA and rbcL by chloroplast transformation. RNA aptamer-based purification and subsequent chip analysis (RAP Chip) demonstrated a strong enrichment of psbA and rbcL transcripts, and ongoing mass spectrometry analyses are expected to reveal potential regulatory factors. Furthermore, the suborganellar localization of MS2-tagged psbA and rbcL transcripts was analyzed by a combined affinity, immunology, and electron microscopy approach, demonstrating the potential of aptamer tags for examining the spatial distribution of chloroplast transcripts.
The plasmasphere is a dynamic region of cold, dense plasma surrounding the Earth. Its shape and size are highly susceptible to variations in solar and geomagnetic conditions. Having an accurate model of plasma density in the plasmasphere is important for GNSS navigation and for predicting hazardous effects of radiation in space on spacecraft. The distribution of cold plasma and its dynamic dependence on solar wind and geomagnetic conditions remain, however, poorly quantified. Existing empirical models of plasma density tend to be oversimplified as they are based on statistical averages over static parameters. Understanding the global dynamics of the plasmasphere using observations from space remains a challenge, as existing density measurements are sparse and limited to locations where satellites can provide in-situ observations. In this dissertation, we demonstrate how such sparse electron density measurements can be used to reconstruct the global electron density distribution in the plasmasphere and capture its dynamic dependence on solar wind and geomagnetic conditions.
First, we develop an automated algorithm to determine the electron density from in-situ measurements of the electric field on the Van Allen Probes spacecraft. In particular, we design a neural network to infer the upper hybrid resonance frequency from the dynamic spectrograms obtained with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite, which is then used to calculate the electron number density. The developed Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm is applied to more than four years of EMFISIS measurements to produce the publicly available electron density data set.
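Once the upper hybrid resonance frequency is identified, the electron number density follows from the standard cold-plasma relation f_uh² = f_pe² + f_ce². A sketch in SI units (the example frequencies are illustrative, not actual EMFISIS values):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
M_E = 9.1093837015e-31    # electron mass (kg)
Q_E = 1.602176634e-19     # elementary charge (C)

def electron_density(f_uh, f_ce):
    """Electron number density (m^-3) from the upper hybrid resonance
    frequency f_uh and the electron cyclotron frequency f_ce (both in Hz),
    via the cold-plasma relation f_uh^2 = f_pe^2 + f_ce^2."""
    f_pe_sq = f_uh**2 - f_ce**2   # squared plasma frequency (Hz^2)
    return 4.0 * math.pi**2 * EPS0 * M_E * f_pe_sq / Q_E**2

# Illustrative frequencies (not actual EMFISIS measurements):
n_e = electron_density(f_uh=5.0e5, f_ce=1.0e5)
print(f"{n_e / 1e6:.0f} cm^-3")
```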
We utilize the obtained electron density data set to develop a new global model of plasma density by employing a neural network-based modeling approach. In addition to the location, the model takes the time history of geomagnetic indices as input and produces electron density in the equatorial plane as an output. It is extensively validated using in-situ density measurements from the Van Allen Probes mission, and also by comparing the predicted global evolution of the plasmasphere with the global IMAGE EUV images of He+ distribution. The model successfully reproduces erosion of the plasmasphere on the night side as well as plume formation and evolution, and agrees well with the data.
The performance of neural networks strongly depends on the availability of training data, which is limited during intervals of high geomagnetic activity. In order to provide reliable density predictions during such intervals, we can employ physics-based modeling. We develop a new approach for optimally combining the neural network- and physics-based models of the plasmasphere by means of data assimilation. The developed approach utilizes advantages of both neural network- and physics-based modeling and produces reliable global plasma density reconstructions for quiet, disturbed, and extreme geomagnetic conditions.
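The essence of combining two imperfect density estimates can be illustrated by inverse-variance weighting, the simplest scalar analogue of a data-assimilation analysis step. This is a toy sketch, not the scheme developed in the dissertation, and all numbers are made up:

```python
def blend(x_nn, var_nn, x_phys, var_phys):
    """Inverse-variance weighted combination of two density estimates;
    a toy scalar stand-in for a full data-assimilation analysis step."""
    w_nn, w_phys = 1.0 / var_nn, 1.0 / var_phys
    x = (w_nn * x_nn + w_phys * x_phys) / (w_nn + w_phys)
    return x, 1.0 / (w_nn + w_phys)   # blended estimate and its variance

# During quiet times the neural network estimate is well constrained (small
# variance) and dominates; during storms the physics-based model takes over.
x, var = blend(x_nn=100.0, var_nn=4.0, x_phys=140.0, var_phys=16.0)
print(x, var)   # 108.0 3.2
```

Note that the blended variance is always smaller than either input variance, which is the basic benefit of combining independent estimates.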
Finally, we extend the developed machine learning-based tools and apply them to another important problem in the field of space weather, the prediction of the geomagnetic index Kp. The Kp index is one of the most widely used indicators for space weather alerts and serves as input to various models, such as for the thermosphere, the radiation belts and the plasmasphere. It is therefore crucial to predict the Kp index accurately. Previous work in this area has mostly employed artificial neural networks to nowcast and make short-term predictions of Kp, basing their inferences on the recent history of Kp and solar wind measurements at L1. We analyze how the performance of neural networks compares to other machine learning algorithms for nowcasting and forecasting Kp for up to 12 hours ahead. Additionally, we investigate several machine learning and information theory methods for selecting the optimal inputs to a predictive model of Kp. The developed tools for feature selection can also be applied to other problems in space physics in order to reduce the input dimensionality and identify the most important drivers.
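One of the information-theoretic criteria usable for input selection, mutual information, can be estimated from empirical frequencies. A minimal sketch on synthetic discrete data (not real Kp or solar wind measurements):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete sequences,
    estimated from empirical joint frequencies."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * n * n / (px[x] * py[y]) == p(x,y) / (p(x) p(y))
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Toy illustration: a copied signal carries full information about the
# target, while a constant signal carries none.
kp = [0, 1, 2, 3] * 25
informative = kp[:]          # identical to the target
uninformative = [0] * 100    # constant, hence zero information
print(mutual_information(informative, kp))     # 2.0
print(mutual_information(uninformative, kp))   # 0.0
```

Ranking candidate inputs by such a score (or by model-based importance measures) is one way to reduce input dimensionality before training a predictive model.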
Research outlined in this dissertation clearly demonstrates that machine learning tools can be used to develop empirical models from sparse data and also can be used to understand the underlying physical processes. Combining machine learning, physics-based modeling and data assimilation allows us to develop novel methods benefiting from these different approaches.
Today's focus on the 1930s as a time of radical politics paving the way for the apocalypse of the Second World War ignores the complexity of the decade's cultural responses, especially those by British women writers who highlighted gender issues within their contemporary political climate. The decade's literature is often understood to capture the political unrest, either narrating people's chaotic movement or their paralysed shock. This book argues that 1930s novels collapse the distinction between movement and standstill and calls this phenomenon Dynamic Stasis. This Dynamic Stasis thematically and structurally informs the novels of Nancy Mitford, Stevie Smith, Rosamond Lehmann and Jean Rhys. By disrupting the oft-repeated cliché of the 1930s as the age of political extremes, gender politics and negotiations of femininity can emerge from the discursive periphery. This book therefore corrects a persistent gender blind spot, opening up a (re)consideration of authors who have been overlooked in literary criticism of the 1930s to this day.
Advanced hybrid materials are recognized as one of the most significant enablers of new technologies, which holds true especially in the quest for sustainable energy sources and energy production schemes (e.g., semiconductor-based photocatalytic materials). Usually, a single component is far from meeting all the demands of these advanced applications. Hybrid materials are composed of at least two components, commonly an inorganic and an organic material combined on the molecular level, which feature novel properties exceeding the sum of the individual parts and might be the milestones of next-generation applications. This dissertation aims to provide novel combinations of the metal-free semiconductor graphitic carbon nitride (g-C3N4) with polymers to obtain materials with advanced properties and applications. Visible light constitutes the core of the present work, as it is the only energy source utilized either in synthesis or in the application process. On the application side, two different hybrids of g-C3N4 and polymers were thoroughly elucidated, i.e., their design and construction as well as their potential application in photocatalysis. Novel soft 3D liquid objects were formed via charge-interaction-driven interfacial jamming between polyelectrolytes in an aqueous environment and colloidal dispersions of g-C3N4 in edible sunflower oil. As such, stable liquid objects could be molded into specific shapes and utilized for the photodegradation of organic dyes in water. Furthermore, the grafting of polymers onto g-C3N4 was investigated. Allyl-end-functionalized polymers were grafted onto g-C3N4 by a photoinitiated process to yield g-C3N4 with versatile and improved properties, e.g. advanced dispersibility enabling processing via spin coating.
As g-C3N4 produces radicals under visible light irradiation, which is of significant interest for polymer science, g-C3N4-containing polymer latex and macrogel beads (MGBs) were synthesized by emulsion photopolymerization and inverse suspension photopolymerization, respectively. A well-controlled emulsion photopolymerization process via g-C3N4 initiation was designed, which enables the synthesis of well-defined and cross-linked polymer particles. Furthermore, the polymerization process was investigated thoroughly, indicating an ad-layer polymerization in the early stages of the process. The utilization of functionalized g-C3N4 allowed the polymerization of various monomer types. Moreover, g-C3N4 was utilized as a photoinitiator in hydrogel MGB formation. The properties of the formed MGBs could be tailored via process design, e.g. stirring rate, cross-linker content and g-C3N4 content. Finally, MGBs were introduced as photocatalysts for wastewater remediation, i.e. the degradation of Rhodamine B in aqueous solution was studied. The present thesis thereby builds a bridge between g-C3N4 and polymers and provides strategies for hybrid material formation. Furthermore, several potential applications are revealed, with significant implications for photocatalysis, polymerization processes and polymer materials.
Bilder des Glaubens
(2020)
Media decisively shape the public image of religion in modern society, which is why the churches sought media channels of proclamation early on. Television, as the most important mass medium of the second half of the 20th century, played a key role in this. Entertainment series produced by the churches reached large audiences; at the same time, West German television developed into a field of experimentation for new practices of religious observance. Alternative forms of faith gained attention, such as non-Christian world religions like Islam, believers at the margins of the churches such as the Evangelicals, groups disparaged as "sects" such as Jehovah's Witnesses, but also spiritual phenomena such as fortune-telling, occultism and the "New Age". Ronald Funke analyzes the images of faith shown on television in the Federal Republic of Germany between the 1950s and 1980s and shows how their content was negotiated through close cooperation but also through conflict.
Grenzen des Organisierbaren
(2020)
Anyone interested in the societal influence of organizational sociology on the practice of organizing will find the evidence sobering. Companies and public administrations draw far more on current management trends than on organizational-sociological knowledge. One could lament this finding and set it aside as flawed reception on the part of practitioners. Alternatively, one could discuss what the discipline itself contributes to this reception. Such a discussion almost inevitably leads onto difficult terrain. On the one hand, sociology, precisely when it turns its gaze to the study of companies or administrations, cannot deliver the positive answers that practitioners expect. On the other hand, organizational sociology in particular competes directly with neighboring disciplines such as business administration and organizational psychology, which have demonstrated the practical receptivity of their knowledge in recent years. Expectations regarding the practical applicability of scientific findings have risen accordingly. A sociology that locates its analytical power in critical distance may view this with skepticism. The question to be answered is therefore what the practical relevance of a science of the second look at organizations can concretely look like. This is the project of the present dissertation. The contributions collected in this cumulative dissertation all understand themselves as explorations and trials of the practical relevance of organizational sociology, using current management questions in companies. The thesis is that this practical relevance can unfold only as critique. Such critique can take two basic forms: as structural critique, it refers to concrete organizations, their specific internal logics, and their structural entanglements.
For the individual case, it describes the functions and consequences of expectation structures, which can then be generalized or typified, for example through case comparison. Organizational-sociological structural critique can thus be realized both as a comparative, practice-sensitive research approach and as the basis of sociologically oriented consulting. As schema critique, it targets truncated conceptions of organizing, such as those found in management fashions. The cumulative dissertation rests on five contributions that explore concrete manifestations of both forms of critique. The first contribution, "Datafizierung und Organisation," shows what schema critique of neighboring disciplines can look like by discussing organization as a blind spot of digitalization research and identifying points of connection for interdisciplinary research. The contribution thereby offers a systematic approach to the organizational implications of digitalization. Beyond enriching digitalization research, the argument can also be instructive for practice, for instance by problematizing the inflated rationalization expectations in the management discourse on digitalization, or the systematic neglect of the informalities that digital infrastructures give rise to. The second contribution, "Führung als erfolgreiche Einflussnahme in kritischen Momenten," offers a reinterpretation of the popular management concept of leadership by way of schema critique. It thereby contributes in several respects to a practice-relevant redefinition of leadership. For managers, it makes possible the insight that they can concentrate their leadership tasks on critical moments, and it postulates a departure from the heroic image of the permanently leading figure.
This reinterpretation can also be a relief for managers in organizations, as it points to the connection between an organization's constitution and the opportunities for leadership, thereby opening up design options beyond leadership and personnel development. For organizational research, the contribution provides a theoretically integrated concept of leadership that defines leadership both organizationally and situationally. It thus exemplifies an organizational-sociological schema critique that reinterprets established management concepts. The third contribution criticizes the concept of transformational leadership as a management fashion and shows how the leadership model it contains shifts organizational problems onto organizational members (here: managers) by constructing moral categories. On the one hand, it presents an organizational-sociological critique of the popular management concept of transformational leadership. On the other hand, it uses systems-theoretical concepts such as elementary behaviors, morality, and role separation to demonstrate that organizational-sociological thinking can enrich the management discourse by uncovering truncations and simplifications and providing alternative approaches to analysis and design. This can find an audience in the practice discourse as well, since it may be assumed that the salvation promises of off-the-shelf solutions come with disappointments for which organizational sociology can supply explanations. The possibilities and limits of structural critique are discussed in the last two contributions. The potential of structural critique for sociologically oriented consulting of organizations is explored in the contribution "Die schwierige Liaison von Organisationssoziologie und Praxisbezug am Beispiel der Beratung." Starting from the theory-practice complex, it examines what sociological practice orientation can look like in the field of consulting.
To this end, the contribution systematizes organizational-sociological approaches to consulting and outlines what a genuinely sociological consulting approach could look like. The final contribution presents the outline of a methodology of structure-critical research and illustrates it with a completed research project on management fashions. Using research conducted in a manufacturing company, it shows what structure-critical research can look like in concrete terms. Such research faces three challenges in the research process: gaining high-quality access to the field, developing a research question instructive for both research and practice, and mirroring the results back into the field. The contribution presents the outline of a methodology of structure-critical organizational research that can be specified, in substantive, temporal, and social terms, along the three moments described: field access, initial research question, and mirroring back of results.
TrainTrap
(2020)
The Willmore functional maps an immersed surface in a Riemannian manifold to its total squared mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
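For orientation, the two model energies mentioned here can be written down explicitly; up to normalization conventions, and in the Riemannian (time-symmetric) setting, they take the standard forms

```latex
% Willmore energy of a closed immersed surface \Sigma
W(\Sigma) = \int_\Sigma H^2 \, d\mu ,

% Hawking energy, a typical generalized Willmore functional
m_H(\Sigma) = \sqrt{\frac{|\Sigma|}{16\pi}}
              \left( 1 - \frac{1}{16\pi} \int_\Sigma H^2 \, d\mu \right),
```

where H is the mean curvature, dμ the induced area measure, and |Σ| the area of Σ. Bending energies for thin membranes are likewise integrals of curvature polynomials, e.g. ∫_Σ (H − H₀)² dμ with a spontaneous curvature H₀.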
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth away from finitely many points. These and the following results build on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It needs to be mentioned that our results stand in contrast to previous expansions of the Hawking energy. Those expansions, however, are obtained on spheres along the light cone at a given point. At present it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
In view of ongoing armed conflicts, refusal of military service and its consequences are increasingly the subject of asylum applications. The author examines the requirements for recognizing conscientious objectors as refugees on the basis of various groups of cases, drawing on numerous decisions of Anglo-American, European, and German courts. In doing so, she addresses the tension between military service as a fundamental duty and the right to conscientious objection. She also examines the relationship of refugee law to international humanitarian law and international criminal law, thus embedding refugee law in other branches of international law.
Error correction in coding theory is concerned with detecting and correcting errors in the transmission and storage of messages.
To this end, the message is encoded into a codeword by adding redundant information.
Such coding schemes are subject to different requirements, for example the maximum number of errors to be corrected and the speed of correction.
A common scheme is the BCH code, which is used industrially for codes correcting up to four errors. A drawback of these codes is that the hardware latency for computing the error positions grows with the code length.
This dissertation presents a new coding scheme in which a long code is built from shorter BCH codes through a special arrangement. This arrangement is governed by a further special code, an LDPC code, which is designed for fast error detection.
For this purpose, a new construction method is presented that yields a code of arbitrary length with an arbitrary, prescribable number of correctable errors. In addition to fast error detection, the construction also yields a simple and fast derivation of a procedure for encoding the message into a codeword. To date, this is unique in the literature on LDPC codes.
Building on the construction of an LDPC code, a method is presented for combining it with a BCH code, arranging the BCH code in blocks. Besides the general description of this code, a concrete code for 2-bit error correction is described. It consists of two parts, which are described and compared in several variants. For certain lengths of the BCH code, a problem in the correction is identified that follows an algebraic rule.
The BCH code is described very generally, but under certain conditions there exists a BCH code in the narrow sense, which defines the standard. This dissertation modifies the BCH code in the narrow sense so that the algebraic problem in 2-bit error correction no longer arises when it is combined with the LDPC code. It is shown that after the modification the new code is still a BCH code in the general sense, able to correct 2-bit errors and detect 3-bit errors. For the hardware implementation of the error correction it is further shown that the latency of the modified code is lower than that of the BCH code, with further potential for improvement.
The final chapter shows that this modified code of arbitrary length is suitable for combination with the LDPC code, so that the scheme is not only more widely usable in terms of length but, thanks to faster decoding, also has further advantages over a BCH code in the narrow sense.
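Both BCH and LDPC decoding rest on computing a syndrome with a parity-check matrix. As a minimal, hypothetical illustration of that principle (a Hamming(7,4) code, not the dissertation's construction), a single flipped bit can be located directly from its syndrome:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is the binary
# representation of j+1, so a nonzero syndrome directly names the error position.
H = np.array([[int(b) for b in f"{i:03b}"] for i in range(1, 8)]).T

def correct(word):
    """Return a copy of the 7-bit word with up to one flipped bit corrected."""
    syndrome = H @ word % 2
    pos = int("".join(map(str, syndrome)), 2)   # 0 means no error detected
    word = word.copy()
    if pos:
        word[pos - 1] ^= 1
    return word

codeword = np.array([1, 0, 1, 1, 0, 1, 0])      # satisfies H @ c = 0 (mod 2)
received = codeword.copy()
received[2] ^= 1                                # flip one bit in transit
print(correct(received))                        # recovers the original codeword
```

The dissertation's point about latency is visible even here: the syndrome is one sparse matrix-vector product, whereas locating multiple errors in a long BCH code requires solving for the roots of an error-locator polynomial.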
In the present study, we employ angle-resolved photoemission spectroscopy (ARPES) to study the electronic structure of topological states of matter, in particular the so-called topological crystalline insulators (TCIs) Pb1-xSnxSe and Pb1-xSnxTe and the Mn-doped Z2 topological insulators (TIs) Bi2Te3 and Bi2Se3. The Z2 class of strong topological insulators is protected by time-reversal symmetry and is characterized by an odd number of metallic Dirac-type surface states in the surface Brillouin zone. The topological crystalline insulators, on the other hand, are protected by individual crystal symmetries and exhibit an even number of Dirac cones.
The topological properties of the lead tin chalcogenide topological crystalline insulators can be tuned by temperature and composition. Here, we demonstrate that Bi doping of Pb1-xSnxSe(111) epilayers induces a quantum phase transition from a topological crystalline insulator to a Z2 topological insulator. This occurs because Bi doping lifts the fourfold valley degeneracy in the bulk. As a consequence, a gap appears at Γ̄, while the three Dirac cones at the M̄ points of the surface Brillouin zone remain intact. We interpret this new phase transition as being caused by a lattice distortion. Our findings extend the topological phase diagram enormously and make strong topological insulators switchable by distortions or electric fields. In contrast, bulk Bi doping of epitaxial Pb1-xSnxTe(111) films induces a giant Rashba splitting at the surface that can be tuned by the doping level. Tight-binding calculations identify its origin as Fermi-level pinning by trap states at the surface.
Magnetically doped topological insulators enable the quantum anomalous Hall effect (QAHE), which provides quantized edge states for lossless charge-transport applications. The edge states are hosted by a magnetic energy gap at the Dirac point, which had not been experimentally observed to date. Our low-temperature ARPES studies unambiguously reveal the magnetic gap of Mn-doped Bi2Te3. Our analysis shows a gap below the Curie temperature Tc that is five times larger than theoretically predicted. We attribute this enhancement to a remarkable structural modification induced by the Mn doping: instead of a disordered impurity system, a self-organized alternating sequence of MnBi2Te4 septuple and Bi2Te3 quintuple layers is formed. This enhances the wave-function overlap and gives rise to the large magnetic gap. Mn-doped Bi2Se3 forms a similar heterostructure, but only a nonmagnetic gap is observed in this system. This correlates with the difference in magnetic anisotropy due to the much larger spin-orbit interaction in Bi2Te3 compared to Bi2Se3. These findings provide crucial insights for pushing lossless transport in topological insulators towards room-temperature applications.
Sie senden den Wandel
(2020)
It is well known what an important role media play in the consolidation, or indeed the transformation, of a society. But what happens when media act from below, and do so in large numbers, involving many societal actors and addressing a broad audience? In Argentina, a fascinating radio landscape has emerged that works collectively, participatively, and progressively: the community radios. Viviana Uriona takes us on an ethnographic journey through the history of these stations, analyzes how they work, and searches for the reasons for their success. By the end, one question no longer remains open: could what happened there succeed here in the same way?
Using individual-based modeling to understand grassland diversity and resilience in the Anthropocene
(2020)
The world’s grassland systems are increasingly threatened by anthropogenic change. Because these systems are susceptible to a variety of stressors, from land-use intensification to climate change, understanding the mechanisms that maintain their biodiversity and stability, and how these mechanisms may shift under human-mediated disturbance, is critical for successfully navigating the next century. Within this dissertation, I use an individual-based and spatially explicit model of grassland community assembly (IBC-grass) to examine several processes thought to be key to understanding grassland biodiversity and stability and how they change under stress. In the first chapter of my thesis, I examine the conditions under which intraspecific trait variation influences the diversity of simulated grassland communities. In the second and third chapters, I shift focus towards understanding how belowground herbivores influence the stability of these grassland systems under either a disturbance that causes increased stochastic plant mortality, or eutrophication.
Intraspecific trait variation (ITV), or variation in trait values between individuals of the same species, is fundamental to the structure of ecological communities. However, because it has historically been difficult to incorporate into theoretical and statistical models, it has remained largely overlooked in community-level analyses. This reality is quickly shifting, however, as a growing body of research suggests that it may compose a sizeable proportion of the total variation within an ecological community and that it may play a critical role in determining whether species coexist. Despite this increasing awareness that ITV matters, there is little consensus on the magnitude and direction of its influence. Therefore, to better understand how ITV changes the assembly of grassland communities, in the first chapter of my thesis I incorporate it into an established, individual-based grassland community model, simulating both pairwise invasion experiments and the assembly of communities with varying initial diversities. By varying the amount of ITV in these species’ functional traits, I examine the magnitude and direction of ITV’s influence on pairwise invasibility and community coexistence. During pairwise invasion, ITV enables the weakest species to invade the competitively superior species more frequently; however, this influence does not generally scale to the community level. Indeed, unless the community has low alpha- and beta-diversity, ITV does little to bolster diversity. In those situations, since the trait axis is sparsely filled, competitively inferior species may experience less competition, and ITV may therefore buffer their persistence and abundance for some time.
In the second and third chapters of my thesis, I model how one of the most ubiquitous trophic interactions within grasslands, belowground herbivory, influences their diversity and stability. Until recently, the fundamental difficulty of studying a process within the soil has left belowground herbivory “out of sight, out of mind.” This dilemma presents an opportunity for simulation models to explore how this understudied process may alter community dynamics. In the second chapter of my thesis, I implement belowground herbivory, represented by the weekly removal of plant biomass, into IBC-grass. Then, by introducing a pulse disturbance, modelled as the stochastic mortality of some percentage of the plant community, I observe how the presence of belowground herbivores influences the resistance and recovery of Shannon diversity in these communities. I find that high-resource, low-diversity communities are significantly more destabilized by the presence of belowground herbivores after disturbance. Depending on the timing of the disturbance and on whether the grassland’s seed bank persists for more than one season, the impact of the disturbance, and subsequently the influence of the herbivores, can be greatly reduced. However, because human-mediated eutrophication increases the amount of resources in the soil, thus pressuring grassland systems, our results suggest that the influence of these herbivores may become more important over time.
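The stability measures used here can be made concrete. As a small sketch (function names and the resistance index are my own illustration, not IBC-grass code), Shannon diversity is computed from species abundances, and resistance compares post- to pre-disturbance diversity:

```python
import math

def shannon(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

def resistance(pre, post):
    """Ratio of post- to pre-disturbance diversity (1 = fully resistant)."""
    return shannon(post) / shannon(pre)

even = [25, 25, 25, 25]      # four equally abundant species
skewed = [70, 20, 8, 2]      # same richness, but dominated by one species
print(shannon(even))         # ln(4) ≈ 1.386, the maximum for four species
print(resistance(even, skewed))
```

A disturbance that pushes the community from the even toward the skewed abundance distribution lowers the index below 1, which is the pattern tracked after the pulse disturbance in chapter two.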
In the third chapter of my thesis, I delve further into the mechanistic underpinnings of belowground herbivory’s influence on grassland diversity by replicating an empirical mesocosm experiment that crosses the presence of herbivores above- and belowground with eutrophication. I show that while aboveground herbivory, as predicted by theory and frequently observed in experiments, mitigates the impact of eutrophication on species diversity, belowground herbivores counterintuitively reduce biodiversity. Indeed, this influence interacts positively with the eutrophication process, amplifying its negative impact on diversity. I discovered the mechanism underlying this surprising pattern to be that the herbivores, by consuming roots, increase the ratio of root resources to root biomass. Because root competition is often symmetric, herbivory fails to mitigate any asymmetries in the plants’ competitive dynamics. However, since the remaining roots have more abundant access to resources, the plants’ competition shifts aboveground, towards asymmetric competition for light. This leads the community towards a low-diversity state composed of mostly high-performance, large plant species. We further argue that this pattern will emerge unless the plants’ root competition is asymmetric, in which case, like its counterpart aboveground, belowground herbivory may buffer diversity by reducing this asymmetry between the competitively superior and inferior plants.
I conclude my dissertation by discussing the implications of my research on the state of the art in intraspecific trait variation and belowground herbivory, with emphasis on the necessity of more diverse theory development in the study of these fundamental interactions. My results suggest that the influence of these processes on the biodiversity and stability of grassland systems is underappreciated and multidimensional, and must be thoroughly explored if researchers wish to predict how the world’s grasslands will respond to anthropogenic change. Further, should researchers myopically focus on understanding central ecological interactions through only mathematically tractable analyses, they may miss entire suites of potential coexistence mechanisms that can increase the coviability of species, potentially leading to coexistence over ecologically-significant timespans. Individual-based modelling, therefore, with its focus on individual interactions, will prove a critical tool in the coming decades for understanding how local interactions scale to larger contexts, and how these interactions shape ecological communities and further predicting how these systems will change under human-mediated stress.
Research regards teacher professional development programs as a promising way to support teachers in meeting the demands placed on them (Darling-Hammond, Hyler & Gardner, 2017). Across various studies, a teacher's participation in professional development has been shown to be positively related to the development of his or her students (Kalinowski, Egert, Gronostaj & Vock, 2020; Yoon, Duncan, Lee, Scarloss & Shapley, 2007). While part of the research focuses on the effectiveness of professional development offerings, other studies concentrate on how these offerings are used. The present thesis builds on previous research on teacher professional development and, within the framework of the Comprehensive Lifelong Learning Participation Model (Boeren, Nicaise & Baert, 2010), seeks to consider aspects of the use and the supply of teacher professional development in Germany more closely together. This is a multilevel model for explaining participation in continuing education that takes into account different groups of actors (e.g., course participants, professional development institutes) on the demand and supply sides and focuses on their interdependence. Against this background, the present thesis examines, in four studies, the characteristics of the participants and of the professional development on offer. It also investigates the predictors of teachers' participation in professional development on the demand and supply sides. Study 1 first focuses on the demand side of the Comprehensive Lifelong Learning Participation Model (Boeren et al., 2010) and considers the group of non-participants, which has received little attention in research.
Building on findings from general continuing-education research, this study examines teachers' barriers to participation in professional development and how these are related to teachers' professional development activity. The research questions are answered on the basis of a secondary analysis of data from the IQB National Assessment Study 2012 (Pant et al., 2013), which served as the basis for a group comparison of participants and non-participants and a factor-analytic examination of the reported barriers to participation. Study 2 takes up the question of which teachers make intensive use of professional development offerings. Its starting point is the observation of a Matthew effect in general continuing-education research: those who already have favorable starting conditions before a course, for instance a higher level of competence, participate more in job-related learning opportunities than those who participate less or not at all. Following these findings, Study 2 discusses various aspects of teacher quality and uses bivariate correlation analyses to examine the relationships between teachers' prerequisites and their use of professional development. The study takes into account the Comprehensive Lifelong Learning Participation Model introduced by Boeren et al. (2010) by drawing on findings from effectiveness and supply research and considering differential effects depending on the characteristics of the courses (subject-specific vs. non-subject-specific). The analyses are based on a secondary analysis of data from the COACTIV research project (Kunter, Baumert & Blum, 2011). Study 3 likewise centers on the interaction of the demand and supply sides.
Whereas Study 2 focused on conceptual characteristics of professional development, Study 3 focuses on the structural characteristic of time. Drawing on empirical findings, time for (job-related) learning is first established as a basic precondition for participation in professional development. The study then asks which temporal characteristics the professional development offerings for teachers exhibit and how these characteristics relate to teachers' use of the offerings. To answer the research questions, a program analysis and polynomial regressions are carried out. The underlying data come from the professional development offerings for the state of Brandenburg in the academic year 2016/2017, as recorded in the electronic database. Study 4, finally, focuses on the group of teacher educators and thus on the supply side of the Comprehensive Lifelong Learning Participation Model (Boeren et al., 2010). Following theoretical work on professional identity, it asks how teacher educators perceive their tasks and how this perception relates to other aspects of their professional identity and to the design of their courses. For this purpose, data from a written survey of teacher educators conducted in 2019 were first examined factor-analytically and then analyzed using bivariate correlation analyses. The central results of the thesis are finally summarized and discussed. Overall, they suggest that the current professional development system in Germany does not appear suited to reaching all teachers with high-quality offerings in a way that allows them to work on their weaknesses.
The findings further show that teacher educators represent a possible lever for changing parts of the professional development offerings. They thus provide a basis for future research and point to possible implications for professional development practice and education policy.
Small moonlets or moons embedded in dense planetary rings create S-shaped density modulations called propellers if their masses are smaller than a certain threshold; alternatively, they create a circumferential gap in the disk if the embedded body’s mass exceeds this threshold (Spahn and Sremčević, 2000). The gravitational perturber scatters the ring particles, depletes the disk’s density, and, thus, clears a gap, whereas the counteracting viscous diffusion of the ring material has the tendency to close the created gap, thereby forming a propeller. Propeller objects were predicted by Spahn and Sremčević (2000) and Sremčević et al. (2002) and were later discovered by the Cassini space probe (Tiscareno et al., 2006, Sremčević et al., 2007, Tiscareno et al., 2008, and Tiscareno et al., 2010). The ring moons Pan and Daphnis are massive enough to maintain the circumferential Encke and Keeler gaps in Saturn’s A ring and were detected by Showalter (1991) and Porco (2005) in Voyager and Cassini images, respectively. In this thesis, a nonlinear axisymmetric diffusion model is developed to describe the radial density profiles of circumferential gaps in planetary rings created by embedded moons (Grätz et al., 2018). The model accounts for the gravitational scattering of the ring particles by the embedded moon and for the counteracting viscous diffusion of the ring matter back into the gap. With test-particle simulations it is shown that the scattering of the ring particles passing the moon is larger for small impact parameters than estimated by Goldreich and Tremaine (1980). This is especially significant for the modeling of the Keeler gap. The model is applied to the Encke and Keeler gaps with the aim of estimating the shear viscosity of the ring in their vicinities.
In addition, the model is used to analyze whether tiny icy moons, with dimensions below Cassini's resolution limit, could cause the poorly understood gap structure of the C ring and the Cassini Division. One of the most intriguing facets of Saturn's rings is the extremely sharp edges of the Encke and Keeler gaps: UVIS scans of the gap edges show that the optical depth drops from order unity to zero over a range of far less than 100 m, a spatial scale comparable to the ring's vertical extent. This occurs even though the range over which a moon transfers angular momentum onto the ring material is much larger. Borderies et al. (1982, 1989) have shown that this striking feature is likely related to a local reversal of the usually outward-directed viscous transport of angular momentum in strongly perturbed regions. We have revised the Borderies et al. (1989) model, using a granular-flow model to define the shear and bulk viscosities ν and ζ, in order to incorporate the angular-momentum flux reversal into the axisymmetric diffusion model for circumferential gaps presented in this thesis (Grätz et al., 2019). The sharp Encke and Keeler gap edges are modeled, and conclusions regarding the shear and bulk viscosities of the ring are discussed. Finally, we explore whether the radial density profile of the central and outer A ring, recently measured by Tiscareno and Harris (2018) at the highest resolution to date, and in particular the sharp outer edge of the A ring, can be modeled consistently from the balance between gravitational scattering by several outer moons and the mass and momentum transport. To this end, the developed model is extended to account for the inward drifts caused by multiple discrete and overlapping resonances with several outer satellites and is then used to hydrodynamically simulate the normalized surface mass density profile of the A ring. This section of the thesis is based on studies by Tajeddine et al. (2017a), who recently discussed the common misconception that the 7:6 resonance with Janus alone maintains the outer A ring edge, showing that the combined effect of several resonances with several outer moons is required to confine the A ring as observed by the Cassini spacecraft.
The Earth's inner magnetosphere is a highly dynamic system, driven mostly by the external solar wind forcing exerted upon the magnetic field of our planet. Disturbances in the solar wind, such as coronal mass ejections and co-rotating interaction regions, cause geomagnetic storms, which lead to prominent changes in the charged particle populations of the inner magnetosphere - the plasmasphere, ring current, and radiation belts. Satellites operating in regions of elevated energetic and relativistic electron fluxes can be damaged by deep dielectric or surface charging during severe space weather events. Predicting the dynamics of the charged particles and mitigating their effects on infrastructure is of particular importance due to our increasing reliance on space technologies.
The dynamics of particles in the plasmasphere, ring current, and radiation belts are strongly coupled by means of collisions and collisionless interactions with electromagnetic fields induced by the motion of charged particles. Multidimensional numerical models simplify the treatment of transport, acceleration, and loss processes of these particles, and allow us to predict how the near-Earth space environment responds to solar storms. The models inevitably rely on a number of simplifications and assumptions that affect model accuracy and complicate the interpretation of the results. In this dissertation, we quantify the processes that control electron dynamics in the inner magnetosphere, paying particular attention to the uncertainties of the employed numerical codes and tools.
We use a set of convenient analytical solutions of advection and diffusion equations to test the accuracy and stability of the four-dimensional Versatile Electron Radiation Belt (VERB-4D) code. We show that the numerical schemes implemented in the code converge to the analytical solutions and that the VERB-4D code behaves stably independent of the chosen time step. The order of the numerical scheme for the convection equation is shown to affect the results of ring current and radiation belt simulations, and it is crucially important to use high-order numerical schemes to decrease numerical errors in the model.
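The verification strategy described above, comparing a numerical solver against a known analytical solution and checking that the error shrinks as the grid is refined, can be sketched in generic form. The snippet below is not the VERB-4D code: it applies a simple explicit finite-difference scheme to the one-dimensional diffusion equation and checks it against the spreading-Gaussian solution, purely as an illustration of this testing methodology.

```python
import numpy as np

def analytic(x, t, D=1.0):
    # Spreading-Gaussian solution of u_t = D * u_xx
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

def ftcs_diffusion(nx, D=1.0, t0=0.1, t1=0.2, L=10.0):
    """Advance the analytic profile from t0 to t1 with an explicit
    forward-time, centered-space scheme; return x and u(x, t1)."""
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / D              # below the stability limit dx^2 / (2 D)
    u = analytic(x, t0, D)
    t = t0
    while t < t1 - 1e-12:
        step = min(dt, t1 - t)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + step * D * lap        # boundaries stay ~0 (Gaussian tails)
        t += step
    return x, u

def max_error(nx):
    x, u = ftcs_diffusion(nx)
    return np.max(np.abs(u - analytic(x, 0.2)))

# Refining the grid should shrink the error against the analytical solution.
coarse, fine = max_error(101), max_error(201)
```

Doubling the resolution reduces the maximum error, which is the same convergence check, in miniature, that is applied to the VERB-4D schemes.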
Using the thoroughly tested VERB-4D code, we model the dynamics of ring current electrons during the 17 March 2013 storm. The discrepancies between the model and observations above 4.5 Earth radii can be explained by uncertainties in the outer boundary conditions. The simulation results indicate that the electrons were transported from geostationary orbit towards the Earth by the global-scale electric and magnetic fields.
We investigate how the simulation results depend on the input models and parameters. The model is shown to be particularly sensitive to the global electric field and to electron lifetimes below 4.5 Earth radii. The effects of radial diffusion and subauroral polarization streams are also quantified.
We developed a data-assimilative code that blends together a convection model of energetic electron transport and loss and Van Allen Probes satellite data by means of the Kalman filter. We show that the Kalman filter can correct model uncertainties in the convection electric field, electron lifetimes, and boundary conditions. It is also demonstrated how the innovation vector - the difference between observations and model prediction - can be used to identify physical processes missing in the model of energetic electron dynamics.
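The innovation-based diagnostic described above can be illustrated with a deliberately simplified scalar example. This is not the thesis code, and the decay rate, source term, and noise levels are invented for illustration: a Kalman filter whose model only knows exponential decay assimilates observations of a system that also contains a source, and the systematically positive innovations flag the missing process.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One forecast/analysis cycle of a scalar Kalman filter.
    Returns the analysis state, its variance, and the innovation."""
    x_f = F * x                 # forecast with the (imperfect) model
    P_f = F * P * F + Q
    d = z - H * x_f             # innovation: observation minus prediction
    S = H * P_f * H + R
    K = P_f * H / S             # Kalman gain
    return x_f + K * d, (1.0 - K * H) * P_f, d

rng = np.random.default_rng(0)
F, Q, H, R = np.exp(-0.1), 0.09, 1.0, 0.04   # model: pure exponential decay
source = 0.5                                  # physics the model does not know
x_true, x, P = 4.0, 4.0, 1.0
innovations, errors = [], []
for _ in range(200):
    x_true = F * x_true + source              # truth: decay plus a source
    z = x_true + rng.normal(0.0, np.sqrt(R))  # noisy observation
    x, P, d = kalman_step(x, P, z, F, Q, H, R)
    innovations.append(d)
    errors.append(abs(x - x_true))

mean_innovation = float(np.mean(innovations))
```

A free-running model would decay to zero while the truth settles near source / (1 - F); the filter keeps tracking the truth, but the persistently positive mean innovation reveals that a process is missing from the model, which is exactly the kind of diagnostic described above.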
We compute radial profiles of the phase space density of ultrarelativistic electrons using Van Allen Probes measurements. We analyze the shape of the profiles during geomagnetically quiet and disturbed times and show that the formation of new local minima in the radial profiles coincides with ground observations of electromagnetic ion-cyclotron (EMIC) waves. This correlation indicates that EMIC waves are responsible for the loss of ultrarelativistic electrons from the heart of the outer radiation belt into the Earth's atmosphere.
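Detecting such local minima amounts to a simple operation on a radial profile. The sketch below is generic code, not the analysis pipeline of the thesis, and the sample profile values are invented: it finds the interior local minima of a phase space density profile sampled on a grid of L-shells.

```python
import numpy as np

def local_minima(profile):
    """Indices of strict interior local minima in a 1-D profile."""
    p = np.asarray(profile, dtype=float)
    return [i for i in range(1, len(p) - 1)
            if p[i] < p[i - 1] and p[i] < p[i + 1]]

# Hypothetical radial profile of phase space density vs. L-shell:
# a dropout near the heart of the outer belt produces a local minimum.
L_shell = np.linspace(3.0, 6.0, 7)
psd = np.array([1.0, 2.5, 4.0, 1.5, 3.0, 2.0, 0.5])
dips = local_minima(psd)          # minimum at index 3, i.e. L = 4.5
```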
In the last decade, photovoltaic research has been transformed by the arrival of metal halide perovskites. The introduction of this class of materials into academic research on renewable energy shifted the focus of a large number of research groups and institutions. The attractiveness of halide perovskites lies particularly in their rapidly rising efficiencies and their relatively simple and cheap fabrication methods. The latter, in particular, allowed this research to develop quickly and simultaneously in many universities and institutes around the world. The outcome has been a fast and beneficial increase in knowledge, with a consequent remarkable improvement of this new technology. On the other hand, the enormous amount of research has produced an immense and perpetually growing body of scientific literature. Halide perovskite solar cells now compete effectively with other established photovoltaic technologies in terms of power conversion efficiency and production cost. Despite this tremendous progress, a thorough understanding of the energy losses in these systems is of imperative importance to unlock the full thermodynamic potential of the material. This thesis focuses on understanding the non-radiative recombination processes in the neat perovskite and in complete devices. Specifically, photoluminescence quantum yield (PLQY) measurements were applied to multilayer stacks and cells under different illumination conditions to accurately determine the quasi-Fermi level splitting (QFLS) in the absorber and compare it with the external open-circuit voltage (V_OC) of the device. Combined with drift-diffusion simulations, this approach allowed us not only to pinpoint the sites of predominant recombination but also to investigate the dynamics of the underlying processes.
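The link between PLQY and QFLS rests on a standard detailed-balance relation: the non-radiative loss of the quasi-Fermi level splitting relative to the radiative limit is kT ln(PLQY). A minimal numerical sketch of this textbook relation (not a result or dataset of the thesis):

```python
import numpy as np

KT = 0.0257  # thermal energy k_B * T at ~298 K, in eV

def qfls_nonradiative_loss(plqy):
    """Non-radiative QFLS loss (eV) below the radiative limit:
    Delta_QFLS = -kT * ln(PLQY), from detailed balance."""
    return -KT * np.log(plqy)

# A PLQY of 1% corresponds to roughly 0.12 eV of non-radiative loss;
# a PLQY of ~2% corresponds to roughly 0.1 eV.
loss_1pct = qfls_nonradiative_loss(0.01)
```

This relation is what lets a purely optical measurement (PLQY) be converted into an internal voltage (QFLS) that can be compared with the externally measured V_OC.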
As such, the internal and external ideality factors, associated with the QFLS and the V_OC respectively, are studied with the aim of understanding the type of recombination processes taking place in the multilayered architecture of the device. Our findings highlight the breakdown of the equality between QFLS and V_OC in the case of strong interface recombination, as well as the detrimental effect of all commonly used transport layers in terms of V_OC losses. In this regard, we show how, in most perovskite solar cells, different recombination processes can affect the internal QFLS and the external V_OC, and that interface recombination dictates the V_OC losses. This line of argument allowed us to rationalize that, in our devices, the external ideality factor is completely dominated by interface recombination and that this process alone can be responsible for ideality factors between 1 and 2, as typically observed in perovskite solar cells. Importantly, our studies demonstrate how strong interface recombination can lower the ideality factor towards 1, a value often misinterpreted as purely radiative second-order recombination. As such, a comprehensive understanding of the recombination loss mechanisms currently limiting device performance was achieved. In order to reach the full thermodynamic potential of the perovskite absorber, the interfaces with both the electron and hole transport layers (ETL/HTL) must be properly addressed and improved. From here, the second part of the research work is devoted to reducing the interfacial non-radiative energy losses by optimizing the structure and energetics of the relevant interfaces in our solar cell devices, with the aim of bringing their quasi-Fermi level splitting closer to its radiative limit. To this end, the interfaces have been carefully addressed and optimized with different methodologies.
First, a small amount of Sr is added to the perovskite precursor solution, which effectively reduces surface and interface recombination. In this case, devices with V_OC up to 1.23 V were achieved, and the energy losses were reduced to as little as 100 meV from the radiative limit of the material. Through a combination of different methods, we showed that these improvements are related to a strong n-type surface doping, which repels holes in the perovskite from the surface and from the interface with the ETL. Second, a more general device improvement was achieved by depositing a defect-passivating poly(ionic liquid) layer on top of the perovskite absorber. The resulting devices featured a concomitant improvement of the V_OC and fill factor, up to 1.17 V and 83% respectively, reaching efficiencies as high as 21.4%. Moreover, the protecting polymer layer helped to enhance the stability of the devices under prolonged maximum-power-point tracking. Lastly, PLQY measurements are used to investigate the recombination mechanisms in halide-segregated large-bandgap perovskite materials. Here, our findings show how a few iodide-rich low-energy domains act as highly efficient radiative recombination centers, capable of generating PLQY values up to 25%. Coupling these results with a detailed microscopic cathodoluminescence analysis and with absorption profiles allowed us to demonstrate that the emission from these low-energy domains is due to the trapping of carriers photogenerated in the Br-rich high-energy domains. The strong implications of this phenomenon are discussed in relation to the failure of the optical reciprocity between absorption and emission and the consequent applicability of the Shockley-Queisser theory for studying the energy losses in such systems.
In conclusion, the identification and quantification of the non-radiative QFLS and V_OC losses provided a foundational understanding of the fundamental limitations of perovskite solar cells and served as guidance for the future optimization and development of this technology. Furthermore, by providing practical examples of solar cell improvements, we corroborated the correctness of our fundamental understanding and proposed new methodologies to be explored further by new generations of scientists.
A major reason for the persistent economic gap between East and West Germany lies in the smaller weight of technology-intensive industries and, related to this, in the absence of regional growth centers ("clusters"). The Économie des conventions (EC), a paradigm of economics and economic sociology that emerged in France in the 1980s, enables the analysis of firms and markets and was used in this dissertation to identify different "quality conventions" in a comparative analysis of the West and East German machine-building industry. Based on studies by the Institut für Wirtschaftsforschung in Halle (IWH), the state government of Baden-Württemberg, and the Verband des Deutschen Maschinen- und Anlagenbaus e. V. (VDMA), the field was narrowed down to five East German and eight Baden-Württemberg planning regions, and a qualitative sampling plan was developed. Empirically, 21 semi-structured expert interviews were conducted with managing directors in the machine and plant engineering industry; these were analyzed using qualitative content analysis and then condensed into ideal types with reference to the EC.
The East-West comparison showed that the most promising path for East German firms is to develop into system suppliers that coordinate projects (network convention) in order to command higher prices on (international) markets (market convention). This goes hand in hand with building and maintaining networks (network convention); the challenge, however, is to cooperate trustfully even with competitors (market convention). Furthermore, it became apparent that in publicly funded collaborative projects ("cluster policy"), the market convention must likewise not be dominant, or must at least enter into compromises with other conventions, so that these networks do not fall apart once the funding period ends. These findings are consistent with work in economic geography and related disciplines showing that only a configuration of specific regional institutions enables technological learning, and that, in particular, the simultaneous presence of competition and cooperation principles ("coopetition") at the same stage of the value chain enables firms to bring new competitive products to market. A theoretically grounded cluster policy should therefore promote not only networking activities (network convention) but also competition (market convention) within the cluster. In the conclusion, the instruments mentioned in the literature for further developing existing cluster structures were linked to the reconstructed typology of quality conventions.
Abstract. Catalysis is one of the most effective tools for the highly efficient assembly of complex molecular structures. Nevertheless, it is still dominated by transition-metal-based catalysts and is typically an energy-consuming process. Photocatalysis utilizing solar energy is therefore one of the appealing approaches to overcoming these problems. Carbon nitrides, a group of organic polymeric semiconductors and a great alternative to classic transition-metal-based photocatalysts, have already shown their efficiency in water splitting, CO2 reduction, and the degradation of organic pollutants. However, as shown in recent years, these materials also have great potential for use in the functionalization of complex organic molecules for synthetic needs.
This work addresses the challenge of developing an efficient system for heterogeneous organic photocatalysis, employing cheap and environmentally benign photocatalysts: carbon nitrides. Herein, fundamental properties of semiconductors are studied from the standpoint of organic chemistry; the inherent properties of carbon nitrides, such as their ability to accumulate electrons, are investigated in depth, and their effect on the reaction outcome is established. An understanding of the electron-charging processes thus allowed for the synthesis of otherwise hardly accessible 1,3-diazetidines by tetramerization of benzylamines. Furthermore, the high electron capacity of potassium poly(heptazine imide) (K-PHI) made possible a multi-electron reduction of aromatic nitro compounds to bare or formylated anilines. Additionally, two deep eutectic solvents (DES) were designed as sustainable reaction media and reducing agents for this reaction. The high oxidation ability of the carbon nitride K-PHI is then employed in the challenging oxidation of halide anions (Cl−, Br−) to accomplish electrophilic substitution on the aromatic ring; the possibility of using NaCl solution (a seawater mimic) for the chlorination of electron-rich arenes was demonstrated. Finally, light itself is used as a tool in the chromoselective photocatalytic oxidation of aromatic thiols and thioacetates to three different compounds, using UV, blue, and red LEDs.
All in all, this work enhances the understanding of the mechanisms of heterogeneous photocatalysis in synthetic organic reactions and is therefore a step forward towards sustainable methods of synthesis in organic chemistry.