This thesis examines vacation photographs on Facebook and describes the socio-technical media practices that unfold around these photographs within the social media platform. Photographic practices are shaped by active actions and social uses. Vacation photographs, for example, contribute to the structuring of travel routes and expectations, as genre-specific motifs and framings are reproduced and repeated with the help of media. Practices of showing, sharing, and communicating are also integrated into Facebook's user interfaces through social plug-ins (like/share buttons) and tagging functions. In this way, user activities and technical processes become interlinked. Using the example of the automatic aggregation of vacation photographs on geotag pages, the thesis shows that social tagging contributes to the emergence and negotiation of geographic spaces and notions of place. Through the technical structuring of photographs on tagging pages, genre-specific motifs, photographic trends, and aesthetics become particularly visible. Their visualization, however, is also co-determined by the algorithmic prioritization of individual content. Vacation photographs are thereby used for photographic profiling, as they enable the algorithmic capture and analysis of user information. The thesis shows that the use of image-recognition methods and photographic data analyses contributes to optimized information extraction and to a standardization of photographs.
Feminist Solidarities after Modulation produces an intersectional analysis of transnational feminist movements and their contemporary digital frameworks of identity and solidarity. Engaging media theory, critical race theory, and Black feminist theory, as well as contemporary feminist movements, this book argues that digital feminist interventions map themselves onto and make use of the multiplicity and ambiguity of digital spaces to question presentist and fixed notions of the internet as a white space and of technologies in general as objective or universal. Understanding these frameworks as colonial constructions of the human, identity is traced to a socio-material condition that emerges with the modernity/colonialism binary. In the colonial moment, race and gender become the reasons for, as well as the effects of, technologies of identification, and thus need to be understood as and through technologies. What Deleuze has called modulation is not merely a present modality of control, but is placed into a longer genealogy of imperial division, which stands in opposition to feminist, queer, and anti-racist activism that insists on non-modular solidarities across seeming difference. At its heart, Feminist Solidarities after Modulation provides an analysis of contemporary digital feminist solidarities, which not only work at revealing the material histories and affective "leakages" of modular governance, but also challenge them by concentrating on forms of political togetherness that exceed a reductive or essentialist understanding of identity, solidarity, and difference.
It has frequently been observed that single emotional events are not only more efficiently processed, but also better remembered, and form longer-lasting memory traces than neutral material. However, when emotional information is perceived as part of a complex event, such as in the context of or in relation to other events and/or source details, the modulatory effects of emotion are less clear. The present work aims to investigate how emotional contextual source information modulates the initial encoding and subsequent long-term retrieval of associated neutral material (item memory) and contextual source details (contextual source memory). To this end, a two-task experiment was used, consisting of an incidental encoding task in which neutral objects were displayed over different contextual background scenes varying in emotional content (unpleasant, pleasant, and neutral), and a delayed retrieval task (1 week), in which previously encoded objects and new ones were presented. In a series of studies, behavioral indices (Studies 2, 3, and 5), event-related potentials (ERPs; Studies 1-4), and functional magnetic resonance imaging (Study 5) were used to investigate whether emotional contexts can rapidly tune the visual processing of associated neutral information (Study 1) and modulate long-term item memory (Study 2), how different recognition memory processes (familiarity vs. recollection) contribute to these emotion effects on item and contextual source memory (Study 3), whether the emotional effects on item memory can also be observed during spontaneous retrieval (Study 4), and which brain regions underpin the modulatory effects of emotional contexts on item and contextual source memory (Study 5). In Study 1, it was observed that emotional contexts, by means of emotional associative learning, can rapidly alter the processing of associated neutral information. Neutral items associated with emotional contexts (i.e.
emotional associates), compared to neutral ones, showed enhanced perceptual and more elaborate processing after a single pairing, as indexed by larger amplitudes in the P100 and LPP components, respectively. Study 2 showed that emotional contexts produce longer-lasting memory effects, as evidenced by better item memory performance and larger ERP Old/New differences for emotional associates. In Study 3, a mnemonic differentiation between item and contextual source memory was observed, which was modulated by emotion. Item memory was driven by familiarity, independently of the emotional contexts during encoding, whereas contextual source memory was driven by recollection and was better for emotional material. As in Study 2, enhancing effects of emotional contexts on item memory were observed in ERPs associated with recollection processes. Likewise, for contextual source memory, a pronounced recollection-related ERP enhancement was observed exclusively for emotional contexts. Study 4 showed that the long-term recollection enhancement of emotional contexts on item memory can be observed even when retrieval is not explicitly attempted, as measured with ERPs, suggesting that the emotion-enhancing effects on memory are not tied to the task performed during recognition, but to the motivational relevance of the triggering event. In Study 5, it was observed that the enhancing effects of emotional contexts on item and contextual source memory involve stronger engagement of brain regions associated with memory recollection, including areas of the medial temporal lobe, posterior parietal cortex, and prefrontal cortex.
Taken together, these findings suggest that emotional contexts rapidly modulate the initial processing of associated neutral information as well as the subsequent long-term item and contextual source memories. The enhanced memory effects of emotional contexts are strongly supported by recollection rather than familiarity processes, and are triggered both when retrieval is explicitly attempted and when it occurs spontaneously. These results provide new insights into the modulatory role of emotional information on the visual processing and long-term recognition memory of complex events. The present findings are integrated into current theoretical models, and future research directions are discussed.
In "Von der Studienaufnahme bis zum Studienabbruch" ("From University Enrollment to Dropout"), the author sets out to embed the phenomenon of university dropout in a theory of action and to examine it longitudinally, thereby making an important contribution to higher-education research. The overarching research questions are: "How can the decision process leading to dropout be described in action-theoretical terms?" and "To what extent can dropout be explained empirically by changes in key influencing factors, such as the student frame?" To answer these two questions, the author draws on the integrative theory of action and the model of frame selection by Hartmut Esser and Clemens Kroneberg. On this basis, the theoretical part of the thesis develops a model of reframing during the initial study phase that describes the process of deciding to drop out or to stay. The choice of theory is justified by the current state of research on educational decisions in the sociology of education. Within the derived model, dropping out constitutes a further general educational decision that can be caused by a reframing of the students' interpretive frame (the so-called frame) during the initial study phase. The model focuses on the decision process, describing how and through which factors the original frame, with which the decision to enroll was made, changes during the initial study phase and consequently leads to a repeated educational decision. With the empirical part of the thesis, the author pursues two goals. First, the theoretical assumptions of the reframing model are tested using panel surveys of students.
Second, the author uses these data to examine the dropout decision longitudinally for the German context for the first time. The empirical investigations comprise four sub-studies and follow the theoretical model chronologically. Sub-study I first tests the measurement quality of the operationalization of the student frame and then examines determinants of the initial student frame at enrollment. Sub-study II investigates the match between initial expectations and the study reality encountered during the initial study phase, explaining the match by individual and institutional factors. Sub-study III focuses on the extent and determinants of changes in the frame during the initial study phase. Finally, sub-study IV offers, for the first time in German higher-education research, a longitudinal perspective focused on the temporal change of the student frame to explain dropout and persistence. In the conclusion, the author discusses implications for the further development of the proposed theoretical model, for future dropout research, and for university practice in designing the initial study phase and preventing dropout. For university practice in particular, she identifies five major themes: the extent of students' unmet expectations; the heterogeneity of student frames at enrollment; the large share of students who have changed their degree program at least once; the phase-specific change of the student frame and its importance for stabilizing or destabilizing the study decision; and the importance of the content-related and qualitative design of the initial study phase and of the degree program.
Finally, the author also argues for a destigmatization of dropping out.
From self-help books and nootropics, to self-tracking and home health tests, to tinkering with technology and biological particles – biohacking brings biology, medicine, and the material foundation of life into the sphere of »do-it-yourself«. This trend has the potential to fundamentally change people's relationship with their bodies and biology, but it also creates new cultural narratives of responsibility, authority, and differentiation. Covering a broad range of examples, this book explores practices and representations of biohacking in popular culture, discussing their ambiguous position between empowerment and requirement, promise and prescription.
What happens when distinct linguistic consciousnesses, beyond being separated by era, geographic origin, social differentiation, or the various dimensions of language, also belong to different semiotic domains? This is what happens every time we communicate online: digital interaction is the hybrid communicative setting par excellence, in which the mixing of different languages is overlaid by the mixing of different codes. Starting from the premise that new expressive needs and new communicative situations drive linguistic innovation, it seems worthwhile to take into account the prominence of the visual, and more generally multimodal, repertoire in the spontaneous use of new media, and to observe how the particular meaning-making strategies currently at work can no longer do without these additional dimensions. Their weight in the digital use of language should be acknowledged in order to face the resulting innovations without prejudice. A centrally important role in approaching verbal language on the Internet is played by the indexical function of language, which, combined with a shared reference archive of world knowledge, triggers a new type of inference in the recipient. Conversation via social networks in fact allows actions that are not necessarily present in face-to-face exchange but are peculiar to Facebook, Twitter, G+, Instagram, Flickr, and social networks in general: sharing multimedia material of various kinds, retrieving the messages related to a specific topic, and the possibility of glossing it. Multimedia material thus becomes at once an integral part of communication and an expressive modality, the focus of the discourse and a shared metaphorical language.
This research investigates how different, and apparently distant, research fields can interact productively with the scientific landscape of the language, image, and communication sciences, arriving at an updated model of the linguistic hybridization that characterizes online communication.
The goal of regenerative medicine is to guide biological systems towards natural healing outcomes using a combination of niche-specific cells, bioactive molecules, and biomaterials. In this regard, mimicking the extracellular matrix (ECM) surrounding cells and tissues in vivo is an effective strategy to modulate cell behavior. Cellular function and phenotype are directed by the biochemical and biophysical signals present in the complex 3D network of ECMs, composed mainly of glycoproteins and hydrophilic proteoglycans. While cellular modulation in response to biophysical cues emulating ECM features has been investigated widely, the influence of the biochemical display of ECM glycoproteins mimicking their presentation in vivo is not well characterized. It remains a significant challenge to build artificial biointerfaces using ECM glycoproteins that precisely match their presentation in nature in terms of morphology, orientation, and conformation. This challenge becomes clear when one understands how ECM glycoproteins self-assemble in the body. Glycoproteins produced inside the cell are secreted into the extracellular space, where they are bound to the cell membrane or to other glycoproteins by specific interactions. This leads to elevated local concentration and 2D spatial confinement, resulting in self-assembly through the reciprocal interactions arising from the molecular complementarity encoded in the glycoprotein domains. In this thesis, the air-water (A-W) interface is presented as a suitable platform where self-assembly parameters of ECM glycoproteins such as pH, temperature, and ionic strength can be controlled to simulate in vivo conditions (Langmuir technique), resulting in the formation of glycoprotein layers with defined characteristics.
The layer can be further compressed with surface barriers to enhance glycoprotein-glycoprotein contacts, and defined layers of glycoproteins can be immobilized on substrates by a horizontal lift-and-touch procedure, the Langmuir-Schäfer (LS) method. Here, the benefit of the Langmuir and LS methods in achieving ECM glycoprotein biointerfaces with controlled network morphology and ligand density on substrates is highlighted and contrasted with the commonly used (glyco)protein solution deposition (SO) method. In general, (glyco)protein layer formation by SO is rather uncontrolled and strongly influenced by (glyco)protein-substrate interactions, resulting in multilayers and aggregates on substrates, while the LS method yields (glyco)protein layers with a more homogeneous presentation. To achieve the goal of realizing defined ECM layers on substrates, ECM glycoproteins with the ability to self-assemble were selected: collagen IV (Col-IV) and fibronectin (FN). A highly packed FN layer with uniform presentation of ligands was deposited on polydimethylsiloxane (PDMS) by the LS method, while a heterogeneous layer with prominent visible aggregates was formed on PDMS by SO. Mesenchymal stem cells (MSCs) on PDMS equipped with FN by LS exhibited more homogeneous and elevated vinculin expression and weaker stress fiber formation than on PDMS equipped with FN by SO; these divergent responses could be attributed to the differences in glycoprotein presentation at the interface. Col-IV is a scaffolding component of the specialized ECM called the basement membrane (BM) and has the propensity to form 2D networks by cell-associated self-polymerization. Col-IV behaves as a thin, disordered network at the A-W interface. As the Col-IV layer was compressed at the A-W interface using trough barriers, there was negligible change in thickness (layer thickness ~ 50 nm) or in the orientation of molecules.
The pre-formed organization of Col-IV was transferred by the LS method in a controlled fashion onto substrates meeting the wettability criterion (CA ≤ 80°). MSC adhesion (24 h) on PET substrates coated with Col-IV LS films at surface pressures of 10, 15, and 20 mN·m⁻¹ was (12269.0 ± 5856.4) cells for LS10, (16744.2 ± 1280.1) cells for LS15, and (19688.3 ± 1934.0) cells for LS20, respectively. Remarkably, by selecting the surface areal density of Col-IV on the Langmuir trough, the number of MSCs adhering to PET increases linearly with the Col-IV ligand density. Further, FN has the ability to self-stabilize and form 2D networks (even without compression) while preserving its native β-sheet structure at the A-W interface on a defined subphase (pH = 2). This makes it possible to form such layers in any vessel (even in standard six-well culture plates), and the cohesive FN layers can be deposited by LS transfer without the need for expensive Langmuir-Blodgett instrumentation. Multilayers of FN can be immobilized on substrates by this approach as easily as with the layer-by-layer method, without the need for a secondary adlayer or an activated bare substrate. Thus, this facile glycoprotein coating strategy is accessible to many researchers for realizing defined FN films on substrates for cell culture. In conclusion, the Langmuir and LS methods can create biomimetic glycoprotein biointerfaces on substrates while controlling aspects of presentation such as network morphology and ligand density. In the future, these methods can be used to produce artificial BM mimics and interstitial ECM mimics composed of more than one ECM glycoprotein layer on substrates, serving as artificial niches instructing stem cells for cell-replacement therapies.
The Southern Central Andes (33°-36°S) are an excellent natural laboratory for studying orogenic deformation processes, where boundary conditions such as the geometry of the subducted plate impose an important control on the evolution of the orogen. At the same time, the South American plate presents a series of heterogeneities that additionally control the mode of deformation. This thesis aims to test the control of this latter factor on the construction of the Cenozoic Andean orogenic system.
From the integration of surface and subsurface information in the southern area (34-36°S), the evolution of Andean deformation over the steeply dipping subduction segment was studied. A structural model was developed that evaluates the stress state from the Miocene to the present day and its influence on the migration of magmatic fluids and hydrocarbons. Based on these data, together with data generated by other researchers in the northern zone of the study area (33-34°S), geodynamic numerical modeling was performed to test the hypothesis that upper-plate heterogeneities play a decisive role in Andean evolution. Geodynamic codes (LAPEX-2D and ASPECT), which simulate the behavior of materials with elasto-visco-plastic rheologies under deformation, were used. The model results suggest that upper-plate contractional deformation is significantly controlled by the strength of the lithosphere, which is defined by the composition of the upper and lower crust and by the proportion of lithospheric mantle, which in turn is determined by previous tectonic events. In addition, previous regional tectono-magmatic events also defined the composition of the crust and its geometry, another factor that controls the localization of deformation. Accordingly, with a more felsic lower-crustal composition, deformation follows a pure-shear mode, while more mafic compositions induce a simple-shear deformation mode. It was also observed that the initial lithospheric thickness may fundamentally control the location of deformation, with zones of thin lithosphere being prone to concentrate it. Finally, it was found that an asymmetric lithosphere-asthenosphere boundary resulting from corner flow in the mantle wedge of the eastward-directed subduction zone tends to generate east-vergent detachments.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing on cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation, or "re-membrance", in repatriation, through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and as stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and of the chloroplast gene expression machinery, making most of them essential for the viability of the organism. The regulation of these genes is dominated by translational adjustments. The powerful technique of ribosome profiling has been used successfully to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastid translation and ribosomal pausing sites have been addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and in the light. Chloroplast isolation was found to be unsuitable for an unbiased analysis of translation in the chloroplast, but adequate for identifying potential co-translational import. Affinity purification was performed independently for the small and large ribosomal subunits. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study to use enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts in order to study ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, the contents of the photosynthetic complexes are adjusted for efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to this acclimation process has remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed on a genome-wide scale for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold in the high light from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light likewise led to increased translation of psbA only. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition were observed.
The political legacy of the Martinican poet, novelist and philosopher Édouard Glissant (1928–2011) is the subject of an ongoing debate among postcolonial literary scholars. Responding to an influential view shaping this debate, that Glissant's work can be categorised into an early political and a late apolitical phase, this dissertation claims that this division is based on a narrow conception of 'engaged political writing' that prevents a more comprehensive view of the changing political strategies Glissant pursued throughout his life from emerging. Proceeding from this conceptual basis, the dissertation is concerned with re-reading the dimensions of Glissant's work that have hitherto been dismissed as apolitical, literary or poetic, with the aim of conceptualising the politics of relation as an integral part of his overall poetic project. In methodological terms, the dissertation therefore proposes a relational reading of Glissant's life-work across literary genres and epochs, as well as across the conventional divisions between political thought, writing and activism. This perspective is informed by Glissant's philosophy of relation, and draws on a conception of political practice that includes both explicit engagements with established political systems and institutions and literary and cultural interventions geared towards their transformation and the creation of alternatives to them. Theoretically, the work thus combines a poststructuralist lens on the conceptual difference between 'politics' and 'the political' with arguments for an inherent political quality of literature, and with perspectives from the Afro-Caribbean radical tradition, in which writers and intellectuals have historically sought to combine discursive interventions with organisational action.
Applying this theoretical angle to the analysis of Glissant's politics of relation results in an interdisciplinary research framework designed to explore the synergies between postcolonial political and literary studies.
In order to comprehensively describe Glissant's politics of relation without recourse to evolutionary or digressive models, the concept of intellectual marronage is proposed as a framework to map the strategies making up Glissant's political archive. Drawing on a variety of historical, political-theoretical and literary sources, intellectual marronage is understood as a mode of radical resistance to the neocolonial subjugation for which the plantation system stands historically and metaphorically, as an inherently innovative political practice invested in the creation of communities marked by relational ontologies, and as a commitment to fostering an imagination of the world and the human that differs fundamentally from the Enlightenment paradigm. This specific conception of intellectual marronage forms the basis on which three key strategies that consistently shape Glissant's political practice are identified and mapped. They revolve around Glissant's engagement with history (chapter 2), his commitment to fostering an imagination of the Tout-Monde (whole-world) as a political point of reference (chapter 3), and the continuous exploration of alternative forms of community on the levels of the island, the archipelago and the Tout-Monde (chapter 4). Together these strategies constitute Glissant's personal politics of relation. Its abstract characteristics can be put into productive conversation with related theoretical traditions invested in exploring the political potentials of fugitivity (chapter 5), as well as with the work of other postcolonial actors whose holistic practice warrants description as a politics of relation (chapter 6).
Matthias Walden
(2020)
Matthias Walden (1927–1984) was one of the representatives of a political new beginning in journalism in Germany after 1945. At the core of his political thinking stood the defense of liberal democracy, whose ideational substance Walden saw endangered both by continuities of personnel between National Socialism and the Federal Republic of Germany and by the Neue Ostpolitik, which he perceived as ingratiation, and the social protest of the 1960s and 1970s.
As a prominent editorial writer, he became an intellectual driving force, above all for the Axel Springer publishing house. Walden was convinced that dictatorships and totalitarian designs for society only ever looked as if they were made for eternity. During the Cold War, it was precisely the inhumanity of the communist regimes that gave him the certainty that they would one day disappear.
With his intellectual biography of Matthias Walden, Nils Lange presents the first comprehensive study of this combative journalist, tracing both the political origins of Walden's thinking and its roots in the history of ideas.
This study examines the special public interest in criminal prosecution (besonderes öffentliches Interesse an der Strafverfolgung) in its entirety. The first part investigates the formal aspects surrounding the special public interest, centring on the particularly problematic questions of whether it constitutes a procedural requirement and whether it is subject to judicial review. It is shown that the special public interest must actually exist and that its existence must be fully reviewable by the court hearing the case. The second part concerns the substantive interpretation of the special public interest. After a brief survey of the current state of research, the study first discusses whether the special public interest is to be understood as an intensified form of the public interest, a view that is rejected. It then develops its own interpretative approach, which conceives of the special public interest as the result of a balancing of interests.
In Germany, the task of press self-regulation is performed by the German Press Council (Deutscher Presserat), which has faced continuous criticism since its foundation. This study asks whether the German Press Council's work to date meets the standards of successful self-regulation. Eleven proposed reforms are then examined for their legal feasibility and their effects on press oversight in Germany. In addition, the British model of press self-regulation is compared with the German one in order to identify differences and similarities and to derive suggestions and improvements for the future. Owing to the similar structures at their inception and the largely parallel existence of press councils in both countries, the British model is particularly well suited to a comparison with the German Press Council.
Die Auswirkungen der reformierten Psychotherapierichtlinie auf die ambulante Patientenversorgung (The effects of the reformed psychotherapy directive on outpatient care)
(2020)
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) by their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their functions belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions. Optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the redoxome in plants. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, candidates were confirmed on the single-protein level by a differential labelling approach: thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Amongst those were fructose-1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results showing redox-regulated proteins in Arabidopsis leaves, roots, mitochondria and specifically S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as proof of concept that the enrichment experiments are effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, as did csp41b. Yet in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
A distance education or e-learning platform should provide a virtual laboratory that lets participants gain hands-on experience and practise their skills remotely. This is especially true in cybersecurity e-learning, where participants need to be able to attack or defend IT systems. For hands-on exercises, the virtual laboratory environment must resemble the real operational environment, with each attacker or victim represented by a node, usually a virtual machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning because each VM needs a significant, fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability, i.e. the number of simultaneous users, can be increased by using the available resources more efficiently and by providing more resources.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first is to replace virtual machines (VMs) with containers wherever possible. The second is to share the load with users' on-premise machines, where a user's on-premise machine represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources: using public cloud services, and gathering resources from the crowd, which we refer to as a Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on VMs. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, or disintegrate them from, the CRVL system automatically. A team placement algorithm must also be investigated to optimize resource usage while giving the best service to users. Because the CRVL system does not manage the contributor's host machine, it must ensure that VM integration will not harm the contributor's system and that the training material is stored securely on the contributor's side, so that no one can take the training material away without permission. We investigate ways to handle these kinds of threats.
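As an illustration of the placement problem only (not the algorithm investigated in the thesis), a greedy first-fit-decreasing heuristic for assigning team resource demands to contributed hosts could be sketched as follows; all names, demands and capacities are hypothetical:

```python
def first_fit_decreasing(team_demands, hosts):
    # Hypothetical greedy placement: assign each team's resource demand
    # (e.g. RAM in GB) to the first contributed host with enough free
    # capacity, processing the largest teams first.
    free = dict(hosts)                        # host name -> free capacity
    placement = {}
    for team, demand in sorted(team_demands.items(),
                               key=lambda kv: kv[1], reverse=True):
        for host, cap in free.items():
            if cap >= demand:
                free[host] = cap - demand     # reserve capacity on this host
                placement[team] = host
                break
        else:
            placement[team] = None            # no contributed host can take this team
    return placement

demo = first_fit_decreasing({"team-a": 8, "team-b": 4, "team-c": 6},
                            {"host-1": 10, "host-2": 8})
print(demo)  # → {'team-a': 'host-1', 'team-c': 'host-2', 'team-b': None}
```

A real placement algorithm would additionally weigh latency to users and trust in the contributor, but the bin-packing core is the same.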
We propose three approaches to harden the VM against a malicious host admin. To verify the integrity of a VM before integrating it into the CRVL system, we propose a remote verification method that requires no additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, a host admin can access the VM's data in Random Access Memory (RAM) through live memory dumping or Spectre and Meltdown attacks. To make it harder for a malicious host admin to obtain sensitive data from RAM, we propose a method that continually moves the sensitive data around in RAM. We also propose monitoring the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. Our use case is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning, which serves as the basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
Subsea permafrost is perennially cryotic earth material that lies offshore. Most submarine permafrost is relict terrestrial permafrost beneath the Arctic shelf seas, was inundated after the last glaciation, and has been warming and thawing ever since. It is a reservoir and confining layer for gas hydrates and has the potential to release greenhouse gases and affect global climate change. Furthermore, subsea permafrost thaw destabilizes coastal infrastructure. While numerous studies focus on its distribution and rate of thaw over glacial timescales, these studies have not been brought together and examined in their entirety to assess rates of thaw beneath the Arctic Ocean. In addition, there is still a large gap in our understanding of sub-aquatic permafrost processes on finer spatial and temporal scales. The degradation rate of subsea permafrost is influenced by the initial conditions upon submergence. Terrestrial permafrost that has already undergone warming, partial thawing or loss of ground ice may react differently to inundation by seawater compared to previously undisturbed ice-rich permafrost. Heat conduction models are sufficient to model the thaw of thick subsea permafrost from the bottom, but few studies have included salt diffusion for top-down chemical degradation in shallow waters characterized by mean annual cryotic conditions on the seabed. Simulating salt transport is critical for assessing degradation rates for recently inundated permafrost, which may accelerate in response to warming shelf waters, a lengthening open water season, and faster coastal erosion rates. In the nearshore zone, degradation rates are also controlled by seasonal processes like bedfast ice, brine injection, seasonal freezing under floating ice conditions and warm freshwater discharge from large rivers. The interplay of all these variables is complex and needs further research. 
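The heat-conduction modelling referred to above can be sketched with a minimal explicit 1-D finite-difference step; grid spacing, diffusivity and boundary temperatures below are illustrative placeholders, and the thesis model additionally couples salt diffusion, phase change and seasonal seabed dynamics:

```python
# Toy 1-D heat conduction into cryotic sediment below an inundated seabed.
# All numbers are illustrative, not values from the study site.
alpha = 1e-6            # thermal diffusivity [m^2/s]
dz, dt = 0.5, 3600.0    # grid spacing [m], time step [s]
T = [-1.5] * 20         # initial sediment temperature profile [degC]
T[0] = 0.5              # fixed seabed boundary: bottom-water temperature

r = alpha * dt / dz**2  # explicit-scheme stability requires r <= 0.5
for _ in range(24 * 365):           # one model year of hourly steps
    Tn = T[:]
    for k in range(1, len(T) - 1):  # interior nodes; both boundaries fixed
        Tn[k] = T[k] + r * (T[k+1] - 2*T[k] + T[k-1])
    T = Tn

print(T[1] > -1.0)  # → True: warming has propagated below the seabed
```

In the thesis setting, a salt-diffusion equation of the same form would be advanced alongside temperature, with salinity depressing the freezing point.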
To fill this knowledge gap, this thesis investigates sub-aquatic permafrost along the southern coast of the Bykovsky Peninsula in eastern Siberia. Sediment cores and ground temperature profiles were collected at a freshwater thermokarst lake and two thermokarst lagoons in 2017. At this site, the coastline is retreating, and seawater is inundating various types of permafrost: sections of ice-rich Pleistocene permafrost (Yedoma) cliffs at the coastline alternate with lagoons and lower elevation previously thawed and refrozen permafrost basins (Alases). Electrical resistivity surveys with floating electrodes were carried out to map ice-bearing permafrost and taliks (unfrozen zones in the permafrost, usually formed beneath lakes) along the diverse coastline and in the lagoons. Combined with the borehole data, the electrical resistivity results permit estimation of contemporary ice-bearing permafrost characteristics, distribution, and occasionally, thickness. To conceptualize possible geomorphological and marine evolutionary pathways to the formation of the observed layering, numerical models were applied. The developed model incorporates salt diffusion and seasonal dynamics at the seabed, including bedfast ice. Even along coastlines with mean annual non-cryotic boundary conditions like the Bykovsky Peninsula, the modelling results show that salt diffusion minimizes seasonal freezing of the seabed, leading to faster degradation rates compared to models without salt diffusion. Seasonal processes are also important for thermokarst lake to lagoon transitions because lagoons can generate cold hypersaline conditions underneath the ice cover. My research suggests that ice-bearing permafrost can form in a coastal lagoon environment, even under floating ice. Alas basins, however, may degrade more than twice as fast as Yedoma permafrost in the first several decades of inundation. 
In addition to a lower ice content compared to Yedoma permafrost, Alas basins may be pre-conditioned with salt from adjacent lagoons. Considering the widespread distribution of thermokarst in the Arctic, its integration into geophysical models and offshore surveys is important to quantify and understand subsea permafrost degradation and aggradation. Through numerical modelling, fieldwork, and a circum-Arctic review of subsea permafrost literature, this thesis provides new insights into sub-aquatic permafrost evolution in saline coastal environments.
The present study analysed the direct relationship between a work-related social group work programme and occupational reintegration outcomes among rehabilitation patients facing particular occupational problems. It was funded by the Deutsche Rentenversicherung Bund as a research project from 1 January 2013 to 31 December 2015 and carried out at the Chair of Rehabilitation Sciences at the University of Potsdam.
The research question was: Can an intensive social-work group intervention during inpatient medical rehabilitation strengthen patients' social competencies and social support to such an extent that it yields long-term improvements in occupational reintegration compared with conventional treatment?
The study comprised a qualitative and a quantitative survey with an intervention in between. It included 352 patients aged between 18 and 65 with cardiovascular diagnoses, conditions frequently accompanied by complex problem situations and a poor socio-medical prognosis.
The group intervention was evaluated in a cluster-randomised controlled study design in order to provide empirical evidence of whether the intervention achieves greater effects than regular social-work treatment. The intervention groups took part in the group programme, while the control groups received regular social-work treatment.
In this sample, participation in the social-work group programme produced no demonstrable improvement in occupational reintegration, health-related work ability, quality of life, or social support. The return-to-work rate was 43.7%, and a quarter of the study group was unemployed after one year. The group intervention must therefore be regarded as equivalent to the conventional setting of social work.
The study concludes that social-work support for occupational reintegration after a cardiovascular illness should extend over a longer period, in particular through services close to patients' homes at a later stage, when their health is more stable. The surveys suggested that closer cooperation between social work and psychology could yield better outcomes. There were also indications of the influential role of relatives, who could support the reintegration process if involved in social counselling. The fit of the group interventions examined should be improved through targeted social diagnostics.
Die Ordnung der Religionen (The Order of Religions)
(2020)
The Roman nobleman and humanist Pietro Della Valle travelled through the Ottoman Empire, Persia and India from 1614 to 1626. In a time of upheaval, he sought new alliances for Rome: against the Reformers, he wanted to restore unity with the Oriental Christians; against the Ottomans, he sought an alliance with the Shiite Shah Abbas I. His travel account, the "Viaggi" (3 parts, 1650-63), documents his ambitions and contains extensive commentary on many religions of Asia that were at the centre of interest then, as they are today.
In the form of a history of concepts and ideas, I examine what role religion plays in the conflicts of the time as reflected in the "Viaggi", and how Della Valle engages with Asia's great religious diversity. What hopes and fears does he attach to the various religions? What strategies does he pursue with regard to them? Where does he draw boundaries, and where does he build bridges?
Gold at the nanoscale
(2020)
In this cumulative dissertation, I want to present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have emerged as promising components for light-based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more applications.
This work presents the articles I authored or co-authored in my time as a PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among other things, be utilised to enhance the spectroscopic footprint of molecules down to single-molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge that can be utilised as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum-induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
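The anticrossing mentioned above follows from the standard coupled-oscillator picture of plasmon–exciton hybridisation; a minimal sketch with illustrative energies (not values from the articles) shows the hybrid mode energies E± = (Ep+Ex)/2 ± sqrt(g² + ((Ep−Ex)/2)²) and the Rabi splitting 2g at zero detuning:

```python
import math

# Textbook two-coupled-oscillator model; all energies are illustrative.
E_exciton = 2.1   # eV, fixed exciton transition energy
g = 0.05          # eV, plasmon-exciton coupling strength

def hybrid_modes(E_plasmon):
    # Eigenenergies of the 2x2 coupling Hamiltonian:
    # mean energy +/- half the splitting.
    mean = 0.5 * (E_plasmon + E_exciton)
    half = math.hypot(g, 0.5 * (E_plasmon - E_exciton))
    return mean - half, mean + half

lo, hi = hybrid_modes(2.1)   # plasmon tuned to the exciton (zero detuning)
print(round(hi - lo, 3))     # → 0.1  (Rabi splitting = 2g)
```

Sweeping E_plasmon through E_exciton traces the two branches of the anticrossing; the minimum splitting 2g is what an experiment like the one in the articles extracts.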
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles since optical excitation always generates heat. This heat can induce a change in the optical properties, but also mechanical changes up to melting can occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles’ breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles’ temperature. Particle melting was investigated with surface enhanced Raman spectroscopy and SEM imaging demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. This way, also readers without specialist’s knowledge get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems’ permittivites are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
Recent advances in microscopy have led to an improved visualization of different cell processes. Yet, this also leads to a higher demand for tools which can process images in an automated and quantitative fashion. Here, we present two applications that were developed to quantify different processes in eukaryotic cells which rely on the organization and dynamics of the cytoskeleton. In plant cells, microtubules and actin filaments form the backbone of the cytoskeleton. These structures support cytoplasmic streaming, cell wall organization and trafficking of cellular material to and from the plasma membrane. To better understand the underlying mechanisms of cytoskeletal organization, dynamics and coordination, frameworks for their quantification are needed. While this is fairly well established for microtubules, the actin cytoskeleton has remained difficult to study due to its highly dynamic behaviour. One aim of this thesis was therefore to provide an automated framework to quantify and describe actin organization and dynamics. We used the framework to represent actin structures as networks and examined the transport efficiency in Arabidopsis thaliana hypocotyl cells. Furthermore, we applied the framework to determine the growth mode of cotton fibers and compared the actin organization in wild-type and mutant cells of rice. Finally, we developed a graphical user interface for easy usage. Microtubules and the actin cytoskeleton also play a major role in the morphogenesis of epidermal leaf pavement cells. These cells have highly complex and interdigitated shapes which are hard to describe in a quantitative way. While the relationship between microtubules, the actin cytoskeleton and shape formation is the object of many studies, it is still not clear whether and how the cytoskeletal components predefine indentations and protrusions in pavement cell shapes.
To understand the underlying cell processes which coordinate cell morphogenesis, a quantitative shape descriptor is needed. Therefore, the second aim of this thesis was the development of a network-based shape descriptor which captures global and local shape features, facilitates shape comparison and can be used to evaluate shape complexity. We demonstrated that our framework can be used to describe and compare shapes from various domains. In addition, we showed that the framework accurately detects local shape features of pavement cells and outperforms competing approaches. In the third part of the thesis, we extended the shape description framework to describe pavement cell shape features on the tissue level by proposing different network representations of the underlying imaging data.
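The thesis's descriptor is network-based; as a much simpler stand-in, the classic circularity measure 4πA/P² already illustrates how a single scalar can score pavement-cell-like shape complexity (toy contours below, not the actual framework or data):

```python
import math

def lobed_contour(n, lobes, amp):
    # Closed contour sampled at n points: amp=0 gives a circle,
    # amp>0 superimposes `lobes` sinusoidal protrusions as a crude
    # stand-in for an interdigitated pavement-cell outline.
    pts = []
    for k in range(n):
        th = 2 * math.pi * k / n
        r = 1.0 + amp * math.sin(lobes * th)
        pts.append((r * math.cos(th), r * math.sin(th)))
    return pts

def circularity(pts):
    # 4*pi*A/P^2: equals 1 for a circle and drops as the outline
    # becomes wigglier (shoelace area, polygonal perimeter).
    n = len(pts)
    area = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                         - pts[(i + 1) % n][0] * pts[i][1]
                         for i in range(n)))
    perim = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
    return 4 * math.pi * area / perim ** 2

print(round(circularity(lobed_contour(400, 0, 0.0)), 2))   # ≈ 1.0 (circle)
print(circularity(lobed_contour(400, 6, 0.45)) < 0.6)      # → True: lobed cell
```

A network-based descriptor like the one in the thesis goes further by localising which boundary regions contribute the complexity, which a single scalar cannot do.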
To uncover their aesthetic and structural similarities to television programming, Christian Richter analyses in detail the media presentations of Netflix and YouTube. The key terms "flow", "seriality", "liveness" and "address" serve as central points of orientation. Answers are provided by established television theories as well as by multifaceted and trivial examples, ranging from the ZDF-Fernsehgarten and old horror films, via the Super Bowl and lonely train rides through Norway, to BibisBeautyPalace and House of Cards. What emerges in the end is a state of TELEVISION that can be understood as a new version of it.
Research on novel and advanced biomaterials is an indispensable step towards their applications in desirable fields such as tissue engineering, regenerative medicine, cell culture, or biotechnology. The work presented here focuses on such a promising material: polyelectrolyte multilayer (PEM) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It serves as a mimic of the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make the HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessment of multi-fractional diffusion in multilayers (first part) and at control of local molecular transport into or from the multilayers by laser light trigger (second part).
The mechanism of loading and release is governed by the interaction of bioactives with the multilayer constituents and by the diffusion phenomenon overall. The diffusion of a molecule in HA/PLL multilayers shows multiple fractions with different diffusion rates. Approaches that can assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach for evaluating data acquired by the fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are first simulated and then compared with the acquired data, with the model parameters optimized until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are utilized as test samples validating the analysis results. The diffusion of the protein cytochrome c in HA/PLL multilayers is evaluated as well.
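The simulate-then-compare loop can be sketched in a few lines; the two-fraction recovery model, the grid search and all parameter values below are a toy illustration of the principle, not the tool's actual implementation:

```python
import math

def recovery(t, f1, tau1, f2, tau2):
    # Toy two-fraction FRAP recovery: each mobile fraction recovers
    # exponentially with its own characteristic time tau.
    return f1 * (1 - math.exp(-t / tau1)) + f2 * (1 - math.exp(-t / tau2))

# Synthetic "measured" curve: 60% fast fraction, 25% slow fraction.
times = [0.5 * i for i in range(1, 121)]
data = [recovery(t, 0.60, 2.0, 0.25, 30.0) for t in times]

# Simulate candidate scenarios on a coarse parameter grid and keep the
# one whose squared error against the data is smallest.
best, best_err = None, float("inf")
for f1 in [0.4, 0.5, 0.6, 0.7]:
    for tau1 in [1.0, 2.0, 4.0]:
        for f2 in [0.15, 0.25, 0.35]:
            for tau2 in [15.0, 30.0, 60.0]:
                err = sum((recovery(t, f1, tau1, f2, tau2) - d) ** 2
                          for t, d in zip(times, data))
                if err < best_err:
                    best, best_err = (f1, tau1, f2, tau2), err

print(best)  # → (0.6, 2.0, 0.25, 30.0): the generating parameters
```

The actual tool simulates full spatiotemporal recovery images rather than a single curve, which is what makes it suitable for multi-fractional diffusion in multilayers.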
This tool significantly broadens the possibilities of analysis of spatiotemporal FRAP data, which originate from multi-fractional diffusion, while striving to be widely applicable. This tool has the potential to elucidate the mechanisms of molecular transport and empower rational engineering of the drug release systems.
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. This release system comprises layers of various functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is formed by an HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls the local molecular transport through the PEM–solution interface. The possibility of stimulating the molecular transport by near-infrared (NIR) laser irradiation is explored.
Of several tested approaches for the deposition of hybrids onto the PEM surface, the drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while sustaining the sandwich's stability. Further, the gold nanorods were shown to effectively absorb light in the tissue- and cell-friendly NIR spectral region while transducing the energy of light into heat. The rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir in a few seconds with a partition constant of 2.4, while it is spontaneously released in a slower, sustained manner. Local laser irradiation of the sandwich containing fluorescein isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the well-known photoresponsivity of the hybrids in an innovative setting. The results of the research are a step towards a spatially controlled on-demand drug release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate the molecular dynamics in ECM and to foster engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to the cells.
Bank filtration is an effective water treatment technique and is widely adopted in Europe along major rivers. It is the process in which surface water penetrates the riverbed, flows through the aquifer, and is then extracted by near-bank production wells. As the water flows through this subsurface passage, its quality is improved by a series of beneficial processes. Long-term riverbank filtration also produces colmation layers on the riverbed. The colmation layer may act as a bioactive zone governed by biochemical and physical processes owing to its enrichment in microbes and organic matter. Its low permeability, however, may strongly limit surface water infiltration and lead to a decreasing recoverable ratio of the production wells. The removal of the colmation layer is therefore a trade-off between treatment capacity and treatment efficiency. The goal of this Ph.D. thesis is to examine the temporal and spatial change of water quality and quantity along the flow path of a hydrogeologically heterogeneous riverbank filtration site adjacent to an artificially reconstructed (bottom excavation and bank reconstruction) canal in Potsdam, Germany.
To quantify the changes in infiltration rate, travel time distribution, and the thermal field brought about by the canal reconstruction, a three-dimensional flow and heat transport model was created. This model has two scenarios: 1) with canal reconstruction, and 2) without canal reconstruction. Overall, the model calibration results for both water heads and temperatures matched those observed in the field study. Compared to the model without reconstruction, the reconstruction model led to more water being infiltrated into the aquifer on that section, on average 521 m³/d, which corresponded to around 9% of the total pumping rate. The subsurface travel-time distribution shifted substantially towards shorter travel times: flow paths with travel times <200 days increased by ~10% and those with <300 days by 15%. Furthermore, the thermal distribution in the aquifer showed that the seasonal variation in the reconstruction scenario reaches deeper and propagates further laterally.
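For orientation, the advective travel time along a single flow path follows the standard Darcy relation t = L·n_e/(K·i); the numbers below are illustrative placeholders, not values from the study site or the model:

```python
# Back-of-the-envelope advective travel time for one bank-filtration
# flow path. All parameter values are hypothetical.
L = 100.0    # flow-path length from riverbed to production well [m]
n_e = 0.25   # effective porosity [-]
K = 5e-4     # hydraulic conductivity [m/s]
i = 0.005    # hydraulic gradient [-]

v = K * i / n_e          # seepage (pore) velocity [m/s]
t_days = L / v / 86_400  # travel time [days]
print(round(t_days, 1))  # → 115.7
```

A 3-D transport model generalises this to a full distribution of travel times over many flow paths, which is what the <200-day and <300-day fractions above summarise.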
Scatter plots of δ¹⁸O versus δ²H allowed the infiltrated river water to be differentiated from water flowing in the deep aquifer, which may contain remnant landside groundwater from further north. In addition, the increase in the river water contribution due to decolmation could be shown with a Piper plot. Geological heterogeneity caused substantial spatial differences in redox zonation among flow paths, both horizontally and vertically. According to the Wilcoxon rank test, the reconstruction changed the redox potential differently across observation wells; however, given the small absolute concentration levels, the change is relatively minor. The treatment efficiency for both organic and inorganic matter is consistent after the reconstruction, with the exception of ammonium. The inconsistent results for ammonium could be explained by changes in the cation exchange capacity (CEC) of the newly paved riverbed. Because the bed is new, it was not yet capable of retaining the newly produced ammonium by sorption, which led to the breakthrough of the ammonium plume. By estimation, the peak of the ammonium plume would reach the most distant observation well before February 2024, while the peak concentration could be further dampened by sorption and diluted by the subsequent low-ammonium flow. The consistent DOC and SUVA levels suggest that there was no clear preference for organic matter removal along the flow path.
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2dDt of the mean squared displacement (MSD), with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). The behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
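The superstatistical route to BNG diffusion can be illustrated with a minimal simulation (a generic sketch with illustrative parameters, not code from the dissertation): each trajectory receives its own diffusivity D drawn from an exponential distribution, which leaves the ensemble MSD linear, ⟨x²(t)⟩ = 2⟨D⟩t in one dimension, while the displacement distribution becomes a leptokurtic Laplace law.

```python
import numpy as np

# Sketch of a superstatistical "Brownian yet non-Gaussian" model: every
# particle moves with its own diffusivity D ~ Exp(<D>). Conditioned on D,
# displacements are Gaussian; marginally they follow a Laplace law while
# the ensemble MSD stays Fickian, <x^2(t)> = 2<D>t (1D). Parameters here
# are illustrative only.

rng = np.random.default_rng(0)

def displacements(n_traj, t, d_mean=1.0):
    """Draw x(t) for n_traj particles, each with its own diffusivity."""
    D = rng.exponential(d_mean, size=n_traj)       # random diffusivity
    return rng.normal(0.0, np.sqrt(2.0 * D * t))   # Gaussian given D

x = displacements(200_000, t=1.0)
msd = np.mean(x**2)                               # ~ 2*<D>*t = 2
excess_kurtosis = np.mean(x**4) / msd**2 - 3.0    # 0 for Gaussian, 3 for Laplace
print(msd, excess_kurtosis)
```

The positive excess kurtosis with a linear-in-time MSD is exactly the BNG signature the thesis discusses; diffusing diffusivity models additionally let D evolve in time.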
This dissertation explores the field of random diffusivity models extensively. Starting from a chronological description of the main approaches used to describe BNG and ANG diffusion, different mathematical methodologies are defined for the solution and study of these models. The processes reported in this work can be classified into three subcategories, i) randomly-scaled Gaussian processes, ii) superstatistical models and iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. The study then focuses mainly on BNG diffusion, which is by now well established and relatively well understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the scenarios known so far for the study of this class of processes.
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the moment-generating function is first provided to obtain standard statistical properties of the models. The discussion then moves to the power spectral analysis and the first-passage statistics of some particular random diffusivity models. The results obtained with the random diffusivity approach are compared with those for standard Brownian motion. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
Organizations continue to assemble and rely upon teams of remote workers as an essential element of their business strategy; however, knowledge processing is particularly difficult in such isolated, largely digitally mediated settings. The great challenge for a knowledge-based organization lies not in how individuals should interact using technology but in how to achieve effective cooperation and knowledge exchange. To date, more attention has been paid to technology and the difficulties machines have processing natural language, and less to the human aspect: the influence of our own individual cognitive abilities and preferences on the processing of information when interacting online. This thesis draws on four scientific domains involved in interpreting and processing massive, unstructured data (knowledge management, linguistics, cognitive science, and artificial intelligence) to build a model that offers a reliable way to address the ambiguous nature of language and improve workers’ digitally mediated interactions. Human communication can be discouragingly imprecise and is characterized by strong linguistic ambiguity; this represents an enormous challenge for the computer analysis of natural language. In this thesis, I propose and develop a new data interpretation layer for the processing of natural language based on the human cognitive preferences of the conversants themselves. Such a semantic analysis merges information derived both from the content and from the associated social and individual contexts, as well as the social dynamics that emerge online. At the same time, assessment taxonomies are used to analyze online comportment at the individual and community level in order to identify characteristics leading to greater effectiveness of communication.
Measurement patterns for identifying effective methods of individual interaction with regard to individual cognitive and learning preferences are also evaluated; a novel Cyber-Cognitive Identity (CCI)—a perceptual profile of an individual’s cognitive and learning styles—is proposed. Accommodation of such cognitive preferences can greatly facilitate knowledge management in the geographically dispersed and collaborative digital environment. Use of the CCI is proposed for cognitively labeled Latent Dirichlet Allocation (CLLDA), a novel method for automatically labeling and clustering knowledge that does not rely solely on probabilistic methods, but rather on a fusion of machine learning algorithms and the cognitive identities of the associated individuals interacting in a digitally mediated environment. Advantages include: a greater perspicuity of dynamic and meaningful cognitive rules leading to greater tagging accuracy and a higher content portability at the sentence, document, and corpus level with respect to digital communication.
Glycosylphosphatidylinositols (GPIs) are highly complex glycolipids that serve as membrane anchors to a large variety of eukaryotic proteins. They are covalently attached to a group of peripheral proteins called GPI-anchored proteins (GPI-APs) through a post-translational modification in the endoplasmic reticulum. The GPI anchor is a unique structure composed of a glycan, with a phospholipid tail at one end and a phosphoethanolamine linker at the other, where the protein attaches. The glycan part of the GPI comprises a conserved pseudopentasaccharide core that can branch out to carry additional glycosyl or phosphoethanolamine units. GPI-APs are involved in a diverse range of cellular processes, including signal transduction, protein trafficking, and pathogenesis by protozoan parasites such as the malaria-causing parasite Plasmodium falciparum. GPIs can also exist freely on the membrane surface without an attached protein, as found in parasites like Toxoplasma gondii, the causative agent of toxoplasmosis. These molecules are both structurally and functionally diverse; however, their structure-function relationship is still poorly understood. This is mainly because no clear picture exists of how the protein and the glycan arrange with respect to the lipid layer. Direct experimental evidence is scarce, so inconclusive pictures have emerged, especially regarding the orientation of GPIs and GPI-APs on membrane surfaces and the role of GPIs in membrane organization. Computational modelling through molecular dynamics simulations therefore appears to be a useful method to make progress. In this thesis, we explore characteristics of GPI anchors and GPI-APs embedded in lipid bilayers by constructing molecular models at two different resolutions, all-atom and coarse-grained.
First, we show how to construct a modular molecular model of GPIs and GPI-anchored proteins that can be readily extended to a broad variety of systems, addressing the micro-heterogeneity of GPIs. We do so by creating a hybrid link to which GPIs of diverse branching and lipid tails of varying saturation can be attached, with their respective optimized force fields, GLYCAM06 and Lipid14. Using microsecond simulations, we demonstrate that the GPI prefers to “flop down” on the membrane, strongly interacting with the lipid heads, rather than standing upright like a “lollipop”. Secondly, we extend the model of the GPI core to carry out a systematic study of the structural aspects of GPIs carrying different side chains (parasitic and human GPI variants) inserted in lipid bilayers. Our results demonstrate the importance of the side-branch residues, as these are the most accessible, and thereby recognizable, epitopes. This finding qualitatively agrees with experimental observations that highlight the role of the side branches in the immunogenicity of GPIs and its specificity. The overall flop-down orientation of the GPIs with respect to the bilayer surface presents the side-chain residues to the solvent. Upon attaching the green fluorescent protein (GFP) to the GPI, it is seen to lie in close proximity to the bilayer, interacting both with the lipid heads and with the glycan part of the GPI. However, the orientation of GFP is sensitive to the type of GPI it is attached to. Finally, we construct a coarse-grained model of the GPI and GPI-anchored GFP using a modified version of the MARTINI force field, which enhances the accessible timescale by at least an order of magnitude compared to the atomistic system.
This study provides a theoretical perspective on the conformational behavior of the GPI core and some of its branched variants in the presence of lipid bilayers, and draws comparisons with experimental observations. Our modular atomistic model of the GPI can be further employed to study GPIs of variable branching, and thereby aid in designing future experiments, especially in the area of vaccines and drug therapies. Our coarse-grained model can be used to study dynamic aspects of GPIs and GPI-APs with respect to plasma membrane organization. Furthermore, the backmapping technique of converting a coarse-grained trajectory back to the atomistic model would enable in-depth structural analysis with ample conformational sampling.
Due to continuously intensifying human usage of the marine environment, cetaceans, which range worldwide, face an increasing number of threats. Besides whaling, overfishing and by-catch, new technical developments increase water and noise pollution, which can negatively affect marine species. Cetaceans are especially prone to these influences: being at the top of the food chain, they accumulate toxins and contaminants, and they are extremely noise-sensitive due to their highly developed hearing sense and echolocation ability. As a result, several cetacean species were driven to extinction during the last century or are now classified as critically endangered. This work focuses on two odontocetes. It applies and compares different molecular methods for inferring population status and adaptation, with implications for conservation. The worldwide-distributed sperm whale (Physeter macrocephalus) shows a matrilineal population structure with predominantly male dispersal. A recently stranded group of male sperm whales provided a unique opportunity to investigate male grouping for the first time. Based on the mitochondrial control region, I was able to infer that male bachelor groups comprise multiple matrilines, hence derive from different social groups, and that they represent the genetic variability of the entire North Atlantic. The harbour porpoise (Phocoena phocoena) occurs only in the northern hemisphere. Being small and occurring mostly in coastal habitats, it is especially prone to human disturbance. Since some subspecies and subpopulations are critically endangered, it is important to generate and provide genetic markers with high resolution to facilitate population assignment and subsequent protection measures. Here, I provide the first harbour porpoise whole genome, in high quality and including a draft annotation.
Using it for mapping ddRAD-seq data, I identified genome-wide SNPs and, together with a fragment of the mitochondrial control region, inferred the population structure across the species’ North Atlantic distribution range. The Belt Sea harbours a subpopulation distinct from the North Atlantic, with a transition zone in the Kattegat. Within the North Atlantic, I could detect subtle genetic differentiation between western (Canada-Iceland) and eastern (North Sea) regions, with support for a German North Sea breeding ground around the Isle of Sylt. Further, I detected six outlier loci that show isolation by distance across the investigated sampling areas. By employing different markers, I could show that single-marker systems as well as genome-wide data can unravel new information about population affinities of odontocetes. Genome-wide data can facilitate the investigation of adaptations and of the evolutionary history of the species and its populations. Moreover, they enable population genetic investigations at high resolution, allowing the detection of subtle population structuring, which is especially important for highly mobile cetaceans.
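For illustration only (hypothetical allele frequencies, not the thesis data or pipeline), a minimal Wright-style F_ST statistic of the kind used to quantify genome-wide differentiation between populations such as the Belt Sea and North Atlantic porpoises can be sketched as:

```python
import numpy as np

# Minimal F_ST sketch from per-population allele frequencies at biallelic
# SNPs: F_ST = 1 - H_S/H_T, with H_T the expected heterozygosity of the
# pooled population and H_S the mean within-population heterozygosity.
# The frequencies below are invented for demonstration.

def fst(p1, p2):
    """Mean F_ST over loci, given allele frequencies of two populations."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    p_bar = (p1 + p2) / 2.0                                   # pooled frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)                         # total heterozygosity
    h_s = (2.0 * p1 * (1 - p1) + 2.0 * p2 * (1 - p2)) / 2.0   # mean within-pop
    keep = h_t > 0                                            # skip monomorphic loci
    return float(np.mean(1.0 - h_s[keep] / h_t[keep]))

# One strongly differentiated locus, two undifferentiated ones:
print(fst([0.9, 0.5, 0.1], [0.1, 0.5, 0.1]))
```

Real analyses use unbiased estimators (e.g. Weir-Cockerham) that correct for sample size, but the logic of comparing within- versus total heterozygosity is the same.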
This thesis investigates how the permafrost microbiota responds to global warming. In detail, the constraints behind methane production in thawing permafrost were linked to methanogenic activity, abundance and composition. Furthermore, this thesis offers new insights into microbial adaptations to the changing environmental conditions during global warming. This was assessed by investigating the potentially ecologically relevant functions encoded by plasmid DNA within the permafrost microbiota. Permafrost of both interglacial and glacial origin, spanning the Holocene to the late Pleistocene including the Eemian, was studied during long-term thaw incubations. Furthermore, several permafrost cores of different stratigraphy, soil type and vegetation cover were used to target the main constraints behind methane production during short-term thaw simulations. Short- and long-term incubations simulating thaw, with and without the addition of substrate, were combined with activity measurements as well as amplicon and metagenomic sequencing of permanently frozen permafrost and seasonally thawed active layer. Combined, this allowed the following questions to be addressed: i) What constrains methane production when permafrost thaws, and how is this linked to methanogenic activity, abundance and composition? ii) How does the methanogenic community composition change under long-term thawing conditions? iii) Which potentially ecologically relevant functions are encoded by plasmid DNA in active-layer soils?
The major outcomes of this thesis are as follows. i) Methane production from permafrost after long-term thaw simulation was found to be constrained mainly by the abundance of methanogens and the archaeal community composition. Deposits formed during periods of warmer temperatures and increased precipitation (here represented by deposits from the Late Pleistocene of both interstadial and interglacial periods) were found to respond most strongly to thawing conditions and to contain an archaeal community dominated by methanogenic archaea (40% and 100% of all detected archaea). Methanogenic population size and carbon density were identified as the main predictors of potential methane production in thawing permafrost in short-term incubations when substrate was sufficiently available.
ii) Besides determining the methanogenic activity after long-term thaw, the paleoenvironmental conditions were also found to influence the response of the methanogenic community composition. Substantial shifts within the methanogenic community structure and a drop in diversity were observed in deposits formed during warmer periods, but not in deposits from stadials, when colder and drier conditions prevailed. Overall, a shift towards a dominance of hydrogenotrophic methanogens was observed in all samples, except for the oldest interglacial deposits from the Eemian, which displayed a potential dominance of acetoclastic methanogens. The Eemian, which is discussed as an analogue to current climate conditions, contained highly active methanogenic communities. However, all potential limitations of methane production after permafrost thaw, namely methanogenic community structure, methanogenic population size, and the substrate pool, might be overcome once permafrost has thawed in the long term. iii) Enrichments with soil from the seasonally thawed active layer revealed that its plasmid DNA (‘metaplasmidome’) carries stress-response genes, in particular antibiotic resistance genes, heavy metal resistance genes, cold shock proteins and genes encoding UV protection. These are functions directly involved in the adaptation of microbial communities to stresses in polar environments. It was further found that metaplasmidomes from the Siberian active layer originate mainly from Gammaproteobacteria. By applying enrichment cultures followed by plasmid DNA extraction, it was possible to obtain a higher average contig length and a significantly higher recovery of plasmid sequences than by extracting plasmid sequences from metagenomes. The approach of analyzing ‘metaplasmidomes’ established in this thesis is therefore suitable for studying the ecological role of plasmids in polar environments in general.
This thesis emphasizes that including microbial community dynamics has the potential to improve permafrost-carbon projections. Microbially mediated methane release from permafrost environments may significantly impact future climate change. This thesis identified drivers of methanogenic composition, abundance and activity in thawing permafrost landscapes. Finally, it underlines the importance of studying how the currently warming Arctic affects microbial communities in order to gain more insight into microbial response and adaptation strategies.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with the default setup reproduces the basic behaviour of the stratospheric polar vortex. However, the stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Reducing either the non-orographic or the orographic gravity wave drag leads to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January; the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON represents downward coupling realistically. This coupling is intensified and more realistic in the experiments with reduced gravity wave drag, in particular with reduced non-orographic drag. The tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as the current optimal setup for seasonal simulations with ICON.
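As a generic illustration of how an annular-mode index such as the Northern Annular Mode is commonly constructed (the leading principal component of extratropical pressure or geopotential anomalies), here is a minimal EOF sketch on synthetic data; it is not the diagnostic code used for the ICON experiments, and the field below is invented.

```python
import numpy as np

# EOF/PC recipe on a synthetic anomaly field: an annular "seesaw"
# pattern with a random amplitude plus noise. The leading principal
# component, standardised, plays the role of the annular-mode index.

rng = np.random.default_rng(1)

n_time, n_space = 300, 50
pattern = np.sin(np.linspace(0, np.pi, n_space))      # toy spatial mode
amplitude = rng.normal(size=n_time)                   # toy temporal evolution
field = np.outer(amplitude, pattern) + 0.1 * rng.normal(size=(n_time, n_space))

anom = field - field.mean(axis=0)                     # remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)   # EOF decomposition
nam_index = u[:, 0] * s[0]                            # leading PC
nam_index /= nam_index.std()                          # standardised index

# The leading PC should track the imposed amplitude (up to an arbitrary sign):
corr = np.corrcoef(nam_index, amplitude)[0, 1]
print(abs(corr))
```

In practice the anomalies would be area-weighted geopotential height or MSLP poleward of roughly 20°N, but the centering-then-SVD structure is the same.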
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including tropical phenomena, such as the QBO and ENSO, as well as a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. The stratospheric reaction to ENSO events in ICON, on the other hand, is realistic: both ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two different approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming that has been discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to the understanding of the divergent conclusions between model and observational studies.
Philosophische Tugenden
(2020)
What constitutes good philosophizing? And why is John Stuart Mill in particular an exceptionally good philosopher? In this volume, Joachim Toenges-Hinn combines the metaphilosophical search for what makes good philosophy with a historical study of the philosopher John Stuart Mill. Mill thereby serves both as originator and as embodiment of the striving for two philosophical virtues, which Toenges-Hinn derives from Mill's philosophical work and subsequently defends systematically. These virtues, termed the "Bentham ideal" and the "Coleridge ideal", are the focus of his investigation, as is the significance of experiments in living for philosophical biographies.
Secondary plant metabolites and their health-promoting properties have been studied extensively from a nutritional-physiology perspective over the past two decades, and specific positive effects in the human organism have in part been described in great detail. Among the carotenoids, the secondary plant metabolite lutein has moved into the focus of research, particularly for the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, enters the human organism via plant foods, in particular green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and plays an important role in maintaining the functionality of the photoreceptor cells. With aging, a decrease in macular pigment density and the degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, in combination with an altered metabolic state in the aging organism, can lead to age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts aim at finding preventive measures. The supplementation of lutein-containing preparations offers one starting point. Dietary supplements (German: Nahrungsergänzungsmittel, NEM) with lutein in various forms of application are already on the market. Limiting factors are the stability and bioavailability of lutein, which is in part expensive to obtain and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, would be advantageous in such a supplement. In addition to their naturally higher stability, lutein esters can be used sustainably and cost-effectively.
In this work, physicochemical and nutritionally relevant aspects of the product development process of a lutein-ester dietary supplement (NEM) in a colloidal formulation were investigated. The hitherto unique application of lutein esters in an oral spray was intended to facilitate and improve uptake of the active ingredient, particularly for elderly people. Based on the results and the nutritional assessment, recommendations were to be given, among other things, for the composition of a miniemulsion (an emulsion with particle sizes <1.0 µm). The bioavailability of lutein esters from the developed colloidal formulations was assessed in vitro in studies of resorption and absorption availability.
In physical investigations, the basic components of the formulations were first specified. In initial active-ingredient-free model emulsions, selected oils as carrier phase as well as emulsifiers and solubilizers (peptizers) were physically tested for their suitability to yield a miniemulsion. The best stability and optimal miniemulsion properties were obtained using MCT oil (medium-chain triglyceride) or rapeseed oil in the carrier phase together with the emulsifier Tween® 80 (Tween 80), alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
The physical investigations of the model emulsions yielded the pre-emulsions as prototypes. These contained the active ingredient lutein in various forms: pre-emulsions with lutein, with lutein esters, and with both lutein and lutein esters were designed, containing either the emulsifier Tween 80 or its combination with Biozate 1. During preparation of the pre-emulsions, applying ultrasound followed by high-pressure homogenization yielded the desired miniemulsions. Both emulsifiers provided optimal stabilization. The active ingredients were then characterized physicochemically. Lutein esters from oleoresin in particular proved stable under various storage conditions. Likewise, lutein and lutein esters remained stable during short-term treatment under specific mechanical, thermal, acidic and basic conditions; the addition of Biozate 1 provided additional protection only for lutein. Under prolonged physicochemical treatment, the active ingredients incorporated into the miniemulsions underwent moderate degradation, with a marked sensitivity to basic conditions. For the formulation development of the supplement, the recommendation was therefore to design a miniemulsion with a slightly acidic pH to protect the active ingredient through controlled addition of further ingredients.
In the further development process of the supplement, final formulations with lutein esters as the active ingredient were established. The sole use of the emulsifier Biozate 1 proved unsuitable. The remaining final formulations contained, in the oil phase, the active ingredient together with MCT oil or rapeseed oil as well as α-tocopherol for stabilization. The aqueous phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives were ascorbic acid and potassium sorbate for microbiological protection, and xylitol and orange flavour for sensory effects. The composition of the base formulation and the emulsification process applied yielded stable miniemulsions. Long-term storage trials with the final formulations at 4 °C further showed that the required lutein ester content in the product was maintained. Analogous investigations of a commercially available lutein-containing preparation, by contrast, confirmed an instability of lutein occurring even during short-term storage.
Finally, the bioavailability of lutein esters was examined in in vitro resorption and absorption studies with the pre-emulsions and final formulations. After treatment in an established in vitro digestion model, only a minor resorption availability of the lutein esters was found: micellarization of the active ingredient from the designed formulations was limited, and enzymatic cleavage of the lutein esters to free lutein was detected only to a limited extent. The specificity and activity of the corresponding hydrolytic lipases towards lutein esters must be rated as extremely low. In subsequent cell culture experiments with the Caco-2 cell line, no cytotoxic effects of the relevant ingredients in the pre-emulsions were observed. By contrast, a sensitivity towards the final formulations was observed, which should be considered in connection with irritation of the mucous membranes of the gastrointestinal tract; a less complex formulation might minimize these limitations. Final absorption studies showed that, in principle, a minor uptake into the enterocytes from miniemulsions is possible, primarily of lutein but also of lutein monoesters. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased cellular uptake from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into the enterocytes appears to occur via passive diffusion, although active transport cannot be ruled out. Lutein diesters, by contrast, cannot enter the enterocytes via micellarization and simple diffusion because of their molecular size.
Their uptake into the small-intestinal epithelial cells requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective uptake of lutein esters into the cells and constitutes a restriction of their bioavailability compared with free lutein.
In summary, a low bioavailability from colloidal formulations was demonstrated for the physicochemically stable lutein esters. Nevertheless, their use as a source of the secondary plant metabolite lutein in a dietary supplement can be recommended. In combination with the intake of lutein-rich plant foods, a contribution to improving lutein status can be achieved despite the expectedly low bioavailability of lutein esters from the supplement. Relevant publications have shown clear correlations between the intake of lutein-ester-containing preparations and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed critically against its instability and cost. As a result of this work, the commercial product Vita Culus® was designed. As an outlook, human intervention studies with the supplement should enable a final assessment of the bioavailability of lutein esters from the preparation.
The East Asian monsoons characterize the modern-day Asian climate, yet their geological history and driving mechanisms remain controversial. The southeasterly summer monsoon provides moisture, whereas the northwesterly winter monsoon sweeps up dust from the arid Asian interior to form the Chinese Loess Plateau. The onset of this loess accumulation, and therefore of the monsoons, was thought to be 8 million years ago (Ma). However, in recent years these loess records have been extended further back in time to the Eocene (56-34 Ma), a period characterized by significant changes in both the regional geography and global climate. Yet the extent to which these reconfigurations drive atmospheric circulation and whether the loess-like deposits are monsoonal remains debated. In this thesis, I study the terrestrial deposits of the Xining Basin, previously identified as Eocene loess, to derive the paleoenvironmental evolution of the region and identify the geological processes that have shaped the Asian climate.
I review dust deposits in the geological record and conclude that these are commonly represented by a mix of both windblown and water-laid sediments, in contrast to the pure windblown material known as loess. Yet by using a combination of quartz surface morphologies, provenance characteristics and distinguishing grain-size distributions, windblown dust can be identified and quantified in a variety of settings. This has important implications for tracking aridification and dust-fluxes throughout the geological record.
Past reversals of Earth’s magnetic field are recorded in the deposits of the Xining Basin, and I use these together with a dated volcanic ash layer to accurately constrain their age to the Eocene period. A combination of pollen assemblages, low dust abundances and other geochemical data indicates that the early Eocene was relatively humid, suggesting an intensified summer monsoon due to the warmer greenhouse climate at this time. A subsequent shift from predominantly freshwater to salt lakes reflects a long-term aridification trend, possibly driven by global cooling and the continuous uplift of the Tibetan Plateau. Superimposed on this aridification are wetter intervals, reflected in more abundant lake deposits, which correlate with highstands of the inland proto-Paratethys Sea. This sea covered the Eurasian continent and thereby provided additional moisture to the winter-time westerlies during the middle to late Eocene.
The long-term aridification culminated in an abrupt shift at 40 Ma, reflected in the onset of windblown dust deposition, an increase in steppe-desert pollen, the occurrence of high-latitude orbital cycles and northwesterly winds recorded in deflated salt deposits. Together, these indicate the onset of a Siberian high atmospheric pressure system, which drives the East Asian winter monsoon and dust storms, and which was triggered by a major sea retreat from the Asian interior. These results therefore show that the proto-Paratethys Sea, though less well recognized than the Tibetan Plateau and global climate, has been a major driver in setting up the modern-day climate of Asia.
Organizations incorporate the institutional demands from their environment in order to be deemed legitimate and survive. Yet, complexifying societies promulgate multiple and sometimes inconsistent institutional prescriptions. When these prescriptions collide, organizations are said to face “institutional complexity”. How does an organization then incorporate incompatible demands? What are the consequences of institutional complexity for an organization? The literature provides contradictory conceptual and empirical insights on the matter. A central assumption, however, remains that internal incompatibilities generate tensions that, under certain conditions, can escalate into intractable conflicts, resulting in dysfunctionality and loss of legitimacy. The present research is an inquiry into what happens inside an organization when it incorporates complex institutional demands.
To answer this question, I focus on how individuals inside an organization interpret a complex institutional prescription. I examine how members of the French Development Agency interpret ‘results-based management’, a central but complex concept of organizing in the field of development aid. I use an inductive mixed methods design to systematically explore how different interpretations of results-based management relate to one another and to the organizational context in which they are embedded.
The results reveal that results-based management is a contested concept in the French Development Agency. I find multiple interpretations of the concept, which are attached to partly incompatible rationales about “who we are” and “what we do as an organization”. These rationales nevertheless coexist as balanced forces, without escalating into open conflict. The analysis points to four reasons for this peaceful coexistence of diverging rationales inside one and the same organization: 1) individuals’ capacity to manipulate different interpretations of a complex institutional demand, 2) the nature of interpretations, which makes them more or less prone to conflict, 3) the balanced distribution of rationales across the organizational sub-contexts and 4) the shared rules of interpretation provided by the larger socio-cultural context.
This research shows that an organization that incorporates institutional complexity comes to represent different, partly incompatible things to its members without being at war with itself. In doing so, it contributes to our knowledge of institutional complexity and organizational hybridity. It also advances our understanding of internal organizational legitimacy and of the translation of managerial concepts in organizations.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
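The ensemble-learning idea mentioned above can be illustrated with a minimal majority-vote sketch. The keyword-based voters and the "toxic"/"ok" labels below are invented for illustration and are not the thesis's actual fine-grained classifiers:

```python
from collections import Counter

def ensemble_predict(classifiers, comment):
    # Majority vote over several binary toxicity classifiers;
    # each classifier maps a comment string to "toxic" or "ok".
    votes = Counter(clf(comment) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Toy keyword-based stand-ins for trained models (illustrative only).
insult_clf = lambda c: "toxic" if "idiot" in c.lower() else "ok"
threat_clf = lambda c: "toxic" if "hurt you" in c.lower() else "ok"
naive_clf = lambda c: "ok"  # a deliberately weak voter

comment = "You idiot, I will hurt you!"
label = ensemble_predict([insult_clf, threat_clf, naive_clf], comment)
```

In a real moderation pipeline the voters would be trained models (e.g., fine-tuned classifiers on augmented data), but the voting logic is the same.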
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on later phases of the process, in practice experts spend more time in the earlier phases, preparing data to make it consistent with their systems' requirements or to improve their models' accuracy. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches to perform specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entity exist in a database, for numerous reasons ranging from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use-cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select a wrong hotel, or parcel delivery, where a parcel can get delivered to the wrong address. Identifying the variety of possible data issues to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well-established and covers different aspects of both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place and apply the latter in datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. When applied, data preparation steps are selected only for attributes where there are positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold; first, we consider more domain-specific approaches to improve the quality of values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate per address attribute, e.g., city or country.
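The per-attribute selection of similarity measures described above can be sketched as follows. The two measures and the tiny labeled sample are illustrative assumptions, not the measures or data actually evaluated in the thesis:

```python
def jaccard(a, b):
    # Token-set overlap; robust to word reordering and extra tokens.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def levenshtein_sim(a, b):
    # Normalized edit-distance similarity; robust to typos.
    m, n = len(a), len(b)
    if max(m, n) == 0:
        return 1.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return 1 - prev[n] / max(m, n)

def best_measure(duplicate_pairs):
    # Pick the measure that scores the known duplicates highest on average.
    measures = {"jaccard": jaccard, "levenshtein": levenshtein_sim}
    return max(measures, key=lambda name: sum(
        measures[name](a, b) for a, b in duplicate_pairs) / len(duplicate_pairs))

# Hypothetical labeled duplicates for a "city" attribute.
city_duplicates = [("New York", "New York City"), ("Berln", "Berlin")]
chosen = best_measure(city_duplicates)
```

For this typo-heavy sample the edit-distance measure wins; a token-based measure would instead suit attributes where values differ mainly in word order or extra tokens.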
To facilitate duplicate detection in applications where gold standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered using state-of-the-art algorithms efficiently and without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then be applied on unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open source code to enable repeatability of our research results and application of our approaches to other datasets.
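A matching dependency, the rule form that MDedup builds on, can be sketched as a conjunction of per-attribute similarity premises. The attribute names, thresholds, and similarity function below are illustrative assumptions, not dependencies actually discovered by MDedup:

```python
import difflib

def similarity(a, b):
    # Generic string similarity in [0, 1] via stdlib SequenceMatcher.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def md_matches(record1, record2, dependency):
    # A matching dependency as a list of (attribute, threshold) premises:
    # if every premise holds, the two records are declared duplicates.
    return all(similarity(record1[attr], record2[attr]) >= threshold
               for attr, threshold in dependency)

md = [("name", 0.8), ("city", 0.9)]  # hypothetical discovered dependency
r1 = {"name": "Jon Smith", "city": "Berlin"}
r2 = {"name": "John Smith", "city": "Berlin"}
is_duplicate = md_matches(r1, r2, md)
```

The appeal of this rule form is that the dependencies themselves can be discovered from data without labels, which is what enables duplicate detection in the absence of a gold standard.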
To anticipate the future of reef ecosystem turnover under environmental stresses such as global warming and ocean acidification, analogue studies from the geologic past are needed. As a critical time of reef ecosystem innovation, the Permian-Triassic transition witnessed the most severe demise of Phanerozoic reef builders and the establishment of modern-style symbiotic relationships among reef-building organisms. As the initial stage of this transition, the Middle Permian (Capitanian) mass extinction caused a reef eclipse in the early Late Permian, which led to a gap in our understanding of the post-extinction Wuchiapingian reef ecosystem, shortly before the radiation of Changhsingian reefs. This thesis presents detailed biostratigraphic, sedimentological, and palaeoecological studies of Wuchiapingian reef recovery following the Middle Permian (Capitanian) mass extinction, based on the only recorded Wuchiapingian reef setting, outcropping at the Tieqiao section in South China.
Conodont biostratigraphic zonations were revised from the Early Permian Artinskian to the Late Permian Wuchiapingian in the Tieqiao section. Twenty main and seven subordinate conodont zones were determined at the Tieqiao section, including two conodont zones below and above the Tieqiao reef complex. The age of the Tieqiao reef was constrained to the early to middle Wuchiapingian.
After constraining the reef age, detailed two-dimensional outcrop mapping combined with lithofacies analysis was carried out on the Wuchiapingian Tieqiao section to investigate the reef growth pattern stratigraphically as well as the lateral changes of reef geometry at the outcrop scale. Semi-quantitative studies of the reef-building organisms were used to trace their evolution during the reef recovery. Six reef growth cycles were identified within six transgressive-regressive cycles in the Tieqiao section. The reefs developed within the upper part of each regressive phase and were dominated by different biotas. The timing of initial reef recovery after the Middle Permian (Capitanian) mass extinction was updated to the Clarkina leveni conodont zone, earlier than previously thought. Metazoans such as sponges were not major components of the Wuchiapingian reefs until the 5th and 6th cycles, so the recovery of the metazoan reef ecosystem after the Middle Permian (Capitanian) mass extinction was clearly delayed. In addition, although the importance of metazoan reef builders such as sponges did increase during the recovery, encrusting organisms such as Archaeolithoporella and Tubiphytes, combined with microbial carbonate precipitation, still played significant roles in the reef-building process and reef recovery after the mass extinction.
Based on the results of the outcrop mapping and sedimentological studies, quantitative composition analysis of the Tieqiao reef complex was applied to selected thin sections to further investigate the function of reef-building components and the reef evolution after the Middle Permian (Capitanian) mass extinction. Data sets of skeletal grains and whole-rock components were analyzed. The results show eleven biocommunity clusters and eight rock composition clusters, each dominated by different skeletal grains or rock components. Sponges, Archaeolithoporella and Tubiphytes were the most ecologically important components of the Wuchiapingian Tieqiao reef, while clotted micrites and syndepositional cements were additional important rock components of the reef cores. Sponges were important throughout the reef recovery. Tubiphytes were broadly distributed across environments and played a key role in the initial reef communities. Archaeolithoporella concentrated in the shallower parts of the reef cycles (i.e., the upper part of the reef core) and was functionally significant for the enlargement of reef volume.
In general, the reef recovery after the Middle Permian (Capitanian) mass extinction shows some similarities with the reef recovery following the end-Permian mass extinction: a delayed recovery of metazoan reefs and a stepwise recovery pattern controlled by both ecological and environmental factors. The importance of encrusting organisms and microbial carbonates is also shared with most other post-extinction reef ecosystems. These findings help extend our understanding of reef ecosystem evolution under environmental perturbations or stresses.
One result of the intercultural relations in Southeast Asia are the still-existing Portuguese-based creole languages Papia Kristang and Macaísta, which have become the mother tongues of generations of people in Malacca and Macau. Which factors bring about language change in these idioms, and how can this change be recognized? This volume deals not only with the language dynamics of the Portuguese-based creole languages of Southeast Asia, but also with other central questions of variationist linguistics. It is based on the results of an empirical data collection that documents, in particular, changes in language use. In addition, the author presents new findings on language identifications that are significant not only for creole studies but also, across disciplines, for general linguistics.
Cardiac valves are essential for the continuous and unidirectional flow of blood throughout the body. During embryonic development, their formation is strictly connected to the mechanical forces exerted by blood flow. The endocardium that lines the interior of the heart is a specialized endothelial tissue and is highly sensitive to fluid shear stress. Endocardial cells harbor a signal transduction machinery required for the translation of these forces into biochemical signaling, which strongly impacts cardiac morphogenesis and physiology. To date, we lack a solid understanding of the mechanisms by which endocardial cells sense dynamic mechanical stimuli and how they trigger different cellular responses. In the zebrafish embryo, endocardial cells at the atrioventricular canal respond to blood flow by rearranging from a monolayer into a double layer, composed of a luminal cell population subjected to blood flow and an abluminal one that is not exposed to it. These early morphological changes lead to the formation of an immature valve leaflet. While previous studies mainly focused on genes that are positively regulated by shear stress, the mechanisms regulating cell behaviors and fates in cells that lack the stimulus of blood flow are largely unknown. One key discovery of my work is that the flow-sensitive Notch receptor and Krüppel-like factor (Klf) 2, one of the best-characterized flow-regulated transcription factors, are activated by shear stress but function in two parallel signal transduction pathways. Each of these two pathways is essential for the rearrangement of atrioventricular cells into an immature double-layered valve leaflet. A second key discovery of my study is the finding that both Notch and Klf2 signaling negatively regulate the expression of the angiogenesis receptor Vegfr3/Flt4, which becomes restricted to the abluminal endocardial cells of the valve leaflet.
Within these cells, Flt4 downregulates the expression of the cell adhesion proteins Alcam and VE-cadherin. A loss of Flt4 causes abluminal endocardial cells to ectopically express Notch, which is normally restricted to luminal cells, and impairs valve morphology. My study suggests that abluminal endocardial cells that do not experience mechanical stimuli lose Notch expression, and this triggers expression of Flt4. In turn, Flt4 negatively regulates Notch on the abluminal side of the valve leaflet. These antagonistic signaling activities and fine-tuned gene regulatory mechanisms ultimately shape cardiac valve leaflets by inducing unique differences in the fates of endocardial cells.
Potato is the fourth most important food crop in the world. Especially in tropical and sub-tropical potato production, drought is a yield-limiting factor, as potato is sensitive to water stress. Potato yield loss under water stress can be reduced by using tolerant varieties and adjusted agronomic practices. Direct selection for yield under water-stressed conditions requires long selection cycles; thus, the identification of markers for marker-assisted selection may speed up breeding. The objective of this thesis is to identify morphological markers for drought tolerance by continuously monitoring plant growth and canopy temperature with an automatic phenotyping system.
The phenotyping was performed in drought-stress experiments conducted in the screenhouse with population A (64 genotypes) in 2015 and 2016 and with population B (21 genotypes) in 2017 and 2018. Drought tolerance was quantified as the deviation of the relative tuber starch yield from the experimental median (DRYM) and from the parent median (DRYMp). Relative tuber starch yield is the starch yield under drought stress relative to the average starch yield of the respective cultivar under control conditions in the same experiment. The specific DRYM was calculated from the yield data of the same experiment, whereas the global DRYM was calculated from yield data combined over years of the respective population or across multiple experiments, including the VALDIS and TROST experiments (2011-2016).
Analysis of variance found a significant effect of genotype on DRYM, indicating that the tolerance variation required for marker identification was present in both populations.
Canopy growth was monitored continuously six times a day over five to ten weeks by a laser scanner system and yielded information on leaf area, plant height and leaf angle for population A and additionally on leaf inclination and light penetration depth for population B. Canopy temperature was measured 48 times a day over six to seven weeks by infrared thermometry in population B. From the continuous IRT surface temperature data set, the canopy temperature for each plant was selected by matching the time stamp of the IRT data with laser scanner data.
Mean, maximum, range and growth-rate values were calculated from the continuous laser scanner measurements of the respective canopy parameters. Among these, the maximum and mean values under long-term stress conditions correlated better with DRYM values calculated in the same experiment than did growth rate and diurnal range values. Therefore, the drought tolerance index was predicted from the maximum and mean values of the canopy parameters.
The tolerance index under specific experimental conditions was predicted linearly by simple regression models from individual canopy parameters under long-term stress conditions in population A (2016) and population B (2017 and 2018). Among the canopy parameters, maximum light penetration depth (2017), mean leaf angle (2016, 2017 and 2018), mean leaf inclination or mean canopy temperature depression (2017 and 2018), and maximum plant height (2017) were selected as tolerance predictors. However, no single parameter was sufficient to predict DRYM. Therefore, several independent parameters were integrated into a multiple regression model.
In the multiple regression model, experiment-specific DRYM values in population A were predicted from mean leaf angle (2016). In population B, specific tolerance could be predicted from maximum light penetration depth and mean leaf inclination (2017), from mean leaf inclination (2018), or from mean canopy temperature depression and mean leaf angle (2018).
In the data combined over seasons for population A, the multiple linear regression model selected maximum plant height and mean leaf angle as tolerance predictors. In population B, mean leaf inclination was selected as the tolerance predictor. However, in population A, the variance explained by the final model was too low.
Furthermore, the average tolerances relative to the parent median (2011-2018) across FGH plants or all plants (FGH and field) were predicted from maximum plant height (population A) and from maximum plant height and mean leaf inclination (population B). Altogether, canopy parameters can serve as markers for drought tolerance. Water-stress breeding in potato could therefore be sped up by using leaf inclination, light penetration depth, plant height and canopy temperature depression as markers for drought tolerance, especially under long-term stress conditions.
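The simple-regression step used in this work to predict the tolerance index from a single canopy parameter can be sketched with an ordinary least-squares fit. The leaf-angle values and DRYM scores below are invented for illustration, not thesis data:

```python
def fit_simple_regression(x, y):
    # Ordinary least squares for y ≈ a + b * x.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical mean leaf angle (degrees) and drought tolerance index (DRYM).
leaf_angle = [30.0, 35.0, 40.0, 45.0]
drym = [-0.10, -0.05, 0.00, 0.05]
a, b = fit_simple_regression(leaf_angle, drym)
predicted_drym = a + b * 50.0  # predict tolerance for an unseen genotype
```

The multiple-regression variant extends this to several canopy parameters at once, which is what allowed combinations such as plant height plus leaf inclination to serve as joint predictors.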
The current thesis focuses on the properties of graphene supported by metallic substrates, and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodological background of the experimental techniques used in this thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not previously reported. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, the calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to a gold monolayer. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but can also be generalized to graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d-metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which were predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of this thesis, where the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate. Moreover, the characteristic moiré pattern observed in graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data evidences that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in an n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as an induced intrinsic spin-orbit interaction, hybridization with the substrate states and corrugation of the graphene lattice. The main origin of the band gap was attributed to A-B sublattice symmetry breaking, a conclusion supported by a careful analysis of the interference effects in photoemission, which provided a band gap estimate of ~140 meV.
While the previous chapters focused on adjusting the properties of graphene through proximity to heavy metals, graphene in its own right is an excellent system for studying various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, reports of their observation in photoemission experiments have appeared only recently and remain scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work, and finally the equivalence between these features and scattering resonances is established. The obtained photoemission results are in good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either with a superlattice of Ir nanoclusters or with atomic hydrogen. These effects were attributed to strong local variations of the work function and/or destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be exploited in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.