Risks to cyber resources can arise from unintentional or intentional threats. These include insider threats from disgruntled or negligent employees and partners, escalating and emerging threats from around the world, the steady evolution of attack technologies, and the emergence of new and destructive attacks. Information technology now plays a decisive role in all areas of life, including the military. Ineffective protection of cyber resources can facilitate security incidents and cyber attacks that disrupt critical operations, lead to inappropriate access, disclosure, modification or destruction of sensitive information, and thereby endanger national security, economic well-being, and public health and safety. Often, however, it is not clear which threats are actually present and which of the critical system resources are particularly at risk.
This dissertation proposes various threat analysis methods for military information technology and tests them in real environments. This concerns infrastructures, IT systems, networks and applications that process classified information (Verschlusssachen, VS) and state secrets, as found in military or government organizations. A special characteristic of these organizations is the concept of information domains, in which different data elements, such as paper documents and computer files, are classified according to their security sensitivity, e.g. „STRENG GEHEIM" (top secret), „GEHEIM" (secret), „VS-VERTRAULICH" (confidential), „VS-NUR-FÜR-DEN-DIENSTGEBRAUCH" (restricted) or „OFFEN" (unclassified).
A distinctive feature of this work is the access to classified information from different information domains and the process of clearing it for publication. Every publication produced in the course of this work was discussed with, proofread by, and approved by members of the organization, so that no classified information was released to the public.
The dissertation first describes threat classification schemes and attacker strategies in order to derive a holistic, strategy-based threat model for organizations. It then defines the construction and analysis of a security data flow diagram, which is used to identify operational network nodes in classified information domains that are particularly endangered by the identified threats. This special, novel representation makes it possible to understand permitted and forbidden information flows within and between these information domains.
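The distinction between permitted and forbidden flows across classification levels can be sketched as a simple lattice check. The following is a minimal illustration, not the thesis's actual model: the level ordering, the names, and the "flow only to equal or higher classification" rule are assumptions made for demonstration.

```python
# Minimal sketch of checking information flows between classification
# domains. The level ordering and the upward-only flow rule are
# illustrative assumptions, not the policy model from the thesis.
LEVELS = {
    "OFFEN": 0,
    "VS-NUR-FUER-DEN-DIENSTGEBRAUCH": 1,
    "VS-VERTRAULICH": 2,
    "GEHEIM": 3,
    "STRENG GEHEIM": 4,
}

def flow_allowed(source: str, target: str) -> bool:
    """Permit flows only to domains of equal or higher classification."""
    return LEVELS[source] <= LEVELS[target]

def audit(flows):
    """Return every observed flow that violates the rule."""
    return [(s, t) for s, t in flows if not flow_allowed(s, t)]

violations = audit([
    ("OFFEN", "GEHEIM"),   # upward flow: allowed
    ("GEHEIM", "OFFEN"),   # downward flow: forbidden
])
print(violations)  # [('GEHEIM', 'OFFEN')]
```

Auditing observed message flows against such a rule set is the kind of policy check a security data flow diagram makes visible at a glance.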
Building on the threat analysis, the message flows of the operational network nodes are then analyzed for violations of security policies, and the results are presented in anonymized form using the security data flow diagram. Anonymizing the security data flow diagrams makes it possible to share them with external experts in order to discuss security issues.
The third part of the thesis shows how extensive log data of the message flows can be examined to determine whether the amount of data can be reduced. For this purpose, rough set theory from the field of uncertainty theory is used. This approach is tested in a case study, also taking possible anomalies into account, and determines which attributes in log data are most likely to be redundant.
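The rough-set idea behind this attribute reduction can be sketched in a few lines: an attribute is dispensable if dropping it leaves records that are indiscernible on the remaining attributes still agreeing on the decision attribute. The toy log records and field names below are invented for illustration and are not data from the thesis.

```python
def partition(rows, attrs):
    """Group row indices into blocks that share the same values on attrs
    (the indiscernibility classes of rough set theory)."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def consistent(rows, attrs, decision):
    """True if rows indiscernible on attrs always share the decision."""
    return all(
        len({rows[i][decision] for i in block}) == 1
        for block in partition(rows, attrs)
    )

def dispensable(rows, attrs, decision):
    """Attributes whose removal keeps the decision consistent, i.e.
    candidate redundant log fields in the rough-set sense."""
    return [a for a in attrs
            if consistent(rows, [b for b in attrs if b != a], decision)]

# Invented toy log records, purely for illustration
logs = [
    {"src": "A", "port": 80, "proto": "tcp", "alert": "no"},
    {"src": "B", "port": 80, "proto": "tcp", "alert": "no"},
    {"src": "A", "port": 22, "proto": "tcp", "alert": "yes"},
    {"src": "B", "port": 22, "proto": "udp", "alert": "yes"},
]
print(dispensable(logs, ["src", "port", "proto"], "alert"))
# ['src', 'proto'] -- only "port" is needed to predict "alert" here
```

In this toy example the "alert" outcome is fully determined by "port" alone, so "src" and "proto" could be dropped without losing discernibility, which is exactly the kind of reduction sought for large log datasets.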
Bildungsort Familie
(2019)
In educational and family research, the intergenerational transmission of education within the family is mainly discussed from the perspective of the educational success of the younger generation. How education-related transfer processes actually unfold within the family, however, remains largely unexplored in the German research landscape. This is where this qualitative study comes in. Its aim is to examine education-related transfer processes between the grandparent, parent and grandchild generations in Russian three-generation families that emigrated from the former Soviet Union to Berlin after 1989. Behind these transfer processes lie, in Bourdieu's sense, conscious and unconscious educational strategies of the interviewed family members. Within the scope of this study, two ethnic-German resettler (Spätaussiedler) families, the Hoffmann and Popow families, and two Russian-Jewish families, the Rosenthal and Buchbinder families, were interviewed. Group discussions were conducted with the members of the four three-generation families under study, as well as guided individual interviews with one representative of each generation. Data collection took place in Berlin between 2010 and 2012. The empirical material obtained in this way was analyzed using Bohnsack's documentary method. This made it possible to capture and reconstruct the implicit self-evidence with which, following Bourdieu, education habitually takes place in families. The study carried out a habitus-theoretical interpretation of the Russian three-generation families and the corresponding field analysis according to Bourdieu.
In this context, the social space of the families under study in the receiving society was reconstructed with respect to their horizon of comparison, the society of origin. Furthermore, the educational transfer was examined against the experiential background of each family, and a typology was developed on this basis.
This study yields new insights into the previously unexplored field of educational transfer in Russian three-generation families in Berlin. A key finding is that applying Bourdieu's class theory can be productive even for groups that were socialized in a socialist society and emigrated to a capitalist-oriented one. Another central finding is that in two of the four families examined, migration influenced the intergenerational educational transfer. In this context, the Rosenthal family exhibits a "split" habitus as a result of migration. This can be traced back to the fact that, when planning the granddaughter's career in Berlin, the family oriented itself towards the practical and the necessary. While the conscious educational strategy of the grandparent and parent generations for the grandchild generation in the country of arrival can be assigned to the habitus of necessity, which Bourdieu attributes to the working class, the Rosenthal family's leisure behavior can be assigned to the habitus of distinction, typical of the ruling class. A further finding of this study is that, in contrast to granddaughter Rosenthal, a so-called discrepancy of spheres was reconstructed for granddaughter Popow. Granddaughter Popow is left to her own devices in the outer sphere of school, since the grandparent and parent generations have only limited knowledge of the German school system. On the one hand, the granddaughter distances herself from her family (inner sphere) and from German school dropouts (outer sphere); on the other hand, in her attempt at upward social mobility, she orients herself towards Russian-speaking peers attending the upper secondary school track (third sphere). For granddaughter Popow, it is consequently the peer group, not the family, that functions as the central place of education.
It should be noted that the intergenerational educational transfer was influenced by migration in one Russian-Jewish family as well as in one resettler family. While the Rosenthal family belonged to the intelligentsia in the society of origin, the Popow family belonged to the working class. It follows that the intergenerational educational transfer of the families studied proceeds independently both of resettler or quota refugee status and of the social status specific to the place of origin. It can therefore be concluded that, within the scope of this study, migration is a central factor in intergenerational educational transfer.
C-Arylglykoside und Chalkone
(2019)
In the ongoing era of scientific medicine, a broad spectrum of active compounds for the treatment of various diseases has been assembled. Nevertheless, synthetic organic chemistry has taken on the task of expanding this spectrum via new or established routes, and for several reasons. On the one hand, the natural occurrence of certain compounds is often limited, so that synthetic methods increasingly replace less sustainable extraction from natural sources. On the other hand, derivatization and structural optimization can enhance the physiological effect or the bioavailability of an active compound. In this work, several representatives of the well-known compound classes of C-aryl glycosides and chalcones were synthesized using the palladium-catalyzed MATSUDA-HECK reaction as the key step.
For the C-aryl glycosides, unsaturated carbohydrates (glycals) were first prepared via a ruthenium-catalyzed cyclization reaction. These were subsequently reacted with variously substituted diazonium salts in the above-mentioned palladium-catalyzed coupling reaction. Evaluation of the analytical data showed that the trans-diastereomers were formed in all cases. It was then shown that the double bonds of these compounds can be functionalized by hydrogenation, dihydroxylation or epoxidation. In this way, among other things, a compound similar to the antidiabetic drug dapagliflozin was prepared.
In the second part of the work, aryl allyl chromanones were prepared by the MATSUDA-HECK reaction of various 8-allylchromanones with diazonium salts. It was observed that a MOM protecting group at the 7-position of the molecules suppresses the formation of product mixtures, so that only one of the possible compounds is formed in each case. The position of the double bond was determined by 2D NMR experiments. In cooperation with theoretical chemists, calculations were intended to elucidate how the observed compounds are formed; owing to an intramolecular interaction, however, no definitive statement could be made.
Subsequently, the compounds obtained were to be converted into chalcones by allylic oxidation. The ruthenium-catalyzed methods, among others, proved unsuitable. However, a metal-free, microwave-assisted method was successfully established, so that several representatives of this physiologically active compound class could be prepared.
Cellulose derived polymers
(2019)
Plastics such as polyethylene, polypropylene, and polyethylene terephthalate are part of our everyday lives in the form of packaging, household goods, electrical insulation, etc. These polymers are non-degradable and create many environmental problems and public health concerns. Additionally, they are produced from finite fossil resources. Given the continuing depletion of these limited resources, it is important to turn to renewable feedstocks and, ideally, to polymers that biodegrade. Although many bio-based polymers are known, such as polylactic acid, polybutylene succinate adipate or polybutylene succinate, none has yet shown the promise of replacing conventional polymers like polyethylene, polypropylene and polyethylene terephthalate. Cellulose is one of the most abundant renewable resources produced in nature. It can be transformed into various small molecules, such as sugars, furans, and levoglucosenone. The aim of this research is to use these cellulose-derived molecules for the synthesis of polymers.
Acid-treated cellulose was subjected to thermal pyrolysis to obtain levoglucosenone, which was reduced to levoglucosenol. Levoglucosenol was polymerized, for the first time, by ring-opening metathesis polymerization (ROMP), yielding high-molar-mass polymers of up to ~150 kg/mol. Poly(levoglucosenol) is thermally stable up to ~220 °C, amorphous, and exhibits a relatively high glass transition temperature of ~100 °C. It can be cast into a transparent film resembling common plastic and was found to degrade in a moist acidic environment. Poly(levoglucosenol) may therefore find use as an alternative to conventional plastics such as polystyrene.
Levoglucosenol was also converted into levoglucosenyl methyl ether, which was polymerized by cationic ring-opening polymerization (CROP). Polymers with molar masses of up to ~36 kg/mol were obtained. These polymers are thermally stable up to ~220 °C and are semi-crystalline thermoplastics with a glass transition temperature of ~35 °C and a melting transition at 70-100 °C. Additionally, the polymers were amenable to cross-linking, hydrogenation and thiol-ene click chemistry.
Floods are among the most costly natural hazards affecting Europe and Germany, demanding continuous adaptation of flood risk management. While social and economic development in recent years has altered flood risk patterns mainly through an increase in flood exposure, several flood types are further expected to increase in frequency and severity in certain European regions due to climate change. As a result of recent major flood events in Germany, German flood risk management has shifted to more integrated approaches that include private precaution and preparedness to reduce damage to exposed assets. Yet detailed insights into the preparedness decisions of flood-prone households remain scarce, especially in connection with mental impacts and individual coping strategies after being affected by different flood types.
This thesis aims to gain insights into flash floods as a costly hazard in certain German regions and compares their damage-driving factors with those of river floods. Furthermore, the psychological impacts on flood-affected households, as well as the effects on their coping and mitigation behaviour, are assessed. In this context, psychological models such as Protection Motivation Theory (PMT) and methods such as regression and Bayesian statistics are used to evaluate the factors influencing mental coping after an event and to identify psychological variables connected to intended private flood mitigation. The database consists of surveys conducted among affected households after the major river floods of 2013 and the flash floods of 2016.
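As a hedged illustration of the kind of Bayesian survey analysis mentioned above, a conjugate Beta-Binomial update yields the posterior share of households intending private mitigation. The counts and the uniform prior below are invented for demonstration; the thesis's actual models are more elaborate.

```python
# Hedged sketch: Beta-Binomial posterior for a survey proportion.
# Counts and prior are invented, not data from the thesis.
def beta_posterior(successes, trials, a_prior=1.0, b_prior=1.0):
    """Posterior Beta(a, b) parameters after observing survey responses,
    starting from a Beta(a_prior, b_prior) prior."""
    return a_prior + successes, b_prior + trials - successes

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# e.g. 130 of 200 surveyed households report intended mitigation measures
a, b = beta_posterior(130, 200)
print(round(posterior_mean(a, b), 3))  # 0.649
```

The posterior mean shrinks the raw sample proportion slightly towards the prior, which is the usual benefit of the Bayesian treatment when subgroups of the survey are small.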
The main conclusions of this thesis are that the damage patterns and damage-driving factors of strong flash floods differ significantly from those of river floods, owing to a rapid flow origination process, higher flow velocities and greater flow forces. The effects on the mental coping of people who have been affected by flood events, however, appear to be only weakly influenced by flood type, but do show a connection to event severity: frequently thinking of the respective event is pronounced and is also linked to a higher mitigation motivation. Mental coping and preparation after floods are further influenced by good information provision and by a social environment that encourages a positive attitude towards private mitigation.
As an overall recommendation, approaches for an integrated flood risk management in Germany should be followed that also take flash floods into account and consider psychological characteristics of affected households to support and promote private flood mitigation. Targeted information campaigns that concern coping options and discuss current flood risks are important to better prepare for future flood hazards in Germany.
The central motivation of this thesis was to provide possible solutions and concepts to improve the performance (e.g. activity and selectivity) of the electrochemical N2 reduction reaction (NRR). Given that porous carbon-based materials usually exhibit a broad range of structural properties, they could be promising NRR catalysts. Therefore, the advanced design of novel porous carbon-based materials and the investigation of their application in electrocatalytic NRR, including the particular reaction mechanisms, are the most crucial points to be addressed. In this regard, three main topics were investigated, all related to the functionalization of porous carbon for electrochemical NRR or other electrocatalytic reactions.
In chapter 3, a novel C-TixOy/C nanocomposite has been described that has been obtained via simple pyrolysis of MIL-125(Ti). A novel mode for N2 activation is achieved by doping carbon atoms from nearby porous carbon into the anion lattice of TixOy. By comparing the NRR performance of M-Ts and by carrying out DFT calculations, it is found that the existence of (O-)Ti-C bonds in C-doped TixOy can largely improve the ability to activate and reduce N2 as compared to unoccupied OVs in TiO2. The strategy of rationally doping heteroatoms into the anion lattice of transition metal oxides to create active centers may open many new opportunities beyond the use of noble metal-based catalysts also for other reactions that require the activation of small molecules as well.
In chapter 4, a novel catalyst construction composed of Au single atoms decorated on the surface of NDPCs was reported. The introduction of Au single atoms leads to active reaction sites, which are stabilized by the N species present in NDPCs. Thus, the interaction within as-prepared AuSAs-NDPCs catalysts enabled promising performance for electrochemical NRR. For the reaction mechanism, Au single sites and N or C species can act as Frustrated Lewis pairs (FLPs) to enhance the electron donation and back-donation process to activate N2 molecules. This work provides new opportunities for catalyst design in order to achieve efficient N2 fixation at ambient conditions by utilizing recycled electric energy.
The last topic, described in chapter 5, mainly focused on the synthesis of dual heteroatom-doped porous carbon from simple precursors. The introduction of N and B heteroatoms leads to the construction of N-B motifs and Frustrated Lewis pairs in a microporous architecture which is also rich in point defects. This can improve the strength of adsorption of different reactants (N2 and HMF) and thus their activation. As a result, BNC-2 exhibits a desirable electrochemical NRR and HMF oxidation performance. Gas adsorption experiments have been used as a simple tool to elucidate the relationship between the structure and catalytic activity. This work provides novel and deep insights into the rational design and the origin of activity of metal-free electrocatalysts, enables a physically viable discussion of the active motifs, and supports the search for their further applications.
Throughout this thesis, the ubiquitous problems of low selectivity and activity of electrochemical NRR are tackled by designing porous carbon-based catalysts with high efficiency and exploring their catalytic mechanisms. The structure-performance relationships and the mechanisms of activation of the relatively inert N2 molecule are revealed by either experimental results or DFT calculations. These fundamental understandings pave the way for a future optimal design and targeted promotion of NRR catalysts with porous carbon-based structures, as well as the study of new N2 activation modes.
The natural abundance of Coiled Coil (CC) motifs in cytoskeleton and extracellular matrix proteins suggests that CCs play an important role as passive (structural) and active (regulatory) mechanical building blocks. CCs are self-assembled superhelical structures consisting of 2-7 α-helices. Self-assembly is driven by hydrophobic and ionic interactions, while the helix propensity of the individual helices contributes additional stability to the structure. As a direct result of this simple sequence-structure relationship, CCs serve as templates for protein design and sequences with a pre-defined thermodynamic stability have been synthesized de novo. Despite this quickly increasing knowledge and the vast number of possible CC applications, the mechanical function of CCs has been largely overlooked and little is known about how different CC design parameters determine the mechanical stability of CCs. Once available, this knowledge will open up new applications for CCs as nanomechanical building blocks, e.g. in biomaterials and nanobiotechnology.
With the goal of shedding light on the sequence-structure-mechanics relationship of CCs, a well-characterized heterodimeric CC was utilized as a model system. The sequence of this model system was systematically modified to investigate how different design parameters affect the CC response when the force is applied to opposing termini in a shear geometry or separated in a zipper-like fashion from the same termini (unzip geometry). The force was applied using an atomic force microscope set-up and dynamic single-molecule force spectroscopy was performed to determine the rupture forces and energy landscape properties of the CC heterodimers under study. Using force as a denaturant, CC chain separation is initiated by helix uncoiling from the force application points. In the shear geometry, this allows uncoiling-assisted sliding parallel to the force vector or dissociation perpendicular to the force vector. Both competing processes involve the opening of stabilizing hydrophobic (and ionic) interactions. Also in the unzip geometry, helix uncoiling precedes the rupture of hydrophobic contacts.
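The energy landscape parameters obtained from such dynamic force spectroscopy experiments are commonly related through the Bell-Evans model, in which the most probable rupture force grows linearly with the logarithm of the loading rate. The sketch below evaluates this relation for illustrative parameter values, not the fitted values from this work.

```python
import math

kBT = 4.11e-21  # thermal energy at ~298 K, in joules

def bell_evans_force(loading_rate, x_beta, k0):
    """Most probable rupture force in the Bell-Evans model:
    F* = (kBT / x_beta) * ln(loading_rate * x_beta / (k0 * kBT)).

    loading_rate in N/s, distance to the transition state x_beta in m,
    zero-force off-rate k0 in 1/s. All parameter values here are
    illustrative assumptions, not results from this thesis.
    """
    return (kBT / x_beta) * math.log(loading_rate * x_beta / (k0 * kBT))

x_beta = 0.5e-9   # assumed 0.5 nm distance to the transition state
k0 = 1e-2         # assumed 0.01 1/s zero-force off-rate
for rate in (1e-10, 1e-9, 1e-8):          # 100, 1000, 10000 pN/s
    force = bell_evans_force(rate, x_beta, k0)
    print(f"{rate * 1e12:.0f} pN/s -> {force * 1e12:.1f} pN")
```

Each tenfold increase in loading rate raises the rupture force by (kBT/x_beta)*ln(10), here about 19 pN, which is why rupture forces are only meaningful together with the loading rate at which they were measured.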
In a first series of experiments, the focus was placed on canonical modifications in the hydrophobic core and the helix propensity. Using the shear geometry, it was shown that both a reduced core packing and helix propensity lower the thermodynamic and mechanical stability of the CC; however, with different effects on the energy landscape of the system. A less tightly packed hydrophobic core increases the distance to the transition state, with only a small effect on the barrier height. This originates from a more dynamic and less tightly packed core, which provides more degrees of freedom to respond to the applied force in the direction of the force vector. In contrast, a reduced helix propensity decreases both the distance to the transition state and the barrier height. The helices are ‘easier’ to unfold and the remaining structure is less thermodynamically stable so that dissociation perpendicular to the force axis can occur at smaller deformations.
Having elucidated how canonical sequence modifications influence CC mechanics, the pulling geometry was investigated in the next step. Using one and the same sequence, the force application points were exchanged and two different shear and one unzipping geometry were compared. It was shown that the pulling geometry determines the mechanical stability of the CC. Different rupture forces were observed in the different shear as well as in the unzipping geometries, suggesting that chain separation follows different pathways on the energy landscape. Whereas the difference between CC shearing and unzipping was anticipated and has also been observed for other biological structures, the observed difference for the two shear geometries was less expected. It can be explained with the structural asymmetry of the CC heterodimer. It is proposed that the direction of the α-helices, the different local helix propensities and the position of a polar asparagine in the hydrophobic core are responsible for the observed difference in the chain separation pathways. In combination, these factors are considered to influence the interplay between processes parallel and perpendicular to the force axis.
To obtain more detailed insights into the role of helix stability, helical turns were reinforced locally using artificial constraints in the form of covalent and dynamic 'staples'. A covalent staple bridges two adjacent helical turns, thus protecting them against uncoiling. The staple was inserted directly at the point of force application in one helix or in the same terminus of the other helix, which did not experience the force directly. It was shown that preventing helix uncoiling at the point of force application reduces the distance to the transition state while slightly increasing the barrier height. This confirms that helix uncoiling is critically important for CC chain separation. When inserted into the second helix, this stabilizing effect is transferred across the hydrophobic core and protects the force-loaded turns against uncoiling. If both helices were stapled, no additional increase in mechanical stability was observed. When replacing the covalent staple with a dynamic metal-coordination bond, a smaller decrease in the distance to the transition state was observed, suggesting that the staple opens up while the CC is under load.
Using fluorinated amino acids as another type of non-natural modification, it was investigated how the enhanced hydrophobicity and the altered packing at the interface influence CC mechanics. The fluorinated amino acid was inserted into one central heptad of one or both α-helices. It was shown that this substitution destabilized the CC thermodynamically and mechanically. Specifically, the barrier height was decreased and the distance to the transition state increased. This suggests that a possible stabilizing effect of the increased hydrophobicity is overruled by disturbed packing, which originates from a poor fit of the fluorinated amino acid into the local environment. This in turn increases the flexibility at the interface, as also observed for the hydrophobic core substitution described above. In combination, this confirms that the arrangement of the hydrophobic side chains is an additional crucial factor determining the mechanical stability of CCs.
In conclusion, this work shows that knowledge of the thermodynamic stability alone is not sufficient to predict the mechanical stability of CCs. It is the interplay between helix propensity and hydrophobic core packing that defines the sequence-structure-mechanics relationship. In combination, both parameters determine the relative contribution of processes parallel and perpendicular to the force axis, i.e. helix uncoiling and uncoiling-assisted sliding as well as dissociation. This new mechanistic knowledge provides insight into the mechanical function of CCs in tissues and opens up the road for designing CCs with pre-defined mechanical properties. The library of mechanically characterized CCs developed in this work is a powerful starting point for a wide spectrum of applications, ranging from molecular force sensors to mechanosensitive crosslinks in protein nanostructures and synthetic extracellular matrix mimics.
Force plays a fundamental role in the regulation of biological processes. Cells can sense the mechanical properties of the extracellular matrix (ECM) by applying forces and transmitting mechanical signals. They further use mechanical information for regulating a wide range of cellular functions, including adhesion, migration, proliferation, as well as differentiation and apoptosis. Even though it is well understood that mechanical signals play a crucial role in directing cell fate, surprisingly little is known about the range of forces that define cell-ECM interactions at the molecular level.
Recently, synthetic molecular force sensor (MFS) designs have been established for measuring the molecular forces acting at the cell-ECM interface. MFSs detect the traction forces generated by cells and convert this mechanical input into an optical readout. They are composed of calibrated mechanoresponsive building blocks and are usually equipped with a fluorescence reporter system. To date, many different MFS designs have been introduced and successfully used for measuring forces involved in the adhesion of mammalian cells. These MFSs utilize different molecular building blocks, such as double-stranded deoxyribonucleic acid (dsDNA) molecules, DNA hairpins and synthetic polymers like polyethylene glycol (PEG). However, these currently available MFS designs lack ECM-mimicking properties.
In this work, I introduce a new MFS building block for cell biology applications, derived from the natural ECM. It combines mechanical tunability with the ability to mimic the native cellular microenvironment. Inspired by structural ECM proteins with load bearing function, this new MFS design utilizes coiled coil (CC)-forming peptides. CCs are involved in structural and mechanical tasks in the cellular microenvironment and many of the key protein components of the cytoskeleton and the ECM contain CC structures. The well-known folding motif of CC structures, an easy synthesis via solid phase methods and the many roles CCs play in biological processes have inspired studies to use CCs as tunable model systems for protein design and assembly. All these properties make CCs ideal candidates as building blocks for MFSs. In this work, a series of heterodimeric CCs were designed, characterized and further used as molecular building blocks for establishing a novel, next-generation MFS prototype.
A mechanistic molecular understanding of their structural response to mechanical load is essential for revealing the sequence-structure-mechanics relationships of CCs. Here, synthetic heterodimeric CCs of different length were loaded in shear geometry and their mechanical response was investigated using a combination of atomic force microscope (AFM)-based single-molecule force spectroscopy (SMFS) and steered molecular dynamics (SMD) simulations. SMFS showed that the rupture forces of short heterodimeric CCs (3-5 heptads) lie in the range of 20-50 pN, depending on CC length, pulling geometry and the applied loading rate (dF/dt). Upon shearing, an initial rise in the force, followed by a force plateau and ultimately strand separation was observed in SMD simulations. A detailed structural analysis revealed that CC response to shear load depends on the loading rate and involves helix uncoiling, uncoiling-assisted sliding in the direction of the applied force and uncoiling-assisted dissociation perpendicular to the force axis.
The application potential of these mechanically characterized CCs as building blocks for MFSs has been tested in 2D cell culture applications with the goal of determining the threshold force for cell adhesion. Fully calibrated, 4- to 5-heptad long CC motifs (CC-A4B4 and CC-A5B5) were used for functionalizing glass surfaces with MFSs. 3T3 fibroblasts and endothelial cells carrying mutations in a signaling pathway linked to cell adhesion and mechanotransduction processes were used as model systems for time-dependent adhesion experiments. A5B5-MFS efficiently supported cell attachment to the functionalized surfaces for both cell types, while A4B4-MFS failed to maintain attachment of 3T3 fibroblasts after the first 2 hours of initial cell adhesion. This difference in cell adhesion behavior demonstrates that the magnitude of cell-ECM forces varies depending on the cell type and further supports the application potential of CCs as mechanoresponsive and tunable molecular building blocks for the development of next-generation protein-based MFSs. This novel CC-based MFS design is expected to provide a powerful new tool for observing cellular mechanosensing processes at the molecular level and to deliver new insights into the mechanisms and forces involved. This MFS design, utilizing mechanically tunable CC building blocks, will not only allow for measuring the molecular forces acting at the cell-ECM interface, but also yield a new platform for the development of mechanically controlled materials for a large number of biological and medical applications.
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm makes it possible to calculate an objective difficulty measure and to collect subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories, compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no between-group difference in the correlation of the measures with arousal ratings could be found. Results were replicated in a high-autistic-traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
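To illustrate how such kinematic measures might be derived from face tracking data, the following is a hedged sketch, not the thesis's actual pipeline: the landmark layout, frame rate and function names are assumptions. Distance to the neutral face is a static measure; speed and acceleration magnitude are obtained by finite differences between frames.

```python
import math

def kinematic_measures(frames, neutral, dt=1 / 30):
    """Compute distance-to-neutral, speed and acceleration magnitude
    from a sequence of 2D face-landmark frames.

    frames:  list of frames; each frame is a list of (x, y) landmark tuples
    neutral: the neutral-face frame (same landmark layout)
    dt:      time between frames in seconds (30 fps assumed)
    """
    def frame_distance(a, b):
        # mean Euclidean distance over corresponding landmarks
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    # static measure: distance of every frame to the neutral face
    distances = [frame_distance(f, neutral) for f in frames]
    # dynamic measures: finite differences between consecutive frames
    speeds = [frame_distance(frames[i + 1], frames[i]) / dt
              for i in range(len(frames) - 1)]
    accelerations = [abs(speeds[i + 1] - speeds[i]) / dt
                     for i in range(len(speeds) - 1)]
    return distances, speeds, accelerations
```

In this toy form the static and dynamic measures come out of one pass over the landmark series; a real pipeline would add normalization for head pose and face size.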
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-size-fits-all solution. In the third project we introduce partial least squares analysis as a general method for finding an optimal representation relating two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was applied to emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women's emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way women and men perceive some facial expressions, which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. To facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
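The core of partial least squares can be sketched as follows: the first pair of weight vectors maximizing the covariance between projections of two data blocks are the leading singular vectors of the cross-covariance matrix X^T Y, found here by power iteration. This is a minimal illustration, not the thesis's R package; the data, dimensions and function names are invented for the example.

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def pls_first_component(X, Y, iters=50):
    """First PLS component of column-centred blocks X (n x p) and Y (n x q):
    weight vectors w, c maximising cov(Xw, Yc), i.e. the leading singular
    vectors of C = X^T Y, computed by power iteration."""
    p, q = len(X[0]), len(Y[0])
    # cross-covariance matrix C = X^T Y
    C = [[sum(rx[i] * ry[j] for rx, ry in zip(X, Y)) for j in range(q)]
         for i in range(p)]
    Ct = transpose(C)
    c = normalize([1.0] * q)
    for _ in range(iters):
        w = normalize(matvec(C, c))   # X-block weights
        c = normalize(matvec(Ct, w))  # Y-block weights
    w = normalize(matvec(C, c))
    t = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]  # X scores
    u = [sum(ci * yi for ci, yi in zip(c, row)) for row in Y]  # Y scores
    return w, c, t, u
```

When the two blocks share a latent variable, the paired score vectors t and u are strongly correlated, which is the property the project exploits to compare how much of the facial-expression variance each group's ratings capture.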
This dissertation combines field and geochemical observations and analyses with numerical modeling to understand the formation of vein-hosted Sn-W ore in the Panasqueira deposit of Portugal, which is among the ten largest such deposits worldwide. The deposit is located above a granite body that is altered by magmatic-hydrothermal fluids in its upper part (greisen). These fluids are thought to be the source of the metals, but this has remained under debate. The goal of this study is to determine the composition and temperature of the hydrothermal fluids at Panasqueira, and with that information to construct a numerical model of the hydrothermal system. The focus is on analysis of the minerals tourmaline and white mica, which formed during mineralization and are widespread throughout the deposit. Tourmaline occurs mainly in alteration zones around mineralized veins and is less abundant in the vein margins. White mica is more widespread: it is abundant in vein margins as well as alteration zones, and also occurs in the granite greisen. The laboratory work involved in-situ microanalysis of major and trace elements in tourmaline and white mica, and boron-isotope analysis of both minerals by secondary ion mass spectrometry (SIMS).
The boron-isotope composition of tourmaline and white mica suggests a magmatic source. Comparison of hydrothermally-altered and unaltered rocks from drill cores shows that the ore metals (W, Sn, Cu, and Zn) and As, F, Li, Rb, and Cs were introduced during the alteration. Most of these elements are also enriched in tourmaline and mica, which confirms their potential value as exploration guides to Sn-W ores elsewhere.
The thermal evolution of the hydrothermal system was estimated by B-isotope exchange thermometry and the Ti-in-quartz method. Both methods yielded similar temperatures for the early hydrothermal phase: 430-460 °C for B-isotopes and 503 ± 24 °C for Ti-in-quartz. Mineral pairs from a late fault zone yield significantly lower median temperatures of 250 °C. The thermometry results, combined with the variations in chemical and B-isotope composition of tourmaline and mica, suggest that a similar magmatic-hydrothermal fluid was active at all stages of mineralization. Mineralization in the late stage shows the same B-isotope composition as in the main stage despite ca. 250 °C of cooling, which supports a model of multiple injections of magmatic-hydrothermal fluids.
Two-dimensional numerical simulations of convection in a multiphase NaCl hydrothermal system were conducted (a) to test a new approach (lower-dimensional elements) for flow through fractures and faults and (b) to identify conditions for horizontal fluid flow as observed in the flat-lying veins at Panasqueira. The results show that fluid flow over an intrusion (heat and fluid source) develops a horizontal component if there is sufficient fracture connectivity. Late, steep fault zones have been identified in the deposit area, which locally contain low-temperature Zn-Pb mineralization. The model results confirm that the presence of subvertical faults with enhanced permeability plays a crucial role in the ascent of magmatic fluids to the surface and the recharge of meteoric waters. Finally, our model results suggest that recharge of meteoric fluids and mixing processes may be important at later stages, while flow of magmatic fluids dominates the early stages of the hydrothermal fluid circulation.
Magmatic-hydrothermal fluids are responsible for numerous mineralization types, including porphyry copper and granite-related tin-tungsten (Sn-W) deposits. Ore formation depends on various factors, including the pressure and temperature regime of the intrusions, the chemical composition of the magma and hydrothermal fluids, and fluid-rock interaction during the ascent. Fluid inclusions have the potential to provide direct information on the temperature, salinity, pressure and chemical composition of the fluids responsible for ore formation. Numerical modeling allows the parametrization of pluton features that cannot be analyzed directly via geological observations.
Microthermometry of fluid inclusions from the Zinnwald Sn-W deposit, Erzgebirge, Germany/Czech Republic, provides evidence that the greisen mineralization is associated with a low-salinity (2-10 wt.% NaCl eq.) fluid with homogenization temperatures between 350 °C and 400 °C. Quartzes from numerous veins host inclusions with the same temperatures and salinities, whereas cassiterite- and wolframite-hosted assemblages show slightly lower temperatures (around 350 °C) and higher salinities (ca. 15 wt.% NaCl eq.). Furthermore, rare quartz samples contained boiling assemblages consisting of coexisting brine and vapor phases. The formation of ore minerals within the greisen is driven by invasive fluid-rock interaction, in which the loss of complexing agents (Cl-) leads to the precipitation of cassiterite. The fluid inclusion record in the veins suggests boiling as the main cause of cassiterite and wolframite mineralization. Ore and coexisting gangue minerals hosted different types of fluid inclusions, and the onset of boiling is preserved solely by the ore minerals, emphasizing the importance of microthermometry in ore minerals. Furthermore, the study indicates that boiling as a precipitation mechanism can only occur in mineralization related to shallow intrusions, whereas deeper plutons prevent the fluid from boiling and can therefore form tungsten mineralization in the distal regions.
The tin mineralization in the Hämmerlein deposit, Erzgebirge, Germany, occurs within a skarn horizon and the underlying schist. Cassiterite within the skarn contains highly saline (30-50 wt.% NaCl eq.) fluid inclusions with homogenization temperatures up to 500 °C, whereas cassiterites from the schist and additional greisen samples contain inclusions of lower salinity (~5 wt.% NaCl eq.) and temperature (between 350 and 400 °C). Inclusions in the gangue minerals (quartz, fluorite) preserve homogenization temperatures below 350 °C, and sphalerite shows the lowest homogenization temperatures (ca. 200 °C), whereby all of these minerals (cassiterite from schist and greisen, gangue minerals and sphalerite) show similar salinity ranges (2-5 wt.% NaCl eq.). Similar trace element contents and linear trends in the chemistry of the inclusions suggest a common source fluid. The inclusion record in the Hämmerlein deposit documents an early exsolution of hot brines from the underlying granite, which is responsible for the mineralization hosted by the skarn. Cassiterites in schist and greisen formed mainly through fluid-rock interaction at lower temperatures. The low-temperature inclusions documented in the sphalerite mineralization, as well as their generally low trace element contents in comparison to the other minerals, suggest that their formation was induced by mixing with meteoric fluids.
Numerical simulations of magma chambers and the overlying copper distribution document the importance of incremental growth by sills. We analyzed the cooling behavior at variable injection intervals and sill thicknesses. The models suggest that magma accumulation requires volumetric injection rates of at least 4 × 10⁻⁴ km³/yr. Such injection rates are also needed to form a stable magmatic-hydrothermal fluid plume above the magma chamber, ensuring constant copper precipitation and enrichment within a confined location, in order to form high-grade ore shells within the narrow geological timeframe of 50 to 100 kyr suggested for porphyry copper deposits. The highest copper enrichment is found in regions with steep temperature gradients, typically where the magmatic-hydrothermal fluid meets the cooler ambient fluids.
Prussian inheritance law in the jurisprudence of the Berlin Obertribunal in the years 1836 to 1865
(2019)
This dissertation examines the Allgemeines Preußisches Landrecht (General Prussian Land Law) of 1794 and the case law of the Berlin Obertribunal relating to it. The study focuses on the Landrecht's provisions on inheritance law and on their application and interpretation in the jurisprudence of the highest Prussian court. The research subject arises from the particular understanding of statutes codified in the Landrecht: statutory interpretation by the courts was to be reduced to a minimum, namely interpretation based solely on the wording of a provision, in order to give adequate effect to the absolutist claim to power of the Prussian monarchs, notably Frederick the Great. Against this background, the study asks to what extent the Prussian Obertribunal observed the "prohibition of interpretation" laid down in the Landrecht, in which cases the court emancipated itself from this requirement and applied further methods of interpretation, and how an independent jurisprudence was thereby able to develop.
The thesis is divided into three main parts. Following the introduction, which outlines the legal-historical significance of the Landrecht and of inheritance law as well as the subject of the study, the history of the Landrecht and of the Berlin Obertribunal is presented.
A third section then analyses the Landrecht's provisions on inheritance law. It traces the history of the various institutions of inheritance law, such as intestate and testamentary succession and the compulsory portion, taking into account the contemporary scholarly discourse.
The fourth section deals with the decisions of the Berlin Obertribunal from the years 1836-1865 in which the inheritance-law provisions described above were decisive for the outcome. Here the research question is pursued in concrete terms: to what extent did the Obertribunal observe the prohibition of interpretation laid down in the Landrecht, and in which cases did it deviate from it or apply further methods of interpretation? In total, 26 decisions of the Obertribunal are analysed and evaluated with regard to interpretive practice, continuity, and the acceleration of adjudication.
For half a century, cytometry has been a major scientific discipline in the field of cytomics - the study of systems biology at the single-cell level. It enables the investigation of physiological processes, functional characteristics and rare events by analysing multiple protein parameters on an individual cell basis. In the last decade, mass cytometry has been established, increasing the number of proteins measured in parallel to up to 50. This has shifted the analysis strategy from conventional consecutive manual gates towards multi-dimensional data processing. Novel algorithms have been developed to tackle the high-dimensional protein combinations in the data. They are mainly based on clustering or non-linear dimension reduction techniques, or both, often combined with an upstream downsampling procedure. However, these tools face obstacles in comprehensible interpretability, reproducibility, computational complexity or comparability between samples and groups.
To address this bottleneck, a reproducible, semi-automated cytometric data mining workflow called PRI (pattern recognition of immune cells) is proposed, which combines three main steps: i) data preparation and storage; ii) bin-based combinatorial variable engineering of three protein markers, the so-called triploTs, and subsequent sectioning of these triploTs into four parts; and iii) deployment of a data-driven supervised learning algorithm, cross-validated elastic-net regularized logistic regression, with the triploT sections as input variables. The variables selected by the models are ranked by their prevalence and potentially have discriminative value. The purpose is to significantly facilitate the identification of meaningful subpopulations that best distinguish between two groups. The proposed workflow PRI is exemplified on a recently published public mass cytometry data set. The original authors found a T cell subpopulation that discriminates between effective and ineffective treatment of breast carcinomas in mice. With PRI, that subpopulation was not only validated, but was further narrowed down to a particular Th1 cell population. Moreover, additional insights into combinatorial protein expression are revealed in a traceable manner. An essential element of the workflow is the reproducible variable engineering. These variables serve as the basis for a clearly interpretable visualization, for structured variable exploration and as input layers in neural network constructs.
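The modeling step iii) can be sketched, in simplified form and without the cross-validation loop, as an elastic-net-penalized logistic regression fitted by proximal gradient descent. This is a generic illustration, not the PRI implementation; the features stand in for (standardized) triploT section variables, and all data and names are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_elastic_net_logreg(X, y, l1=0.01, l2=0.01, lr=0.1, iters=2000):
    """Logistic regression with an elastic-net penalty, fitted by proximal
    gradient descent: a gradient step on logistic loss + L2, followed by
    soft-thresholding for the L1 part (which zeroes out weak variables)."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(iters):
        # residuals of the logistic model on the current parameters
        errs = [sigmoid(b + sum(wi * xi for wi, xi in zip(w, row))) - yi
                for row, yi in zip(X, y)]
        b -= lr * sum(errs) / n
        for j in range(p):
            grad = sum(e * row[j] for e, row in zip(errs, X)) / n + l2 * w[j]
            wj = w[j] - lr * grad
            # proximal (soft-thresholding) step for the L1 penalty
            w[j] = math.copysign(max(abs(wj) - lr * l1, 0.0), wj)
    return w, b

def predict(w, b, row):
    return 1 if sigmoid(b + sum(wi * xi for wi, xi in zip(w, row))) >= 0.5 else 0
```

The L1 part of the penalty is what produces the sparse variable selection that PRI ranks by prevalence across cross-validation folds; the L2 part stabilizes correlated section variables.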
PRI facilitates the determination of marker levels in a semi-continuous manner. Jointly with the combinatorial display, it allows a straightforward observation of correlating patterns, and thus, the dominant expressed markers and cell hierarchies. Furthermore, it enables the identification and complex characterization of discriminating subpopulations due to its reproducible and pseudo-multi-parametric pattern presentation. This endorses its applicability as a tool for unbiased investigations on cell subsets within multi-dimensional cytometric data sets.
Medical imaging plays an important role in disease diagnosis, treatment planning, and clinical monitoring. One of the major challenges in medical image analysis is imbalanced training data, in which the class of interest is much rarer than the other classes. Canonical machine learning algorithms assume that the numbers of samples from the different classes in the training dataset are roughly similar, i.e. balanced. Training a machine learning model on an imbalanced dataset therefore introduces unique challenges to the learning problem.
A model learned from imbalanced training data is biased towards the high-frequency samples. The predictions of such networks have low sensitivity and high precision. In medical applications, the cost of misclassifying the minority class can exceed the cost of misclassifying the majority class. For example, the risk of not detecting a tumor can be much higher than that of referring a healthy subject to a doctor. This Ph.D. thesis introduces several deep learning-based approaches for handling class imbalance in multi-task learning problems such as disease classification and semantic segmentation.
At the data level, the objective is to balance the data distribution by re-sampling the data space: we propose novel approaches to correct the internal bias against low-frequency samples. These approaches include patient-wise batch sampling, complementary labels, and supervised and unsupervised minority oversampling using generative adversarial networks.
At the algorithm level, on the other hand, we modify the learning algorithm to alleviate the bias towards the majority classes. In this regard, we propose different generative adversarial networks for cost-sensitive learning, ensemble learning, and mutual learning to deal with highly imbalanced imaging data.
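The basic algorithm-level idea of re-weighting the loss against the majority class can be illustrated with a class-weighted cross-entropy. This is a generic sketch of cost-sensitive learning, not the thesis's GAN-based methods; the inverse-frequency weighting scheme and all names are assumptions for the example.

```python
import math
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    normalised so the average weight over all samples is 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

def weighted_cross_entropy(true_class_probs, labels, weights):
    """Mean class-weighted cross-entropy; true_class_probs[i] is the
    model's predicted probability for the true class of sample i."""
    loss = sum(weights[c] * -math.log(p)
               for p, c in zip(true_class_probs, labels))
    return loss / len(labels)
```

With a 9:1 class ratio the minority class receives a nine times larger weight, so a missed tumor (minority error) contributes far more to the loss than a false referral, which is exactly the asymmetry of misclassification costs described above.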
We show evidence that the proposed approaches are applicable to different types and sizes of medical images across routine clinical tasks such as disease classification and semantic segmentation. Our implemented algorithms have shown outstanding results in several medical imaging challenges.
Light-switchable proteins are being used increasingly to understand and manipulate complex molecular systems. The success of this approach has fueled the development of tailored photo-switchable proteins, to enable targeted molecular events to be studied using light. The development of novel photo-switchable tools has to date largely relied on rational design. Complementing this approach with directed evolution would be expected to facilitate these efforts. Directed evolution, however, has been relatively infrequently used to develop photo-switchable proteins due to the challenge presented by high-throughput evaluation of switchable protein activity. This thesis describes the development of two genetic circuits that can be used to evaluate libraries of switchable proteins, enabling optimization of both the on- and off-states. A screening system is described, which permits detection of DNA-binding activity based on conditional expression of a fluorescent protein. In addition, a tunable selection system is presented, which allows for the targeted selection of protein-protein interactions of a desired affinity range. This thesis additionally describes the development and characterization of a synthetic protein that was designed to investigate chromophore reconstitution in photoactive yellow protein (PYP), a promising scaffold for engineering photo-controlled protein tools.
Earthquake swarms are characterized by large numbers of events occurring within a short period of time in a confined source volume, without the significant mainshock-aftershock pattern of tectonic sequences. Intraplate swarms in the absence of active volcanism usually occur in continental rifts, for example in the Eger Rift zone in North West Bohemia, Czech Republic. A common hypothesis links event triggering to pressurized fluids. However, the exact causal chain is often poorly understood, since the underlying geotectonic processes are slow compared to tectonic sequences. The high event rate during active periods challenges standard seismological routines, as these are often designed for single events and are therefore costly in terms of human resources when working with phase picks, or computationally costly when exploiting full waveforms.
This methodological thesis develops new approaches to analyze earthquake swarm seismicity as well as the underlying seismogenic volume. It focuses on the region of North West (NW) Bohemia, a well-studied, well-monitored earthquake swarm region.
In this work I develop and test an innovative approach to detecting and locating earthquakes using deep convolutional neural networks. This technology offers great potential, as it makes it possible to process large amounts of data efficiently, which becomes increasingly important given that seismological data storage grows at an increasing pace. The proposed deep neural network, trained on NW Bohemian earthquake swarm records, is able to locate 1000 events in less than 1 second using full waveforms, while approaching the precision of double-difference relocated catalogs. A further technological novelty is that the trained filters of the deep neural network's first layer can be repurposed as a pattern-matching event detector without additional training on noise datasets. For further methodological development and benchmarking, I present a new toolbox that generates realistic earthquake cluster catalogs as well as synthetic full waveforms of those clusters in an automated fashion. The input is parameterized using constraints on source volume geometry, nucleation and frequency-magnitude relations. The toolbox harnesses recorded noise to produce highly realistic synthetic data for benchmarking and development. It is used to study and assess detection performance in terms of the magnitude of completeness Mc of a full waveform detector applied to synthetic data of a hydrofracturing experiment at the Wysin site, Poland.
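The frequency-magnitude part of such a synthetic catalog generator can be sketched by inverse-transform sampling from a Gutenberg-Richter distribution, with the recovered b-value checked by the Aki (1965) maximum-likelihood estimator. This is a simplified illustration of the principle; the actual toolbox's interface and parameterization are not reproduced here.

```python
import math
import random

def sample_gr_magnitudes(n, b=1.0, m_min=0.0, seed=42):
    """Inverse-transform sampling from the Gutenberg-Richter law,
    P(M > m) = 10 ** (-b * (m - m_min))  for m >= m_min."""
    rng = random.Random(seed)
    return [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]

def aki_b_value(mags, m_min=0.0):
    """Aki maximum-likelihood estimate of the b-value from a catalog
    complete above m_min."""
    mean_excess = sum(m - m_min for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess
```

Comparing the estimated b-value against the input value is one way such a toolbox can validate that a detector's recovered catalog is unbiased down to its magnitude of completeness Mc.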
Finally, I present and demonstrate a novel approach that overcomes the masking effects of wave propagation between earthquake and station and determines attenuation directly in the source volume where clustered earthquakes occur. The new event couple spectral ratio approach exploits the high-frequency spectral slopes of two events sharing the greater part of their ray paths. Synthetic tests based on the toolbox mentioned above show that this method is able to infer seismic wave attenuation within the source volume at high spatial resolution. Furthermore, it is independent of the distance to a station as well as of the complexity of the attenuation and velocity structure outside the source volume of the swarms. The application to recordings of the NW Bohemian earthquake swarms shows increased P phase attenuation within the source volume (Qp < 100), based on results at a station located close to the village of Luby (LBC). The recordings of a station in epicentral proximity, close to Nový Kostel (NKC), show a relatively high complexity, indicating that waves arriving at that station experience more scattering than signals recorded at other stations. This high level of complexity destabilizes the inversion; the Q estimate at NKC is therefore not reliable, and an independent confirmation of the high-attenuation finding, given the geometrical and frequency constraints, is still outstanding. However, high attenuation in the source volume of the NW Bohemian swarms has been postulated before, in relation to an expected, highly damaged zone bearing CO2 at high pressure.
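In essence, a spectral ratio method of this kind fits the slope of the log spectral ratio of two events with shared paths: because the common path effects cancel, ln[A1(f)/A2(f)] is approximately const - pi * f * dt*, where dt* is the differential attenuation between the two events. The following is a simplified numerical sketch of that fitting step (synthetic spectra, hypothetical values), not the thesis's full event couple inversion.

```python
import math

def delta_t_star(freqs, spec1, spec2):
    """Least-squares slope of ln(spec1/spec2) versus frequency; under the
    shared-path assumption the slope equals -pi * dt*, the differential
    attenuation (t*_1 - t*_2) between the two events."""
    y = [math.log(a / b) for a, b in zip(spec1, spec2)]
    n = len(freqs)
    fm = sum(freqs) / n
    ym = sum(y) / n
    slope = (sum((f - fm) * (v - ym) for f, v in zip(freqs, y))
             / sum((f - fm) ** 2 for f in freqs))
    return -slope / math.pi

# synthetic example: two sources with known, hypothetical t* values
freqs = [10 + 2 * i for i in range(30)]                        # 10-68 Hz band
spec1 = [5.0 * math.exp(-math.pi * f * 0.010) for f in freqs]  # t* = 10 ms
spec2 = [3.0 * math.exp(-math.pi * f * 0.030) for f in freqs]  # t* = 30 ms
dts = delta_t_star(freqs, spec1, spec2)
```

Applied to many event couples in a cluster, such differential t* estimates can be combined with hypocentral distances within the swarm to resolve Q inside the source volume, independently of the structure along the common path.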
The methods developed in the course of this thesis have the potential to improve our understanding of the role of fluids and gases in intraplate event clustering.
The contestability and determinability of maternity de lege lata and de lege ferenda
(2019)
The traditional principle that a child descends from the woman who gave birth to it has been shaken by modern reproductive medicine. Nevertheless, § 1591 BGB assigns the child to the birth mother without any possibility of challenge. Legal and genetic motherhood therefore permanently diverge when a child is born through surrogacy or after egg or embryo donation. The domestic prohibitions on these methods of assisted reproduction do not deter couples who wish to have children from turning to corresponding services abroad. The resulting problems of private international law and constitutional law are the subject of this thesis.
With regard to surrogacy, the thesis examines whether the legislature's aims in excluding challenges to maternity can justify the associated interference with constitutionally protected rights of the genetic mother and the child. Particular attention is paid to the interest, protected by Art. 6(2) sentence 1 GG, of biological parents and children in having a procedural avenue to be legally assigned to one another. Within a comprehensive proportionality analysis, this interest is weighed against the aims of the legislature, which intends the exclusion of challenges to protect the rights of surrogate mothers and children.
In the constellations of egg and embryo donation, the focus shifts to the child's right to know its own parentage and to the question of whether this gives rise to an obligation of the legislature to extend § 1598a BGB so that, in the field of assisted reproduction, the presumed genetic parents are included among those obliged to assist in clarifying parentage.
In addition to these focal points, numerous further problems are addressed. The thesis concludes with a proposal for the legislature.
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view that distinguishes only between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal, and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first we analyse the concept of the degree of formality (including a discussion of the respective benefits of each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students learn how to prove. Third, we show how the advantages of proofs of different degrees of formality can be combined through the development of so-called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
With the growth of information technology, patient attitudes are shifting away from passively receiving care towards actively taking responsibility for their well-being. Handling doctor-patient relationships collaboratively and providing patients access to their health information are crucial steps in empowering patients. In mental healthcare, the implicit consensus among practitioners has been that sharing medical records with patients may have an unpredictable, harmful impact on clinical practice. In order to involve patients more actively in mental healthcare processes, Tele-Board MED (TBM) allows for digital collaborative documentation in therapist-patient sessions. The TBM software system offers a whiteboard-inspired graphical user interface that allows therapist and patient to jointly take notes during the treatment session. Furthermore, it provides features to automatically reuse the digital treatment session notes for the creation of treatment session summaries and clinical case reports. This thesis presents the development of the TBM system and evaluates its effects on 1) the fulfillment of the therapist's duties of clinical case documentation, 2) patient engagement in care processes, and 3) the therapist-patient relationship. Following the design research methodology, TBM was developed and tested in multiple evaluation studies in the domains of cognitive behavioral psychotherapy and addiction care. The results show that therapists are likely to use TBM with patients if they have a technology-friendly attitude and when its use suits the treatment context. Support in carrying out documentation duties as well as in fulfilling legal requirements contributes to therapist acceptance. Furthermore, therapists value TBM as a tool that provides a discussion framework and quick access to worksheets during treatment sessions. Therapists express skepticism, however, regarding technology use in patient sessions and towards complete record transparency in general.
Patients expect TBM to improve communication with their therapist and to offer better recall of discussed topics when they take a copy of their notes home after the session. Patients are doubtful regarding possible distraction of the therapist and regarding use in situations where relationship-building is crucial. When applied in a clinical environment, collaborative note-taking with TBM encourages patient engagement and a team feeling between therapist and patient. Furthermore, it increases the patient's acceptance of their diagnosis, which in turn is an important predictor of therapy success. In summary, TBM has high potential not only to deliver documentation support and record transparency for patients, but also to contribute to a collaborative doctor-patient relationship. This thesis provides design implications for the development of digital collaborative documentation systems in (mental) healthcare as well as recommendations for successful implementation in clinical practice.
One method of embedding groups into skew fields was introduced by A. I. Mal'tsev and B. H. Neumann (cf. [18, 19]). If G is an ordered group and F is a skew field, the set F((G)) of formal power series over F in G with well-ordered support forms a skew field into which the group ring F[G] can be embedded. Unfortunately, it is not sufficient that G is left-ordered, since F((G)) is then only an F-vector space, as there is no natural way to define a multiplication on F((G)). One way to extend the original idea to left-ordered groups is to examine the endomorphism ring of F((G)), as explored by N. I. Dubrovin (cf. [5, 6]). It is possible to embed any crossed product ring F[G; η, σ] into the endomorphism ring of F((G)) such that each non-zero element of F[G; η, σ] defines an automorphism of F((G)) (cf. [5, 10]). Thus, the rational closure of F[G; η, σ] in the endomorphism ring of F((G)), which we will call the Dubrovin ring of F[G; η, σ], is a potential candidate for a skew field of fractions of F[G; η, σ]. The methods of N. I. Dubrovin made it possible to show that specific classes of groups can be embedded into a skew field. For example, N. I. Dubrovin devised special criteria applicable to the universal covering group of SL(2, R). These methods have also been explored by J. Gräter and R. P. Sperner (cf. [10]) as well as by N. H. Halimi and T. Ito (cf. [11]). Furthermore, it is of interest to know whether skew fields of fractions are unique. For example, left and right Ore domains have unique skew fields of fractions (cf. [2]). This is not true in general: for example, the free group with 2 generators can be embedded into non-isomorphic skew fields of fractions (cf. [12]). It seems likely that Ore domains are the most general case for which unique skew fields of fractions exist. One approach to gaining uniqueness is to restrict the search to skew fields of fractions with additional properties. I.
Hughes has defined skew fields of fractions of crossed product rings F[G; η, σ] with locally indicable G which fulfill a special condition. These are called Hughes-free skew fields of fractions, and I. Hughes has proven that they are unique if they exist [13, 14]. This thesis connects the ideas of N. I. Dubrovin and I. Hughes. The first chapter contains the basic terminology and concepts used in this thesis. We present methods provided by N. I. Dubrovin, such as the complexity of elements in rational closures and special properties of endomorphisms of the vector space of formal power series F((G)). To combine the ideas of N. I. Dubrovin and I. Hughes, we introduce Conradian left-ordered groups of maximal rank and examine their connection to locally indicable groups. Furthermore, we provide notation for crossed product rings, skew fields of fractions and Dubrovin rings, and prove some technical statements which are used in later parts. The second chapter focuses on Hughes-free skew fields of fractions and their connection to Dubrovin rings. For that purpose we introduce series representations to interpret elements of Hughes-free skew fields of fractions as skew formal Laurent series. This allows us to prove that, for Conradian left-ordered groups G of maximal rank, the statement "F[G; η, σ] has a Hughes-free skew field of fractions" implies "the Dubrovin ring of F[G; η, σ] is a skew field". We also prove the converse and apply the results to give a new proof of Theorem 1 in [13]. Furthermore, we show how to extend injective ring homomorphisms of certain crossed product rings to their Hughes-free skew fields of fractions. Finally, we answer the open question of whether Hughes-free skew fields are strongly Hughes-free (cf. [17, page 53]).
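For context, the multiplication underlying the classical Mal'tsev–Neumann construction can be written out explicitly (a standard formulation, not specific to this thesis, and stated here for the untwisted case F[G] rather than a crossed product): for a = Σ_g a_g g and b = Σ_h b_h h in F((G)) with well-ordered supports, the product is

```latex
\[
  ab \;=\; \sum_{x \in G} \Bigl( \sum_{\substack{g,h \in G \\ gh = x}} a_g\, b_h \Bigr) x .
\]
```

Each inner sum is finite and the support of ab is again well-ordered precisely because the order on G is two-sided; for a merely left-ordered G this argument breaks down, which is why F((G)) then carries only the structure of an F-vector space.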
Geomagnetic paleosecular variations (PSVs) are an expression of geodynamo processes inside the Earth's liquid outer core. These paleomagnetic time series provide insights into the properties of the Earth's magnetic field, from normal behavior with a dominant dipolar geometry, through field crises such as pronounced intensity lows and geomagnetic excursions with a distorted field geometry, to complete reversals of the dominant dipole contribution. In particular, long-term high-resolution and high-quality PSV time series are needed for properly reconstructing the higher-frequency components in the spectrum of geomagnetic field variations and for a better understanding of the smoothing effects that occur when such paleomagnetic records are recorded by sedimentary archives.
In this doctoral study, full-vector paleomagnetic records were derived from 16 sediment cores recovered from the southeastern Black Sea. Age models are based on radiocarbon dating and on correlating warming/cooling cycles, monitored by high-resolution X-ray fluorescence (XRF) elemental ratios as well as ice-rafted debris (IRD) in Black Sea sediments, with the sequence of 'Dansgaard-Oeschger' (DO) events defined from Greenland ice-core oxygen isotope stratigraphy.
In order to identify the carriers of magnetization in Black Sea sediments, core MSM33-55-1 recovered from the southeastern Black Sea was subjected to detailed rock magnetic and electron microscopy investigations. The younger part of core MSM33-55-1 has been deposited continuously since 41 ka. Before 17.5 ka, the magnetic mineral assemblage was dominated by a mixture of greigite (Fe3S4) and titanomagnetite (Fe3-xTixO4) in samples with SIRM/κLF >10 kAm-1, or exclusively by titanomagnetite in samples with SIRM/κLF ≤10 kAm-1. Greigite is generally present as crustal aggregates in locally reducing micro-environments. From 17.5 ka to 8.3 ka, the dominant magnetic mineral changed from greigite (17.5 to ~10.0 ka) to probably silicate-hosted titanomagnetite (~10.0 to 8.3 ka). After 8.3 ka, the anoxic Black Sea was a favorable environment for the formation of non-magnetic pyrite (FeS2) framboids.
To avoid compromising the paleomagnetic data with erroneous directions carried by greigite, data from samples with SIRM/κLF >10 kAm-1, shown by various methods to contain greigite, were removed from the obtained records. Consequently, full-vector paleomagnetic records, comprising directional data and relative paleointensity (rPI), were derived only from samples with SIRM/κLF ≤10 kAm-1 from 16 Black Sea sediment cores. The obtained data sets were used to create a stack covering the time window between 68.9 and 14.5 ka with a temporal resolution between 40 and 100 years, depending on sedimentation rates.
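The screening step described above is a simple threshold test on the SIRM/κLF ratio. A minimal sketch of such a filter (the 10 kA/m threshold is the one stated above; the data layout and key names are hypothetical, not taken from the thesis):

```python
# Screen out greigite-bearing samples before stacking, using the
# SIRM/kappa_LF > 10 kA/m criterion described above.
GREIGITE_THRESHOLD = 10.0  # kA/m

def screen_samples(samples):
    """Keep only samples whose SIRM/kappa_LF ratio is at or below the
    greigite threshold; 'sirm_over_klf' is a hypothetical key name."""
    return [s for s in samples if s["sirm_over_klf"] <= GREIGITE_THRESHOLD]

# Illustrative records, not real measurements:
samples = [
    {"core": "MSM33-55-1", "depth_m": 1.2, "sirm_over_klf": 4.5},   # titanomagnetite-dominated
    {"core": "MSM33-55-1", "depth_m": 3.8, "sirm_over_klf": 23.0},  # greigite-bearing, excluded
]
kept = screen_samples(samples)
```

Only the samples passing this test contribute directions and rPI to the stack.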
According to the obtained results from Black Sea sediments, the second deepest minimum in relative paleointensity of the past 69 ka occurred at 64.5 ka. The field minimum during MIS 4 is associated with large declination swings beginning about 3 ka before the minimum. While a swing to 50°E is associated with steep inclinations (50-60°) at the coring site (42°N), the subsequent declination swing to 30°W is associated with shallow inclinations of down to 40°. Nevertheless, these large deviations from the direction of a geocentric axial dipole field (I=61°, D=0°) cannot be termed 'excursional', since the latitudes of the corresponding VGPs only reach down to 51.5°N (120°E) and 61.5°N (75°W), respectively. However, these VGP positions on opposite sides of the globe are linked by VGP drift rates of up to 0.2° per year. These extreme secular variations might be the mid-latitude expression of the Norwegian–Greenland Sea excursion found at several sites much farther north, in Arctic marine sediments between 69°N and 81°N.
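The VGP latitudes quoted above follow from the standard dipole conversion of a site direction (declination D, inclination I) into a virtual geomagnetic pole. A minimal sketch of that conversion (textbook paleomagnetic formulas, not the thesis code; the site longitude below is illustrative, only the 42°N latitude is given above):

```python
import math

def vgp(site_lat, site_lon, dec, inc):
    """Convert a paleomagnetic direction (dec, inc, in degrees) observed at a
    site into a virtual geomagnetic pole, assuming a dipole field.
    Returns (pole latitude, pole longitude) in degrees."""
    slat, slon = math.radians(site_lat), math.radians(site_lon)
    d, i = math.radians(dec), math.radians(inc)
    # magnetic colatitude p from the dipole relation tan(I) = 2/tan(p)
    p = math.atan2(2.0, math.tan(i))
    plat = math.asin(math.sin(slat) * math.cos(p)
                     + math.cos(slat) * math.sin(p) * math.cos(d))
    beta = math.asin(math.sin(p) * math.sin(d) / math.cos(plat))
    # choose the correct longitude branch
    if math.cos(p) >= math.sin(slat) * math.sin(plat):
        plon = slon + beta
    else:
        plon = slon + math.pi - beta
    return math.degrees(plat), (math.degrees(plon) + 360.0) % 360.0

# The GAD reference direction quoted above (D = 0, I = 61 degrees at 42N)
# maps, as expected, to a pole at the geographic north pole.
lat, lon = vgp(42.0, 30.0, 0.0, 61.0)
```

Shallower inclinations and east- or westward declination swings pull the computed VGP down to the mid-latitude positions cited above, while the VGP latitude is what decides whether a swing counts as 'excursional'.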
At about 34.5 ka, the Mono Lake excursion is evidenced in the stacked Black Sea PSV record by both an rPI minimum and directional shifts. Associated VGPs from the stacked Black Sea data migrated from Alaska, via central Asia and the Tibetan Plateau, to Greenland, performing a clockwise loop. This agrees with data recorded in the Wilson Creek Formation, USA, and in Arctic sediment core PS2644-5 from the Iceland Sea, suggesting a dominant dipole field. On the other hand, the Auckland lava flows, New Zealand, Summer Lake, USA, and the Arctic sediment core from ODP Site 919 yield distinct VGPs located in the central Pacific Ocean, due to a presumably non-dipole (multipole) field configuration.
A directional anomaly at 18.5 ka, associated with pronounced swings in inclination and declination as well as a low in rPI, is probably contemporaneous with the Hilina Pali excursion, originally reported from Hawaiian lava flows. However, virtual geomagnetic poles (VGPs) calculated from Black Sea sediments are not located at latitudes lower than 60°N, which denotes normal, though pronounced, secular variation. During the postulated Hilina Pali excursion, the VGPs calculated from Black Sea data migrated clockwise only along the coasts of the Arctic Ocean, from NE Canada (20.0 ka), via Alaska (18.6 ka) and NE Siberia (18.0 ka), to Svalbard (17.0 ka), then looped clockwise through the Eastern Arctic Ocean.
In addition to the Mono Lake and the Norwegian–Greenland Sea excursions, the Laschamp excursion was evidenced in the Black Sea PSV record with the lowest paleointensities at about 41.6 ka and a short-term (~500 years) full reversal centered at 41 ka. These excursions are further evidenced by an abnormal PSV index, though only the Laschamp and the Mono Lake excursions exhibit excursional VGP positions. The stacked Black Sea paleomagnetic record was also converted into one component parallel to the direction expected from a geocentric axial dipole (GAD) and two components perpendicular to it, representing only non-GAD components of the geomagnetic field. The Laschamp and the Norwegian–Greenland Sea excursions are characterized by extremely low GAD components, while the Mono Lake excursion is marked by large non-GAD contributions. Notably, negative values of the GAD component, indicating a fully reversed geomagnetic field, are observed only during the Laschamp excursion.
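The GAD/non-GAD decomposition described above amounts to projecting each field vector onto the direction expected from a geocentric axial dipole at the site. A minimal sketch of that projection (my own illustrative implementation in north-east-down coordinates, not the thesis code):

```python
import math

def field_components(dec, inc, intensity, gad_inc):
    """Decompose a field vector (declination, inclination in degrees, plus
    intensity) into the component parallel to the local GAD direction
    (D = 0, I = gad_inc) and two components perpendicular to it."""
    d, i, g = map(math.radians, (dec, inc, gad_inc))
    # field vector in north-east-down coordinates
    x = intensity * math.cos(i) * math.cos(d)  # north
    y = intensity * math.cos(i) * math.sin(d)  # east
    z = intensity * math.sin(i)                # down
    gad = (math.cos(g), 0.0, math.sin(g))      # unit GAD direction at the site
    parallel = x * gad[0] + y * gad[1] + z * gad[2]
    # perpendicular components: east, and the direction normal to GAD
    # within the north-down plane
    perp_east = y
    perp_merid = -x * gad[2] + z * gad[0]
    return parallel, perp_east, perp_merid

# A field exactly along the GAD direction (D = 0, I = 61 degrees at 42N)
# has a purely parallel component.
par, e, m = field_components(0.0, 61.0, 1.0, 61.0)
```

In this representation a fully reversed field shows up simply as a negative GAD-parallel component, which is how the short full reversal within the Laschamp excursion is identified above.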
In summary, this doctoral thesis reconstructed high-resolution and high-fidelity PSV records from SE Black Sea sediments. The obtained record comprises three geomagnetic excursions, the Norwegian–Greenland Sea excursion, the Laschamp excursion, and the Mono Lake excursion, characterized by abnormal secular variations of different amplitudes centered at about 64.5 ka, 41.0 ka and 34.5 ka, respectively. In contrast, the obtained PSV record from the Black Sea does not provide evidence for the postulated 'Hilina Pali excursion' at about 18.5 ka. Nevertheless, the Black Sea paleomagnetic record, covering field fluctuations from normal secular variation, through excursions, to a short but full reversal, points to a geomagnetic field characterized by a large dynamic range in intensity and a highly variable superposition of dipole and non-dipole contributions from the geodynamo during the past 68.9 to 14.5 ka.
Phytoplankton growth depends not only on the mean intensity but also on the dynamics of the light supply. The nonlinear light-dependency of growth is characterized by a small number of basic parameters: the compensation light intensity PARcompμ, where production and losses are balanced, the growth efficiency at sub-saturating light αµ, and the maximum growth rate at saturating light µmax. In surface mixed layers, phytoplankton may rapidly move between high light intensities and almost darkness. Because of the different frequency distribution of light and/or acclimation processes, the light-dependency of growth may differ between constant and fluctuating light. Very few studies have measured growth under fluctuating light at a sufficient number of mean light intensities to estimate the parameters of the growth-irradiance relationship. Hence, the influence of light dynamics on µmax, αµ and PARcompμ is still largely unknown. By extension, accurate modelling predictions of phytoplankton development under fluctuating light exposure remain difficult to make. This PhD thesis does not aim to directly extrapolate a few experimental results to aquatic systems, but rather to improve the mechanistic understanding of how the light-dependency of growth varies under light fluctuations and affects phytoplankton development.
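The three parameters above fix a growth-irradiance curve once a functional form is chosen. As a hedged illustration (the saturating tanh form and all parameter values are my assumptions; the thesis does not prescribe them), the sketch below builds a curve with zero net growth at PARcompμ, initial slope αµ and asymptote µmax, and shows the Jensen-type effect by which mean growth over a fluctuating light series falls below growth at the mean intensity:

```python
import math
from statistics import mean

def growth_rate(par, mu_max=1.2, alpha=0.02, par_comp=10.0):
    """Hypothetical saturating growth-irradiance curve: zero net growth at the
    compensation intensity par_comp, initial slope alpha, asymptote mu_max.
    par in umol photons m^-2 s^-1, rates in d^-1."""
    return mu_max * math.tanh(alpha * (par - par_comp) / mu_max)

# A cell mixed through the water column sees a wide range of intensities;
# because the curve saturates, time spent at high light cannot compensate
# for time spent near darkness (illustrative PAR series, one "overturn"):
fluctuating = [5, 50, 200, 800, 200, 50, 5]
mu_fluct = mean(growth_rate(p) for p in fluctuating)
mu_const = growth_rate(mean(fluctuating))  # growth at the mean intensity
```

Under this concave curve `mu_fluct` is smaller than `mu_const`, one mechanistic reason why the frequency distribution of light, and not only its mean, matters for growth under mixing.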
In Lake TaiHu and at the Three Gorges Reservoir (China), we incubated phytoplankton communities in bottles placed either at fixed depths or moved vertically through the water column to mimic vertical mixing. Phytoplankton at fixed depths received only the diurnal changes in light (defined as the constant light regime), while the vertically moved phytoplankton received rapidly fluctuating light, with the vertical light gradient superimposed on the natural sinusoidal diurnal sunlight. The vertically moved samples followed a circular movement with 20 min per revolution, replicating to some extent the full overturn of typical Langmuir cells. Growth, photosynthesis, oxygen production and respiration of communities (at Lake TaiHu) were
measured. To complete these investigations, a physiological experiment was performed in the laboratory on a toxic strain of Microcystis aeruginosa (FACBH 1322) incubated under fluctuating light with a 20 min period. Here, we measured electron transport rates and net oxygen production at a much higher time resolution (single-minute timescale).
The present PhD thesis provides evidence for substantial effects of fluctuating light on the eco-physiology of phytoplankton. The experiments performed under semi-natural conditions in Lake TaiHu and at the Three Gorges Reservoir gave similar results. The significant decline in community growth efficiency αµ under fluctuating light was largely caused by the different frequency distribution of light intensities, which shortened the effective daylength for production. The remaining gap in community αµ was attributed to species-specific photoacclimation mechanisms and to light-dependent respiratory losses. In contrast, community maximal growth rates µmax were similar between incubations at constant and fluctuating light. At daily growth-saturating light supply, differences in losses for biosynthesis between the two light regimes were observed. Phytoplankton experiencing constant light suffered photoinhibition, leading to foregone photosynthesis and additional respiratory costs for photosystem repair. By contrast, intermittent exposure to low and high light intensities prevented photoinhibition of the mixed algae but forced them to develop an alternative light-use strategy: they harvested and exploited surface irradiance better by enhancing their photosynthesis. In the laboratory, we showed that Microcystis aeruginosa increased its oxygen consumption by dark respiration in the light only a few minutes after exposure to increasing light intensities. Moreover, we showed that within a simulated Langmuir cell, net production at saturating light and the compensation light intensity for production at limiting light are positively related. These results are best explained by an accumulation of photosynthetic products at increasing irradiance and the mobilization of these fresh resources, through a rapid enhancement of dark respiration, for maintenance and biosynthesis at decreasing irradiance.
At the daily timescale, we showed that the enhancement of photosynthesis at high irradiance for biosynthesis increased species' maintenance respiratory costs at limiting light. The species-specific growth rate at saturating light µmax and the compensation light intensity for growth PARcompμ of species incubated in Lake TaiHu were positively related. Because of this species-specific physiological tradeoff, species displayed different affinities to limiting and saturating light, thereby exhibiting a gleaner-opportunist tradeoff. In Lake TaiHu, we showed that inter-specific differences in light acquisition traits (µmax and PARcompμ) allowed coexistence of species on a gradient of constant
light while avoiding competitive exclusion. More interestingly, we demonstrated for the first time that vertical mixing (inducing a fluctuating light supply for phytoplankton) may alter or even reverse the light-utilization strategies of species within a couple of days. The intra-specific variation in traits under fluctuating light increased the niche space for acclimated species, precluding competitive exclusion.
Overall, this PhD thesis contributes to a better understanding of phytoplankton eco-physiology under fluctuating light supply. This work could enhance the quality of predictions of phytoplankton development under certain weather conditions or climate change scenarios.
A contemporary challenge in ecology and evolutionary biology is to anticipate the fate of populations of organisms in the context of a changing world. Climate change and landscape changes due to anthropic activities have been major concerns in contemporary history. Organisms facing these threats are expected to respond by local adaptation (i.e., genetic changes or phenotypic plasticity) or by shifting their distributional range (migration). However, there are limits to their responses. For example, isolated populations will have more difficulty developing adaptive innovations by means of genetic changes than interconnected metapopulations. Similarly, the topography of the environment can limit dispersal opportunities for crawling organisms as compared to those that rely on wind. Thus, populations of species with different life history strategies may differ in their ability to cope with changing environmental conditions. However, depending on the taxon, empirical studies investigating organisms' responses to environmental change may become too complex, long and expensive, with additional complications arising when dealing with endangered species. In consequence, eco-evolutionary modeling offers an opportunity to overcome these limitations and complement empirical studies, to understand the action and limitations of the underlying mechanisms, and to project into possible future scenarios. In this work I take a modeling approach and investigate the effect and relative importance of evolutionary mechanisms (including phenotypic plasticity) on the capacity for local adaptation of populations with different life strategies experiencing climate change scenarios. For this, I performed a review of the state of the art of eco-evolutionary individual-based models (IBMs) and identified gaps for future research.
Then, I used the results from the review to develop an eco-evolutionary individual-based modeling tool to study the role of genetic and plastic mechanisms in promoting local adaptation of populations of organisms with different life strategies experiencing scenarios of climate change and environmental stochasticity. The environment was simulated through a climate variable (e.g., temperature) defining a phenotypic optimum moving at a given rate of change. This rate was varied to simulate different scenarios of climate change (no change, slow, medium and rapid change). Several scenarios of stochastic noise color resembling different climatic conditions were also explored. The results show that populations of sexual species will rely mainly on standing genetic variation and phenotypic plasticity for local adaptation. Populations of species with relatively slow growth rates (e.g., large mammals), especially those of small size, are the most vulnerable, particularly if their plasticity is limited (i.e., specialist species). Whenever organisms from these populations are capable of adaptive plasticity, they can buffer fitness losses under reddish climatic conditions; likewise, whenever they can adjust their plastic response (e.g., a bet-hedging strategy), they will cope with bluish environmental conditions as well. In contrast, life strategies of high fecundity can rely on non-adaptive plasticity for their local adaptation to novel environmental conditions, unless the rate of change is too rapid. A recommended management measure is to guarantee the interconnection of isolated populations into metapopulations, so that the supply of useful genetic variation is increased and, at the same time, populations have movement opportunities to follow their preferred niche when local adaptation becomes problematic.
This is particularly important for bluish and reddish climatic conditions, when the rate of change is slow, or for any climatic condition when the level of stress (rate of change) is relatively high.
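The moving-optimum setup described above can be illustrated with a minimal toy model (my own sketch, far simpler than the thesis's IBM; all parameter values are hypothetical): one heritable trait per individual, Gaussian stabilizing selection around an optimum that shifts each generation, and inheritance with mutational noise.

```python
import math
import random

def simulate(generations=150, pop_size=200, rate=0.01,
             mut_sd=0.05, fitness_width=1.0, seed=42):
    """Toy moving-optimum model: returns the final trait values and the final
    optimum. `rate` is the per-generation shift of the phenotypic optimum,
    mimicking a climate-change scenario."""
    rng = random.Random(seed)
    pop = [rng.gauss(0.0, 0.1) for _ in range(pop_size)]
    optimum = 0.0
    for g in range(generations):
        optimum = rate * g
        # Gaussian fitness: individuals far from the optimum leave few offspring.
        weights = [math.exp(-((z - optimum) ** 2) / (2 * fitness_width ** 2))
                   for z in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Offspring inherit the parental trait plus mutational noise.
        pop = [rng.gauss(p, mut_sd) for p in parents]
    return pop, optimum

pop, optimum = simulate()
mean_trait = sum(pop) / len(pop)
# The population tracks the moving optimum with an evolutionary lag.
```

Increasing `rate` or decreasing `mut_sd` (less heritable variation to select on) widens the lag until the population no longer tracks the optimum, the qualitative pattern behind the vulnerability of slow-growing, low-variation populations discussed above.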
In the dissertation entitled "A Hypothesis on the Foundations of Morality and Some Implications" ("Eine Hypothese über die Grundlagen von Moral und einige Implikationen"), the author attempts to work out the anthropological premises of moral action. A hypothesis is put forward and elaborated which claims that moral action only becomes intelligible if the agent, first, possesses the faculty of imagination, second, can access experiences (by means of memory), and third, has interacted and continues to interact with other persons through conversation. Only on the basis of these three foundations of morality can those capacities develop that must be regarded as prerequisites of moral action: self-consciousness, freedom, the development of a sense of "we", the genesis of a moral ideal, and the ability to be guided by this ideal in deciding and acting. Furthermore, this dissertation discusses some implications of this hypothesis at the individual and interpersonal levels.
Electrets are dielectrics with quasi-permanent electric charge and/or dipoles, and can sometimes be regarded as the electric analogue of a magnet. Since the discovery of the excellent charge-retention capacity of poly(tetrafluoroethylene) and the invention of the electret microphone, electrets have grown from a scientific curiosity into important applications in both science and technology. The history of electret research goes hand in hand with the quest for new materials with better charge and/or dipole retention. To be useful, electrets normally have to be charged/poled to render them electro-active. This process involves electric-charge deposition and/or electric-dipole orientation at the dielectric's surfaces and within its bulk. Knowledge of the spatial distribution of electric charge and/or dipole polarization after deposition and during subsequent decay is crucial for improving their stability in the dielectrics.
Likewise, for dielectrics used in electrical insulation applications, there is also a need for spatial profiling of accumulated space charge and polarization. Traditionally, space-charge accumulation and large dipole polarization within insulating dielectrics are considered undesirable and harmful, as they may cause dielectric loss and lead to internal electric-field distortion and local field enhancement. A high local electric field can trigger several aging processes and reduce the insulating dielectric's lifetime. However, with the advent of high-voltage DC transmission and high-voltage capacitors for energy storage, this is no longer always the case. The two fields of electrets and electrical insulation overlap to some extent. While quasi-permanently trapped electric charge and/or large remanent dipole polarization are prerequisites for electret operation, stably trapped electric charge in electrical insulation helps reduce charge transport and thus the overall electrical conductivity. Controlled charge trapping can help prevent further charge injection and accumulation and can serve field-grading purposes in insulating dielectrics, whereas large dipole polarization can be utilized in energy-storage applications.
In this thesis, Piezoelectrically generated Pressure Steps (PPSs) were employed as a nondestructive method to probe the electric-charge and dipole-polarization distributions in a range of thin-film (several hundred micrometers) polymer-based materials, namely polypropylene (PP), low-density polyethylene/magnesium oxide (LDPE/MgO) nanocomposites and poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) copolymers. PP film surface-treated with phosphoric acid to introduce isolated surface nanostructures serves as an example of a 2-dimensional nanocomposite, whereas LDPE/MgO serves as a case of a 3-dimensional nanocomposite, with MgO nanoparticles dispersed in the LDPE polymer matrix. It is shown that the nanoparticles on the surface of acid-treated PP and in the bulk of the LDPE/MgO nanocomposites improve the charge-trapping capacity of the respective material and prevent further charge injection and transport, and that this enhanced charge-trapping capacity makes PP and LDPE/MgO nanocomposites potential materials for both electret and electrical-insulation applications. As for PVDF and VDF-based copolymers, the remanent spatial polarization distribution depends critically on the poling method as well as on the specific parameters used in the respective poling method. In this work, homogeneous poling of P(VDF-TrFE) copolymers with different VDF contents was attempted by cyclical hysteresis poling. The behavior of the remanent polarization growth and the spatial polarization distribution are reported and discussed. The PPS method has proven to be a powerful tool for characterizing charge storage and transport in a wide range of polymer materials, from nonpolar and polar polymers to polymer nanocomposites.
In recent decades, the coatings industry has also shifted towards more environmentally friendly paints and coatings. However, even new solutions are mostly not based on biopolymers, and even less frequently on water-based coating systems made from renewable raw materials. This is the starting point of this work, which investigated whether the biopolymer starch has the potential to serve as a water-based film former for paints and coatings. Guided by established synthetic market products, the following criteria must be fulfilled: the aqueous dispersion must have a solids content of at least 30%, be processable at room temperature, and exhibit viscosities between 10^2 and 10^3 mPa·s. The final coating must form a closed film and show very good adhesion to a specific surface, in this work glass. A combination of molecular degradation and chemical functionalization was chosen as the basis for modifying the starch. Since it was not known what influence the type of starch, the chosen degradation reaction, and different substituents might have on the preparation and properties of the dispersions as well as on the coating properties, these structural parameters were investigated separately.
The first topic covered the oxidative degradation of potato and smooth pea starch by hypochlorite degradation (OCl-) and ManOx degradation (H2O2, KMnO4). Both degradation reactions yielded comparable weight-average molar masses (Mw) of 2·10^5-10^6 g/mol (GPC-MALS). However, the reaction conditions chosen for the ManOx degradation led to the formation of gel particles. These were in the µm range (DLS and cryo-SEM measurements) and caused the ManOx samples to have considerably higher viscosities (c: 7.5%; 9-260 mPa·s) than the OCl- samples (4-10 mPa·s), with shear-thinning behavior and the properties of viscoelastic gels (G' > G''). Furthermore, the ManOx samples showed reduced hot-water solubilities (95 °C, mostly 70-99%). The OCl- degradation led to more hydrophilic degraded starches (carboxyl group content up to 6.1%; ManOx: up to 3.1%) that were completely water-soluble after treatment at 95 °C and exhibited Newtonian flow behavior with the properties of a viscoelastic liquid (G'' > G'). Compared with the ManOx products (10-20%), the OCl- samples could be processed into more concentrated dispersions (20-40%), which at the same time permitted the restriction to application-relevant Mw of < 7·10^5 g/mol (the concentration should be > 30%). Moreover, only the OCl- samples of potato starch yielded transparent (all others were opaque), closed coating films. Thus, with regard to the final application, the combination of OCl- degradation and potato starch stands out.
The second topic comprised investigations into the influence of ester and hydroxyalkyl ether substituents, based on an industrially degraded potato starch (Mw: 1.2·10^5 g/mol), primarily on dispersion preparation, the rheological properties of the dispersions, and the coating properties in combination with glass substrates. For this purpose, esters and ethers with DS/MS values of 0.07-0.91 were synthesized. The derivatives could be processed into water-based dispersions with concentrations of 30-45%, although for the more hydrophobic modifications a co-solvent, diethylene glycol monobutyl ether (DEGBE), had to be used. For both derivative classes, the solids contents decreased mainly with increasing alkyl chain length. The application-relevant viscosities (323-1240 mPa·s) tended to increase with DS/MS and alkyl chain length owing to polymer interactions. Regarding the coating properties, the esters proved to be the preferred substituent class compared with the ethers, since only the esters formed closed, defect-free and mostly transparent coating films with excellent to very good adhesion (ISO classes 0 and 1) on glass. The ethers mostly formed brittle films. Based on the combined results of solvent exchange, the rheological investigations, and additional surface tension measurements (30-61 mN/m), it could be concluded that missing or poor adhesion is probably primarily due to water accumulated in the coating films (visually: cloudy or white), while the brittleness can presumably be attributed to interactions (hydrogen bonding, hydrophobic interactions) between the polymers.
Overall, the combination of potato starch subjected to OCl- degradation with Mw < 7·10^5 g/mol and an ester substituent appears to be a good choice for water-based dispersions with high solids concentrations (> 30%), good film formation and excellent adhesion to glass.
Cosmology describes the evolution of the universe as a whole. Cosmological discoveries in theory and observation have therefore decisively shaped our modern scientific world view. Conveying a modern world view through teaching is a frequent demand in the science education debate. Nevertheless, research and development needs remain. Cosmological topics appear frequently in the media while at the same time being far removed from everyday life, so that scientifically incorrect conceptions can develop particularly easily here and can lead to problems in the classroom.
The aim of this work is to contribute to this field of research by investigating the prior knowledge and preconceptions about cosmology with which students enter the classroom, and then comparing them with those of other countries. This is done by means of a qualitative content analysis of an open questionnaire. On this basis, a multiple-choice questionnaire is finally developed, administered and evaluated.
The results reveal large knowledge gaps in the field of cosmology and give first indications of differences between the countries. There are also some, in part widespread, scientifically incorrect conceptions, such as associating the Big Bang with an explosion, the Big Bang being caused by a collision of particles or larger objects, or conceiving of the expansion of the universe as new discoveries and/or knowledge. Furthermore, only about one in five respondents gave the correct age of the universe or named the expansion of the universe as one of the three pieces of evidence for the Big Bang theory, while almost 40% could not name a single piece of evidence. For the closed questionnaire, good evidence for various aspects of validity was obtained, and there are first indications that the questionnaire can measure knowledge gains and can therefore probably be used to investigate the effectiveness of teaching units. A corresponding model of the development of understanding of the expansion of the universe also proved promising.
Diese Arbeit liefert insgesamt einen Forschungsbeitrag zum Schülervorwissen und Vorstellungen in der Kosmologie und deren Large Scale Assessment. Dies eröffnet die Möglichkeit zukünftiger Forschungen im Bereich von Gruppenvergleichen insbesondere hinsichtlich objektiver Ländervergleiche sowie der Untersuchungen der Wirksamkeit von einzelnen Lerneinheiten als auch Vergleiche verschiedener Lerneinheiten untereinander.
This work uses novel materials to demonstrate the potential of europium luminescence for structural analysis. These materials are, on the one hand, nanoparticles with matrices composed of several mixed metal oxides and doped with the probe europium and, on the other hand, metal-organic frameworks (MOFs) loaded with neodymium, samarium, and europium ions.
The nanoparticles composed of combinations of metal oxides were synthesized under mild conditions using specially prepared reagents, yielding very small, amorphous nanoparticles. Subsequent heat treatment increased their crystallinity; along with this, the crystal structure and the position of the europium dopant changed.
While the established method of X-ray diffraction provides a view of the crystal lattice as a whole, europium luminescence, through the visibility of individual Stark splittings, yields information about the local symmetries of the probe. The symmetry is altered by oxygen vacancies, which influence the oxygen conductivity of the nanoparticles. This conductivity is relevant for applications as catalysts in industrial processes as well as for sensors and therapeutics in biological systems.
For a first catalytic characterization, the samples are investigated by temperature-programmed reduction. Furthermore, the mixed-oxide nanoparticles are examined with regard to their usability as a matrix in upconversion processes.
Owing to their microporous structure, the metal-organic frameworks are suitable for storage applications, both for useful gases and for pollutants. A biological application is also conceivable, particularly in the field of drug-delivery reagents.
If lanthanide ions are incorporated into the microporous structures of the metal-organic frameworks, suitable combinations can act as white-light emitters. Of interest here are not only the ratios between the lanthanide ions but also their exact position within the framework and their distance from other ions. The environmental sensitivity of europium luminescence is exploited to investigate these questions. The formate formation detected in this way depends on numerous parameters.
Overall, the methodology used in this work, employing europium as a structural probe, proves highly versatile and shows its greatest strength in combination with other methods of structural analysis. The novel materials characterized in such detail can now be developed further in a targeted, application-focused manner.
Business process management (BPM) deals with modeling, executing, monitoring, analyzing, and improving business processes. During execution, a process communicates with its environment to obtain relevant contextual information represented as events. Recent developments in big data and the Internet of Things (IoT) enable sources such as smart devices and sensors to generate large volumes of events, which can be filtered, grouped, and composed to trigger and drive business processes.
The industry standard Business Process Model and Notation (BPMN) provides several event constructs to capture the interaction possibilities between a process and its environment, e.g., to instantiate a process, to abort an ongoing activity in an exceptional situation, to take decisions based on the information carried by the events, as well as to choose among alternative paths for further process execution. The specifications of such interactions are termed event handling. However, in a distributed setup, the event sources are usually unaware of the status of process execution, and an event is therefore produced irrespective of whether the process is ready to consume it. BPMN semantics does not support such scenarios and thus increases the risk of processes being delayed or deadlocking because they miss event occurrences that might still be relevant.
The work in this thesis reviews the challenges and shortcomings of integrating real-world events into business processes, especially subscription management. The basic integration is achieved with an architecture consisting of a process modeler, a process engine, and an event processing platform. Further, points of subscription and unsubscription along the process execution timeline are defined for different BPMN event constructs. Semantic and temporal dependencies among event subscription, event occurrence, event consumption, and event unsubscription are considered. To this end, an event buffer is introduced, with policies for updating the buffer, retrieving the most suitable event for the current process instance, and reusing events; this buffer supports early subscription.
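The buffering idea can be sketched in a few lines. A minimal, hypothetical Python model follows; the class, policy names, and the example event are illustrative and not taken from the thesis prototype:

```python
from collections import deque

class EventBuffer:
    """Buffers events that arrive before a process instance is ready.

    Illustrative policies:
      - update: bounded buffer, oldest events dropped first
      - retrieval: hand the 'latest' or 'earliest' buffered event to the consumer
      - reuse: whether a consumed event remains available for other instances
    """

    def __init__(self, size=10, retrieval="latest", reuse=False):
        self.events = deque(maxlen=size)   # update policy: drop oldest
        self.retrieval = retrieval
        self.reuse = reuse

    def publish(self, event):
        self.events.append(event)

    def consume(self):
        """Return the most suitable buffered event, or None if empty."""
        if not self.events:
            return None
        event = self.events[-1] if self.retrieval == "latest" else self.events[0]
        if not self.reuse:
            self.events.remove(event)
        return event

# Early subscription: events published before the process reaches its
# catching BPMN event construct are retained rather than lost.
buf = EventBuffer(size=2, retrieval="latest")
buf.publish({"type": "TruckDelayed", "minutes": 10})
buf.publish({"type": "TruckDelayed", "minutes": 45})
buf.publish({"type": "TruckDelayed", "minutes": 90})  # oldest entry dropped
print(buf.consume()["minutes"])  # → 90
```

Varying the `retrieval` and `reuse` settings mimics the different event handling configurations whose effect on process execution the thesis analyzes.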
The Petri net mapping of the event handling model provides our approach with a translation of its semantics from a business process perspective. Two applications based on this formal foundation are presented to demonstrate the significance of different event handling configurations for correct process execution and the reachability of a process path. Prototype implementations show that flexible event handling can be realized with minor extensions of off-the-shelf process engines and event platforms.
Increasing concerns regarding the environmental impact of chemical production have shifted attention towards possibilities for sustainable biotechnology. One-carbon (C1) compounds, including methane, methanol, formate and CO, are promising feedstocks for a future bioindustry. CO2 is another interesting feedstock, as it can be transformed using renewable energy into the other C1 feedstocks. While formaldehyde is not suitable as a feedstock due to its high toxicity, it is a central intermediate in the process of C1 assimilation. This thesis explores formaldehyde metabolism and aims to engineer formaldehyde assimilation in the model organism Escherichia coli for a future C1-based bioindustry.
The first chapter of the thesis aims to establish growth of E. coli on formaldehyde via the most efficient naturally occurring route, the ribulose monophosphate pathway. Linear variants of the pathway were constructed in multiple-gene knockout strains, coupling E. coli growth to the activities of the key enzymes of the pathway. Formaldehyde-dependent growth was achieved in rationally designed strains. In the final strain, the synthetic pathway provides the cell with almost all biomass and energy requirements.
In the second chapter, taking advantage of formaldehyde's unique reactivity, its assimilation via condensation with glycine and pyruvate by two promiscuous aldolases was explored. Facilitated by these two reactions, the newly designed homoserine cycle is expected to support higher yields of a wide array of products than its counterparts. By dividing the pathway into segments and coupling them to the growth of dedicated strains, all pathway reactions were demonstrated to be sufficiently active. This work paves the way for the future implementation of a highly efficient route from C1 feedstocks to commodity chemicals.
In the third chapter, the in vivo rate of the spontaneous condensation of formaldehyde with tetrahydrofolate to methylene-tetrahydrofolate was assessed in order to evaluate its applicability in a biotechnological process. Tested in an E. coli strain deleted in essential genes for native methylene-tetrahydrofolate biosynthesis, the reaction was shown to support the production of this essential intermediate. However, only low growth rates were observed, and only at high formaldehyde concentrations. Computational analysis based on in vivo evidence from this strain deduced the slow rate of this spontaneous reaction, thus ruling out a substantial contribution to growth on C1 feedstocks.
The reactivity of formaldehyde makes it highly toxic. In the last chapter, the formation of thioproline, the condensation product of cysteine and formaldehyde, was confirmed to contribute to this toxicity. Xaa-Pro aminopeptidase (PepP), which is genetically linked with folate metabolism, was shown to hydrolyze thioproline-containing peptides. Deleting pepP increased strain sensitivity to formaldehyde, pointing towards the toxicity of thioproline-containing peptides and the importance of their removal. The characterization in this study could be useful in handling this toxic intermediate.
Overall, this thesis identified challenges related to formaldehyde metabolism and provided novel solutions towards a future bioindustry based on sustainable C1 feedstocks in which formaldehyde serves as a key intermediate.
Ficción herética (2019)
The metaphor of the "island" in contemporary Cuban narrative encompasses a whole series of symbolic complexities bound to the experience of space and time. Its visual potential reveals or conceals the insular experiences of the Cuban writers themselves. Over the last 30 years in Cuba, political, economic, and social phenomena have categorically altered the perception and configuration of the social and individual spheres in the face of global demands (Fornet 2006; Rojas 1999, 2002, 2006). A sense of akinesia and weightlessness has taken hold (Casamayor 2013), and Cuban narrators have adopted a "heretical" attitude, confronting the ideas of postmodernity, the post-Soviet, and the post-utopian, thereby reaffirming a presentist sensibility (Guerrero 2016). These authors echo and reclaim the insular imaginaries of authors and aesthetic traditions inside and outside the island, such as José Lezama Lima, Virgilio Piñera, Guillermo Cabrera Infante, Reinaldo Arenas, and Severo Sarduy. The analysis of insular ekphrases makes it possible to examine the dynamics of representation and meaning: dissimulation, anamorphosis, and trompe l'oeil (Sarduy 1981). The novel Tuyo es el reino (1998) by Abilio Estévez serves as a model from which to trace the relations of meaning between the literary canon and the sociocultural referents of the somatopological variations of the island in current Cuban narrative: Ena Lucía Portela, Atilio Caballero, Antonio José Ponte, Daniel Díaz Mantilla, Emerio Medina, Orlando Luis Pardo, Anisley Negrin, and Ahmel Echeverría, among others.
The functional characterization of therapeutically relevant proteins can be limited simply by the availability of the target protein in adequate amounts. This applies in particular to membrane proteins, which, owing to cytotoxic effects on the production cell line and a tendency to form aggregates, can result in low yields of active protein. The living organism can be bypassed by using translationally active cell lysates, the basis of cell-free protein synthesis. At the beginning of this work, the ATP-dependent translation of a lysate based on cultured insect cells (Sf21) was analyzed. For this purpose, an ATP-binding aptamer was used, through which the translation of a nanoluciferase could be regulated. Building on this demonstrated application, aptamers could in future be used in cell-free systems to visualize transcription and translation, allowing, for example, complex processes to be validated.
Beyond protein production itself, factors such as post-translational modifications and integration into a lipid membrane can be essential for the functionality of a membrane protein. In the second part, for the G protein-coupled receptor endothelin B, both integration into the endogenous endoplasmic reticulum-derived membrane structures and glycosylation were identified in the cell-free Sf21 system.
Building on the successful synthesis of the ET-B receptor, various methods for fluorescently labeling the adenosine receptor A2a (Adora2a) were applied and optimized. In the third part, Adora2a was labeled in the cell-free Chinese hamster ovary (CHO) system using a precharged tRNA coupled to a fluorescent amino acid. In addition, using a modified tRNA/aminoacyl-tRNA synthetase pair, a non-canonical amino acid was incorporated into the polypeptide chain at the position of an inserted amber stop codon, and its functional group was subsequently coupled to a fluorescent dye. Owing to their open nature, cell-free protein synthesis systems are particularly suited to integrating exogenous components into the translation process. Using fluorescence labeling, a ligand-induced conformational change in Adora2a was detected via bioluminescence resonance energy transfer. Through the establishment of amber suppression, the hormone erythropoietin was furthermore PEGylated, altering properties of the protein such as stability and half-life.
Finally, a new tRNA/aminoacyl-tRNA synthetase pair based on the Methanosarcina mazei pyrrolysine synthetase was established in order to expand the repertoire of non-canonical amino acids and the associated coupling reactions. In summary, the potential of cell-free systems for producing complex membrane proteins and characterizing them by introducing position-specific fluorescent labels was demonstrated, opening up new possibilities for the analysis and functionalization of complex proteins.
For millennia, humans have affected landscapes all over the world. Through its horizontal expansion, agriculture plays a major role in the process of fragmentation: natural habitats are replaced by agricultural land, producing agricultural landscapes. These landscapes are characterized by an alternation of agriculture and other land uses such as forests, along with landscape elements of natural origin such as small water bodies. Areas of different land use lie next to each other like patches, or fragments. They are physically distinguishable, which makes them look like a patchwork from an aerial perspective. Each fragment is an ecosystem of its own, with conditions and properties that differ from those of its adjacent fragments. As open systems, they exchange information, matter and energy across their boundaries. These boundary areas are called transition zones. Here, habitat properties and environmental conditions are altered compared to the interior of the fragments. This changes the abundance and composition of species in the transition zones, which in turn feeds back on the environmental conditions.
The literature mainly offers information and insights on species abundance and composition in forested transition zones. Abiotic effects, the gradual changes in energy and matter, have received less attention, and little is known about non-forested transition zones. For example, the effects of an altered microclimate, matter dynamics or different light regimes on agricultural yield in transition zones are hardly researched or understood. The processes in transition zones are closely connected with altered provisioning and regulating ecosystem services. To disentangle the mechanisms and to upscale the effects, models can be used.
My thesis provides insights into these topics: the literature was reviewed, and a conceptual framework for the quantitative description of gradients of matter and energy in transition zones was introduced. Measurements of environmental gradients such as microclimate, aboveground biomass, and soil carbon and nitrogen content are presented that span from within the forest into the arable land. Neither the measurements nor the literature review could validate a transition zone of 100 m for abiotic effects. Although this value is often reported and used in the literature, the transition zone is likely to be smaller.
Further, the measurements suggest that trees in transition zones are smaller than those in the interior of the fragments, while less biomass was also measured in the arable land's transition zone. These results support the hypothesis that less carbon is stored in the aboveground biomass of transition zones. The soil at the edge (zero line) between adjacent forest and arable land contains more nitrogen and carbon than the interior of the fragments. One-year measurements in the transition zone also provided evidence that the microclimate differs from that of the fragments' interior.
To predict the yield decreases that transition zones might cause, a modelling approach was developed. Using a small virtual landscape, I modelled the shading of adjacent arable land by a forest fragment and its effect on yield with the MONICA crop growth model. Due to shading, yield in the transition zone was lower than in the interior. The simulation results were upscaled to the landscape level and calculated exemplarily for the arable land of a whole region in Brandenburg, Germany.
The major findings of my thesis are: (1) transition zones are likely to be much smaller than assumed in the scientific literature; (2) transition zones are not solely a phenomenon of forested ecosystems, but extend significantly into arable land as well; (3) empirical and modelling results show that transition zones encompass biotic and abiotic changes that are likely to be important to a variety of agricultural landscape ecosystem services.
How species assemble from a regional pool into local metacommunities, and how they colonize and coexist over time and space, is essential to understanding how communities respond to their environment, including abiotic and biotic factors. In highly disturbed landscapes, connectivity of isolated habitat patches is essential to maintain biodiversity and overall ecosystem functioning. In northeast Germany, the high density of small water bodies called kettle holes provides a good system for studying metacommunities, since these "aquatic islands" suitable for hygrophilous species are surrounded by an unsuitable matrix of crop fields. The main objective of this thesis was to infer the main ecological processes shaping plant communities and their response to the environment from biodiversity patterns and key life-history traits involved in connectivity, using ecological and genetic approaches, and to provide first insights into the role of the wild bee species harbored by kettle holes as important mobile linkers connecting plant communities in this insular system.
At the community level, I compared plant diversity patterns and trait composition in ephemeral vs. permanent kettle holes. My results showed that the type of kettle hole acts as an environmental filter shaping plant diversity, community composition and trait distribution, suggesting species sorting and niche processes in both types of kettle holes. At the population level, I further analyzed the role of dispersal and reproductive strategies of four selected species occurring in permanent kettle holes. Using microsatellites, I found that the breeding system (degree of clonality) is the main factor shaping genetic diversity and genetic divergence. However, higher gene flow and lower genetic differentiation among populations were also found in wind- vs. insect-pollinated species, suggesting that dispersal mechanisms play a role in gene flow and connectivity. For most flowering plants, pollinators play an important role in connecting communities. Therefore, as a first insight into the potential mobile linkers of these plant communities, I investigated the diversity of wild bees occurring in these kettle holes. My main results showed that local habitat quality (flower resources) had a positive effect on bee diversity, while habitat heterogeneity (the number of natural landscape elements within 100–300 m of the kettle holes) was negatively correlated.
This thesis spans from gene flow at the individual and population level to plant community assembly. My results show how patterns of biodiversity and of dispersal and reproductive strategies in plant populations and communities can be used to infer ecological processes. In addition, I showed the importance of life-history traits and of the relationships between species and their abiotic and biotic interactions. Furthermore, I included mobile linkers (pollinators) for a better understanding of another level of the system. This integration is essential to understand how communities respond to their surrounding environment and how disturbances such as agriculture, land use and climate change might affect them. I highlight the need to integrate many scientific areas, from genes to ecosystems and across spatiotemporal scales, for a better understanding, management and conservation of our ecosystems.
Aluminum oxide is an Earth-abundant geological material, and its interaction with water is of crucial importance for geochemical and environmental processes. Some aluminum oxide surfaces are also known to be useful in heterogeneous catalysis, while the surface chemistry of aqueous oxide interfaces determines the corrosion, growth and dissolution of such materials. In this doctoral work, we looked mainly at the (0001) surface of α-Al2O3 and its reactivity towards water. In particular, a great focus of this work is dedicated to simulating and addressing the vibrational spectra of water adsorbed on the α-alumina(0001) surface in various conditions and at different coverages. In fact, the main source of comparison and inspiration for this work comes from the collaboration with the "Interfacial Molecular Spectroscopy" group led by Dr. R. Kramer Campen at the Fritz-Haber Institute of the MPG in Berlin. The expertise of our project partners in surface-sensitive Vibrational Sum Frequency (VSF) generation spectroscopy was crucial to develop and adapt specific simulation schemes used in this work. Methodologically, the main approach employed in this thesis is Ab Initio Molecular Dynamics (AIMD) based on periodic Density Functional Theory (DFT) using the PBE functional with D2 dispersion correction. The analysis of vibrational frequencies from both a static and a dynamic, finite-temperature perspective offers the ability to investigate the water / aluminum oxide interface in close connection to experiment.
The first project presented in this work considers the characterization of dissociatively adsorbed deuterated water on the Al-terminated (0001) surface. This particular structure is known from both experiment and theory to be the thermodynamically most stable surface termination of α-alumina in Ultra-High Vacuum (UHV) conditions. Based on experiments performed by our colleagues at FHI, different adsorption sites and products have been proposed and identified for D2O. While previous theoretical investigations only looked at vibrational frequencies of dissociated OD groups by static Normal Mode Analysis (NMA), we rather employed a more sophisticated approach to directly assess vibrational spectra (like IR and VSF) at finite temperature from AIMD. In this work, we have employed a recent implementation which makes use of velocity-velocity autocorrelation functions to simulate such spectral responses of O-H(D) bonds. This approach allows for an efficient and qualitatively accurate estimation of Vibrational Densities of States (VDOS) as well as IR and VSF spectra, which are then tested against experimental spectra from our collaborators.
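The essence of the autocorrelation approach can be illustrated with a schematic numpy sketch: by the Wiener-Khinchin relation, the vibrational density of states is obtained as the Fourier transform of the velocity autocorrelation function. A toy one-dimensional signal stands in for the AIMD velocities here; this is a minimal illustration, not the implementation used in the thesis:

```python
import numpy as np

def vdos(velocities, dt):
    """Schematic VDOS: Fourier transform of the velocity autocorrelation
    function of a single one-dimensional velocity trace (time step dt)."""
    n = len(velocities)
    # linear autocorrelation via zero-padded FFT (Wiener-Khinchin theorem)
    f = np.fft.fft(velocities, n=2 * n)
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    acf /= acf[0]                       # normalize at zero lag
    spectrum = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(n, d=dt)    # Hz
    return freqs, spectrum

# toy trajectory: a 100 THz oscillation (O-H stretch order of magnitude),
# sampled every 0.5 fs as in typical AIMD runs
dt = 0.5e-15
t = np.arange(4096) * dt
v = np.cos(2 * np.pi * 1.0e14 * t)
freqs, spec = vdos(v, dt)
peak = freqs[np.argmax(spec)]
print(peak / 1e12)   # close to 100 (THz), the input frequency
```

In practice the same recipe is applied to the mass-weighted atomic velocities of the O-H(D) bonds of interest, and suitable weighting of the correlation functions yields IR and VSF responses rather than the plain VDOS.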
In order to extend previous work on unimolecularly dissociated water on α-Al2O3, we then considered a different system, namely, a fully hydroxylated (0001) surface, which results from the reconstruction of the UHV-stable Al-terminated surface at high water contents. This model is then further extended by considering a hydroxylated surface with additional water molecules, forming a two-dimensional layer which serves as a potential template to simulate an aqueous interface in environmental conditions. Again, employing finite-temperature AIMD trajectories at the PBE+D2 level, we investigated the behaviour of both the hydroxylated surface (HS) and the water-covered structure derived from it (known as HS+2ML). A full range of spectra, from VDOS to IR and VSF, is then calculated using the same methodology as described above. This is the main focus of the second project, reported in Chapter 5. In this case, the agreement between theoretical spectra and experimental data is very good. In particular, we show that the high-frequency resonances observed above 3700 cm⁻¹ in VSF experiments are associated with surface OH groups, known as "aluminols", which are a key fingerprint of the fully hydroxylated surface.
In the third and last project, which is presented in Chapter 6, the extension of VSF spectroscopy experiments to the time-resolved regime offered us the opportunity to investigate vibrational energy relaxation at the α-alumina / water interface. Specifically, using again DFT-based AIMD simulations, we simulated vibrational lifetimes for surface aluminols as experimentally detected via pump-probe VSF. We considered the water-covered HS model as a potential candidate to address this problem. The vibrational (IR) excitation and subsequent relaxation is performed by means of a non-equilibrium molecular dynamics scheme. In such a scheme, we specifically looked at the O-H stretching mode of surface aluminols. Afterwards, the analysis of non-equilibrium trajectories allows for an estimation of relaxation times in the order of 2-4 ps which are in overall agreement with measured ones.
The aim of this work has been to provide, within a consistent theoretical framework, a better understanding of vibrational spectroscopy and dynamics for water on the α-alumina(0001) surface, ranging from very low water coverage (similar to the UHV case) up to medium-high coverages, resembling the hydroxylated oxide in moist environmental conditions.
The present work is a compilation of three original research articles published in or submitted to international peer-reviewed venues in the field of speech science. These three articles address fundamental motor laws in speech and the dynamics of the corresponding speech movements:
1. Kuberski, Stephan R. and Adamantios I. Gafos (2019). "The speed-curvature power law in tongue movements of repetitive speech". PLOS ONE 14(3). Public Library of Science. doi: 10.1371/journal.pone.0213851.
2. Kuberski, Stephan R. and Adamantios I. Gafos (in press). "Fitts' law in tongue movements of repetitive speech". Phonetica: International Journal of Phonetic Science. Karger Publishers. doi: 10.1159/000501644.
3. Kuberski, Stephan R. and Adamantios I. Gafos (submitted). "Distinct phase space topologies of identical phonemic sequences". Language. Linguistic Society of America.
The present work introduces a metronome-driven speech elicitation paradigm in which participants were asked to utter repetitive sequences of elementary consonant-vowel syllables. This paradigm, explicitly designed to cover a substantially wider range of speech rates than explored in previous work, is demonstrated to satisfy important prerequisites for assessing previously difficult-to-access aspects of speech. Specifically, the paradigm's extensive speech rate manipulation elicited a great range of movement speeds as well as movement durations and excursions of the relevant effectors. Such variation is a prerequisite for assessing whether invariant relations between these and other parameters exist, and thus provides the foundation for a rigorous evaluation of the two laws examined in the first two contributions of this work.
In the data resulting from this paradigm, it is shown that speech movements obey the same fundamental laws as movements from other domains of motor control. In particular, speech strongly adheres to the power law relation between the speed and curvature of movement, with a clear speech rate dependency of the power law's exponent. The often sought or reported exponent of one third in the statement of the law is unique to a subclass of movements corresponding to the faster rates at which a particular utterance is produced. For slower rates, significantly larger values than one third are observed. Furthermore, for the first time in speech, this work uncovers evidence for the presence of Fitts' law. It is shown that, beyond a speaker-specific speech rate, tongue movements in speech clearly obey Fitts' law, exhibiting its characteristic linear relation between movement time and index of difficulty. For slower speech rates (when temporal pressure is small), no such relation is observed. The methods and datasets obtained in the two assessments above provide a rigorous foundation both for addressing implications for theories and models of speech and for better understanding the status of speech movements in the context of human movements in general.
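The speed-curvature power law test amounts to a regression in log-log space, since v = k·κ^(−β) implies log v = log k − β·log κ. A schematic Python check on synthetic data follows; the one-third exponent is imposed here by construction, and the numbers are illustrative rather than taken from the thesis data:

```python
import numpy as np

def power_law_exponent(speed, curvature):
    """Estimate beta in v = k * curvature**(-beta) via log-log regression."""
    slope, _intercept = np.polyfit(np.log(curvature), np.log(speed), 1)
    return -slope

# synthetic speed/curvature pairs obeying an exact one-third power law,
# perturbed by small multiplicative measurement noise
rng = np.random.default_rng(0)
curvature = rng.uniform(0.5, 5.0, 500)          # arbitrary units
speed = 2.0 * curvature ** (-1.0 / 3.0)
speed *= np.exp(rng.normal(0.0, 0.05, 500))
beta = power_law_exponent(speed, curvature)
print(beta)   # close to 1/3
```

Applied per elicited speech rate, such a fit makes the rate dependency of the exponent visible: faster rates yield estimates near one third, slower rates yield significantly larger values.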
All modern theories of language rely on a fundamental segmental hypothesis according to which the phonological message of an utterance is represented by a sequence of segments or phonemes. It is commonly assumed that each of these phonemes can be mapped to some unit of speech motor action, a so-called speech gesture.
For the first time here, it is demonstrated that the relation between the phonological description of simple utterances and the corresponding speech motor action is non-unique. Specifically, by the extensive speech rate manipulation in the herein used experimental paradigm it is demonstrated that speech exhibits clearly distinct dynamical organizations underlying the production of simple utterances. At slower speech rates, the dynamical organization underlying the repetitive production of elementary /CV/ syllables can be described by successive concatenations of closing and opening gestures, each with its own equilibrium point. As speech rate increases, the equilibria of opening and closing gestures are not equally stable yielding qualitatively different modes of organization with either a single equilibrium point of a combined opening-closing gesture or a periodic attractor unleashed by the disappearance of both equilibria. This observation, the non-uniqueness of the dynamical organization underlying what on the surface appear to be identical phonemic sequences, is an entirely new result in the domain of speech. Beyond that, the demonstration of periodic attractors in speech reveals that dynamical equilibrium point models do not account for all possible modes of speech motor behavior.
Since 2003, the political landscape of Iraq has changed profoundly, initiating a process of reshaping the Iraqi legal order. For the first time in Iraq's history, the Iraqi constitution of 2005 establishes Islam and democracy as two basic principles to be observed side by side in legislation. Despite this significant change in the Iraqi legal system and considerable international developments in private international law and international civil procedure law (PIL/ICPL), the statutory regulation of PIL/ICPL in Iraq, contained mainly in the Iraqi Civil Code of 1951, remains in force. This work was therefore written to support a reform of Iraqi PIL/ICPL.
This work is the first comprehensive scholarly study to address the current content and future reform of Iraqi private international law and international civil procedure law (PIL/ICPL).
The author provides an overall survey of the currently applicable Iraqi private international and civil procedure law, with occasional selective reference to German, Islamic, Turkish, and Tunisian law, identifies its weaknesses, and makes corresponding reform proposals.
Because of the particular importance of international contract law for the Iraqi economy, and in part also for Germany, the author gives a more detailed overview of Iraqi international contract law and at the same time underlines its need for reform.
The presentation in the second chapter of the major developments in German-European law, in traditional Islamic law, and in Turkish and Tunisian private international and civil procedure law serves as a basis that can be drawn upon in reforming Iraqi PIL/ICPL. Since knowledge of Islamic law is not necessarily part of legal studies, Islamic law is presented with regard to its origins and its sources of law.
The work concludes with a draft federal statute on private international law for Iraq that, within the framework of the Iraqi constitution, is compatible with both Islam and democracy.
Floral scent is an important way for plants to communicate with insects, but scent emission has been lost or strongly reduced during the transition from pollinator-mediated outbreeding to selfing. The shift from outcrossing to selfing is accompanied not only by scent loss but also by a reduction in other pollinator-attracting traits such as petal size, and it has occurred multiple times among angiosperms. These changes are summarized by the term selfing syndrome and represent one of the most prominent examples of convergent evolution within the plant kingdom. In this work, the genus Capsella was used as a model to study convergent evolution in two closely related selfers with separate transitions to self-fertilization.
Compared to their outbreeding ancestor C. grandiflora, the emission of benzaldehyde as the main compound of floral scent is lacking or strongly reduced in the selfing species C. rubella and C. orientalis. In C. rubella, the loss of benzaldehyde was caused by mutations to the cinnamate:CoA ligase CNL1, but the biochemical basis and evolutionary history of this loss remained unknown, together with the genetic basis of scent loss in C. orientalis. Here, a combination of plant transformations, in vitro enzyme assays, population genetics, and quantitative genetics has been used to address these questions. The results indicate that CNL1 has been inactivated twice independently by point mutations in C. rubella, leading to a loss of benzaldehyde emission. Both inactivated haplotypes can be found around the Mediterranean Sea, indicating that they arose before the species' geographical spread. This study confirmed CNL1 as a hotspot for mutations eliminating benzaldehyde emission, as suggested by previous studies. In contrast to these findings, CNL1 in C. orientalis remains active. To test whether similar mechanisms underlie the convergent evolution of scent loss in C. orientalis, a QTL mapping approach was used; the results suggest that this closely related species followed a different evolutionary route to reduce floral scent, possibly reflecting that the convergent evolution of floral scent is driven by ecological rather than genetic factors.
In parallel with studying the genetic basis of repeated scent loss, a method for testing the adaptive value of individual selfing syndrome traits was established. The method allows outcrossing rates to be estimated with a high sample throughput and successfully detects insect-mediated outcrossing events, providing major advantages in time and effort compared to other approaches. It can be applied to correlate outcrossing rates with differences in individual traits, by using quasi-isogenic lines as demonstrated here, or with environmental or morphological parameters.
Convergent evolution can be observed not only for scent loss in Capsella but also for the morphological evolution of petal size. Previous studies detected several QTLs underlying the petal size reduction in C. orientalis and C. rubella, some of them shared between both species. One shared QTL is PAQTL1, which might map to NUBBIN, a growth factor. To better understand the morphological evolution and genetic basis of petal size reduction, this QTL was studied. Mapping this QTL to a gene might identify another example of a hotspot gene, in this case for the convergent evolution of petal size.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy-momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely one on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
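For orientation, the singularity structure fixed by the Hadamard condition can be made explicit. In four spacetime dimensions a Hadamard bidistribution has, modulo smooth terms, the standard textbook form (notation assumed here: $\sigma_\epsilon$ the regularized signed squared geodesic distance, $\lambda$ a reference length, $u, v$ smooth coefficients determined recursively by the wave operator; this is the generic expression, not a formula specific to this thesis):

```latex
H(x,y) \;=\; \frac{1}{8\pi^{2}}\left(\frac{u(x,y)}{\sigma_{\epsilon}(x,y)}
\;+\; v(x,y)\,\log\frac{\sigma_{\epsilon}(x,y)}{\lambda^{2}}\right) \;+\; w(x,y)
```

Only the smooth part $w$ is left undetermined by the series; it is exactly this freedom that must be fixed so that the bisolution, causality, and positivity properties hold exactly rather than only modulo smooth functions.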
This dissertation aims, in general, to justify the application of the dialectical methodology to the philosophy of language and to carry out a systematic treatment of a limited part of the philosophy of language by means of dialectics. To elucidate and establish this approach, which is scarcely if at all represented in the research literature, I first draw on the philosophical work of two authors: Hegel and Wittgenstein.
At first sight, Hegel and Wittgenstein are authors with little in common, except that both engaged with philosophy as a discipline and inevitably treated a shared topic, language, without any obvious substantive or methodological connection between them. The first premise of this dissertation, with regard to the history of ideas, is to indicate that Hegel's concept of spirit and Wittgenstein's form of life are two approaches to, and results of, a philosophical effort that jointly accomplish the necessary resolution or overcoming of skeptical argumentation. Indeed, in his Philosophical Investigations Wittgenstein developed an argument known as the "paradox of rule-following", which the secondary literature (chiefly Kripke) has treated as a kind of skeptical argument. Accordingly, Wittgenstein's theory of language has been interpreted either as a resolution of this skepticism or simply as a skeptical text itself (Brandom). The first aim of my dissertation is to show that, as a skeptical argument, this paradox has nevertheless remained incomplete, and that it can be regarded as the first decisive step towards the highest form of the skeptical challenge, the antinomy. A complete skeptical argument means that both the sole resolution of the paradox, dispositionalism, and the negation of that theory are provable. Starting from the resolution of the rule-following paradox presented in the Philosophical Investigations, I will therefore attempt to establish the completion of an antinomy of the concept of normativity with respect to the rules of language, analogous to the cosmological antinomy developed by Kant (thesis cum antithesis). The second aim of my dissertation is consequently to show:
1. that the Kantian resolution of the antinomy is ineffective against the antinomy of normativity; 2. that this antinomy entails a necessary confrontation with a radical skepticism, and that we are logically compelled not merely to redefine some theory in the philosophy of language, but to question, at a fundamentally deeper level, our methodology itself, that is, the application of the usual norms of rationality; and 3. that Hegel's dialectic emerges as the methodological resolution of such a radical skeptical challenge, that is, as the resolution of an antinomy as such. It is on the occasion of this methodological revision that Hegel's dialectic is drawn upon.
Nevertheless, the purpose of this dissertation is not limited to presenting an interpretation of Hegel's dialectic or an overcoming of Wittgenstein's form of life; rather, it returns to the problems and principles of the concept of the form of life and of theoretical spirit and, by means of Hegel's dialectic, moves beyond them in order to better understand the place and function of language. This work is carried out within the framework of a scientific project; in other words, it uses the methodological results of two philosophers to present a scientific program. Its claim is accordingly to gain new knowledge about language by drawing on Hegel's dialectic, constructively combining the two contradictory moments of cognition: normativity effected through consciousness and normativity effected through dispositions. The concrete gain of this methodology is thus the ability to establish a philosophy of language as a system, one that makes it possible to grasp linguistic phenomena in all their aspects in a coherent way. In terms of content, this program aims to derive dialectically the general stage of the concept of language as a moment of the concept of spirit, that is, to determine the proper sense of language. I was, however, unable to carry out a complete dialectical treatment of the philosophy of language, and the range of linguistic categories derived by means of dialectics is limited to the doctrine of the imagination, which includes the doctrine of general semiology and of grammar.
Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are experiencing warming at twice the global average rate, and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as for the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, sparse field-based spectral characterization of this spatial and temporal heterogeneity limits the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How does canopy-level vegetation colour, recorded with high and low spectral resolution devices, relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high spectral resolution data from the ground to the satellite scale influence low Arctic tundra vegetation signatures, and what, therefore, is the potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing early, peak, and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area located in north central Alaska on the North Slope of the Brooks Range. In addition to the field data, an aerial AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental Mapping and Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra, as well as simulations of EnMAP imagery from aerial hyperspectral imagery, were also obtained.
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible near infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong, significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices and that digital cameras can track the seasonal development and degradation of photosynthetic pigments.
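As a minimal illustration of how broadband camera colour can track pigment dynamics, a green chromatic coordinate (GCC) can be computed from RGB digital numbers. This is a generic sketch with hypothetical values, not the exact set of indices evaluated in the thesis:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    # rgb: red, green, blue digital numbers in the last axis
    rgb = np.asarray(rgb, dtype=float)
    return rgb[..., 1] / rgb.sum(axis=-1)

# hypothetical canopy colours: greener at peak season, browner in the late season
gcc_peak = green_chromatic_coordinate([60.0, 120.0, 40.0])
gcc_late = green_chromatic_coordinate([90.0, 80.0, 50.0])
```

A seasonal decline in GCC would then parallel the degradation of chlorophyll and carotenoid pigments described above.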
Spatial aggregation of hyperspectral data from the ground to the airborne and simulated satellite scales was influenced by non-photosynthetic components, as demonstrated by the distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near infrared. By artificially mixing litter spectra at different proportions into ground-based spectra, correspondence with aerial and satellite spectra increased. Greater proportions of litter were required to achieve correspondence at the satellite scale.
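The litter-mixing step can be sketched as a linear two-endmember mixture with a grid search for the litter fraction that best matches a coarser-scale spectrum. All arrays below are hypothetical, and the thesis' actual procedure may differ in detail:

```python
import numpy as np

def mix(canopy, litter, f_litter):
    # linear two-endmember mixture with litter fraction f_litter
    return (1.0 - f_litter) * canopy + f_litter * litter

def best_litter_fraction(canopy, litter, target, fractions=np.linspace(0.0, 1.0, 101)):
    # grid-search the litter fraction minimizing RMSE to the target spectrum
    rmse = [np.sqrt(np.mean((mix(canopy, litter, f) - target) ** 2)) for f in fractions]
    return float(fractions[int(np.argmin(rmse))])

# hypothetical ground-based canopy and litter endmember spectra (50 bands)
canopy = np.linspace(0.05, 0.50, 50)
litter = np.full(50, 0.30)
# pretend the satellite-scale spectrum is a 30 % litter mixture
satellite = mix(canopy, litter, 0.30)
f_best = best_litter_fraction(canopy, litter, satellite)
```

In this framing, "greater proportions of litter at the satellite scale" corresponds to a larger best-fit f_litter for satellite than for airborne spectra.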
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data sources is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak-season hyperspectral data, enabling more detailed discrimination of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for the accurate interpretation of vegetation signals at landscape scales.
The unprecedented increase in atmospheric concentrations of carbon dioxide (CO2) and other greenhouse gases (GHG) through anthropogenic activities since the Industrial Revolution impacts various earth system processes, commonly referred to as 'climate change' (CC). CC confronts aquatic ecosystems with extreme abiotic perturbations that potentially alter the interrelations between functional autotrophic and heterotrophic plankton groups. These relations, however, modulate biogeochemical cycling and mediate the functioning of aquatic ecosystems as C sources or sinks to the atmosphere. The aim of this thesis was therefore to investigate how different aspects of CC influence the community composition and functioning of pelagic heterotrophic bacteria. These organisms constitute a major component of biogeochemical cycling and largely determine the balance between autotrophic and heterotrophic processes.
Given the vast range of potential CC impacts, this thesis focuses on the following two aspects: (1) Increased exchange of CO2 across the atmosphere-water interface and the reaction of CO2 with seawater lead to profound shifts in seawater carbonate chemistry, commonly termed 'ocean acidification' (OA), with consequences for organism physiology and the availability of dissolved inorganic carbon (DIC) in seawater. (2) The increase in atmospheric GHG concentrations affects the efficiency with which the Earth cools to space, altering global surface temperature and climate. With ongoing CC, shifts in the frequency and severity of episodic weather events such as storms are expected, which might particularly affect lake ecosystems by disrupting thermal summer stratification. Both aspects of CC were studied at the ecosystem level in large-volume mesocosm experiments, using the Kiel Off-shore Mesocosms for Future Ocean Simulations (KOSMOS) deployed at different coastal marine locations, and the LakeLab facility in Lake Stechlin.
We evaluated the impact of OA on heterotrophic bacterial metabolism in a brackish coastal ecosystem during low-nutrient summer months in the Baltic Sea. Several in situ experiments have already assessed potential OA-induced changes in natural plankton communities under diverse spatial and seasonal conditions. However, most studies were performed at high phytoplankton biomass, partly provoked by nutrient amendments. Our study highlights potential OA effects at low-nutrient conditions, which are representative of most parts of the ocean and of particular interest in current OA research. The results suggest that during extended periods of low nutrient concentrations, increasing pCO2 levels indirectly impact the growth balance of heterotrophic bacteria via trophic bacteria-phytoplankton interactions and shift the ecosystem towards a more autotrophic state.
Further work investigated how OA affects heterotrophic bacterial transformation of dissolved organic matter (DOM) in two mesocosm studies performed at different nutrient conditions. We observed similar succession patterns for individual compound pools during a phytoplankton bloom and subsequent accumulation of these compounds irrespective of the pCO2 treatment. Our results indicate that OA-induced changes in the dynamics of bacterial DOM transformation, and potential impacts on DOM quality, are unlikely. In addition, there was no indication that, depending on nutrient conditions, different amounts of photosynthetic organic matter are channelled into the more recalcitrant DOM pool. This provides novel insights into the general dynamics of the marine DOM pool.
A fourth enclosure experiment in the oligo-mesotrophic Lake Stechlin assessed the impact of a severe summer storm on lake bacterial communities during thermal stratification by means of artificial mixing. Mixing disrupted and lowered the thermocline, expanded the upper mixed layer, and substantially changed the physical-chemical properties of the water. Deep-water entrainment and the associated physical-chemical changes significantly affected relative bacterial abundances for about one week. Afterwards, a pronounced cyanobacterial bloom developed in response to mixing, which affected the community assembly of heterotrophic bacteria. Colonization and mineralization of senescent phytoplankton cells by heterotrophic bacteria largely determined C sequestration to the sediment. About six weeks after mixing, bacterial communities and the measured activity parameters converged to control conditions. Summer storms thus have the potential to affect bacterial communities for a prolonged period during summer stratification. The results highlight effects on community assembly and heterotrophic bacterial metabolism associated with the entrainment of deep water into the mixed layer, and assess the consequences of an episodic disturbance event for the coupling between bacterial metabolism and autochthonous DOM production in large-volume clear-water lakes.
Altogether, this doctoral thesis reveals substantial sensitivities of heterotrophic bacterial metabolism and community structure in response to OA and a simulated summer storm event, which should be considered when assessing the impact of climate change on marine and lake ecosystems.
The trace gases CO2 and CH4 are among the most relevant greenhouse gases and represent important exchange fluxes of the global carbon (C) cycle. Their atmospheric concentrations have increased significantly since the mid-18th century as a result of intensified anthropogenic activities, especially land use and land-use change. To mitigate global climate change and ensure food security, land-use systems need to be developed that favor reduced trace gas emissions and sustainable soil carbon management. This requires the accurate and precise quantification of the influence of land use and land-use change on CO2 and CH4 emissions. A common method to determine the trace gas dynamics and the C sink or source function of a particular ecosystem is the closed chamber method. This method is often used on the assumption that accuracy and precision are high enough to determine differences in C gas emissions, e.g., for treatment comparisons or between ecosystem components.
However, the broad range of chamber designs, related operational procedures, and data-processing strategies described in the scientific literature contributes to the overall uncertainty of closed chamber-based emission estimates. The value of meta-analyses is therefore limited, since these methodological differences hamper comparability between studies. A standardization of closed chamber data acquisition and processing is thus much needed.
Within this thesis, a set of case studies was performed to: (I) develop standardized routines for unbiased data acquisition and processing, with the aim of providing traceable, reproducible, and comparable closed chamber-based C emission estimates; (II) validate those routines by comparing C emissions derived using closed chambers with independent C emission estimates; and (III) reveal the processes driving the spatio-temporal dynamics of C emissions by developing (data-processing-based) flux separation approaches.
The case studies showed: (I) the importance of testing chamber designs under field conditions for adequate sealing integrity to ensure unbiased flux measurements; compared to sealing integrity, the use of a pressure vent and fan was of minor importance, affecting mainly measurement precision; (II) that the developed standardized data-processing routines proved to be a powerful and flexible tool for estimating C gas emissions, and that this tool can be successfully applied to a broad range of flux data sets from very different ecosystems; (III) that automatic chamber measurements capture the temporal dynamics of CO2 and CH4 fluxes very well and, most importantly, accurately detect small-scale spatial differences in the development of soil C when validated against repeated soil inventories; and (IV) that a simple algorithm to separate CH4 fluxes into ebullition and diffusion improves the identification of environmental drivers, allowing for accurate gap-filling of measured CH4 fluxes.
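The flux-separation idea in (IV) can be illustrated with a running-median filter: treat the slowly varying baseline as diffusion and attribute excursions well above it to ebullition. This is an illustrative sketch under assumptions (window size, robust MAD threshold), not the exact algorithm developed in the thesis:

```python
import numpy as np

def separate_ch4_flux(flux, window=5, k=3.0):
    # running median as the diffusive baseline; excursions above
    # baseline + k * MAD are attributed to ebullition
    flux = np.asarray(flux, dtype=float)
    n = len(flux)
    diffusion = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        diffusion[i] = np.median(flux[lo:hi])
    mad = np.median(np.abs(flux - diffusion))
    ebullition = np.where(flux > diffusion + k * mad, flux - diffusion, 0.0)
    return diffusion, ebullition

# synthetic flux series: constant diffusive flux with two ebullition spikes
flux = np.ones(50)
flux[10], flux[30] = 10.0, 8.0
diffusion, ebullition = separate_ch4_flux(flux)
```

Gap-filling would then model the smooth diffusion component against environmental drivers, while ebullition is handled as discrete events.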
Overall, the proposed standardized data acquisition and processing routines strongly improved the detection accuracy and precision of source/sink patterns of gaseous C emissions. Hence, future studies, which consider the recommended improvements, will deliver valuable new data and insights to broaden our understanding of spatio-temporal C gas dynamics, their particular environmental drivers and underlying processes.
A reliable inference of networks from data is of key interest in many scientific fields. Several methods have been suggested in the literature to reliably determine the links of a network. These techniques rely on statistical methods, typically controlling the number of false positive links but not considering false negative links. In this thesis, new methodologies to improve network inference are suggested. Initial analyses demonstrate the impact of false positive and false negative conclusions about the presence or absence of links on the resulting inferred network. Revealing the importance of making well-considered choices in turn motivates new approaches to enhance existing network reconstruction methods.
A simulation study, presented in Chapter 3, shows that different values balancing false positive and false negative conclusions about links should be used in order to reliably estimate network characteristics. The existence of type I and type II errors in the reconstructed network, referred to as the biased network, is accepted. Consequently, an analytic method that describes the influence of these two errors on the network structure is explored. As a result of this analysis, an analytic formula for the density of the biased vertex degree distribution is derived (Chapter 4).
In the inverse problem, the vertex degree distribution of the true underlying network is analytically reconstructed, assuming known probabilities of type I and type II errors (α and β). Chapters 4-5 show that the method is robust to incorrect estimates of α and β within reasonable limits. In Chapter 6, an iterative procedure to enhance this method is presented for the case of large errors in the estimates of α and β.
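The forward direction of this problem, how α (false positive probability) and β (false negative probability) distort vertex degrees, can be sketched with a small simulation; the analytic density derived in the thesis is not reproduced here. A node with true degree k in an n-node network has expected observed degree k(1-β) + (n-1-k)α:

```python
import numpy as np

def expected_biased_degree(k, n, alpha, beta):
    # surviving true links plus spuriously added links
    return k * (1.0 - beta) + (n - 1 - k) * alpha

def bias_adjacency(A, alpha, beta, rng):
    # type II errors delete true links with probability beta,
    # type I errors add spurious links with probability alpha
    n = A.shape[0]
    B = A.copy()
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j]:
                B[i, j] = B[j, i] = int(rng.random() >= beta)
            else:
                B[i, j] = B[j, i] = int(rng.random() < alpha)
    return B

rng = np.random.default_rng(42)
n = 200
A = np.zeros((n, n), dtype=int)
for i in range(n):                      # true network: a ring, so every true degree is 2
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
B = bias_adjacency(A, alpha=0.01, beta=0.2, rng=rng)
mean_observed = B.sum(axis=1).mean()    # scatters around 2*(1-0.2) + 197*0.01 = 3.57
```

Inverting this relation, given α and β, is exactly the reconstruction problem treated analytically in Chapters 4-6.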
The investigations presented so far focus on the influence of false positive and false negative links on network characteristics. In Chapter 7, the analysis is reversed: the study focuses on the influence of network characteristics on the probabilities of type I and type II errors, in the case of networks of coupled oscillators. The probabilities α and β are influenced by the shortest path length and the detour degree, respectively. These results have been used to improve network reconstruction when the true underlying network is not known a priori, introducing a novel and advanced concept of threshold.
Carbon fibers are well established in aerospace and, owing to their high tensile strength, in particular their high elastic moduli, and their low density, are becoming increasingly important in everyday applications such as the automotive, wind power, and sports sectors. Because of their high cost, half of which stems from precursor production, including its synthesis and the solution-spinning process, melt-spinnable alternative precursors are attracting growing interest. Carbon fiber production relies almost exclusively on polyacrylonitrile (PAN), which undergoes irreversible exothermic cyclization reactions before melting, followed by decomposition. One way to lower the melting temperature of polymers is to introduce comonomers as internal plasticizers, increasing the free volume and reducing intermolecular interactions. As shown at the Fraunhofer IAP, 2-methoxyethyl acrylate (MEA) can be used to lower the melting temperature, yielding novel PAN-based precursors. To use the PAN-co-MEA precursor in the subsequent process steps of carbon fiber production, the thermoplastic fibers must be converted into thermally stable fibers without thermoplastic behavior. A new process step (pre-stabilization) was introduced, which leads to cleavage of the comonomer side chain under alkaline conditions. Besides the ester hydrolysis, further reactions take place that have not yet been sufficiently investigated for this material. Further open questions concern the kinetics of pre-stabilization and the determination of suitable process conditions.
To this end, pre-stabilization was transferred to the laboratory scale and possible compositions of the reaction medium, consisting of DMSO and a KOH solution, were evaluated. The treatment was carried out at pre-stabilization times of up to 30 min and at temperatures of 40, 50, and 60 °C in order to elucidate the chemical structural changes, primarily by NMR spectroscopy. The ester hydrolysis of the comonomer, which leads to cleavage of 2-methoxyethanol, was detected by 1H NMR spectroscopy.
A model was developed that describes the chemical and physical structural changes during pre-stabilization. The first reaction to occur is the ester hydrolysis at the comonomer, which proceeds from the fiber surface inward and is initiated by the presence of DMSO in combination with the KOH solution (superbase). The time course of the ester hydrolysis can be divided into three regimes. The first regime, from the start of pre-stabilization, is characterized by diffusion of the basic anions into the fiber; the second by the reaction at the ester group of the comonomer; and the third by the final reactions in the fiber interior and by diffusive processes of products and reactants. The second regime can be described by a pseudo first-order reaction, since sufficient diffusion of the reactants into the fiber has already taken place by this stage. At 50 °C, diffusion in the first regime plays a minor role compared with the reaction. On raising the temperature to 60 °C, the diffusion rate becomes lower relative to the reaction rate. The side reactions were characterized by 13C CP/MAS NMR spectroscopy, elemental analysis, and birefringence measurements. During the alkaline ester hydrolysis, the nitrile groups begin to be consumed, forming primary carboxamides and carboxylic acids. To describe this conversion, a method was developed that involves the addition of 13C CP/MAS NMR spectra of the model substances PAN, PAM, and PAA. Further reactions include the formation of conjugated double bonds, which in particular indicate cyclization of the nitriles.
The wet-chemically initiated cyclization of the nitrile groups can lead to shorter stabilization times and, owing to lower heat release, to a more controllable stabilization process, and ultimately to cost savings for the overall process. The conversion of the nitrile groups was also well described by a pseudo-first-order reaction. DMSO initiates the ester hydrolysis, although the KOH concentration has a greater influence on the reaction rate of the ester and nitrile hydrolysis than the DMSO concentration. Both reactions show a comparable temperature dependence. Increasing the prestabilization time and the KOH or DMSO concentration leads to the migration of low-molecular-weight components of the fiber material to the surface and to the formation of localized deposits, up to the point of interconnected single fibers. A further increase in prestabilization time or concentration leads to an increasing carboxylic acid content and to swelling of the fiber material, whereby the deposits diffuse into the reaction medium. The deposits contain chlorine, which entered the material system through the washing step with HCl and was reduced by parameter adjustments. Through prestabilization, the meltable fibers were successfully converted into non-thermoplastic fibers via a core-shell structure.
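The pseudo-first-order description applied to both the ester and the nitrile conversion can be sketched numerically. In the minimal sketch below, the rate constant is a hypothetical placeholder; real values would have to be fitted to the NMR conversion data described above.

```python
import math

def pseudo_first_order_conversion(t_min, k_per_min):
    """Fraction converted after t_min minutes, assuming
    pseudo-first-order kinetics: X(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k_per_min * t_min)

# Illustrative (assumed) rate constant, NOT a fitted value.
k = 0.3  # 1/min, hypothetical
for t in (0, 5, 10, 30):
    print(f"t = {t:2d} min -> X = {pseudo_first_order_conversion(t, k):.2f}")
```

Fitting k separately at each temperature (40, 50, 60 °C) would then expose the temperature dependence noted in the text.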
To determine a suitable process window for the subsequent thermal treatment of the prestabilized fibers, three criteria were identified on which the evaluation was based. The first criterion is the need to completely eliminate the thermoplastic behavior of the fibers. The second criterion was the fiber morphology: based on SEM images, fiber bundles with separated single fibers and without deposits were selected for the subsequent stabilization. The third criterion is the lowest possible conversion of the nitrile groups, in order to avoid prestabilization conditions that promote side reactions.
From these investigations, a prestabilization temperature of 60 °C was identified as suitable. Furthermore, highly alkaline compositions of the reaction medium with KOH concentrations of 1, 1.5, and 2 M (preferably 1.5 M) and 50 vol% DMSO, with reaction times of under 10 min, yield suitable fibers. An MEA content below 2 mol% renders the fibers infusible. Thermally stable fibers suitable for the subsequent stabilization furthermore contain 68–80 mol% nitrile groups, 20–25 mol% carboxylic acids, up to 15 mol% primary carboxamides, and cyclized structures.
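The stated composition window can be encoded as a simple suitability check. This is a minimal sketch: the thresholds are taken directly from the text, while the function name and interface are purely illustrative.

```python
def fiber_suitable(nitrile, cooh, amide, mea):
    """Check a fiber composition (all values in mol%) against the
    window reported for thermally stable, stabilization-ready fibers:
    MEA < 2, nitrile 68-80, carboxylic acid 20-25, amide <= 15."""
    return (mea < 2
            and 68 <= nitrile <= 80
            and 20 <= cooh <= 25
            and amide <= 15)

print(fiber_suitable(nitrile=72, cooh=22, amide=10, mea=1.5))  # within the window
```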
Without pragmatics, communication would not be possible, since we could not interpret linguistic utterances. For any learner of a language they do not yet master, linguistic competence alone is not enough, since their goal is to communicate with other people in specific contexts. Only teaching that fosters the ability to produce and understand utterances in order to perform speech acts, selecting those most appropriate to a given context, can claim to be efficient.
The work presented here aims to introduce to the scientific community, and especially to those directly and indirectly involved in the teaching process, the concept of verbal pragmatics and to contrast it with concepts such as grammar, culture, and interculturality. It further seeks to raise awareness of the importance and urgent necessity of establishing pragmatics as a relevant discipline in the communicative process, and in particular of its systematic and explicit inclusion in textbooks of Spanish as a foreign language developed for the school context. To this end, the presence of pragmatic elements and the promotion of pragmatic competence in beginners' textbooks are investigated, since these are the materials used above all in schools and are decisive in specifying contents, type of progression, and methodology.
The externalization of any communication is subject to the speaker's mode of access to the information conveyed. The observations drawn from our data show that all eight verbs studied express mechanisms of knowledge acquisition which, borrowing from Vogeleer (1995: 92), we have called "cognitive access to knowledge". It is this intrinsic value that earns these terms the designation of evidential verbs. In other words, they are elements that make explicit the speaker's processes of access to knowledge. This source of knowledge can be direct (sight, touch, hearing, smell, ...), indirect (hearsay), and above all inferred. By inference we mean a process of analyzing and relating elements (premises) that allows a conclusion to be drawn by deduction, induction, or abduction. And depending on whether these premises tend to be more or less reliable, the inferential processes will carry epistemic values to varying degrees.
On the rhetorical-syntactic level, our analyses have shown that all the cognitive verbs (CV) in this study require the occurrence of other phrasal constituents (actants) that they govern. It is thanks to this verbal valency that they retain governing power in asyndetic constructions. They are thus the matrices of the elements to which they relate. As for the word-order kinetics of these verbs, it fulfills both a rhetorical and a syntactic function. Indeed, this particular and often disconcerting arrangement expresses a syntactic figure with rhetorical effect: hyperbaton. This atypical construction, through its nonconformist arrangements, gives the utterance a regressive sense and confers salience on the terms thereby highlighted.
Light-induced pH cycle
(2019)
Background Many biochemical reactions depend on the pH of their environment, and some are strongly accelerated in acidic surroundings. A classical approach to controlling biochemical reactions non-invasively is to change the temperature. If, however, the pH could be controlled by optical means using photo-active chemicals, it would become possible to accelerate suitable biochemical reactions with light. Optically switching the pH can be achieved using photoacids. A photoacid is a molecule with a functional group that releases a proton upon irradiation with a suitable wavelength, acidifying the surrounding aqueous environment. A major goal of this work was to establish a non-invasive method for optically controlling the pH of aqueous solutions, offering the opportunity to expand the portfolio of controllable chemical reactions. To demonstrate photo-switchable pH cycling, we chose an enzymatic assay using acid phosphatase, an enzyme with strongly pH-dependent activity.
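The pH shift produced by a photoacid can be illustrated with the standard weak-acid approximation: irradiation effectively lowers the acid's pKa, which lowers the solution pH. The pKa values below are hypothetical placeholders for a generic reversible photoacid, not measured values for G-acid.

```python
import math

def weak_acid_ph(pka, conc_molar):
    """Approximate pH of a weak monoprotic acid solution:
    pH ~ 0.5 * (pKa - log10(C)); valid when dissociation is small."""
    return 0.5 * (pka - math.log10(conc_molar))

# Hypothetical ground-state (dark) and photo-activated (light) pKa values.
pka_dark, pka_light = 9.0, 2.0
c = 1e-3  # mol/L
print(f"dark : pH ~ {weak_acid_ph(pka_dark, c):.1f}")
print(f"light: pH ~ {weak_acid_ph(pka_light, c):.1f}")
```

For the strongly acidic photo-activated state the weak-dissociation approximation becomes rough, but the sketch captures the qualitative dark/light pH swing exploited in the assay.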
Results In this work we demonstrated light-induced, reversible, non-invasive control of the enzymatic activity of acid phosphatase. Since no suitable device was commercially available, a high-power LED array fitting a standard 96-well microtiter plate was designed and built for these experiments. Heat management and a lateral ventilation system to avoid heat accumulation were established, and a stable light intensity was achieved. Different photoacids were characterized and their pH-dependent absorption spectra recorded. Using the reversible photoacid G-acid as a proton donor, the pH can be changed reversibly with high-power 365 nm UV LEDs. To demonstrate the pH cycling, acid phosphatase, which is hydrolytically active under acidic conditions, was chosen. An assay combining the photoacid with the enzyme was established, which also confirmed that G-acid does not inhibit acid phosphatase. The feasibility of reversibly regulating the enzyme's pH-dependent activity by optical means was demonstrated by controlling the enzymatic activity with light. The enzyme activity was shown to depend on the light exposure time only; when samples were kept in the dark, no enzymatic activity was recorded. The process can be rapidly controlled by simply switching the light on and off and should be applicable to a wide range of enzymes and biochemical reactions.
Conclusions Reversible photoacids offer light-dependent regulation of pH, making them extremely attractive for miniaturizable, non-invasive, and time-resolved control of biochemical reactions. Many enzymes have sharply pH-dependent activity, so the setup established in this thesis could be used for a versatile enzyme portfolio. The demonstrated photo-switchable strategy could also be applied to non-enzymatic assays, greatly facilitating assay establishment. Photoacids have potential for high-throughput methods and automation. We demonstrated that photoacids can be controlled using commonly available LEDs, making their use in highly integrated devices and instruments more attractive. The successfully designed 96-well high-power UV LED array presents an opportunity for general combinatorial analysis, e.g. in photochemistry, where high light intensity is needed to investigate various reactions.
Interlocutors typically link their utterances to the discourse environment and enrich communication by linguistic (e.g., information packaging) and extra-linguistic (e.g., eye gaze, gestures) means to optimize information transfer. Psycholinguistic studies underline that, for meaning computation, listeners profit from linguistic and visual cues that draw their focus of attention to salient information. This dissertation is the first work that examines how linguistic compared to visual salience cues influence sentence comprehension using the very same experimental paradigms and materials, that is, German subject-before-object (SO) and object-before-subject (OS) sentences, across the two cue modalities. Linguistic salience was induced by indicating a referent as the aboutness topic. Visual salience was induced by implicit (i.e., unconscious) or explicit (i.e., shared) manipulations of listeners' attention to a depicted referent.
In Study 1, a selective, facilitative impact of linguistic salience on the context-sensitive OS word order was found using offline comprehensibility judgments. During online sentence processing, this impact was characterized by a reduced sentence-initial late positivity, which reflects reduced processing costs for updating the current mental representation of the discourse. This facilitative impact of linguistic salience was not replicated by means of an implicit visual cue (Study 2) previously shown to modulate word-order preferences during sentence production. However, a gaze shift to a depicted referent as an indicator of shared attention eased sentence-initial processing similarly to linguistic salience, as revealed by reduced reading times (Study 3). Yet, unlike linguistic salience, this cue did not modulate the strong subject-antecedent preference during later pronoun resolution. Taken together, these findings suggest a significant impact of linguistic and visual salience cues on sentence comprehension, which substantiates that information delivered via language and via the visual environment is integrated into the mental representation of the discourse; but the way salience is induced is crucial to its impact.
The goal of this thesis is to broaden the empirical basis for a better, comprehensive understanding of massive-star evolution, star formation, and feedback at low metallicity. Low-metallicity massive stars are a key to understanding the early universe, yet quantitative information on metal-poor massive stars has so far been sparse. Quantitative spectroscopic studies of massive-star populations associated with large-scale ISM structures had not previously been performed at low metallicity, although they are important for investigating star-formation histories and feedback in detail. Much of this work relies on spectroscopic observations with VLT-FLAMES of ~500 OB stars in the Magellanic Clouds. When available, the optical spectroscopy was complemented by UV spectra from the HST, IUE, and FUSE archives. The two representative young stellar populations studied are associated with the superbubble N 206 in the Large Magellanic Cloud (LMC) and with the supergiant shell SMC-SGS 1 in the Wing of the Small Magellanic Cloud (SMC), respectively. We performed spectroscopic analyses of the massive stars using the non-LTE Potsdam Wolf-Rayet (PoWR) model atmosphere code. We estimated the stellar, wind, and feedback parameters of the individual massive stars and established their statistical distributions.
The mass-loss rates of the N 206 OB stars are consistent with theoretical expectations for LMC metallicity. The most massive and youngest stars show nitrogen enrichment at their surfaces and are found to be slower rotators than the rest of the sample. The N 206 complex has undergone star-formation episodes for more than 30 Myr, with a current star-formation rate higher than the LMC average. The spatial age distribution of stars across the complex possibly indicates triggered star formation due to the expansion of the superbubble. Three very massive, young Of stars in the region dominate the ionizing and mechanical feedback among the hundreds of other OB stars in the sample. The current stellar-wind feedback rate from the two WR stars in the complex is comparable to that released by the whole OB sample. We see only a minor fraction of this stellar-wind feedback converted into X-ray emission. In this LMC complex, stellar winds and supernovae contribute equally to the total energy feedback, which eventually powered the central superbubble. However, the total energy input accumulated over the lifetime of the superbubble significantly exceeds the observed energy content of the complex. This energy deficit, together with the morphology of the complex, suggests a leakage of hot gas from the superbubble.
With a detailed spectroscopic study of massive stars in SMC-SGS 1, we provide the stellar and wind parameters of a large sample of OB stars at low metallicity, including those in the lower mass range. The stellar rotation velocities show a broad, tentatively bimodal distribution, with Be stars among the fastest rotators. A few very luminous O stars are found close to the main sequence, while all other, slightly evolved stars obey a strict luminosity limit. Considering additional massive stars in evolved stages, with published parameters and located all over the SMC, essentially confirms this picture. The comparison with single-star evolutionary tracks suggests a dichotomy in the fate of massive stars in the SMC. Only stars with an initial mass below 30 solar masses seem to evolve from the main sequence to the cool side of the HRD, becoming red supergiants and exploding as type II-P supernovae. In contrast, more massive stars appear to always stay hot and might evolve quasi-chemically homogeneously, finally collapsing to relatively massive black holes. However, we find no indication that chemical mixing is correlated with rapid rotation. We measured the key parameters of stellar feedback and established the links between the rates of star formation and supernovae. Our study demonstrates that in metal-poor environments stellar feedback is dominated by core-collapse supernovae in combination with the winds and ionizing radiation supplied by a few of the most massive stars. We found indications of a stochastic mode of star formation, in which the resulting stellar population is fully capable of producing large-scale structures such as the supergiant shell SMC-SGS 1 in the Wing. The low level of feedback in metal-poor stellar populations allows star-formation episodes to persist over long timescales.
Our study showcases the importance of quantitative spectroscopy of massive stars with adequate stellar-atmosphere models for understanding star formation, evolution, and feedback. The stellar-population analyses in the LMC and SMC show that massive stars and their impact can be very different depending on their environment. Owing to their different metallicities, the massive stars in the LMC and the SMC follow different evolutionary paths: their winds differ significantly, and the key feedback agents are different. As a consequence, star formation can proceed in different modes.