The characterization of exoplanets using high-resolution transmission spectroscopy initiated a new era, making it possible to trace atmospheric signatures at high altitudes in exoplanet atmospheres and to determine atmospheric properties that enrich our understanding of the formation and evolution of planetary systems. In contrast to our solar system, where the gaseous planets move on wide orbits, Jupiter-type exoplanets have been detected in other stellar systems orbiting their host stars within a few days, on close orbits: the so-called hot and ultra-hot Jupiters. The best-studied ones are HD209458b and HD189733b, the first exoplanets for which absorption was detected in their atmospheres, namely from the alkali sodium lines. For hot Jupiters, the resonant alkali lines are among the atmospheric species with the strongest absorption signatures, owing to their large absorption cross-sections. However, although the alkali metals sodium and potassium were detected in low-resolution observations of various giant exoplanets, potassium, unlike sodium, was absent in several high-resolution investigations. The reason for this is puzzling, since both alkalis have very similar physical and chemical properties (e.g., condensation and ionization behavior). Obtaining high-resolution transit observations of HD189733b and HD209458b, we were able to detect potassium on HD189733b (Manuscript I), the first high-resolution detection of potassium on an exoplanet. The absence of potassium on HD209458b could be explained by depletion processes such as condensation, photo-ionization, or high-altitude clouds. In a further study (Manuscript II), we resolved the potassium line and compared it to previously detected sodium absorption on this planet.
The comparison showed that the potassium lines either trace different altitudes and temperatures than the sodium lines, or are depleted such that the planetary Na/K ratio is far larger than the stellar one. A comparison of the alkali lines with synthetic line profiles showed that the sodium lines were much broader than the potassium lines, probably induced by winds. To investigate this, the effect of zonal streaming winds on the sodium lines of Jupiter-type planets was examined in a further study (Manuscript III), showing that such winds can significantly broaden the Na lines and that high-resolution observations can trace winds with different properties. Furthermore, investigating Na-line observations for different exoplanets, I showed that the Na-line broadening follows a trend in which cooler planets show stronger line broadening, hinting at stronger winds, in good agreement with theoretical predictions. Each presented manuscript builds on the results published in the previous one, yielding a unified study of the exoplanet HD189733b. The investigation of the potassium absorption required accounting for several effects: the removal of telluric lines and the effect of center-to-limb variation (see Manuscript I), the residual Rossiter-McLaughlin effect (see Manuscript II), and the broadening of spectral lines in a translucent atmospheric ring by zonal jet streams (see Manuscript III). This thesis shows that high-resolution transmission spectroscopy is a powerful tool to probe sharp alkali-line absorption in giant exoplanet atmospheres and to investigate the properties and dynamics of hot-Jupiter atmospheres.
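The quantity measured in such transit observations can be summarized compactly. As an illustration (a standard textbook relation, not quoted from the thesis itself), the wavelength-dependent transit depth is

```latex
\delta(\lambda) \;=\; \left(\frac{R_p(\lambda)}{R_\star}\right)^{2}
\;\approx\; \delta_0 \;+\; \frac{2\,R_p\,h(\lambda)}{R_\star^{2}},
```

where \(R_p(\lambda)\) is the effective planetary radius at wavelength \(\lambda\), \(R_\star\) the stellar radius, and \(h(\lambda)\) the additional atmospheric altitude probed inside a strong line such as the Na D or K resonance doublets; it is this excess absorption in the line cores that high-resolution transmission spectroscopy resolves.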
The continual advancement of VR systems offers new possibilities for interacting with virtual objects in three-dimensional space, but it also confronts developers of VR applications with new challenges. Selection and manipulation techniques must be chosen with the application scenario, the target group, and the available input and output devices in mind. This work contributes to supporting the selection of suitable interaction techniques. To this end, a representative set of selection and manipulation techniques was examined and, taking existing classification systems into account, a taxonomy was developed that allows the techniques to be analyzed with respect to interaction-relevant properties. Based on this taxonomy, techniques were selected and compared in an exploratory study in order to draw conclusions about the dimensions of the taxonomy and to generate new evidence for the advantages and disadvantages of the techniques in specific application scenarios. The results of the work culminate in a web application that specifically supports developers of VR applications in selecting suitable selection and manipulation techniques for an application scenario, by filtering techniques on the basis of the taxonomy and sorting them using the results of the study.
Hackers and "Haecksen" (female hackers) belong to the avant-garde of computerization. From the late 1970s onward, they emerged in the Federal Republic and in the GDR as idiosyncratic computer users with specialized knowledge. They appropriated the medium playfully, created spaces for contact, and thus actively involved themselves in the process of computerization. Through their boundary-crossing, they revealed both the opportunities and the risks of digitalization.
Julia Gül Erdogan traces the emergence of hacker cultures in East and West Germany. She analyzes how their at times subversive practices challenged power structures in politics, the economy, and society. At the same time, the work highlights the commonalities and differences of early sub- and countercultural computer use in the two German states.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high patient variability. Model-informed precision dosing (MIPD) is a quantitative approach to dose individualization based on mathematical modeling of dose-response relationships, integrating therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while being applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD which (i) provide efficient, fully Bayesian forecasting of the individual therapy outcome, including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) into a comprehensive decision framework for dose individualization, and (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy, with its major dose-limiting toxicity neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home-monitoring devices that increase the amount of available TDM data, the data processing of sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
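The sequential DA idea can be sketched with a minimal bootstrap particle filter. The one-compartment model, the prior, and all numbers below are illustrative assumptions, not the thesis's actual neutropenia model:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_conc(k_elim, dose=100.0, t=24.0, volume=10.0):
    """Illustrative one-compartment model: C(t) = (dose/V) * exp(-k * t)."""
    return dose / volume * np.exp(-k_elim * t)

def assimilate(particles, observation, obs_sigma=0.1):
    """One sequential DA step: reweight each particle (a candidate value of
    the patient's elimination rate) by the likelihood of the new TDM
    observation, then resample to avoid weight degeneracy."""
    likelihood = np.exp(-0.5 * ((predict_conc(particles) - observation) / obs_sigma) ** 2)
    weights = likelihood / likelihood.sum()
    resampled = rng.choice(particles, size=particles.size, p=weights)
    # small jitter preserves particle diversity after resampling
    return resampled + rng.normal(0.0, 0.002, size=particles.size)

# prior for the individual parameter from a (hypothetical) population analysis
particles = rng.lognormal(mean=np.log(0.10), sigma=0.3, size=5000)

true_k = 0.15  # unknown individual value generating the synthetic TDM data
for _ in range(3):  # three TDM samples, each drawn 24 h after a dose
    observation = predict_conc(true_k) + rng.normal(0.0, 0.05)
    particles = assimilate(particles, observation)

post_mean = particles.mean()
print(f"posterior mean k_elim = {post_mean:.3f} (truth: {true_k})")
```

Each TDM observation tightens the particle cloud around the patient's individual parameter; clinical quantities of interest (e.g., the probability of a given neutropenia grade) could then be estimated as fractions of the particle ensemble.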
By transferring concepts from DA and RL, we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to account for the delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach to MIPD. Additionally, we show that RL can be used to gain insights into which patient characteristics are important for dose selection. In a simulation study based on a recent clinical trial (the CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as models are typically based on data from comparably small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning of the underlying model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
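The cross-patient learning step can be illustrated with a conjugate Normal-Normal update, in which each site contributes only the posterior mean and variance of its patient's (log-transformed) parameter. The distributions and all numbers are illustrative assumptions, not the thesis's actual inference scheme:

```python
def update_population(prior_mu, prior_var, patient_mean, patient_var):
    """Sequential conjugate (Normal-Normal) update of the population-level
    mean of a log-transformed model parameter. Only per-patient posterior
    summaries enter -- no raw individual TDM data crosses sites."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / patient_var)
    post_mu = post_var * (prior_mu / prior_var + patient_mean / patient_var)
    return post_mu, post_var

# population prior from a (hypothetical) small clinical trial
mu, var = 0.0, 1.0

# posterior summaries (mean, variance) reported by three care centers
for patient_mean, patient_var in [(0.4, 0.2), (0.6, 0.3), (0.5, 0.25)]:
    mu, var = update_population(mu, var, patient_mean, patient_var)

print(mu, var)  # the population estimate sharpens with every patient
```

Because each update consumes only two summary numbers per patient, the population-level posterior can be maintained centrally while individual inference stays local to each center of care.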
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
The presented study investigated the influence of microbial and biogeochemical processes on the transport-related physical properties and the fate of microplastics in freshwater reservoirs. The overarching goal was to elucidate the mechanisms leading to the sedimentation and deposition of microplastics in such environments. This is important, as large amounts of initially buoyant microplastics are found in reservoir sediments worldwide, yet the transport processes that lead to microplastic accumulation in sediments have so far been understudied.
The impact of biofilm formation on the density and subsequent sedimentation of microplastics was investigated in the eutrophic Bautzen reservoir (Chapter 2). Biofilms are complex microbial communities attached to submerged surfaces through a slimy organic film. The mineral calcite was detected in the biofilms and led to the sinking of the overgrown microplastic particles. The calcite was of biogenic origin, most likely precipitated by sessile cyanobacteria within the biofilms.
Biofilm formation was also studied in the mesotrophic Malter reservoir. Unlike in Bautzen reservoir, biofilm formation did not govern the sedimentation of different microplastics in Malter reservoir (Chapter 3). Instead, autumnal lake mixing led to the formation of sinking aggregates of microplastics and iron colloids. Such colloids form when anoxic, iron-rich water from the hypolimnion mixes with the oxygenated epilimnetic waters. The colloids bind organic material from the lake water, leading to the formation of large, sinking iron-organo flocs.
Hence, iron-organo floc formation and its influence on the buoyancy of microplastics and their burial in sediments of Bautzen reservoir was studied in laboratory experiments (Chapter 4). Microplastics of different shapes (fiber, fragment, sphere) and sizes were readily incorporated into sinking iron-organo flocs. In this way, initially buoyant polyethylene microplastics were transported onto sediments from Bautzen reservoir. Shortly after deposition, the microplastic-bearing flocs started to subside and transported the pollutants into deeper sediment layers. The microplastics were not released from the sediments within two months of laboratory incubation.
The stability of floc-mediated microplastic deposition was further investigated in experiments with the iron-reducing model organism Shewanella oneidensis (Chapter 5). It was shown that reduction or re-mineralization of the iron minerals did not affect the integrity of the iron-organo flocs. The organic matrix was stable under iron-reducing conditions; hence, no incorporated microplastics were released from the flocs. As similar processes are likely to take place in natural sediments, this might explain the previously described low microplastic release from the sediments.
This thesis introduced different mechanisms leading to the sedimentation of initially buoyant microplastics and their subsequent deposition in freshwater reservoirs. Novel processes such as aggregation with iron-organo flocs were identified, and the understudied issue of biofilm densification through biogenic mineral formation was further investigated. The findings have implications for the fate of microplastics within river-reservoir systems and outline the role of freshwater reservoirs as important accumulation zones for microplastics. Microplastics deposited in reservoir sediments might not be transported further by the through-flowing river. Hence, the study might contribute to better risk assessment and transport balances of these anthropogenic contaminants.
The outstanding mechanical properties of natural inorganic-organic composite materials such as bone or mussel shells arise from their hierarchical structure, which extends from the nanoscale up to the macroscopic level, and from a controlled connection along the interfaces of the inorganic and organic components.
Starting from these key principles of biological material design, this work investigated two concepts for the bioinspired structure formation of composites, based on gluing nano- or mesocrystals with functionalized poly(2-oxazoline) block copolymers, and examined their potential for producing bioinspired, self-assembled, hierarchical inorganic-organic composite structures without external forces. The concepts differed in the inorganic particles used and in the mode of structure formation.
Via a modular approach of polymer synthesis and polymer-analogous thiol-ene functionalization, a library of poly(2-oxazoline)s with different functionalities was successfully created. The block copolymers consist of a short particle-affine "adhesive block" of thiol-ene-functionalized poly(2-(3-butenyl)-2-oxazoline) and a long, water-soluble, structure-forming block of thermoresponsive and crystallizable poly(2-isopropyl-2-oxazoline), which forms hierarchical morphologies. Various analytical methods such as turbidimetry, DLS, DSC, SEM, and XRD made the thermoresponsive and crystallization behavior of the block copolymers accessible as a function of the introduced adhesive block. These polymers were shown to exhibit complex temperature- and pH-dependent clouding behavior. With respect to crystallization, the adhesive block did not change the nanoscopic crystal structure; it did, however, influence the crystallization time, the degree of crystallization, and the hierarchical morphology. This result was attributed to the different aggregation behavior of the polymers in water.
For the preparation of composites, Concept 1 used micrometer-sized copper oxalate mesocrystals with an internal nanostructure. Structure formation via the inorganic component was pursued by gluing and arranging these particles. Concept 1 yielded homogeneous, free-standing, stable composite films with a high inorganic content. However, the particle-polymer combination united unfavorable properties: the length scales of the components were too different, which prevented the self-assembly of the particles. Owing to the low aspect ratio of copper oxalate, mutual alignment by external forces also remained unsuccessful. As a result, the copper oxalate-poly(2-oxazoline) model system is not suitable for producing hierarchical composite structures.
In contrast, Concept 2 uses disc-shaped Laponite® nanoparticles and crystallizable block copolymers for structure formation via the organic component through polymer-mediated self-assembly. Complementary analytical methods (zeta potential, DLS, SEM, XRD, DSC, TEM) revealed both a controlled interaction between the components in an aqueous environment and a controlled structure formation resulting in self-assembled nanocomposites whose structure spans several length scales. It was shown that the negatively charged adhesive blocks bind specifically and selectively to the positively charged rims of the Laponite® particles, producing polymer-Laponite® nanohybrid particles that serve as basic building blocks for composite formation. The hybrid particles are electrosterically stabilized at room temperature: sterically through their long, water-interacting poly(2-isopropyl-2-oxazoline) blocks, and electrostatically via the negatively charged Laponite® faces. As a result, Concept 2, and thus structure formation via the organic component, was implemented successfully. The Laponite®-poly(2-oxazoline) model system opened the way to self-assembled, layered, quasi-hierarchical nanocomposite structures with a high inorganic content. Depending on the freely available polymer concentration during composite formation, two different composite types emerged. In addition, the work developed an explanatory model for the polymer-mediated formation process of the composite structures.
Overall, this work reveals structure-process-property relationships for forming self-assembled, bioinspired composite structures and provides new insights into suitable combinations of components and preparation conditions that allow controlled, self-assembled structure formation using functionalized poly(2-oxazoline) block copolymers.
Boon and bane
(2021)
Semi-natural habitats (SNHs) in agricultural landscapes represent important refugia for biodiversity, including organisms providing ecosystem services. Their spill-over into agricultural fields may lead to the provision of regulating ecosystem services such as biological pest control, ultimately affecting agricultural yield. Still, it remains largely unexplored how different habitat types and their distributions in the surrounding landscape shape this provision of ecosystem services within arable fields. Hence, in this thesis I investigated the effect of SNHs on biodiversity-driven ecosystem services and disservices affecting wheat production, with an emphasis on the role and interplay of habitat type, distance to the habitat, and landscape complexity.
I established transects from the field border into the wheat field, starting either from a field-to-field border, a hedgerow, or a kettle hole, and assessed beneficial and detrimental organisms and their ecosystem functions, as well as wheat yield, at several in-field distances. Using this study design, I conducted three studies in which I aimed to relate the impacts of SNHs at the field and landscape scales on ecosystem service providers to crop production.
In the first study, I observed yield losses close to SNHs for all transect types. Woody habitats, such as hedgerows, reduced yields more strongly than kettle holes, most likely due to shading by the tall vegetation. To find the biotic drivers of these yield losses close to SNHs, in the second study I measured infestation by selected wheat pests as potential ecosystem disservices to crop production. Besides relating their damage rates to the wheat yield of experimental plots, I studied the effect of SNHs on these pest rates at the field and at the landscape scale. Only weed cover could be associated with yield losses, with its strongest impact on wheat yield close to the SNH. While fungal seed infection rates did not respond to SNHs, fungal leaf infection and the herbivory rates of cereal leaf beetle larvae were positively influenced by kettle holes. The latter even increased at kettle holes with increasing landscape complexity, suggesting a release from natural enemies at isolated habitats within the field interior.
In the third study, I found that ecosystem service providers also benefit from the presence of kettle holes. Distance to an SNH decreased the species richness of ecosystem service providers, whereby the spatial range depended on species mobility: arable weeds diminished rapidly, while carabids were less affected by the distance to an SNH. Conversely, weed seed predation increased with distance, suggesting that higher food availability at field borders might have diluted predation on the experimental seeds. Intriguingly, responses to landscape complexity were mixed: while weed species richness was generally elevated with increasing landscape complexity, carabids followed a hump-shaped curve with the highest species numbers and activity-density in simple landscapes. The latter might hint that carabids profit from a minimum endowment of SNHs, while a further increase impedes their mobility. Weed seed predation was affected differently by landscape complexity depending on the weed species presented. However, in habitat-rich landscapes seed predation of the different weed species converged to similar rates, emphasizing that landscape complexity can stabilize the provision of ecosystem services. Lastly, I could relate higher weed seed predation to an increase in wheat yield, even though seed predation did not diminish weed cover. The exact mechanisms by which weed control contributes to crop production remain to be investigated in future studies.
In conclusion, I found habitat-specific responses of ecosystem (dis)service providers and their functions, emphasizing the need to evaluate the effect of different habitat types on the provision of ecosystem services not only at the field scale but also at the landscape scale. My findings confirm that, besides identifying the species richness of ecosystem (dis)service providers, the assessment of their functions is indispensable for relating the actual delivery of ecosystem (dis)services to crop production.
Bundeswehrapotheken
(2021)
"Bundeswehrapotheken" describes the origin and development of the Bundeswehr's medical materiel supply facilities from their establishment in the late 1950s into the 2000s. Initially not pharmacies in the legal sense, these facilities were at that time primarily oriented toward wartime operations and of very limited pharmaceutical capability.
Today only a few large Bundeswehr pharmacies remain, but in terms of their personnel, materiel, and infrastructure they are fully comparable in capability to civilian facilities. Staffed predominantly with military specialists and equipped accordingly, today's Bundeswehr pharmacies can in principle deliver this capability worldwide.
Structured by distinctive periods, the author has identified, contextualized, and assessed the influence of the respective legal, military, societal, and professional requirements as well as of formative events, against the background of the prevailing political and military framework conditions and the operational principles of the Bundeswehr.
Noise is ubiquitous in nature and usually gives rise to rich dynamics in stochastic systems such as oscillatory systems, which occur in fields as varied as physics, biology, and complex networks. The correlation and synchronization of two or many oscillators have been widely studied in recent years.
In this thesis, we mainly investigate two problems: the stochastic bursting phenomenon in noisy excitable systems, and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a coherent spike train in which each spike has a random number of followers due to the combined effects of time delay and noise. Synchronization, a universal phenomenon in nonlinear dynamical systems, is well illustrated by the Kuramoto model, a prominent model for the description of collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process, the spontaneous excitation rate and the probability of inducing a spike during the delayed action, can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend this to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output of the network.
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequencies, the Kuramoto model can be connected to the Vicsek model, which is widely studied in the context of collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequencies, an exact self-consistent equation for the mean field is derived and extended straightforwardly to any higher-dimensional case.
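For orientation, the classical one-dimensional noisy Kuramoto model that these results generalize reads, in standard textbook notation (not quoted from the thesis):

```latex
\dot{\theta}_i \;=\; \omega_i + \frac{K}{N}\sum_{j=1}^{N} \sin\!\left(\theta_j - \theta_i\right) + \xi_i(t),
\qquad
\langle \xi_i(t)\,\xi_j(t') \rangle \;=\; 2D\,\delta_{ij}\,\delta(t - t'),
```

where \(\theta_i\) is the phase of oscillator \(i\), \(\omega_i\) its natural frequency, \(K\) the coupling strength, and \(\xi_i\) Gaussian white noise of intensity \(D\). In higher-dimensional generalizations the scalar phase is commonly replaced by a unit vector on the sphere, which is what connects the model to swarming dynamics such as the Vicsek model.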
Cellulose is the most abundant biopolymer on Earth, and cell wall (CW) synthesis is one of the major carbon consumers in the plant cell. The structure and several interaction partners of plasma membrane (PM)-bound cellulose synthase (CESA) complexes (CSCs) have been studied extensively, but much less is understood about the signals that activate and translocate CESAs to the PM, and about how exactly cellulose synthesis is regulated during the diel cycle. The literature describes possibilities for CSC regulation through interactions with accessory proteins under stress conditions (e.g., CC1) and through post-translational modifications that regulate CSC speed and their possible anchoring in the PM (e.g., phosphorylation and S-acylation, respectively). In this thesis, 13CO2 labeling and imaging techniques were employed in the same Arabidopsis seedling growth system to elucidate how and when new carbon is incorporated into cell wall sugars and UDP-glucose, and to follow CSC behavior during the diel cycle. Additionally, a ubiquitination analysis was performed to investigate a possible mechanism affecting CSC trafficking to and/or from the PM. In wild-type seedlings, carbon is incorporated into CW glucose at a 3-fold higher rate during the light period than during the night. Furthermore, CSC density at the PM, as an indication of active cellulose-synthesizing machinery, increases in the light and falls during the night, showing that CW biosynthesis is more active in the light. Therefore, CW synthesis might be regulated by the carbon status of the cell. This regulation is broken in the starchless pgm mutant, where light and dark carbon incorporation rates into CW glucose are similar, possibly due to the high soluble sugar content in pgm during the first part of the night. Strikingly, CSC abundance at the PM in pgm is constantly low during the whole diel cycle, indicating little or no cellulose synthesis, but it can be restored with exogenous sucrose or a longer photoperiod.
Ubiquitination was explored as a possible regulatory mechanism for the translocation of primary CW CSCs from the PM, and several potential ubiquitination sites were identified. The approach in this thesis enabled the study of cellulose/CW synthesis from different angles but within the same growth system, allowing direct comparison of the methodologies, which could help in understanding the relationship between the amount of available carbon in a plant cell and the cell's capacity to synthesize cellulose/CW. Understanding which factors contribute to the regulation of cellulose synthesis and addressing these fundamental questions can provide essential knowledge for meeting the need for increased crop production.
The energy required to drive photochemical reactions is derived from charge separation across the thylakoid membrane. As a consequence of the difference in proton concentration between the chloroplast stroma and the thylakoid lumen, a proton motive force (pmf) is generated. The pmf is composed of the proton gradient (ΔpH) and the membrane potential (ΔΨ), and together they drive ATP synthesis. In nature, the amount of energy fueling photosynthesis varies due to frequent changes in light intensity. Thylakoid ion transport can adapt the energy flow through the photosynthetic apparatus to light availability by adjusting the pmf composition. Dissipation of ΔΨ reduces charge recombination at photosystem II, allowing an increase in the ΔpH component to trigger a feedback downregulation of photosynthesis. K+/H+ antiport driven by K+ Exchange Antiporter 3 (KEA3) reduces the ΔpH fraction of the pmf, thereby dampening non-photochemical quenching (NPQ). As a result, it increases photosynthetic efficiency during the transition to lower light intensity. This thesis aimed to answer questions concerning the regulation of KEA3 activity and its role in plant development. The data presented show that in plants lacking the chloroplast ATP synthase assembly factor CGL160, which have decreased ATP synthase activity, KEA3 plays a pivotal role in photosynthesis regulation and plant growth under steady-state conditions. Lack of KEA3 in the cgl160 mutant results in a strong growth impairment, as photosynthesis is limited by increased pH-dependent NPQ and decreased electron flow through the cytochrome b6f complex. Overexpression of KEA3 in the cgl160 mutant increases charge recombination at photosystem II, promoting photosynthesis. Thus, during periods of low ATP synthase activity, plants benefit from KEA3 activity. KEA3 dimerizes via its regulatory C-terminus (RCT).
The RCT responds to changes in light intensity, as plants expressing KEA3 without this domain show a reduced photoprotective response during light-intensity transients. However, those plants fix more carbon during the photosynthesis induction phase as a trade-off against long-term photoprotection, demonstrating the regulatory role of KEA3 in plant development. The KEA3 RCT faces the stroma, so its regulation depends on light-induced changes in the stromal environment. The regulation of KEA3 activity coincides with the stromal pH changes occurring during light fluctuations. ATP and ADP were shown to have an affinity for the heterologously expressed KEA3 RCT. This interaction causes conformational changes in the RCT structure, and the magnitude of the RCT-ligand interaction depends on the environmental pH. With a combination of bioinformatics and in vitro approaches, the ATP-binding site on the RCT was located. Introducing a binding-site point mutation into the KEA3 RCT in planta resulted in deregulation of antiporter activity during the transition to low light. Together, the data presented in this thesis allowed us to assess more broadly the role of KEA3 in photosynthetic adjustment and to propose models of KEA3 activity regulation throughout transitions in light intensity.
Many children struggle with reading for comprehension. Reading is a complex cognitive task depending on various sub-tasks, such as word decoding and building connections across sentences. The task of connecting sentences is guided by referential expressions. References, such as anaphoric noun phrases (Minky/the cat) or pronouns (Minky/she), signal to the reader how the protagonists of adjacent sentences are connected. Readers construct a coherent mental model of the text by resolving these references. Personal pronouns (he/she) in particular need to be resolved towards an appropriate antecedent before they can be fully understood. Pronoun resolution therefore is vital for successful text comprehension. The present thesis investigated children’s resolution of personal pronouns during natural reading as a possible source of reading comprehension difficulty. Three eye tracking studies investigated whether children aged 8-9 (Grade 3-4) resolve pronouns online during reading and how the varying information around the pronoun region influences children’s eye movement behavior.
The first study investigated whether children prefer a pronoun over a noun phrase when the antecedent is highly accessible. Children read three-sentence stories that introduced a protagonist (Mia) in the first sentence and a reference to this protagonist in one of the following sentences using either a repeated name (Mia) or a pronoun (she). For proficient readers, it was repeatedly shown that there is a preference for a pronoun over the name in these contexts, i.e., when the antecedent is salient. The first study tested the repeated name penalty effect in children using eye tracking. It was hypothesized that in contrast to proficient readers, the fluency of children’s reading processing profits from an overlapping word form (i.e., the repeated noun phrase) compared to a pronoun. This is because overlapping word forms allow for direct mapping, whereas pronouns have to be resolved towards their antecedent first.
The second study investigated children’s online processing of pronominal gender in a mismatch paradigm. Children read sentences in which the pronoun either was a gender-match to the antecedent or a gender-mismatch. Reading skill and reading fluency were also tested and related to children’s ability to detect a mismatching pronoun during reading.
The third study investigated the online processing of gender information on the pronoun and whether disambiguating gender information improves the accuracy of pronoun comprehension. Offline comprehension accuracy, that is the comprehension of the pronoun, was related to children’s online eye movement behavior. This study was conducted in a semi-longitudinal paradigm: 70 children were tested in Grade 3 (age 8) and again in Grade 4 (age 9) to investigate effects of age and reading skill on pronoun processing and comprehension.
The results of this thesis clearly show that children aged 8-9, when they are in the second half of primary school, struggle with the comprehension of pronouns in reading tasks. The responses to pronoun comprehension questions revealed that children have difficulties with the comprehension of a pronoun in the absence of a disambiguating gender cue, that is, when they have to apply context information. When there is a gender cue to disambiguate the pronoun, children's accuracy improves significantly. This is true for children in Grade 3 as well as in Grade 4, although overall resolution accuracy improves slightly with age.
The results from the analyses of eye movements suggest that the discourse accessibility of an antecedent does play a role in children's processing of pronouns and repeated names. The repetition of a name does not facilitate children's reading processing as was anticipated. Similar to adults, children showed a penalty effect for the repeated name where a pronoun is expected. However, this does not mean that children's processing of pronouns is always adult-like. The results from eye movement analyses in the pronoun region during sentence reading revealed significant individual differences related to children's individual reading skill and reading fluency.
The results from the mismatch study revealed that reading fluency is associated with children’s detection of incongruent pronouns. All children had longer gaze durations at mismatching than matching pronouns, but only fluent readers among the children followed this up with a regression out of the pronoun region. This was interpreted as an attempt to gain processing time and “repair” the inconsistency. Reading fluency was therefore associated with detection of the mismatch, while less fluent readers did not see any mismatch between pronoun and antecedent. The eye movement pattern of the “detectors” is more adult-like and was interpreted as reflecting successful monitoring and attempted pronoun resolution.
Children differ considerably in their reading comprehension skill. The results of this thesis show that only skilled readers among the children use gender information online for pronoun resolution. In contrast to the less skilled readers, they took more time to read the pronoun when there was disambiguating gender information that was useful to resolve it. Age was a less important factor in pronoun resolution processes and comprehension than were reading skill and reading fluency. Taken together, this suggests that the good readers direct cognitive resources towards pronoun resolution when the pronoun can be resolved, which is a successful comprehension strategy.
The contribution of the present thesis is a depiction of the specific eye movement patterns that are related to successful and unsuccessful attempts at pronoun resolution in children. Eye movement behavior in the pronoun area is related to children’s reading skill and fluency. The results of this thesis suggest that many children do not resolve pronouns spontaneously during sentence reading, which is likely detrimental to their reading comprehension in more complex reading materials. The present thesis informs our understanding of the challenge that pronoun resolution poses for beginning readers, and gives new impulses for the study of higher-order reading processes in children’s natural reading.
Active Galactic Nuclei (AGN) are considered to be the main powering source of active galaxies, where central Super Massive Black Holes (SMBHs) with masses between 10^6 and 10^9 M⊙ gravitationally pull in the surrounding material via accretion. The AGN phenomenon spans a very wide range of luminosities, from the most luminous high-redshift quasars (QSOs) to the local Low-Luminosity AGN (LLAGN), with significantly weaker luminosities. "Typical" luminous AGN distinguish themselves by their characteristic blue featureless continuum, Broad Emission Lines (BELs) with Full Widths at Half Maximum (FWHM) on the order of a few thousand km s^-1 arising from the so-called Broad Line Region (BLR), and strong radio and/or X-ray emission; the detection of LLAGN, on the other hand, is quite challenging due to their extremely weak emission lines and the absence of the power-law continuum. To fully understand AGN evolution and their duty cycles across cosmic history, we need a proper knowledge of the AGN phenomenon at all luminosities and redshifts, as well as perspectives from different wavelength bands.
In this thesis I present a search for AGN signatures in central spectra of 542 local (0.005 < z < 0.03) galaxies from the Calar Alto Legacy Integral Field Area (CALIFA) survey. The adopted aperture of 3′′ × 3′′ corresponds to the central ∼ 100 − 500 pc for the redshift range of CALIFA. Using the standard emission-line ratio diagnostic diagrams, we initially classified all CALIFA emission-line galaxies (526) into star-forming, LINER-like, Seyfert 2 and intermediate classes. We further detected signatures of a broad Hα component in 89 spectra from the sample, of which more than 60% are present in the central spectra of LINER-like galaxies. These BELs are very weak, with luminosities in the range 10^38 − 10^41 erg s^-1, but with FWHMs between 1000 km s^-1 and 6000 km s^-1, comparable to those of luminous high-z AGN. This result implies that type 1 AGN are in fact quite frequent in the local Universe. We also identified an additional 29 Seyfert 2 galaxies using the emission-line ratio diagnostic diagrams.
Using the MBH − σ∗ correlation, we estimated black hole masses of 55 type 1 AGN from CALIFA, a sample for which we had estimates of the bulge stellar velocity dispersion σ∗. We compared these masses to the ones we estimated from the virial method and found large discrepancies. We analyzed the validity of both methods for black hole mass estimation of local LLAGN, and concluded that virial scaling relations most likely can no longer be applied as a valid MBH estimator in such a low-luminosity regime. These black holes accrete at very low rates, with Eddington ratios in the range 4.1 × 10^-5 − 2.4 × 10^-3. The detection of BELs at such low luminosities and such low Eddington rates implies that these LLAGN are still able to form the BLR, although probably with a modified structure of the central engine.
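The two mass estimators compared above can be sketched as follows; the coefficients are illustrative published calibrations of an Hα virial scaling and an M_BH − σ∗ relation, not necessarily the exact ones adopted in the thesis:

```python
import math

def virial_mbh(l_halpha, fwhm_halpha):
    """Virial black-hole mass (solar masses) from the broad-Halpha
    luminosity (erg/s) and line width (km/s). Coefficients follow a
    commonly used Halpha calibration; treat them as illustrative."""
    return 2.0e6 * (l_halpha / 1e42) ** 0.55 * (fwhm_halpha / 1e3) ** 2.06

def msigma_mbh(sigma_star):
    """Black-hole mass (solar masses) from the bulge stellar velocity
    dispersion (km/s) via an M_BH-sigma* relation (illustrative slope
    and normalisation)."""
    return 10 ** (8.12 + 4.24 * math.log10(sigma_star / 200.0))
```

For a weak broad line (L_Hα ∼ 10^39 erg s^-1, FWHM ∼ 2000 km s^-1) the virial estimate comes out around 10^5 M⊙, orders of magnitude below a typical M_BH − σ∗ value for the same galaxy, which illustrates the kind of discrepancy the thesis discusses.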
In order to obtain a full picture of black hole growth across cosmic time, it is essential to study black holes in different stages of their activity. For that purpose, we estimated the broad-line AGN Luminosity Function (AGNLF) of our entire type 1 AGN sample using the 1/Vmax method. The shape of the AGNLF indicates an apparent flattening below luminosities L_Hα ∼ 10^39 erg s^-1. Correspondingly, we estimated the active Black Hole Mass Function (BHMF) and Eddington Ratio Distribution Function (ERDF) for a sub-sample of type 1 AGN for which we have MBH and λ estimates. The flattening is also present in both the BHMF and the ERDF, around log(MBH) ∼ 7.7 and log(λ) ∼ −3, respectively. We estimated the fraction of active SMBHs in CALIFA by comparing our active BHMF to that of the local quiescent SMBHs. The shape of the active fraction, which decreases with increasing MBH, as well as the flattening of the AGNLF, BHMF and ERDF, is consistent with the scenario of AGN cosmic downsizing.
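The 1/Vmax construction used for the luminosity function above can be sketched as follows; the Euclidean flux-limit geometry (adequate only at very low redshift) and the single flux limit are simplifying assumptions, not the survey's actual selection function:

```python
import math
from collections import defaultdict

def lf_1_over_vmax(luminosities, flux_limit, sky_fraction=1.0, dlogl=0.5):
    """Binned 1/Vmax luminosity function.

    luminosities : source luminosities in erg/s
    flux_limit   : survey flux limit in erg/s/cm^2 (assumed uniform)
    Returns {log-L bin index: number density per dex per cm^3}.
    """
    phi = defaultdict(float)
    for lum in luminosities:
        # Maximum distance at which the source still exceeds the flux limit
        d_max = math.sqrt(lum / (4.0 * math.pi * flux_limit))   # cm
        v_max = sky_fraction * (4.0 / 3.0) * math.pi * d_max**3  # cm^3
        k = math.floor(math.log10(lum) / dlogl)  # log-L bin index
        phi[k] += 1.0 / (v_max * dlogl)          # per dex per volume
    return dict(phi)
```

Because faint sources are visible only within small volumes, each contributes a large 1/Vmax weight; this is what makes the estimator sensitive to the low-luminosity end where the flattening is seen.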
To complete the AGN census in the CALIFA galaxy sample, it is necessary to search for AGN in various wavelength bands. For this purpose we cross-correlated all 542 CALIFA galaxies with multiwavelength surveys: the Swift–BAT 105-month catalogue (in the hard 15–195 keV X-ray band) and the NRAO VLA Sky Survey (NVSS, in the 1.4 GHz radio domain). This added one new AGN candidate in the X-ray band and seven in the radio band to our local LLAGN count.
It is possible to detect AGN emission signatures 10–20 kpc outside the central galactic regions. This may happen when the central AGN has recently switched off and photoionized material is still visible across the galaxy within the light-travel time, or when photoionized material has been blown away from the nucleus by outflows. To detect such extended AGN regions, we constructed spatially resolved emission-line ratio diagnostic diagrams for all emission-line galaxies in CALIFA and found one new object that had not previously been identified as an AGN.
Obtaining the complete AGN census in CALIFA, covering five different AGN types, showed that LLAGN constitute a significant fraction, 24%, of the emission-line galaxies in the CALIFA sample. This result implies that AGN are quite common in the local Universe and, although in a very low activity stage, account for a large fraction of all local SMBHs. Within this thesis we approached the upper limit of the AGN fraction in the local Universe and gained a deeper understanding of the LLAGN phenomenon.
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs currently facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction and testing of two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones based on the moment-conservation principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which give this earthquake model the initial ability to properly forecast interface seismicity. Then, I integrate the SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones, provided by the Seismic Hazard Inferred From Tectonics approach based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the earthquake-count inconsistencies, and potentially the spatial inconsistencies, of its predecessor tectonic earthquake model during the 2015-2017 period. I also combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
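The moment-conservation step at the core of this kind of approach can be sketched as follows; the shear modulus value, the Kostrov-style conversion factor, and the Hanks-Kanamori moment-magnitude relation are standard textbook ingredients here, not SMERF2's exact formulation:

```python
import math

MU = 3.0e10  # assumed crustal shear modulus, Pa

def geodetic_moment_rate(strain_rate, area, seismogenic_thickness, coupling=1.0):
    """Kostrov-style scalar moment rate (N m / s) accumulated over a region.

    strain_rate          : mean strain rate, 1/s
    area                 : surface area of the region, m^2
    seismogenic_thickness: m
    coupling             : fraction of the geodetic moment released seismically
    """
    return coupling * 2.0 * MU * seismogenic_thickness * area * strain_rate

def moment_from_magnitude(mw):
    """Seismic moment (N m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)
```

Dividing the accumulated annual moment by the moment of a reference magnitude then gives a first-order event rate, before any magnitude-frequency distribution (e.g. tapered Gutenberg-Richter) is imposed on top.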
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be confirmed or rejected through prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models against independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
The business problems of inefficient processes, imprecise process analyses and simulations, and non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this thesis proposes the flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the designed modeling language and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulation (NPS), and Neuronal Process Optimization (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
In my doctoral thesis, I examine continuous gravity measurements for monitoring the geothermal site at Þeistareykir in North Iceland. With the help of high-precision superconducting gravity meters (iGravs), I investigate underground mass changes caused by the operation of the geothermal power plant (i.e. by the extraction of hot water and the reinjection of cold water). The overall goal of this research project is to assess the sustainable use of the geothermal reservoir, which should also benefit the Icelandic energy supplier and power plant operator Landsvirkjun.
As a first step, to investigate the performance and measurement stability of the gravity meters, I performed comparative measurements at the gravimetric observatory J9 in Strasbourg in summer 2017. From the three-month gravity time series, I examined the calibration, noise and drift behaviour of the iGravs in comparison to the stable long-term time series of the observatory's superconducting gravity meters. After preparatory work in Iceland (setup of gravity stations, additional measuring equipment and infrastructure, discussions with Landsvirkjun and meetings with the Icelandic partner institute ISOR), gravity monitoring at Þeistareykir was started in December 2017. With the help of the iGrav records from the first 18 months of measurements, I carried out the same investigations (of calibration, noise and drift behaviour) as at J9 to understand how the transport of the superconducting gravity meters to Iceland may have influenced the instrumental parameters.
In the further course of this work, I focus on the modelling and reduction of local gravity contributions at Þeistareykir. These comprise additional mass changes due to rain, snowfall and vertical surface displacements that are superimposed on the geothermal signal in the gravity measurements. For this purpose, I used data sets from the additional monitoring sensors installed at each gravity station and adapted scripts for hydro-gravitational modelling. The third part of my thesis targets the geothermal signals in the gravity measurements.
Together with my PhD colleague Nolwenn Portier from France, I carried out additional gravity measurements with a Scintrex CG5 gravity meter at 26 measuring points within the geothermal field in the summers of 2017, 2018 and 2019. These annual time-lapse gravity measurements are intended to extend the spatial coverage of the gravity data from the three continuous monitoring stations to the entire geothermal field. The combination of CG5 and iGrav observations, together with annual reference measurements with an FG5 absolute gravity meter, constitutes the hybrid gravimetric monitoring method for Þeistareykir. Comparison of the gravimetric data with local borehole measurements (of groundwater levels and geothermal extraction and injection rates) is used to relate the observed gravity changes to the actually extracted (and reinjected) geothermal fluids. An approach to explaining the observed gravity signals by forward modelling of the geothermal production rate is presented at the end of the third (hybrid gravimetric) study. Further modelling based on the processed gravity data is planned by Landsvirkjun. In addition, the experience from time-lapse and continuous gravity monitoring will inform future gravity measurements at the Krafla geothermal field 22 km south-east of Þeistareykir.
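A first-order version of such a forward model, relating a redistributed fluid mass to the gravity change seen at a surface station, can be sketched as a buried point-mass anomaly; the mass, depth, and point-mass geometry below are illustrative assumptions (a real reservoir is extended), not the thesis's actual model:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_gravity(mass_kg, depth_m, horizontal_offset_m=0.0):
    """Vertical gravity effect (nm/s^2) at the surface caused by a buried
    point-mass anomaly at the given depth and horizontal offset.

    Positive mass (e.g. reinjected water) increases gravity; extraction
    corresponds to a negative mass_kg.
    """
    r2 = depth_m**2 + horizontal_offset_m**2
    # Vertical component of G*m/r^2 is G*m*depth/r^3
    gz = G * mass_kg * depth_m / r2**1.5
    return gz * 1e9  # convert m/s^2 -> nm/s^2
```

For example, a net extraction of 10^9 kg of fluid (roughly 10^6 m^3 of water) at 1 km depth directly below a station corresponds to a gravity decrease of about 67 nm/s^2 (6.7 µGal) in this approximation, comfortably above the sub-µGal sensitivity of a superconducting gravimeter.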
Contributions to the theoretical analysis of the algorithms with adversarial and dependent data
(2021)
In this work I present concentration inequalities of Bernstein's type for the norms of Banach-valued random sums under a general functional weak-dependence assumption (so-called $\mathcal{C}$-mixing). These are then used to prove, in the asymptotic framework, excess-risk upper bounds for regularised Hilbert-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results (in the batch statistical setting) are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression-type algorithm in the setting of online nonparametric regression with arbitrary data sequences. Here, in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a $\mathcal{C}$-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviations (both of Azuma-Hoeffding's and of Burkholder's type) of partial sums of real-valued weakly dependent random fields (under a projective-type dependence condition).
The development and optimization of carbonaceous materials is of great interest for several applications, including gas sorption, electrochemical storage and conversion, and heterogeneous catalysis. In this thesis, the exploration and optimization of nitrogen-containing carbonaceous materials by direct condensation of carefully chosen molecular precursors will be presented. As suggested by the concept of noble carbons, the choice of a stable, nitrogen-containing precursor leads to an even more stable, nitrogen-doped carbonaceous material with a controlled structure and electronic properties. Molecules fulfilling this requirement are, for example, nucleobases. The direct condensation of nucleobases leads to highly nitrogen-containing carbonaceous materials without any further pre- or post-treatment. By using salt-melt templating, the pore structure can be adjusted without the use of hazardous or toxic reagents, and the template can be reused.
Using these simple tools, the synergetic effect of the pore structure and nitrogen content of the materials can be explored. Within this thesis, the influence of the condensation parameters will be correlated with the structure and performance of the materials. First, the influence of the condensation temperature on the porosity and nitrogen content of guanine-derived materials will be discussed, and the exploration of highly CO2-selective structural pores in C1N1 materials will be shown. Further tuning of the pore structure by salt-melt templating will then be explored; the potential of the prepared materials as heterogeneous catalysts and their basic catalytic strength will be correlated with their nitrogen content and pore morphology. A similar approach is used to explore the water sorption behavior of uric acid-derived carbonaceous materials as potential sorbents for heat-transformation applications. Changes in the maximum water uptake and hydrophilicity of the prepared materials will be correlated with the nitrogen content and pore architecture. Owing to the high thermal stability, porosity, and nitrogen content of ionic liquid-derived nitrogen-doped carbonaceous materials, a simple impregnation and calcination route can be used to obtain copper nanocluster-decorated nitrogen-doped carbonaceous materials. Their activity as catalysts for the oxygen reduction reaction will be shown, and structure-performance relations are discussed.
In conclusion, the versatility of nitrogen-doped carbonaceous materials with a nitrogen-to-carbon ratio of up to one will be shown. The possibility of tuning both the pore structure and the nitrogen content by a simple procedure involving salt-melt templating and molecular precursors, and the effect of these parameters on performance, will be discussed.