Historiography has so far dated the end of German Zionism to the Nazi ban on the Zionistische Vereinigung für Deutschland in the wake of the November pogrom of 1938. By that time, however, German Zionism had already broken free of its geographical context and put down new roots in Eretz Israel. Zionists from Germany now set out, with their specific horizon of experience, their standards of value, and the ideological equipment they had brought along, to help shape the development of the Jewish national home and to pave the way for a comprehensive economic, cultural and political acculturation of the German Aliyah. Contrary to all Zionist theory, they founded organizations on the basis of common regional origin: the self-help organization Hitachduth Olej Germania in 1932 and, during the World War, the party Alija Chadascha.
The dissertation offers a comprehensive account of German Zionism in its final phase, from 1932 to 1948; at the same time, it illuminates the history of the roughly 60,000 Jews from Germany who immigrated to Palestine during the period under consideration. The first part presents, in chronological order, the final gathering and reorganization of German Zionism in its new-old homeland, beginning in 1932: the formative years, as it were, in personal, organizational and ideological-political terms, which, after the almost complete failure of the political integration of the German Aliyah, concluded with the founding of the Alija Chadascha, a step that in retrospect appears almost inevitable. The second part presents the positions of the German Zionists on the existential questions facing the Jewish community in Palestine, known in Hebrew as the Yishuv, during the period in focus. These are, first, the question of immigration, inseparably linked to the demand, indispensable in Zionist theory, for a Jewish majority in Palestine; second, the question of the political form of the future Jewish polity; and third, the question of the Yishuv's adequate response to the Shoah. The question of the desired relationship with the British Mandatory power runs through each of these thematic complexes, which are treated in separate chapters. Here the German Zionists had to put the intellectual and ideological equipment they had brought with them to a practical test and search for answers grounded in realpolitik.
The meteoric rise of the Alija Chadascha, which remained shaped by its members' common origin, was followed in the first post-war years by an equally rapid decline. A few months after the founding of the State of Israel, it quietly dissolved, and the bulk of its activists integrated into the party system of the new state. German Zionism as a political movement had now truly come to its end. This study thus traces, on the one hand, the struggle of the German Aliyah for social recognition and political participation in the Yishuv and, on the other, situates German Zionism in its final phase intellectually and ideologically and reveals tendencies of ideological reorientation. In addition, commonplaces of the historiography, such as the almost universally accepted thesis of the German Zionists' failure in their new homeland, are subjected to scrutiny. The last remaining gap in the scholarly canon on the more than fifty-year history of German Zionism is thus closed.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: they tell us how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands, and they allow tests of general relativity in the highly dynamical strong-field regime. The goal of GW astrophysics is to extract this information as accurately as possible. Yet this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key ingredient of GW searches is the waveform model, which encapsulates our best prediction for the gravitational radiation under a certain set of parameters and must be cross-correlated with the data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might hold the key to answering the fundamental questions of GW astrophysics. The continuous improvement of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, and the development of the Laser Interferometer Space Antenna (LISA) all demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, models for sources seen by both current and next-generation ground-based and spaceborne detectors must be accurate enough to capture binaries with large spins and mass asymmetry. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signal parameters caused by a foreground of many sources overlapping in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future detectors' data, since such biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and mass asymmetry. In the second, we present the first generic metrics proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak-field, small-velocity) approximation, which is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen to prominence: the post-Minkowskian (PM; weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much larger than the other's. These are most appropriate for binaries with high mass asymmetry, which challenge current waveform models. Moreover, they allow one to "cover" regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. These analytical approximations to the relativistic two-body problem can be synergistically combined within the effective-one-body (EOB) formalism, in which the two-body information from each approximation is recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resulting models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to studying how best to incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are when compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of the PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third-subleading PN dynamics for the spin-orbit and (aligned) spin1-spin2 couplings. These new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show in detail how this is done without incurring the divergences that affected previous attempts, and compare the resulting model against NR simulations. We find that the SMR expansion describes all (quasi-circular nonspinning) binaries remarkably well, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models agree with NR much better than the PN-based ones, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources and demonstrate its validity against Monte Carlo simulations. The metrics we describe pave the way for more realistic studies quantifying the biases expected with future ground-based and spaceborne detectors.
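The faithfulness mentioned above is typically quantified by a noise-weighted overlap (match) between two waveforms. The following toy sketch, which assumes white noise and a leading-order inspiral amplitude rather than any model used in the thesis, illustrates the idea in the frequency domain:

```python
import numpy as np

def overlap(h1, h2, psd):
    """Noise-weighted inner product <h1, h2> on a uniform frequency grid."""
    return np.real(np.sum(h1 * np.conj(h2) / psd))

def match(h1, h2, psd):
    """Normalized overlap (faithfulness): 1.0 means the waveforms are
    identical up to an overall amplitude rescaling."""
    norm = np.sqrt(overlap(h1, h1, psd) * overlap(h2, h2, psd))
    return overlap(h1, h2, psd) / norm

# Toy frequency-domain waveforms: same phase evolution, rescaled amplitude
f = np.linspace(20.0, 512.0, 1000)        # frequency grid in Hz
psd = np.ones_like(f)                      # flat (white) noise PSD, an assumption
phase = 2.0 * np.pi * f * 0.1
h_a = f ** (-7.0 / 6.0) * np.exp(1j * phase)  # leading-order inspiral amplitude
h_b = 1.1 * h_a                               # rescaled copy

print(round(match(h_a, h_b, psd), 6))      # amplitude rescaling leaves the match at 1.0
```

In real analyses the match is additionally maximized over relative time and phase shifts, and the detector's measured power spectral density replaces the flat one assumed here.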
In my doctoral thesis, I examine continuous gravity measurements for monitoring the geothermal site at Þeistareykir in North Iceland. With the help of high-precision superconducting gravity meters (iGravs), I investigate underground mass changes caused by the operation of the geothermal power plant, i.e., by the extraction of hot water and the reinjection of cold water. The overall goal of this research project is to assess the sustainable use of the geothermal reservoir, which should also benefit the Icelandic energy supplier and power plant operator Landsvirkjun.
As a first step, to investigate the performance and measurement stability of the gravity meters, I performed comparative measurements at the gravimetric observatory J9 in Strasbourg in summer 2017. From the three-month gravity time series, I examined the calibration, noise and drift behaviour of the iGravs in comparison to the stable long-term time series of the observatory's superconducting gravity meters. After preparatory work in Iceland (setup of gravity stations, additional measuring equipment and infrastructure, discussions with Landsvirkjun, and meetings with the Icelandic partner institute ISOR), gravity monitoring at Þeistareykir started in December 2017. Using the iGrav records from the first 18 months of measurements, I carried out the same investigations (of calibration, noise and drift behaviour) as at J9 to understand how transporting the superconducting gravity meters to Iceland may influence instrumental parameters.
In the further course of this work, I focus on modelling and reducing local gravity contributions at Þeistareykir. These comprise additional mass changes due to rain, snowfall and vertical surface displacements that are superimposed on the geothermal signal of the gravity measurements. For this purpose, I used data sets from additional monitoring sensors installed at each gravity station and adapted scripts for hydro-gravitational modelling. The third part of my thesis targets the geothermal signals in the gravity measurements.
Together with my PhD colleague Nolwenn Portier from France, I carried out additional gravity measurements with a Scintrex CG5 gravity meter at 26 measuring points within the geothermal field in the summers of 2017, 2018 and 2019. These annual time-lapse gravity measurements are intended to extend the spatial coverage of the gravity data from the three continuous monitoring stations to the entire geothermal field. The combination of CG5 and iGrav observations, together with annual reference measurements with an FG5 absolute gravity meter, constitutes the hybrid gravimetric monitoring method for Þeistareykir. Comparing the gravimetric data to local borehole measurements (of groundwater levels and geothermal extraction and injection rates) relates the observed gravity changes to the actually extracted (and reinjected) geothermal fluids. An approach to explaining the observed gravity signals by forward modelling of the geothermal production rate is presented at the end of the third (hybrid gravimetric) study. Further modelling with the help of the processed gravity data is planned by Landsvirkjun. In addition, the experience from time-lapse and continuous gravity monitoring will inform future gravity measurements at the Krafla geothermal field 22 km south-east of Þeistareykir.
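The scale of the gravity signals involved in such monitoring can be illustrated with a simple point-mass forward model. The numbers below are hypothetical, and this is not the forward model used for Þeistareykir:

```python
# Minimal sketch: gravity change at the surface from a net mass change at depth,
# treated as a point mass directly below the station. Illustrative only.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def delta_g_microgal(mass_kg, depth_m):
    """Vertical gravity change (microGal) from a point mass at the given depth.
    1 Gal = 1 cm/s^2, so 1 microGal = 1e-8 m/s^2."""
    return G * mass_kg / depth_m ** 2 / 1e-8

# Example: a net extraction of 1 million tonnes of fluid from 1 km depth
# (hypothetical numbers) corresponds to a gravity decrease of this magnitude:
mass = 1e9      # kg
depth = 1000.0  # m
print(round(delta_g_microgal(mass, depth), 2))  # ~6.67 microGal
```

Superconducting gravity meters resolve changes well below this level, which is why they can track seasonal extraction and reinjection cycles.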
In a world fighting dramatic global warming caused by human activities, research into the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in satisfying the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as the light absorber have seen impressive development within the last decade, nowadays reaching efficiencies comparable to mature photovoltaic technologies like silicon solar cells. Yet several roadblocks must still be overcome before widespread commercialization of such devices becomes possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. For the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization adjusts the energy level alignment, reduces interfacial losses, and improves stability.
First, a strategy to tune the perovskite's energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously shifting the vacuum level and saturating the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole. The magnitude of the shift can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
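The relation between an interfacial dipole layer and the resulting vacuum-level shift is commonly described by the Helmholtz equation. The sketch below illustrates the expected order of magnitude; the dipole density, dipole moment and permittivity are illustrative assumptions, not values measured in this work:

```python
# Hedged sketch: vacuum-level shift from a molecular dipole layer via the
# Helmholtz equation, Delta_V = N * mu_perp / (eps0 * eps_r).
EPS0 = 8.854e-12     # vacuum permittivity, F/m
DEBYE = 3.33564e-30  # C*m per Debye

def vacuum_level_shift_meV(density_per_nm2, dipole_debye, eps_r):
    """Vacuum-level (work-function) shift in meV for a monolayer of dipoles.

    density_per_nm2: areal dipole density (molecules per nm^2), assumed
    dipole_debye:    dipole-moment component normal to the surface (Debye), assumed
    eps_r:           effective relative permittivity of the layer, assumed
    """
    n = density_per_nm2 * 1e18                          # convert to m^-2
    shift_volts = n * dipole_debye * DEBYE / (EPS0 * eps_r)
    return shift_volts * 1000.0  # numerically equal to the energy shift e*V in meV

# One 1-Debye dipole per nm^2 in a layer with eps_r = 3:
print(round(vacuum_level_shift_meV(1.0, 1.0, 3.0)))  # ~126 meV
```

Even modest monolayer coverages of ~1 Debye dipoles thus produce shifts of order 100 meV, consistent with the several-hundred-meV tunability reported above; reversing the dipole direction reverses the sign of the shift.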
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions ("triple-cation perovskite" and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport-layer interface. Upon tailored perovskite surface functionalization, the MAPbBr3 devices show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the size of the energetic offset compared to the other losses in the devices.
Finally, the method presented above is developed further by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. This HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties, except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and operational stability during maximum-power-point tracking, in addition to reduced hysteresis. XB promotes the formation of a high-quality interface by anchoring to the halide ions and forming a stable and ordered interfacial layer, making it a particularly interesting candidate for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool for functionalizing the perovskite surface and tuning its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this framework, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
With the downscaling of CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft-error mitigation solutions should combine both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address three main aspects synergistically: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures remain critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard-cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It is shown that the inherent similarities in the SET response of logic cells for different input levels can be exploited to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes using LUTs to store the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
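The storage saving can be illustrated with a toy example: instead of tabulating every simulated operating point, only the fitting coefficients of a simple sensitivity model are stored per cell. The linear model, cell name and numbers below are assumptions for illustration, not the models fitted in this work:

```python
# Illustrative sketch: store per-cell fitting coefficients of a simple
# SET-sensitivity model instead of the full table of simulation results.
import numpy as np

# Pretend SPICE results: critical charge (fC) vs. load capacitance (fF) for one cell
loads = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
qcrit = np.array([2.1, 3.0, 4.8, 8.5, 15.9])  # roughly linear in load (assumed)

# A full LUT would store all pairs; the coefficient LUT stores just (slope, intercept)
slope, intercept = np.polyfit(loads, qcrit, 1)
coeff_lut = {"INV_X1": (slope, intercept)}     # hypothetical cell name

def qcrit_model(cell, load_fF):
    """Evaluate the stored linear fit for a cell at an arbitrary load."""
    a, b = coeff_lut[cell]
    return a * load_fF + b

# Two stored numbers now cover the whole load range, including unsimulated points:
print(round(qcrit_model("INV_X1", 10.0), 2))
```

The same idea extends to the multi-parameter models described above: superposed fits over design, operating and irradiation parameters replace a multi-dimensional table of raw simulation points.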
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques, which can achieve a significant improvement of the overall SER with minimal area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. Inserting two decoupling cells at the output of a target cell increases the critical charge of the cell's output node and enhances the attenuation of short SETs. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as no single gate-level mitigation technique offers optimal performance, a combination of multiple techniques is required. This work therefore presents a comprehensive characterization of gate-level mitigation techniques, quantifying their impact on SET robustness as well as the area, delay and power overhead they introduce per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the effort required in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can serve as a guideline for selecting the most appropriate mitigation solution for a given design.
To support dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of the particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse-stretching inverters is proposed. Pulse-stretching inverters connected in parallel allow the particle flux to be measured as the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption, and immunity to error accumulation.
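The read-out principle can be sketched as follows; the threshold and pulse-width values are illustrative assumptions, not the calibrated response of the proposed detector:

```python
# Hedged sketch of the detector read-out idea: particle flux from the SET count
# rate, and a crude LET proxy from the SET pulse-width distribution.
import statistics

def flux_per_cm2_s(n_sets, sensitive_area_cm2, interval_s):
    """Particle flux estimated from the number of detected SETs."""
    return n_sets / (sensitive_area_cm2 * interval_s)

def let_regime(pulse_widths_ps, threshold_ps=500.0):
    """Classify the radiation environment from the median stretched pulse width:
    wider SET pulses indicate higher deposited charge, hence higher LET.
    The 500 ps threshold is an assumed, uncalibrated value."""
    return "high" if statistics.median(pulse_widths_ps) > threshold_ps else "low"

widths = [220.0, 310.0, 650.0, 720.0, 540.0]  # ps, hypothetical measurements
print(flux_per_cm2_s(50, 0.01, 10.0))         # 500.0 particles / cm^2 / s
print(let_regime(widths))                      # median 540 ps -> "high"
```

A classification like this is what would arm or disarm the power-greedy N-modular-redundancy configurations mentioned above.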
The results achieved in this thesis can serve as a basis for establishing an overall soft-error-aware database for a given digital library, and a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to validate the achieved results with irradiation experiments.
The incorporation of proteins into artificial materials such as membranes offers great opportunities to exploit the manifold qualities of proteins and enzymes perfected by nature over millions of years. One way to leverage proteins is modification with artificial polymers. To obtain such protein-polymer conjugates, either a polymer can be grown from the protein surface (grafting-from) or a pre-synthesized polymer can be attached to the protein (grafting-to). Both techniques were used in this thesis to synthesize conjugates of different proteins with thermo-responsive polymers.
First, conjugates were analyzed by protein NMR spectroscopy. Typical characterization techniques for conjugates can verify successful conjugation and give hints about the secondary structure of the protein. However, the three-dimensional structure, which is highly important for protein function, cannot be probed by standard techniques. NMR spectroscopy is a unique method that allows even small alterations in the protein structure to be followed. A mutant of the carbohydrate-binding module 3b (CBM3bN126W) was used as a model protein and functionalized with poly(N-isopropylacrylamide) (PNIPAm). Analysis of conjugates prepared by grafting-to or grafting-from revealed a strong impact of the conjugation type on protein folding. Whereas grafting a pre-formed polymer to the protein completely preserved the protein fold, grafting the polymer from the protein surface led to (partial) disruption of the protein structure.
Next, conjugates of bovine serum albumin (BSA), a cheap and easily accessible protein, were synthesized with PNIPAm and different oligo(ethylene glycol) (meth)acrylates. The obtained protein-polymer conjugates were analyzed by an in-line combination of size exclusion chromatography and multi-angle laser light scattering (SEC-MALS). This technique is particularly advantageous for determining molar masses, as no external calibration of the system is needed. Different SEC column materials and operating conditions were tested to evaluate the applicability of this system for determining absolute molar masses and hydrodynamic properties of heterogeneous conjugates prepared by grafting-from and grafting-to. Hydrophobic and non-covalent interactions of the conjugates led to error-prone values inconsistent with the molar masses expected from the conversions and extents of modification.
As an alternative to this method, conjugates were analyzed by sedimentation velocity analytical ultracentrifugation (SV-AUC) to gain insights into the hydrodynamic properties and how they change upon conjugation. Within a centrifugal field, a sample moves and fractionates according to the mass, density, and shape of its individual components. Conjugates of BSA with PNIPAm were analyzed below and above the cloud point temperature of the thermo-responsive polymer component. The polymer characteristics were found to be transferred to the conjugate molecule, which then showed a decreased ideality, defined here as an increased deviation from a perfect sphere model, below the cloud point temperature and an increased ideality above it. This effect can be attributed to the polymer chain either extending towards the solvent (expanded state) or wrapping around the protein surface, depending on the applied temperature.
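The "deviation from a perfect sphere" can be made quantitative through the frictional ratio f/f0, obtained from a measured sedimentation coefficient via the Svedberg relation. The sketch below uses textbook-style values for unmodified BSA in water at 20 °C, assumed for illustration rather than taken from this work:

```python
import math

NA = 6.022e23  # Avogadro's number, 1/mol

def frictional_ratio(M_kg_mol, vbar_m3_kg, rho_kg_m3, s_seconds, eta_pa_s):
    """f/f0: 1.0 for a compact sphere, larger for elongated or expanded shapes.

    M_kg_mol:   molar mass (kg/mol), vbar_m3_kg: partial specific volume (m^3/kg),
    rho_kg_m3:  solvent density, s_seconds: sedimentation coefficient (s),
    eta_pa_s:   solvent viscosity (Pa*s). All input values below are assumed.
    """
    f = M_kg_mol * (1.0 - vbar_m3_kg * rho_kg_m3) / (NA * s_seconds)
    r0 = (3.0 * M_kg_mol * vbar_m3_kg / (4.0 * math.pi * NA)) ** (1.0 / 3.0)
    f0 = 6.0 * math.pi * eta_pa_s * r0  # Stokes friction of the equivalent sphere
    return f / f0

# BSA-like parameters: M = 66.5 kg/mol, vbar = 0.733 mL/g, s = 4.5 S, water at 20 C
ratio = frictional_ratio(66.5, 7.33e-4, 998.0, 4.5e-13, 1.002e-3)
print(round(ratio, 2))  # ~1.3, i.e. moderately non-spherical
```

An expanded PNIPAm corona below the cloud point would raise f/f0 relative to the bare protein, while chain collapse above the cloud point would bring it back towards the sphere limit, matching the trend described above.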
The last project dealt with the synthesis of conjugates of the ferric hydroxamate uptake protein component A (FhuA) with polymers as building blocks for novel membrane materials. FhuA is barrel-shaped, and removal of a cork domain inside the protein yields a passive channel intended to serve as a pore in the membrane system. The polymer matrix surrounding the membrane protein is composed of a thermo-responsive and a UV-crosslinkable part, thereby incorporating an external trigger for the covalent immobilization of these building blocks in the membrane and for switching the membrane between different states. The overall performance of membranes prepared by a drying-mediated self-assembly approach was evaluated by permeability and size exclusion experiments. The obtained membranes displayed insufficient interchain crosslinking and therefore lacked performance. Furthermore, the intended switching of the polymer matrix between a hydrophilic and a hydrophobic state did not occur. Correspondingly, size exclusion experiments did not show retention of analytes larger than the pores defined by the dimensions of the FhuA variant used.
Overall, different routes to protein-polymer conjugates, by either grafting from or grafting to the protein surface, were presented, paving the way for the generation of new hybrid materials. Different analytical methods were used to describe the folding and hydrodynamic properties of the conjugates, providing deeper insight into the overall characteristics of these promising building blocks.
Influenza A virus (IAV) is a pathogen responsible for severe seasonal epidemics that threaten human and animal populations every year. During viral assembly in infected cells, the plasma membrane (PM) has to bend in localized regions into a vesicle pointing towards the extracellular side. Studies in cellular models have proposed that different viral proteins, including the matrix protein M1, might be responsible for inducing membrane curvature in this context, but a clear consensus has not been reached. M1 is the most abundant protein in IAV particles and plays an important role in virus assembly and budding at the PM. M1 is recruited to the host cell membrane, where it associates with lipids and other viral proteins. However, the details of M1's interactions with the cellular PM, as well as M1-mediated membrane bending at the budozone, have not been clarified.
In this work, we used several experimental approaches to analyze M1-lipid and M1-M1 interactions. By performing surface plasmon resonance (SPR) analysis, we quantified membrane association for full-length M1 and different genetically engineered M1 constructs (i.e., N- and C-terminally truncated constructs and a mutant of the polybasic region). This allowed us to obtain novel information on the protein regions mediating M1 binding to membranes. Using fluorescence microscopy, cryogenic transmission electron microscopy (cryo-TEM), and three-dimensional (3D) cryo-electron tomography (cryo-ET), we showed that M1 is indeed able to deform vesicle membranes containing negatively charged lipids, in the absence of other viral components. Furthermore, scanning fluorescence correlation spectroscopy (sFCS) analysis showed that simple protein binding is not sufficient to induce membrane restructuring. Rather, stable M1-M1 interactions and multimer formation appear to be required to alter the three-dimensional structure of the bilayer through the formation of a protein scaffold.
Finally, to mimic the budding mechanism in cells, which arises from the lateral organization of the viral membrane components on lipid raft domains, we created vesicles with lipid domains. Our results showed that local binding of M1 to spatially confined acidic lipids within the membrane domains of vesicles led to local inward curvature induced by M1.
Polymeric films and coatings derived from semi-crystalline oligomers are of relevance for medical and pharmaceutical applications. In this context, the material surface is of particular importance, as it mediates the interaction with the biological system. Two-dimensional (2D) systems and ultrathin films are used to model this interface. However, conventional techniques for their preparation, such as spin coating or dip coating, have disadvantages: the morphology and chain packing of the generated films can only be controlled to a limited extent, and adsorption on the substrate affects the behavior of the films. Detaching and transferring films prepared by such techniques requires additional sacrificial or supporting layers, and free-standing or self-supporting domains are usually of very limited lateral extension. The aim of this thesis is to study and modulate crystallization, melting, degradation and chemical reactions in ultrathin films of oligo(ε-caprolactone)s (OCLs) with different end-groups under ambient conditions. Here, oligomeric ultrathin films are assembled at the air-water interface using the Langmuir technique. The water surface allows lateral movement and aggregation of the oligomers, which, unlike solid substrates, enables dynamic physical and chemical interactions of the molecules. Parameters like the surface pressure (π), temperature and mean molecular area (MMA) allow controlled assembly and manipulation of the oligomer molecules. The π-MMA isotherms, Brewster angle microscopy (BAM), and interfacial infrared spectroscopy assist in detecting morphological and physicochemical changes in the film. Ultrathin films can easily be transferred to a solid silicon surface via the Langmuir-Schaefer (LS) method (horizontal substrate dipping). The films transferred onto silicon are investigated using atomic force microscopy (AFM) and optical microscopy and are compared to the films on the water surface.
The semi-crystalline morphology (lamellar thicknesses, crystal number densities, and lateral crystal dimensions) is tuned by the chemical structure of the OCL end-groups (hydroxy or methacrylate) and by the crystallization temperature (Tc; 12 or 21 °C) or the MMA. Compression to a low MMA of ~2 Å² results in the formation of a highly crystalline film, which consists of tightly packed single crystals. The preparation of tightly packed single crystals on a cm² scale is not possible by conventional techniques. Upon transfer to a solid surface, these films retain their crystalline morphology, whereas amorphous films undergo dewetting.
The melting temperature (Tm) of OCL single crystals at the water and at the solid surface is found to be proportional to the inverse crystal thickness and is generally lower than the Tm of bulk PCL. The impact of the OCL end-groups on the melting behavior is most noticeable at the air-solid interface, where the methacrylate end-capped OCL (OCDME) melted at lower temperatures than the hydroxy end-capped OCL (OCDOL). Comparing the underlying substrates, melting/recrystallization of OCL ultrathin films is possible at lower temperatures at the air-water interface than at the air-solid interface, where recrystallization is not visible. Recrystallization at the air-water interface usually occurs at a temperature higher than the initial Tc.
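The reported proportionality between Tm and the inverse crystal thickness is commonly rationalized by the Gibbs-Thomson relation; as a sketch (the symbols are standard polymer-crystallization notation and are assumed here, not taken from the thesis):

```latex
T_m(l) \;=\; T_m^{\infty}\left(1 - \frac{2\,\sigma_e}{\Delta H_f \, l}\right)
```

where $T_m^{\infty}$ is the equilibrium melting temperature of an infinitely thick crystal, $l$ the lamellar thickness, $\sigma_e$ the fold-surface free energy, and $\Delta H_f$ the melting enthalpy per unit volume. Thinner lamellae thus melt at lower temperatures, consistent with the observed depression of Tm relative to bulk PCL.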
Controlled degradation is crucial for the predictable performance of degradable polymeric biomaterials. Degradation of ultrathin films is carried out under acidic (pH ~ 1) or enzymatic catalysis (lipase from Pseudomonas cepacia) on the water surface or on a silicon surface as transferred films. A high crystallinity strongly reduces the hydrolytic but not the enzymatic degradation rate. Regarding the influence of the end-groups, the methacrylate end-capped linear oligomer OCDME (~85 ± 2 % end-group functionalization) hydrolytically degrades faster than the hydroxy end-capped linear oligomer OCDOL (~95 ± 3 % end-group functionalization) at different temperatures. Differences in the acceleration of hydrolytic degradation of semi-crystalline films were observed upon complete melting, partial melting of the crystals, or heating to temperatures close to Tm. Therefore, films of densely packed single crystals are suitable as barrier layers with thermally switchable degradation rates.
Chemical modification in ultrathin films is an intricate process applicable to connecting functionalized molecules, imparting stability or creating stimuli-sensitive cross-links. The reaction of end-groups is explored for transferred single crystals on a solid surface or amorphous monolayers at the air-water interface. Bulky methacrylate end-groups are expelled to the crystal surface during chain-folded crystallization. The density of end-groups is inversely proportional to the molecular weight and hence very pronounced for oligomers. The methacrylate end-groups at the crystal surface, which are present at high concentration, can be used for further chemical functionalization. This is demonstrated by fluorescence microscopy after reaction with fluorescein dimethacrylate. The thermoswitching behavior (melting and recrystallization) of the fluorescein-functionalized single crystals shows the temperature-dependent distribution of the chemically linked fluorescein moieties, which are accumulated on the surfaces of the crystals and homogeneously dispersed when the crystals are molten. In amorphous monolayers at the air-water interface, reversible cross-linking of hydroxy-terminated oligo(ε-caprolactone) monolayers using a dialdehyde (glyoxal) led to the formation of 2D networks. A pronounced contraction in area occurred for the 2D OCL films in dependence of surface pressure and time, indicating the reaction progress. Cross-linking inhibited crystallization and retarded enzymatic degradation of the OCL film. Altering the subphase pH to ~2 led to cleavage of the covalent acetal cross-links. Besides serving as model systems, these reversibly cross-linked films are applicable for drug delivery systems or cell substrates modulating adhesion at biointerfaces.
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas emissions can be released at the Earth's surface, often related to these fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters and to understand the structural control on fluid flow. In order to validate their potential as innovative geothermal exploration and monitoring tools, these methodological approaches were applied to three different volcanic-geothermal systems. At each site, an individual survey design was developed with regard to the site-specific questions.
The first study presents results of the combined measurement of CO2 flux, ground temperatures, and the analysis of isotope ratios (δ13CCO2, 3He/4He) across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent sampling with high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalously degassing areas. These were related to the internal fault hydraulic architecture and allowed an assessment of favourable structural settings for fluid flow, such as fault intersections. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir were determined, which represent promising targets for future geothermal exploration and development.
In the second study, I introduce a novel monitoring approach by examining the variation of CO2 flux to monitor changes in the reservoir induced by fluid reinjection. To this end, an automated, multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site with a connection to the geothermal reservoir was selected, as identified by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of gas emissions to changes in reinjection rates within 24 h, proving an active hydraulic communication between the geothermal reservoir and the Earth's surface. This is a promising monitoring strategy that provides nearly real-time and in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity).
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide and multi-parameter analysis, consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures, was conducted to detect hidden permeable structures. The 222Rn and 220Rn activity concentrations are evaluated as a soil gas parameter complementary to the CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of the gas emissions and their migration velocities, the study area was divided into volcanic- (heat), tectonic- (structures), and volcano-tectonically dominated areas. Based on these concepts, the volcano-tectonically dominated areas, where hot hydrothermal fluids migrate along permeable faults, present the most promising targets for future geothermal exploration and development in this geothermal field. Two of these areas, which have not yet been targeted for geothermal exploitation, were identified in the south and south-east. Furthermore, two previously unknown areas of structurally controlled permeability could be identified from the 222Rn and 220Rn activity concentrations.
Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to a low spatial resolution, the results showed a link between known or inferred faults and increased CO2 concentrations. Thus, the TDL method proved successful in determining structurally controlled permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, it can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gases using different measurement networks enables the identification and characterization of fluid-bearing structures and has not yet been used and/or tested as standard practice. The different studies present efficient and cost-effective workflows and demonstrate a hands-on approach to a successful and sustainable exploration and monitoring of geothermal resources. This minimizes the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive and cutting-edge geological, geochemical, and geophysical exploration methods is essential.
To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and switch to renewable sources. However, their temporal availability does not match societal consumption needs, meaning that renewably generated energy must be stored during its main generation periods and dispatched during peak consumption periods. Electrochemical energy storage (EES) in general is well suited due to its infrastructural independence and scalability. The lithium ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological occurrence of minerals and ores vital for many cell components, and hence the high and fluctuating costs, will decelerate its further distribution.
The sodium ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry are similar in the two systems. Yet the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC), is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores.
The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. At defects and edges, an adsorption-like mechanism of sodium storage is prevalent, causing a sloping voltage curve ill-suited for practical application in SIBs, whereas a constant voltage plateau of relatively high capacity is found immediately after the sloping region, which recent research attributes to the deposition of quasimetallic sodium into the closed pores of the HC.
Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pores, is abundant, but research on the influence of the pore geometry and chemical nature of the HC on the low-potential sodium deposition is still at an early stage. Therefore, the scope of this thesis is to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure are prepared, in clear distinction to commonly obtained ones, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy in combination with distribution of relaxation times analysis is further established as a technique to study the sodium storage process, in addition to classical direct current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism based on the recorded data. The obtained knowledge is used to develop a method for the preparation of closed-porous and non-porous materials from open-porous ones, proving not only the necessity of closed pores for efficient sodium storage, but also providing a method for effective pore closure and hence for increasing the sodium storage capacity and efficiency of carbon materials.
The insights obtained and methods developed within this work hence not only contribute to the better understanding of the sodium storage mechanism in carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.
The self-assembly of amphiphilic polymers in aqueous systems is important for a plethora of applications, in particular in the field of cosmetics and detergents. When thermoresponsive blocks are introduced, the aggregation behavior of these polymers can be controlled by changing the temperature. While long confined to simple diblock copolymer systems, the complexity - and thus the versatility - of such smart systems can be strongly enlarged once designed monomers, specific block sizes, different architectures, or additional functional groups such as hydrophobic stickers are implemented. In this work, the structure-property relationship of such thermoresponsive amphiphilic block copolymers was investigated by varying their structure systematically. The block copolymers were generally composed of a permanently hydrophobic sticker group, a permanently hydrophilic block, and a thermoresponsive block exhibiting Lower Critical Solution Temperature (LCST) behavior. While the hydrophilic block consisted of N,N-dimethylacrylamide (DMAm), different monomers were used for the thermoresponsive block, such as N-n-propylacrylamide (NPAm), N-iso-propylacrylamide (NiPAm), N,N-diethylacrylamide (DEAm), N,N-bis(2-methoxyethyl)acrylamide (bMOEAm), or N-acryloylpyrrolidine (NAP), with different reported LCSTs of 25, 32, 33, 42 and 56 °C, respectively. The block copolymers were synthesized by successive reversible addition-fragmentation chain transfer (RAFT) polymerization. For the polymers with the basic linear, the twinned hydrophobic and the symmetrical quasi-miktoarm architectures, well-defined block sizes and end-groups as well as narrow molar mass distributions (Ɖ ≤ 1.3) were obtained. More complex architectures, such as the twinned thermoresponsive and the non-symmetrical quasi-miktoarm one, were achieved by combining RAFT polymerization with a second technique, namely atom transfer radical polymerization (ATRP) or single unit monomer insertion (SUMI), respectively.
The obtained block copolymers showed well-defined block sizes, but due to the complexity of these reaction paths, the dispersities were generally higher (Ɖ ≤ 1.8) and some end-groups were lost.
The thermoresponsive behavior of the block copolymers was investigated by turbidimetry and dynamic light scattering (DLS). Below the phase transition temperature, the polymers were soluble in water and small micellar structures were visible. Above the phase transition temperature, however, the aggregation behavior was strongly dependent on the architecture and the chemical structure of the thermoresponsive block. Thermoresponsive blocks comprising PNAP and PbMOEAm with DPn = 40 showed no cloud point (CP), since their already high LCSTs were further increased by the attached hydrophilic block. Depending on the architecture as well as on the block size, block copolymers with PNiPAm, PDEAm and PNPAm showed different CPs. Large aggregates were visible for block copolymers with PNiPAm and PDEAm above their CP. For PNPAm-containing block copolymers, the phase transition was very sensitive to the architecture, resulting in either small or large aggregates.
In addition, fluorescence studies were performed using PDMAm and PNiPAm homo- and block copolymers with linear architecture, functionalized with complementary fluorescence dyes introduced at the opposite chain ends. The thermoresponsive behavior was studied in pure aqueous solution as well as in an oil-in-water (o/w) microemulsion. The findings indicate that the block copolymer behaves as a polymeric surfactant at low temperatures, with one relatively small hydrophobic end-group and an extended hydrophilic chain forming ‘hairy micelles’, similar to the other synthesized architectures. Above the phase transition temperature of the PNiPAm block, however, the copolymer behaves as an associative telechelic polymer with two non-symmetrical hydrophobic end-groups, which do not mix. Thus, instead of a network of bridged ‘flower micelles’, large dynamic aggregates are formed. These are connected alternatingly by the original micellar cores as well as by clusters of the collapsed PNiPAm blocks. This type of bridged micelles is even more favored in the o/w microemulsion than in pure aqueous solution.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals that are in use today. There is a wide variety of biologically active natural products that contain conjugated polyenes or benzofuran structures. Therefore, new synthetic methods for the construction of such building blocks are of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows for the formation of Z,E-dienoates with high stereoselectivity. The extension of this method with a Julia-Kocienski olefination protocol would allow for the formation of conjugated trienes in a stereoselective manner. This strategy was applied in the total synthesis of the conjugated-triene-containing (+)-bretonin B. Additionally, investigations of cross metathesis using methyl-substituted olefins were pursued. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of the benzofuran-containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
Ground-based astronomy is set to employ next-generation telescopes with apertures larger than 25 m in diameter before this decade is out. Such giant telescopes observe their targets through a larger patch of turbulent atmosphere, demanding that most of the instruments behind them must also grow larger to make full use of the collected stellar flux. This linear scaling in size greatly complicates the design of astronomical instrumentation, inflating their cost quadratically. Adaptive optics (AO) is one approach to circumvent this scaling law, but it can only be done to an extent before the cost of the corrective system itself overwhelms that of the instrument or even that of the telescope. One promising technique for miniaturizing the instruments and thus driving down their cost is to replace some, or all, of the free space bulk optics in the optical train with integrated photonic components.
Photonic devices, however, do their work primarily in single-mode waveguides, and the atmospherically distorted starlight must first be coupled into them efficiently if they are to outperform their bulk-optic counterparts. This can be achieved in two ways: AO systems can again help control the angular size and motion of seeing disks to the point where they couple efficiently into astrophotonic components, but this is only feasible for the brightest objects and over limited fields of view. Alternatively, tapered fiber devices known as photonic lanterns, with their ability to convert multimode into single-mode optical fields, can be used to feed speckle patterns into single-mode integrated optics. They must, nonetheless, conserve the degrees of freedom, and the number of output waveguides quickly grows out of control for uncorrected large telescopes. An AO-assisted photonic lantern fed by a partially corrected wavefront presents a compromise that can have a manageable size if the trade-off between the two methods is chosen carefully. This requires end-to-end simulations that take into account all the subsystems upstream of the astrophotonic instrument, i.e., the atmospheric layers, the telescope, the AO system, and the photonic lantern, before a decision can be made on sizing the multiplexed integrated instrument.
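The degrees-of-freedom argument above can be made concrete with the standard step-index estimate of the number of guided modes, N ≈ V²/2 with V = πd·NA/λ, which sets a lower bound on the number of single-mode outputs a lossless photonic lantern needs. A minimal sketch (the fiber parameters in the usage line are illustrative values, not taken from the thesis):

```python
import math

def v_number(core_diameter_um: float, na: float, wavelength_um: float) -> float:
    """Normalized frequency V of a step-index waveguide."""
    return math.pi * core_diameter_um * na / wavelength_um

def approx_mode_count(core_diameter_um: float, na: float, wavelength_um: float) -> int:
    """Approximate number of guided modes (V^2/2, step-index, both polarizations)."""
    v = v_number(core_diameter_um, na, wavelength_um)
    return max(1, round(v * v / 2))

# Illustrative example: a 50 um core, NA = 0.2, at 1.55 um
# supports on the order of two hundred modes, so a lantern fed
# by such a fiber would need a comparable number of single-mode outputs.
print(approx_mode_count(50.0, 0.2, 1.55))
```

The quadratic growth of N with the core size (and hence with the uncorrected seeing disk) is exactly why partial AO correction is needed to keep the lantern's output count manageable.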
The numerical models that simulate atmospheric turbulence and AO correction are presented in this work. The physics and models for optical fibers, arrays of waveguides, and photonic lanterns are also provided. The models are on their own useful in understanding the behavior of the individual subsystems involved and are also used together to compute the optimum sizing of photonic lanterns for feeding astrophotonic instruments. Additionally, since photonic lanterns are a relatively new concept, two novel applications are discussed for them later in this thesis: the use of mode-selective photonic lanterns (MSPLs) to reduce the multiplicity of multiplexed integrated instruments and the combination of photonic lanterns with discrete beam combiners (DBCs) to retrieve the modal content in an optical waveguide.
Identification of chemical mediators that regulate the specialized metabolism in Nostoc punctiforme
(2021)
Specialized metabolites, so-called natural products, are produced by a variety of different organisms, including bacteria and fungi. Due to their wide range of biological activities, including pharmaceutically relevant properties, microbial natural products are an important source for drug development. They are encoded by biosynthetic gene clusters (BGCs), which are groups of locally clustered genes. By screening genomic data for genes encoding typical core biosynthetic enzymes, modern bioinformatic approaches are able to predict a wide range of BGCs. To date, only a small fraction of the predicted BGCs have had their associated products identified.
The phylum of the cyanobacteria has been shown to be a prolific but largely untapped source of natural products. Especially multicellular cyanobacterial genera, like Nostoc, harbor a large number of BGCs in their genomes.
A main goal of this study was to develop new concepts for the discovery of natural products in cyanobacteria. Due to its diverse setup of orphan BGCs and its amenability to genetic manipulation, Nostoc punctiforme PCC 73102 (N. punctiforme) appeared to be a promising candidate to be established as a model organism for natural product discovery in cyanobacteria. By utilizing a combination of genome-mining, bioactivity-screening, variations of culture conditions, as well as metabolic engineering, not only two new polyketides were discovered, but also first-time insights into the regulation of the specialized metabolism in N. punctiforme were gained during this study.
The cultivation of N. punctiforme to very high densities by utilizing increasing light intensities and CO2 levels led to an enhanced metabolite production, resulting in rather complex metabolite extracts. By utilizing a library of CFP reporter mutant strains, each strain reporting on one of the predicted BGCs, it was shown that eight out of 15 BGCs were upregulated under high density (HD) cultivation conditions. Furthermore, it could be demonstrated that the supernatant of an HD culture can increase the expression of four of the influenced BGCs, even under conventional cultivation conditions. This led to the hypothesis that a chemical mediator encoded by one of the affected BGCs accumulates in the HD supernatant and is able to increase the expression of other BGCs as part of a cell-density-dependent regulatory circuit. To identify which of the BGCs could be a main trigger of the presumed regulatory circuit, an attempt was made to activate four BGCs (pks1, pks2, ripp3, ripp4) selectively by overexpression of putative pathway-specific regulatory genes that were found inside the gene clusters. Transcriptional analysis of the mutants revealed that only the mutant strain targeting the pks1 BGC, called AraC_PKS1, was able to upregulate the expression of its associated BGC. An RNA sequencing study of the AraC_PKS1 mutant strain revealed that, besides pks1, the orphan BGCs ripp3 and ripp4 were also upregulated in the mutant strain. Furthermore, it was observed that secondary metabolite production in the AraC_PKS1 mutant strain is further enhanced under high-light and high-CO2 cultivation conditions. The increased production of the pks1 regulator NvlA also had an impact on other regulatory factors, including sigma factors and the RNA chaperone Hfq. Analysis of the AraC_PKS1 cell and supernatant extracts led to the discovery of two novel polyketides, nostoclide and nostovalerolactone, both encoded by the pks1 BGC. Addition of the polyketides to N. punctiforme WT demonstrated that the pks1-derived compounds are able to partly reproduce the effects on secondary metabolite production found in the AraC_PKS1 mutant strain. This indicates that both compounds act as extracellular signaling factors as part of a regulatory network. Since not all transcriptional effects found in the AraC_PKS1 mutant strain could be reproduced by the pks1 products, it can be assumed that the regulator NvlA has a global effect and is not exclusively specific to the pks1 pathway.
This study was the first to use a putative pathway-specific regulator for the specific activation of BGC expression in cyanobacteria. This strategy not only led to the detection of two novel polyketides, it also gave first-time insights into the regulatory mechanism of the specialized metabolism in N. punctiforme. This study illustrates that understanding regulatory pathways can aid in the discovery of novel natural products. The findings of this study can guide the design of new screening strategies for bioactive compounds in cyanobacteria and help to develop high-titer production platforms for cyanobacterial natural products.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistances in stables. Transmission through direct contact with animals and through contamination of food has already been proven. The excrements of the animals, combined with a binding material, provide a further potential path of spread into the environment if they are used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, the focus of this work is the atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading from the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributes 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions due to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent manure incorporation, although the PM10 emissions of manure incorporation were larger than those of manure application for the non-dried manure variants.
Wind erosion leads to the preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the soil particles of mineral origin were determined close above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified if 6 t ha-1 of poultry manure were applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion due to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation or wind erosion. Parallel studies of this project showed that storage of poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Further antibiotic-resistant bacteria, i.e. MRSA and VRE, were only found sporadically in the stables and not at all in the dust. Therefore, based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can be considered low.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur or develop in a similar way from similar initial states makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (state space) is often high dimensional, especially in complex systems that can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional systems in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and further develops recurrence analysis (RA) by addressing current research questions related to an increasing amount of available data and advances in machine learning techniques. By automatizing a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters this thesis aims to make RA more accessible to researchers outside of physics.
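The recurrence plot described above reduces to thresholding all pairwise state-space distances, R(i, j) = Θ(ε − ‖xᵢ − xⱼ‖). A minimal NumPy sketch using the Euclidean norm as one common convention (the thesis may use other norms or a fixed recurrence rate):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if ||x_i - x_j|| <= eps.

    x may be a univariate series of shape (n,) or a state-space
    trajectory of shape (n, d); eps is the recurrence threshold.
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:          # univariate series -> column of 1D states
        x = x.T
    # All pairwise Euclidean distances via broadcasting, shape (n, n)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(np.uint8)
```

For a periodic signal, the resulting matrix shows the uninterrupted diagonal lines whose statistics RQA quantifies; the main diagonal is always recurrent and the matrix is symmetric by construction.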
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function and that this function is minimized in the embedding process. In addition, the new method can process multivariate time series as input data. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and continue to increase. Building on this, the described minimization problem of the objective function is then processed using a machine learning approach.
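The classical building block that such automated reconstruction methods optimize over is the uniform time-delay embedding. A minimal sketch with fixed embedding dimension and delay (the method described above instead selects these automatically via an objective function):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Uniform time-delay embedding of a univariate series.

    Returns an (n - (dim-1)*tau, dim) array whose rows are the
    reconstructed state vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

An automated scheme in the spirit of the text would then score candidate (dim, tau) choices, or candidate delayed channels of a multivariate input, with the objective function and keep the minimizer.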
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics and is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal line based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When transforming the diagonal-line-dependent recurrence rate (τ-RR) of an RP in this way, characteristic periods that can be seen in the state space representation of the system can be unraveled. This is not the case when Fourier transforming τ-RR.
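The threshold selection described above can be sketched in a few lines: ε is taken as a fixed percentile of all pairwise state-space distances, so the recurrence rate of the resulting RP is controlled by construction. This is a minimal NumPy illustration, not the thesis's exact procedure; the 5% quantile and the toy periodic signal are assumptions.

```python
import numpy as np

def recurrence_plot(states, quantile=5.0):
    """Binary recurrence matrix R[i, j] = 1 if ||x_i - x_j|| <= eps,
    with eps chosen as the given percentile of all pairwise distances."""
    diffs = states[:, None, :] - states[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    eps = np.percentile(dists, quantile)
    return (dists <= eps).astype(np.uint8), eps

# Toy example: a noisy-free periodic orbit yields diagonal line structures.
t = np.linspace(0, 8 * np.pi, 400)
x = np.column_stack([np.sin(t), np.cos(t)])
R, eps = recurrence_plot(x, quantile=5.0)
```

Because ε is a quantile of the distance distribution, the recurrence rate of `R` is approximately `quantile / 100` regardless of the signal's amplitude, which is what makes RPs of different systems comparable.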
Finally, RA and RQA are applied to climate science in the third part and neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. We therefore first elaborate on the basic formalism and the interpretation of visible patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the potassium record from six short (< 17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long core (∼ m composite) from the same site is analyzed and two types of variability and transitions are
identified and compared with the ODP Site wetness index from the eastern Mediterranean. Type variability likely reflects the influence of precessional forcing in the lower latitudes at times of maximum values of the long eccentricity cycle ( kyr) of the Earth's orbit around the sun, with a tendency towards extreme events. Type variability appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity in a specific frequency band of multivariate EEG (electroencephalography) data. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with and complement the well-known concepts of motor-related brain processes. We assume that the features of neuronal dynamics in the sensorimotor cortex discovered in this way, together with the robust RQA methods for identifying and classifying them, contribute to the non-invasive EEG-based development of brain-computer interfaces (BCIs) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), focusing on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
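The extension-rate estimate described above reduces to simple arithmetic: each fault's throw is converted to a horizontal heave via the fault dip, the heaves are summed across the profile, and the total is divided by the age of the faulted unit. The following is a hedged sketch with invented throw, dip, and age values, not the thesis's measurements:

```python
import math

def extension_rate_mm_per_yr(throws_m, dip_deg, age_myr):
    """Sum horizontal heaves (heave = throw / tan(dip)) across a fault
    population and average over the deformation interval."""
    heave_m = sum(t / math.tan(math.radians(dip_deg)) for t in throws_m)
    return heave_m / (age_myr * 1e6) * 1e3  # m/yr -> mm/yr

# e.g. 10 scarps with 40 m throw each, 60-degree dips, 0.5 Myr of faulting
rate = extension_rate_mm_per_yr([40.0] * 10, dip_deg=60.0, age_myr=0.5)
```

Because buried or eroded scarps are invisible to a DEM-based survey, a rate computed this way is a minimum estimate, consistent with how the values above are reported.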
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera persistent magmatic activity and magmatic resurgence overprint these young structures significantly. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. In the present day, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated due to this combination of faulting and magmatic activity, which has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift; in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains the isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research analyzing a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball and can develop into Chronic Ankle Instability (CAI), causing decreased quality of life and functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid, reliable tool are essential. However, the epidemiological data on CAI from previous studies are inconclusive, and the prevalence of CAI in Taiwanese basketball athletes is unclear. In addition, a valid and reliable Taiwan-Chinese instrument to evaluate ankle instability is missing.
PURPOSE: The aims were to provide an overview of the prevalence of CAI in sports populations using a systematic review, to develop a valid and reliable cross-culturally adapted Taiwan-Chinese version of the Cumberland Ankle Instability Tool questionnaire (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: First, a systematic search was conducted. Research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on the prevalence of CAI in 388 elite Taiwanese basketball athletes were presented. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels, and playing positions were evaluated.
RESULTS: The prevalence of CAI was 25%, ranging between 7% and 53%. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. In addition, the cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI while 50% of them had bilateral CAI. In addition, women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
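As a minimal illustration of how the reported cutoff works in practice, the following sketch classifies CAIT-TW scores against the cutoff of 21.5 from the text and computes a prevalence; the scores themselves are invented for illustration.

```python
def prevalence_of_cai(cait_scores, cutoff=21.5):
    """CAIT-style scores at or below the cutoff indicate an unstable ankle;
    return the fraction of the sample classified as having CAI."""
    unstable = sum(1 for s in cait_scores if s <= cutoff)
    return unstable / len(cait_scores)

scores = [30, 18, 25, 21, 28, 16, 27, 22]   # hypothetical athletes
p = prevalence_of_cai(scores)               # 3 of 8 score <= 21.5
```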
CONCLUSION: The systematic review shows that the prevalence of CAI has a wide range among the included studies. This could be due to differing exclusion criteria, age, sports disciplines, or other factors among the included studies. Future studies require standardized criteria to investigate the epidemiology of CAI, and epidemiological studies of CAI should be prospective. Factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may further be applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent. This might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men. When applying preventive measures, gender should be taken into consideration.
Within this work, ethyl vinylsulfonate (1a), phenyl vinyl sulfone (1b), and N-benzyl-N-methylethenesulfonamide (1c) were systematically investigated for the first time in the FUJIWARA-MORITANI reaction (also referred to as DHR). In this transition-metal-catalyzed reaction, a new C-C bond is formed under twofold C-H bond activation. An atom-economical construction of molecules can thus be realized, since no by-products in the form of salts arise. Acetanilides (2) were used as the aromatic reactant so that a regiospecific coupling takes place through the catalyst-directing acetamide group (CDG). For the Pd-catalyzed DHR, an extensive optimization was carried out, after which nine differently substituted 2 could be functionalized with 1a and seven differently substituted 2 with 1b. Since no reaction occurred with 1c, a switch was made to a Ru-catalyzed method for the DHR. With this method, 1c could be functionalized with acetanilides, and the range of 2 employed could be extended to substrates with deactivating substituents.
Subsequently, the sulfalkenylated acetanilides were examined in follow-up reactions. For this purpose, a reaction sequence consisting of deacetylation, diazotization, and coupling was used to convert the acetamide group into a leaving group and then couple it in a MATSUDA-HECK reaction. With this method, several 1,2-dialkenylbenzenes could be obtained and the CDG could be utilized once more. Besides its conversion into a leaving group, the CDG could also be integrated into the synthesis of various heterocycles. To this end, a 1,3-cycloaddition of deprotonated tosylmethyl isocyanide onto the electron-poor sulfalkenyl group was first carried out to synthesize pyrroles. Subsequently, the pyrrole function and the CDG were coupled by cyclocondensation, affording quinolines. These syntheses provided sulfur analogues of the natural product marinoquinoline A.
A further transition-metal-catalyzed C-H activation reaction, the MATSUDA-HECK reaction, was used to arylate 1b with variously substituted diazonium salts. Numerous styrenyl sulfones could be obtained here. The successful use of the vinylsulfonyl compounds in cross metathesis could not be achieved within this work. Therefore, various dialkenylated sulfonamides were synthesized, with the chain length of the alkenyl group varied between 2-3 at the sulfur and between 3-4 at the nitrogen. The dialkenylated sulfonamides were employed in the previously investigated C-H activation methods.
N-Allyl-N-phenylethenesulfonamide (3) could be successfully functionalized in the DHR and the HECK reaction. A method-specific coupling occurred depending on the electron density of the respective alkenyl group: the DHR led to selective arylation of the vinyl group, and the HECK reaction to arylation at the allyl group. Mixed products were not obtained. For the other diolefins, complex product mixtures were obtained. Furthermore, the diolefins were investigated in the ring-closing metathesis, and the corresponding sultams were obtained in very good yields. The use of the sultams in C-H activation was unsuccessful; it is presumed that the existing reaction conditions would have to be optimized for these doubly substituted sulfonamides.
Finally, various enantiomerically pure olefins were prepared starting from levoglucosenone. For this purpose, levoglucosenone was first reacted with an allyl and a 3-butenyl Grignard reagent. The corresponding products were obtained in moderate yields. A further route began with the reduction of levoglucosenone to levoglucosenol. This alcohol was successfully etherified with allyl bromide. In addition to the investigations into the ether synthesis, levoglucosenol was esterified with various sulfonyl chlorides to give the corresponding sulfonate esters. These olefins were investigated in a domino metathesis reaction. Starting from the allyl levoglucosenyl ether, a dihydrofuran was prepared.
The development and optimization of carbonaceous materials is of great interest for several applications including gas sorption, electrochemical storage and conversion, and heterogeneous catalysis. In this thesis, the exploration and optimization of nitrogen-containing carbonaceous materials by direct condensation of carefully chosen molecular precursors is presented. As suggested by the concept of noble carbons, the choice of a stable, nitrogen-containing precursor leads to an even more stable, nitrogen-doped carbonaceous material with controlled structure and electronic properties. Molecules fulfilling this requirement are, for example, nucleobases. The direct condensation of nucleobases leads to highly nitrogen-containing carbonaceous materials without any further pre- or post-treatment. By using salt melt templating, the pore structure can be adjusted without the use of hazardous or toxic reagents, and the template can be reused.
Using these simple tools, the synergistic effect of the pore structure and nitrogen content of the materials can be explored. Within this thesis, the influence of the condensation parameters is correlated with the structure and performance of the materials. First, the influence of the condensation temperature on the porosity and nitrogen content of guanine-derived materials is discussed, and the exploration of highly CO2-selective structural pores in C1N1 materials is shown. Further tuning of the pore structure by salt melt templating is then explored, and the potential of the prepared materials as heterogeneous catalysts and their basic catalytic strength are correlated with their nitrogen content and pore morphology. A similar approach is used to explore the water sorption behavior of uric acid derived carbonaceous materials as potential sorbents for heat transformation applications. Changes in the maximum water uptake and hydrophilicity of the prepared materials are correlated with the nitrogen content and pore architecture. Due to the high thermal stability, porosity, and nitrogen content of ionic liquid derived nitrogen-doped carbonaceous materials, a simple impregnation and calcination route can be conducted to obtain copper-nanocluster-decorated nitrogen-doped carbonaceous materials. The activity of the obtained materials as catalysts for the oxygen reduction reaction is shown, and structure-performance relations are discussed.
In conclusion, the versatility of nitrogen-doped carbonaceous materials with a nitrogen-to-carbon ratio of up to one is shown. The possibility of tuning the pore structure as well as the nitrogen content by a simple procedure involving salt melt templating, together with the use of molecular precursors and their effect on performance, is discussed.
Background and objectives: The intricate interdependencies between the musculoskeletal and neural systems build the foundation for postural control in humans, which is a prerequisite for successful performance of daily and sports-specific activities. Balance training (BT) is a well-established training method to improve postural control and its components (i.e., static/dynamic steady-state, reactive, proactive balance). The effects of BT have been studied in adult and youth populations, but were systematically and comprehensively assessed only in young and old adults. Additionally, when taking a closer look at established recommendations for BT modalities (e.g., training period, frequency, volume), standardized means to assess and control the progressive increase in exercise intensity are missing. Considering that postural control is primarily neuronally driven, intensity is not easy to quantify. In this context, a measure of balance task difficulty (BTD) appears to be an auspicious alternative as a training modality to monitor BT and control training progression. However, it remains unclear how a systematic increase in BTD affects balance performance and neurophysiological outcomes. Therefore, the primary objectives of the present thesis were to systematically and comprehensively assess the effects of BT on balance performance in healthy youth and establish dose-response relationships for an adolescent population. Additionally, this thesis aimed to investigate the effects of a graded increase in BTD on balance performance (i.e., postural sway) and neurophysiological outcomes (i.e., leg muscle activity, leg muscle coactivation, cortical activity) in adolescents.
Methods: Initially, a systematic review and meta-analysis of the effects of BT on balance performance in youth was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines. Following this complementary analysis, thirteen healthy adolescents (3 female / 10 male) aged 16-17 years were enrolled in two cross-sectional studies. The participants executed bipedal balance tasks on a multidirectional balance board that allowed six gradually increasing levels of BTD by narrowing the balance board's base of support. During task performance, two pressure-sensitive mats fixed on the balance board recorded postural sway. Leg muscle activity and leg muscle coactivation were assessed via electromyography, while electroencephalography was used to monitor cortical activity.
Results: Findings from the systematic review and meta-analysis indicated moderate-to-large effects of BT on static and dynamic balance performance in youth (static: weighted mean standardized mean differences [SMDwm] = 0.71; dynamic: SMDwm = 1.03). In adolescents, training-induced effects were moderate and large for static (SMDwm = 0.61) and dynamic (SMDwm = 0.86) balance performance, respectively. Independently (i.e., modality-specifically) calculated dose-response relationships identified a training period of 12 weeks, a frequency of two training sessions per week, a total of 24-36 sessions, a duration of 4-15 minutes, and a total duration of 31-60 minutes as the training modalities with the largest effect on overall balance performance in adolescents. However, the implemented meta-regression indicated that none of these training modalities (R² = 0%) could predict the observed performance-increasing effects of BT.
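A weighted mean SMD of the kind reported above can be illustrated with standard fixed-effect, inverse-variance pooling. This is a generic sketch, not necessarily the exact pooling model used in the meta-analysis, and the study values below are invented:

```python
def weighted_mean_smd(smds, variances):
    """Fixed-effect pooling: weight each study's SMD by the inverse of
    its variance, so more precise studies contribute more."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, smds)) / sum(weights)

# three hypothetical BT studies with their SMDs and variances
smdwm = weighted_mean_smd([0.5, 0.9, 1.2], [0.04, 0.09, 0.16])
```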
Results from the first cross-sectional study revealed that a gradually increasing level of BTD caused increases in postural sway (p < 0.001; d = 6.36), higher leg muscle activity (p < 0.001; 2.19 < d < 4.88), and higher leg muscle coactivation (p < 0.001; 1.32 < d < 1.41). Increases in postural sway and leg muscle activity were mainly observed during low and high levels of task difficulty during continuous performance of the respective balance task. Results from the second cross-sectional study indicated frequency-specific increases/decreases in cortical activity of different brain areas (p < 0.005; 0.92 < d < 1.80) as a function of BTD. Higher cortical activity within the theta frequency band in the frontal and central right brain areas was observed with increasing postural demands. Concomitantly, activity in the alpha-2 frequency band was attenuated in parietal brain areas.
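Band-specific measures like the theta and alpha-2 activity described above can be sketched by summing FFT power over the band of interest. The band edges, sampling rate, and toy signal below are assumptions for illustration, not the thesis's EEG pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Sum FFT power over the frequency bins inside [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return power[mask].sum()

rng = np.random.default_rng(0)
fs = 250                                  # Hz, assumed sampling rate
t = np.arange(0, 10, 1.0 / fs)
# toy trace: a 6 Hz (theta-band) oscillation plus noise
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
theta = band_power(eeg, fs, 4.0, 8.0)     # theta, ~4-8 Hz
alpha2 = band_power(eeg, fs, 10.0, 12.0)  # alpha-2, ~10-12 Hz
```

For the theta-dominated toy trace, the theta-band power exceeds the alpha-2 power, mirroring the kind of band-wise contrast reported in the study.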
Conclusion: BT is an effective method to increase static and dynamic balance performance and, thus, improve postural control in healthy youth populations. However, none of the reported training modalities (i.e., training period, frequency, volume) could explain the effects on balance performance. Furthermore, a gradually increasing level of task difficulty resulted in increases in postural sway, leg muscle activity, and coactivation. Frequency and brain area-specific increases/decreases in cortical activity emphasize the involvement of frontoparietal brain areas in regulatory processes of postural control dependent on BTD. Overall, it appears that increasing BTD can be easily accomplished by narrowing the base of support. Since valid methods to assess and quantify BT intensity do not exist, increasing BTD appears to be a very useful candidate to implement and monitor progression in BT programs in healthy adolescents.
Many children struggle with reading for comprehension. Reading is a complex cognitive task depending on various sub-tasks, such as word decoding and building connections across sentences. The task of connecting sentences is guided by referential expressions. References, such as anaphoric noun phrases (Minky/the cat) or pronouns (Minky/she), signal to the reader how the protagonists of adjacent sentences are connected. Readers construct a coherent mental model of the text by resolving these references. Personal pronouns (he/she) in particular need to be resolved towards an appropriate antecedent before they can be fully understood. Pronoun resolution therefore is vital for successful text comprehension. The present thesis investigated children’s resolution of personal pronouns during natural reading as a possible source of reading comprehension difficulty. Three eye tracking studies investigated whether children aged 8-9 (Grade 3-4) resolve pronouns online during reading and how the varying information around the pronoun region influences children’s eye movement behavior.
The first study investigated whether children prefer a pronoun over a noun phrase when the antecedent is highly accessible. Children read three-sentence stories that introduced a protagonist (Mia) in the first sentence and a reference to this protagonist in one of the following sentences using either a repeated name (Mia) or a pronoun (she). For proficient readers, it has repeatedly been shown that there is a preference for a pronoun over the name in these contexts, i.e., when the antecedent is salient. The first study tested the repeated name penalty effect in children using eye tracking. It was hypothesized that, in contrast to proficient readers, children's reading fluency profits from an overlapping word form (i.e., the repeated noun phrase) more than from a pronoun. This is because overlapping word forms allow for direct mapping, whereas pronouns first have to be resolved towards their antecedent.
The second study investigated children’s online processing of pronominal gender in a mismatch paradigm. Children read sentences in which the pronoun either was a gender-match to the antecedent or a gender-mismatch. Reading skill and reading fluency were also tested and related to children’s ability to detect a mismatching pronoun during reading.
The third study investigated the online processing of gender information on the pronoun and whether disambiguating gender information improves the accuracy of pronoun comprehension. Offline comprehension accuracy, that is the comprehension of the pronoun, was related to children’s online eye movement behavior. This study was conducted in a semi-longitudinal paradigm: 70 children were tested in Grade 3 (age 8) and again in Grade 4 (age 9) to investigate effects of age and reading skill on pronoun processing and comprehension.
The results of this thesis clearly show that children aged 8-9, in the second half of primary school, struggle with the comprehension of pronouns in reading tasks. The responses to pronoun comprehension questions revealed that children have difficulties comprehending a pronoun in the absence of a disambiguating gender cue, that is, when they have to apply context information. When there is a gender cue to disambiguate the pronoun, children's accuracy improves significantly. This is true for children in Grade 3 but also in Grade 4, although their overall resolution accuracy slightly improves with age.
The results from the analyses of eye movements suggest that the discourse accessibility of an antecedent does play a role in children's processing of pronouns and repeated names. The repetition of a name does not facilitate children's reading processing as was anticipated. Similar to adults, children showed a penalty effect for the repeated name where a pronoun is expected. However, this does not mean that children's processing of pronouns is always adult-like. The results from eye movement analyses in the pronoun region during sentence reading revealed significant individual differences related to children's individual reading skill and reading fluency.
The results from the mismatch study revealed that reading fluency is associated with children’s detection of incongruent pronouns. All children had longer gaze durations at mismatching than matching pronouns, but only fluent readers among the children followed this up with a regression out of the pronoun region. This was interpreted as an attempt to gain processing time and “repair” the inconsistency. Reading fluency was therefore associated with detection of the mismatch, while less fluent readers did not see any mismatch between pronoun and antecedent. The eye movement pattern of the “detectors” is more adult-like and was interpreted as reflecting successful monitoring and attempted pronoun resolution.
Children differ considerably in their reading comprehension skill. The results of this thesis show that only the skilled readers among the children use gender information online for pronoun resolution. In contrast to the less skilled readers, they took more time to read the pronoun when there was disambiguating gender information that was useful to resolve it. Age was a less important factor in pronoun resolution processes and comprehension than reading skill and reading fluency. Taken together, this suggests that the good readers direct cognitive resources towards pronoun resolution when the pronoun can be resolved, which is a successful comprehension strategy.
The contribution of the present thesis is a depiction of the specific eye movement patterns that are related to successful and unsuccessful attempts at pronoun resolution in children. Eye movement behavior in the pronoun region is related to children's reading skill and fluency. The results of this thesis suggest that many children do not resolve pronouns spontaneously during sentence reading, which is likely detrimental to their reading comprehension in more complex reading materials. The present thesis informs our understanding of the challenge that pronoun resolution poses for beginning readers and provides new impetus for the study of higher-order reading processes in children's natural reading.
In the present work, a multidisciplinary investigation was carried out combining methods of tectonic geomorphology with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and of the southern end of the Metán basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region by proposing a novel structural model, with the aim of expanding the available information on neotectonic structures and their seismogenic potential. To this end, various techniques were applied and integrated, such as the interpretation of seismic reflection lines, the construction of balanced structural cross-sections, and shallow geophysical methods, in order to verify the behavior at depth both of the geological structures identified at the surface and of the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL 2 multispectral satellite imagery, which allowed different levels of Quaternary alluvial fans and fluvial terraces to be recognized. By determining various morphometric indicators in digital elevation models (DEMs), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels that has been genetically related to four neotectonic faults. Three of them (the Arias, El Quemado, and Copo Quile faults) were selected for more detailed study by means of shallow geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to corroborate their existence at depth, draw geometric and kinematic inferences, and estimate the magnitude of recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, while the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploratory wells from hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to place the main recognized structures in the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural cross-section using kinematic modeling techniques. This model suggests that the recognized Quaternary deformation is related to basement displacement along a blind thrust responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. The kinematic model also allows the approximate location of the main detachment levels that control the deformation style to be interpreted.
El nivel de despegue más somero, que controla la deformación de la cobertura sedimentaria se encuentra a 4 km de profundidad, a 21 km se estima la presencia de otra zona de cizalla subhorizontal dentro del basamento.
Finalmente, a partir de la integración de todos los resultados obtenidos, se evaluó el potencial sismogénico de las fallas en la zona de estudio. Las fallas de primer orden que controlan la deformación en la zona son las responsables de los grandes terremotos. Mientras, las fallas Cuaternarias flexodeslizantes e inversas afectan solamente a la cobertura sedimentaria y serían estructuras de segundo orden que acomodan la deformación y fueron activadas durante el cuaternario con movimientos asísmicos y/o sísmicos de muy baja magnitud.
Estos resultados permiten inferir que el corrimiento La Candelaria constituye una fuente sismogénica potencial de importancia para la región, donde se ubican numerosas poblaciones y obras civiles de envergadura. Por otra parte, la sección estructural balanceada implica la presencia de otras fallas ciegas de distinto orden de magnitud que podrían ser posibles fuentes sismogénicas profundas adicionales, marcando la necesidad de continuar con el desarrollo de este tipo de estudios en esta región tectónicamente activa.
The optical properties of chromophores, especially organic dyes and optically active inorganic molecules, are determined by their chemical structures, surrounding media, and excited-state behavior. The classical go-to techniques for spectroscopic investigations are absorption and luminescence spectroscopy. While both are powerful and easy-to-apply methods, the limited time resolution of luminescence spectroscopy and its reliance on luminescent properties can make its application in certain cases complex or even impossible. This can be the case when the investigated molecules no longer luminesce due to quenching effects, or when they were never luminescent in the first place. In those cases, transient absorption spectroscopy is an excellent and much more sophisticated technique for investigating such systems. This pump-probe laser-spectroscopic method is well suited for mechanistic investigations of luminescence quenching phenomena and photoreactions, owing to its extremely high time resolution in the femto- and picosecond ranges, where many intermediate or transient species of a reaction can be identified and their kinetic evolution can be observed. Furthermore, it does not rely on the samples being luminescent, because the sample is actively probed after excitation. This work shows that transient absorption spectroscopy made it possible to identify the luminescence quenching mechanisms, and thus the luminescence quantum yield losses, of the organic dye classes O4-DBD, S4-DBD, and the pyridylanthracenes: the population of their triplet states was identified as the mechanism competing with luminescence. While the good luminophores of the O4-DBD class showed only minor losses, the luminescence of the S4-DBD dyes was almost entirely quenched by this process. For the pyridylanthracenes, this phenomenon is present in both the protonated and unprotonated forms and moderately affects the luminescence quantum yield.
Moreover, the majority of the quenching losses in the protonated forms are caused by additional non-radiative processes introduced by the protonation of the pyridyl rings. Furthermore, transient absorption spectroscopy was applied to investigate the quenching mechanisms of uranyl(VI) luminescence by chloride and bromide. The reduction of the halides by excited uranyl(VI) leads to the formation of dihalide radicals X₂•−. This excited-state redox process was thus identified as the quenching mechanism for both halides; being diffusion-limited, it can be suppressed by cryogenically freezing the samples or by observing these interactions in media with a lower dielectric constant, such as ACN and acetone.
Previous behavioral studies showed that perceptual changes in infancy can be observed in multiple patterns, namely decline (e.g., Mattock et al., 2008; Yeung et al., 2013), maintenance (e.g., Chen & Kager, 2016) and U-shaped development (Liu & Kager, 2014).
This dissertation contributes further to the understanding of the developmental trajectory of phonological acquisition in infancy. The dissertation addresses the questions of how the perceptual sensitivity of lexical tones and vowels changes in infancy and how different experimental procedures contribute to our understanding. We used three experimental procedures to investigate German-learning infants’ discrimination abilities. In Studies 1 and 3 (Chapters 5 and 7) we used behavioral methods (habituation and familiarization procedures) and in Study 2 (Chapter 6) we measured neural correlates.
Study 1 showed a U-shaped developmental pattern: 6- and 18-month-olds discriminated a lexical tone contrast, but not the 9-month-olds. In addition, we found an effect of experimental procedure: infants discriminated the tone contrast at 6 months in a habituation but not in a familiarization procedure. In Study 2, we observed mismatch responses (MMR) to a non-native tone contrast and a native-like vowel in 6- and 9-month-olds. In 6-month-olds, both contrasts elicited positive MMRs. At 9 months, the vowel contrast elicited an adult-like negative MMR, while the tone contrast elicited a positive MMR. Study 3 demonstrated a change in perceptual sensitivity to a vowel contrast between 6 and 9 months. In contrast to the 6-month-old infants, the 9-month-old infants discriminated the tested vowel contrast asymmetrically.
We suggest that the shifts in perceptual sensitivity between 6 and 9 months are functional rather than perceptual. In the case of lexical tone discrimination, infants may have already learned by 9 months of age that pitch is not relevant at the lexical level in German, since the infants in Study 1 showed no perceptual sensitivity to the contrast tested. Nevertheless, the brain responded to the contrast, especially since pitch differences are also part of the German intonation system (Gussenhoven, 2004). The role of the intonation system in pitch discrimination could be supported by the recovery of behavioral discrimination at 18 months of age, as well as behavioral and neural discrimination in German-speaking adults.
Generative adversarial networks (GANs) have been broadly applied to a wide range of application domains since their proposal. In this thesis, we propose several methods that aim to tackle different existing problems in GANs. In particular, even though GANs are generally able to generate high-quality samples, the diversity of the generated set is often sub-optimal. Moreover, the common practice of increasing the number of models in the original GANs framework, as well as their architectural sizes, introduces additional costs. Additionally, the proper evaluation of a generated set, although challenging, is an important direction for ultimately improving the generation process in GANs. We start by introducing two diversification methods that extend the original GANs framework to multiple adversaries to stimulate sample diversity in a generated set. Then, we introduce a new post-training compression method based on Monte Carlo methods and importance sampling to quantize and prune the weights and activations of pre-trained neural networks without any additional training. This method may be used to reduce the memory and computational costs introduced by increasing the number of models in the original GANs framework. Moreover, we use a similar procedure to quantize and prune gradients during training, which also reduces the communication costs between workers in a distributed training setting. We introduce several topology-based evaluation methods to assess data generation in different settings, namely image generation and language generation. Our methods retrieve both single-valued and double-valued metrics which, given a real set, may be used to broadly assess a generated set or to separately evaluate sample quality and sample diversity, respectively. Moreover, two of our metrics use locality-sensitive hashing to accurately assess the generated sets of highly compressed GANs.
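The idea of compressing a network by sampling its parameters can be illustrated with a toy sketch: draw weights with probability proportional to their magnitude and rescale the survivors so that the sparse tensor is an unbiased estimate of the original. This is only a schematic of importance-sampling-based pruning, not the algorithm developed in the thesis; the function name and interface are hypothetical.

```python
import numpy as np

def importance_sample_weights(w, n_samples, rng=None):
    """Prune a weight vector by importance sampling: keep weights drawn
    with probability proportional to their magnitude, re-weighted so the
    sparse result is an unbiased estimator of the dense vector."""
    rng = np.random.default_rng(rng)
    p = np.abs(w) / np.abs(w).sum()              # sampling distribution over weights
    idx = rng.choice(w.size, size=n_samples, p=p)
    counts = np.bincount(idx, minlength=w.size)  # how often each weight was drawn
    sparse = np.zeros_like(w, dtype=float)
    hit = counts > 0
    # unbiased estimator: E[count_i] = n_samples * p_i, hence E[sparse_i] = w_i
    sparse[hit] = w[hit] * counts[hit] / (n_samples * p[hit])
    return sparse
```

At most `n_samples` entries survive, so small weights are pruned with high probability while the expectation of the sparse vector matches the original.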
The analysis of the compression effects in GANs paves the way for their efficient employment in real-world applications. Given their general applicability, the methods proposed in this thesis may be extended beyond the context of GANs. Hence, they may be generally applied to enhance existing neural networks and, in particular, generative frameworks.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without an intermediate representation.
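Eigenvalue-based metrics of this kind can be sketched briefly; the formulation below is one common definition from the point-cloud literature (the exact definitions used in the thesis framework may differ), computing planarity, linearity, and verticality from the covariance of a local neighborhood:

```python
import numpy as np

def local_shape_features(neighborhood):
    """Eigenvalue-based shape features for a 3D point neighborhood.
    Uses one common formulation: eigenvalues l1 >= l2 >= l3 of the
    neighborhood covariance and the surface normal estimate."""
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)                     # 3x3 covariance of the points
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    l3, l2, l1 = evals                      # relabel so l1 is the largest
    normal = evecs[:, 0]                    # eigenvector of smallest eigenvalue
    planarity = (l2 - l3) / l1 if l1 > 0 else 0.0
    linearity = (l1 - l2) / l1 if l1 > 0 else 0.0
    verticality = 1.0 - abs(normal[2])      # 0 for horizontal surfaces
    return {"planarity": planarity, "linearity": linearity,
            "verticality": verticality}
```

A flat road patch yields high planarity and near-zero verticality, while a pole-like neighborhood yields high linearity; such per-point features then feed the geometric classification.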
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground-truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Magnetic strain contributions in laser-excited metals studied by time-resolved X-ray diffraction
(2021)
In this work I explore the impact of magnetic order on the laser-induced ultrafast strain response of metals. Few experiments with femto- or picosecond time-resolution have so far investigated magnetic stresses. This is contrasted by the industrial usage of magnetic invar materials or magnetostrictive transducers for ultrasound generation, which already utilize magnetostrictive stresses in the low frequency regime.
In the reported experiments I investigate how the energy deposited by the absorption of femtosecond laser pulses in thin metal films leads to ultrafast stress generation. I exploit the fact that this stress drives an expansion that emits nanoscopic strain pulses, so-called hypersound, into adjacent layers. Both the expansion and the strain pulses change the average inter-atomic distance in the sample, which can be tracked with sub-picosecond time resolution using an X-ray diffraction setup at a laser-driven plasma X-ray source. Ultrafast X-ray diffraction can also be applied to buried layers within heterostructures that cannot be accessed by optical methods, which exhibit only a limited penetration into metals. The reconstruction of the initial energy-transfer processes from the shape of the strain pulse in buried detection layers represents a contribution of this work to the field of picosecond ultrasonics.
A central point for the analysis of the experiments is the direct link between the deposited energy density in the nanostructures and the resulting stress on the crystal lattice. The underlying thermodynamic concept of a Grüneisen parameter provides the theoretical framework for my work. I demonstrate how the Grüneisen principle can be used to interpret the strain response of various materials on ultrafast timescales and how it can be extended to describe magnetic stresses. The class of heavy rare-earth elements exhibits especially large magnetostriction effects, which can even lead to an unconventional contraction of the laser-excited transducer material. Such a dominant contribution of the magnetic stress to the motion of atoms had not been demonstrated previously. The observed rise time of the magnetic stress contribution in dysprosium is identical to the decrease in the helical spin order that was previously found using time-resolved resonant X-ray diffraction. This indicates that the strength of the magnetic stress can be used as a proxy of the underlying magnetic order. Such magnetostriction measurements are applicable even in the case of antiparallel or non-collinear alignment of the magnetic moments and a vanishing magnetization.
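The Grüneisen framework described above can be condensed into a compact relation; in one common formulation (symbols and subsystem labels here are generic, not necessarily those of the thesis), the laser-induced stress is proportional to the energy densities deposited in the individual subsystems:

```latex
\sigma(z,t) \;=\; \sum_{r} \Gamma_{r}\,\rho_{r}(z,t),
\qquad r \in \{\mathrm{el},\,\mathrm{ph},\,\mathrm{mag}\},
```

where $\rho_{r}(z,t)$ is the energy density stored in the electron, phonon, or magnetic subsystem and $\Gamma_{r}$ its Grüneisen parameter. A negative $\Gamma_{\mathrm{mag}}$, as in the heavy rare earths, produces a contractive magnetic stress that can outweigh the expansive electron and phonon contributions.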
The strain response of metal films is usually determined by the pressure of electrons and lattice vibrations. I have developed a versatile two-pulse excitation routine that can be used to extract the magnetic contribution to the strain response even if systematic measurements above and below the magnetic ordering temperature are not feasible. A first laser pulse leads to a partial ultrafast demagnetization, so that the amplitude and shape of the strain response triggered by the second pulse depend on the remaining magnetic order. With this method I could identify a strongly anisotropic magnetic stress contribution in the magnetic data-storage material iron-platinum and track the recovery of the magnetic order by varying the pulse-to-pulse delay. The stark contrast between the expansion of iron-platinum nanograins and thin films shows that the different constraints on the in-plane expansion have a strong influence on the out-of-plane expansion, due to the Poisson effect. I show how such transverse strain contributions need to be accounted for when interpreting the ultrafast out-of-plane strain response using thermal expansion coefficients obtained under near-equilibrium conditions.
This work contributes an investigation of magnetostriction on ultrafast timescales to the literature on magnetic effects in materials. It develops a method to extract spatially and temporally varying stress contributions based on a model for the amplitude and shape of the emitted strain pulses. Energy-transfer processes result in a change of the stress profile with respect to the initial absorption of the laser pulses. One interesting example occurs in nanoscopic gold-nickel heterostructures, where excited electrons rapidly transport energy into a distant nickel layer, which takes up much more energy and expands faster and more strongly than the laser-excited gold capping layer. Magnetic excitations in rare-earth materials represent a large energy reservoir that delays the energy transfer into adjacent layers. Such magneto-caloric effects are known in thermodynamics but not extensively covered on ultrafast timescales. The combination of ultrafast X-ray diffraction with time-resolved techniques that have direct access to the magnetization has a large potential to uncover and quantify such energy-transfer processes.
The steady advancement of VR systems offers new possibilities for interacting with virtual objects in three-dimensional space, but it also confronts developers of VR applications with new challenges. Selection and manipulation techniques must be chosen with the application scenario, the target group, and the available input and output devices in mind. This work contributes to supporting the choice of suitable interaction techniques. To this end, a representative set of selection and manipulation techniques was examined and, taking existing classification systems into account, a taxonomy was developed that allows the techniques to be analyzed with respect to interaction-relevant properties. Based on this taxonomy, techniques were selected and compared in an exploratory study in order to draw conclusions about the dimensions of the taxonomy and to generate new evidence on the advantages and disadvantages of the techniques in specific application scenarios. The results of this work culminate in a web application that specifically supports developers of VR applications in selecting suitable selection and manipulation techniques for an application scenario, by allowing techniques to be filtered on the basis of the taxonomy and sorted using the results of the study.
The evolution of life on Earth has been driven by disturbances of different types and magnitudes over the 4.6 billion years of Earth’s history (Raup, 1994, Alroy, 2008). One example of such disturbances are mass extinctions, which are characterized by an exceptional increase in the extinction rate affecting a great number of taxa in a short interval of geologic time (Sepkoski, 1986). During the 541 million years of the Phanerozoic, life on Earth suffered five exceptionally severe mass extinctions named the “Big Five Extinctions”. Many mass extinctions are linked to changes in climate
(Feulner, 2009). Hence, the study of past mass extinctions is not only intriguing, but can also provide insights into the complex nature of the Earth system. This thesis aims at deepening our understanding of the triggers of mass extinctions and how they affected life. To accomplish this, I investigate changes in climate during two of the Big Five extinctions using a coupled climate model.
During the Devonian (419.2–358.9 million years ago) the first vascular plants and vertebrates evolved on land while extinction events occurred in the ocean (Algeo et al., 1995). The causes of these formative changes, their interactions and their links to changes in climate are still poorly understood. Therefore, we explore the sensitivity of the Devonian climate to various boundary conditions using an intermediate-complexity climate model (Brugger et al., 2019). In contrast to Le Hir et al. (2011), we find only a minor biogeophysical effect of changes in vegetation cover due to unrealistically high soil albedo values used in the earlier study. In addition, our results cannot support the strong influence of orbital parameters on the Devonian climate, as simulated with a climate model with a strongly simplified ocean model (De Vleeschouwer et al., 2013, 2014, 2017). We can only reproduce the changes in Devonian climate suggested by proxy data by decreasing atmospheric CO2. Still, finding agreement between the evolution of sea surface temperatures reconstructed from proxy data (Joachimski et al., 2009) and our simulations remains challenging and suggests a lower δ18O ratio of Devonian seawater. Furthermore, our study of the sensitivity of the Devonian climate reveals a prevailing mode of climate variability on a timescale of decades to centuries. The quasi-periodic ocean temperature fluctuations are linked to a physical mechanism of changing sea-ice cover, ocean convection and overturning in high northern latitudes.
In the second study of this thesis (Dahl et al., under review) a new reconstruction of atmospheric CO2 for the Devonian, which is based on CO2-sensitive carbon isotope fractionation in the earliest vascular plant fossils, suggests a much earlier drop of atmospheric CO2 concentration than previously reconstructed, followed by nearly constant CO2 concentrations during the Middle and Late Devonian. Our simulations for the Early Devonian with identical boundary conditions as in our Devonian sensitivity study (Brugger et al., 2019), but with a low atmospheric CO2 concentration of 500 ppm, show no direct conflict with available proxy and paleobotanical data and confirm that under the simulated climatic conditions carbon isotope fractionation represents a robust proxy for atmospheric CO2. To explain the earlier CO2 drop we suggest that early forms of vascular land plants had already strongly influenced weathering. This new perspective on the Devonian questions previous ideas about the climatic conditions and earlier explanations for the Devonian mass extinctions.
The second mass extinction investigated in this thesis is the end-Cretaceous mass extinction (66 million years ago) which differs from the Devonian mass extinctions in terms of the processes involved and the timescale on which the extinctions occurred. In the two studies presented here (Brugger et al., 2017, 2021), we model the climatic effects of the Chicxulub impact, one of the proposed causes of the end-Cretaceous extinction, for the first millennium after the impact. The light-dimming effect of stratospheric sulfate aerosols causes severe cooling, with a decrease of global annual mean surface air temperature of at least 26 °C and a recovery to pre-impact temperatures after more than 30 years. The sudden surface cooling of the ocean induces deep convection which brings nutrients from the deep ocean via upwelling to the surface ocean. Using an ocean biogeochemistry model we explore the combined effect of ocean mixing and iron-rich dust originating from the impactor on the marine biosphere. As soon as light levels have recovered, we find a short, but prominent peak in marine net primary productivity. This newly discovered mechanism could result in toxic effects for marine near-surface ecosystems. Comparison of our model results to proxy data (Vellekoop et al., 2014, 2016, Hull et al., 2020) suggests that carbon release from the terrestrial biosphere is required in addition to the carbon dioxide which can be attributed to the target material. Surface ocean acidification caused by the addition of carbon dioxide and sulfur is only moderate. Taken together, the results indicate a significant contribution of the Chicxulub impact to the end-Cretaceous mass extinction by triggering multiple stressors for the Earth system.
Although the sixth extinction we face today is characterized by human intervention in nature, this thesis shows that we can gain many insights into future extinctions from studying past mass extinctions, such as the importance of the rate of change (Rothman, 2017), the interplay of multiple stressors (Gunderson et al., 2016), and changes in the carbon cycle (Rothman, 2017, Tierney et al., 2020).
Elucidating the molecular basis of enhanced growth in the Arabidopsis thaliana accession Bur-0
(2021)
The life cycle of flowering plants is a dynamic process that involves successfully passing through several developmental phases, and tremendous progress has been made in revealing the cellular and molecular regulatory mechanisms underlying these phases, morphogenesis, and growth. Although several key regulators of plant growth or developmental phase transitions have been identified in Arabidopsis, little is known about factors that become active during embryogenesis and seed development as well as during further postembryonic growth, and even less is known about accession-specific factors that determine plant architecture and organ size. Bur-0 has been reported as a natural Arabidopsis thaliana accession with exceptionally big seeds and a large rosette; its phenotype makes it an interesting candidate for studying growth and developmental aspects in plants, yet the molecular basis underlying this big phenotype remains to be elucidated. Thus, the general aim of this PhD project was to investigate and unravel the molecular mechanisms underlying the big phenotype of Bur-0.
Several natural Arabidopsis accessions and late flowering mutant lines were analysed in this study, including Bur-0. Phenotypes were characterized by determining rosette size, seed size, flowering time, SAM size and growth in different photoperiods, during embryonic and postembryonic development. Our results demonstrate that Bur-0 stands out as an interesting accession with simultaneously larger rosettes, larger SAM, later flowering phenotype and larger seeds, but also larger embryos. Interestingly, inter-accession crosses (F1) resulted in bigger seeds than the parental self-crossed accessions, particularly when Bur-0 was used as the female parental genotype, suggesting parental effects on seed size that might be maternally controlled. Furthermore, developmental stage-based comparisons revealed that the large embryo size of Bur-0 is achieved during late embryogenesis and the large rosette size is achieved during late postembryonic growth. Interestingly, developmental phase progression analyses revealed that from germination onwards, the length of developmental phases during postembryonic growth is delayed in Bur-0, suggesting that in general, the mechanisms that regulate developmental phase progression are shared across developmental phases.
On the other hand, a detailed physiological characterization of different tissues at different developmental stages revealed accession-specific physiological and metabolic traits that underlie accession-specific phenotypes; in particular, more carbon resources were found in Bur-0 during embryonic and postembryonic development, suggesting an important role of carbohydrates in the determination of the bigger Bur-0 phenotype. Additionally, differences in cellular organization, nuclear DNA content, and ploidy level were analyzed in different tissues/cell types, and we found that the large organ size of Bur-0 can mainly be attributed to its larger cells and to higher cell proliferation in the SAM, but not to a different ploidy level.
Furthermore, RNA-seq analysis of embryos at torpedo and mature stage, as well as SAMs at vegetative and floral transition stage from Bur-0 and Col-0 was conducted to identify accession-specific genetic determinants of plant phenotypes, shared across tissues and developmental stages during embryonic and postembryonic growth. Potential candidate genes were identified and further validation of transcriptome data by expression analyses of candidate genes as well as known key regulators of organ size and growth during embryonic and postembryonic development confirmed that the high confidence transcriptome datasets generated in this study are reliable for elucidation of molecular mechanisms regulating plant growth and accession-specific phenotypes in Arabidopsis.
Taken together, this PhD project contributes to the plant development research field providing a detailed analysis of mechanisms underlying plant growth and development at different levels of biological organization, focusing on Arabidopsis accessions with remarkable phenotypical differences. For this, the natural accession Bur-0 was an ideal outlier candidate and different mechanisms at organ and tissue level, cell level, metabolism, transcript and gene expression level were identified, providing a better understanding of different factors involved in plant growth regulation and mechanisms underlying different growth patterns in nature.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based, so there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, but few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating-scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after they receive an in-depth role-script compared to those SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented and the hypothesis could be confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
Partial synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of the variety of dynamical states they exhibit. Frequently, they are studied using phase dynamics. This is a limitation, as phase dynamics are generally obtained in the weak-coupling limit as a first-order approximation in the coupling strength; the generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research on partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of coupling yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony. This thesis investigates partial synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to capture interactions beyond the pairwise ones of the first-order phase description, in the hope that these may apply to mixed-coupling systems. This new method for coupled systems with known phase dynamics of the units gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with a uniform coupling strength, and a numerical method is derived to verify the analytical results. These results are interesting but also motivate the study of simpler phase models that still exhibit exotic states. Such simple, rarely considered models are Kuramoto oscillators with attractive and repulsive interactions. Depending on how the units are coupled and on the frequency difference between the units, many different states can be achieved. Rich synchronization dynamics, such as a Bellerophon state, are observed when considering a Kuramoto model with attractive interactions within two subpopulations (groups) and repulsive interactions between the groups.
In a system of two groups of identical oscillators with a frequency difference between the groups, one group coupled attractively and the other repulsively, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
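A minimal numerical sketch of such a two-group Kuramoto setup is given below. It is an illustration, not the exact model of the thesis: the parameter values are hypothetical, the integration is plain Euler, and the per-group Kuramoto order parameter is used to quantify synchrony.

```python
import numpy as np

def kuramoto_two_groups(n=50, k_in=1.0, k_out=-0.5, dw=0.1,
                        dt=0.01, steps=5000, seed=0):
    """Euler integration of two Kuramoto subpopulations with attractive
    coupling inside each group (k_in > 0), repulsive coupling between
    the groups (k_out < 0) and a frequency difference dw between them."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
    omega = np.concatenate([np.zeros(n), np.full(n, dw)])
    # block coupling matrix: k_in within groups, k_out across groups
    K = np.full((2 * n, 2 * n), k_out)
    K[:n, :n] = k_in
    K[n:, n:] = k_in
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]  # theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(diff)).sum(axis=1) / (2 * n))
    # Kuramoto order parameter of each group (1 = full sync, ~0 = incoherent)
    r1 = abs(np.exp(1j * theta[:n]).mean())
    r2 = abs(np.exp(1j * theta[n:]).mean())
    return r1, r2
```

With the inter-group coupling switched off, each group of identical oscillators synchronizes fully; with mixed coupling, partially synchronous states become possible.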
This thesis addresses the fabrication and characterization of mixed-matrix membranes (MMMs) for gas separation. Various fillers were combined with the membrane material polysulfone to produce MMMs. Three active and two passive fillers were used. The active fillers possessed pore openings capable of separating gases according to molecular size, yielding a higher ideal separation factor for certain gas pairs than polysulfone itself. Owing to the permanent channels formed by the pores in the active fillers, gas transport (permeability) is faster than in polysulfone. The active fillers were the zeolite SAPO-34 and two batches of a zeolitic imidazolate framework (ZIF), ZIF-8. The two ZIF-8 batches differed in their specific surface area, so that this influence could be explicitly included in the gas transport studies. The passive fillers were an amino-functionalized silica gel and non-porous (dense) glass spheres. The silica gel had pores too large to separate gases effectively; the glass spheres, having no pores, could not separate gases at all.
It is known from the literature that embedding fillers often causes defects in MMMs. One aim of this work was therefore to optimize the embedding. Furthermore, gas transport in the MMMs of this work was to be compared with that in an unloaded polysulfone membrane. Given the more selective separation behaviour of the active fillers compared with the membrane material, embedding active fillers was expected to improve the separation performance of the MMMs progressively with increasing filler loading.
To investigate the properties of the MMMs, they were characterized by scanning electron microscopy (SEM), gas permeation measurements (GP), and thermogravimetric analysis coupled with mass spectrometry (TGA-MS).
SEM investigations showed improved embedding when a polymeric adhesion promoter was used. The optimized embedding was compared with embedding without an adhesion promoter and with literature results describing the use of various silanes as adhesion promoters. Despite the improved embedding, only a small increase in the ideal separation factor of the MMMs over the unloaded polysulfone membranes was observed, and only at low filler loadings (10 and 20 wt% relative to the membrane material). At higher filler loadings (30, 40, and 50 wt%), permeability rose markedly while the ideal separation factor dropped sharply. TGA-MS measurements further revealed that the pore openings of the zeolite SAPO-34 were blocked by water molecules. This prevented gas transport in the filler, so that its separation capability could not be exploited. The fillers ZIF-8 (regardless of batch) and amino-functionalized silica gel showed no blocked pores. Nevertheless, the corresponding MMMs showed no improvement in gas separation or gas transport properties. MMMs with dense glass spheres as filler showed the same gas separation and gas transport behaviour as all MMMs with the aforementioned fillers.
In this work, despite optimized embedding of inorganic fillers, no improvement in the gas separation or gas transport properties of MMMs could be demonstrated. Rather, an influence of the filler quantity on the gas transport properties of MMMs was established. The changes of the MMMs relative to polysulfone stem from the consequences of embedding fillers in the matrix polymer: the embedding alters the properties of the matrix polymer, which in turn affects gas transport. Furthermore, it was documented that the resulting membrane structure depends on the filler loading, independently of the filler type; a correlation between filler quantity and altered membrane structure was found.
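For context, the permeability of an ideal, defect-free mixed-matrix membrane is commonly estimated with the Maxwell model. The sketch below uses hypothetical permeability values (arbitrary units) to show the expected trend: a fast-transporting filler raises the prediction, while an impermeable filler such as dense glass spheres lowers it.

```python
def maxwell_mmm(p_matrix, p_filler, phi):
    """Maxwell model for the effective permeability of a mixed-matrix
    membrane with filler volume fraction phi (valid for dilute,
    well-dispersed, defect-free loadings)."""
    num = p_filler + 2 * p_matrix - 2 * phi * (p_matrix - p_filler)
    den = p_filler + 2 * p_matrix + phi * (p_matrix - p_filler)
    return p_matrix * num / den

# hypothetical permeabilities (arbitrary units):
# a fast filler raises the predicted permeability ...
p_up = maxwell_mmm(1.0, 10.0, 0.2)
# ... while an impermeable filler (dense glass spheres) lowers it
p_down = maxwell_mmm(1.0, 0.0, 0.2)
```

Deviations of measured MMM permeabilities from such idealized predictions are one way to diagnose interface defects or altered matrix properties.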
Eukaryotic cells can be regarded as complex microreactors capable of performing various biochemical reactions in parallel which are necessary to sustain life. An essential prerequisite for these complex metabolic reactions to occur is the evolution of lipid membrane-bound organelles enabling compartmentalization of reactions and biomolecules. This allows for a spatiotemporal control over the metabolic reactions within the cellular system. Intracellular organization arising due to compartmentalization is a key feature of all living cells and has inspired synthetic biologists to engineer such systems with bottom-up approaches.
Artificial cells provide an ideal platform to isolate and study specific reactions without interference from the complex network of biomolecules present in biological cells. To mimic the hierarchical architecture of eukaryotic cells, multi-compartment assemblies with nested liposomal structures, also referred to as multi-vesicular vesicles (MVVs), have been widely adopted. Most of the previously reported multi-compartment systems adopt bulk methodologies which suffer from low yield and poor control over size. Microfluidic strategies help circumvent these issues and facilitate a high-throughput and robust technique to assemble MVVs of uniform size distribution.
In this thesis, firstly, the bulk methodologies are explored to build MVVs and implement a synthetic signalling cascade. Next, a polydimethylsiloxane (PDMS)-based microfluidic platform is introduced to build MVVs, and the significance of PEGylated lipids for the successful encapsulation of inner compartments to generate stable multi-compartment systems is highlighted.
Next, a novel two-inlet channel PDMS-based microfluidic device to create MVVs encompassing a three-step enzymatic reaction cascade is presented. A directed reaction pathway comprising the enzymes α-glucosidase (α-Glc), glucose oxidase (GOx), and horseradish peroxidase (HRP), spanning three compartments via reconstitution of size-selective membrane proteins, is described. Furthermore, owing to the monodispersity of our MVVs achieved by the microfluidic strategies, this platform is employed to study the effect of compartmentalization on reaction kinetics.
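As a rough illustration of such a directed three-step cascade, a sequential Michaelis-Menten model can be integrated numerically. The rate constants below are hypothetical placeholders, and the sketch deliberately ignores compartment membranes and transport limitations.

```python
import numpy as np

def cascade(tmax=200.0, dt=0.01,
            vmax=(1.0, 0.8, 1.2), km=(0.5, 0.4, 0.3)):
    """Euler integration of a three-step enzymatic cascade
    S0 -> S1 -> S2 -> P with Michaelis-Menten kinetics per step.
    Returns the final concentrations [S0, S1, S2, P]."""
    s = np.array([1.0, 0.0, 0.0, 0.0])  # start with substrate only
    for _ in range(int(tmax / dt)):
        r = [vmax[i] * s[i] / (km[i] + s[i]) for i in range(3)]
        s = s + dt * np.array([-r[0], r[0] - r[1], r[1] - r[2], r[2]])
    return s
```

Mass is conserved by construction, and over long times essentially all substrate ends up as product; in a compartmentalized system, membrane transport between the steps would add further rate limitations.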
Further integration of a cell-free expression module into the MVVs would allow for gene-mediated signal transduction within artificial eukaryotic cells. Therefore, the chemically inducible cell-free expression of the membrane protein alpha-hemolysin and its subsequent reconstitution into liposomes is carried out.
In conclusion, the present thesis aims to build artificial eukaryotic cells that achieve size-selective chemical communication and also show potential for applications as microreactors and as vehicles for drug delivery.
Iron-sulfur clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication, and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, which is a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein to produce cyclic pyranopterin monophosphate (cPMP) from 5’GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, where oxygen is depleted; under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, phylogenomic and genetic studies have identified these proteins as A-type carrier proteins (ATCs). So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role of providing [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, whose expression is controlled by the transcriptional regulator fumarate and nitrate reduction (FNR). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of Fe-S clusters into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are produced sufficiently under the conditions tested. This observation suggests that ErpA regulates nitrate reductase expression indirectly by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit and multi-cofactor-containing enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that contains flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster, and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into PaoABC. The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters into PaoABC, respectively, under aerobic respiratory conditions.
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids give the ionogels their function. The functionality of the respective gels, and thus the transfer of the ionic liquids' properties to the ionogels, was verified and confirmed in this work by numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwovens were produced, using film casting and electrospinning as the fabrication methods; each yields a model system. The work is accordingly divided into the two topics "electrically semiconducting ionogel films" and "antimicrobially active ionogel nonwovens". Using triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. The flexible, transparent films could become the focus of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid yielded a homogeneous ionogel nonwoven, a model for transferring the antimicrobial properties of ionic liquids to porous structures for filtration. It is also the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous possible applications; this work extends their spectrum by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work adds to the family of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids.
Angular momentum is a particularly sensitive probe of stellar evolution because it changes significantly over the main sequence life of a star. In this thesis, I focus on young main sequence stars, some of which feature a rapid evolution in their rotation rates. This transition from fast to slow rotation is inadequately explored observationally; this work aims to provide insights into its properties and time scales, but also investigates stellar rotation in young open clusters in general.
I focus on the two open clusters NGC 2516 and NGC 3532 which are ~150 Myr (zero-age main sequence age) and ~300 Myr old, respectively. From 42 d-long time series photometry obtained at the Cerro Tololo Inter-American Observatory, I determine stellar rotation periods in both clusters. With accompanying low resolution spectroscopy, I measure radial velocities and chromospheric emission for NGC 3532, the former to establish a clean membership and the latter to probe the rotation-activity connection.
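Extracting a rotation period from such time-series photometry amounts to a periodogram analysis. The following is a minimal least-squares sketch on synthetic data, not the pipeline used in the thesis; for real, irregularly sampled survey photometry a tool such as astropy's LombScargle would typically be used.

```python
import numpy as np

def rotation_period(t, flux, periods):
    """Least-squares periodogram: fit a single sinusoid at each trial
    period and return the period explaining the most variance."""
    flux = flux - flux.mean()
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        w = 2.0 * np.pi / p
        # design matrix for one sinusoid at this trial frequency
        A = np.column_stack([np.sin(w * t), np.cos(w * t)])
        coef = np.linalg.lstsq(A, flux, rcond=None)[0]
        power[i] = ((A @ coef) ** 2).sum()  # explained sum of squares
    return periods[int(np.argmax(power))]
```

On a synthetic 42 d-long, irregularly sampled light curve of a spotted star with a 5 d rotation period, this recovers the injected period from the grid of trial periods.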
The rotation period distribution derived for NGC 2516 is identical to that of four other coeval open clusters, including the Pleiades, which shows the universality of stellar rotation at the zero-age main sequence. Among the similarities (with the Pleiades), the "extended slow rotator sequence" is a new, universal, yet sparse feature in the colour-period diagrams of open clusters. From a membership study, I find NGC 3532 to be one of the richest nearby open clusters, with 660 confirmed radial velocity members, and to be slightly sub-solar in metallicity. The stellar rotation periods for NGC 3532 are the first published for a 300 Myr-old open cluster, a key age for understanding the transition from fast to slow rotation. The fast rotators at this age have evolved significantly beyond what is observed in NGC 2516, which makes it possible to estimate the spin-down timescale and to explore the issues that angular momentum models have in describing this transition. The transitional sequence is also clearly identified in a colour-activity diagram of stars in NGC 3532. The synergy between the chromospheric activity measurements and the rotation periods makes it possible to understand the colour-activity-rotation connection for NGC 3532 in unprecedented detail and to estimate additional rotation periods for members of NGC 3532, including stars on the "extended slow rotator sequence".
In conclusion, this thesis probes the transition from fast to slow rotation but also has more general implications for the angular momentum evolution of young open clusters.
Monoclonal antibodies are essential tools in modern laboratory analytics as well as in medical therapy and diagnostics. Producing monoclonal antibodies is a time-consuming and labour-intensive process. Conventional methods rely on immunizing laboratory animals, which can take several months. The antibody-producing B lymphocytes, or their antibody genes, are then isolated and examined in screening procedures to identify suitable binders.
Transferring the humoral immune response to an in vitro environment shortens the process and avoids the need for in vivo immunization. Reproducing the complex interplay of all the immune cells involved in vitro, however, proves difficult. The focus of this work was therefore the realization of a simplified in vitro immunization concentrating on the protagonists of antibody production: the B lymphocytes. In addition, a permanent cell line was to be established that could be used for antibody production and would replace the use of primary cells.
In the first part of the work, a protocol for the in vitro immunization of murine B lymphocytes was established. In preliminary experiments, the optimal conditions for the antigen-specific activation of purified splenic B lymphocytes from non-immunized mice were determined. For this purpose, the influence of various stimuli on the production of unspecific and specific antibodies was examined. A combination of the model antigen VP1 (hamster polyomavirus coat protein 1), an anti-CD40 antibody, interleukin 4 (IL-4), and lipopolysaccharide (LPS) or IL-7 demonstrably induced an antigen-specific antibody response in vitro. Rapid proliferation and the expression of characteristic activation markers on the cell surface were detected as indicators of successful B-lymphocyte activation following the in vitro stimulation. In a ten-day time series, the comparatively highest concentration of antigen-specific IgG antibodies in the culture supernatant of the stimulated cells was detected on day ten of the in vitro immunization.
As a next step, a permanent cell line was to be produced that could be used for the previously established in vitro immunization instead of primary B lymphocytes. To this end, retroviral vectors were generated that were intended to manipulate the proliferation behaviour of murine B lymphocytes, or their precursor cells, by transferring various oncogenes. Retroviruses with doxycycline-inducible expression cassettes carrying the oncogenes c-myc, Bcl2, BclxL, and the fusion gene NUP98-HOXB4 were generated. A test cell line was successfully transduced with these retroviruses, and the functionality of the viruses was demonstrated in various assays: the transferred genes were detected in the test cell line at the DNA level, or overexpression of the corresponding proteins was detected by Western blot. Finally, B lymphocytes and their immature precursor cells were transduced with the generated retroviruses and co-cultured with bone-marrow-like stromal cells. So far, no cell line or long-term culture could be established from any of the transduced preparations.
In the last part of the work, the effectiveness and transferability of the previously established protocol for the in vitro immunization of murine B lymphocytes was demonstrated using various antigens. Specific IgG responses against VP1, Legionella pneumophila, and the protein Mip, a peptide of which had been integrated into the VP1 used for immunization, were induced in vitro. The stimulated B lymphocytes were transformed into permanent antibody-producing cell lines by fusion with myeloma cells.
Several hybridoma cell lines were generated that produce specific IgG antibodies against VP1 or Mip. The generated antibodies bound the corresponding antigen specifically both in Western blot and in ELISA (enzyme-linked immunosorbent assay).
The in vitro immunization established here offers an effective alternative to existing methods for producing specific antibodies. It replaces the immunization of laboratory animals and considerably reduces the time required. Combined with hybridoma technology, the in vitro immunized cells can, as demonstrated here, be used to generate hybridoma cell lines and to produce monoclonal antibodies. To replace the use of laboratory animals in this method with an adequate permanent cell line, the genetic modification of B lymphocytes and immature haematopoietic cells must be optimized. The results provide a basis for a universal, species-independent methodology for antibody production and for the establishment of an ideal, animal-free in vitro immunization.
The controlled dosage of substances from a device to its environment, such as a tissue or an organ in medical applications, or a reactor, room, machine, or ecosystem in technical ones, should ideally match the requirements of the application, e.g. the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and by the contributions that geometrical design may make in realizing such features. The goals of this work included the design, fabrication, characterization, and experimental proof-of-concept of a geometry-assisted triggerable dosing effect (a) with a sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular and morphological levels and, with particular attention, on the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
The eastern flank of the Central Andes in northwestern Argentina is characterized by mountain ranges bounded by reverse faults that form an active thick-skinned orogen with a non-systematic spatio-temporal pattern of contractional deformation. This pattern is reflected both in the dispersion of crustal seismic activity and in the location of Quaternary structures across the Eastern Cordillera and the Santa Bárbara System, configuring a diffuse orogenic front more than 200 km wide. The study of neotectonic activity in this region has gained relevance in recent years through the application of diverse tools, including tectonic geomorphology techniques, remote sensing, geodesy, and conventional field studies. Lacustrine deposits have proven, in numerous examples, to be excellent markers of tectonic activity, given the original horizontality of their layers and their susceptibility to environmental change. For this reason, this work analyses the lacustrine deposits exposed in the central sector of the Calchaquí Valleys (Cafayate region) in order to understand how Quaternary deformation is accommodated in one of the intermontane basins of the active orogenic wedge.
The strike of the Quaternary structures in the study area is subparallel to that of the faults that exhume the surrounding mountain ranges. From the stratigraphic, morphotectonic, and structural study of the lacustrine deposits, a minimum of five deformation episodes affecting the Quaternary stratigraphic column were identified. Integrating balanced structural cross-sections with ages obtained in this work and compiled from the literature, minimum and maximum shortening rates of 0.19-2.80 and 0.21-4.47 mm/a, respectively, were calculated for the middle-late Pleistocene. To compare these results with measurements of active tectonics at the regional scale, data from geodetic stations in northwestern Argentina were compiled and a horizontal velocity profile was constructed. The profile shows a gradual eastward decrease of the velocity vectors, indicating internal activity of the orogen, consistent with the records of seismic activity and the regional compilation of Quaternary structures.
In addition to the neotectonic characterization of this sector of the Eastern Cordillera, the stratigraphic analysis of the lacustrine deposits has refined the geological evolution of the central sector of the Calchaquí Valleys during the Quaternary. At least seven lake flooding episodes related to the disconnection of the fluvial system from its base level were identified, giving rise to successive aggradation and erosion events. The maximum elevations reached by the palaeolakes, together with a previously published hydrological model for this region, also allowed a comparison with the regional palaeoclimatic record.
The results of this thesis represent a significant contribution to the knowledge of the tectonic and stratigraphic evolution of the central sector of the Calchaquí Valleys during the Quaternary. Moreover, their integration at the regional scale contributes to a better understanding of the deformation dynamics in the thick-skinned orogenic wedge of northwestern Argentina.
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This remains true if we pose additional delayable requirements, i.e., requirements that remain valid even if the hypothesis output is delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property for describing and collectively investigating a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to change its last hypothesis only when it is inconsistent with the current training data. Such learning behaviour is called conservative.
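The conservative behaviour described above, revising the hypothesis only when it misclassifies some datum seen so far, can be illustrated with a toy incremental learner for finite sets of naturals. This is a simplified sketch, not the formal model of the thesis.

```python
def conservative_learner(labeled_stream):
    """Toy conservative incremental learner for finite sets of naturals
    from binary labelled data: the hypothesis is revised only when it
    is inconsistent with the data observed so far."""
    seen = []           # all (datum, label) pairs observed so far
    hypothesis = set()  # current guess for the target concept
    history = []
    for x, label in labeled_stream:
        seen.append((x, label))
        if any((v in hypothesis) != lab for v, lab in seen):
            # inconsistent -> rebuild from the positive data
            hypothesis = {v for v, lab in seen if lab}
        history.append(frozenset(hypothesis))
    return history
```

On the stream (1, True), (2, False), (3, True), the learner keeps its hypothesis {1} on the consistent negative datum and only revises on the inconsistent positive one.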
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria. Here we allow the concept hypothesized by the learning algorithm to differ from the concept to be learned in an increasing finite number of errors. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis space types in the nearby relevant literature and provide the complete map.
In the second part, we model more efficient learning algorithms. These update their hypothesis based on the current datum, without direct recourse to past training data. We focus on iterative (hypothesis based) and BMS (state based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum to infer the new hypothesis.
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm but not in a conservative way had already been observed, showing that conservativeness is restrictive. An additional requirement arising from cognitive science research is non-U-shapedness: a U-shape occurs when the learning algorithm diverges from a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
In order to compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite amount of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs other parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. To prove the equivalence of the syntactic requirements, we refer to witness-based learning processes, in which every change of the hypothesis is justified by a witness from the training data that is later classified correctly. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by the investigation of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed-point theorems).
Weight stigma, and internalized weight stigma in particular, are associated with negative consequences for the physical and mental health of children and adolescents. Since the evidence in this age range is still insufficient, the aim of this dissertation was to examine contributing factors and consequences of weight-related stigmatization and internalized weight stigma in children and adolescents. The analyses were based on two large samples recruited at schools within the prospective PIER study. The first publication refers to a sample of children and adolescents aged 9 to 19 years (49.2% female) and examined the prospective bidirectional relationship between experienced weight stigmatization and weight status using a latent structural equation model across three measurement points. The other two publications refer to a sample of children aged 6 to 11 years (51.1% female). The second publication used hierarchical regression to analyse which intrapersonal risk factors prospectively predict internalized weight stigma. The third publication used ROC curves to examine the level at which internalized weight stigma is accompanied by an increased risk of psychosocial problems and disordered eating. The first publication showed that a higher weight status is associated with greater subsequent weight stigmatization and that, conversely, weight stigmatization also predicts later weight status. The second publication identified weight status, weight-related teasing, depressive symptoms, body dissatisfaction, importance of one's own figure, female gender, and lower parental educational attainment as predictors of internalized weight stigma.
The third publication showed that even a low level of internalized weight stigma is accompanied by an increased risk of disordered eating and is associated with further psychosocial problems. Overall, both experienced and internalized weight stigma proved to be relevant constructs in children and adolescents across all weight groups, forming a complex interplay over the course of development; including bidirectional mechanisms proved essential. This dissertation provides first starting points for designing prevention and intervention measures to avert unfavourable developmental trajectories resulting from weight stigmatization and internalized weight stigma.
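An ROC-based determination of a risk cutoff of the kind used in the analyses above can be sketched with Youden's J statistic. The data in the usage example are toy values, not the study's actual scores.

```python
import numpy as np

def youden_cutoff(scores, outcome):
    """ROC-style cutoff selection: return the score threshold that
    maximises Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c            # classify as at-risk above cutoff
        sens = (pred & outcome).sum() / outcome.sum()
        spec = (~pred & ~outcome).sum() / (~outcome).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```

For perfectly separated toy data, e.g. scores 1-6 with outcomes 0, 0, 0, 1, 1, 1, the method returns the cutoff 4 with J = 1.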
At the centre of this dissertation are the rediscovery, analysis, and educational-historical contextualization of the progressive-education school project of Eugenie SCHWARZWALD (1872-1940) in Vienna in the first third of the 20th century. The genesis of the school's development reveals the progressive-education entanglements of a school project of supra-regional significance, which decisively shaped its profile and the content-related as well as didactic-methodological design of school, school life, and teaching. The introduction (Chap. 1) sets out the research interest, the central questions, the source material evaluated, and the methodological approach of the study as a historical-critical analysis of the sources consulted. The topic is developed systematically along three central chapters. The analysis focuses on the societal and educational-historical placement of the school project within the world of ideas and socio-structural reality of Vienna (Chap. 2), and on biographical approaches to the school's founder as well as the founding, genesis, formation, and termination of the school project, its structural and pedagogical characteristics, and its progressive-education features in the first third of the 20th century (Chap. 3). At the same time, exemplary connections to contemporary progressive-education currents are made visible, as is the associated impulse the SCHWARZWALD school project gave to the school system of Vienna and Austria. One focus of the study is the analysis of the school's manifold connections to the artistic avant-garde (Chap. 4). The thesis-style summary (Chap. 5) acknowledges SCHWARZWALD's achievements for the Austrian school and education system, including higher education for girls.
Finally, the study asks about the reach of the progressive-education impulses associated with the school project and systematizes conditions for the success and failure of the school reform process. With a view to transfer considerations, this makes the study relevant to current questions of school development.
Kenya and Uganda are amongst the countries that, for different historical, political, and economic reasons, have embarked on law reform processes with regard to citizenship. In 2009, Uganda amended its laws to allow citizens to hold dual citizenship, while Kenya's 2010 constitution similarly introduced it by lifting the general prohibition on dual citizenship, although state officers, including the President and Deputy President, remained barred from being dual nationals (Manby, 2018).
Against this background, I analysed why these countries, which had previously maintained stringent laws and policies against dual citizenship, shifted position within such close temporal proximity. Given their geo-political roles, location, and regional, continental, and international obligations, I conducted a comparative study of the processes, actors, impact, and effects. The period researched spans 2000 to 2010, from the emergence of the law reform debates through the implementation of the reform processes, covering the actors involved and the implications.
According to Rubenstein (2000, p. 520), citizenship is observed in terms of “political institutions” that are free to act according to the will of, in the interests of, or with authority over, their citizenry. Institutions are emergent national or international, higher-order factors above the individual level, embodying the interests and political involvement of their actors without requiring recurring collective mobilisation or imposing intervention to realise these regularities. Transnational institutions are organisations with authority beyond single governments. Given their international obligations, I analysed the role of the UN, AU, and EAC in influencing the citizenship debates and reforms in Kenya and Uganda. Further, non-state actors, such as civil society, were considered.
Veblen (1899) describes institutions as a set of settled habits of thought common to the generality of men. Institutions function only because the rules involved are rooted in shared habits of thought and behaviour, although there is some ambiguity in the definition of the term “habit”. Whereas abstractions and definitions depend on different analytical procedures, institutions restrain some forms of action and facilitate others. Transnational institutions both restrict and aid behaviour; the famous “invisible hand” is nothing other than transnational institutions. Transnational theories, as applied to politics, posit two distinct forms of influence over policy and political action (Veblen, 1899). This influence and the durability of institutions are “a function of the degree to which they are instilled in political actors at the individual or organisational level, and the extent to which they thereby ‘tie up’ material resources and networks”. Against this background, transnational networks with connections to Kenya and Uganda were considered, alongside the diaspora from these two countries and their role in the debate and reforms on dual citizenship.
Sterian (2013, p. 310) notes that nation-states may be vulnerable to institutional influence, and this vulnerability can pose a threat to a nation's autonomy, political legitimacy, and democratic public law. Transnational institutions sometimes “collide with the sovereignty of the state when they create new structures for regulating cross-border relationships”. Griffin (2003), however, disputes that transnational institutional behaviour is premised on the principles of neutrality, impartiality, and independence. Transnational institutions have become the main target of lobby groups and civil society, consequently leading to excessive politicisation. Kenya and Uganda are member states not only of the broader African Union but also of the EAC, which has adopted elements of socio-economic uniformity. Therefore, in the comparative analysis, I examine the role of the East African Community and its partners in the dual citizenship debate in the two countries.
I argue in the analysis that it is important not only to be a citizen within Kenya or Uganda but also to discover how the issue of dual citizenship is legally interpreted within the borders of each individual nation-state. In light of this discussion, I agree with Mamdani's definition of the nation-state as a unique form of power introduced in Africa by colonial powers between 1880 and 1940, whose outcomes can be viewed as “debris of a modernist postcolonial project, an attempt to create a centralised modern state as the bearer of Westphalia sovereignty against the background of indirect rule” (Mamdani, 1996, p. xxii). I argue that this project has shaped the citizenship debate through the legal framework adopted under postcolonialism, built partly on a class system, ethnic definitions, and political affiliation. I nevertheless insist that the nation-state should remain a vital custodian of the citizenship debate, without in any way denying the individual the rights to identity and belonging. The question that then arises is: which type of nation-state? Mamdani (1996, p. 298) asserts that the core agenda African states faced at independence was threefold: deracialising civil society, detribalising the native authority, and developing the economy in the context of unequal international relations. Post-independence governments grappled with overcoming the citizen-subject dichotomy by either preserving the customary in the name of “defending tradition against alien encroachment or abolishing it in the name of overcoming backwardness and embracing triumphant modernism”. Kenya and Uganda are among the countries that have reformed their citizenship laws, attesting to Mamdani's latter assertion.
Mamdani's (1996) assertions on how African states continue to deal with the issue of citizenship, either defending tradition against subjects or abolishing it in the name of overcoming backwardness and accepting triumphant modernism, are based on colonial legal theory and the citizen-subject dichotomy within African communities. To place this in a wider legal-theoretical perspective, I argue that these assertions point to the historical divergence between the republican model of citizenship, which emphasises political agency as envisioned in Rousseau's social contract, and the liberal model of citizenship, which stresses legal status and protection (Pocock, 1995).
I therefore compare the contexts of Kenya and Uganda, the actors, and the implications of transnationalism and post-nationalism for the citizens, the nation-state, and the region. I conclude by highlighting the shortcomings of the law reforms that allowed dual citizenship, demonstrating an urgent need to address issues such as child statelessness, gendered nationality laws, and the rights of dual citizens. Ethnicity, a weak nation-state, and inconsistent citizenship law reforms are closely linked to the historical trajectories of both countries. I further indicate the economic and political incentives that influenced the reforms.
Keywords: Citizenship, dual citizenship, nation state, republicanism, liberalism, transnationalism, post-nationalism
Anthropogenic activities such as continuous landscape changes threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and continuously contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay of complex trophic communities and landscape structure in particular is not yet fully understood.
In this dissertation, a metacommunity approach is used, based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows the assessment of the effect of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as the analysis of the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change of habitat availability temporarily alters the landscape structure, as in the seasonal drying of a water body.
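The coupling of local population dynamics and regional dispersal described above can be pictured with a minimal two-patch sketch (all parameter values here are illustrative assumptions, not taken from the dissertation; the actual model additionally includes trophic interactions and explicit landscape structure):

```python
def simulate(steps=500, r=0.5, K=1.0, d=0.05):
    """Two habitat patches with local logistic growth coupled by dispersal.

    r: local growth rate, K: carrying capacity, d: dispersing fraction.
    Illustrative values only.
    """
    n1, n2 = 0.1, 0.9  # different initial densities per patch
    for _ in range(steps):
        # local scale: discrete logistic population growth
        g1 = n1 + r * n1 * (1.0 - n1 / K)
        g2 = n2 + r * n2 * (1.0 - n2 / K)
        # regional scale: a fraction d of each population disperses
        n1 = (1.0 - d) * g1 + d * g2
        n2 = (1.0 - d) * g2 + d * g1
    return n1, n2

n1, n2 = simulate()
print(n1, n2)  # both patches settle near the carrying capacity K
```

Dispersal couples the patches, which is exactly what makes landscape structure (who is connected to whom, and how strongly) matter for synchrony and stability in the full model.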
On the local scale, the model results suggest that large-bodied animal species, such as predator species at high trophic positions, are more prone to extinction in a state of large patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with a lower body mass lead to increased energy limitation for species at higher trophic levels and serve as an explanation for a predominant loss of these species. This effect is particularly pronounced for food webs in which species are more sensitive to increased metabolic losses through dispersal and a change in landscape structure.
In addition to the impact of species composition in a food web on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which is beneficial for the stability of population dynamics as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, the mean patch isolation and the formation of local clusters of two patches, promote an increase in $\beta$-diversity. Yet the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. The periodic disturbance has a synchronizing effect: it can even override emerging asynchronous dynamics in a state of large patch isolation, and it unifies trends in synchronization between different species communities.
In summary, the findings underline a large local impact of species composition and interactions on the local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns but a stronger impact on regional diversity patterns. At the level of population dynamics, in contrast, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to the understanding of the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of our ecosystems and the need for further analysis for a better understanding of our surrounding environment and a more targeted conservation of biodiversity.
Over the last decades, the rate of near-surface warming in the Arctic has been at least double that elsewhere on our planet (Arctic amplification). However, the relative contribution of different feedback processes to Arctic amplification is a topic of ongoing research, including the role of aerosols and clouds. Lidar systems are well-suited for the investigation of aerosols and optically thin clouds as they provide vertically resolved information on fine temporal scales. Global aerosol models fail to converge on the sign of the Arctic aerosol radiative effect (ARE). In the first part of this work, the optical and microphysical properties of Arctic aerosol were characterized at the case study level in order to assess the short-wave (SW) ARE. A long-range transport episode was investigated first. Geometrically similar aerosol layers were captured over three locations. Although the aerosol size distribution differed between Fram Strait (bi-modal) and Ny-Ålesund (fine mono-modal), the atmospheric column ARE was similar, which was related to the domination of accumulation mode aerosol. Over both locations, top-of-the-atmosphere (TOA) warming was accompanied by surface cooling.
Subsequently, the sensitivity of the ARE was investigated with respect to different aerosol and spring-time ambient conditions. A 10% change in the single-scattering albedo (SSA) induced higher ARE perturbations than a 30% change in the aerosol extinction coefficient. With respect to ambient conditions, the TOA ARE was more sensitive to solar elevation changes than the surface ARE. Over dark surfaces the ARE profile was exclusively negative, while over bright surfaces a shift from negative to positive occurred above the aerosol layers. Consequently, the sign of the ARE can be highly sensitive in spring, since this season is characterized by transitional surface albedo conditions.
As the inversion of the aerosol microphysics is an ill-posed problem, the inferred aerosol size distribution of a low-tropospheric event was compared to the in-situ measured distribution. Both techniques revealed a bi-modal distribution, with good agreement in the total volume concentration. In terms of SSA, however, a disagreement was found, with the lidar inversion indicating highly scattering particles and the in-situ measurements pointing to absorbing particles. The discrepancies could stem from assumptions in the inversion (e.g. a wavelength-independent refractive index) and from errors in the conversion of the in-situ measured light attenuation into absorption. Another source of discrepancy might be an incomplete capture of fine particles by the in-situ sensors. The disagreement in the most critical parameter for the Arctic ARE necessitates further exploration in the framework of aerosol closure experiments. Care must be taken in ARE modelling studies, which may use either the in-situ or the lidar-derived SSA as input.
Reliable characterization of cirrus geometrical and optical properties is necessary for improving estimates of their radiative effect. In this respect, the detection of sub-visible cirrus is of special importance: the total cloud radiative effect (CRE) can be negatively biased if only the optically thin and opaque cirrus contributions are considered. To this end, a cirrus retrieval scheme was developed, aiming at increased sensitivity to thin clouds. The cirrus detection was based on the wavelet covariance transform (WCT) method, extended by dynamic thresholds. The dynamic WCT exhibited high sensitivity to faint and geometrically thin cirrus layers (less than 200 m) that were partly or completely undetected by the existing static method. The optical characterization scheme extended the Klett–Fernald retrieval by an iterative lidar ratio (LR) determination (constrained Klett). The iterative process was constrained by a reference value indicating the aerosol concentration beneath the cirrus cloud. Contrary to existing approaches, the aerosol-free assumption was not adopted; instead, the aerosol conditions were approximated by an initial guess. The inherent uncertainties of the constrained Klett were higher for optically thinner cirrus, but an overall good agreement was found with two established retrievals. Additionally, existing approaches that rely on aerosol-free assumptions showed increased accuracy when the proposed reference value was adopted. The constrained Klett reliably retrieved the optical properties in all cirrus regimes, including upper sub-visible cirrus with a cloud optical depth (COD) down to 0.02.
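The layer-detection idea behind the wavelet covariance transform can be sketched on a synthetic profile (a Haar-wavelet toy example with simple peak picking; the actual scheme applies dynamic thresholds to real lidar signals):

```python
def wct(profile, a, b):
    """Haar wavelet covariance transform at dilation a and position b."""
    upper = sum(profile[b:b + a // 2])
    lower = sum(profile[b - a // 2:b])
    return (upper - lower) / a

# synthetic backscatter profile: background of 1.0 with a cloud layer
profile = [1.0] * 1000
for i in range(400, 450):   # enhanced backscatter between bins 400 and 450
    profile[i] = 6.0

a = 20  # dilation, in range bins
values = [wct(profile, a, b) for b in range(a, len(profile) - a)]
base = values.index(max(values)) + a  # positive WCT peak -> layer base
top = values.index(min(values)) + a   # negative WCT peak -> layer top
print(base, top)
```

A sharp increase in backscatter produces a positive transform peak (cloud base) and the decrease above produces a negative one (cloud top); the sensitivity of the real retrieval comes from how the detection thresholds on these peaks are chosen.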
Cirrus is the only cloud type capable of inducing either TOA cooling or heating at daytime. Over the Arctic, however, the properties and CRE of cirrus are under-explored. In the final part of this work, long-term cirrus geometrical and optical properties were investigated for the first time over an Arctic site (Ny-Ålesund), employing the newly developed retrieval scheme. Cirrus layers over Ny-Ålesund seemed to be more absorbing in the visible spectral region than at lower latitudes and to comprise relatively more spherical ice particles. Such meridional differences could be related to discrepancies in absolute humidity and ice nucleation mechanisms. The COD tended to decline for less spherical and smaller ice particles, probably due to reduced water vapor deposition on the particle surface. The cirrus optical properties presented a weak dependence on ambient temperature and wind conditions.
Over the 10 years of the analysis, no clear temporal trend was found and the seasonal cycle was not pronounced. However, winter cirrus appeared under colder conditions and stronger winds; moreover, they were optically thicker, less absorbing, and consisted of relatively more spherical ice particles. A positive net CRE was revealed for a broad range of representative cloud properties and ambient conditions. Only for high COD (above 10) and over tundra was a negative net CRE estimated, and this did not hold true over snow/ice surfaces. Consequently, the COD in combination with the surface albedo seems to play the most critical role in determining the sign of the CRE over the high European Arctic.
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability, the ability of different types of systems to work together smoothly, can help to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is subdivided into syntactic and semantic interoperability. Semantic data describes the meaning of data and a common understanding of vocabulary, e.g. with the help of dictionaries, taxonomies, and ontologies. To achieve full interoperability, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, the commercial solutions produce a vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing. Its advantage is that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency and to be robust against lossy or failed Internet connections.
We see the management of IoT devices from the network configuration management perspective. This thesis proposes a framework for the network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. MYNO is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, while the MQTT protocol is the de-facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These device descriptions specify the device capabilities; they are based on the oneM2M Base Ontology and formalized using Semantic Web standards.
The novel approach is to use an ontology-based device description directly on a constrained device, in combination with the MQTT protocol. The bridge was extended in order to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable, and reusable.
The concept of a Virtual Device was introduced and implemented, based on the semantic device descriptions. A Virtual Device aggregates the capabilities of all devices in the edge network and therefore contributes to scalability: it becomes possible to control all devices via a single RPC call.
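The aggregation performed by the Virtual Device can be pictured roughly as follows (a hypothetical, simplified dictionary layout; the real framework operates on oneM2M-based ontology descriptions, not on these field names):

```python
def build_virtual_device(descriptions):
    """Aggregate the capabilities of all edge devices into one virtual device.

    `descriptions` is a list of simplified device descriptions; the field
    names used here are illustrative, not the oneM2M ontology terms.
    """
    capabilities = []
    for desc in descriptions:
        for cap in desc["capabilities"]:
            # remember which physical device provides each capability
            capabilities.append({"device": desc["name"], **cap})
    return {"name": "virtual-device", "capabilities": capabilities}

devices = [
    {"name": "node1", "capabilities": [{"kind": "sensor", "measures": "temperature"}]},
    {"name": "node2", "capabilities": [{"kind": "actuator", "controls": "valve"}]},
]
vd = build_virtual_device(devices)
print(len(vd["capabilities"]))  # 2
```

Because the aggregate keeps a pointer back to the providing device, a single call against the virtual device can be fanned out to the right physical devices.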
The model-driven NETCONF Web-Client is generated automatically from a YANG model, which in turn is generated by the bridge from the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls, and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios, as well as event configuration and triggering.
The semantic approach results in increased memory overhead. Therefore, we evaluated CBOR and RDF HDT for optimization of ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and RDF HDT is a promising candidate but is still a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on constrained CC2538dk devices over 6LoWPAN. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to bring firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose adding a slicing feature for better support of constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
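The proposed slicing feature amounts to chunking the firmware image to the device's maximum packet size and numbering the slices so the device can reassemble them (a sketch of the chunking logic only, not of actual MQTT broker behaviour; packet sizes are illustrative):

```python
def slice_firmware(image: bytes, max_packet: int):
    """Split a firmware image into numbered slices of at most max_packet bytes."""
    return [
        (seq, image[i:i + max_packet])
        for seq, i in enumerate(range(0, len(image), max_packet))
    ]

def reassemble(slices):
    """Reassemble the image on the device side, ordered by sequence number."""
    return b"".join(chunk for _, chunk in sorted(slices))

firmware = bytes(range(256)) * 10        # 2560-byte dummy image
slices = slice_firmware(firmware, 128)   # e.g. a 128-byte packet limit
assert reassemble(slices) == firmware
print(len(slices))  # 20
```

The sequence numbers are what make the transfer robust: the device can detect missing slices and the broker can retransmit slice by slice instead of resending the whole image.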
For the performance and scalability evaluation of the MYNO framework, we set up a High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that the processing of ontology-based descriptions on a Raspberry Pi 3B with RDFLib is a challenging task in terms of computational power. Nevertheless, it is feasible, because it must be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
Summarizing, with the MYNO framework we could show that the semantic approach on constrained devices is feasible in the IoT.
Anthropogenic climate change alters the hydrological cycle. While certain areas experience more intense precipitation events, others will experience droughts and increased evaporation, affecting water storage in long-term reservoirs, groundwater, snow, and glaciers. High elevation environments are especially vulnerable to climate change, which will impact the water supply for people living downstream. The Himalaya has been identified as a particularly vulnerable system, with nearly one billion people depending on the runoff in this system as their main water resource. As such, a more refined understanding of spatial and temporal changes in the water cycle in high altitude systems is essential to assess variations in water budgets under different climate change scenarios.
Anthropogenic influences are not the only driver, however: the hydrological cycle also changes over geological timescales, connected to the interplay between orogenic uplift and climate change, although the temporal evolution and causes of such changes are often difficult to constrain. Using proxies that reflect hydrological changes with an increase in elevation, we can unravel the history of orogenic uplift in mountain ranges and its effect on the climate.
In this thesis, stable isotope ratios (expressed as δ2H and δ18O values) of meteoric waters and organic material are combined as tracers of atmospheric and hydrologic processes with remote sensing products to better understand water sources in the Himalayas. In addition, the record of modern climatological conditions based on the compound specific stable isotopes of leaf waxes (δ2Hwax) and brGDGTs (branched Glycerol dialkyl glycerol tetraethers) in modern soils in four Himalayan river catchments was assessed as proxies of the paleoclimate and (paleo-) elevation. Ultimately, hydrological variations over geological timescales were examined using δ13C and δ18O values of soil carbonates and bulk organic matter originating from sedimentological sections from the pre-Siwalik and Siwalik groups to track the response of vegetation and monsoon intensity and seasonality on a timescale of 20 Myr.
I find that Rayleigh distillation, with an Indian Summer Monsoon (ISM) moisture source, mainly controls the isotopic composition of surface waters in the studied Himalayan catchments. An increase in d-excess in spring, verified by remote sensing data products, shows the significant impact of runoff from snow-covered and glaciated areas on the surface water isotopic values in the time series.
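Rayleigh distillation predicts the progressive isotopic depletion of the remaining vapour as moisture rains out along a transport path; in standard delta notation (the relation itself is textbook Rayleigh theory, but the numerical values below are illustrative, not results from this work):

```python
def rayleigh_delta(delta0, f, alpha):
    """Delta value (per mil) of the remaining vapour when a fraction f remains.

    delta0: initial delta value (per mil); alpha: liquid-vapour fractionation
    factor (> 1). Standard Rayleigh relation; values used below are illustrative.
    """
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# vapour starting at -10 per mil, alpha = 1.0094 (typical d18O value near 25 C)
for f in (1.0, 0.5, 0.1):
    print(round(rayleigh_delta(-10.0, f, 1.0094), 1))
```

As f decreases (more rainout, e.g. with increasing elevation), the remaining vapour and hence subsequent precipitation become progressively more depleted, which is the altitude effect exploited throughout this thesis.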
In addition, I show that biomarker records such as brGDGTs and δ2Hwax have the potential to record (paleo-)elevation, as they correlate significantly with temperature and surface water δ2H values, respectively, as well as with elevation. When the elevations inferred from brGDGTs and from δ2Hwax were compared, large differences were found in arid sections of the elevation transects, owing to an additional effect of evapotranspiration on δ2Hwax. A combined application of these proxies can improve paleoelevation estimates, and recommendations are provided based on the results of this study.
Ultimately, I infer from the stable isotopic signatures of the two sedimentary sections in the Himalaya (east and west) that the expansion of C4 vegetation between 20 and 1 Myr was not solely dependent on atmospheric pCO2, but also on regional changes in aridity and seasonality.
This thesis shows that the stable isotope chemistry of surface waters can be applied as a tool to monitor the changing Himalayan water budget under projected increasing temperatures. The uncertainties associated with paleo-elevation reconstructions were minimized by combining organic proxies (δ2Hwax and brGDGTs) in Himalayan soils. Stable isotope ratios in bulk soil and soil carbonates traced the evolution of vegetation under the influence of the monsoon during the late Miocene, proving that these proxies can be used to record monsoon intensity, seasonality, and the response of vegetation. In conclusion, the use of organic proxies and stable isotope chemistry in the Himalayas has proven to successfully record changes in climate with increasing elevation. The combination of δ2Hwax and brGDGTs as a paired proxy provides a more refined understanding of (paleo-)elevation and the influence of climate.
The Lie group method in combination with the Magnus expansion is utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method has the ability to solve direct regular and some singular SLPs of even orders (tested up to order eight), with a mix of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently.
The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP problem can be effectively solved.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm to inverse Sturm–Liouville problems of orders n = 2 and 4 are verified successfully. It is observed that the method succeeds even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
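For orientation, the simplest regular SLP, -y'' = λy on [0, π] with Dirichlet conditions y(0) = y(π) = 0, can be solved numerically with an elementary shooting method (a baseline illustration only; this is not the Lie group/Magnus technique developed in the thesis):

```python
import math

def shoot(lam, q=lambda x: 0.0, n=1000):
    """Integrate y'' = (q(x) - lam) * y on [0, pi] with y(0)=0, y'(0)=1 (RK4)."""
    h = math.pi / n
    y, v, x = 0.0, 1.0, 0.0
    f = lambda x, y, v: (v, (q(x) - lam) * y)
    for _ in range(n):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y  # boundary mismatch at x = pi; zero at an eigenvalue

def eigenvalue(lo, hi, tol=1e-10):
    """Bisect on the boundary mismatch to locate an eigenvalue in (lo, hi)."""
    flo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) * flo > 0:
            lo, flo = mid, shoot(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(eigenvalue(0.5, 1.5))  # first eigenvalue of -y'' = lam*y is exactly 1
```

For this problem the eigenvalues are k^2, so the numerical result can be checked directly; the thesis methods handle far harder cases (higher orders, singular endpoints, non-separable conditions) where such elementary shooting breaks down.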
In conclusion, this work provides methods that can be adapted successfully for solving a direct (regular/singular) or inverse SLP of an arbitrary order with arbitrary boundary conditions.