@phdthesis{Ziege2022, author = {Ziege, Ricardo}, title = {Growth dynamics and mechanical properties of E. coli biofilms}, doi = {10.25932/publishup-55986}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-559869}, school = {Universit{\"a}t Potsdam}, pages = {xi, 123}, year = {2022}, abstract = {Biofilms are complex living materials that form as bacteria get embedded in a matrix of self-produced protein and polysaccharide fibres. The formation of a network of extracellular biopolymer fibres contributes to the cohesion of the biofilm by promoting cell-cell attachment and by mediating biofilm-substrate interactions. This sessile mode of bacterial growth has been well studied by microbiologists, with the aim of preventing the detrimental effects of biofilms in medical and industrial settings. Indeed, biofilms are associated with increased antibiotic resistance in bacterial infections, and they can also cause clogging of pipelines or promote bio-corrosion. However, biofilms have also gained interest from biophysicists due to their ability to form complex morphological patterns during growth. Recently, the emerging field of engineered living materials has begun to investigate biofilm mechanical properties at multiple length scales and to leverage the tools of synthetic biology to tune the functions of their constitutive biopolymers. This doctoral thesis aims at clarifying how the morphogenesis of Escherichia coli (E. coli) biofilms is influenced by their growth dynamics and mechanical properties. To address this question, I used methods from cell mechanics and materials science. I first studied how biological activity in biofilms gives rise to non-uniform growth patterns. In a second study, I investigated how E. coli biofilm morphogenesis and its mechanical properties adapt to an environmental stimulus, namely the water content of their substrate. Finally, I estimated how the mechanical properties of E. coli biofilms are altered when the bacteria express different extracellular biopolymers. On nutritive hydrogels, micron-sized E. coli cells can build centimetre-large biofilms. During this process, bacterial proliferation and matrix production introduce mechanical stresses in the biofilm, which are released through the formation of macroscopic wrinkles and delaminated buckles. To relate these biological and mechanical phenomena, I used time-lapse fluorescence imaging to track cell and matrix surface densities through the early and late stages of E. coli biofilm growth. Colocalization of high cell and matrix densities at the periphery precedes the onset of mechanical instabilities at this annular region. Early growth, analysed by adding fluorescent microspheres to the bacterial inoculum, is detected at this outer annulus. But only when high rates of matrix production are present in the biofilm centre does overall biofilm spreading initiate along the solid-air interface. By tracking larger fluorescent particles over longer times, I could distinguish several kinematic stages of E. coli biofilm expansion and observed a transition from non-linear to linear velocity profiles, which precedes the emergence of wrinkles at the biofilm periphery. Decomposing particle velocities into their radial and circumferential components revealed a final kinematic stage, where biofilm movement is mostly directed towards the radial delaminated buckles, which verticalize. The resulting compressive strains computed in these regions were observed to substantially deform the underlying agar substrates.
The co-localization of higher cell and matrix densities towards an annular region and the succession of several kinematic stages are thus expected to promote the emergence of mechanical instabilities at the biofilm periphery. These experimental findings are expected to advance future modelling approaches of biofilm morphogenesis. E. coli biofilm morphogenesis is further anticipated to depend on external stimuli from the environment. To clarify how the water content could be used to tune biofilm material properties, we quantified E. coli biofilm growth, wrinkling dynamics and rigidity as a function of the water content of the nutritive substrates. Time-lapse microscopy and computational image analysis revealed that substrates with high water content promote biofilm spreading kinetics, while substrates with low water content promote biofilm wrinkling. The wrinkles observed on biofilm cross-sections appeared more bent on substrates with high water content, while they tended to be more vertical on substrates with low water content. Both wet and dry biomass, accumulated over 4 days of culture, were larger in biofilms cultured on substrates with high water content, despite extra porosity within the matrix layer. Finally, the micro-indentation analysis revealed that substrates with low water content supported the formation of stiffer biofilms. This study shows that E. coli biofilms respond to the water content of their substrate, which might be used to tune their material properties for further applications. Biofilm material properties further depend on the composition and structure of the matrix of extracellular proteins and polysaccharides. In particular, E. coli biofilms were suggested to present tissue-like elasticity due to a dense fibre network consisting of amyloid curli and phosphoethanolamine-modified cellulose. To understand the contribution of these components to the emergent mechanical properties of E. coli biofilms, we performed micro-indentation on biofilms grown from bacteria of several strains. Besides showing higher dry masses, larger spreading diameters and slightly reduced water contents, biofilms expressing both main matrix components also presented high rigidities in the range of several hundred kPa, similar to biofilms containing only curli fibres. In contrast, a lack of amyloid curli fibres leads to much higher adhesive energies and a more viscoelastic, fluid-like material behaviour. Therefore, the combination of amyloid curli and phosphoethanolamine-modified cellulose fibres implies the formation of a composite material whereby the amyloid curli fibres provide rigidity to E. coli biofilms, whereas the phosphoethanolamine-modified cellulose rather acts as a glue. These findings motivate further studies involving purified versions of these protein and polysaccharide components to better understand how their interactions benefit biofilm functions. All three studies depict different aspects of biofilm morphogenesis, which are interrelated. The first work reveals the correlation between non-uniform biological activities and the emergence of mechanical instabilities in the biofilm. The second work demonstrates how E. coli biofilm morphogenesis and mechanical properties adapt to an environmental stimulus, namely water.
Finally, the last study reveals the complementary role of the individual matrix components in the formation of a stable biofilm material, which not only forms complex morphologies but also functions as a protective shield for the bacteria it contains. Our experimental findings on E. coli biofilm morphogenesis and their mechanical properties can have further implications for fundamental and applied biofilm research fields.}, language = {en} } @phdthesis{Zeuschner2022, author = {Zeuschner, Steffen Peer}, title = {Magnetoacoustics observed with ultrafast x-ray diffraction}, doi = {10.25932/publishup-56109}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-561098}, school = {Universit{\"a}t Potsdam}, pages = {V, 128, IX}, year = {2022}, abstract = {In the present thesis I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics as well as generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films is a challenging endeavor. Hence, the development and implementation of effective experimental tools and comprehensive models are paramount to propel future academic and technological progress. In all experiments in the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study on the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation according to the literature. Higher-order standing spin waves (SSWs) are triggered by coherent and incoherent motion of atoms, in other words phonons, which I quantified with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively. This is done with a combination of multi-temperature, elastic, magneto-elastic, anisotropy and micro-magnetic modeling. The collective data from UXRD, to probe the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, to monitor the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group implemented a combination of UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions.
Furthermore, I developed a new X-ray diffraction measurement routine which significantly reduces the measurement time of UXRD experiments by up to an order of magnitude. It is called reciprocal space slicing (RSS) and utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.}, language = {en} } @phdthesis{Zeitz2022, author = {Zeitz, Maria}, title = {Modeling the future resilience of the Greenland Ice Sheet}, doi = {10.25932/publishup-56883}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-568839}, school = {Universit{\"a}t Potsdam}, pages = {x, 189}, year = {2022}, abstract = {The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling. One of these feedbacks, the melt-elevation feedback, is driven by the temperature rise with decreasing altitude: As the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. The glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback as the bedrock lifts in response to an ice load decrease, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming - from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of those feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant. Furthermore, the increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering its albedo, which in turn enhances melt rates - a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM).
Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected, and adds up to 58\% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback. Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen's flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in flow parameters leads to a range of ice loss comparable to the range caused by different warming levels. While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios - while still staying within the limits of the Paris Agreement - include a temporal overshoot of global warming. I show that an overshoot by 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet. Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these influence the resilience of the Greenland Ice Sheet against global warming, in the near future and in the long term.}, language = {en} } @phdthesis{Witt2018, author = {Witt, Tanja Ivonne}, title = {Camera Monitoring at volcanoes}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-421073}, school = {Universit{\"a}t Potsdam}, pages = {viii, 140}, year = {2018}, abstract = {Basaltic fissure eruptions, such as on Hawai'i or on Iceland, are thought to be driven by the lateral propagation of feeder dikes and graben subsidence. Associated solid earth processes, such as deformation and structural development, are well studied by means of geophysical and geodetic technologies. The eruptions themselves, lava fountaining and venting dynamics, in turn, have been much less investigated due to hazardous access, their local dimensions, fast processes, and the resulting poor data availability. This thesis provides a detailed quantitative understanding of the shape and dynamics of lava fountains and the morphological changes at their respective eruption sites. For this purpose, I apply image processing techniques to the sequences of frames of video records, acquired with drones and permanently installed cameras, from two well-known fissure eruptions in Hawai'i and Iceland. This way I extract the dimensions of multiple lava fountains, visible in all frames. By putting these results together and considering the acquisition times of the frames, I quantify the variations in height, width and eruption velocity of the lava fountains.
Then I analyse these time series in both the time and frequency domains and investigate the similarities and correlations between adjacent lava fountains. Following this procedure, I am able to link the dynamics of the individual lava fountains to physical parameters of the magma transport in the feeder dyke of the fountains. The first case study in this thesis focuses on the March 2011 Pu'u'O'o eruption, Hawai'i, where a continuous pulsating behaviour at all eight lava fountains has been observed. The lava fountains, even those from different parts of the fissure that are closely connected, show a similar frequency content and eruption behaviour. The regular pattern in the lava fountain heights suggests a controlling process within the magma feeder system, such as a hydraulic connection in the underlying dyke, affecting or even controlling the pulsating behaviour. The second case study addresses the 2014-2015 Holuhraun fissure eruption, Iceland. In this case, the feeder dyke is highlighted by the surface expressions of graben-like structures and fault systems. At the eruption site, the activity decreases from a continuous line of fire of ~60 vents to a limited number of lava fountains. This can be explained by preferred upward magma movement through vertical structures of the pre-eruptive morphology. Seismic tremors during the eruption reveal vent opening at the surface and/or pressure changes in the feeder dyke. The evolving topography of the cinder cones during the eruption interacts with the lava fountain behaviour. Local variations in the lava fountain height and width are controlled by the conduit diameter, the depth of the lava pond and the shape of the crater. Modelling of the fountain heights shows that long-term eruption behaviour is controlled mainly by pressure changes in the feeder dyke. This research consists of six chapters with four papers, including two first-author and two co-author papers. It establishes a new method to analyse lava fountain dynamics by video monitoring. The comparison with the seismicity, geomorphologic and structural expressions of fissure eruptions shows a complex relationship between focussed flow through dykes, the morphology of the cinder cones, and the lava fountain dynamics at the vents of a fissure eruption.}, language = {en} } @phdthesis{Wischnewski2011, author = {Wischnewski, Juliane}, title = {Reconstructing climate variability on the Tibetan Plateau : comparing aquatic and terrestrial signals}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-52453}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {Spatial and temporal temperature and moisture patterns across the Tibetan Plateau are very complex. The onset and magnitude of the Holocene climate optimum in the Asian monsoon realm, in particular, are a subject of considerable debate as this time period is often used as an analogue for recent global warming. In the light of contradictory inferences regarding past climate and environmental change on the Tibetan Plateau, I have attempted to explain mismatches in the timing and magnitude of change. Therefore, I analysed the temporal variation of fossil pollen and diatom spectra and the geochemical record from palaeo-ecological records covering different time scales (late Quaternary and the last 200 years) from two core regions in the NE and SE Tibetan Plateau.
For interpretation purposes, I combined my data with other available palaeo-ecological data to set up corresponding aquatic and terrestrial proxy data sets of two lake pairs and two sets of sites. I focused on the direct comparison of proxies representing lacustrine response to climate signals (e.g., diatoms, ostracods, geochemical record) and proxies representing changes in the terrestrial environment (i.e., terrestrial pollen), in order to assess whether the lake and its catchments respond at similar times and magnitudes to environmental changes. Therefore, I introduced the established numerical technique Procrustes rotation as a new approach in palaeoecology to quantitatively compare raw data of any two sedimentary records of interest in order to assess their degree of concordance. Focusing on the late Quaternary, sediment cores from two lakes (Kuhai Lake 35.3°N; 99.2°E; 4150 m asl; and Koucha Lake 34.0°N; 97.2°E; 4540 m asl) on the semi-arid northeastern Tibetan Plateau were analysed to identify post-glacial vegetation and environmental changes, and to investigate the responses of lake ecosystems to such changes. Based on the pollen record, five major vegetation and climate changes could be identified: (1) A shift from alpine desert to alpine steppe indicates a change from cold, dry conditions to warmer and more moist conditions at 14.8 cal. ka BP, (2) alpine steppe with tundra elements points to conditions of higher effective moisture and a stepwise warming climate at 13.6 cal. ka BP, (3) the appearance of high-alpine meadow vegetation indicates a further change towards increased moisture, but with colder temperatures, at 7.0 cal. ka BP, (4) the reoccurrence of alpine steppe with desert elements suggests a return to a significantly colder and drier phase at 6.3 cal. ka BP, and (5) the establishment of alpine steppe-meadow vegetation indicates a change back to relatively moist conditions at 2.2 cal. ka BP. To place the reconstructed climate inferences from the NE Tibetan Plateau into the context of Holocene moisture evolution across the Tibetan Plateau, I applied a five-scale moisture index and average link clustering to all available continuous pollen and non-pollen palaeoclimate records from the Tibetan Plateau, in an attempt to detect coherent regional and temporal patterns of moisture evolution on the Plateau. However, no common temporal or spatial pattern of moisture evolution during the Holocene could be detected, which can be attributed to the complex responses of different proxies to environmental changes in an already very heterogeneous mountain landscape, where minor differences in elevation can result in marked variations in microenvironments. Focusing on the past 200 years, I analysed the sedimentary records (LC6 Lake 29.5°N, 94.3°E, 4132 m asl; and Wuxu Lake 29.9°N, 101.1°E, 3705 m asl) from the southeastern Tibetan Plateau. I found that despite presumed significant temperature increases over that period, pollen and diatom records from the SE Tibetan Plateau reveal only very subtle changes throughout their profiles. The compositional species turnover investigated over the last 200 years appears relatively low in comparison to the species reorganisations during the Holocene. The results indicate that climatically induced ecological thresholds are not yet crossed, but that human activity has an increasing influence, particularly on the terrestrial ecosystem.
Forest clearances and reforestation have not caused forest decline in our study area, but rather a conversion of natural forests to semi-natural secondary forests. The results from the numerical proxy comparison of the two sets of two pairs of Tibetan lakes indicate that the use of different proxies and the work with palaeo-ecological records from different lake types can yield divergent accounts of inferred change. Irrespective of the timescale (Holocene or last 200 years) or region (SE or NE Tibetan Plateau) analysed, the agreement in terms of the direction, timing, and magnitude of change between the corresponding terrestrial data sets is generally better than the match between the corresponding lacustrine data sets, suggesting that lacustrine proxies may partly be influenced by in-lake or local catchment processes whereas the terrestrial proxy reflects a more regional climatic signal. The current disagreement on coherent temporal and spatial climate patterns on the Tibetan Plateau can partly be ascribed to the complexity of proxy response and lake systems on the Tibetan Plateau. Therefore, a multi-proxy, multi-site approach is important in order to gain a reliable climate interpretation for the complex mountain landscape of the Tibetan Plateau.}, language = {en} } @phdthesis{WindirschWoiwode2024, author = {Windirsch-Woiwode, Torben}, title = {Permafrost carbon stabilisation by recreating a herbivore-driven ecosystem}, doi = {10.25932/publishup-62424}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-624240}, school = {Universit{\"a}t Potsdam}, pages = {X, 104, A-57}, year = {2024}, abstract = {With Arctic ground as a huge and temperature-sensitive carbon reservoir, maintaining low ground temperatures and frozen conditions to prevent further carbon emissions that contribute to global climate warming is a key element in humankind's fight to maintain habitable conditions on Earth. Previous studies showed that during the late Pleistocene, Arctic ground conditions were generally colder and more stable as the result of an ecosystem dominated by large herbivorous mammals and vast extents of graminoid vegetation - the mammoth steppe. Characterised by high plant productivity (grassland) and low ground insulation due to animal-caused compression and removal of snow, this ecosystem enabled deep permafrost aggradation. Now, with tundra and shrub vegetation common in the terrestrial Arctic, these effects are not in place anymore. However, it appears to be possible to recreate this ecosystem locally by artificially increasing animal numbers, and hence keep Arctic ground cold to reduce organic matter decomposition and carbon release into the atmosphere. By measuring thaw depth, total organic carbon and total nitrogen content, stable carbon isotope ratio, radiocarbon age, n-alkane and alcohol characteristics, and assessing dominant vegetation types along grazing intensity transects in two contrasting Arctic areas, it was found that recreating conditions locally, similar to the mammoth steppe, seems to be possible. For permafrost-affected soil, it was shown that intensive grazing, in direct comparison to non-grazed areas, reduces active layer depth and leads to higher TOC contents in the active layer soil. For soil only frozen on top in winter, an increase of TOC with grazing intensity could not be found, most likely because of confounding factors such as vertical water and carbon movement, which is not possible with an impermeable layer in permafrost.
In both areas, high animal activity led to a vegetation transformation towards species-poor graminoid-dominated landscapes with fewer shrubs. Lipid biomarker analysis revealed that, even though the available organic material is different between the study areas, in both permafrost-affected and seasonally frozen soils the organic material in sites affected by high animal activity was less decomposed than under less intensive grazing pressure. In conclusion, high animal activity affects decomposition processes in Arctic soils and the ground thermal regime, visible from reduced active layer depth in permafrost areas. Therefore, grazing management might be utilised to locally stabilise permafrost and reduce Arctic carbon emissions in the future, but is likely not scalable to the entire permafrost region.}, language = {en} } @phdthesis{Werhahn2023, author = {Werhahn, Maria}, title = {Simulating galaxy evolution with cosmic rays: the multi-frequency view}, doi = {10.25932/publishup-57285}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-572851}, school = {Universit{\"a}t Potsdam}, pages = {5, 220}, year = {2023}, abstract = {Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM and thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. But outside our local environment within the Milky Way, and in particular in other galaxies, we do not have the ability to measure CRs directly. However, there are many ways to indirectly observe CRs via the radiation they emit due to their interaction with magnetic and interstellar radiation fields as well as with the ISM. In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime. I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies, where CRs are included. This allows me to study their CR spectra and compare them to observations of the CR proton and electron spectra by the Voyager-1 satellite and the AMS-02 instrument in order to reveal the origin of the measured spectral features. Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies that range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations between the radio and gamma-ray luminosities with the far-infrared (FIR) emission of star-forming (SF) galaxies, respectively, where the latter is a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy due to the emission of radiation, whereas CRs tend to escape low SF galaxies more quickly. On top of that, I investigate the properties of CR transport that are needed in order to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that have been proposed to contradict the observed tight FRC. Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating for the first time the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport both spectrally and spatially.}, language = {en} } @phdthesis{Wendi2018, author = {Wendi, Dadiyorto}, title = {Recurrence Plots and Quantification Analysis of Flood Runoff Dynamics}, doi = {10.25932/publishup-43191}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-431915}, school = {Universit{\"a}t Potsdam}, pages = {114}, year = {2018}, abstract = {This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines when dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time delay embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event. Hence, the temporal succession of discharge values is taken into account, such as the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantify hydrograph similarity compared to conventional hydrological signatures.}, language = {en} } @phdthesis{Welsch2022, author = {Welsch, Maryna}, title = {Investigation of the stress tolerance regulatory network integration of the NAC transcription factor JUNGBRUNNEN1 (JUB1)}, doi = {10.25932/publishup-54731}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-547310}, school = {Universit{\"a}t Potsdam}, pages = {XIII, 116}, year = {2022}, abstract = {The NAC transcription factor (TF) JUNGBRUNNEN1 (JUB1) is an important negative regulator of plant senescence, as well as of gibberellic acid (GA) and brassinosteroid (BR) biosynthesis in Arabidopsis thaliana. Overexpression of JUB1 promotes longevity and enhances tolerance to drought and other abiotic stresses. A similar role of JUB1 has been observed in other plant species, including tomato and banana. Our data show that JUB1 overexpressors (JUB1-OXs) accumulate higher levels of proline than WT plants under control conditions, during the onset of drought stress, and thereafter. We identified that overexpression of JUB1 induces key proline biosynthesis genes and suppresses key proline degradation genes.
Furthermore, bZIP63, the transcription factor involved in proline metabolism, was identified as a novel downstream target of JUB1 by Yeast One-Hybrid (Y1H) analysis and Chromatin immunoprecipitation (ChIP). However, based on Electrophoretic Mobility Shift Assay (EMSA), direct binding of JUB1 to bZIP63 could not be confirmed. Our data indicate that JUB1-OX plants exhibit reduced stomatal conductance under control conditions. However, selective overexpression of JUB1 in guard cells did not improve drought stress tolerance in Arabidopsis. Moreover, the drought-tolerant phenotype of JUB1 overexpressors does not solely depend on the transcriptional control of the DREB2A gene. Thus, our data suggest that JUB1 confers tolerance to drought stress by regulating multiple components. To date, none of the previous studies on JUB1's regulatory network has focused on identifying protein-protein interactions. We, therefore, performed a yeast two-hybrid screen (Y2H), which identified several protein interactors of JUB1, two of which are the calcium-binding proteins CaM1 and CaM4. Both proteins interact with JUB1 in the nucleus of Arabidopsis protoplasts. Moreover, JUB1 is expressed with CaM1 and CaM4 under the same conditions. Since CaM1.1 and CaM4.1 encode proteins with identical amino acid sequences, all further experiments were performed with constructs involving the CaM4 coding sequence. Our data show that JUB1 harbors multiple CaM-binding sites, which are localized in both the N-terminal and C-terminal regions of the protein. One of the CaM-binding sites, localized in the DNA-binding domain of JUB1, was identified as a functional CaM-binding site since its mutation strongly reduced the binding of CaM4 to JUB1. Furthermore, JUB1 transactivates expression of the stress-related gene DREB2A in mesophyll cells; this effect is significantly reduced when the calcium-binding protein CaM4 is expressed as well. Overexpression of both genes in Arabidopsis results in early senescence, observed through lower chlorophyll content and an enhanced expression of senescence-associated genes (SAGs), when compared with single JUB1 overexpressors. Our data also show that JUB1 and CaM4 proteins interact in senescent leaves, which have increased Ca2+ levels when compared to young leaves. Collectively, our data indicate that JUB1 activity towards its downstream targets is fine-tuned by calcium-binding proteins during leaf senescence.}, language = {en} } @phdthesis{Weber2007, author = {Weber, Jens}, title = {Meso- und mikropor{\"o}se Hochleistungspolymere : Synthese, Analytik und Anwendungen}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-15994}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {Die Arbeit beschreibt die Synthese, Charakterisierung und Anwendung von meso- und mikropor{\"o}sen Hochleistungspolymeren. Im ersten Teil wird die Synthese von mesopor{\"o}sem Polybenzimidazol (PBI) auf der Basis einer Templatierungsmethode vorgestellt. Auf der Grundlage kommerzieller Monomere und Silikatnanopartikel sowie eines neuen Vernetzers wurde ein Polymer-Silikat-Hybridmaterial aufgebaut. Das Herausl{\"o}sen des Silikats mit Ammoniumhydrogendifluorid f{\"u}hrt zu mesopor{\"o}sen Polybenzimidazolen mit sph{\"a}rischen Poren von 9 bis 11 nm Durchmesser. Die Abh{\"a}ngigkeit der beobachteten Porosit{\"a}t vom Massenverh{\"a}ltnis Silikat zu Polymer wurde ebenso untersucht wie die Abh{\"a}ngigkeit der Porosit{\"a}t vom Vernetzergehalt.
Die Porosit{\"a}t vollvernetzter Proben zeigt eine lineare Abh{\"a}ngigkeit vom Verh{\"a}ltnis Silikat zu Polymer bis zu einem Grenzwert von 1. Wird der Grenzwert {\"u}berschritten, ist teilweiser Porenkollaps zu beobachten. Die Abh{\"a}ngigkeit der Porosit{\"a}t vom Vernetzergehalt bei festem Silikatgehalt ist nichtlinear. Oberhalb einer kritischen Vernetzerkonzentration wird eine komplette Replikation der Nanopartikel gefunden. Ist die Vernetzerkonzentration dagegen kleiner als der kritische Wert, so ist der v{\"o}llige Kollaps einiger Poren bei Stabilit{\"a}t der verbleibenden Poren zu beobachten. Ein komplett unpor{\"o}ses PBI resultiert bei Abwesenheit des Vernetzers. Die mesopor{\"o}sen PBI-Netzwerke konnten kontrolliert mit Phosphors{\"a}ure beladen werden. Die erhaltenen Addukte wurden auf ihre Protonenleitf{\"a}higkeit untersucht. Es kann gezeigt werden, dass die Nutzung der vordefinierten Morphologie im Vergleich zu einem unstrukturierten PBI in h{\"o}heren Leitf{\"a}higkeiten resultiert. Durch die vernetzte Struktur war des Weiteren gen{\"u}gend mechanische Stabilit{\"a}t gegeben, um die Addukte reversibel und bei sehr guten Leitf{\"a}higkeiten bis zu Temperaturen von 190°C bei 0\% relativer Feuchtigkeit zu untersuchen. Dies ist f{\"u}r unstrukturierte Phosphors{\"a}ure/PBI - Addukte aus linearem PBI nicht m{\"o}glich. Im zweiten Teil der Arbeit wird die Synthese intrinsisch mikropor{\"o}ser Polyamide und Polyimide vorgestellt. Das Konzept intrinsisch mikropor{\"o}ser Polymere konnte damit auf weitere Polymerklassen ausgeweitet werden. Als zentrales, strukturinduzierendes Motiv wurde 9,9'-Spirobifluoren gew{\"a}hlt. Dieses Molek{\"u}l ist leicht und vielf{\"a}ltig zu di- bzw. tetrafunktionellen Monomeren modifizierbar. Dabei wurden bestehende Synthesevorschriften modifiziert bzw. neue Vorschriften entwickelt. Ein erster Schwerpunkt innerhalb des Kapitels lag in der Synthese und Charakterisierung von l{\"o}slichen, intrinsisch mikropor{\"o}sen, aromatischen Polyamid und Polyimid. Es konnte gezeigt werden, dass das Beobachten von Mikroporosit{\"a}t stark von der molekularen Architektur und der Verarbeitung der Polymere abh{\"a}ngig ist. Die Charakterisierung der Porosit{\"a}t erfolgte unter Nutzung von Stickstoffsorption, Kleinwinkelr{\"o}ntgenstreuung und Molecular Modeling. Es konnte gezeigt werden, dass die Proben stark vom Umgebungsdruck abh{\"a}ngigen Deformationen unterliegen. Die starke Quellung der Proben w{\"a}hrend des Sorptionsvorgangs konnte durch Anwendung des "dual sorption" Modells, also dem Auftreten von Porenf{\"u}llung und dadurch induzierter Henry-Sorption, erkl{\"a}rt werden. Der zweite Schwerpunkt des Kapitels beschreibt die Synthese und Charakterisierung mikropor{\"o}ser Polyamid- und Polyimidnetzwerke. W{\"a}hrend Polyimidnetzwerke auf Spirobifluorenbasis ausgepr{\"a}gte Mikroporosit{\"a}t und spezifische Oberfl{\"a}chen von ca. 1100 m²/g aufwiesen, war die Situation f{\"u}r entsprechende Polyamidnetzwerke abweichend. Mittels Stickstoffsorption konnte keine Mikroporosit{\"a}t nachgewiesen werden, jedoch konnte mittels SAXS eine innere Grenzfl{\"a}che von ca. 300 m²/g nachgewiesen werden. Durch die in dieser Arbeit gezeigten Experimente kann die Grenze zwischen Polymeren mit hohem freien Volumen und mikropor{\"o}sen Polymeren somit etwas genauer gezogen werden. ausgepr{\"a}gte Mikroporosit{\"a}t kann nur in extrem steifen Strukturen nachgewiesen werden. 
Die Kombination der Konzepte "Mesoporosit{\"a}t durch Templatierung" und "Mikroporosit{\"a}t durch strukturierte Monomere" hatte ein hierarchisch strukturiertes Polybenzimidazol zum Ergebnis. Die Pr{\"a}senz einer Strukturierung im molekularen Maßstab konnte mittels SAXS bewiesen werden. Das so strukturierte Polybenzimidazol zeichnete sich durch eine h{\"o}here Protonenleitf{\"a}higkeit im Vergleich zu einem rein mesopor{\"o}sen PBI aus. Der letzte Teil der Arbeit besch{\"a}ftigte sich mit der Entwicklung einer neuen Synthesemethode zur Herstellung von Polybenzimidazol. Es konnte gezeigt werden, dass lineares PBI in einer eutektischen Salzschmelze aus Lithium- und Kaliumchlorid synthetisiert werden kann. Die Umsetzung der spirobifluorenbasierten Monomere zu l{\"o}slichem oder vernetztem PBI ist in der Salzschmelze m{\"o}glich.}, language = {de} } @phdthesis{Videla2014, author = {Videla, Santiago}, title = {Reasoning on the response of logical signaling networks with answer set programming}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-71890}, school = {Universit{\"a}t Potsdam}, year = {2014}, abstract = {Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present a software tool providing a complete pipeline for automated reasoning on the response of logical signaling networks.}, language = {en} } @phdthesis{Unterstab2005, author = {Unterstab, Gunhild}, title = {Charakterisierung der viralen Genprodukte p10 und P des Borna Disease Virus}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-6905}, school = {Universit{\"a}t Potsdam}, year = {2005}, abstract = {Das Borna Disease Virus (BDV, Bornavirus) besitzt ein einzelstr{\"a}ngiges RNA-Genom negativer Polarit{\"a}t und ist innerhalb der Ordnung Mononegavirales der Prototyp einer eigenen Virusfamilie, die der Bornaviridae. Eine außergew{\"o}hnliche Eigenschaft des Virus ist seine nukle{\"a}re Transkription und Replikation, eine weitere besteht in seiner F{\"a}higkeit, als neurotropes Virus sowohl in vivo als auch in vitro persistente Infektionen zu etablieren.
Die zugrunde liegenden Mechanismen sowohl der Replikation als auch der Persistenz sind derzeit noch unzureichend verstanden, auch deshalb, weil das Virus noch relativ „jung" ist: Erste komplette Sequenzen des RNA-Genoms wurden 1994 publiziert und erst vor einigen Monaten gelang die Generierung rekombinanter Viren auf der Basis klonierter cDNA. Im Mittelpunkt dieser Arbeit standen das p10 Protein und das Phosphoprotein (P), die von der gemeinsamen Transkriptionseinheit II in {\"u}berlappenden Leserahmen kodiert werden. Als im Kern der Wirtszelle replizierendes Virus ist das Bornavirus auf zellul{\"a}re Importmechanismen angewiesen, um den Kernimport aller an der Replikation beteiligten viralen Proteine zu gew{\"a}hrleisten. Das p10 Protein ist ein negativer Regulator der viralen RNA-abh{\"a}ngigen RNA-Polymerase (L). In vitro Importexperimente zeigten, dass p10 {\"u}ber den klassischen Importin alpha/beta abh{\"a}ngigen Kernimportweg in den Nukleus transportiert wird. Dies war unerwartet, da p10 kein vorhersagbares klassisches Kernlokalisierungssignal (NLS) besitzt und weist darauf hin, dass der zellul{\"a}re Importapparat offensichtlich flexibler ist als allgemein angenommen. Die ersten 20 N-terminalen AS vermitteln sowohl Kernimport als auch die Bindung an den Importrezeptor Importin alpha. Durch Di-Alanin-Austauschmutagenese wurden die f{\"u}r diesen Transportprozess essentiellen AS identifiziert und die Bedeutung hydrophober und polarer AS-Reste demonstriert. Die F{\"a}higkeit des Bornavirus, persistente Infektionen zu etablieren, wirft die Frage auf, wie das Virus die zellul{\"a}ren antiviralen Abwehrmechanismen, insbesondere das Typ I Interferon (IFN)-System, unterwandert. Das virale P Protein wurde in dieser Arbeit als potenter Antagonist der IFN-Induktion charakterisiert. Es verhindert die Phosphorylierung des zentralen Transkriptionsfaktors IRF3 durch die zellul{\"a}re Kinase TBK1 und somit dessen Aktivierung. Der Befund, dass P mit TBK1 Komplexe bildet und zudem auch als Substrat f{\"u}r die zellul{\"a}re Kinase fungiert, erlaubt es, erstmalig einen Mechanismus zu postulieren, in dem ein virales Protein (BDV-P) als putatives TBK1-Pseudosubstrat die IRF3-Aktivierung kompetitiv hemmt.}, subject = {Interferon }, language = {de} } @phdthesis{Ulaganathan2016, author = {Ulaganathan, Vamseekrishna}, title = {Molecular fundamentals of foam fractionation}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-94263}, school = {Universit{\"a}t Potsdam}, pages = {ix, 136}, year = {2016}, abstract = {Foam fractionation of surfactant and protein solutions is a process dedicated to separating surface-active molecules from each other due to their differences in surface activity. The process is based on forming bubbles in a certain mixed solution followed by detachment and rising of bubbles through a certain volume of this solution, and consequently on the formation of a foam layer on top of the solution column. Therefore, systematic analysis of this whole process comprises, at first, investigations dedicated to the formation and growth of single bubbles in solutions, which is equivalent to the main principles of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rising in a respective aqueous solution.
The third and final stage of the process is the formation and stabilization of the foam created by these bubbles, which contains the adsorption layers formed at the growing bubble surface; these layers are carried up, become modified during bubble rise, and finally end up as part of the foam layer. Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH and ionic strength in order to describe the process of accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow an understanding of the mechanism of adsorption, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities have been observed for protein adsorption kinetics at sufficiently short adsorption times. First of all, at short adsorption times the surface tension remains constant for a while before it decreases as expected due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions, the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to some mN/m, was rather explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions. The experiments dedicated to the local velocity of rising air bubbles in solutions were performed in a broad range of BLG concentration, pH and ionic strength. Additionally, rising bubble experiments were done for surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on time scales of seconds, while dynamic surface tensions still do not show any measurable changes at this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present due to the complex situation of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum values in the profile as a characteristic parameter for dynamic adsorption layers at the bubble surface more quantitatively.
The studies with protein-surfactant mixtures demonstrate in an impressive way that the complexes formed by the two compounds change the surface activity as compared to the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules; these effects, however, can differ between the rising bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet but give rise to discussions about the structure of the protein adsorption layer under dynamic conditions or in the equilibrium state. The third main stage of the discussed process of fractionation is the formation and characterization of protein foams from BLG solutions at different pH and ionic strength. Of course, a minimum BLG concentration is required to form foams. This minimum protein concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although at the isoelectric point, at about pH 5 for BLG, the hydrophobicity and hence the surface activity should be the highest, the concentration and ionic strength effects on the rising velocity profile as well as on the foamability and foam stability do not show a maximum. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt their respective conformations once they are adsorbed at the surface. All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamellae - two adsorption layers with a liquid core - a foamate taken from foaming experiments should be enriched in the stabilizing molecules. For determining the concentration of the foamate, again the very sensitive bubble rising velocity profile method was applied, which works for any type of surface-active material. This also includes technical surfactants or protein isolates for which an accurate composition is unknown.}, language = {en} } @phdthesis{Trautmann2022, author = {Trautmann, Tina}, title = {Understanding global water storage variations using model-data integration}, doi = {10.25932/publishup-56595}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-565954}, school = {Universit{\"a}t Potsdam}, pages = {VIII, 141}, year = {2022}, abstract = {Climate change is one of the greatest challenges to humanity in this century, and its most noticeable consequences are expected to be impacts on the water cycle - in particular the distribution and availability of water, which is fundamental for all life on Earth. In this context, it is essential to better understand where and when water is available and what processes influence variations in water storages.
While estimates of the overall terrestrial water storage (TWS) variations are available from the GRACE satellites, these represent the vertically integrated signal over all water stored in ice, snow, soil moisture, groundwater and surface water bodies. Therefore, complementary observational data and hydrological models are still required to determine the partitioning of the measured signal among different water storages and to understand the underlying processes. However, the application of large-scale observational data is limited by their specific uncertainties and the incapacity to measure certain water fluxes and storages. Hydrological models, on the other hand, vary widely in their structure and process-representation, and rarely incorporate additional observational data to minimize uncertainties that arise from their simplified representation of the complex hydrologic cycle. In this context, this thesis aims to contribute to improving the understanding of global water storage variability by combining simple hydrological models with a variety of complementary Earth observation-based data. To this end, a model-data integration approach is developed, in which the parameters of a parsimonious hydrological model are calibrated against several observational constraints, including GRACE TWS, simultaneously, while taking into account each data set's specific strengths and uncertainties. This approach is used to investigate three specific aspects that are relevant for modelling and understanding the composition of large-scale TWS variations. The first study focusses on Northern latitudes, where snow and cold-region processes define the hydrological cycle. While the study confirms previous findings that seasonal dynamics of TWS are dominated by the cyclic accumulation and melt of snow, it reveals that inter-annual TWS variations, on the contrary, are determined by variations in liquid water storages. Additionally, it is found to be important to consider the impact of compensatory effects of spatially heterogeneous hydrological variables when aggregating the contribution of different storage components over large areas. Hence, the determinants of TWS variations are scale-dependent and the underlying driving mechanisms cannot simply be transferred between spatial and temporal scales. These findings are supported by the second study for the global land areas beyond the Northern latitudes as well. This second study further identifies the considerable impact of how vegetation is represented in hydrological models on the partitioning of TWS variations. Using spatio-temporally varying fields of Earth observation-based data to parameterize vegetation activity not only significantly improves model performance, but also reduces parameter equifinality and process uncertainties. Moreover, the representation of vegetation drastically changes the contribution of different water storages to overall TWS variability, emphasizing the key role of vegetation for water allocation, especially between sub-surface and delayed water storages. However, the study also identifies parameter equifinality regarding the decay of sub-surface and delayed water storages by either evapotranspiration or runoff, and thus emphasizes the need for further constraints on these processes. The third study focuses on the role of river water storage, in particular whether it is necessary to include computationally expensive river routing for model calibration and validation against the integrated GRACE TWS.
The results suggest that river routing is not required for model calibration in such a global model-data integration approach, due to the larger influence of other observational constraints; instead, the determinability of certain model parameters and associated processes is identified as an issue of greater relevance. In contrast to model calibration, considering river water storage derived from routing schemes can significantly improve modelled TWS compared to GRACE observations, and it should therefore be considered for model evaluation against GRACE data. Beyond these specific findings that contribute to improved understanding and modelling of large-scale TWS variations, this thesis demonstrates the potential of combining simple modelling approaches with diverse Earth observational data to improve model simulations, overcome inconsistencies between different observational data sets, and identify areas that require further research. These findings encourage future efforts to take advantage of the increasing number of diverse global observational data sets.}, language = {en} } @phdthesis{Tattarini2022, author = {Tattarini, Giulia}, title = {A job is good, but is a good job healthier?}, doi = {10.25932/publishup-53672}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-536723}, school = {Universit{\"a}t Potsdam}, pages = {182}, year = {2022}, abstract = {What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? Answering these questions is the aim of my cumulative thesis. This study aims to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health. In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', my co-authors and I measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models, and we undertake separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across different welfare regimes. As an identification strategy, we first implement static fixed-effects panel models, which control for observed time-varying characteristics and for indirect health selection—i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effects panel models, adding a lagged dependent variable to the model. 
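A minimal sketch of such a static fixed-effects linear probability model with a family interaction term, using synthetic panel data and hypothetical variable names rather than the EU-SILC microdata, could look as follows:
\begin{verbatim}
# Sketch of a fixed-effects linear probability model with an interaction term
# (synthetic panel data; variable names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_persons, n_waves = 500, 4
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_persons), n_waves),
    "wave": np.tile(np.arange(n_waves), n_persons),
})
df["job_loss"] = rng.binomial(1, 0.15, len(df))
df["partner_employed"] = rng.binomial(1, 0.60, len(df))
# synthetic outcome: job loss raises the probability of bad health,
# less so when an employed partner is present
p_bad = 0.2 + 0.15 * df.job_loss - 0.10 * df.job_loss * df.partner_employed
df["bad_health"] = rng.binomial(1, p_bad.clip(0, 1))

# person fixed effects via C(pid); wave dummies absorb common time shocks
fe_lpm = smf.ols("bad_health ~ job_loss * partner_employed + C(wave) + C(pid)",
                 data=df).fit(cov_type="cluster",
                              cov_kwds={"groups": df["pid"]})
print(fe_lpm.params.filter(like="job_loss"))
\end{verbatim}
The dynamic specification mentioned above would additionally include the lagged dependent variable among the regressors.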
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find the partner's employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health, by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and that the support provided by the family is a crucial resource in that regard. We also report an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensation effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and - especially in the Southern regimes - a high level of familialism. The first paper also reveals important gender differences concerning the effects of job loss, the family and the welfare state. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates in more depth the gender differential in health driven by unemployment. As this is a highly contested issue in the literature, we study whether men are more penalized than women or the other way around, and which mechanisms may explain the gender difference. To do so, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through commitment to 'alternative roles', which can provide the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society. Further, we merge the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and may thus explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into or remain in unemployment (i.e., to self-select into it) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a menace for women than for their male counterparts. 
Finally, relying on the idea of different gender regimes, we extend these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized, and taking up alternative roles is more socially accepted, for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill-health, the separation from work is eased by the availability of alternative roles (Hp.2). By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, which represent radically different gender regimes and thus provide institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany, comparing East and West Germany as well as older and younger cohorts within West Germany (SOEP data, 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is to estimate random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection. We do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts - models are estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing a direct test of whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2), using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection in the relationship between unemployment and health by comparing the partially and fully adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women. We found support for our first hypothesis—the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany and younger and older cohorts in West Germany. Our results, however, did not support our theoretical argument on social selection. We found that in Sweden, women are more strongly selected out of employment than men, whereas in Italy health selection does not seem to be the primary mechanism behind the gender differential—Italian men and women seem to be selected out of employment to the same extent. 
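As a simplified stand-in for the correlated random-effects probit models described above, a pooled probit with an unemployment-by-gender interaction and average marginal effects on synthetic data (hypothetical variable names) can illustrate the estimation logic:
\begin{verbatim}
# Sketch of the interaction test and average marginal effects with a pooled
# probit on synthetic data (variable names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "unemployed": rng.binomial(1, 0.12, n),
    "female": rng.binomial(1, 0.50, n),
    "age": rng.integers(20, 65, n),
})
# synthetic data-generating process: unemployment hurts men's health more
latent = (-1.0 + 0.6 * df.unemployed - 0.35 * df.unemployed * df.female
          + 0.01 * (df.age - 40) + rng.normal(0, 1, n))
df["bad_health"] = (latent > 0).astype(int)

probit = smf.probit("bad_health ~ unemployed * female + age", data=df).fit(disp=0)
print(probit.summary().tables[1])        # interaction term tests the gender gap
print(probit.get_margeff().summary())    # average marginal effects (AME)
\end{verbatim}
In the full analysis, the correlated random-effects specification and the KHB decomposition are additionally required to separate selection from the causal effect of unemployment.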
Specifically, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp.2), despite the fact that the institutional and cultural context would offer them a wider range of 'alternative roles' relative to men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where cross-country heterogeneity is reduced in order to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes. While the health consequences of unemployment have been on the research agenda for a long time, interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—has emerged only more recently. Since the 1980s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and about the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, focusing particularly on gender differences. With this paper, I aim to overcome three major shortcomings of earlier studies. The first concerns the cross-sectional nature of the data, which prevents authors from ruling out unobserved heterogeneity as a mechanism behind the association between precarious employment and health. Indeed, several unmeasured individual characteristics—such as cognitive abilities—may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping this relationship. Moreover, available results on the gender differential are mixed and inconsistent: some studies find precarious employment to be more detrimental to women's health, while others find no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been consistent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators—e.g., temporary employment, subjective job insecurity—to measure EP and study the association with health. Besides the fact that this approach only partially captures the phenomenon's complexity, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary relative to permanent employment, while others report only minor differences. 
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves of SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Indeed, since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and temporary agency work—have increased over time, while wages have fallen, even among workers in standard employment. In addition, the power of unions has declined over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1). Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles, in which being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has barely changed, shifting only towards a modified version. Thus, women usually take on the double role of worker (the so-called secondary earner) and caregiver, while men still devote most of their time to paid work activities. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, with a notable move back towards more traditional gender ideologies. In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries, resulting in a stronger effect of EP on women's health than on men's health (Hp.2). An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identity and a stable, well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived by themselves (and by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. 
Engaging in behaviors that contradict stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health. I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to theory and previous empirical research: income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, and workers' vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact. Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models, which add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender to the fully adjusted model in the pooled sample. My correlated random-effects models showed a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although not statistically significant, the evidence seems in line with previous cross-sectional literature. It supports the hypothesis that employment precariousness could be detrimental to workers' health. Further, my results showed the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence is still mostly descriptive. Moreover, my results revealed a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence contradicts previous theory, according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. In contrast, it seems to confirm the idea that men in precarious work may experience role conflict to a larger extent than women, as their self-standard is supposed to be the stereotypical breadwinner with a good and well-rewarded job. Finally, results from the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a latent EP variable can be expressed by a multidimensional and continuous indicator. All in all, the results on unemployment and employment precariousness reveal complementarities, which have two implications: policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. 
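The construction of the MCA-based composite indicator described above can be illustrated by a minimal sketch on synthetic categorical insecurity items (indicator-matrix approach in plain numpy/pandas; item names and category codings are hypothetical):
\begin{verbatim}
# Sketch of an MCA-based composite score from categorical insecurity items
# (indicator-matrix approach; item names and codings are hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
items = pd.DataFrame({
    "income_insecurity": rng.integers(0, 3, n),    # 0 = low ... 2 = high
    "contract_insecurity": rng.integers(0, 2, n),
    "social_insecurity": rng.integers(0, 3, n),
    "hours_insecurity": rng.integers(0, 2, n),
}).astype(str)

Z = pd.get_dummies(items).to_numpy(dtype=float)    # indicator (disjunctive) matrix
P = Z / Z.sum()                                    # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c)) # standardized residuals
U, sval, Vt = np.linalg.svd(S, full_matrices=False)

# principal row coordinates; the first axis serves as the composite EP score
row_coords = (U / np.sqrt(r)[:, None]) * sval
ep_score = row_coords[:, 0]
print("share of inertia on axis 1: %.2f" % (sval[0] ** 2 / (sval ** 2).sum()))
print("EP score range: %.2f to %.2f" % (ep_score.min(), ep_score.max()))
\end{verbatim}
Using the first axis as the summary scale keeps the resulting indicator multidimensional in origin and continuous in form.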
In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, reduce the future costs in terms of both individual health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in those contexts where the welfare state is less developed. Moreover, my results underline the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment, in general, have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and further distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of the gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. My results indeed seem to suggest that women's health advantage may be a transitory phenomenon, contingent on the predominant gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant normative standard, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is a desirable aim for contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and perhaps more crucial question: what kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality in society should focus on improving women's integration into the labor market, implementing policies targeting men, and facilitating their involvement in the private sphere of life. An equal redistribution of social roles could trigger a crucial transformation of gender roles and of the cultural models that still sustain and legitimate gender inequality in Western societies.}, language = {en} } @phdthesis{Swierczynski2012, author = {Swierczynski, Tina}, title = {A 7000 yr runoff chronology from varved sediments of Lake Mondsee (Upper Austria)}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-66702}, school = {Universit{\"a}t Potsdam}, year = {2012}, abstract = {The potential increase in frequency and magnitude of extreme floods is currently discussed in the context of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance in order to assess future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches to reconstruct historical and pre-historical flood events are needed. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental change over more than 10,000 years, down to seasonal resolution. 
Since lake basins additionally act as natural sediment traps, the riverine sediment supply, which is preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis, I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeastern European Alps), which covers the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by combining microfacies analysis on thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility measurements. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and source catchment. Based on varve counting, controlled by 14C age dates, the main goals of this thesis are (i) to identify seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. This thesis proceeds along different time slices, presenting an integrative approach that links instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the lake sediments. The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers. Therein, two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers enriched in siliciclastic and dolomitic material reveal sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are solely recorded within the southern lake basin. In comparison with instrumental data, thicker graded layers result from local debris flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods, as recorded by flood layer deposition, are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to climate cooling and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or decreased flood activity appeared during the MCA and the LIA. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeastern Alps. The 7000-year flood chronology reveals 47 debris flows and 269 floods, with flood activity increasing around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, before present = AD 1950). 
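A minimal sketch of how such a flood-frequency curve can be derived from a list of dated event layers - with synthetic event ages and an arbitrarily chosen 100-year counting window - could look as follows:
\begin{verbatim}
# Sketch of a flood-frequency curve from dated event layers
# (synthetic event ages in varve years BP; window length is illustrative).
import numpy as np

rng = np.random.default_rng(4)
flood_ages = np.sort(rng.integers(0, 7000, 269))   # synthetic event ages (varve yr BP)

window = 100                                       # years per counting window
centres = np.arange(window // 2, 7000, window)
frequency = np.array([np.sum(np.abs(flood_ages - t) <= window // 2)
                      for t in centres])           # events per 100-yr window

for t, f in zip(centres[:5], frequency[:5]):
    print(f"{t:5d} varve yr BP: {f} floods per {window} yr")
\end{verbatim}
Window counts of this kind underlie the reported shifts in flood activity.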
This significant increase in flood activity coincides with millennial-scale climate cooling, as reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could have also influenced human life in early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later lake dwellings were built on piles in the water, suggesting an early flood-risk adaptation by humans and/or a general change in the Late Neolithic culture of lake-dwellers for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evident.}, language = {en} } @phdthesis{Stettner2018, author = {Stettner, Samuel}, title = {Exploring the seasonality of rapid Arctic changes from space}, doi = {10.25932/publishup-42578}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-425783}, school = {Universit{\"a}t Potsdam}, pages = {XIII, 132}, year = {2018}, abstract = {Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, global climate and socioeconomic systems of northern communities. A research gap exists in high spatial resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia and on Qikiqtaruk (Herschel Island), Canada. In the thesis the following three research questions are addressed: • Are TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale, and can these datasets, in combination with climate data, identify the climatic drivers of permafrost degradation? • Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments, and how do they perform compared to optical satellite data and field-based measurements? • Do TSX time series reflect the phenology of Arctic vegetation, and how does the recorded signal compare to in-situ greenness data from RGB time-lapse cameras and vegetation height from field surveys? To answer these research questions, three years of TSX backscatter data, from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site, were used in quantitative and qualitative analyses complementary to optical satellite data and in-situ time-lapse imagery. The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia, were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization and combined in a geoinformation system with manually digitized cliff top lines from the optical satellite data and rates of erosion extracted from time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. 
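A minimal sketch of the threshold-based segmentation step, applied here to a synthetic backscatter image with an artificial bright cliff face rather than to the actual TSX scenes, could look as follows:
\begin{verbatim}
# Sketch of threshold-based cliff-line extraction on a synthetic image
# (scikit-image; scene geometry and intensities are made up).
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(5)
img = rng.normal(0.2, 0.05, (200, 300))            # speckle-like background
rows = np.arange(200)
cliff_col = (150 + 20 * np.sin(rows / 30)).astype(int)
for r, c in zip(rows, cliff_col):                  # bright, quasi-vertical cliff face
    img[r, c:c + 8] += 0.6

thresh = threshold_otsu(img)                       # global intensity threshold
mask = img > thresh                                # binary cliff mask
# vectorize the cliff-top line: first bright column in each image row
cliff_line = np.array([np.argmax(row) if row.any() else -1 for row in mask])
print("threshold: %.3f" % thresh)
print("mean cliff-top column: %.1f" % cliff_line[cliff_line >= 0].mean())
\end{verbatim}
Repeating such an extraction for every acquisition date yields the cliff-top time series from which erosion rates are derived.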
Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale, whereas seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-Band backscatter data for high temporal resolution monitoring of rapid permafrost degradation. The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps of Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimations from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later in the season than Landsat 8, underlining the advantage of TSX for the detection of old snow. The TSX-derived snow information provided valuable insights into snowmelt dynamics on Qikiqtaruk that were previously not available. The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channel at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at a 32° incidence angle. This is likely driven by volume scattering from emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely as a result of surface drying rather than a vegetation structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high temporal resolution monitoring of vegetation phenology. Overall, this thesis demonstrates that dense TSX time series, optical remote sensing and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.}, language = {en} } @phdthesis{Steding2022, author = {Steding, Svenja}, title = {Geochemical and Hydraulic Modeling of Cavernous Structures in Potash Seams}, doi = {10.25932/publishup-54818}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-548182}, school = {Universit{\"a}t Potsdam}, pages = {IX, 104}, year = {2022}, abstract = {Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk associated with any of these usage types arises from contact with groundwater or surface water, which causes an uncontrolled dissolution of salt rock and, in the worst case, can result in the flooding or collapse of underground facilities. 
Especially along potash seams, cavernous structures can spread quickly, because potash salts show a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. Therefore, the objective of the present thesis is to describe the evolution of cavernous structures along potash seams in space and time in order to improve hazard mitigation during the utilization of salt deposits. The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous sponge like structure. With the saturation of the solution increasing, various secondary minerals are formed, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where solution is entering. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called 'interchange'. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate and mineralogy are systematically investigated, taking also heterogeneous potash seams into account. The results show that basically four different cases can be distinguished, with mixed forms being a frequent occurrence in nature. The classification scheme is based on the dimensionless numbers P{\´e}clet and Damk{\"o}hler, and allows for a first assessment of the hazard potential. In future, the model can be applied to any field case, using measurement data for calibration. The presented research work provides a reactive transport model that is able to spatially and temporally characterize the propagation of cavernous structures along potash seams for the first time. Furthermore, it allows to determine thickness and composition of transition zones between cavern center and unaffected salt rock. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding can thus be reduced. The models may also contribute to an improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions regarding the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as integrity or stability loss, as well as of suitable mitigation measures.}, language = {en} } @phdthesis{Stanke2023, author = {Stanke, Sandra}, title = {AC electrokinetic immobilization of influenza viruses and antibodies on nanoelectrode arrays for on-chip immunoassays}, doi = {10.25932/publishup-61716}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-617165}, school = {Universit{\"a}t Potsdam}, pages = {x, 115}, year = {2023}, abstract = {In the present thesis, AC electrokinetic forces, like dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer sized biological objects. 
These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza viruses as well as anti-influenza antibodies were chosen as model systems. Common methods to bring antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds, without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer row to the inner ones. Different causes for this gradient have been discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that parts of the accumulated material are permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost exclusively presented as a method for temporary immobilization. The spatial distribution of the immobilized viral material or the anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: A) the influenza virus as the bio-receptor and B) the influenza virus as the analyte. Different sources of error were eliminated by ELISA and passivation experiments. Hence, the activity of the immobilized object was inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. On the other hand, detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to a loss of activity or an unfavorable orientation of the antibodies. Thus, further examinations of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips possess the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. This device can be utilized as a basis for diverse applications in diagnosing and treating influenza, as well as various other pathogens.}, language = {en} } @phdthesis{Simsek2022, author = {Simsek, Ibrahim}, title = {Ink-based preparation of chalcogenide perovskites as thin films for PV applications}, doi = {10.25932/publishup-57271}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-572711}, school = {Universit{\"a}t Potsdam}, pages = {iv, 113}, year = {2022}, abstract = {The increasing demand for energy in the current technological era and the recent political decisions about giving up on nuclear energy have driven humanity to focus on alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. Such properties have made the efficiency of halide perovskites comparable with that of silicon solar cells for single junctions within a decade of research. 
However, the downsides of halide perovskites are their poor stability and, for the most stable compositions, lead toxicity. On the other hand, chalcogenide perovskites are one of the most promising absorber materials for the photovoltaic market, due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, a compound combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides would be a promising candidate. Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique of this study comprises two stages: in the first stage, the binary compounds are deposited via a solution processing method. In the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions. The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Precursor sulfurization resulted either in binaries without perovskite formation or in distorted perovskite structures, consistent with literature reports that some of these materials are more favorable in the needle-like non-perovskite configuration. Lastly, the produced materials are evaluated in two categories: the first concerns the determination of the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation and impurities. In the second, optoelectronic properties such as band gap, conductivity and surface photovoltage are measured and compared to those of an ideal absorber layer.}, language = {en} } @phdthesis{Sietz2011, author = {Sietz, Diana}, title = {Dryland vulnerability : typical patterns and dynamics in support of vulnerability reduction efforts}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-58097}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. 
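A minimal sketch of such a cluster-based pattern identification, using k-means on synthetic, standardized vulnerability indicators (the indicator names follow the text, while the data and the preset number of seven clusters are purely illustrative), could look as follows:
\begin{verbatim}
# Sketch of cluster-based vulnerability patterns with k-means
# (synthetic indicator data; seven clusters chosen for illustration).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_units = 1500                                     # e.g. sub-national dryland units
X = np.column_stack([
    rng.beta(2, 5, n_units),     # poverty
    rng.beta(2, 2, n_units),     # water stress
    rng.beta(1.5, 4, n_units),   # soil degradation
    rng.beta(2, 3, n_units),     # natural agro-constraints
    rng.beta(1.2, 6, n_units),   # isolation
])

scaler = StandardScaler().fit(X)                   # indicators on a common scale
km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(scaler.transform(X))

# cluster centroids (back in original units) summarize each typical pattern
centroids = scaler.inverse_transform(km.cluster_centers_)
for i, centre in enumerate(centroids):
    print("pattern %d: %s" % (i, ", ".join("%.2f" % v for v in centre)))
\end{verbatim}
In practice, the number of clusters and their robustness are evaluated before the patterns are interpreted, validated and ranked.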
Due to their worldwide coverage, the global results allow the evaluation of a specific system's vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. The model resulted in a cyclic trajectory encompassing four states with differing degree of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as "Avan{\c{c}}a Brasil". Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. 
Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.}, language = {en} } @phdthesis{Siegmund2022, author = {Siegmund, Nicole}, title = {Wind driven soil particle uptake : Quantifying drivers of wind erosion across the particle size spectrum}, doi = {10.25932/publishup-57489}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-574897}, school = {Universit{\"a}t Potsdam}, pages = {ix, 56}, year = {2022}, abstract = {Among the multitude of geomorphological processes, aeolian shaping processes are of special character, because even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we unravel a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the lowermost atmosphere (4 m height) as well as measurements regarding aeolian particle movement across a wide range of particle size classes from 0.2 μm up to coarse sand. The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions. Among other considerations presented in this thesis, it could be shown in particular that, even though the small-scale topography does have a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter in particular show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. The aforementioned measure shows significantly increased particle concentrations during points in time defined as gust events. With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.}, language = {en} } @phdthesis{Sieg2018, author = {Sieg, Tobias}, title = {Reliability of flood damage estimations across spatial scales}, doi = {10.25932/publishup-42616}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-426161}, school = {Universit{\"a}t Potsdam}, pages = {XIII, 115}, year = {2018}, abstract = {Extreme natural events are an integral part of the Earth's natural system. They only become hazards when society is exposed to them. Then, however, natural hazards can have devastating consequences for society. 
Hydro-meteorological hazards in particular, such as river floods, heavy rainfall events, winter storms, hurricanes or tornadoes, have a high damage potential and occur around the globe. Along with an ever-warming world, extreme weather events that can potentially trigger natural hazards are also becoming more likely. However, not only a changing environment contributes to the increasing risk from natural hazards, but also a changing society. Appropriate risk management is therefore required to adapt society to these changes at every spatial scale. An essential component of this management is the estimation of the economic impacts of natural hazards. So far, however, reliable methods for estimating the impacts of hydro-meteorological hazards are lacking. A main component of this work is therefore the development and application of a new method that improves the reliability of damage estimation. As an example, the method was successfully applied to estimate the economic impacts of a river flood, from individual companies up to the effects on the entire economic system of Germany. Existing methods usually provide little information about the reliability of their estimates. Since this information facilitates decisions on risk adaptation, the reliability of the damage estimates is represented with the new method. The reliability refers not only to the damage estimate itself but also to the assumptions made about the affected buildings. Following this principle, the reliability of assumptions about the future can also be represented, which is an essential aspect for projections. The representation of reliability and the successful application demonstrate the potential of the method for analyses of present and future hydro-meteorological hazards.}, language = {en} } @phdthesis{Shaw2024, author = {Shaw, Vasundhara}, title = {Cosmic-ray transport and signatures in their local environment}, doi = {10.25932/publishup-62019}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-620198}, school = {Universit{\"a}t Potsdam}, pages = {143}, year = {2024}, abstract = {The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor. In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission in the Galactic halo. In gamma rays these bubbles have been termed Fermi bubbles with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS telescopes and the Planck satellite, and showed a clear spatial overlap. 
The X-ray counterparts of the Fermi bubbles were named eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large extended Galactic Halo Bubbles (GHB) and have stimulated interest in exploring the less explored Galactic halo. In this thesis, a new toy model (GHB model) for the magnetic field and non-thermal electron distribution in the Galactic halo has been proposed. The new toy model has been used to produce polarised synchrotron emission sky maps. Chi-square analysis was used to compare the synthetic skymaps with the Planck 30 GHz polarised skymaps. The obtained constraints on the strength and azimuthal height were found to be in agreement with the S-PASS radio observations. The upper, lower and best-fit values obtained from the above chi-squared analysis were used to generate three separate toy models. These three models were used to propagate ultra-high energy cosmic rays. This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival direction skymaps. The simulated arrival direction skymaps were found to be consistent with the hotspots of Centaurus A and NGC 253 as seen in the observed arrival direction skymaps provided by the Pierre Auger Observatory (PAO). The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The suppression in the dipole due to the varying diffusion coefficient from the simulations was noted. The results could also be compared with an analytical analogy of electrostatics. The simulations of the extragalactic dipole suppression were in agreement with similar studies carried out for galactic cosmic rays.}, language = {en} } @phdthesis{Senger2007, author = {Senger, Toralf}, title = {Untersuchungen zur Metallhom{\"o}ostase in Arabidopsis thaliana}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-13234}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {Alle Organismen sind f{\"u}r ihr {\"U}berleben auf Metalle angewiesen. Hierbei gibt es f{\"u}r jedes Metall einen Konzentrationsbereich, der das Optimum zwischen Metallmangel, -bedarf und -toxizit{\"a}t darstellt. Es gilt mittlerweile als erwiesen, dass alle Organismen zur Aufrechterhaltung des Metallgleichgewichts ein komplexes Netzwerk von Proteinen und niedermolekularen Verbindungen entwickelt haben. Die molekularen Komponenten dieses Netzwerks sind nur zu einem Teil bekannt und charakterisiert: In den letzten Jahren wurden einige Proteinfamilien identifiziert, deren Mitglieder Metalle durch Lipidmembranen transportieren. Eine dieser Metalltransporterfamilien ist die Cation Diffusion Facilitator (CDF)-Familie: Alle charakterisierten Mitglieder exportieren Metalle aus dem Zytoplasma - entweder in zellul{\"a}re Kompartimente oder aus der Zelle heraus. Von den zw{\"o}lf Mitgliedern dieser Familie in Arabidopsis thaliana (A. thaliana) - Metall Toleranz Protein (MTP)-1 bis -12 - wurden bisher AtMTP1 und AtMTP3 charakterisiert. In dieser Arbeit wird die Charakterisierung von AtMTP2 beschrieben. Wie die homologen Proteine AtMTP1 und AtMTP3 f{\"u}hrt AtMTP2 zu Zn-Toleranz, wenn es heterolog in Zn-sensitiven Hefemutanten exprimiert wird. Mit AtMTP2 transformierte Hefemutanten zeigten dar{\"u}ber hinaus erh{\"o}hte Co-Toleranz. 
Expression von chim{\"a}ren AtMTP2/GFP-Fusionsproteinen in Hefe, A. thaliana-Protoplasten und in stabil transformierten A. thaliana-Pflanzenlinien deutet auf eine Lokalisation von AtMTP2 in Membranen des Endoplasmatischen Retikulums (ER) hin, wenn GFP an den C-Terminus von MTP2 fusioniert wird. Eine Fusion von GFP an den N-Terminus von AtMTP2 f{\"u}hrte zu einer Lokalisation in der vakuol{\"a}ren Membran, was am wahrscheinlichsten auf Fehllokalisierung durch Maskierung eines ER-Retentionsmotivs (XXRR) am N-Terminus von AtMTP2 zur{\"u}ckgeht. Dies legt nahe, dass AtMTP2 die erw{\"a}hnten Metalle in das Endomembransystem der Zelle transportieren kann. Eine gewebespezifische Lokalisierungsanalyse wurde mit Pflanzen durchgef{\"u}hrt, die das β-Glucuronidase (GUS)-Reporterprotein bzw. chim{\"a}re Fusionsproteine aus EGFP und AtMTP2 unter Kontrolle des nativen pMTP2-Promotors exprimierten. Diese Experimente best{\"a}tigten zum einen, dass der pMTP2-Promotor nur unter Zn-Defizienz aktiv ist. GUS-Aktivit{\"a}t wurde unter diesen Bedingungen in zwei Zonen der Wurzelspitze beobachtet: in den isodiametrischen Zellen der meristematischen Zone und in der beginnenden Wurzelhaarzone. Dar{\"u}ber hinaus konnte gezeigt werden, dass die EGFP-Fusionsproteine unter Kontrolle des nativen pMTP2-Promotors nur in epidermalen Zellen exprimiert werden. F{\"u}r eine homozygote Knockout-Linie, mtp2-S3, konnte bisher kein eindeutiger Ph{\"a}notyp identifiziert werden. Auf Grundlage der bisher durchgef{\"u}hrten Charakterisierung von AtMTP2 erscheinen zwei Modelle der Funktion von AtMTP2 in der Pflanze m{\"o}glich: AtMTP2 k{\"o}nnte essentiell f{\"u}r die Versorgung des ER mit Zn unter Zn-Mangelbedingungen sein. Hierf{\"u}r spricht, dass AtMTP2 in jungen, teilungsaktiven und damit Zn-ben{\"o}tigenden Wurzelzonen exprimiert wird. Die auf die Epidermis beschr{\"a}nkte Lokalisation k{\"o}nnte bei diesem Modell auf die M{\"o}glichkeit der zwischenzellul{\"a}ren Zn-Verteilung innerhalb des ER {\"u}ber Desmotubuli hindeuten. Alternativ k{\"o}nnte AtMTP2 eine Funktion bei der Detoxifizierung von Zn unter Zn-Schock-Bedingungen haben: Es ist bekannt, dass unter Zn-Mangelbedingungen die Expression der zellul{\"a}ren Zn-Aufnahmesysteme hochreguliert wird. Wenn nun die Zn-Verf{\"u}gbarkeit im Boden z. B. durch eine pH-{\"A}nderung innerhalb kurzer Zeit stark ansteigt, besteht die Notwendigkeit der Entgiftung von Zn innerhalb der Zelle, bis der starke Einstrom von Zn ins Zytoplasma durch die Deaktivierung der Zn-Aufnahmesysteme und eine geringere Expression in der Pflanze gedrosselt ist. Ein {\"a}hnlicher Mechanismus wurde in der B{\"a}ckerhefe S. cerevisiae beschrieben, in der dar{\"u}ber hinaus ein Zn-Transporter verst{\"a}rkt exprimiert wird, der Zn durch Transport in die Vakuole entgiften kann. Es ist durchaus m{\"o}glich, dass in Arabidopsis AtMTP2 die Zn-Detoxifizierung unter diesen speziellen Bedingungen durch Zn-Transport in das ER oder die Vakuole vermittelt. Zur Identifikation weiterer Komponenten des Metallhom{\"o}ostasenetzwerks sind verschiedene Ans{\"a}tze denkbar. In dieser Arbeit wurde in Hefe ein heterologer Screen durchgef{\"u}hrt, um Interaktoren f{\"u}r vier Mitglieder der Arabidopsis-CDF-Familie zu identifizieren. Unter den 11 im Hefesystem best{\"a}tigten Kandidaten befindet sich mit AtSPL1 ein AtMTP1-Interaktionskandidat, der m{\"o}glicherweise eine Rolle bei der Cu-/Zn-Hom{\"o}ostase spielt. 
Als wahrscheinliche AtMTP3-Interaktionskandidaten wurden die c"-Untereinheit der vakuol{\"a}ren H+-ATPase AtVHA identifiziert sowie mit AtNPSN13 ein Protein, das vermutlich eine Rolle bei Fusionen von Vesikeln mit Zielmembranen spielt. Ein anderer Ansatz zur Identifikation neuer Metallhom{\"o}ostasegene ist die vergleichende Elementanalyse von nat{\"u}rlichen oder mutagenisierten Pflanzenpopulationen. Voraussetzung f{\"u}r diesen Ansatz ist die schnelle und genaue Analyse des Elementgehalts von Pflanzen. Eine etablierte Methode zur simultanen Bestimmung von bis zu 65 Elementen in einer Probe ist die Inductively Coupled Plasma Optical Emission Spectrometry (ICP OES). Der limitierende Faktor f{\"u}r einen hohen Probendurchsatz ist die Notwendigkeit, Proben f{\"u}r die Analyse zu verfl{\"u}ssigen. Eine alternative Methode der Probenzuf{\"u}hrung zum Analyseger{\"a}t ist die elektrothermale Verdampfung (ETV) der Probe. Zur weitgehend automatisierten Analyse von Pflanzenmaterial mit minimiertem Arbeitsaufwand wurde eine Methode entwickelt, die auf der Kopplung der ETV mit der ICP OES basiert.}, language = {de} } @phdthesis{Schuette2011, author = {Sch{\"u}tte, Moritz}, title = {Evolutionary fingerprints in genome-scale networks}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-57483}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {Mathematical modeling of biological phenomena has experienced increasing interest since new high-throughput technologies give access to growing amounts of molecular data. These modeling approaches are especially able to test hypotheses which are not yet experimentally accessible or to guide an experimental setup. One particular attempt investigates the evolutionary dynamics responsible for today's composition of organisms. Computer simulations either propose an evolutionary mechanism and thus reproduce a recent finding or rebuild an evolutionary process in order to learn about its mechanism. The quest for evolutionary fingerprints in metabolic and gene-coexpression networks is the central topic of this cumulative thesis based on four published articles. An understanding of the actual origin of life will probably remain an insoluble problem. However, one can argue that after a first simple metabolism had evolved, the further evolution of metabolism occurred in parallel with the evolution of the sequences of the catalyzing enzymes. Indications of such a coevolution can be found when correlating the change in sequence between two enzymes with their distance on the metabolic network, which is obtained from the KEGG database. We observe that there exists a small but significant correlation, primarily between nearest neighbors. This indicates that enzymes catalyzing subsequent reactions tend to be descended from the same precursor. Since this correlation is relatively small, one can at least assume that, even if new enzymes are not "genetic children" of the previous enzymes, they are certainly descended from one of the already existing ones. Following this hypothesis, we introduce a model of enzyme-pathway coevolution. By iteratively adding enzymes, this model explores the metabolic network in a manner similar to diffusion. With the implementation of a Gillespie-like algorithm we are able to introduce a tunable parameter that controls the weight of sequence similarity when choosing a new enzyme. Furthermore, this method also defines a time difference between successive evolutionary innovations in terms of a new enzyme. 
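The selection step of such a Gillespie-like exploration can be sketched in a few lines. The following Python fragment is only an illustration of the general scheme, not the implementation used in the thesis; the toy network, the similarity scores and the weighting exponent alpha are hypothetical placeholders.

import math
import random

# Toy metabolic network: each enzyme lists the enzymes catalyzing adjacent reactions.
NETWORK = {"E1": ["E2", "E3"], "E2": ["E1", "E4"], "E3": ["E1", "E4"],
           "E4": ["E2", "E3", "E5"], "E5": ["E4"]}

def similarity(a, b):
    """Placeholder for a sequence-similarity score in [0, 1)."""
    return random.Random(a + b).random()   # deterministic toy value per enzyme pair

def gillespie_step(acquired, alpha=1.0):
    """Choose the next enzyme and an exponential waiting time; candidates adjacent
    to the already acquired set are weighted by similarity**alpha."""
    weights = {}
    for e in acquired:
        for nb in NETWORK[e]:
            if nb not in acquired:
                weights[nb] = max(weights.get(nb, 0.0), similarity(e, nb) ** alpha)
    if not weights:
        return None, float("inf")
    total = sum(weights.values())
    waiting_time = -math.log(1.0 - random.random()) / total   # exponential waiting time
    r, acc = random.random() * total, 0.0
    for enzyme, w in weights.items():                          # roulette-wheel selection
        acc += w
        if acc >= r:
            break
    return enzyme, waiting_time

acquired, t = {"E1"}, 0.0
while True:
    enzyme, dt = gillespie_step(acquired, alpha=2.0)
    if enzyme is None:
        break
    t += dt
    acquired.add(enzyme)
    print(f"t = {t:.2f}: acquired {enzyme}")

A higher alpha makes the walk cling more strongly to sequence-similar neighbours, which corresponds to the tunable weight described above.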
Overall, these simulations generate putative time-courses of the evolutionary walk on the metabolic network. By a time-series analysis, we find that the acquisition of new enzymes appears in bursts, which are more pronounced when the influence of the sequence similarity is higher. This behavior strongly resembles punctuated equilibrium, which denotes the observation that new species also tend to appear in bursts rather than in a gradual manner. Thus, our model helps to establish a better understanding of punctuated equilibrium by giving a potential description at the molecular level. From the time-courses we also extract a tentative order of new enzymes, metabolites, and even organisms. The consistency of this order with previous findings provides evidence for the validity of our approach. While the sequence of a gene is directly subject to mutations, its expression profile might also change indirectly through evolutionary events in the cellular interplay. Gene coexpression data is readily accessible through microarray experiments and is commonly illustrated using coexpression networks, where genes are nodes and are linked once they show significant coexpression. Since the large number of genes makes an illustration of the entire coexpression network difficult, clustering helps to show the network at a meta-level. Various clustering techniques already exist. However, we introduce a novel one which maintains control of the cluster sizes and thus ensures proper visual inspection. An application of the method to Arabidopsis thaliana reveals that genes causing a severe phenotype often show a functional uniqueness in their network vicinity. This leads to 20 genes of so far unknown phenotype, which are, however, suggested to be essential for plant growth. Of these, six indeed provoke such a severe phenotype, as shown by mutant analysis. By an inspection of the degree distribution of the A.thaliana coexpression network, we identified two characteristics. The distribution deviates from the frequently observed power-law by a sharp truncation that follows an over-representation of highly connected nodes. For a better understanding, we developed an evolutionary model which mimics the growth of a coexpression network by gene duplication, which is subject to a strong selection criterion, and by slight mutational changes in the expression profile. Despite the simplicity of our assumptions, we can reproduce the observed properties in A.thaliana as well as in E.coli and S.cerevisiae. The over-representation of high-degree nodes could be identified with mutually well-connected genes of similar functional families: zinc fingers (PF00096), flagella, and ribosomes, respectively. In conclusion, these four manuscripts demonstrate the usefulness of mathematical models and statistical tools as a source of new biological insight. 
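A duplication-divergence growth rule of this kind can be illustrated with a short sketch. The Python snippet below is a generic toy version under assumed parameters (mutation strength, coexpression threshold, and a simple stand-in for the selection criterion); it does not reproduce the model from the thesis.

import random
import statistics

def pearson(x, y):
    """Pearson correlation of two equally long expression profiles."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

def grow_network(n_genes=200, n_conditions=20, sigma=0.1, threshold=0.8):
    """Duplicate a random gene, slightly perturb the copy's expression profile,
    and keep it only if it stays strongly coexpressed with its parent."""
    profiles = [[random.gauss(0.0, 1.0) for _ in range(n_conditions)]]
    while len(profiles) < n_genes:
        parent = random.choice(profiles)
        child = [v + random.gauss(0.0, sigma) for v in parent]   # mutational change
        if pearson(parent, child) >= threshold:                  # selection step
            profiles.append(child)
    # Link all gene pairs whose profiles are significantly coexpressed.
    edges = [(i, j) for i in range(len(profiles)) for j in range(i + 1, len(profiles))
             if pearson(profiles[i], profiles[j]) >= threshold]
    return profiles, edges

profiles, edges = grow_network()
degree = [0] * len(profiles)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
print("mean degree:", sum(degree) / len(degree))

The degree distribution of networks grown this way can then be compared against the truncated power-law behaviour described above.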
While the clustering approach for gene coexpression data leads to the phenotypic characterization of so far unknown genes and thus supports genome annotation, our model approaches offer explanations for observed properties of the coexpression network and furthermore substantiate punctuated equilibrium as an evolutionary process through a deeper understanding of an underlying molecular mechanism.}, language = {en} } @phdthesis{Schaefer2024, author = {Sch{\"a}fer, Marj{\"a}nn Helena}, title = {Untersuchungen zur Evolution der 15-Lipoxygenase (ALOX15) bei S{\"a}ugetieren und funktionelle Charakterisierung von Knock-in-M{\"a}usen mit humanisierter Reaktionsspezifit{\"a}t der 15-Lipoxygenase-2 (Alox15b)}, doi = {10.25932/publishup-62034}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-620340}, school = {Universit{\"a}t Potsdam}, pages = {XVII, 280}, year = {2024}, abstract = {Arachidons{\"a}urelipoxygenasen (ALOX-Isoformen) sind Lipid-peroxidierende Enzyme, die bei der Zelldifferenzierung und bei der Pathogenese verschiedener Erkrankungen bedeutsam sind. Im menschlichen Genom gibt es sechs funktionelle ALOX-Gene, die als Einzelkopiegene vorliegen. F{\"u}r jedes humane ALOX-Gen gibt es ein orthologes Mausgen. Obwohl sich die sechs humanen ALOX-Isoformen strukturell sehr {\"a}hnlich sind, unterscheiden sich ihre funktionellen Eigenschaften deutlich voneinander. In der vorliegenden Arbeit wurden vier unterschiedliche Fragestellungen zum Vorkommen, zur biologischen Rolle und zur Evolutionsabh{\"a}ngigkeit der enzymatischen Eigenschaften von S{\"a}ugetier-ALOX-Isoformen untersucht: 1) Spitzh{\"o}rnchen (Tupaiidae) sind evolution{\"a}r n{\"a}her mit dem Menschen verwandt als Nagetiere und wurden deshalb als Alternativmodelle f{\"u}r die Untersuchung menschlicher Erkrankungen vorgeschlagen. In dieser Arbeit wurde erstmals der Arachidons{\"a}urestoffwechsel von Spitzh{\"o}rnchen untersucht. Dabei wurde festgestellt, dass im Genom von Tupaia belangeri vier unterschiedliche ALOX15-Gene vorkommen und die Enzyme sich hinsichtlich ihrer katalytischen Eigenschaften {\"a}hneln. Diese genomische Vielfalt, die weder beim Menschen noch bei M{\"a}usen vorhanden ist, erschwert die funktionellen Untersuchungen zur biologischen Rolle des ALOX15-Weges. Damit scheint Tupaia belangeri kein geeigneteres Tiermodell f{\"u}r die Untersuchung des ALOX15-Weges des Menschen zu sein. 2) Entsprechend der Evolutionshypothese k{\"o}nnen S{\"a}ugetier-ALOX15-Orthologe in Arachidons{\"a}ure-12-lipoxygenierende und Arachidons{\"a}ure-15-lipoxygenierende Enzyme eingeteilt werden. Dabei exprimieren S{\"a}ugetierspezies, die einen h{\"o}heren Evolutionsgrad als Gibbons aufweisen, Arachidons{\"a}ure-15-lipoxygenierende ALOX15-Orthologe, w{\"a}hrend evolution{\"a}r weniger weit entwickelte S{\"a}ugetiere Arachidons{\"a}ure-12-lipoxygenierende Enzyme besitzen. In dieser Arbeit wurden elf neue ALOX15-Orthologe als rekombinante Proteine exprimiert und funktionell charakterisiert. Die erhaltenen Ergebnisse f{\"u}gen sich widerspruchsfrei in die Evolutionshypothese ein und verbreitern deren experimentelle Basis. Die experimentellen Daten best{\"a}tigen auch das Triadenkonzept. 3) Da humane und murine ALOX15B-Orthologe unterschiedliche funktionelle Eigenschaften aufweisen, k{\"o}nnen Ergebnisse aus murinen Krankheitsmodellen zur biologischen Rolle der ALOX15B nicht direkt auf den Menschen {\"u}bertragen werden. 
Um die ALOX15B-Orthologen von Maus und Mensch funktionell einander anzugleichen, wurden im Rahmen der vorliegenden Arbeit Knock-in-M{\"a}use durch In-vivo-Mutagenese mittels CRISPR/Cas9-Technik hergestellt. Diese exprimieren eine humanisierte Mutante (Doppelmutation von Tyrosin603Asparagins{\"a}ure+Histidin604Valin) der murinen Alox15b. Diese M{\"a}use waren lebens- und fortpflanzungsf{\"a}hig, zeigten aber im Rahmen ihrer Individualentwicklung geschlechtsspezifische Unterschiede zu ausgekreuzten Wildtyp-Kontrolltieren. 4) In vorhergehenden Untersuchungen zur Rolle der ALOX15B im Rahmen der Entz{\"u}ndungsreaktion wurde eine antiinflammatorische Wirkung des Enzyms postuliert. In der vorliegenden Arbeit wurde untersucht, ob eine Humanisierung der murinen Alox15b die Entz{\"u}ndungsreaktion in zwei verschiedenen murinen Entz{\"u}ndungsmodellen beeinflusst. Eine Humanisierung der murinen Alox15b f{\"u}hrte zu einer verst{\"a}rkten Ausbildung von Entz{\"u}ndungssymptomen im induzierten Dextran-Natrium-Sulfat-Kolitismodell. Im Gegensatz dazu bewirkte die Humanisierung der Alox15b eine Abschw{\"a}chung der Entz{\"u}ndungssymptome im Pfoten{\"o}demmodell mit Freund'schem Adjuvans. Diese Daten deuten darauf hin, dass sich die Rolle der ALOX15B in verschiedenen Entz{\"u}ndungsmodellen unterscheidet.}, language = {de} } @phdthesis{Schutjajew2021, author = {Schutjajew, Konstantin}, title = {Electrochemical sodium storage in non-graphitizing carbons - insights into mechanisms and synthetic approaches towards high-energy density materials}, doi = {10.25932/publishup-54189}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-541894}, school = {Universit{\"a}t Potsdam}, pages = {v, 148}, year = {2021}, abstract = {To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and to switch to renewable sources. However, their temporal availability does not match societal consumption needs, meaning that renewably generated energy must be stored during its main generation times and made available during peak consumption periods. Electrochemical energy storage (EES) in general is well suited for this task due to its infrastructural independence and scalability. The lithium ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological occurrence of minerals and ores vital for many cell components, and hence the high and fluctuating costs, will decelerate its further distribution. The sodium ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry are similar in the two systems. Yet, the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC), is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores. The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. 
At defects and edges, an adsorption-like mechanism of sodium storage is prevalent, causing a sloping voltage curve that is ill-suited for the practical application in SIBs, whereas a constant voltage plateau of relatively high capacity is found immediately after the sloping region, which recent research has attributed to the deposition of quasimetallic sodium into the closed pores of HC. Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pores, is abundant, but research into the influence of the pore geometry and the chemical nature of the HC on the low-potential sodium deposition is still at an early stage. Therefore, the scope of this thesis is to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure are prepared, in clear distinction to commonly obtained ones, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy in combination with distribution of relaxation times analysis is further established as a technique to study the sodium storage process, in addition to classical direct current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism, based on the recorded data. The obtained knowledge is used to develop a method for the preparation of closed porous and non-porous materials from open porous ones, proving not only the necessity of closed pores for efficient sodium storage, but also providing a method for effective pore closure and hence for increasing the sodium storage capacity and efficiency of carbon materials. The insights obtained and methods developed within this work hence not only contribute to a better understanding of the sodium storage mechanism in carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.}, language = {en} } @phdthesis{Schulze2017, author = {Schulze, Nicole}, title = {Neue Templatphasen zur anisotropen Goldnanopartikelherstellung durch den Einsatz strukturbildender Polymere}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-409515}, school = {Universit{\"a}t Potsdam}, pages = {VI, 117, xv}, year = {2017}, abstract = {Ziel der vorliegenden Arbeit war die Synthese und Charakterisierung von anisotropen Goldnanopartikeln in einer geeigneten Polyelektrolyt-modifizierten Templatphase. Den Mittelpunkt bildet dabei die Auswahl einer geeigneten Templatphase zur Synthese von einheitlichen und reproduzierbaren anisotropen Goldnanopartikeln mit den daraus resultierenden besonderen Eigenschaften. Bei der Synthese der anisotropen Goldnanopartikel lag der Fokus auf der Verwendung von Vesikeln als Templatphase, wobei hier der Einfluss unterschiedlicher strukturbildender Polymere (stark alternierende Maleamid-Copolymere PalH, PalPh, PalPhCarb und PalPhBisCarb mit verschiedener Konformation) und Tenside (SDS, AOT - anionische Tenside) bei verschiedenen Synthese- und Abtrennungsbedingungen untersucht werden sollte. Im ersten Teil der Arbeit konnte gezeigt werden, dass PalPhBisCarb bei einem pH-Wert von 9 die Bedingungen eines R{\"o}hrenbildners f{\"u}r eine morphologische Transformation von einer vesikul{\"a}ren Phase in eine r{\"o}hrenf{\"o}rmige Netzwerkstruktur erf{\"u}llt und somit als Templatphase zur formgesteuerten Bildung von Nanopartikeln genutzt werden kann. 
Im zweiten Teil der Arbeit wurde dargelegt, dass die Templatphase PalPhBisCarb (pH-Wert von 9, Konzentration von 0,01 wt.\%) mit AOT als Tensid und PL90G als Phospholipid (im Verh{\"a}ltnis 1:1) die effektivste Wahl einer Templatphase f{\"u}r die Bildung von anisotropen Strukturen in einem einstufigen Prozess darstellt. Bei einer konstanten Synthesetemperatur von 45 °C wurden die besten Ergebnisse bei einer Goldchloridkonzentration von 2 mM, einem Gold-Templat-Verh{\"a}ltnis von 3:1 und einer Synthesezeit von 30 Minuten erzielt. Die Ausbeute an anisotropen Strukturen lag bei 52 \% (Anteil an dreieckigen Nanopl{\"a}ttchen von 19 \%). Durch Erh{\"o}hung der Synthesetemperatur konnte die Ausbeute auf 56 \% (29 \%) erh{\"o}ht werden. Im dritten Teil konnte durch zeitabh{\"a}ngige Untersuchungen gezeigt werden, dass bei Vorhandensein von PalPhBisCarb die Bildung der energetisch nicht bevorzugten Pl{\"a}ttchen-Strukturen bei Raumtemperatur initiiert wird und bei 45 °C ein Optimum annimmt. Kinetische Untersuchungen haben gezeigt, dass die Bildung dreieckiger Nanopl{\"a}ttchen bei schrittweiser Zugabe der Goldchlorid-Pr{\"a}kursorl{\"o}sung zur PalPhBisCarb-enthaltenden Templatphase durch die Dosierrate der vesikul{\"a}ren Templatphase gesteuert werden kann. In umgekehrter Weise findet bei Zugabe der Templatphase zur Goldchlorid-Pr{\"a}kursorl{\"o}sung bei 45 °C ein {\"a}hnlicher, kinetisch gesteuerter Prozess der Bildung von Nanodreiecken statt, mit einer maximalen Ausbeute an dreieckigen Nanopl{\"a}ttchen von 29 \%. Im letzten Kapitel erfolgten erste Versuche zur Abtrennung dreieckiger Nanopl{\"a}ttchen von den {\"u}brigen Geometrien der gemischten Nanopartikell{\"o}sung mittels tensidinduzierter Verarmungsf{\"a}llung. Bei Verwendung von AOT mit einer Konzentration von 0,015 M wurde eine Ausbeute an Nanopl{\"a}ttchen von 99 \% erreicht, wovon 72 \% eine dreieckige Geometrie hatten.}, language = {de} } @phdthesis{SchulteOsseili2019, author = {Schulte-Osseili, Christine}, title = {Vom Monomer zum Glykopolymer}, doi = {10.25932/publishup-43216}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-432169}, school = {Universit{\"a}t Potsdam}, pages = {xiii, 149}, year = {2019}, abstract = {Glykopolymere sind synthetische und nat{\"u}rlich vorkommende Polymere, die eine Glykaneinheit in der Seitenkette des Polymers tragen. Glykane sind durch die Glykan-Protein-Wechselwirkung verantwortlich f{\"u}r viele biologische Prozesse. Die Beteiligung der Glykane an diesen biologischen Prozessen erm{\"o}glicht das Imitieren und Analysieren der Wechselwirkungen durch geeignete Modellverbindungen, z.B. die Glykopolymere. Dieses System der Glykan-Protein-Wechselwirkung soll durch die Glykopolymere untersucht und studiert werden, um die spezifische und selektive Bindung der Proteine an die Glykopolymere nachzuweisen. Die Proteine, die in der Lage sind, Kohlenhydratstrukturen selektiv zu binden, werden Lektine genannt. In dieser Dissertationsarbeit wurden verschiedene Glykopolymere synthetisiert. Dabei sollte auf einen effizienten und kosteng{\"u}nstigen Syntheseweg geachtet werden. Verschiedene Glykopolymere wurden durch funktionalisierte Monomere mit verschiedenen Zuckern, wie z.B. Mannose, Laktose, Galaktose oder N-Acetyl-Glukosamin als funktionelle Gruppe, hergestellt. Aus diesen funktionalisierten Glykomonomeren wurden {\"u}ber ATRP und RAFT-Polymerisation Glykopolymere synthetisiert. 
Die erhaltenen Glykopolymere wurden als hydrophiler Block in Diblockcopolymeren eingesetzt und ihre Selbstassemblierung in w{\"a}ssriger L{\"o}sung untersucht. Die Polymere formten in w{\"a}ssriger L{\"o}sung Mizellen, bei denen der Zuckerblock an der Oberfl{\"a}che der Mizellen sitzt. Die Mizellen wurden mit einem hydrophoben Fluoreszenzfarbstoff beladen, wodurch die CMC der Mizellenbildung bestimmt werden konnte. Außerdem wurden die Glykopolymere als Oberfl{\"a}chenbeschichtung {\"u}ber „Grafting from" mit SI-ATRP oder {\"u}ber „Grafting to" auf verschiedene Oberfl{\"a}chen gebunden. Durch die glykopolymerbeschichteten Oberfl{\"a}chen konnte die Glykan-Protein-Wechselwirkung {\"u}ber spektroskopische Messmethoden, wie SPR- und Mikroring-Resonatoren, untersucht werden. Hierbei wurde die spezifische und selektive Bindung der Lektine an die Glykopolymere nachgewiesen und die Bindungsst{\"a}rke untersucht. Die synthetisierten Glykopolymere k{\"o}nnten durch Austausch der Glykaneinheit f{\"u}r andere Lektine adressierbar werden und damit ein weites Feld an anderen Proteinen erschließen. Die biovertr{\"a}glichen Glykopolymere w{\"a}ren Alternativen f{\"u}r den Einsatz in biologischen Prozessen als Transporter von Medikamenten oder Farbstoffen in den K{\"o}rper. Außerdem k{\"o}nnten die funktionalisierten Oberfl{\"a}chen in der Diagnostik zum Erkennen von Lektinen eingesetzt werden. Die Glykane, die keine selektive und spezifische Bindung zu Proteinen eingehen, k{\"o}nnten als antiadsorptive Oberfl{\"a}chenbeschichtung z.B. in der Zellbiologie eingesetzt werden.}, language = {de} } @phdthesis{Schroeder2007, author = {Schr{\"o}der, Birgit Eva}, title = {Spatial and temporal dynamics of the terrestrial carbon cycle : assimilation of two decades of optical satellite data into a process-based global vegetation model}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-17596}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {This PhD thesis presents the spatio-temporal distribution of terrestrial carbon fluxes for the time period of 1982 to 2002, simulated by a combination of the process-based dynamic global vegetation model LPJ and a 21-year time series of global AVHRR-fPAR data (fPAR - fraction of photosynthetically active radiation). Assimilation of the satellite data into the model allows improved simulations of carbon fluxes on global as well as on regional scales. As it is based on observed data and includes agricultural regions, the model combined with satellite data produces more realistic carbon fluxes of net primary production (NPP), soil respiration, carbon released by fire and the net land-atmosphere flux than the potential vegetation model. It also produces a good fit to the interannual variability of the CO2 growth rate. Compared to the original model, the model with the satellite-data constraint produces generally smaller carbon fluxes than the purely climate-based stand-alone simulation of potential natural vegetation, now comparing better to literature estimates. The lower net fluxes are a result of a combination of several effects: reduction in vegetation cover, consideration of human influence and agricultural areas, an improved seasonality, and changes in vegetation distribution and species composition. This study presents a way to assess terrestrial carbon fluxes and elucidates the processes contributing to interannual variability of the terrestrial carbon exchange. 
Process-based terrestrial modelling and satellite-observed vegetation data are successfully combined to improve estimates of vegetation carbon fluxes and stocks. As net ecosystem exchange is the most interesting and most sensitive factor in carbon cycle modelling, and highly uncertain, the presented results make a complementary contribution to the current knowledge, supporting the understanding of the terrestrial carbon budget.}, language = {en} } @phdthesis{Scholz2012, author = {Scholz, Markus Reiner}, title = {Spin polarization, circular dichroism, and robustness of topological surface states}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-96686}, school = {Universit{\"a}t Potsdam}, pages = {153}, year = {2012}, abstract = {Dreidimensionale topologische Isolatoren sind ein neues Materialsystem, welches dadurch charakterisiert ist, dass es in seinem Inneren isolierend, an der Oberfl{\"a}che jedoch leitend ist. Urs{\"a}chlich f{\"u}r die Leitf{\"a}higkeit an der Oberfl{\"a}che sind sogenannte topologische Oberfl{\"a}chenzust{\"a}nde, welche das Valenzband des Inneren mit dem Leitungsband des Inneren verbinden. An der Oberfl{\"a}che ist also die Bandl{\"u}cke, welche die isolierende Eigenschaft verursacht, geschlossen. Die vorliegende Arbeit untersucht diese Oberfl{\"a}chenzust{\"a}nde mittels spin- und winkelaufgel{\"o}ster Photoemissionsspektroskopie. Es wird gezeigt, dass in den Materialien Bi2Se3 und Bi2Te3, in {\"U}bereinstimmung mit der Literatur, die entscheidenden Charakteristika eines topologischen Oberfl{\"a}chenzustands vorzufinden sind: Die Oberfl{\"a}chenzust{\"a}nde dieser Systeme durchqueren die Bandl{\"u}cke in ungerader Anzahl, sie sind nicht entartet und weisen folgerichtig eine hohe Spinpolarisation auf. Weiterhin wird durch Aufdampfen diverser Adsorbate gezeigt, dass die Oberfl{\"a}chenzust{\"a}nde von Bi2Se3 und Bi2Te3, wie erwartet, extrem robust sind. Oberfl{\"a}chenzust{\"a}nde topologisch trivialer Systeme erf{\"u}llen diese Eigenschaft nicht; bereits kleine Verunreinigungen k{\"o}nnen diese Zust{\"a}nde zerst{\"o}ren bzw. die Oberfl{\"a}che isolierend machen. Die topologischen Oberfl{\"a}chenzust{\"a}nde k{\"o}nnen in der vorliegenden Arbeit noch bis zur Detektionsgrenze der experimentellen Messmethode nachgewiesen werden, und die Oberfl{\"a}che bleibt leitf{\"a}hig. Unter den Adsorbaten befindet sich auch Eisen, ein bekanntermaßen magnetisches Material. Eine der Grundvoraussetzungen f{\"u}r topologische Isolatoren ist die Zeitumkehrsymmetrie, die Elektronen, welche den topologischen Oberfl{\"a}chenzustand besetzen, vorschreibt, dass sie eine bestimmte Spinrichtung haben m{\"u}ssen, wenn sie sich beispielsweise nach links bewegen, und den entgegengesetzten Spin, wenn sie sich nach rechts bewegen. In magnetischen Materialien ist die Zeitumkehrsymmetrie jedoch explizit gebrochen, und die gezeigte Robustheit des Oberfl{\"a}chenzustands gegen magnetische Materialien ist daher unerwartet. Die Zeitumkehrsymmetrie sorgt auch daf{\"u}r, dass eine Streuung der Elektronen um 180°, beispielsweise an einem Gitterdefekt oder an einem Phonon, strikt verboten ist. Bei einem solchen Streuprozess bleibt die Spinrichtung erhalten; da aber in der Gegenrichtung nur Zust{\"a}nde mit entgegengesetztem Spin vorhanden sind, kann das Elektron nicht in diese Richtung gestreut werden. Dieses Prinzip wird anhand der Lebensdauer der durch Photoemission angeregten Zust{\"a}nde untersucht. 
Hierbei wird gezeigt, dass die Kopplung der Elektronen des Oberfl{\"a}chenzustands von Bi2Te3 an Phononen unerwartet hoch ist und dass sich eine Anisotropie in der Bandstruktur desselben auch in den Lebensdauern der angeregten Zust{\"a}nde widerspiegelt. Weiterhin wird gezeigt, dass sich die Einfl{\"u}sse von magnetischen und nicht-magnetischen Verunreinigungen auf die Lebensdauern stark voneinander unterscheiden. Im letzten Teil der vorliegenden Arbeit wird untersucht, ob eine Asymmetrie in der Intensit{\"a}tsverteilung der winkelaufgel{\"o}sten Photoemissionsspektren, bei Anregung mit zirkular polarisiertem Licht, in Bi2Te3 R{\"u}ckschl{\"u}sse auf die Spinpolarisation der Elektronen erlaubt. Bei Variation der Energie des eingestrahlten Lichts wird ein Vorzeichenwechsel der Asymmetrie beobachtet. Daraus l{\"a}sst sich schlussfolgern, dass die Asymmetrie keine R{\"u}ckschl{\"u}sse auf die Spinpolarisation erlaubt.}, language = {en} } @phdthesis{Schneider2023, author = {Schneider, Helen}, title = {Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials}, doi = {10.25932/publishup-61302}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-613024}, school = {Universit{\"a}t Potsdam}, pages = {137}, year = {2023}, abstract = {In the last several decades, eutectic mixtures of different compositions have been successfully used as solvents for a vast number of chemical processes, and only relatively recently were they discovered to be widespread in nature. As such, they are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from such sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, which means that the mixture may assume the role of the solvent as well as that of the reactant itself. The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. The use of REM allows the performance of a number of solvent-free reactions, which entails the benefits of a superior atom and energy economy, higher yields and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of yet undiscovered fluorophores shows that synthesis in REM can open up unexpected reaction pathways. Another focus of the thesis is the study of water as a third component in the REM. As a result, the concept of two different dilution regimes (tertiary REM and REM in solvent) appears useful for understanding the influence of water. It is shown that small amounts of water can be of great benefit for the reaction, by reducing viscosity and at the same time increasing reaction yields. REM based on ammonium formate and organic acids are employed for lignocellulosic biomass treatment. 
The thesis thereby introduces an alternative approach towards lignocellulosic biomass fractionation that promises a considerable process intensification by the simultaneous generation of cellulose and lignin as well as the production of value-added chemicals from REM components. The thesis investigates the generated cellulose and the pathway to nanocellulose generation and also includes the structural analysis of extracted lignin. Finally, the thesis investigates the potential of microwave heating to run chemical reactions in REM and describes the synergy between these two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often described in the scope of green chemistry. The thesis will therefore also contain a closer inspection of this terminology and its greater goal of sustainability.}, language = {en} } @phdthesis{Schmitz2023, author = {Schmitz, Se{\´a}n}, title = {Using low-cost sensors to gather high resolution measurements of air quality in urban environments and inform mobility policy}, doi = {10.25932/publishup-60105}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-601053}, school = {Universit{\"a}t Potsdam}, pages = {180}, year = {2023}, abstract = {Air pollution has been a persistent global problem in the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of their recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality. In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. 
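As an illustration of how these seven calibration steps could be strung together in practice, the following Python sketch pairs raw low-cost sensor readings with co-located reference data and fits a cross-validated model; the file name, column names, model family and thresholds are hypothetical and do not reproduce the open-source code published with the thesis.

import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# 1-3) Load co-located sensor/reference data, clean it, and flag implausible values.
df = pd.read_csv("colocation_no2.csv", parse_dates=["timestamp"])   # hypothetical file
df = df.dropna(subset=["sensor_no2", "ref_no2", "temperature", "rel_humidity"])
df = df[(df["sensor_no2"] > 0) & (df["sensor_no2"] < 500)]          # remove flagged outliers

X = df[["sensor_no2", "temperature", "rel_humidity"]]
y = df["ref_no2"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)

# 4) Model selection and tuning via cross-validation.
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [5, 10, None]},
                      cv=5, scoring="neg_mean_absolute_error")
search.fit(X_train, y_train)

# 5) Validation on held-out data.
pred = search.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))

# 6-7) Export final predictions with a simple residual-based uncertainty band.
resid_sd = np.std(y_test - pred)
out = pd.DataFrame({"timestamp": df.loc[X_test.index, "timestamp"],
                    "no2_calibrated": pred,
                    "lower": pred - 1.96 * resid_sd,
                    "upper": pred + 1.96 * resid_sd})
out.to_csv("no2_calibrated.csv", index=False)

Reporting the tuned hyperparameters, the validation metrics and the exported uncertainty band corresponds to steps 4 to 7 of the methodology outlined above.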
By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations. With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments. To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on B{\"o}ckhstrasse, and the last was focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resultant from these policies. Results from the Kottbusser Damm experiment showed that the bike-lane reduced NO2 concentrations that cyclists were exposed to by 22 ± 19\%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results. As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by a LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decisionmakers. 
It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.}, language = {en} } @phdthesis{Schifferle2024, author = {Schifferle, Lukas}, title = {Optical properties of (Mg,Fe)O at high pressure}, doi = {10.25932/publishup-62216}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-622166}, school = {Universit{\"a}t Potsdam}, pages = {XIV, 90}, year = {2024}, abstract = {Large parts of the Earth's interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth's physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focusses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), forming a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both high- and low-spin states are present, remain poorly constrained. In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding, the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4\% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe-Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23\% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44-100 GPa). We show that the electrical band gap of ferropericlase significantly widens up to ~4.7 eV in the mixed spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed spin region. 
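The qualitative link between a decreasing index dispersion and a widening band gap can be illustrated with a single-effective-oscillator description. The abstract does not state which dispersion model was used, so the Wemple-DiDomenico form below is only an illustrative assumption, with the oscillator energy E_0 acting as a gap-related energy scale and E_d as the dispersion energy:

\begin{equation}
n^{2}(E) - 1 = \frac{E_{d}\,E_{0}}{E_{0}^{2}-E^{2}},
\qquad
\frac{\mathrm{d}n}{\mathrm{d}E} = \frac{E_{d}\,E_{0}\,E}{n\left(E_{0}^{2}-E^{2}\right)^{2}},
\end{equation}

so that, at fixed E_d and photon energy E, a larger E_0 (wider gap) yields a smaller dn/dE, consistent with the reported decrease of the index dispersion as the band gap widens under pressure.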
From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend to a more ionic nature of the Fe-O bond is observed, indicating a bond weakening effect of the spin transition. We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.}, language = {en} } @phdthesis{Schemenz2022, author = {Schemenz, Victoria}, title = {Correlations between osteocyte lacuno-canalicular network and material characteristics in bone adaptation and regeneration}, doi = {10.25932/publishup-55959}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-559593}, school = {Universit{\"a}t Potsdam}, pages = {3, xii, 146}, year = {2022}, abstract = {The complex hierarchical structure of bone undergoes a lifelong remodeling process, where it adapts to mechanical needs. Hereby, bone resorption by osteoclasts and bone formation by osteoblasts have to be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and other bone cells via cell processes through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis by direct interactions with the surrounding matrix. If the LCN is acting as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes are actively changing their material environment. Characterization methods of material science are used to achieve the aim of detecting traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, where a bone of the mouse was loaded during its life time; 2) a healing model, where a bone of the mouse was cut to induce a healing response; and 3) a disease model, where the Fbn1 gene is dysfunctional causing defects in the formation of the extracellular tissue. The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles. First, tibiae cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. 
The surrounding, more aligned bone showed ordered and dense LCN as well as thicker and longer mineral particles. The calcium content was unchanged between the two regions. In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. Here, the specific research question addressed was: how do bone material characteristics change at (re)modeling sites? The new bone formed in response to mechanical stimulation showed similar properties in terms of the mineral particles to the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clear, recognizable border between mature and newly formed bone. Nevertheless, some canaliculi went through this border, connecting the LCN of mature and newly formed bone. Additionally, the question of whether the LCN topology and the bone matrix material properties adapt to loading should be answered. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between the local LCN density and the local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir. For the healing model, the femurs of mice underwent an osteotomy, were stabilized with an external fixator, and were allowed to heal for 21 days. Thus, the spatial variations in the LCN topology and mineral properties within different tissue types and their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. Calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone callus showed thinner mineral particles, but a higher degree of alignment in both mineral particles and the LCN. The cortex showed the highest values for mineral length, thickness and degree of orientation. At the same time, the lacunae number density was 34\% lower and the lacunar volume 40\% smaller compared to bony callus. The transition zone between cortical and callus regions showed a continuous convergence of bone mineral properties and lacunae shape. Although only a few canaliculi connected callus and the cortical region, this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types may suggest that osteocytes have an active role in mineralization processes during healing. A mouse model for the disease Marfan syndrome, which involves a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. 
Thus, fibrillin-1 seems to play a role in skeletal homeostasis. Therefore, the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome showed longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, the cortical development appeared delayed, which was observed across all measured characteristics, i.e. lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles. In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. While the exact mechanism is not known, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and the extracellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore not only be understood as a method for removing defects or adapting to external mechanical stimuli, but also as a way of increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.}, language = {en} } @phdthesis{Schauer2006, author = {Schauer, Nicolas}, title = {Quantitative trait loci (QTL) for metabolite accumulation and metabolic regulation : metabolite profiling of interspecific crosses of tomato}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-7643}, school = {Universit{\"a}t Potsdam}, year = {2006}, abstract = {The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms utilizing such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants have led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies, such as marker-assisted selection (MAS), are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have mostly focused on the introduction of yield- or resistance-related traits to date. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites of a given biological sample. 
Here, a GC/MS metabolite profiling approach has been applied to investigate natural variation of tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to establish a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in the chemical composition, especially of amino and organic acids, as well as of the sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (IL) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases/decreases not only of single metabolites but also of entire metabolic pathways, thus highlighting the potential of this approach in uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolic profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.}, subject = {Tomate}, language = {en} } @phdthesis{SarnesNitu2018, author = {Sarnes-Nitu, Juliane}, title = {Mit der Schuldenbremse zu nachhaltigen Staatsfinanzen?}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-413804}, school = {Universit{\"a}t Potsdam}, pages = {294}, year = {2018}, abstract = {The core question of this paper is: Does the debt brake secure fiscal sustainability in Germany? To answer this question, we will first examine the effects of the introduction of the debt brake on the German federal states in the period 2010-16. For this purpose, the observed consolidation performance and the consolidation incentive or pressure experienced by the federal states were evaluated with the help of a scorecard specifically developed for this purpose. Multiple regression analysis was used to analyze how the scorecard factors affect the consolidation performance of the federal states. The analysis found that nearly 90\% of the variation was explained by the independent variables budgetary position, debt burden, revenue growth and pension burden. Thus, the debt brake likely played a subordinate role in the 2009-2016 consolidation episode. Subsequently, the data collected in 65 expert interviews was used to analyze the limits of the new fiscal rule, and to determine which potential risks could hinder or prevent the debt brake in the future: municipal debt, FEUs, contingent liabilities in the form of guarantees for financial institutions, and pension obligations. The frequently expressed criticism that the debt brake impedes economic growth and public investments is also reviewed and rejected. Finally, we discuss potential future developments regarding the debt brake and the German public administration as well as future consolidation efforts of the L{\"a}nder.}, language = {de} } @phdthesis{Roezer2018, author = {R{\"o}zer, Viktor}, title = {Pluvial flood loss to private households}, doi = {10.25932/publishup-42991}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-429910}, school = {Universit{\"a}t Potsdam}, pages = {XXII, 109}, year = {2018}, abstract = {Today, more than half of the world's population lives in urban areas. 
With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe. To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas is almost exclusively focused on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces compounded with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing. Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone- and web-surveys following pluvial flood events in Germany and the Netherlands. Pluvial flood losses to households are the result of complex interactions between impact characteristics such as the water depth and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that the awareness around pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominately implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk. While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response. 
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominately explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables. Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor of building structure loss and models incorporating additional variables from the preceding variable detection routine. The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate. Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level are not only determined by the use of additional predictors as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mixture of a beta and a Bernoulli distribution to account for households that are able to prevent losses to their building's structure was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions. The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment. In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates at high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high. Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons: First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated. This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.}, language = {en} } @phdthesis{Rust2007, author = {Rust, Henning}, title = {Detection of long-range dependence : applications in climatology and hydrology}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-13347}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortage, or the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically, one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detecting such dependence is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e. comparing the most suitable long-range dependent and the most suitable short-range dependent model. Approaching the task this way requires a) a suitable class of long-range and short-range dependent models along with suitable means for parameter estimation and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood-ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration.
For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to be dependent on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of the measures needed to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis. A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence.}, language = {en} } @phdthesis{Ruch2010, author = {Ruch, Jo{\"e}l}, title = {Volcano deformation analysis in the Lazufre area (central Andes) using geodetic and geological observations}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-47361}, school = {Universit{\"a}t Potsdam}, year = {2010}, abstract = {Large-scale volcanic deformation recently detected by radar interferometry (InSAR) provides new information and thus new scientific challenges for understanding volcano-tectonic activity and magmatic systems. The destabilization of such a system at depth noticeably affects the surrounding environment through magma injection, ground displacement and volcanic eruptions. To determine the spatiotemporal evolution of the Lazufre volcanic area located in the central Andes, we combined short-term ground displacement acquired by InSAR with long-term geological observations. Ground displacement was first detected using InSAR in 1997. By 2008, this displacement affected 1800 km2 of the surface, an area comparable in size to the deformation observed at caldera systems. The original displacement was followed in 2000 by a second, small-scale, neighbouring deformation located on the Lastarria volcano. We performed a detailed analysis of the volcanic structures at Lazufre and found relationships with the volcano deformations observed with InSAR. We infer that these observations are both likely to be the surface expression of a long-lived magmatic system evolving at depth.
It is not yet clear whether Lazufre may trigger larger unrest or volcanic eruptions; however, the second deformation detected at Lastarria and the clear increase of the large-scale deformation rate make this an area of particular interest for closer continuous monitoring.}, language = {en} } @phdthesis{RodriguezPiceda2022, author = {Rodriguez Piceda, Constanza}, title = {Thermomechanical state of the southern Central Andes}, doi = {10.25932/publishup-54927}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-549275}, school = {Universit{\"a}t Potsdam}, pages = {xx, 228}, year = {2022}, abstract = {The Andes are a ~7000 km long N-S trending mountain range developed along the South American western continental margin. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for a non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the oceanic plate changes the subduction angle between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across-strike variations of the tectonic deformation patterns. These include a systematic decrease of topographic elevation, of crustal shortening and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned, recorded along and across the strike of the subduction zone. Moreover, the SCA are a very seismically active region. The continental plate is characterized by a relatively shallow seismicity (< 30 km depth) which is mainly focussed at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interiors of the northern foreland. Additionally, frequent seismicity is also recorded in the shallow parts of the oceanic plate and in a sector of the flat slab segment between 31°S and 33°S. The observed spatial heterogeneity in tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures and changes in the subduction angle of the oceanic slab. However, no study has yet investigated the relationship between the long-term rheological configuration of the SCA and the spatial deformation patterns. Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle on the rheological state of the lithosphere have not yet been thoroughly investigated. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, by using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated by temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using as input the previously-generated models.
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, the largest temperature variations occur where the descending slab is detected, which implies that the deep thermal field is mainly affected by the slab dip geometry. The outcomes of this thesis suggest that the long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments at the subduction interface and dehydration reactions in the slab. To that end, I carried out a qualitative analysis of the state of hydration in the mantle using the ratio between compressional- and shear-wave velocity (vp/vs ratio) from a previous seismic tomography. The results from this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and overlying continental mantle, with the exception of the cluster within the flat slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle. First-order variations in the observed tectonic patterns also seem to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, due to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous analyses from geodynamic models pointing to these characteristics as controls on the observed heterogeneities in the orogen and foreland deformation style. These characteristics include the lithospheric and crustal thickness, the presence of weak sediments and the variations in gravitational potential energy. Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events.
Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.}, language = {en} } @phdthesis{Robinson2011, author = {Robinson, Alexander}, title = {Modeling the Greenland Ice Sheet response to climate change in the past and future}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-50430}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {The Greenland Ice Sheet (GIS) contains enough water volume to raise global sea level by over 7 meters. It is a relic of past glacial climates that could be strongly affected by a warming world. Several studies have been performed to investigate the sensitivity of the ice sheet to changes in climate, but large uncertainties in its long-term response still exist. In this thesis, a new approach has been developed and applied to modeling the GIS response to climate change. The advantages compared to previous approaches are (i) that it can be applied over a wide range of climatic scenarios (both in the deep past and the future), (ii) that it includes the relevant feedback processes between the climate and the ice sheet and (iii) that it is highly computationally efficient, allowing simulations over very long timescales. The new regional energy-moisture balance model (REMBO) has been developed to model the climate and surface mass balance over Greenland and it represents an improvement compared to conventional approaches in modeling present-day conditions. Furthermore, the evolution of the GIS has been simulated over the last glacial cycle using an ensemble of model versions. The model performance has been validated against field observations of the present-day climate and surface mass balance, as well as paleo information from ice cores. The GIS contribution to sea level rise during the last interglacial is estimated to be between 0.5-4.1 m, consistent with previous estimates. The ensemble of model versions has been constrained to those that are consistent with the data, and a range of valid parameter values has been defined, allowing quantification of the uncertainty and sensitivity of the modeling approach. Using the constrained model ensemble, the sensitivity of the GIS to long-term climate change was investigated. It was found that the GIS exhibits hysteresis behavior (i.e., it is multi-stable under certain conditions), and that a temperature threshold exists above which the ice sheet transitions to an essentially ice-free state. The threshold in the global temperature is estimated to be in the range of 1.3-2.3°C above preindustrial conditions, significantly lower than previously believed. The timescale of total melt scales non-linearly with the overshoot above the temperature threshold, such that a 2°C anomaly causes the ice sheet to melt in ca. 50,000 years, but an anomaly of 6°C will melt the ice sheet in less than 4,000 years. 
The meltback of the ice sheet was found to become irreversible once a fraction of the ice sheet has already been lost, but this level of irreversibility also depends on the temperature anomaly.}, language = {en} } @phdthesis{Richter2022, author = {Richter, Maximilian Jacob Enzo Amandus}, title = {Continental rift dynamics across the scales}, doi = {10.25932/publishup-55060}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-550606}, school = {Universit{\"a}t Potsdam}, pages = {129}, year = {2022}, abstract = {Localisation of deformation is a ubiquitous feature in continental rift dynamics and is observed across drastically different time and length scales. This thesis comprises one experimental and two numerical modelling studies investigating strain localisation in (1) a ductile shear zone induced by a material heterogeneity and (2) an active continental rift setting. The studies are related by the fact that the weakening mechanisms on the crystallographic and grain size scale enable bulk rock weakening, which fundamentally enables the formation of shear zones, continental rifts and hence plate tectonics. Aiming to investigate the mechanisms controlling the initiation and evolution of a shear zone, the torsion experiments of the experimental study were conducted in a Paterson-type apparatus on strong Carrara marble cylinders containing a weak, planar Solnhofen limestone inclusion. Using state-of-the-art numerical modelling software, the torsion experiments were simulated to answer questions regarding the localisation process, such as the stress distribution or the impact of rheological weakening. 2D numerical models were also employed to integrate geophysical and geological data to explain the characteristic tectonic evolution of the Southern and Central Kenya Rift. Key elements of the numerical tools are a randomized initial strain distribution and the use of strain softening. During the torsion experiments, deformation begins to localise at the limestone inclusion tips in a process zone, which propagates into the marble matrix with increasing deformation until a ductile shear zone is established. Minor indicators for coexisting brittle deformation are found close to the inclusion tip and are presumed to slightly facilitate strain localisation alongside the dominant ductile deformation processes. The 2D numerical model of the torsion experiment successfully predicts local stress concentration and strain rate amplification ahead of the inclusion in first-order agreement with the experimental results. A simple linear parametrization of strain weakening enables a high-accuracy reproduction of phenomenological aspects of the observed weakening. The torsion experiments suggest that loading conditions do not affect strain localisation during high-temperature deformation of multiphase material with high viscosity contrasts. A numerical simulation can provide a way of analysing the process zone evolution virtually and of extending the range of conditions that can be examined. Furthermore, the nested structure and anastomosing shape of an ultramylonite band were mimicked with an additional second softening step. Rheological weakening is necessary to establish a shear zone in a strong matrix around a weak inclusion and for ultramylonite formation. Such strain weakening laws are also incorporated into the numerical models of the Southern and Central Kenya Rift that capture the characteristic tectonic evolution.
A three-stage early rift evolution is suggested that starts with (1) the accommodation of strain by a single border fault and flexure of the hanging-wall crust, after which (2) faulting in the hanging-wall and the basin centre increases before (3) the early-stage asymmetry is lost and basinward localisation of deformation occurs. Along-strike variability of rifts can be produced by modifying the initial random noise distribution. In summary, the three studies address selected aspects of the broad range of mechanisms and processes that fundamentally enable the deformation of rock and govern the localisation patterns across the scales. In addition to the aforementioned results, the first and second manuscripts combined demonstrate a procedure for finding new, or improving on existing, numerical formulations for specific rheologies and their dynamic weakening. These formulations are essential in addressing rock deformation from the grain to the global scale. This is exemplified by the third study of this thesis, in which geodynamic controls on the evolution of a rift were examined by integrating geological and geophysical data into a numerical model.}, language = {en} } @phdthesis{Regel2008, author = {Regel, Stefanie}, title = {The comprehension of figurative language : electrophysiological evidence on the processing of irony}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-33376}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments, event-related potentials (ERPs) were used to measure and analyse brain activity during the comprehension of ironic utterances in comparison to equivalent non-ironic utterances. In addition, the influence of various language-accompanying cues, e.g. prosody or the use of punctuation, as well as of extra-linguistic cues, such as pragmatic knowledge, on irony comprehension was examined. On the basis of these results, different psycholinguistic models of figurative language processing, i.e. the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.}, language = {en} } @phdthesis{Radeff2014, author = {Radeff, Giuditta}, title = {Geohistory of the Central Anatolian Plateau southern margin (southern Turkey)}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-71865}, school = {Universit{\"a}t Potsdam}, year = {2014}, abstract = {The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to outline the occurrence of a major increase in both subsidence and sedimentation rates at 5.45 - 5.33 Ma, leading to the deposition of almost 1500 km3 of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from the SE margin of the Central Anatolian Plateau and from areas north of it.
A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by the comparison of the Adana Basin subsidence curve with the subsidence curve of the Mut Basin, a mainly Neogene basin located on top of the Central Anatolian Plateau southern margin, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau southern margin. Several fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau. We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.}, language = {en} } @phdthesis{QuirogaCarrasco2023, author = {Quiroga Carrasco, Rodrigo Adolfo}, title = {Cenozoic style of deformation and spatiotemporal variations of the tectonic stress field in the southern central Andes}, doi = {10.25932/publishup-61038}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-610387}, school = {Universit{\"a}t Potsdam}, pages = {228}, year = {2023}, abstract = {The central Andean plateau is the second largest orogenic plateau in the world and has formed in a non-collisional orogenic system. It extends from southern Peru (15°S) to northern Argentina and Chile (27°30'S) and reaches an average elevation of 4,000 m.a.s.l. South of 24°S, the Andean plateau is called Puna and it is characterized by a system of endorheic basins with thick sequences where clastic and evaporitic strata are preserved. Between 26° and 27°30'S, the Puna terminates in a structurally complex zone which coincides with the transition from a normal subduction zone to a flat subduction ("flat slab") zone, which extends to 33°S. This transition zone also coincides with important morphostructural provinces that, from west to east, correspond to i) the Cordillera Frontal, where the Maricunga Belt is located; ii) the Famatina system; and iii) the north-western, thick-skinned Sierras Pampeanas. Various structural, sedimentological, thermochronological and geochronological studies in this region have documented a complex history of deformation and uplift during successive Cenozoic deformation events. These processes caused an increase in crustal thickness, as well as episodes of diachronous uplift, through which the plateau attained its present configuration during the late Miocene. Subsequently, the plateau experienced a change in deformation style from contraction to extension and transtension, documented by ubiquitous normal faults, earthquakes, and magmatic rocks. However, at the southern edge of the Puna plateau and in the transition to the other morphostructural provinces, the variation of deformation processes and the changes in the tectonic stress field are not fully understood.
This region is thus ideally suited to evaluate how the tectonic stress field may have evolved and how it may have been affected by the presence/absence of an orogenic plateau, as well as by the existence of inherited structural anisotropies within the different tectonic provinces. This thesis investigates the relationship between shallow crustal deformation and the spatiotemporal evolution of the tectonic stress field in the southern sector of the Andean plateau, during pre-, syn- and post-uplift periods of this plateau. To carry out this research, multiple methodological approaches were chosen that include U-Pb radiometric dating; the analysis of mesoscopic faults to obtain stress tensors and the orientation of the principal stress axes; the determination of magnetic susceptibility anisotropy in sedimentary and volcanoclastic rocks to identify shortening directions or directions of sedimentary transport; kinematic modeling to infer deep crustal structures and deformation; and finally, a morphometric analysis to identify geomorphological indicators associated with Quaternary tectonism. Combining the obtained results with data from published studies, this study reveals a complex history of the tectonic stress field that has been characterized by changes in orientation and by vertical permutations of the principal stress axes during each deformation regime over the last ~24 Ma. The evolution of the tectonic stress field can be linked with three orogenic phases at this latitude of the Andean orogen: (1) a first phase with E-W-oriented compression documented between the Eocene and the middle Miocene, which coincided with Andean crustal thickening, lateral growth, and topographic uplift; (2) a second phase characterized by a compressive transpressional stress regime, starting at ~11 Ma and ~5 Ma on the western and eastern edge of the Puna plateau, respectively, and a compressive stress regime in the Famatina system and the Sierras Pampeanas, which is interpreted to reflect a transition between Neogene orogenic construction and the maximum accumulation of deformation and topographic uplift of the Puna plateau; and (3) a third phase, when the tectonic regime caused a changeover to a tensional stress state that followed crustal thickening and the maximum uplift of the plateau between ~5 and 4 Ma; this is especially well expressed in the Puna, in its western border area with the Maricunga-Valle Ancho Belt, and along its eastern border in the transition with the Sierras Pampeanas. The results of the study thus document that the plateau rim experienced a shift from a compressional to a transtensional regime, which differs from the tensional state of stress of the Andean Plateau in the northern sectors for the same period. Similar stress changes have been documented during the construction of the Tibetan plateau, where a predominantly compressional stress regime changed to a transtensional regime, which was in turn superseded by a purely tensional regime between 14 and 4 Ma.}, language = {es} } @phdthesis{Prevot2006, author = {Prevot, Michelle Elizabeth}, title = {Introduction of a thermo-sensitive non-polar species into polyelectrolyte multilayer capsules for drug delivery}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-7785}, school = {Universit{\"a}t Potsdam}, year = {2006}, abstract = {The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process.
The control of the permeability of these layers is particularly important as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can be employed as drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is known to be a thermo-sensitive polymer that has a Lower Critical Solution Temperature (LCST) around 32°C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that with the addition of salt, the LCST decreases. This work shows Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax, forming a barrier layer. This conclusion was proven with Neutron Reflectivity by showing the decreased presence of D2O in planar polyelectrolyte films after annealing, which creates a barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer. This was proved by the increase in recovery time measured in Fluorescence Recovery After Photobleaching (FRAP) measurements. In general, two advanced methods, potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation coupled with the ability to tune the wall thickness makes it possible to control the release profile of the molecule of interest.}, subject = {Mikrokapsel}, language = {en} } @phdthesis{Popovic2011, author = {Popovic, Jelena}, title = {Novel lithium iron phosphate materials for lithium-ion batteries}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-54591}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {Conventional energy sources are diminishing and non-renewable, take millions of years to form and cause environmental degradation. In the 21st century, we have to aim at achieving sustainable, environmentally friendly and cheap energy supply by employing renewable energy technologies associated with portable energy storage devices. Lithium-ion batteries can repeatedly generate clean energy from stored materials and reversibly convert electrical into chemical energy. The performance of lithium-ion batteries depends intimately on the properties of their materials.
Presently used battery electrodes are expensive to produce; they offer limited energy storage capacity and are unsafe to use in larger dimensions, restricting the diversity of applications, especially in hybrid electric vehicles (HEVs) and electric vehicles (EVs). This thesis presents major progress in the development of LiFePO4 as a cathode material for lithium-ion batteries. Using a simple procedure, a completely novel morphology has been synthesized (mesocrystals of LiFePO4) and excellent electrochemical behavior was recorded (nanostructured LiFePO4). The newly developed reactions for the synthesis of LiFePO4 are single-step processes and take place in an autoclave at a significantly lower temperature (200 deg. C) compared to the conventional solid-state method (multi-step and up to 800 deg. C). The use of inexpensive, environmentally benign precursors offers a green manufacturing approach for large-scale production. These newly developed experimental procedures can also be extended to other phospho-olivine materials, such as LiCoPO4 and LiMnPO4. The material with the best electrochemical behavior (nanostructured LiFePO4 with carbon coating) was able to deliver a stable 94\% of the theoretical capacity.}, language = {en} } @phdthesis{Pons2023, author = {Pons, Micha{\"e}l}, title = {The Nature of the tectonic shortening in Central Andes}, doi = {10.25932/publishup-60089}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-600892}, school = {Universit{\"a}t Potsdam}, pages = {160}, year = {2023}, abstract = {The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab subducts horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage, as well as their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT. The first main finding of this work is that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate while it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench retreat, causing the trench to resist the advancing South American plate and resulting in the pulsatile shortening. This buckling and steepening subduction regime could have been initiated because of the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate because flat subduction scrapes the mantle lithosphere, thus weakening the continental plate. This process contributes to the efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening of the thick sediments covering the shield margin, and due to the decreasing resistance of the weakened lithosphere of the orogen. The second main finding of this work is that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. Therefore, the deformation is transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; thus, the flat-slab acts like an indenter that "bulldozes" the mantle-keel of the continental lithosphere.
The offset in the propagation of deformation to the east between the flat and steeper slab segments in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults of past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as "flat-slab conveyor". Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from the contraction of the crust in the Sierras Pampeanas some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.}, language = {en} } @phdthesis{Pellegrino2022, author = {Pellegrino, Antonio}, title = {miRNA profiling for diagnosis of chronic pain in polyneuropathy}, doi = {10.25932/publishup-58385}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-583858}, school = {Universit{\"a}t Potsdam}, pages = {viii, 97, xi}, year = {2022}, abstract = {This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all miRNA quality controls prior to library preparation were successful, and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that they reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal range and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. The differentially expressed and filtered miRNAs were subjected to miRDB to perform a target prediction. Three of the four resulting miRNAs were downregulated: hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136, while one was upregulated: hsa-miR-550a-3p. miRNA target prediction showed that chronic pain in polyneuropathy might be the result of a combination of miRNA-mediated high blood flow/pressure and neural activity dysregulations/imbalances. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy. Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 in the PKA phosphorylation of ARMS, was characterized. Therefore, possible PKA-sites in the sequence of ARMS were identified. This revealed five canonical PKA-sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the TRPV1/ARMS interaction rate. While phosphorylation of ARMST903 does not increase the interaction rate with TRPV1, ARMSS1526/27 is probably not phosphorylated and leads to an increased interaction rate.
The calcium flux measurements indicated that the higher the interaction rate of TRPV1/ARMS, the lower the capsaicin EC50 of TRPV1, independent of the PKA phosphorylation status of ARMS. In addition, the western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical therapeutic analgesic alternative to stop ARMS-mediated TRPV1 sensitization.}, language = {en} } @phdthesis{Paganini2018, author = {Paganini, Claudio Francesco}, title = {The role of trapping in black hole spacetimes}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-414686}, school = {Universit{\"a}t Potsdam}, pages = {v, 138}, year = {2018}, abstract = {In the work presented here, we discuss a series of results that are all in one way or another connected to the phenomenon of trapping in black hole spacetimes. First, we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we go into a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most well-known fundamental properties of null geodesics can be represented in one plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space. We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild. We discuss how this is relevant to the black hole stability problem. In a further development of these observations, we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, his/her relative velocity compared to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case. We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime. In the last chapter we then prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation for the case of real frequencies.
The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity, and purely ingoing at the horizon, must vanish. This has the consequence that, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon, and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation, and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.}, language = {en} } @phdthesis{Ott2006, author = {Ott, Christian David}, title = {Stellar iron core collapse in {3+1} general relativity and the gravitational wave signature of core-collapse supernovae}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-12986}, school = {Universit{\"a}t Potsdam}, year = {2006}, abstract = {I perform and analyse the first ever calculations of rotating stellar iron core collapse in {3+1} general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce and the early postbounce phase of core-collapse supernovae. I supplement my {3+1} GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.}, language = {en} } @phdthesis{Oey2008, author = {Oey, Melanie}, title = {Chloroplasts as bioreactors : high-yield production of active bacteriolytic protein antibiotics}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-28950}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {Plants, more precisely their chloroplasts with their bacterial-like expression machinery inherited from their cyanobacterial ancestors, can potentially offer a cheap expression system for proteinaceous pharmaceuticals. This system would be easily scalable and provide appropriate safety due to the maternal inheritance of chloroplasts. In this work, it was shown that three phage lytic enzymes (Pal, Cpl-1 and PlyGBS) could be successfully expressed at very high levels and with high stability in tobacco chloroplasts. PlyGBS expression reached a level of foreign protein accumulation (> 70\% TSP) that had never been obtained before. Although the high expression levels of PlyGBS caused a pale green phenotype with retarded growth, presumably due to exhaustion of plastid protein synthesis capacity, development and seed production were not impaired under greenhouse conditions. Since Pal and Cpl-1 showed toxic effects when expressed in E. coli, a special plastid transformation vector (pTox) was constructed to allow DNA amplification in bacteria. The pTox transformation vector allows a recombinase-mediated deletion of an E. coli transcription block in the chloroplast, leading to an increase of foreign protein accumulation to up to 40\% of TSP for Pal and 20\% of TSP for Cpl-1.
High dose-dependent bactericidal efficiency was shown for all three plant-derived lytic enzymes using their pathogenic target bacteria S. pyogenes and S. pneumoniae. Confirmation of specificity was obtained for the endotoxic proteins Pal and Cpl-1 by application to E. coli cultures. These results establish tobacco chloroplasts as a new cost-efficient and convenient production platform for phage lytic enzymes and address the greatest obstacle for clinical application. The present study is the first report of lysin production in a non-bacterial system. The properties of chloroplast-produced lysins described in this work, their stability, high accumulation rate and biological activity make them highly attractive candidates for future antibiotics.}, language = {en} } @phdthesis{Niemz2022, author = {Niemz, Peter}, title = {Imaging and modeling of hydraulic fractures in crystalline rock via induced seismic activity}, doi = {10.25932/publishup-55659}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-556593}, school = {Universit{\"a}t Potsdam}, pages = {135}, year = {2022}, abstract = {Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for a broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and the hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock in a depth of 410 m at the {\"A}sp{\"o} Hard Rock Laboratory (Sweden). I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1). An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded at the horizontal components of two broad-band seismometers a few tens of meters apart from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding. 
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the {\"A}sp{\"o} mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as reflected in smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).}, language = {en} } @phdthesis{NickeltCzycykowski2008, author = {Nickelt-Czycykowski, Iliya Peter}, title = {Aktive Regionen der Sonnenoberfl{\"a}che und ihre zeitliche Variation in zweidimensionaler Spektro-Polarimetrie}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-25524}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {This thesis describes the analysis of observations of two sunspots in two-dimensional spectro-polarimetry. The data were acquired with the Fabry-P{\´e}rot interferometer of the University of G{\"o}ttingen at the Vacuum Tower Telescope on Tenerife. For the active region NOAA 9516, the full Stokes vector of the polarized light was observed in single exposures in the absorption line at 630.249 nm, and for the active region NOAA 9036, a 90-minute time series of the circularly polarized light was recorded at a wavelength of 617.3 nm. From the reduced data, values for intensity, line-of-sight velocity, magnetic field strength and several further plasma parameters are derived. Several approaches to the inversion of solar model atmospheres are applied and compared. The partly considerable error contributions are discussed in detail. The frequency behaviour of the results and their dependence on position and time are analysed further by means of Fourier and wavelet transforms. As a result, the existence of a high-frequency band of velocity oscillations with a central frequency of 75 seconds (13 mHz) can be confirmed. At larger photospheric heights of about 500 km, the majority of the associated shock waves originate from the dark parts of the granules, in contrast to other frequency ranges. The 75-second oscillations are also observed in the active region, above all in the light bridge. In the identified bands of oscillatory velocity power, pronounced structures are visible in a dark penumbral feature as well as in the light bridge, which move into the quiet Sun with a horizontal velocity of 5-8 km/s. These structures show a clear increase in power, especially in the 5-minute band, and are possibly related to the phenomenon of "Evershed clouds". Limited by a very low signal-to-noise ratio and large error contributions, magnetic field variations with a period of six minutes are also observed at the transition from umbra to penumbra close to a light bridge. 
To obtain the results described, existing visualization methods for frequency analysis were improved or newly developed, in particular for results of the wavelet transform.}, language = {de} } @phdthesis{Neuharth2022, author = {Neuharth, Derek}, title = {Evolution of divergent and strike-slip boundaries in response to surface processes}, doi = {10.25932/publishup-54940}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-549403}, school = {Universit{\"a}t Potsdam}, pages = {xiii, 108}, year = {2022}, abstract = {Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth's deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth's mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth's surface, it is vital to understand feedbacks between the two domains and how they shape our planet. In this study I aim to provide insight into two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system's erosional efficiency, which represents many factors like lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. 
Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.}, language = {en} } @phdthesis{Neuendorf2022, author = {Neuendorf, Claudia}, title = {Leistungsstarke Sch{\"u}lerinnen und Sch{\"u}ler in Deutschland}, doi = {10.25932/publishup-56470}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-564702}, school = {Universit{\"a}t Potsdam}, pages = {203}, year = {2022}, abstract = {This cumulative doctoral thesis deals with high-achieving students, who since 2015 have again received more attention in German education policy, for example in the context of support programmes, after the focus had initially shifted more strongly to at-risk groups in the wake of the "PISA shock" of 2000. While high-achieving students are often identified with the "(highly) gifted" in public perception, this thesis goes beyond traditional giftedness research, which conceives of and investigates general intelligence as the basis of students' achievement. Instead, it is better placed in the field of talent research, which shifts the focus away from general giftedness towards specific predictors and outcomes in individual developmental trajectories. The thesis therefore focuses not on intelligence as potential, but on current school achievement, which takes on a double meaning as both the result and the starting point of developmental processes in an achievement domain. The thesis acknowledges the multifaceted nature of the achievement concept and strives to create new occasions to discuss this concept and its operationalization in research. To this end, a systematic review of the operationalization of high achievement is carried out in the first part (Article I). Factors are identified on which the operationalizations can differ. Furthermore, an overview is given of how studies on high achievers published since the year 2000 can be located on these dimensions. It turns out that clear conventions for defining high academic achievement do not yet exist, which implies that results from studies dealing with high-achieving students are only comparable to a limited extent. Building on this, the second part of the thesis, comprising two further articles on the achievement development (Article II) and the social integration (Article III) of high-achieving students, pursues the approach of making explicit the variability of results across different operationalizations of high achievement. Among other things, this also facilitates future comparability with other studies. For this purpose, the concept of multiverse analysis is used (Steegen et al., 2016), in which many parallel specifications, each of which represents a sensible alternative for the operationalization, are juxtaposed and compared with respect to their effect (Jansen et al., 2021). 
Conceptually, multiverse analysis ties in with the research programme of critical multiplism developed some time ago (Patry, 2013; Shadish, 1986, 1993), but as a specific method it is currently gaining particular importance in the context of the replicability crisis in psychology. The present thesis relies on secondary analyses of large-scale school achievement studies, which have the advantage that a large number of data points (variables and persons) is available for comparing the effects of different operationalizations. In terms of content, Articles II and III take up topics that repeatedly appear in the scientific and societal discussion of high achievers and their public perception: Article II first asks whether high achievers already have a cumulative advantage over their less high-achieving classmates in current regular instruction (Matthew effect). The results show that at academic-track schools (Gymnasien) there is no evidence of widening gaps. On the contrary, the distance between the groups decreased over the course of secondary school, as learning rates were higher among lower-achieving students. Article III, in contrast, concerns the social perception of high-achieving students. Here, too, the assumption persists in public discussion that higher achievement might be accompanied by disadvantages in social integration, which is also reflected in studies dealing with adolescents' gender stereotypes regarding school achievement. In Article III, the potential of multiverse analysis is again used, among other things, to describe the variation of this relationship across operationalizations of high achievement. Across different operationalizations of high achievement and across different facets of social integration, the relationships between achievement and social integration turn out to be slightly positive overall. Assumptions aiming at differential effects for boys and girls or for different school subjects are not confirmed in these analyses. The dissertation shows that comparing different approaches to operationalizing high achievement, employed within the framework of critical multiplism, can deepen the understanding of phenomena and also has the potential to advance theory development.}, language = {de} } @phdthesis{Nerlich2007, author = {Nerlich, Annika}, title = {Die Rolle der Phosphatidylserin Decarboxylase f{\"u}r die mitochondriale Phospholipid-Biosynthese in Arabidopsis thaliana}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-14522}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {The decarboxylation of phosphatidylserine (PS) to phosphatidylethanolamine (PE) catalysed by phosphatidylserine decarboxylase (PSD) is of essential importance for mitochondria in yeast and mice. In this dissertation, the role of this PE synthesis pathway in plants was investigated for the first time. 
The three PSD genes identified in Arabidopsis, atPSD1, atPSD2 and atPSD3, encode enzymes that are localized in the membranes of the mitochondria (atPSD1), the tonoplast (atPSD2) and the endoplasmic reticulum (atPSD3). The contribution of the individual PSDs to PE synthesis was investigated using psd null mutants. atPSD3 turned out to be the enzyme with the highest activity. As an alternative to the PSD pathway, PE is also synthesized in Arabidopsis by means of aminoalcohol phosphotransferase. The loss of all PSD activity, as is the case in the generated psd triple mutant, affects exclusively the lipid composition of the mitochondrial membrane. Consequently, extramitochondrial PE is synthesized mainly via the aminoalcohol phosphotransferase. The altered lipid composition of the mitochondrial membrane, however, had no influence on the number, size and ultrastructure of the mitochondria, nor on the ADP/ATP ratio and respiration. Besides providing reducing equivalents, the functionality of the mitochondria also influences the formation of petals and stamens. These floral organs were strongly altered in the psd triple mutant, and the floral phenotype resembled that of the APETALA3 mutant. This homeotic gene is responsible for the formation of petals and stamens. For the generation of the mutants psd2-1 and psd3-1, a T-DNA vector was used that contained the promoter of the APETALA3 gene, which causes a comparable co-suppression of the APETALA3 gene in the mutants psd2-1 and psd3-1 as well as in psd2-1psd3-1 and the psd1psd2-1psd3-1 triple mutant. The floral phenotype, however, occurred only in the psd triple mutant, since only there does the combination of minor mitochondrial dysfunction, caused by the altered lipid composition, coincide with the co-suppression of APETALA3.}, language = {de} } @phdthesis{Muench2018, author = {M{\"u}nch, Thomas}, title = {Interpretation of temperature signals from ice cores}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-414963}, school = {Universit{\"a}t Potsdam}, pages = {xxi, 197}, year = {2018}, abstract = {Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which records information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, like any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. 
This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. The presented results lead to a quantitative explanation of the local-scale (1-500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.}, language = {en} } @phdthesis{Muench2021, author = {M{\"u}nch, Steffen}, title = {The relevance of the aeolian transport path for the spread of antibiotic-resistant bacteria on arable fields}, doi = {10.25932/publishup-53608}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-536089}, school = {Universit{\"a}t Potsdam}, pages = {XVI, 140}, year = {2021}, abstract = {The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistant bacteria in the stables. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrements, combined with a binding material, enable a further potential path of spread into the environment if they are used as organic manure in agricultural landscapes. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction. Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion. PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading from the manure spreader. 
During poultry manure application, PM10 emission ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributes 0.35 - 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of dust emissions, but they accounted for less than 1\% of total PM10 emissions due to the dilution of poultry manure in the soil after manure incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent manure incorporation, although PM10 emissions of manure incorporation were larger than PM10 emissions of manure application for the non-dried manure variants. Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has been recently incorporated. Sorting effects between the low-density organic particles of manure origin and the soil particles of mineral origin were determined just above the threshold of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified if 6 t ha-1 of poultry manure were applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion, due to their attachment to manure particles. Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no further contamination with them could be detected in the manure, the fertilized soils or the dust generated by manure application, land preparation or wind erosion. Parallel studies of this project showed that storage of poultry manure for a few days (36 - 72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Therefore, based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can be considered low.}, language = {en} } @phdthesis{Mueller2008, author = {M{\"u}ller, Melanie J. I.}, title = {Bidirectional transport by molecular motors}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-18715}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. 
The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.}, language = {en} } @phdthesis{Moerbt2010, author = {M{\"o}rbt, Nora}, title = {Differential proteome analysis of human lung epithelial cells following exposure to aromatic volatile organic compounds}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-49257}, school = {Universit{\"a}t Potsdam}, year = {2010}, abstract = {The widespread usage of products containing volatile organic compounds (VOC) has led to a general human exposure to these chemicals in workplaces or homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism for VOC action on the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 has been used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line of this extent, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phase of the exposure model has been analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system has been successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, the cell toxicity has been assessed in order to ensure that most of the concentrations used in the following proteomic approach were not cytotoxic. Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells have been detected following styrene exposure. All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. Validation experiments on protein and transcript level confirmed the results of the 2-DE experiments. From the results, two main cellular pathways have been identified that were induced by styrene: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. 
Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8 oxide (SO) have been identified. Especially the SO-adducts observed at both the reactive centers of thioredoxin reductase 1, which is a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach has been carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to the one caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells have been detected following exposure to subtoxic concentrations of CB and 1,2-DCB. All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. As for the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes, cell death signaling and oxidative stress response. The strong induction of pro-apoptotic signaling has been confirmed for both treatments by detection of the cleavage of caspase 3. Likewise, the induction of redox-sensitive protein species could be correlated to an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs have been investigated (Chapter 6). A similar number (4.6-6.9\%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, not more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. However, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.}, language = {en} } @phdthesis{Morgenstern2012, author = {Morgenstern, Anne}, title = {Thermokarst and thermal erosion : degradation of Siberian ice-rich permafrost}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-62079}, school = {Universit{\"a}t Potsdam}, year = {2012}, abstract = {Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. 
The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 \%, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 \%. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea with a total study area of 5,800 km² found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. 
The combined results of this thesis highlight the need for comprehensive analyses of both, thermokarst and thermal erosion, in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on hydrology and climate of a certain region.}, language = {en} } @phdthesis{Miteva2007, author = {Miteva, Rositsa Stoycheva}, title = {Electron acceleration at localized wave structures in the solar corona}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-14775}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {Our dynamic Sun manifests its activity by different phenomena: from the 11-year cyclic sunspot pattern to the unpredictable and violent explosions in the case of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released and a substantial part of this energy is carried by the energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open question in solar physics is how the electrons are accelerated up to high energies within (the observed in the radio emission) short time scales. Because the acceleration site is extremely small in spatial extent as well (compared to the solar radius), the electron acceleration is regarded as a local process. The search for localized wave structures in the solar corona that are able to accelerate electrons together with the theoretical and numerical description of the conditions and requirements for this process, is the aim of the dissertation. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the solar jet interaction with the background coronal plasma (the jet--plasma interaction) A jet is formed when the newly reconnected and highly curved magnetic field lines are relaxed by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to give an explanation for such observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with growing of electrostatic fluctuations in time for certain range of the jet initial velocity. During this process, any test electron that happen to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed has a value greater or lower than the one, required by the instability range, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron--whistler--shock interaction) Coronal shocks are also able to accelerate electrons, as observed by the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferably at shocks with attached whistler wave packets in their upstream regions. 
Motivated by these observations and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, where the electrons are accelerated by their interaction with such whistlers. The protons inflowing toward the shock are reflected there by nearly conserving their magnetic moment, so that they get a substantial velocity gain in the case of a quasi-perpendicular shock geometry, i.e., the angle between the shock normal and the upstream magnetic field is in the range 50--80 degrees. The protons accelerated in this way are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are now able to interact resonantly with them. But only a part of these electrons fulfill the electron--whistler wave resonance condition. Due to such resonant interaction (i.e., of these electrons with the whistlers), the electrons are accelerated in the electric and magnetic wave field within just several whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them are reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, the kinetic energy of protons is transferred to electrons by the action of localized wave structures in both cases, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona.}, language = {en} } @phdthesis{Milke2012, author = {Milke, Bettina}, title = {Synthese von Metallnitrid- und Metalloxinitridnanopartikeln f{\"u}r energierelevante Anwendungen}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-60008}, school = {Universit{\"a}t Potsdam}, year = {2012}, abstract = {A much-debated topic of our time is the future of energy generation and storage. Nanoscience plays an important role in this; it increases the efficiency of storage and generation both through already known materials and through new materials. In this context, chemistry paves the way for nanomaterials. So far, however, most known syntheses of nanoparticles lead to ill-defined particles. A simple, cost-effective and safe synthesis would offer the possibility of broad application and scalability. This thesis therefore considers the simple synthesis of manganese nitride, aluminium nitride, lithium manganese silicate, zirconium oxynitride and manganese carbonate nanoparticles. The so-called urea glass route is used as a solid-state synthesis, and solvothermal synthesis as a typical liquid-phase synthesis. Both synthesis routes lead to defined particle sizes and interesting morphologies and make it possible to influence the products. In the case of the manganese nitride nanoparticles synthesized via the urea glass route, nanoparticles with a core-shell structure are obtained, whose use as a conversion material is presented for the first time. 
With the aim of making nanoparticles easier to apply, a simple coating of surfaces with nanoparticles by means of spin coating is described. This yielded a mixture of MnN0.43/MnO nanoparticles embedded in a carbon film, whose investigation as a conversion material shows high specific capacities (811 mAh/g) that exceed that of the conventional anode material graphite (372 mAh/g). Besides the synthesis of the anode material, the synthesis of the cathode material, Li2MnSiO4 nanoparticles, via the urea glass route was also presented. The synthesis of zirconium oxynitride nanoparticles Zr2ON2 demonstrates how the desired product of the urea glass route can easily be influenced by varying the reaction conditions, such as the amount of urea or the reaction temperature. The addition of very small amounts of ammonium chloride prevents carbon from forming in the final product and thus leads to yellow Zr2ON2 nanoparticles with a size of d = 8 nm that possess semiconductor properties. The synthesis of aluminium nitride nanoparticles leads to crystalline nanoparticles embedded in an amorphous matrix. The solvothermal synthesis of manganese carbonate nanoparticles gives rise to new morphologies in the form of nanorods, which are agglomerated into scale-like spherical superstructures.}, language = {de} } @phdthesis{MichalikOnichimowska2022, author = {Michalik-Onichimowska, Aleksandra}, title = {Real-time monitoring of (photo)chemical reactions in micro flow reactors and levitated droplets by IR-MALDI ion mobility and mass spectrometry}, doi = {10.25932/publishup-55729}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-557298}, school = {Universit{\"a}t Potsdam}, pages = {v, 68}, year = {2022}, abstract = {A more sustainable chemical industry requires minimizing solvents and chemicals. Therefore, the optimization and development of chemical processes is carried out in small batches before large-scale production. The decisive step in this approach is the scalability from small reaction systems to large, cost-efficient reactors. Increasing the volume of the reaction medium always goes along with an increase of the surface area in contact with the confining vessel. Since the volume scales cubically with increasing radius while the surface area scales quadratically, their ratio does not increase linearly. Many phenomena occurring at the interface between surface and liquid can affect reaction rates and yields, leading to incorrect predictions based on small-scale optimization. The use of levitated droplets as containerless reaction vessels offers a promising way to avoid the problems mentioned above. In the presented work, an efficient coupling of acoustically levitated droplets and an IM spectrometer was developed for the real-time monitoring of chemical reactions in which acoustically levitated droplets act as reaction vessels. The design of the system comprises contactless sampling and ionization, realized by laser desorption and ionization at 2.94 µm. The scope of the work includes fundamental studies to understand the laser irradiation of droplets in the acoustic field. 
Das Verst{\"a}ndnis dieses Ph{\"a}nomens ist entscheidend, um den Effekt der zeitlichen und r{\"a}umlichen Aufl{\"o}sung der erzeugten Ionenwolke zu verstehen, die die Aufl{\"o}sung des Systems beeinflusst. Der Aufbau umfasst eine akustische Falle, Laserbestrahlung und elektrostatische Linsen, die bei hoher Spannung unter Umgebungsdruck arbeiten. Ein effektiver Ionentransfer im Grenzfl{\"a}chenbereich zwischen dem schwebenden Tropfen und dem IMS muss daher elektrostatische und akustische Felder vollst{\"a}ndig ber{\"u}cksichtigen. F{\"u}r die Probenahme und Ionisation wurden zwei unterschiedliche Laserpulsl{\"a}ngen untersucht, n{\"a}mlich im ns- und µs-Bereich. Die Bestrahlung {\"u}ber µs-Laserpulse bietet gegen{\"u}ber ns-Pulse mehrere Vorteile: i) das Tropfenvolumen wird nicht stark beeinflusst, was es erm{\"o}glichet, nur ein kleines Volumen des Tropfens abzutasten; ii) die geringere Fluenz f{\"u}hrt zu weniger ausgepr{\"a}gten Schwingungen des im akustischen Feld eingeschlossenen Tropfens und der Tropfen wird nicht aus dem akustischen Feld r{\"u}ckgeschlagen, was zum Verlust der Probe f{\"u}hren w{\"u}rde; iii) die milde Laserbestrahlung f{\"u}hrt zu einer besseren r{\"a}umlichen und zeitlichen Begrenzung der Ionenwolken, was zu einer besseren Aufl{\"o}sung der detektierten Ionenpakete f{\"u}hrt. Schließlich erm{\"o}glicht dieses Wissen die Anwendung der Ionenoptik, die erforderlich ist, um den Ionenfluss zwischen dem im akustischen Feld suspendierten Tropfen und dem IM Spektrometer zu induzieren. Die Ionenoptik aus 2 elektrostatischen Linsen in der N{\"a}he des Tropfens erm{\"o}glicht es, die Ionenwolke effektiv zu fokussieren und direkt zum IM Spektrometer-Eingang zu f{\"u}hren. Diese neuartige Kopplung hat sich beim Nachweis einiger basischer Molek{\"u}le als erfolgreich erwiesen. Um die Anwendbarkeit des Systems zu belegen, wurde die Reaktion zwischen N-Boc Cysteine Methylester und Allylalkohol in einem Chargenreaktor durchgef{\"u}hrt und online {\"u}berwacht. F{\"u}r eine Kalibrierung wurde der Reaktionsfortschritt parallel mittels 1H-NMR verfolgt. Der beobachtete Reaktionsumsatz von mehr als 50\% innerhalb der ersten 20 Minuten demonstrierte die Eignung der Reaktion, um die Einsatzpotentiale des entwickelten Systems zu bewerten.}, language = {en} } @phdthesis{Metz2023, author = {Metz, Malte}, title = {Finite fault earthquake source inversions}, doi = {10.25932/publishup-61974}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-619745}, school = {Universit{\"a}t Potsdam}, pages = {143}, year = {2023}, abstract = {Earthquake modeling is the key to a profound understanding of a rupture. Its kinematics or dynamics are derived from advanced rupture models that allow, for example, to reconstruct the direction and velocity of the rupture front or the evolving slip distribution behind the rupture front. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time history of the slip and rake on each sub-fault are inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints such as smoothing. As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry as a physics-based solution of an earthquake rupture. 
The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface guaranteed at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained and develops by itself without additional parametrization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D Eikonal equation and assuming a linear relationship between rupture and shear wave velocity. This allowed us to give a deeper and faster propagating rupture front and the resulting upward refraction as a new possible explanation for the apparent supershear observed at the Earth's surface. The thesis investigates three aspects of earthquake inversion using PDR: (1) To test whether implementing a simplified rupture model with few parameters into a probabilistic Bayesian scheme without constraining geometric parameters is feasible, and whether this leads to fast and robust results that can be used for subsequent fast information systems (e.g., ground motion predictions). (2) To investigate whether combining broadband and strong-motion seismic records together with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion. (3) To investigate whether a complex rupture can be represented by the inversion of multiple PDR sources and for what type of earthquakes this is recommended. I developed the PDR inversion approach and applied the joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake with a large, curved and slow-rupturing shallow earthquake bounded by two faster and deeper smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks and a simple tsunami forward model. The joint analysis of ground deformation and seismic data within a multiple source inversion also shed light on an earthquake triplet, which occurred in July 2022 in SE Iran. From the inversion and aftershock relocalization, I found indications for a vertical separation between the shallower mainshocks within the sedimentary cover and deeper aftershocks at the sediment-basement interface. The vertical offset could be caused by the ductile response of the evident salt layer to stress perturbations from the mainshocks. The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion capturing features of rather different, complex earthquakes. 
Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences to other finite-fault modeling methods.}, language = {en} } @phdthesis{Mester2023, author = {Mester, Benedikt}, title = {Modeling flood-induced human displacement risk under global change}, doi = {10.25932/publishup-60929}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-609293}, school = {Universit{\"a}t Potsdam}, pages = {XVI, 143}, year = {2023}, abstract = {Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, only little is known about the risk of displacement under global change. This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events? To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated without the consideration of flood protection, and depends on the choice of global hydrological models only for a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third one. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain. Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the affected population, assets, and critical infrastructure, as well as socio-economic indicators, are computed. 
The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies with the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability. Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigating the isolated or combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5\% due to the total effects of climate change on coastal flooding, with the effects of increasing wind speed being the dominant factor. In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.}, language = {en} } @phdthesis{Meinke2006, author = {Meinke, Anja}, title = {Nikotineffekte auf r{\"a}umliche Aufmerksamkeitsprozesse bei Nichtrauchern}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-7659}, school = {Universit{\"a}t Potsdam}, year = {2006}, abstract = {In various forms of administration, nicotine reduces the costs of invalid cues in the spatial cueing paradigm across several species. Which sub-process exactly is influenced by nicotine has not yet been investigated. The common interpretation is that nicotine facilitates the disengagement of attention from a previously attended location. In five studies, three electrophysiological and two behavioural, three possible mechanisms of nicotine action were investigated in non-smokers. Experiments 1 and 2 addressed the question of whether nicotine modulates sensory gain control.
For this purpose, event-related potentials (ERPs) were recorded in the Posner paradigm, and the effect of nicotine on the attention-related components P1 and N1 was examined. Nicotine reduced the costs of invalid cues when attention was directed by endogenous cues, but not with exogenous cues. The P1 and N1 components were unaffected by nicotine, so the assumption of an effect on sensory suppression receives no support. Experiments 3 and 4 examined whether nicotine has an effect on costly involuntary shifts of attention, i.e. distractions. In Experiment 3, distractions were triggered by deviant stimulus features in a spatial sustained-attention paradigm, and the effect of nicotine on a distraction-related ERP component, the P3a, was examined. In Experiment 4, a distraction was triggered by additional stimuli in a cueing paradigm, and the effect of nicotine on reaction-time costs was investigated. Nicotine showed no influence on distraction costs in either study, and no effect on the P3a component in Experiment 3. In Experiment 4, the effect of nicotine on attentional disengagement was additionally investigated by varying the difficulty of disengagement. Here, too, no nicotine effect was found. However, neither the frequently reported general shortening of reaction times nor the reduction of the costs of invalid cues could be replicated in either study, so that, on the one hand, no statement can be made about the effect of nicotine on distractions or the attentional disengagement process and, on the other hand, the question arose under which conditions nicotine shows a differential effect at all. In the last experiment, the frequency of response demands and the temporal aspects of attentional orienting were therefore varied, and the effect of nicotine on the validity effect, i.e. the reaction-time difference between validly and invalidly cued targets, was examined. In individuals for whom attentional orienting was evident in all conditions, nicotine tended to reduce the validity effect in the condition with the fewest events, i.e. when voluntary orienting of attention was only rarely required. This could be interpreted as an indication that nicotine supports the top-down allocation of attentional resources under conditions that place high demands on vigilance.}, subject = {Nicotin}, language = {de} } @phdthesis{Meessen2019, author = {Meeßen, Christian}, title = {The thermal and rheological state of the Northern Argentinian foreland basins}, doi = {10.25932/publishup-43994}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-439945}, school = {Universit{\"a}t Potsdam}, pages = {xviii, 151}, year = {2019}, abstract = {The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformational styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The superficial expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen.
Each of the hills is defined by a reverse fault that roots in a basal d{\'e}collement surface within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes. Amongst proposed causes for the observed variation are variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism or compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand the mechanisms from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. The integration of independent data into a consistent model of the lithosphere allows additional evidence to be obtained that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow followed in this study first involves data-constrained structural and density modelling in order to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which was then used to assess the present-day rheological state in northern Argentina. The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative, 3D density modelling verified by Bouguer gravity. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km, and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the R{\'i}o de la Plata craton, is expressed as a zone of low average crustal densities. In an excursion, we demonstrate in another study that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to obtaining density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the north Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125\,km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of subcrustal density trends. Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting.
To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady-state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses. Accordingly, the structural model that had been obtained in the first step could be used to obtain a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the d{\'e}collements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that provide strain transfer between the deeper ductile crust and the shallower d{\'e}collement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot sufficiently reduce the strength of the lithosphere in order to be reactivated. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Due to thermal weakening and, potentially, lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift. This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.}, language = {en} } @phdthesis{MbayaMani2017, author = {Mbaya Mani, Christian}, title = {Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-407866}, school = {Universit{\"a}t Potsdam}, pages = {IV, 135}, year = {2017}, abstract = {Nanoporous carbon based materials are of particular interest for both science and industry due to their exceptional properties such as a large surface area, high pore volume, high electroconductivity as well as high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons proved to be useful in various energy and environment related applications including energy storage and conversion, catalysis, gas sorption and separation technologies.
The synthesis of nanoporous carbons classically involves thermal carbonization of the carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol) etc.) followed by an activation step and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly; and make use of hazardous chemicals, hindering their application for large-scale production. Furthermore, control over the carbon materials properties is challenging owing to the relatively unpredictable processes at the high carbonization temperatures. In the present thesis, nanoporous carbon based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy does not require any additional carbon sources or classical hard- or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions including Zn2+, Cu2+, Ni2+, and Co2+, respectively. Here, the structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of the squarate ions to Zn2+ yields porous 3D cube crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids which evolve on the centers of the low index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges. These crystalline precursors can be easily transformed into the respective carbon based materials by heat treatment at elevated temperatures in a nitrogen atmosphere followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1 are achieved, respectively, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure. Owing to these advantageous properties, the resulting carbon based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67\% of this specific capacitance when the scan rate is increased to 200 mV s-1. In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as high surface area support material and decorated with nickel nanoparticles via an incipient wetness impregnation. The resulting composite material combines a high surface area, a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, they allow for a good packing of a fixed-bed flow reactor along with high column efficiency and a minimized pressure drop throughout the packed reactor. 
Therefore, the composite is employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the conventional problem of column blocking. Thinking about the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the introduction of heteroatoms (e.g. N, B, S, P, etc.) into the carbon structures in order to alter properties such as wettability, surface polarity as well as the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open up a platform of highly functional materials for all applications that involve surface processes.}, language = {en} } @phdthesis{Mazzanti2022, author = {Mazzanti, Stefano}, title = {Novel photocatalytic processes mediated by carbon nitride photocatalysis}, doi = {10.25932/publishup-54209}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-542099}, school = {Universit{\"a}t Potsdam}, pages = {418}, year = {2022}, abstract = {The key to reducing the energy required for specific transformations in a selective manner is the employment of a catalyst, a very small molecular platform that decides which type of energy to use. The field of photocatalysis exploits light energy to shape one type of molecule into others that are more valuable and useful. However, many challenges arise in this field; for example, the catalysts employed are usually based on metal derivatives, whose abundance is limited and which cannot be recycled and are expensive. Therefore, carbon nitride materials are used in this work to expand horizons in the field of photocatalysis. Carbon nitrides are organic materials that can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods, and shaped to develop new types of processes. Indeed, they enabled the creation of a new photocatalytic synthetic strategy, the dichloromethylation of enones by the dichloromethyl radical generated in situ from chloroform, a novel route to building blocks for the production of active pharmaceutical compounds. Then, the ductility of these materials allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, showing the great potential of such a flexible technology in photocatalysis. Afterwards, their ability to store charges was exploited in the reduction of organic substrates under dark conditions, gaining new insights regarding multisite proton-coupled electron transfer processes. Furthermore, the combination of carbon nitrides with flavins allowed the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials, which may help to unveil further scientific discoveries and to develop a more sustainable future.}, language = {en} } @phdthesis{Martin2013, author = {Martin, Benjamin}, title = {Linking individual-based models and dynamic energy budget theory : lessons for ecology and ecotoxicology}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-67001}, school = {Universit{\"a}t Potsdam}, year = {2013}, abstract = {In the context of ecological risk assessment of chemicals, individual-based population models hold great potential to increase the ecological realism of current regulatory risk assessment procedures. However, developing and parameterizing such models is time-consuming and often ad hoc. Using standardized, tested submodels of individual organisms would make individual-based modelling more efficient and coherent. In this thesis, I explored whether Dynamic Energy Budget (DEB) theory is suitable for use as a standard submodel in individual-based models, both for ecological risk assessment and theoretical population ecology. First, I developed a generic implementation of DEB theory in an individual-based modeling (IBM) context: DEB-IBM. Using the DEB-IBM framework, I tested the ability of DEB theory to predict population-level dynamics from the properties of individuals. We used Daphnia magna as a model species, for which data at the individual level were available to parameterize the model, and population-level predictions were compared against independent data from controlled population experiments. We found that DEB theory successfully predicted population growth rates and peak densities of experimental Daphnia populations in multiple experimental settings, but failed to capture the decline phase, when the available food per Daphnia was low. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology. In addition to theoretical explorations, we tested the potential of DEB theory combined with IBMs to extrapolate effects of chemical stress from the individual to the population level. For this we used information at the individual level on the effect of 3,4-dichloroaniline on Daphnia. The individual data suggested direct effects on reproduction but no significant effects on growth. Assuming such direct effects on reproduction, the model was able to accurately predict the population response to increasing concentrations of 3,4-dichloroaniline. We conclude that DEB theory combined with IBMs holds great potential for standardized ecological risk assessment based on ecological models.}, language = {en} } @phdthesis{Loeffler2005, author = {L{\"o}ffler, Frank}, title = {Numerical simulations of neutron star - black hole mergers}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-7743}, school = {Universit{\"a}t Potsdam}, year = {2005}, abstract = {Collisions of black holes and neutron stars, named mixed binaries in the following, are interesting for at least two reasons.
Firstly, it is expected that they emit a large amount of energy as gravitational waves, which could be measured by new detectors. The form of those waves is expected to carry information about the internal structure of such systems. Secondly, collisions of such objects are the prime suspects for short gamma-ray bursts. The exact mechanism for the energy emission is unknown so far. In the past, Newtonian theory of gravitation and modifications to it were often used for numerical simulations of collisions of mixed binary systems. However, near such objects the gravitational forces are so strong that the use of General Relativity is necessary for accurate predictions. There are a lot of problems in general relativistic simulations. However, systems of two neutron stars and systems of two black holes have been studied extensively in the past and a lot of those problems have been solved. One of the remaining problems so far has been the treatment of hydrodynamics at excision boundaries. Inside excision regions, no evolution is carried out. Such regions are often used inside black holes to circumvent instabilities of the numerical methods near the singularity. Methods to handle hydrodynamics at such boundaries have been described and tests are shown in this work. One important test and the first application of those methods has been the simulation of a neutron star collapsing to a black hole. The success of these simulations and in particular the performance of the excision methods was an important step towards simulations of mixed binaries. Initial data are necessary for every numerical simulation. However, the creation of such initial data for general relativistic situations is in general very complicated. In this work it is shown how to obtain initial data for mixed binary systems using an already existing method for initial data of two black holes. These initial data have been used for evolutions of such systems and problems encountered are discussed in this work. One of these problems is the occurrence of instabilities due to the different methods, which could be solved by dissipation of appropriate strength. Another problem is the expected drift of the black hole towards the neutron star. It is shown that this can be solved by using special gauge conditions, which prevent the black hole from moving on the computational grid. The methods and simulations shown in this work are only the starting step for a much more detailed study of mixed binary systems. Better methods, models and simulations with higher resolution and even better gauge conditions will be the focus of future work. It is expected that such detailed studies can give information about the emitted gravitational waves, which is important in view of the newly built gravitational wave detectors. In addition, these simulations could give insight into the processes responsible for short gamma-ray bursts.}, subject = {Relativistische Astrophysik}, language = {en} } @phdthesis{LopezGarcia2019, author = {L{\'o}pez Garc{\'i}a, Patricia}, title = {Coiled coils as mechanical building blocks}, doi = {10.25932/publishup-42956}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-429568}, school = {Universit{\"a}t Potsdam}, pages = {xi, 130}, year = {2019}, abstract = {The natural abundance of Coiled Coil (CC) motifs in cytoskeleton and extracellular matrix proteins suggests that CCs play an important role as passive (structural) and active (regulatory) mechanical building blocks. CCs are self-assembled superhelical structures consisting of 2-7 α-helices.
Self-assembly is driven by hydrophobic and ionic interactions, while the helix propensity of the individual helices contributes additional stability to the structure. As a direct result of this simple sequence-structure relationship, CCs serve as templates for protein design and sequences with a pre-defined thermodynamic stability have been synthesized de novo. Despite this quickly increasing knowledge and the vast number of possible CC applications, the mechanical function of CCs has been largely overlooked and little is known about how different CC design parameters determine the mechanical stability of CCs. Once available, this knowledge will open up new applications for CCs as nanomechanical building blocks, e.g. in biomaterials and nanobiotechnology. With the goal of shedding light on the sequence-structure-mechanics relationship of CCs, a well-characterized heterodimeric CC was utilized as a model system. The sequence of this model system was systematically modified to investigate how different design parameters affect the CC response when the force is applied to opposing termini in a shear geometry or separated in a zipper-like fashion from the same termini (unzip geometry). The force was applied using an atomic force microscope set-up and dynamic single-molecule force spectroscopy was performed to determine the rupture forces and energy landscape properties of the CC heterodimers under study. Using force as a denaturant, CC chain separation is initiated by helix uncoiling from the force application points. In the shear geometry, this allows uncoiling-assisted sliding parallel to the force vector or dissociation perpendicular to the force vector. Both competing processes involve the opening of stabilizing hydrophobic (and ionic) interactions. Also in the unzip geometry, helix uncoiling precedes the rupture of hydrophobic contacts. In a first series of experiments, the focus was placed on canonical modifications in the hydrophobic core and the helix propensity. Using the shear geometry, it was shown that both a reduced core packing and helix propensity lower the thermodynamic and mechanical stability of the CC; however, with different effects on the energy landscape of the system. A less tightly packed hydrophobic core increases the distance to the transition state, with only a small effect on the barrier height. This originates from a more dynamic and less tightly packed core, which provides more degrees of freedom to respond to the applied force in the direction of the force vector. In contrast, a reduced helix propensity decreases both the distance to the transition state and the barrier height. The helices are 'easier' to unfold and the remaining structure is less thermodynamically stable so that dissociation perpendicular to the force axis can occur at smaller deformations. Having elucidated how canonical sequence modifications influence CC mechanics, the pulling geometry was investigated in the next step. Using one and the same sequence, the force application points were exchanged and two different shear and one unzipping geometry were compared. It was shown that the pulling geometry determines the mechanical stability of the CC. Different rupture forces were observed in the different shear as well as in the unzipping geometries, suggesting that chain separation follows different pathways on the energy landscape. 
Whereas the difference between CC shearing and unzipping was anticipated and has also been observed for other biological structures, the observed difference for the two shear geometries was less expected. It can be explained by the structural asymmetry of the CC heterodimer. It is proposed that the direction of the α-helices, the different local helix propensities and the position of a polar asparagine in the hydrophobic core are responsible for the observed difference in the chain separation pathways. In combination, these factors are considered to influence the interplay between processes parallel and perpendicular to the force axis. To obtain more detailed insights into the role of helix stability, helical turns were reinforced locally using artificial constraints in the form of covalent and dynamic 'staples'. A covalent staple bridges two adjacent helical turns, thus protecting them against uncoiling. The staple was inserted directly at the point of force application in one helix or in the same terminus of the other helix, which did not experience the force directly. It was shown that preventing helix uncoiling at the point of force application reduces the distance to the transition state while slightly increasing the barrier height. This confirms that helix uncoiling is critically important for CC chain separation. When inserted into the second helix, this stabilizing effect is transferred across the hydrophobic core and protects the force-loaded turns against uncoiling. If both helices were stapled, no additional increase in mechanical stability was observed. When replacing the covalent staple with a dynamic metal-coordination bond, a smaller decrease in the distance to the transition state was observed, suggesting that the staple opens up while the CC is under load. Using fluorinated amino acids as another type of non-natural modification, it was investigated how the enhanced hydrophobicity and the altered packing at the interface influence CC mechanics. The fluorinated amino acid was inserted into one central heptad of one or both α-helices. It was shown that this substitution destabilized the CC thermodynamically and mechanically. Specifically, the barrier height was decreased and the distance to the transition state increased. This suggests that a possible stabilizing effect of the increased hydrophobicity is overruled by a disturbed packing, which originates from a poor fit of the fluorinated amino acid into the local environment. This in turn increases the flexibility at the interface, as also observed for the hydrophobic core substitution described above. In combination, this confirms that the arrangement of the hydrophobic side chains is an additional crucial factor determining the mechanical stability of CCs. In conclusion, this work shows that knowledge of the thermodynamic stability alone is not sufficient to predict the mechanical stability of CCs. It is the interplay between helix propensity and hydrophobic core packing that defines the sequence-structure-mechanics relationship. In combination, both parameters determine the relative contribution of processes parallel and perpendicular to the force axis, i.e. helix uncoiling and uncoiling-assisted sliding as well as dissociation. This new mechanistic knowledge provides insight into the mechanical function of CCs in tissues and paves the way for designing CCs with pre-defined mechanical properties.
The library of mechanically characterized CCs developed in this work is a powerful starting point for a wide spectrum of applications, ranging from molecular force sensors to mechanosensitive crosslinks in protein nanostructures and synthetic extracellular matrix mimics.}, language = {en} } @phdthesis{Littmann2024, author = {Littmann, Daniela-Christin}, title = {Large eddy simulations of the Arctic boundary layer around the MOSAiC drift track}, doi = {10.25932/publishup-62437}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-624374}, school = {Universit{\"a}t Potsdam}, pages = {xii, 110}, year = {2024}, abstract = {The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid-scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited area mode (ICON-LAM) with a horizontal grid-scale of 3 km. The aim of this thesis was the investigation of the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. The default settings in ICON-LEM prevent the model from representing the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The implemented sea-ice scheme in ICON does not include a snow layer on sea ice, which causes the sea-ice surface temperature to respond too slowly to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the implemented sea-ice parameterization in ICON was extended with an adapted heat capacity term. The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed due to biases in the downwelling long-wave radiation and the lack of complex surface structures, like leads. The large eddy resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux for different weather conditions. The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent interactions on small temporal and spatial scales and help to further develop parameterizations, also for application in regional and global models.}, language = {en} } @phdthesis{Lepre2023, author = {Lepre, Enrico}, title = {Nitrogen-doped carbonaceous materials for energy and catalysis}, doi = {10.25932/publishup-57739}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-577390}, school = {Universit{\"a}t Potsdam}, pages = {153}, year = {2023}, abstract = {Facing the environmental crisis, new technologies are needed to sustain our society. In this context, this thesis aims to describe the properties and applications of carbon-based sustainable materials. In particular, it reports the synthesis and characterization of a wide set of porous carbonaceous materials with high nitrogen content obtained from nucleobases.
These materials are used as cathodes for Li-ion capacitors, and a major focus is put on the cathode preparation, highlighting the oxidation resistance of nucleobase-derived materials. Furthermore, their catalytic properties for acid/base and redox reactions are described, pointing to the role of nitrogen speciation on their surfaces. Finally, these materials are used as supports for highly dispersed nickel loading, activating the materials for carbon dioxide electroreduction.}, language = {en} } @phdthesis{Lenz2016, author = {Lenz, Josefine}, title = {Thermokarst dynamics in central-eastern Beringia}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-101364}, school = {Universit{\"a}t Potsdam}, pages = {XII, 128, A-47}, year = {2016}, abstract = {Widespread landscape changes are presently observed in the Arctic and are most likely to accelerate in the future, in particular in permafrost regions which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia, the terrestrial land mass connecting Eurasia and North America during glacial sea-level low stands. In order to investigate development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the following research questions: 1. When did permafrost degradation and thermokarst lake development take place and what were enhancing and inhibiting environmental factors? 2. What are the dominant processes during thermokarst lake development and how are they reflected in proxy records? 3. How did, and still do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and the carbon cycle? Methods applied in this study are based upon a multi-proxy approach combining sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as analyses of stable isotopes and hydrochemistry of pore-water and ice. Modern field observations of water quality and basin morphometrics complete the environmental investigations. The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to Mid Wisconsin with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland development was terminated by the accumulation of a 1-meter-thick airfall tephra most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surrounding of the site which forms a yedoma upland till today. The thermokarst lake forming GG basin initiated 230 ± 30 cal a BP and drained in Spring 2005 AD. Four years after drainage the lake talik was still unfrozen below 268 cm depth. 
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP a shallow and more dynamic thermokarst lake developed with actively eroding shorelines and potentially intermediate shallow water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt\%) than in Mama Rhonda deposits (mean TOC of 7.9 wt\%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in different lake generations by thawing and mobilization of organic carbon into the lake system. Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed young thermokarst dynamics over about the last 1,400 years along a depositional gradient based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments, and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources from upland deposits and re-deposited sediments from drained thaw lake basins depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential of thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments. The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, the steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity in pore-water. Alternation of marine to freshwater ostracods and foraminifera confirms decreasing salinity as well but also reflects episodic re-deposition of allochthonous marine sediments. Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns.
This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.}, language = {en} } @phdthesis{Leins2023, author = {Leins, Johannes A.}, title = {Combining model detail with large scales}, doi = {10.25932/publishup-58283}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-582837}, school = {Universit{\"a}t Potsdam}, pages = {xv, 168}, year = {2023}, abstract = {The global climate crisis is significantly contributing to changing ecosystems, loss of biodiversity and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Unsustainable anthropogenic land use in particular is one of the driving factors, besides global heating, for these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be quickly and immediately corrected by human action. In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity in combination with differently scheduled mowing regimes on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linn{\´e} 1758) is used as a species of reference for the analyses. It inhabits wet meadows and marshes and has a limited, yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage. The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is a freely available and adjustable software that can be used to support conservation planning in cultivated grasslands. In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance in conservation planning in general, and insights regarding the LMG in particular. It became apparent that the detailed resolution of model processes and components is crucial to project the long-term effect of spatially and temporally confined events. Taking into account conservation measures at the regional level has further proven relevant, especially in light of the climate crisis. I found that the LMG is benefiting from global warming in principle, but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields. Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by subjective judgment and the experience of individual stakeholders. 
Even though it is in the nature of models that they are subject to limitations and only represent fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but also the implementation itself can be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.}, language = {en} } @phdthesis{Lauterbach2011, author = {Lauterbach, Stefan}, title = {Lateglacial to Holocene climatic and environmental changes in Europe : multi-proxy studies on lake sediments along a transect from northern Italy to northeastern Poland}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-58157}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {Sediment records of three European lakes were investigated in order to reconstruct the regional climate development during the Lateglacial and Holocene, to investigate the response of local ecosystems to climatic fluctuations and human impact and to relate regional peculiarities of past climate development to climatic changes on a larger spatial scale. The Lake Hańcza (NE Poland) sediment record was studied with a focus on reconstructing the early Holocene climate development and identifying possible differences to Western Europe. Following the initial Holocene climatic improvement, a further climatic improvement occurred between 10 000 and 9000 cal. a BP. Apparently, relatively cold and dry climate conditions persisted in NE Poland during the first ca. 1500 years of the Holocene, most likely due to a specific regional atmospheric circulation pattern. Prevailing anticyclonic circulation linked to a high-pressure cell above the remaining Scandinavian Ice Sheet (SIS) might have blocked the eastward propagation of warm and moist Westerlies and thus attenuated the early Holocene climatic amelioration in this region until the final decay of the SIS, a pattern different from climate development in Western Europe. The Lateglacial sediment record of Lake Mondsee (Upper Austria) was investigated in order to study the regional climate development and the environmental response to rapid climatic fluctuations. While the temperature rise and environmental response at the onset of the Holocene took place quasi-synchronously, major leads and lags in proxy responses characterize the onset of the Lateglacial Interstadial. In particular, the spread of coniferous woodlands and the reduction of detrital flux lagged the initial Lateglacial warming by ca. 500-750 years. Major cooling at the onset of the Younger Dryas took place synchronously with a change in vegetation, while the increase of detrital matter flux was delayed by about 150-300 years. Complex proxy responses are also detected for short-term Lateglacial climatic fluctuations. In summary, periods of abrupt climatic changes are characterized by complex and temporally variable proxy responses, mainly controlled by ecosystem inertia and the environmental preconditions. A second study on the Lake Mondsee sediment record focused on two small-scale climate deteriorations around 8200 and 9100 cal. a BP, which have been triggered by freshwater discharges to the North Atlantic, causing a shutdown of the Atlantic meridional overturning circulation (MOC). Combining microscopic varve counting and AMS 14C dating yielded a precise duration estimate (ca. 
150 years) and absolute dating of the 8.2 ka cold event, both being in good agreement with results from other palaeoclimate records. Moreover, a sudden temperature overshoot after the 8.2 ka cold event was identified, also seen in other proxy records around the North Atlantic. This was most likely caused by enhanced resumption of the MOC, which also initiated substantial shifts of oceanic and atmospheric front systems. Although there is also evidence from other proxy records for pronounced recovery of the MOC and atmospheric circulation changes after the 9.1 ka cold event, no temperature overshoot is seen in the Lake Mondsee record, indicating the complex behaviour of the global climate system. The Holocene sediment record of Lake Iseo (northern Italy) was studied to shed light on regional earthquake activity and the influence of climate variability and anthropogenic impact on catchment erosion and detrital flux into the lake. Frequent small-scale detrital layers within the sediments reflect allochthonous sediment supply by extreme surface runoff events. During the early to mid-Holocene, increased detrital flux coincides with periods of cold and wet climate conditions, thus apparently being mainly controlled by climate variability. In contrast, intervals of high detrital flux during the late Holocene partly also correlate with phases of increased human impact, reflecting the complex influences on catchment erosion processes. Five large-scale event layers within the sediments, which are composed of mass-wasting deposits and turbidites, are supposed to have been triggered by strong local earthquakes. While the uppermost of these event layers is assigned to a documented adjacent earthquake in AD 1222, the four other layers are supposed to be related to previously undocumented prehistoric earthquakes.}, language = {en} } @phdthesis{Latnikova2012, author = {Latnikova, Alexandra}, title = {Polymeric capsules for self-healing anticorrosion coatings}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-60432}, school = {Universit{\"a}t Potsdam}, year = {2012}, abstract = {The present work is devoted to establishing a new generation of self-healing anti-corrosion coatings for the protection of metals. The concept of self-healing anticorrosion coatings is based on the combination of the passive part, represented by the matrix of a conventional coating, and the active part, represented by micron-sized capsules loaded with corrosion inhibitor. Polymers were chosen as the class of compounds most suitable for the capsule preparation. The morphology of capsules made of crosslinked polymers, however, was found to be dependent on the nature of the encapsulated liquid. Therefore, a systematic analysis of the morphology of capsules consisting of a crosslinked polymer and a solvent was performed. Three classes of polymers, namely polyurethane, polyurea and polyamide, were chosen. Capsules made of these polymers and eight solvents of different polarity were synthesized via interfacial polymerization. It was shown that the morphology of the resulting capsules is specific for every polymer-solvent pair. Formation of capsules with three general types of morphology, namely core-shell, compact and multicompartment, was demonstrated by means of Scanning Electron Microscopy. Compact morphology was assumed to be a result of the specific polymer-solvent interactions and to be analogous to the process of swelling.
In order to verify the hypothesis, pure polyurethane, polyurea and polyamide were synthesized; their swelling behavior in the solvents used as the encapsulated material was investigated. It was shown that the swelling behavior of the polymers in most cases correlates with the capsule morphology. Different morphologies (compact, core-shell and multicompartment) were therefore attributed to the specific polymer-solvent interactions and discussed in terms of "good" and "poor" solvents. Capsules with core-shell morphology are formed when the encapsulated liquid is a "poor" solvent for the chosen polymer, while compact morphologies are formed when the solvent is "good". Multicompartment morphology is explained by the formation of infinite networks or gelation of crosslinked polymers. If gelation occurs after the phase separation in the system is achieved, core-shell morphology is present. If gelation of the polymer occurs far before crosslinking is accomplished, further condensation of the polymer due to the crosslinking may lead to the formation of porous or multicompartment morphologies. It was concluded that, in general, the morphology of capsules consisting of certain polymer-solvent pairs can be predicted on the basis of polymer-solvent behavior. In some cases, the swelling behavior and morphology may not match. The reasons for that are discussed in detail in the thesis. The discussed approach is only capable of predicting capsule morphology for certain polymer-solvent pairs. In practice, the design of the capsules assumes the trial of a great number of polymer-solvent combinations; more complex systems consisting of three, four or even more components are often used. Evaluation of the swelling behavior of each component pair of such systems becomes impractical. Therefore, exploitation of the solubility parameter approach was found to be more useful. The latter allows consideration of the properties of each single component instead of the pair of components. In such a manner, the Hansen Solubility Parameter (HSP) approach was used for further analysis. Solubility spheres were constructed for polyurethane, polyurea and polyamide. For this, a three-dimensional graph is plotted with the dispersion, polar and hydrogen-bonding components of the solubility parameter, obtained from the literature, as the orthogonal axes. The HSP of the solvents are used as the coordinates for the points on the HSP graph. Then a sphere with a certain radius is located on the graph, and the "good" solvents would be located inside the sphere, while the "poor" ones are located outside. Both the location of the sphere center and the sphere radius should be fitted according to the information on polymer swelling behavior in a number of solvents. According to the existing correlation between the capsule morphology and swelling behavior of polymers, the solvents located inside the solubility sphere of a polymer give capsules with compact morphologies. The solvents located outside the solubility sphere of the polymer give either core-shell or multicompartment capsules in combination with the chosen polymer. Once the solubility sphere of a polymer is found, its solubility/swelling behavior can be extrapolated to all possible substances. HSP theory therefore allows prediction of polymer solubility/swelling behavior and consequently the capsule morphology for any given substance with known HSP parameters on the basis of limited data.
This is what makes the theory so attractive for applications in chemistry and technology, since the choice of the system components is usually performed on the basis of a large number of different parameters that should mutually match. Even a slight change in the technology sometimes makes it necessary to find an analogue of a given solvent with similar solvency but different chemistry. In such cases, the HSP approach is indispensable. In the second part of the work, examples of applying the HSP approach to the fabrication of capsules with on-demand morphology are presented. Capsules with compact or core-shell morphology containing corrosion inhibitors were synthesized. Thus, alkoxysilanes possessing a long hydrophobic tail, combining passivating and water-repelling properties, were encapsulated in a polyurethane shell. The mechanism of action of the active material required a core-shell morphology of the capsules. The new hybrid corrosion inhibitor, cerium diethylhexyl phosphate, was encapsulated in polyamide shells in order to facilitate the dispersion of the substance and improve its adhesion to the coating matrix. The encapsulation of commercially available antifouling agents in polyurethane shells was carried out in order to control their release behavior and colloidal stability. Capsules with compact morphology made of polyurea containing the liquid corrosion inhibitor 2-methyl benzothiazole were synthesized in order to improve the colloidal stability of the substance. Capsules with compact morphology allow a slower release of the encapsulated liquid compared to core-shell ones. For cases where "in-situ" encapsulation is not possible due to the reaction of the oil-soluble monomer with the encapsulated material, a solution was proposed: capsules of the desired morphology are preformed first, and loading is performed only after the monomer has been deactivated by completion of the polymerization reaction. In this way, compact polyurea capsules containing the highly effective but chemically active corrosion inhibitors 8-hydroxyquinoline and benzotriazole were fabricated. All the resulting capsules were successfully introduced into model coatings. The efficiency of the resulting "smart" self-healing anticorrosion coatings on steel and on an aluminium alloy of the AA-2024 series was evaluated using characterization techniques such as the Scanning Vibrating Electrode Technique, Electrochemical Impedance Spectroscopy and salt-spray chamber tests.}, language = {en} } @phdthesis{Kunkel2023, author = {Kunkel, Stefanie}, title = {Green industry through industry 4.0? Expected and observed effects of digitalisation in industry for environmental sustainability}, doi = {10.25932/publishup-61395}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-613954}, school = {Universit{\"a}t Potsdam}, pages = {vii, 168}, year = {2023}, abstract = {Digitalisation in industry - also called "Industry 4.0" - is seen by numerous actors as an opportunity to reduce the environmental impact of the industrial sector. The scientific assessments of the effects of digitalisation in industry on environmental sustainability, however, are ambivalent. This cumulative dissertation uses three empirical studies to examine the expected and observed effects of digitalisation in industry on environmental sustainability. 
The aim of this dissertation is to identify opportunities and risks of digitalisation at different system levels and to derive options for action in politics and industry for a more sustainable design of digitalisation in industry. I use an interdisciplinary, socio-technical approach and look at selected countries of the Global South (Study 1) and the example of China (all studies). In the first study (section 2, joint work with Marcel Matthess), I use qualitative content analysis to examine digital and industrial policies from seven different countries in Africa and Asia for expectations regarding the impact of digitalisation on sustainability and compare these with the potentials of digitalisation for sustainability in the respective country contexts. The analysis reveals that the documents express a wide range of vague expectations that relate more to positive indirect impacts of information and communication technology (ICT) use, such as improved energy efficiency and resource management, and less to negative direct impacts of ICT, such as electricity consumption through ICT. In the second study (section 3, joint work with Marcel Matthess, Grischa Beier and Bing Xue), I conduct and analyse interviews with 18 industry representatives of the electronics industry from Europe, Japan and China on digitalisation measures in supply chains using qualitative content analysis. I find that while there are positive expectations regarding the effects of digital technologies on supply chain sustainability, their actual use and observable effects are still limited. Interview partners could provide only a few examples from their own companies in which sustainability goals have already been pursued through digitalisation of the supply chain or in which sustainability effects, such as resource savings, have been demonstrably achieved. In the third study (section 4, joint work with Peter Neuh{\"a}usler, Melissa Dachrodt and Marcel Matthess), I conduct an econometric panel data analysis. I examine the relationship between the degree of Industry 4.0, energy consumption and energy intensity in ten manufacturing sectors in China between 2006 and 2019. The results suggest that overall, there is no significant relationship between the degree of Industry 4.0 and energy consumption or energy intensity in manufacturing sectors in China. However, differences can be found in subgroups of sectors. I find a negative correlation between Industry 4.0 and energy intensity in highly digitalised sectors, indicating an efficiency-enhancing effect of Industry 4.0 in these sectors. On the other hand, there is a positive correlation between Industry 4.0 and energy consumption for sectors with low energy consumption, which could be explained by the fact that digitalisation, such as the automation of previously mainly labour-intensive sectors, requires energy and also induces growth effects. In the discussion section (section 6) of this dissertation, I use the classification scheme of the three levels macro, meso and micro, as well as of direct and indirect environmental effects, to classify the empirical observations into opportunities and risks, for example with regard to the probability of rebound effects of digitalisation at the three levels. I link the investigated actor perspectives (policy makers, industry representatives), statistical data and additional literature across the system levels and consider political economy aspects to suggest fields of action for more sustainable (digitalised) industries. 
The dissertation thus makes two overarching contributions to the academic and societal discourse. First, my three empirical studies expand the limited state of research at the interface between digitalisation in industry and sustainability, especially by considering selected countries in the Global South and the example of China. Second, exploring the topic through data and methods from different disciplinary contexts and taking a socio-technical point of view enables an analysis of (path) dependencies, uncertainties and interactions in the socio-technical system across different system levels, which have often not been sufficiently considered in previous studies. The dissertation thus aims to create a scientifically and practically relevant knowledge base for a value-guided, sustainability-oriented design of digitalisation in industry.}, language = {en} } @phdthesis{Kugel2005, author = {Kugel, Rudolf}, title = {Ein Beitrag zur Problematik der Integration virtueller Maschinen}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-7195}, school = {Universit{\"a}t Potsdam}, year = {2005}, abstract = {Modern software systems are complex constructs that are frequently deployed in combination with other technical and business systems. For the manufacturers of such systems, it is often a major challenge to meet the frequently far-reaching requirements regarding the adaptability of such systems. To fulfil these requirements, it has proven successful in many cases to integrate a virtual machine into the system in question. The dissertation is addressed in particular to people who face the task of integrating virtual machines into existing systems, and it aims to clearly present the interrelations that are important for decisions on integration questions. Typically, a number of different problems arise when a virtual machine is integrated into a system. Since these problems are often closely intertwined, considering them in isolation is usually not meaningful. The problems are therefore introduced by means of a centrally chosen, very extensive example from industrial practice. This example concerns the integration of the "Java Virtual Machine" into the SAP R/3 Application Server. Following this practical example, the discussion of the integration problems is deepened with reference to a selection of further integration examples described in the literature. The main difficulty in treating the integration problems was that the available descriptions of the systems used as examples were only partially suitable as a basis for examining these problems. To create a usable basis for discussion, it was therefore necessary to carry out a homogeneous, consistent modelling of these systems. The systems were modelled using the "Fundamental Modeling Concepts (FMC)". The models created, as well as the comparison of the different approaches for solving typical integration problems carried out on the basis of these models, constitute the main contribution of the dissertation. In connection with the integration of virtual machines into existing systems, there is frequently a need to have several "programs" executed simultaneously by the integrated virtual machine. 
Given the design characteristics of many virtual machines in widespread use today, realising a "resource-conserving multi-program operation" poses a major challenge. The presentation of the spectrum of measures for realising such a "resource-conserving multi-program operation" constitutes a second substantial contribution of the dissertation.}, subject = {Virtuelle Maschine}, language = {de} } @phdthesis{Kruse2023, author = {Kruse, Marlen}, title = {Characterization of biomolecules and their interactions using electrically controllable DNA nanolevers}, doi = {10.25932/publishup-57738}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-577384}, school = {Universit{\"a}t Potsdam}, pages = {100, xxii}, year = {2023}, abstract = {In this work, binding interactions between biomolecules were analyzed by a technique that is based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible with the binding sites of the viral surface proteins. Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment. When the coronavirus pandemic broke out in 2020, work on binding interactions of a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in complete loss of binding. Interactions of the peptide and inactivated virus material as well as pseudo virus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies for their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device. The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm. A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime. Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions could be demonstrated on multiple examples. While there are challenges that remain, the setup enables the determination of affinities between viruses and receptors in their native environment. 
In particular, the possibilities regarding the quantification of oligo- and multivalent binding interactions could be demonstrated.}, language = {en} } @phdthesis{Krause2011, author = {Krause, Jette}, title = {An expert-based Bayesian investigation of greenhouse gas emission reduction options for German passenger vehicles until 2030}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-57671}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {The present thesis introduces an iterative expert-based Bayesian approach for assessing greenhouse gas (GHG) emissions from the 2030 German new vehicle fleet and quantifying the impacts of their main drivers. A first set of expert interviews has been carried out in order to identify technologies which may help to lower car GHG emissions and to quantify their emission reduction potentials. Moreover, experts were asked for their probability assessments that the different technologies will be widely adopted, as well as for important prerequisites that could foster or hamper their adoption. Drawing on the results of these expert interviews, a Bayesian Belief Network (BBN) has been built which explicitly models three vehicle types: Internal Combustion Engine Vehicles (which include mild and full Hybrid Electric Vehicles), Plug-In Hybrid Electric Vehicles, and Battery Electric Vehicles. The conditional dependencies of twelve central variables within the BBN - battery energy, fuel and electricity consumption, relative costs, and sales shares of the vehicle types - have been quantified by experts from German car manufacturers in a second series of interviews. Each of the seven second-round interviews results in an expert's individually specified BBN. The BBNs have been run for different hypothetical 2030 scenarios which differ, e.g., in regard to battery development, regulation, and fuel and electricity GHG intensities. The present thesis delivers results both in regard to the subject of the investigation and in regard to its method. On the subject level, it has been found that the different experts expect 2030 German new car fleet emissions to be at 50 to 65\% of 2008 new fleet emissions under the baseline scenario. They can be further reduced to 40 to 50\% of the emissions of the 2008 fleet through a combination of a higher share of renewables in the electricity mix, a larger share of biofuels in the fuel mix, and a stricter regulation of car CO\$_2\$ emissions in the European Union. Technically, 2030 German new car fleet GHG emissions can be reduced to a minimum of 18 to 44\% of 2008 emissions, a development which cannot be triggered by any combination of measures modeled in the BBN alone but needs further commitment. Out of a wealth of existing BBNs, few have been specified by individual experts through elicitation, and to my knowledge, none of them has been employed for analyzing perspectives for the future. On the level of methods, this work shows that expert-based BBNs are a valuable tool for making experts' expectations for the future explicit and amenable to the analysis of different hypothetical scenarios. BBNs can also be employed for quantifying the impacts of main drivers. 
They have been demonstrated to be a valuable tool for iterative stakeholder-based science approaches.}, language = {en} } @phdthesis{Kolk2019, author = {Kolk, Jens}, title = {The long-term legacy of historical land cover changes}, doi = {10.25932/publishup-43939}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-439398}, school = {Universit{\"a}t Potsdam}, pages = {196}, year = {2019}, abstract = {Over the last years, there has been an increasing awareness that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity due to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect and their actual magnitudes or payments have rarely been quantified because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes. In this thesis we aimed to determine patterns of herb-layer species richness and recent species richness dynamics of forest herb-layer plants and to link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE Germany, where the forest distribution remained stable for the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 per cent (leaving only remnants of "ancient forests") and b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb-layer species richness compared to recent environmental factors and determined the magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively. We showed that present-day species richness patterns were still shaped by historical land cover changes that reach back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities and clonals in ancient forests than in post-agricultural forests. Those species richness differences were largely attributable to a colonization credit in post-agricultural forests that ranged up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies from historical agricultural land use played a minor role for species richness differences. Instead, patch connectivity was most important. Species richness in ancient forests was still dependent on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. 
In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was considerably smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 to 1.5 species over the last six decades.}, language = {en} } @phdthesis{Kluth2012, author = {Kluth, Oliver}, title = {Einfluss von Glucolipotoxizit{\"a}t auf die Funktion der β-Zellen diabetessuszeptibler und -resistenter Mausst{\"a}mme}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-61961}, school = {Universit{\"a}t Potsdam}, year = {2012}, abstract = {The aim of the present work was to investigate the effects of glucose and lipid toxicity on the function of the β-cells of the islets of Langerhans in a diabetes-resistant (B6.V-Lepob/ob, ob/ob) and a diabetes-susceptible (New Zealand Obese, NZO) mouse model. Molecular mechanisms were to be identified that lead to the loss of β-cells in the NZO mouse or contribute to the protection of the β-cells of the ob/ob mouse. First, obesity (lipid toxicity) was induced in both models by a suitable dietary regime based on carbohydrate-restricted feeding, and a state of glucolipotoxicity was subsequently generated by feeding a carbohydrate-containing diet. This procedure made it possible to trigger hyperglycaemia as well as β-cell death by apoptosis in the NZO mouse within a short time window. In comparison, ob/ob mice remained normoglycaemic over a longer period and showed no β-cell loss. The cause of the β-cell loss was the inactivation of the insulin/IGF-1 receptor signalling pathway, as shown by the decrease of phospho-AKT, phospho-FoxO1 and the β-cell-specific transcription factor PDX1. With the exception of the effect of a dephosphorylation of FoxO1, ob/ob mice were able to maintain this signalling pathway and thereby avert a loss of β-cells. The glucolipotoxic effects were confirmed in vitro on isolated islets of both strains and on the β-cell line MIN6, showing that only the combination of high glucose and palmitate concentrations (glucolipotoxicity) had negative effects on NZO islets and MIN6 cells, while ob/ob islets remained protected. The investigation of isolated islets revealed that, under glucolipotoxic conditions, both strains show no increase in insulin expression and do not differ with regard to their glucose-stimulated insulin secretion. Microarray and immunohistological analyses showed that only ob/ob mice exhibited a compensatory, transient induction of β-cell proliferation after carbohydrate feeding, which resulted in an almost threefold increase of the islet mass after 32 days. The results obtained here allow the conclusion that the β-cell loss of the NZO mouse can be attributed to an impairment of the insulin/IGF-1 receptor signalling pathway and to the inability to undergo β-cell proliferation. 
Umgekehrt erm{\"o}glichen der Erhalt des Insulin/IGF-1-Rezeptor-Signalwegs und die Induktion der β-Zellproliferation in der ob/ob-Maus den Schutz vor einer Hyperglyk{\"a}mie und einem Diabetes.}, language = {de} } @phdthesis{Kiss2024, author = {Kiss, Andrea}, title = {Moss-associated bacterial and archaeal communities of northern peatlands: key taxa, environmental drivers and potential functions}, doi = {10.25932/publishup-63064}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-630641}, school = {Universit{\"a}t Potsdam}, pages = {XX, 139, liv}, year = {2024}, abstract = {Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown. In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones. The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and displayed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, of which many appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea which were not studied in association with peatland mosses so far. Euryarchaeota that are capable of methane production (methanogens) displayed the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. Contrarily, substantial moss-associated methane oxidation was measured on both, brown mosses and Sphagnum mosses, supporting that methanotrophic bacteria as part of the moss microbiome may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere. Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified to have a major impact on the structure of moss-associated bacterial communities, contrarily to archaeal communities whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type. The present thesis displays the first large-scale, systematic assessment of bacterial and archaeal communities associated both with brown mosses and Sphagnum mosses. 
It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.}, language = {en} } @phdthesis{Kim2023, author = {Kim, Jiyong}, title = {Synthesis of InP quantum dots and their applications}, doi = {10.25932/publishup-58535}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-585351}, school = {Universit{\"a}t Potsdam}, pages = {XIX, 142}, year = {2023}, abstract = {Technologically important, environmentally friendly InP quantum dots (QDs), typically used as green and red emitters in display devices, can achieve exceptional photoluminescence quantum yields (PL QYs) of near-unity (95-100\%) when the state-of-the-art core/shell heterostructure of the ZnSe inner/ZnS outer shell is elaborately applied. Nevertheless, this has so far led to only a few industrial applications, such as QD liquid crystal displays (QD-LCDs) applied to blue backlight units, even though QDs offer many possibilities for realizing industrially feasible applications, such as QD light-emitting diodes (QD‒LEDs) and luminescent solar concentrators (LSCs), due to their functionalizable characteristics. Before introducing the main research, the theoretical basis and fundamentals of QDs are described in detail on the basis of quantum mechanics and experimental synthetic results, where the concepts of QDs and colloidal QDs, the type-I core/shell structure, transition-metal-doped semiconductor QDs, the surface chemistry of QDs, and their applications (LSC, QD‒LEDs, and EHD jet printing) are sequentially elucidated for better understanding. This doctoral thesis mainly focuses on the connectivity between QD materials and QD devices, based on the synthesis of InP QDs that are composed of an inorganic core (core/shell heterostructure) and an organic shell (surface ligands on the QD surface). In particular, as for the former (core/shell heterostructure), a ZnCuInS mid-shell is newly introduced as an intermediate layer between a Cu-doped InP core and a ZnS shell for LSC devices. As for the latter (surface ligands), the ligand effects of 1-octanethiol and chloride ions are investigated with regard to device stability in QD‒LEDs and the printability of the electro-hydrodynamic (EHD) jet printing system, in which this research explores the behavior of surface ligands based on a proton transfer mechanism on the QD surface. Chapter 3 demonstrates the synthesis of strain-engineered, highly emissive Cu:InP/Zn-Cu-In-S (ZCIS)/ZnS core/shell/shell heterostructure QDs via a one-pot approach. When this unconventional combination of a ZCIS/ZnS double shelling scheme is introduced to a series of Cu:InP cores with different sizes, the resulting Cu:InP/ZCIS/ZnS QDs with a tunable near-IR PL range of 694-850 nm yield the highest-ever PL QYs of 71.5-82.4\%. These outcomes strongly point to the efficacy of the ZCIS interlayer, which effectively alleviates the core/shell interfacial strain, toward high emissivity. The presence of such an intermediate ZCIS layer is further examined by comparative size, structural, and compositional analyses. The end of this chapter briefly introduces the research related to LSC devices fabricated from Cu:InP/ZCIS/ZnS QDs, which is currently in progress. Chapter 4 mainly deals with the ligand effect of 1-octanethiol passivation of InP/ZnSe/ZnS QDs in terms of incomplete surface passivation during synthesis. 
This chapter demonstrates the lack of anionic carboxylate ligands on the surface of InP/ZnSe/ZnS quantum dots (QDs), where zinc carboxylate ligands can be converted to carboxylic acid or carboxylate ligands via proton transfer by 1-octanethiol. The as-synthesized QDs initially have an under-coordinated vacancy surface, which is passivated by solvent ligands such as ethanol and acetone. Upon exposure of the QD surface to 1-octanethiol, 1-octanethiol effectively induces the surface binding of anionic carboxylate ligands (derived from zinc carboxylate ligands) by proton transfer, which consequently exchanges the ethanol and acetone ligands bound on the incomplete QD surface. Systematic chemical analyses, such as thermogravimetric analysis‒mass spectrometry and proton nuclear magnetic resonance spectroscopy, directly show the interplay of surface ligands and relate it to QD light-emitting diodes (QD‒LEDs). Chapter 5 shows the relation between the material stability of QDs and the device stability of QD‒LEDs through the investigation of surface chemistry and shell thickness. In typical III-V colloidal InP quantum dots (QDs), an inorganic ZnS outermost shell is used to provide stability when overcoated onto the InP core. However, this work presents a faster photo-degradation of InP/ZnSe/ZnS QDs with a thicker ZnS shell than of those with a thin ZnS shell when 1-octanethiol was applied as a sulfur source to form the ZnS outermost shell. Herein, 1-octanethiol induces the formation of weakly bound carboxylate ligands via proton transfer on the QD surface, resulting in faster degradation under UV light even though a thicker ZnS shell was formed onto the InP/ZnSe QDs. Detailed insight into the surface chemistry was obtained from proton nuclear magnetic resonance spectroscopy and thermogravimetric analysis-mass spectrometry. However, the lifetimes of the electroluminescence devices fabricated from InP/ZnSe/ZnS QDs with a thick or a thin ZnS shell surprisingly show the opposite trend to the material stability of the QDs: the QD light-emitting diodes (QD‒LEDs) with thick-ZnS-shelled QDs maintained their luminance more stably than those with thin-ZnS-shelled QDs. This study elucidates the degradation mechanisms of the QDs and the QD light-emitting diodes based on these results and discusses why the material stability of QDs differs from the lifetime of QD‒LEDs. Chapter 6 suggests a method for improving the printability of EHD jet printing when QD materials are applied to QD ink formulations; here, this work introduces the application of GaP mid-shelled InP QDs and the role of surface charge in the EHD jet printing technique. In general, a GaP intermediate shell has been introduced into III-V colloidal InP quantum dots (QDs) to enhance their thermal stability and quantum efficiency, as in the case of type-I core/shell/shell heterostructure InP/GaP/ZnSeS QDs. Herein, these highly luminescent InP/GaP/ZnSeS QDs were synthesized and applied to EHD jet printing, by which this study demonstrates that unreacted Ga and Cl ions on the QD surface reduce the operating voltage of the cone jet and stabilize cone-jet formation, respectively. This result indicates that the GaP intermediate shell not only improves the PL QY and thermal stability of InP QDs but also adjusts the critical flow rate required for cone-jet formation. In other words, the surface charges of quantum dots can play a significant role in forming the cone apex in the EHD capillary nozzle. 
For an industrially convenient validation of the surface charges on the QD surface, zeta potential analyses of QD solutions were performed as a simple method, along with inductively coupled plasma optical emission spectrometry (ICP-OES) for the elemental composition. Beyond the generation of highly emissive InP QDs with narrow FWHM, these studies address the connection between QD materials and QD devices, not only to provide a vital jumping-off point for industrially feasible applications but also to reveal, from chemical and physical standpoints, the origins that obstruct the improvement of device performance experimentally and theoretically.}, language = {en} } @phdthesis{Khosravi2023, author = {Khosravi, Sara}, title = {The effect of new turbulence parameterizations for the stable surface layer on simulations of the Arctic climate}, doi = {10.25932/publishup-64352}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-643520}, school = {Universit{\"a}t Potsdam}, pages = {XIV, 119}, year = {2023}, abstract = {Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice, which can intricately influence mid-latitude atmospheric circulation through tropo- and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging. Model deficiencies in replicating observed Arctic climate processes often arise due to inadequacies in representing turbulent boundary layer interactions that determine the exchange between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes. This thesis focuses on a modified representation of Arctic atmospheric processes and on understanding their resulting impact on large-scale mid-latitude atmospheric circulation within climate models. The improved turbulence parameterizations, recently developed based on Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, the overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes. Through a step-wise implementation of the mentioned parameterizations into ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, it is evident that both modifications consistently lead to a general decrease in the heat transfer coefficient, which is in close agreement with the observations. Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions. The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on the specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, which is in line with the observations. 
Models employing commonly used surface turbulence parameterizations were shown to have difficulties replicating the near-zero sensible heat flux in strongly stable stratification. I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state. Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss. The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss is evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions. The simulation with adjusted surface turbulence parameterizations better reproduced the vertical extent of the observed Arctic tropospheric warming, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages. Specifically, an increased occurrence of the Scandinavian-Ural blocking regime (negative phase of the North Atlantic Oscillation) in early (late) winter is observed. Overall, it can be inferred that improving turbulence parameterizations at the surface layer can improve ECHAM6's response to sea ice loss.}, language = {en} } @phdthesis{Ketzer2024, author = {Ketzer, Laura}, title = {The impact of stellar activity evolution on atmospheric mass loss of young exoplanets}, doi = {10.25932/publishup-62681}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-626819}, school = {Universit{\"a}t Potsdam}, pages = {x, 208}, year = {2024}, abstract = {The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 - 10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance. 
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap are not significantly affected by the spread in stellar activity tracks. However, the spread can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.}, language = {en} } @phdthesis{Kerutt2019, author = {Kerutt, Josephine Victoria}, title = {The high-redshift voyage of Lyman alpha and Lyman continuum emission as told by MUSE}, doi = {10.25932/publishup-47881}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-478816}, school = {Universit{\"a}t Potsdam}, pages = {152}, year = {2019}, abstract = {Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at a redshift of roughly 6 < z < 10, or ~10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation, which are a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked and are both studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe. Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). 
After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and the determination of their redshifts is a crucial first step towards an analysis of their properties. Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, as well as the double peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account and construct a more independent EW distribution function that better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each with the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs (EW_0 > 240A) of ~20\%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1\%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 +- 110A, which hints at an unusual underlying stellar population, possibly with a very low metallicity. With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or from the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in the Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80\%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen-ionising emission at the epoch of reionisation and beyond.}, language = {en} } @phdthesis{Kellermann2011, author = {Kellermann, Thorsten}, title = {Accurate numerical relativity simulations of non-vacuum space-times in two dimensions and applications to critical collapse}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-59578}, school = {Universit{\"a}t Potsdam}, year = {2011}, abstract = {This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. 
In the first step, a new numerical framework, the Whisky2D code, will be developed, which solves the relativistic equations of hydrodynamics in axisymmetry. Therefore, we consider an improved formulation of the conserved form of these equations. The second part will use the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we will investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in critical processes, should provide a deeper understanding of thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In previous works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to the one of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level. Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P - P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other and varied the mass of the stars, their separation, velocity and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative solution being a nonlinearly oscillating star. 
In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter. Also in this case they found the appearance of type-I critical behaviour, but they also performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found in the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and in particular not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with what was found in earlier works, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena and we show that it exhibits oscillations in a similar way to the one studied in the context of scalar-field critical collapse.}, language = {en} } @phdthesis{Kegelmann2019, author = {Kegelmann, Lukas}, title = {Advancing charge selective contacts for efficient monolithic perovskite-silicon tandem solar cells}, doi = {10.25932/publishup-42642}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-426428}, school = {Universit{\"a}t Potsdam}, pages = {v, 155}, year = {2019}, abstract = {Hybrid organic-inorganic perovskites are one of the most promising material classes for photovoltaic energy conversion. In solar cells, the perovskite absorber is sandwiched between n- and p-type contact layers which selectively transport electrons and holes to the cell's cathode and anode, respectively. This thesis aims to advance contact layers in perovskite solar cells and to unravel the impact of interface and contact properties on the device performance. Further, the contact materials are applied in monolithic perovskite-silicon heterojunction (SHJ) tandem solar cells, which can overcome the single-junction efficiency limits and attract increasing attention. Therefore, all contact layers must be highly transparent to foster light harvesting in the tandem solar cell design. Besides, the SHJ device restricts processing temperatures for the selective contacts to below 200°C. A comparative study of various electron-selective contact materials, all processed below 180°C, in n-i-p type perovskite solar cells highlights that selective contacts and their interfaces to the absorber govern the overall device performance. Combining fullerenes and metal oxides in a TiO2/PC60BM (phenyl-C60-butyric acid methyl ester) double-layer contact allows merging good charge extraction with minimized interface recombination. The layer sequence thereby achieved high stabilized solar cell performances up to 18.0\% and negligible current-voltage hysteresis, an otherwise pronounced phenomenon in this device design. Double-layer structures are therefore emphasized as a general concept to establish efficient and highly selective contacts. Based on this success, the concept of combining desired properties of different materials is transferred to the p-type contact. 
Here, a mixture of the small molecule Spiro-OMeTAD [2,2',7,7'-tetrakis(N,N-di-p-methoxyphenylamine)-9,9'-spirobifluoren] and the doped polymer PEDOT [poly(3,4-ethylenedioxythiophene)] is presented as a novel hole-selective contact. PEDOT thereby remarkably suppresses charge recombination at the perovskite surface, allowing an increase of the quasi-Fermi level splitting in the absorber. Further, the addition of Spiro-OMeTAD into the PEDOT layer is shown to enhance charge extraction at the interface and to allow high efficiencies up to 16.8\%. Finally, the knowledge on contact properties is applied to monolithic perovskite-SHJ tandem solar cells. The main goal is to optimize the top contact stack of doped Spiro-OMeTAD/molybdenum oxide (MoOx)/ITO towards higher transparency by two different routes. First, fine-tuning of the ITO deposition to mitigate the chemical reduction of MoOx and increase the transmittance of MoOx/ITO stacks by 25\%. Second, replacing Spiro-OMeTAD with the alternative hole transport materials PEDOT/Spiro-OMeTAD mixtures, CuSCN or PTAA [poly(triaryl amine)]. Experimental results determine layer thickness constraints and validate optical simulations, which subsequently allow a realistic estimation of the respective tandem device performances. As a result, PTAA represents the most promising replacement for Spiro-OMeTAD, with a projected increase of the optimum tandem device efficiency for the architecture used herein by 2.9\% relative to 26.5\% absolute. The results also reveal general guidelines for further performance gains of the technology.}, language = {en} } @phdthesis{Jongejans2022, author = {Jongejans, Loeka Laura}, title = {Organic matter stored in ice-rich permafrost}, doi = {10.25932/publishup-56491}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-564911}, school = {Universit{\"a}t Potsdam}, pages = {xxiii, 178}, year = {2022}, abstract = {The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which enhances climate warming. An assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is highly needed. My research revolves around the question "how does permafrost thaw affect its OM storage?" More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of OM of frozen and (previously) thawed sediments. I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost. I showed that the biomarker proxies are useful to assess the source of the OM and to distinguish between OM derived from terrestrial higher plants, aquatic plants and microbial activity. 
In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. The greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had become established in the recently thawed sediments, but not yet in the still-frozen sediments. My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of and processes in deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.}, language = {en} } @phdthesis{Jiang2007, author = {Jiang, Chunyan}, title = {Multi-visualization and hybrid segmentation approaches within telemedicine framework}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-12829}, school = {Universit{\"a}t Potsdam}, year = {2007}, abstract = {Advances in information technology have changed many aspects of our lives. In the health care field, computer-integrated devices allow us to obtain, manage and communicate high-quality, large volumetric image data to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing and communicating such image data. They fall into my three research areas: telemedicine integration, medical image visualization and image segmentation. These methods are also demonstrated by the demo software that I developed. One of my research points focuses on medical information storage standards in telemedicine, for example DICOM, which is the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which was lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization that offer different display modes, both 2D and 3D, for example cutting through the data volume at an arbitrary angle, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for semi-automated segmentation of radiological images such as CT and MRI, is proposed in this thesis to extract the organ or region of interest from the image. This approach takes advantage of both region-based and boundary-based methods. The hybrid approach consists of three steps: the first step obtains a coarse segmentation by fuzzy affinity and generates a homogeneity operator; the second step divides the image by a Voronoi diagram and reclassifies the regions with the operator to refine the segmentation from the previous step; the third step handles vague boundaries with a level-set model. 
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for segmentation information storage, visualization of multimodal image information, and extension of the segmentation approach to higher dimensions.}, language = {en} } @phdthesis{Jentsch2021, author = {Jentsch, Anna}, title = {Soil gas analytics in geothermal exploration and monitoring}, doi = {10.25932/publishup-54403}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-544039}, school = {Universit{\"a}t Potsdam}, pages = {xxxi, 162}, year = {2021}, abstract = {Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas emissions can be released at the Earth's surface, often related to these fluid-releasing structures. In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters in order to understand the structural control on fluid flow. To validate their potential as innovative geothermal exploration and monitoring tools, these methodological approaches were applied to three different volcanic-geothermal systems. At each site, an individual survey design was developed according to the site-specific questions. The first study presents results of the combined measurement of CO2 flux, ground temperatures, and the analysis of isotope ratios (δ13CCO2, 3He/4He) across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374°C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent sampling with high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalous degassing areas. These have been related to the internal fault hydraulic architecture and allowed favourable structural settings for fluid flow, such as fault intersections, to be assessed. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir have been determined, which represent promising targets for future geothermal exploration and development. In the second study, I introduce a novel monitoring approach that examines the variation of CO2 flux to monitor changes in the reservoir induced by fluid reinjection. To this end, an automated, multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site was selected that had a connection to the geothermal reservoir, as identified by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of gas emissions to changes in reinjection rates within 24 h, demonstrating active hydraulic communication between the geothermal reservoir and the Earth's surface. 
This is a promising monitoring strategy that provides nearly real-time, in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity). The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis, consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures, was conducted to detect hidden permeable structures. 222Rn and 220Rn activity concentrations are evaluated as soil gas parameters complementary to CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of gas emissions and their migration velocities, the study area was divided into volcanic (heat), tectonic (structures), and volcano-tectonic dominated areas. Based on these concepts, volcano-tectonic dominated areas, where hot hydrothermal fluids migrate along permeable faults, present the most promising targets for future geothermal exploration and development in this geothermal field. Two such areas, which have not yet been targeted for geothermal exploitation, have been identified in the south and south-east. Furthermore, two previously unknown areas of structurally related permeability could be identified by 222Rn and 220Rn activity concentrations. Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known/inferred faults and increased CO2 concentrations. Thus, the TDL method proved to be successful in the determination of structurally related permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, the survey can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment. With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gases using different measurement networks enables the identification and characterization of fluid-bearing structures and has not yet been used and/or tested as standard practice. The different studies present efficient and cost-effective workflows and demonstrate a hands-on approach to successful and sustainable exploration and monitoring of geothermal resources. This minimizes the resource risk during geothermal project development. 
Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive and cutting-edge geological, geochemical, and geophysical exploration methods is essential.}, language = {en} } @phdthesis{Imranulhaq2008, author = {Imran ul-haq, Muhammad}, title = {Synthesis of fluorinated polymers in supercritical carbon dioxide (scCO₂)}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-19868}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {For the first time, stabilizer-free vinylidene fluoride (VDF) polymerizations were carried out in homogeneous phase with supercritical CO₂. Polymerizations were carried out at 140°C and 1500 bar and were initiated with di-tert-butyl peroxide (DTBP). In-line FT-NIR (Fourier transform near-infrared) spectroscopy showed that complete monomer conversion may be obtained. Molecular weights were determined via size-exclusion chromatography (SEC) and polymer end group analysis by 1H-NMR spectroscopy. The number-average molecular weights were below 10^4 g·mol^-1 and polydispersities ranged from 3.1 to 5.7, depending on DTBP and VDF concentration. To allow for isothermal reactions, high CO₂ contents ranging from 61 to 83 wt.\% were used. The high-temperature, high-pressure conditions were required for homogeneous phase polymerization. These conditions did not alter the amount of defects in VDF chaining. Scanning electron microscopy (SEM) indicated that regular stack-type particles were obtained upon expansion of the homogeneous polymerization mixture. To reduce the required amount of initiator, further VDF polymerizations using chain transfer agents (CTAs) to control molecular weights were carried out in homogeneous phase with supercritical carbon dioxide (scCO₂) at 120 °C and 1500 bar. Using perfluorinated hexyl iodide as CTA, polymers of low polydispersity, ranging from 1.5 down to 1.2 at the highest iodide concentration of 0.25 mol·L^-1, were obtained. Electrospray ionization mass spectrometry (ESI-MS) indicates the absence of initiator-derived end groups, supporting the livingness of the system. The "livingness" is based on the labile C-I bond. However, due to the weakness of the C-I bond, perfluorinated hexyl iodide also contributes to initiation. To allow for kinetic analyses of VDF polymerizations, the CTA should not contribute to initiation. Therefore, additional CTAs were applied: BrCCl3, C6F13Br and C6F13H. It was found that C6F13H does not contribute to initiation. At 120°C and 1500 bar, kp/kt^0.5 ~ 0.64 (L·mol^-1·s^-1)^0.5 was derived. The chain transfer constant (CT) at 120°C has been determined to be 8·10^-1, 9·10^-2 and 2·10^-4 for C6F13I, C6F13Br and C6F13H, respectively. These CT values correlate with the bond energy of the respective C-X bond. Moreover, the labile C-I bond allows for functionalization of the polymer to triazole end groups via click reactions. After substitution of the iodide end group by an azide group, 1,3-dipolar cycloadditions with alkynes yield polymers with 1,2,3-triazole end groups. Using symmetrical alkynes, the reactions may be carried out in the absence of any catalyst. This end-functionalized poly(vinylidene fluoride) (PVDF) has higher thermal stability compared to normal PVDF. PVDF samples from homogeneous phase polymerizations in supercritical CO₂ and subsequent expansion to ambient conditions were analyzed with respect to polymer end groups, crystallinity, type of polymorphs and morphology. Upon expansion, the polymer was obtained as a white powder. 
Scanning electron microscopy (SEM) showed that DTBP-derived polymer end groups led to stack-type particles, whereas sponge- or rose-type particles were obtained in the case of CTA fragments as end groups. Fourier-transform infrared spectroscopy and wide-angle X-ray diffraction indicated that the type of polymorph, i.e. the α or β crystal phase, was significantly affected by the type of end group. The content of β-phase material, which is responsible for the piezoelectricity of PVDF, is highest for polymers with DTBP-derived end groups. In addition, the crystallinity of the material, as determined via differential scanning calorimetry, is affected by the end groups and polymer molecular weights. For example, crystallinity ranges from around 26 \% for DTBP-derived end groups to a maximum of 62 \% for end groups originating from perfluorinated hexyl iodide for polymers with Mn ~ 2200 g·mol^-1. Expansion of the homogeneous polymerization mixture results in particle formation by a non-optimized RESS (Rapid Expansion from Supercritical Solution) process. Thus, it was tested how polymer end groups affect the particle size distribution obtained from the RESS process under controlled conditions (T = 50°C and P = 200 bar). In all RESS experiments, small primary PVDF particles with diameters of less than 100 nm were produced without the use of liquid solvents, surfactants, or other additives. A strong correlation of particle size and particle size distribution with the polymer end groups and the molecular weight of the original material was observed. The smallest particles were found for RESS of PVDF with Mn ~ 4000 g·mol^-1 and PFHI (C6F13I)-derived end groups.}, language = {en} } @phdthesis{IlićPetković2023, author = {Ilić Petković, Nikoleta}, title = {Stars under influence: evidence of tidal interactions between stars and substellar companions}, doi = {10.25932/publishup-61597}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-615972}, school = {Universit{\"a}t Potsdam}, pages = {xi, 137}, year = {2023}, abstract = {Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary path the bodies would have followed had they been single objects. These tidal processes are well established in the Solar System's planet-moon systems and in close stellar binary systems. However, how do stars behave if they are orbited by a substellar companion (e.g. a planet or a brown dwarf) on a tight orbit? Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. The star, on the other hand, will gain angular momentum, which has the potential to increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the fact that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and that (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active simply because of its young age. 
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star, assuming the two stars have coevolved, provides the magnetic activity baseline for the host star, i.e. the activity level the host would have if tidal interactions had no effect on it. Firstly, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf will have an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that substellar-hosting wide binary systems can be good tools to reveal the tidal effect on host stars, and it also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which defines the tidal dissipation efficiency in the convective envelope of a given main-sequence star.}, language = {en} }
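Editorial note, not part of any entry above: as a minimal sketch of where a modified tidal quality factor Q' like the one discussed in the last abstract typically enters, the following Python snippet evaluates the generic textbook constant-Q' (equilibrium-tide) scaling da/dt = -(9/2) sqrt(G/M_*) R_*^5 (m_c/Q'_*) a^(-11/2) for tides raised on the star. The formula, function name and parameter values are illustrative assumptions and do not reproduce the author's analytical method.

# Illustrative sketch (assumed textbook relation, not the thesis's method):
# characteristic orbital-decay timescale of a close-in substellar companion
# due to tides raised on the star, for a constant modified quality factor Q'_*.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m
M_jup = 1.898e27   # kg
AU = 1.496e11      # m

def decay_timescale_yr(a_au, m_companion_kg, M_star_kg=M_sun, R_star_m=R_sun, Q_star=1e6):
    """Return the characteristic timescale a/|da/dt| in years, using the assumed
    relation da/dt = -(9/2) sqrt(G/M_*) R_*^5 (m_c/Q'_*) a^(-11/2)."""
    a = a_au * AU
    dadt = 4.5 * math.sqrt(G / M_star_kg) * R_star_m**5 * (m_companion_kg / Q_star) * a**-5.5
    return a / dadt / 3.156e7  # seconds -> years

# Example: a Jupiter-mass companion at 0.02 au around a Sun-like star with Q'_* = 1e6
print(f"{decay_timescale_yr(0.02, M_jup):.2e} yr")  # roughly 1e8 yr for these assumed values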