Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they can impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. Outside of our local environment within the Milky Way, however, we cannot measure CRs directly in other galaxies. Instead, there are many ways to observe CRs indirectly via the radiation they emit as they interact with magnetic fields, interstellar radiation fields, and the ISM itself.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies, assuming a steady state between injection and cooling. Furthermore, I calculate the non-thermal emission arising from the modelled CR proton and electron spectra, spanning radio wavelengths up to the very high-energy gamma-ray regime.
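The steady-state assumption can be illustrated with a minimal one-zone model: for an injection spectrum Q(E) and a cooling rate b(E) = |dE/dt|, the steady-state spectrum is N(E) = (1/b(E)) ∫_E^∞ Q(E′) dE′. The sketch below is not the thesis code; the power-law injection index and the E² cooling normalization are assumed for illustration only:

```python
import numpy as np

E = np.logspace(0, 6, 200)        # energy grid [GeV] (illustrative values)
Q = E ** -2.2                     # assumed power-law injection spectrum Q(E)
b = 1e-16 * E ** 2                # assumed cooling rate |dE/dt| ~ E^2 (synchrotron/IC-like)

# I(E) = integral of Q(E') from E to E_max, via a reverse cumulative trapezoid rule
seg = 0.5 * (Q[1:] + Q[:-1]) * np.diff(E)
I = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

N = I / b                         # steady-state spectrum N(E)
```

For Q ∝ E^(-2.2) and b ∝ E², this yields N ∝ E^(-3.2) away from the upper cutoff, i.e. the familiar steepening of the spectrum by one power of E under cooling.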
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies that include CRs. This allows me to study their CR spectra and compare them to the CR proton and electron spectra measured by the Voyager 1 probe and the AMS-02 instrument, in order to reveal the origin of the observed spectral features.
Furthermore, I provide detailed emission maps, luminosities, and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations of both the radio and gamma-ray luminosities with the far-infrared (FIR) emission of star-forming (SF) galaxies, where the latter is a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape more quickly from weakly SF galaxies. On top of that, I investigate the properties of CR transport that are needed to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that allow the FIR-radio correlation (FRC) to be maintained even in starburst galaxies, and find that thermal free-free emission naturally explains the observed radio spectra of SF galaxies like M82 and NGC 253, thus resolving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the spectral and spatial observational signatures of CR transport.
In the present thesis, AC electrokinetic forces, such as dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometre-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Owing to their medical relevance, influenza virus and anti-influenza antibodies were chosen as model systems. Common methods to attach antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds, without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer rows to the inner ones. Different causes for this gradient are discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among other effects, electrothermal fluid flow. It was demonstrated that part of the accumulated material is permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost exclusively presented as a method for temporary immobilization. The spatial distribution of the immobilized viral material or anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution (STED) microscopy. On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: (A) the influenza virus as the bio-receptor, or (B) the influenza virus as the analyte.
Different sources of error were eliminated by ELISA and passivation experiments. The activity of the immobilized objects was then inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. Detection of influenza virus particles by the immobilized anti-influenza antibodies, on the other hand, was not possible, which might be due to lost activity or wrong orientation of the antibodies. Thus, further studies of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips have the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device, which could form the basis for diverse applications in diagnosing and treating influenza as well as various other pathogens.
Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials
(2023)
Over the last several decades, eutectic mixtures of various compositions have been used successfully as solvents for a vast number of chemical processes, and only relatively recently were they discovered to be widespread in nature. They are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from this sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, meaning that the mixture can assume the role of the solvent as well as that of the reactant itself.
The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. The use of REM enables a number of solvent-free reactions, which brings the benefits of superior atom and energy economy, higher yields, and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines, as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of previously unknown fluorophores shows that synthesis in REM can open up unexpected reaction pathways.
Another focus of the thesis is the study of water as a third component in the REM. Here, the concept of two different dilution regimes (ternary REM and REM in solvent) proves useful for understanding the influence of water. It is shown that small amounts of water can greatly benefit the reaction by reducing viscosity while at the same time increasing reaction yields.
REM based on ammonium formate and organic acids are employed for lignocellulosic biomass treatment. The thesis thereby introduces an alternative approach towards lignocellulosic biomass fractionation that promises a considerable process intensification by the simultaneous generation of cellulose and lignin as well as the production of value-added chemicals from REM components. The thesis investigates the generated cellulose and the pathway to nanocellulose generation and also includes the structural analysis of extracted lignin.
Finally, the thesis investigates the potential of microwave heating for running chemical reactions in REM and describes the synergy between the two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often framed within green chemistry. The thesis therefore also contains a closer inspection of this terminology and its greater goal of sustainability.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of its recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside these developments in our understanding of air pollution's health impacts, low-cost sensors (LCS) have been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of applications, including higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS against reference instrumentation using various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black boxes, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; and 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing the raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating the associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advanced the effort towards standardizing calibration methodologies. In addition, the open-source publication of the code and data for the seven-step methodology was a step towards reforming the largely black-box nature of LCS calibrations.
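The seven steps can be sketched end-to-end. The sketch below is not the thesis code: it uses synthetic data and a plain multilinear fit in place of the tuned models, and all thresholds and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for raw LCS signals and a co-located reference instrument
raw = rng.normal(40, 10, 500)                          # raw sensor signal
temp = rng.normal(20, 5, 500)                          # temperature covariate
ref = 0.8 * raw + 0.5 * temp + rng.normal(0, 2, 500)   # "reference" NO2

# 1) assess the raw data distribution
assert np.isfinite(raw).all()
# 2) clean: drop physically implausible values (threshold is illustrative)
ok = (raw > 0) & (raw < 200)
# 3) flag: mark a hypothetical warm-up period (first 50 samples)
ok[:50] = False
X = np.column_stack([raw[ok], temp[ok], np.ones(ok.sum())])
y = ref[ok]

# 4) model selection and tuning: here, a plain multilinear model fit on a training split
n_train = int(0.8 * len(y))
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)

# 5) validate on the held-out split
pred = X[n_train:] @ coef
rmse = np.sqrt(np.mean((pred - y[n_train:]) ** 2))

# 6) export final predictions; 7) report the associated uncertainty
calibrated = X @ coef
print(f"held-out RMSE: {rmse:.2f}")
```

Reporting the validation metric alongside the exported predictions (steps 5-7) is what makes the calibration reproducible by other LCS users.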
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess the changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first work of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as of their ability to serve as valuable tools in transdisciplinary research.
The Andean Plateau is the second largest orogenic plateau in the world. It is located in the Central Andes and developed in a non-collisional orogenic system, extending from southern Peru (15°S) to northern Argentina and Chile (27°30'S). From 24°S southwards, the Andean Plateau is known as the Puna and is characterized by a system of endorheic basins and salt flats bounded by mountain ranges. Between 26° and 27°30'S, the Puna reaches its southern limit in a transition zone between normal subduction and flat-slab subduction, the latter extending to 33°S. Several studies document crustal thickening and episodic, diachronous surface uplift, with the present-day configuration reached during the late Miocene. Subsequently, the plateau is thought to have experienced a change to a deformation style dominated by extensional processes, evidenced by normal faults and normal-faulting earthquakes. However, along the southern margin of the Puna plateau and in the areas bordering the rest of the orogen, the variation of the stress field is not fully understood. This offers an excellent opportunity to evaluate how the stress field can evolve during orogenic development and how it may be affected by the presence or absence of an orogenic plateau, as well as by the structural anisotropies characteristic of each morphotectonic unit.
This thesis investigates the relationship between shallow crustal deformation and the spatio-temporal evolution of the stress field in the southern sector of the Andean Plateau during the late Cenozoic. To carry out this investigation, I used uranium-lead (U-Pb) radiometric dating; mesoscale fault-slip analysis to derive stress tensors and constrain the orientation of the principal stress axes; analysis of the anisotropy of magnetic susceptibility in sedimentary and volcaniclastic rocks to estimate shortening or sediment transport directions; kinematic modelling techniques to approximate the deep crustal structures associated with the recorded deformation; and morphometric analysis to identify geomorphic indicators of deformation produced by Quaternary tectonic activity.
Combining these results with previously documented evidence, the study reveals a complex variation of the stress field, characterized by changes in orientation and vertical permutations of the principal stress axes within each deformation regime, over the last ~24 Myr. The evolution of the stress field can be temporally associated with three orogenic phases involved in the evolution of the Central Andes at this latitude: (1) a first phase with a compressive stress regime of E-W shortening, documented in the area from the Eocene and late Oligocene to the middle Miocene, coinciding with the phase of Andean construction, crustal thickening and growth, and topographic uplift; (2) a second phase characterized by a strike-slip stress regime from ~11 Ma at the western margin of the Puna plateau, compression and strike-slip from ~5 Ma at its eastern margin, and a compressive stress regime in Famatina and the Sierras Pampeanas, interpreted as a transition between Neogene orogenic construction and the maximum accumulation of deformation and topographic uplift of the Puna plateau; and (3) a third phase, after ~5-4 Ma, in which the regime is characterized by strike-slip in the Puna and along its western and eastern margins with the Sierras Pampeanas, interpreted as a stress regime controlled by the crustal thickening developed along the southern margin of the Altiplano/Puna plateau prior to orogenic collapse. The results show that the plateau margin underwent a transition from a compressive to a strike-slip regime, which differs from the extension documented farther north in the Andean Plateau for the same period.
Similar stress changes have been documented during the construction of the Tibetan Plateau, where a predominantly compressive stress regime changed to a strike-slip regime when the plateau had reached about half of its present elevation, and subsequently evolved into an extensional regime between 14 and 4 Ma, when the plateau altitude exceeded 80% of its present value. This may indicate that strike-slip regimes represent transitional stages between the outer zones of a plateau under compression and the inner zones, where extensional regimes are more likely to occur.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are a non-collisional subduction-type orogen that developed as a result of the interaction between the subducting oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations in morphotectonic provinces, characterized by different elevations, volcanic activity, deformation styles, crustal thicknesses, shortening magnitudes, and oceanic plate geometries. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central segment (15°S-30°S) to less than half that in the southern part (30°S-40°S). Several factors have been proposed to control the magnitude and acceleration of shortening in the Central Andes over the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes and consequences associated with its passage, and their bearing on the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, its timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the roles of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class is a series of generic, E-W-oriented, high-resolution 2D subduction models that include flat subduction, in order to investigate the role of subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite-element geodynamic code ASPECT.
The first main finding of this work is that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles onto itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes off the mantle lithosphere, thus weakening the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening caused by the thick sediments covering the shield margin and the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; the flat-slab thus acts like an indenter that "bulldozes" the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segment to the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the "flat-slab conveyor". Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from crustal contraction in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
Earthquake modeling is key to a profound understanding of a rupture. Its kinematics and dynamics are derived from advanced rupture models that allow, for example, the direction and velocity of the rupture front or the evolving slip distribution behind it to be reconstructed. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time history of the slip and rake on each sub-fault is inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints, such as smoothing.
As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry, which provides a physics-based solution of an earthquake rupture. The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface guaranteed at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained but develops by itself without additional parametrization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip-rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D eikonal equation, assuming a linear relationship between rupture and shear-wave velocity. This allowed us to propose a deeper, faster-propagating rupture front and the resulting upward refraction as a new possible explanation for the apparent supershear rupture observed at the Earth's surface.
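The eikonal modelling of the rupture front amounts to computing first-arrival times on the fault plane for a spatially variable rupture speed. A minimal stand-in (not the Pyrocko implementation) uses Dijkstra's algorithm on an 8-connected grid; the depth-dependent rupture speed vr = 0.8·vs(z), the velocity profile, and the grid dimensions are all assumptions made up for illustration:

```python
import heapq
import numpy as np

def travel_times(speed, src, h=1.0):
    """Approximate the 2D eikonal equation |grad T| = 1/speed with
    Dijkstra's algorithm on an 8-connected grid (a fast-marching stand-in)."""
    nz, nx = speed.shape
    T = np.full((nz, nx), np.inf)
    T[src] = 0.0
    pq = [(0.0, src)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > T[i, j]:
            continue  # stale queue entry
        for di, dj in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < nz and 0 <= nj < nx:
                # edge cost: step length divided by the mean local rupture speed
                dist = h * (di * di + dj * dj) ** 0.5
                v = 0.5 * (speed[i, j] + speed[ni, nj])
                nt = t + dist / v
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(pq, (nt, (ni, nj)))
    return T

# Hypothetical rupture speed increasing with depth: vr = 0.8 * vs(z)
depth = np.linspace(0, 20, 41)            # depth [km], node spacing 0.5 km
vr = 0.8 * (2.5 + 0.08 * depth)           # assumed linear vs(z) profile [km/s]
speed = np.tile(vr[:, None], (1, 81))     # 41 depth x 81 horizontal nodes
T = travel_times(speed, (0, 0), h=0.5)    # nucleation at the surface corner
```

Because rupture speed increases with depth, the fastest path to a distant surface point dives downward and refracts back up, arriving earlier than a front confined to the shallow, slower rows — the mechanism invoked to explain the apparent supershear at the surface.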
The thesis investigates three aspects of earthquake inversion using PDR: (1) to test whether implementing a simplified rupture model with few parameters into a probabilistic Bayesian scheme without constraining geometric parameters is feasible, and whether this leads to fast and robust results that can be used for subsequent fast information systems (e.g., ground motion predictions). (2) To investigate whether combining broadband and strong-motion seismic records together with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion. (3) To investigate whether a complex rupture can be represented by the inversion of multiple PDR sources and for what type of earthquakes this is recommended.
I developed the PDR inversion approach and applied joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple-source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake, with a large, curved, and slowly rupturing shallow event bounded by two faster, deeper, and smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks, and a simple tsunami forward model.
The joint analysis of ground deformation and seismic data within a multiple-source inversion also shed light on an earthquake triplet that occurred in July 2022 in SE Iran. From the inversion and aftershock relocation, I found indications of a vertical separation between the shallower mainshocks within the sedimentary cover and the deeper aftershocks at the sediment-basement interface. This vertical offset could be caused by the ductile response of the salt layer, evident in this region, to stress perturbations from the mainshocks.
The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion, capturing features of rather different, complex earthquakes. Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences from other finite-fault modeling methods.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlement in attractive yet hazardous flood-prone areas has led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that lead to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and that the results depend on the choice of global hydrological model for only a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third. In contrast, incorporating flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection-level estimates or in the flood frequency distribution within the modeling chain.
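Comparing simulated and satellite-derived flood extents typically reduces to contingency-table scores on binary inundation masks, such as the hit rate, false alarm ratio, and critical success index. A minimal sketch follows; the exact scoring scheme used in the thesis may differ:

```python
import numpy as np

def flood_extent_scores(sim, obs):
    """Contingency-table scores for two binary flood masks of equal shape."""
    sim, obs = sim.astype(bool), obs.astype(bool)
    hits = int(np.sum(sim & obs))            # flooded in both
    false_alarms = int(np.sum(sim & ~obs))   # simulated but not observed
    misses = int(np.sum(~sim & obs))         # observed but not simulated
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "critical_success_index": hits / (hits + misses + false_alarms),
    }

sim = np.array([[1, 1, 0], [1, 0, 0]])   # toy simulated flood extent
obs = np.array([[1, 0, 0], [1, 1, 0]])   # toy satellite-derived extent
scores = flood_extent_scores(sim, obs)
```

A systematic overestimation of extent shows up as a high false alarm ratio, while an underestimation (e.g. after adding protection standards) shows up as a reduced hit rate.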
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple dependencies on national socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly people, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
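One simple way such an event-specific displacement vulnerability could be defined is as the fraction of the exposed population that was displaced; this is an illustrative assumption, not necessarily the exact definition used in the thesis:

```python
def displacement_vulnerability(displaced, exposed):
    """Fraction of the flood-exposed population that was displaced.

    Illustrative definition: a value near 1 means almost everyone exposed
    to the flood footprint was displaced; near 0 means hardly anyone was.
    """
    if exposed <= 0:
        raise ValueError("exposed population must be positive")
    return displaced / exposed

# hypothetical event: 40,000 displaced out of 250,000 people exposed
print(displacement_vulnerability(40_000, 250_000))  # 0.16
```

Regressing such per-event ratios against national indicators (GDP, urbanization, infant mortality rate, and so on) is then one way the functional relationships described above could be derived.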
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows the isolated or combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding to be investigated. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the intensification of wind speeds being the dominant factor.
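The core arithmetic of such a storyline attribution is a comparison of the factual (observed-climate) impact with its counterfactual (no-climate-change) counterpart. The numbers below are hypothetical, chosen only to illustrate how a percentage change in the 3.1 to 3.5% range arises:

```python
def attributable_change(factual, counterfactual):
    """Percentage by which the factual impact exceeds the counterfactual one,
    i.e. the share of the impact attributable to the imposed climate signal."""
    if counterfactual <= 0:
        raise ValueError("counterfactual impact must be positive")
    return 100.0 * (factual - counterfactual) / counterfactual

# hypothetical exposed-population estimates for one coastal flood event
print(attributable_change(103_200, 100_000))  # 3.2 (%)
```

Running the same comparison with only sea level rise or only intensified wind speeds imposed in the counterfactual would isolate the individual contributions, mirroring the combined-versus-isolated design described above.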
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or attributing other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
In the face of the environmental crisis, new technologies are needed to sustain our society. In this context, this thesis aims to describe the properties and applications of carbon-based sustainable materials. In particular, it reports the synthesis and characterization of a wide set of porous carbonaceous materials with high nitrogen content obtained from nucleobases. These materials are used as cathodes for Li-ion capacitors, with a major focus placed on cathode preparation, highlighting the oxidation resistance of nucleobase-derived materials. Furthermore, their catalytic properties for acid/base and redox reactions are described, pointing to the role of nitrogen speciation on their surfaces. Finally, these materials are used as supports for highly dispersed nickel, activating them for carbon dioxide electroreduction.
The global climate crisis is contributing significantly to changing ecosystems and loss of biodiversity, and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Besides global heating, unsustainable anthropogenic land use in particular is one of the driving factors behind these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be corrected quickly and immediately by human action.
In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity, in combination with differently scheduled mowing regimes, on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linnaeus, 1758) serves as the reference species for the analyses. It inhabits wet meadows and marshes and has a limited yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage.
The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is freely available, adjustable software that can be used to support conservation planning in cultivated grasslands.
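The core of a stage- and cohort-based update of this kind can be sketched in a few lines. The stages and rates below are invented for illustration and are not HiLEG's actual implementation; the point is only that each cohort carries a life stage and size, daily mortality is stage-specific, and a disturbance such as mowing imposes additional stage-specific losses:

```python
# Assumed daily background mortality and assumed per-event mowing mortality,
# by life stage (illustrative values, not taken from HiLEG).
STAGE_MORTALITY = {"egg": 0.01, "nymph": 0.03, "adult": 0.02}
MOWING_MORTALITY = {"egg": 0.0, "nymph": 0.5, "adult": 0.3}

def step_cohorts(cohorts, mowing=False):
    """Advance all (stage, size) cohorts by one day; return the survivors."""
    survivors = []
    for stage, n in cohorts:
        n *= 1.0 - STAGE_MORTALITY[stage]
        if mowing:
            n *= 1.0 - MOWING_MORTALITY[stage]
        if n >= 1.0:  # drop cohorts that have effectively died out
            survivors.append((stage, n))
    return survivors

cohorts = [("egg", 1000.0), ("nymph", 200.0), ("adult", 50.0)]
print(step_cohorts(cohorts, mowing=True))
```

Even this toy version shows why mowing timing matters: a mowing event on a day when most cohorts are in the egg stage costs the population far less than one that hits nymphs and adults.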
In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance for conservation planning in general, and insights regarding the LMG in particular. It became apparent that a detailed resolution of model processes and components is crucial for projecting the long-term effects of spatially and temporally confined events. Taking conservation measures at the regional level into account has further proven relevant, especially in light of the climate crisis. I found that the LMG benefits from global warming in principle but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields.
Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by the subjective judgment and experience of individual stakeholders. Even though it is in the nature of models that they are subject to limitations and represent only fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but the implementation itself can also be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.