Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are warming at twice the global average rate, and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, the scarcity of field-based spectral characterization of this spatial and temporal heterogeneity limits the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How do canopy-level vegetation colour images recorded with high and low spectral resolution devices relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high spectral resolution data from the ground to the satellite scale influence low Arctic tundra vegetation signatures, and what is the resulting potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing the early, peak and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area, located in north central Alaska on the North Slope of the Brooks Range. In addition to the field data, an airborne AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental Mapping and Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra, as well as simulations of EnMAP imagery from the airborne hyperspectral imagery, were also obtained.
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible to near-infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong, significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices, and that digital cameras can track the seasonal development and degradation of photosynthetic pigments.
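A widely used broadband greenness index derivable from such nadir photographs is the green chromatic coordinate (GCC). The following is a minimal illustrative sketch, not the thesis's actual processing chain; the function name and toy pixel values are invented for the example:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), computed per pixel, then averaged.

    rgb: array of shape (H, W, 3) holding the channels of a nadir photo.
    Returns the scene-mean GCC, a simple proxy for canopy greenness.
    """
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1)
    # Guard against division by zero for fully dark pixels.
    gcc = np.where(total > 0, rgb[..., 1] / np.where(total > 0, total, 1), np.nan)
    return float(np.nanmean(gcc))

# A pure-green canopy patch gives GCC = 1; a grey patch gives 1/3.
green_patch = np.zeros((2, 2, 3)); green_patch[..., 1] = 200.0
grey_patch = np.full((2, 2, 3), 100.0)
```

Tracking the scene-mean GCC of repeat photographs over a season yields a colour-based phenology curve that can be compared against pigment-driven spectral indices.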
Spatial aggregation of hyperspectral data from the ground to the airborne and simulated satellite scales was influenced by non-photosynthetic components, as demonstrated by a distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near-infrared. Artificially mixing litter spectra into the ground-based spectra at different proportions increased correspondence with the airborne and satellite spectra; greater proportions of litter were required to achieve correspondence at the satellite scale.
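The artificial litter mixing described above can be expressed as a linear spectral mixture. A small sketch, with illustrative (not measured) three-band reflectances:

```python
import numpy as np

def mix_spectra(veg, litter, f_litter):
    """Linear spectral mixing: f*litter + (1 - f)*vegetation, per band."""
    veg, litter = np.asarray(veg, float), np.asarray(litter, float)
    return f_litter * litter + (1.0 - f_litter) * veg

# Toy reflectances in (green, red, NIR) bands: litter raises red
# reflectance and lowers NIR relative to green vegetation.
veg = np.array([0.08, 0.04, 0.45])
litter = np.array([0.15, 0.20, 0.30])
mixed = mix_spectra(veg, litter, 0.4)  # 40 % litter fraction
```

Increasing `f_litter` moves the mixed spectrum toward the litter endmember, which is the mechanism by which correspondence with coarser-scale pixels (that inevitably contain more non-photosynthetic material) improves.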
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data sets is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak season hyperspectral data, leading to more detailed identification of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for accurate interpretation of vegetation signals at landscape scales.
Microswimmers, i.e. swimmers of micron size experiencing low Reynolds numbers, have received a great deal of attention in recent years, since many applications are envisioned in medicine and bioremediation. A promising field is that of magnetic swimmers, since magnetism is biocompatible and can be used to direct or actuate the swimmers. This thesis studies two examples of magnetic microswimmers from a physics point of view.
The first system studied is magnetic cells, which can be magnetic biohybrids (a swimming cell coupled with a synthetic magnetic component) or magnetotactic bacteria (naturally occurring bacteria that produce an intracellular chain of magnetic crystals). A magnetic cell can passively interact with external magnetic fields, which can be used for steering. The aim of the thesis is to understand how magnetic cells couple this magnetic interaction to their swimming strategies, mainly how they combine it with chemotaxis (the ability to sense external gradients of chemical species and to bias their random walk along these gradients). In particular, one open question addresses the advantage that these magnetic interactions give magnetotactic bacteria in a natural environment, such as porous sediments. In the thesis, a modified Active Brownian Particle model is used to perform simulations and to reproduce experimental data for different systems, such as bacteria swimming in the bulk, in a capillary, or in confined geometries. I show that magnetic fields speed up chemotaxis under special conditions, depending on parameters such as the swimming strategy (run-and-tumble or run-and-reverse), the aerotactic strategy (axial or polar), and the magnetic field (intensity and orientation), but that they can also hinder bacterial chemotaxis depending on the system.
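The core of an Active Brownian Particle model with magnetic alignment can be sketched in a few lines: the swimmer's heading relaxes toward the field direction against rotational noise, and the position follows the heading. This is a generic 2D illustration with invented parameter values, not the thesis's modified model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_abp(n_steps=2000, dt=0.01, v=1.0, D_rot=0.5,
                 omega_B=5.0, theta_B=0.0):
    """2D Active Brownian Particle with a magnetic aligning torque.

    omega_B sets the strength of alignment toward the field direction
    theta_B; D_rot is the rotational diffusion coefficient. All values
    are illustrative, not fitted to any experiment.
    """
    theta = rng.uniform(-np.pi, np.pi)  # random initial heading
    pos = np.zeros(2)
    for _ in range(n_steps):
        # Deterministic alignment torque plus rotational noise.
        theta += (omega_B * np.sin(theta_B - theta) * dt
                  + np.sqrt(2.0 * D_rot * dt) * rng.normal())
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

end = simulate_abp()
```

With a strong field (large `omega_B`) the net displacement is predominantly along `theta_B`; with `omega_B = 0` the trajectory reduces to an ordinary persistent random walk, which is the baseline against which any chemotactic speed-up would be measured.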
The second example of magnetic microswimmers is rigid magnetic propellers, such as helices or randomly shaped propellers. These propellers are actuated and directed by an external rotating magnetic field. One open question is how shape and magnetic properties influence the propeller behavior; the goal of this research field is to design the best propeller for a given situation. The aim of the thesis is to propose a simulation method that reproduces the behavior of experimentally realized propellers and determines their magnetic properties. The hydrodynamic simulations are based on the mobility matrix. As the main result, I propose a method to match the experimental data, while showing that not only the shape but also the magnetic properties influence the propellers' swimming characteristics.
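The mobility-matrix formalism relates applied forces and torques to the resulting velocities, [v, ω]ᵀ = M [F, T]ᵀ. The toy sketch below (with made-up block values) shows the key point: for a chiral body such as a helix, the off-diagonal translation–rotation coupling block is nonzero, so a pure torque produces translation:

```python
import numpy as np

# 6x6 mobility matrix mapping (force, torque) -> (velocity, angular velocity).
A = np.eye(3) * 0.8                    # translation-translation block
D = np.eye(3) * 1.2                    # rotation-rotation block
C = np.zeros((3, 3)); C[2, 2] = 0.3    # chiral coupling along the helix axis
# For an achiral body C would be zero and a torque could not translate it.

M = np.block([[A, C],
              [C.T, D]])

torque = np.array([0.0, 0.0, 1.0])           # torque from a rotating field, along z
FT = np.concatenate([np.zeros(3), torque])   # force-free (swimming) condition
v_omega = M @ FT
v, omega = v_omega[:3], v_omega[3:]
```

Here the propeller rotates (`omega[2] = 1.2`) and simultaneously advances along its axis (`v[2] = 0.3`) despite zero applied force, which is precisely the torque-to-translation conversion that magnetic actuation exploits.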
Basaltic fissure eruptions, such as those on Hawai'i or Iceland, are thought to be driven by the lateral propagation of feeder dikes and graben subsidence. The associated solid-earth processes, such as deformation and structural development, are well studied by means of geophysical and geodetic technologies. The eruptions themselves, lava fountaining and venting dynamics, have in turn been much less investigated due to hazardous access, their local dimension, fast processes, and the resulting poor data availability.
This thesis provides a detailed quantitative understanding of the shape and dynamics of lava fountains and the morphological changes at their respective eruption sites. For this purpose, I apply image-processing techniques to the frame sequences of video records, acquired by drones and fixed cameras, from two well-known fissure eruptions in Hawai'i and Iceland. This way I extract the dimensions of the lava fountains visible in all frames. By combining these results and considering the acquisition times of the frames, I quantify the variations in height, width and eruption velocity of the lava fountains. I then analyse these time series in both the time and frequency domains and investigate the similarities and correlations between adjacent lava fountains. Following this procedure, I am able to link the dynamics of individual lava fountains to physical parameters of the magma transport in the feeder dyke of the fountains.
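The frequency-domain step can be sketched as a Fourier analysis of a fountain-height time series extracted from the video frames. The example below uses a synthetic pulsating signal with an invented 8-second period, purely to illustrate the method:

```python
import numpy as np

def dominant_period(heights, dt):
    """Dominant pulsation period (s) of a fountain-height series via FFT.

    heights: fountain height per video frame (m); dt: frame interval (s).
    """
    h = np.asarray(heights, float) - np.mean(heights)  # remove the mean level
    spec = np.abs(np.fft.rfft(h))
    freqs = np.fft.rfftfreq(len(h), d=dt)
    k = np.argmax(spec[1:]) + 1  # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic pulsating fountain: 20 m mean height, 8 s pulsation period,
# sampled at 4 frames per second for two minutes.
t = np.arange(0.0, 120.0, 0.25)
h = 20.0 + 5.0 * np.sin(2.0 * np.pi * t / 8.0)
```

Comparing the dominant periods (and cross-correlating the raw series) of adjacent fountains is what reveals whether they share a common hydraulic driver in the underlying dyke.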
The first case study in this thesis focuses on the March 2011 Pu'u'O'o eruption, Hawai'i, where a continuous pulsating behaviour was observed at all eight lava fountains. The lava fountains, even those from different parts of the fissure, show a similar frequency content and eruption behaviour, indicating that they are closely connected. The regular pattern in the lava fountain heights suggests a controlling process within the magma feeder system, such as a hydraulic connection in the underlying dyke, affecting or even controlling the pulsating behaviour.
The second case study addresses the 2014–2015 Holuhraun fissure eruption, Iceland. In this case, the feeder dyke is highlighted by the surface expressions of graben-like structures and fault systems. At the eruption site, the activity decreased from a continuous line of fire at ~60 vents to a limited number of lava fountains. This can be explained by preferred upward magma movement through vertical structures of the pre-eruptive morphology. Seismic tremor during the eruption reveals vent opening at the surface and/or pressure changes in the feeder dyke. The topography of the cinder cones, evolving during the eruption, interacts with the lava fountain behaviour. Local variations in lava fountain height and width are controlled by the conduit diameter, the depth of the lava pond and the shape of the crater. Modelling of the fountain heights shows that the long-term eruption behaviour is controlled mainly by pressure changes in the feeder dyke.
This research consists of six chapters including four papers, two first-author and two co-author papers. It establishes a new method to analyse lava fountain dynamics by video monitoring. The comparison with the seismicity and the geomorphologic and structural expressions of fissure eruptions shows a complex relationship between focussed flow through dykes, the morphology of the cinder cones, and the lava fountain dynamics at the vents of a fissure eruption.
Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which record information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, as any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. 
The presented results lead to a quantitative explanation of the local-scale (1–500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.
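The effect of diffusional smoothing on temporally uncorrelated noise, identified above as a main source of the temporal variability, can be sketched as convolution of a white-noise depth series with a Gaussian kernel whose width is the diffusion length. The numbers below are illustrative, not the values derived in the thesis:

```python
import numpy as np

def diffuse(signal, sigma, dx=1.0):
    """Smooth a depth series by Gaussian convolution of width sigma,
    mimicking the effect of firn diffusion on an isotope profile."""
    x = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()  # normalize so the mean is preserved
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)          # temporally uncorrelated noise
smoothed = diffuse(noise, sigma=8.0)   # 8-sample diffusion length
```

Diffusion strongly damps the high-frequency part of the noise spectrum (and hence the series' variance), which is why apparent cyclicalities can emerge in single records even when the underlying input is uncorrelated.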
The utilization of lignin as a renewable electrode material for electrochemical energy storage is a sustainable approach for future batteries and supercapacitors. A composite electrode was fabricated from Kraft lignin and conductive carbon, and the charge storage contributions were determined in terms of the electrical double layer (EDL) and redox reactions. The important factors for achieving a high faradaic charge storage capacity include a high surface area, the accessibility of the redox sites in lignin, and their interaction with the conductive additive. A thinner layer of lignin covering the high-surface-area carbon facilitates the electron transfer process by providing a shorter pathway from the active sites of the nonconductive lignin to the current collector, improving the faradaic charge storage capacity.
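A common, generic way to separate capacitive (EDL-like) from diffusion-controlled faradaic contributions in cyclic voltammetry is to fit the current at fixed potential as i(ν) = k₁ν + k₂ν^½ over several scan rates ν. This sketch illustrates that standard analysis with synthetic numbers; it is not the specific procedure used in the thesis:

```python
import numpy as np

def separate_contributions(scan_rates, currents):
    """Least-squares fit of i = k1*v + k2*sqrt(v) at a fixed potential.

    k1*v is the capacitive (surface-controlled) part;
    k2*sqrt(v) is the diffusion-controlled faradaic part.
    """
    v = np.asarray(scan_rates, float)
    i = np.asarray(currents, float)
    A = np.column_stack([v, np.sqrt(v)])
    k1, k2 = np.linalg.lstsq(A, i, rcond=None)[0]
    return k1, k2

# Synthetic data generated with known k1 = 0.5 and k2 = 2.0
# (illustrative units, e.g. mA and mV/s).
v = np.array([5.0, 10.0, 20.0, 50.0, 100.0])
i = 0.5 * v + 2.0 * np.sqrt(v)
k1, k2 = separate_contributions(v, i)
```

The fitted `k1` and `k2` quantify how much of the stored charge at each potential comes from the double layer versus the lignin redox reactions.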
Composite electrodes from lignin and carbon would be even more sustainable if the fluorinated binder could be omitted. A new route to fabricate a binder-free composite electrode from Kraft lignin and high-surface-area carbon has been proposed by crosslinking lignin with glyoxal. A high molecular weight lignin is obtained, enhancing both the electroactivity and the binder capability in composite electrodes. The order of the processing steps for crosslinking lignin on the composite electrode plays a crucial role in achieving a stable electrode and a high charge storage capacity. The crosslinked lignin-based electrodes are promising since they allow for more stable, sustainable, halogen-free and environmentally benign devices for energy storage applications. Furthermore, increasing the amount of redox-active groups (quinone groups) in lignin is useful to enhance the capacity in lithium battery applications. Direct oxidative demethylation by cerium ammonium nitrate has been carried out under mild conditions. This proves that an increase of quinone groups is able to enhance the performance of lithium batteries. Thus, lignin is a promising material and could be a good candidate for application in sustainable energy storage devices.
In the arable soil landscape of hummocky ground moraines, an erosion-affected spatial differentiation of soils can be observed. Human-induced erosion leads to soil profile modifications along slopes, with changed solum thickness and modified properties of soil horizons due to water erosion in combination with tillage operations. Soil erosion thereby creates spatial patterns of soil properties (e.g., texture and organic matter content) and differences in crop development. However, little is known about the manner in which water fluxes are affected by soil-crop interactions depending on the contrasting properties of differently developed soil horizons, and how water fluxes influence carbon transport in an eroded landscape. To identify such feedbacks between erosion-induced soil profile modifications and the 1D water and solute balance, high-precision weighing lysimeters equipped with a wide range of sensor techniques were filled with undisturbed soil monoliths that differed in the degree of past soil erosion. Furthermore, lysimeter effluent concentrations were analyzed for dissolved carbon fractions at bi-weekly intervals.
The water balance components measured by the high-precision lysimeters varied by up to 83 % (deep drainage) between the most and the least eroded monoliths over a 3-year period, primarily caused by varying amounts of precipitation and evapotranspiration. Here, interactions between crop development and contrasting rainfall interception by aboveground biomass could explain the differences in water balance components. Concentrations of dissolved carbon in soil water samples were relatively constant in time, suggesting that carbon leaching was mainly controlled by water fluxes in this observation period. For the lysimeter-based water balance analysis, a filtering scheme was developed that considers temporal autocorrelation. The minute-based autocorrelation analysis of mass changes from the lysimeter time series revealed characteristic autocorrelation lengths ranging from 23 to 76 minutes. Temporal autocorrelation thereby provided an optimal approximation of precipitation quantities. However, the usable temporal resolution of lysimeter time series is restricted by these autocorrelation lengths.
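Estimating such a characteristic autocorrelation length from a time series can be sketched as finding the first lag at which the sample autocorrelation drops below 1/e. The example uses a synthetic AR(1) series with a known decorrelation time, not lysimeter data:

```python
import numpy as np

def decorrelation_length(series, max_lag=200):
    """Smallest lag (in samples) at which the autocorrelation of a
    time series first drops below 1/e; a guide for choosing the
    averaging window when filtering noisy mass records."""
    x = np.asarray(series, float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    for lag in range(1, max_lag):
        acf = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
        if acf < 1.0 / np.e:
            return lag
    return max_lag

# AR(1) series x_t = phi * x_{t-1} + noise has autocorrelation phi**lag,
# which crosses 1/e near lag = -1/ln(phi) ~ 19.5 samples for phi = 0.95.
rng = np.random.default_rng(2)
phi = 0.95
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()
```

Averaging over windows shorter than this length mostly averages correlated noise, which is why the autocorrelation length bounds the usable temporal resolution.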
Erosion-induced as well as gradual changes in soil properties were reflected in the dynamics of the soil water retention properties in the lysimeter soils. Short-term and long-term hysteretic water retention data suggested that seasonal soil wettability problems increasingly limited the rewetting of previously dried pore regions. Differences in water retention were assigned to soil tillage operations and the erosion history at different slope positions. The three-dimensional spatial pattern of soil types resulting from erosional soil profile modifications was also reflected in differences in crop root development at different landscape positions. Contrasting root densities revealed positive relations between root and aboveground plant characteristics. Differences in the spatially distributed root growth between differently eroded soil types indicated that root development was affected by erosion-induced soil evolution processes.
Overall, the current thesis corroborated the hypothesis that erosion-induced soil profile modifications affect the soil water balance, carbon leaching and soil hydraulic properties, and that the crop root system is also influenced by erosion-induced spatial patterns of soil properties in the arable hummocky postglacial soil landscape. The results will help to improve model predictions of water and solute movement in arable soils and to understand interactions between soil erosion and carbon pathways regarding sink-or-source terms in landscapes.
Causes for slow weathering and erosion in the steep, warm, monsoon-subjected Highlands of Sri Lanka
(2018)
In the Highlands of Sri Lanka, erosion and chemical weathering rates are among the lowest for global mountain denudation. In this tropical humid setting, highly weathered deep saprolite profiles have developed from high-grade metamorphic charnockite during spheroidal weathering of the bedrock. The spheroidal weathering produces rounded corestones and spalled rindlets at the rock-saprolite interface. I used detailed textural, mineralogical, chemical, and electron-microscopic (SEM, FIB, TEM) analyses to identify the factors limiting the rate of weathering front advance in the profile, the sequence of weathering reactions, and the underlying mechanisms. The first mineral attacked by weathering was found to be pyroxene, initiated by in situ Fe oxidation, followed by in situ biotite oxidation. Bulk dissolution of the primary minerals is best described by a dissolution-reprecipitation process, as no chemical gradients towards the mineral surface and sharp structural boundaries are observed at the nm scale. Only the local oxidation in pyroxene and biotite is better described by an ion-by-ion process. The first secondary phases are oxides and amorphous precipitates from which secondary minerals (mainly smectite and kaolinite) form. Only for biotite is a direct solid-state transformation to kaolinite likely. The initial oxidation of pyroxene and biotite takes place in locally restricted areas and is relatively fast: log J = -11 mol_min/(m² s). However, calculated corestone-scale mineral oxidation rates are comparable to corestone-scale mineral dissolution rates: log R = -13 mol_px/(m² s) and log R = -15 mol_bt/(m² s) (px: pyroxene; bt: biotite). The oxidation reaction results in a volume increase. Volumetric calculations suggest that this observed oxidation leads to the generation of porosity through the formation of micro-fractures in the minerals and the bedrock, allowing for fluid transport and subsequent dissolution of plagioclase.
At the scale of the corestone, this fracturing is responsible for the larger fractures that lead to spheroidal weathering and to the formation of rindlets. Since these fractures originate from the initial oxidation-induced volume increase, oxidation is the rate-limiting parameter for weathering. The ensuing plagioclase weathering leads to the formation of high secondary porosity in the corestone over a distance of only a few cm, and eventually to the final disaggregation of bedrock to saprolite. As oxidation is the first weathering reaction, the supply of O2 is a rate-limiting factor for chemical weathering. Hence, the supply of O2 and its consumption at depth connect processes at the weathering front with erosion at the surface in a feedback mechanism. The strength of the feedback depends on the relative weight of advective versus diffusive transport of O2 through the weathering profile; the feedback is stronger when diffusive transport dominates. The low weathering rate ultimately depends on the transport of O2 through the whole regolith, and on lithological factors such as the low bedrock porosity and the amount of Fe-bearing primary minerals. In this regard the low-porosity charnockite, with its low content of Fe(II)-bearing minerals, impedes fast weathering reactions. Fresh weatherable surfaces are a prerequisite for chemical weathering. However, in the case of the charnockite found in the Sri Lankan Highlands, the only process that generates these surfaces is the fracturing induced by oxidation. Tectonic quiescence in this region and the low pre-anthropogenic erosion rate (attributed to a dense vegetation cover) minimize the rejuvenation of the thick and cohesive regolith column, and lower weathering through the feedback with erosion.
The Milky Way is only one out of billions of galaxies in the universe. However, it is a special galaxy because it allows us to explore the main mechanisms involved in its evolution and formation history by unpicking the system star by star. In particular, the chemical fingerprints of its stars provide clues and evidence of past events in the Galaxy's lifetime. This information helps not only to decipher the current structure and building blocks of the Milky Way, but also to learn more about the general formation process of galaxies.
In the past decade a multitude of Galactic stellar spectroscopic surveys have scanned millions of stars far beyond the rim of the solar neighbourhood. The obtained spectroscopic information provides unprecedented insights into the chemo-dynamics of the Milky Way. In addition, analytic models and numerical simulations of the Milky Way provide the descriptions and predictions needed for comparison with observations in order to decode the physical properties that underlie the complex system of the Galaxy.
In the thesis various approaches are taken to connect modern theoretical modelling of galaxy formation and evolution with observations from Galactic stellar surveys. With its focus on the chemo-kinematics of the Galactic disk, this work aims to determine new observational constraints on the formation of the Milky Way, while also providing proper comparisons with two different models. These are the population synthesis model TRILEGAL, based on analytical distribution functions, which aims to simulate the number and distribution of stars in the Milky Way and its different components, and a hybrid model (MCM) that combines an N-body simulation of a Milky Way-like galaxy in the cosmological framework with a semi-analytic chemical evolution model for the Milky Way. The major observational data sets in use come from two surveys, namely the "Radial Velocity Experiment" (RAVE) and the "Sloan Extension for Galactic Understanding and Exploration" (SEGUE).
In the first approach, the chemo-kinematic properties of the thin and thick disk of the Galaxy, as traced by a selection of about 20,000 SEGUE G-dwarf stars, are directly compared to the predictions of the MCM model. As a necessary condition for this, SEGUE's selection function and its survey volume are evaluated in detail to correct the spectroscopic observations for their survey-specific selection biases. Also, based on a Bayesian method, spectro-photometric distances with uncertainties below 15% are computed for the selection of SEGUE G-dwarfs, which are studied up to a distance of 3 kpc from the Sun.
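Spectro-photometric distances rest on the distance modulus, m - M = 5 log₁₀(d / 10 pc). The following Monte Carlo sketch of the resulting distance uncertainty is a generic illustration with invented magnitudes, not the thesis's Bayesian pipeline:

```python
import numpy as np

def distance_pc(m_app, M_abs):
    """Distance from the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((m_app - M_abs) / 5.0 + 1.0)

rng = np.random.default_rng(3)
# Illustrative G dwarf: apparent magnitude 15.0 +/- 0.02,
# spectro-photometric absolute magnitude 5.0 +/- 0.3.
m = rng.normal(15.0, 0.02, size=20000)
M = rng.normal(5.0, 0.3, size=20000)
d = distance_pc(m, M)
rel_err = d.std() / d.mean()
```

With these toy inputs the central distance is 1 kpc, and a 0.3 mag absolute-magnitude uncertainty alone already maps to roughly a 14% relative distance error, which shows why careful absolute-magnitude inference is the limiting factor for sub-15% distances.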
For the second approach, two synthetic versions of the SEGUE survey are generated based on the above models. The obtained synthetic stellar catalogues are then used to create mock samples that best resemble the compiled sample of observed SEGUE G-dwarfs. Mock samples are not only ideal for comparing predictions from various models; they also allow validation and improvement of the models, as was achieved in this work especially for TRILEGAL. While TRILEGAL reproduces the statistical properties of the thin and thick disk as seen in the observations, the MCM model has proven more suitable for reproducing many of the chemo-kinematic correlations revealed by the SEGUE stars. However, evidence has been found that the MCM model may be missing a stellar component with the properties of the thick disk that the observations clearly show. While the SEGUE stars do indicate a thin-thick dichotomy of the stellar Galactic disk, in agreement with other spectroscopic stellar studies, no sign of a distinct metal-poor disk is seen in the MCM model.
Stellar spectroscopic surveys are usually limited to a certain volume around the Sun, covering different regions of the Galaxy's disk. This often prevents obtaining a global view of the chemo-dynamics of the Galactic disk. Hence, a suitable combination of stellar samples from independent surveys is not only useful for the verification of results, but also helps to complete the picture of the Milky Way. Therefore, the thesis closes with a comparison of the SEGUE G-dwarfs and a sample of RAVE giants. The comparison reveals that the chemo-kinematic relations agree in disk regions where the samples of both surveys contain a similar number of stars. For those parts of the survey volumes where one of the surveys lacks statistics, they beautifully complement each other. This demonstrates that the comparison of theoretical models on the one side, and combined observational data gathered by multiple surveys on the other, are key ingredients to understand and disentangle the structure and formation history of the Milky Way.
Widespread landscape changes are presently observed in the Arctic and are most likely to accelerate in the future, in particular in permafrost regions, which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia, the terrestrial land mass that connected Eurasia and North America during glacial sea-level low stands. In order to investigate the development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the following research questions:
1. When did permafrost degradation and thermokarst lake development take place, and what were the enhancing and inhibiting environmental factors?
2. What are the dominant processes during thermokarst lake development, and how are they reflected in proxy records?
3. How did, and how do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and to the carbon cycle?
Methods applied in this study are based upon a multi-proxy approach combining sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as analyses of stable isotopes and the hydrochemistry of pore water and ice. Modern field observations of water quality and basin morphometry complete the environmental investigations.
The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to Mid Wisconsin, with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland development was terminated by the accumulation of a 1-meter-thick airfall tephra most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surroundings of the site, which forms a yedoma upland to this day. The thermokarst lake forming GG basin initiated 230 ± 30 cal a BP and drained in spring 2005 AD. Four years after drainage, the lake talik was still unfrozen below 268 cm depth.
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP a shallow and more dynamic thermokarst lake developed, with actively eroding shorelines and potentially intermediate shallow-water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt%) than in Mama Rhonda deposits (mean TOC of 7.9 wt%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in different lake generations through the thawing and mobilization of organic carbon into the lake system.
Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed about 1,400 years of young thermokarst dynamics along a depositional gradient, based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources, from upland deposits and re-deposited sediments from drained thaw lake basins, depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential for thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments.
The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity in the pore-water. The alternation of marine and freshwater ostracods and foraminifera likewise confirms the decreasing salinity, but also reflects episodic re-deposition of allochthonous marine sediments.
Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns. This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.
This thesis focuses on the electronic properties of the new material class of topological insulators. Spin- and angle-resolved photoelectron spectroscopy have been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the application of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time-reversal symmetry (which is broken in the quantum Hall phase), strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators is reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with an introduction to the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close discussion with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is most likely a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% over a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited, owing to the finite angular resolution and the inability to detect the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided through the protection by time-reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed when exposing Bi2Te3 to oxygen. While the n-type shift of Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level while the valence band stays well below. The effect is found to be reversible upon warming the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a similar behavior as for Ag in both Bi2Se3 and Bi2Te3. In that case, however, the robustness is unexpected, since magnetic impurities are capable of breaking time-reversal symmetry, which should open a gap in the surface state at the Dirac point and in turn remove the protection. We argue that the absence of a gap in the surface state must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but nevertheless higher than one may expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes off which electrons may scatter. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected from electron-electron interaction in Fermi liquid theory.
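For reference, the quadratic Fermi-liquid behavior invoked above is the textbook electron-electron contribution to the imaginary part of the self-energy; it is quoted here only as the standard form against which the measured energy dependence is compared, with $\beta$ a material-dependent prefactor:

```latex
\operatorname{Im}\Sigma_{e\text{-}e}(E) = \beta \left[ E^{2} + (\pi k_{\mathrm{B}} T)^{2} \right]
```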
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, and that this influence is stronger regardless of the sign of the doping. We argue that this observation suggests only a minor contribution of the warping to the increased scattering rates, in contrast to current belief. This is further confirmed by the observation that the scattering rates increase with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates despite the much stronger warping in Bi2Te3.
In the last chapter we report on a strong circular dichroism in the angle distribution of the photoemission signal of the surface state of Bi2Te3. We show that the color pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence of the photon energy on the strength and even the sign of the effect. The sign change is qualitatively confirmed by one-step photoemission calculations conducted by our collaborators at LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Together, experiment and theory unambiguously identify the dichroism in these systems as a final-state effect, and the question in the title of the chapter has to be negated: circular dichroism in the angle distribution is not a new spin-sensitive technique.
Foam fractionation of surfactant and protein solutions is a process for separating surface-active molecules from each other based on their differences in surface activity. The process is based on forming bubbles in a mixed solution, followed by the detachment and rise of the bubbles through a volume of this solution, and finally the formation of a foam layer on top of the solution column. A systematic analysis of this process therefore comprises, first, investigations of the formation and growth of single bubbles in solution, which is equivalent to the main principle of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rise through the aqueous solution. The third and final stage is the formation and stabilization of the foam created by these bubbles: the adsorption layer formed at the growing bubble surface is carried upward, modified during the bubble's rise, and finally ends up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH and ionic strength in order to describe the accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow an understanding of the adsorption mechanism, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities have been observed in the protein adsorption kinetics at sufficiently short adsorption times. First of all, at short adsorption times the surface tension remains constant for a while before it decreases, as expected, due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and is discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to some mN/m, is instead explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
The experiments dedicated to the local velocity of rising air bubbles in solutions were performed over a broad range of BLG concentration, pH and ionic strength. Additionally, rising bubble experiments were done with surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on time scales of seconds, while dynamic surface tensions do not yet show any measurable changes on this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present due to the complexity of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum values of the profile as a characteristic parameter of the dynamic adsorption layer at the bubble surface more quantitatively.
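The sensitivity of the rising velocity to adsorbed material can be bracketed by two classical creeping-flow limits: a bubble with a fully mobile, surfactant-free surface (Hadamard-Rybczynski) rises 1.5 times faster than one whose surface is rigidified by a dense adsorption layer (Stokes). A minimal Python sketch of these limits follows; the function name and all parameter values are illustrative, not taken from the thesis:

```python
# Terminal rise velocity of a small air bubble in water (creeping flow).
# Two limits: mobile surface (Hadamard-Rybczynski) vs. rigid surface
# (Stokes); the rigid-surface limit is what a dense adsorption layer
# approaches. All default parameter values are illustrative.

def terminal_velocity(radius, mobile=True,
                      rho_liquid=998.0, rho_gas=1.2,
                      mu=1.0e-3, g=9.81):
    """Terminal rise velocity [m/s] of a spherical bubble of radius [m]."""
    drho = rho_liquid - rho_gas
    # Stokes law for a rigid sphere: U = 2 * drho * g * r^2 / (9 * mu)
    v_stokes = 2.0 * drho * g * radius**2 / (9.0 * mu)
    if mobile:
        # Hadamard-Rybczynski: a fully mobile interface rises 1.5x faster
        return 1.5 * v_stokes
    return v_stokes
```

Measured velocity profiles of real bubbles fall between these two limits and evolve in time as the adsorption layer builds up, which is what makes the method so sensitive.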
The studies with protein-surfactant mixtures demonstrate impressively that the complexes formed by the two compounds have a different surface activity than the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of an increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules, which, however, can differ between the rising bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet but give rise to discussions about the structure of the protein adsorption layer under dynamic conditions and in the equilibrium state.
The third main stage of the discussed fractionation process is the formation and characterization of protein foams from BLG solutions at different pH and ionic strength. Of course, a minimum BLG concentration is required to form foams. This minimum protein concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although the hydrophobicity, and hence the surface activity, should be highest at the isoelectric point (about pH 5 for BLG), the concentration and ionic strength effects on the rising velocity profile as well as on foamability and foam stability do not show a maximum there. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt their respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamellae (two adsorption layers with a liquid core), the foamate taken from foaming experiments should be enriched in the stabilizing molecules. To determine the concentration of the foamate, the very sensitive bubble rising velocity profile method was again applied, which works for any type of surface-active material, including technical surfactants or protein isolates whose exact composition is unknown.
KEYCIT 2014
(2015)
In our rapidly changing world it is increasingly important not only to be an expert in a chosen field of study but also to be able to respond to developments, master new approaches to solving problems, and fulfil changing requirements in the modern world and in the job market. In response to these needs key competencies in understanding, developing and using new digital technologies are being brought into focus in school and university programmes. The IFIP TC3 conference "KEYCIT – Key Competences in Informatics and ICT (KEYCIT 2014)" was held at the University of Potsdam in Germany from July 1st to 4th, 2014 and addressed the combination of key competencies, Informatics and ICT in detail. The conference was organized into strands focusing on secondary education, university education and teacher education (organized by IFIP WGs 3.1 and 3.3) and provided a forum to present and to discuss research, case studies, positions, and national perspectives in this field.
The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to identify a major increase in both subsidence and sedimentation rates at 5.45-5.33 Ma, leading to the deposition of almost 1500 km³ of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from and north of the SE margin of the Central Anatolian Plateau. A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by a comparison of the Adana Basin subsidence curve with that of the Mut Basin, a mainly Neogene basin located on top of the southern margin of the Central Anatolian Plateau, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau's southern margin. Fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau.
We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.
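The time-depth conversion used to build such a subsidence curve amounts to integrating interval velocities over the picked two-way travel times, layer by layer. A minimal sketch follows; the function name and the velocity values in the usage are hypothetical, not the actual velocity model of the study:

```python
# Time-depth conversion of a seismic horizon pick: convert two-way travel
# time (TWT) to depth by summing layer thicknesses, given an interval
# velocity per layer. Velocities and times here are illustrative only.

def twt_to_depth(twt_s, layers):
    """Depth [m] for a two-way travel time [s], given `layers` as
    (interval_velocity_m_per_s, layer_twt_s) pairs from the surface down."""
    depth = 0.0
    remaining = twt_s
    for v, layer_twt in layers:
        dt = min(remaining, layer_twt)
        depth += v * dt / 2.0          # one-way time = TWT / 2
        remaining -= dt
        if remaining <= 0.0:
            return depth
    # below the deepest defined layer: extrapolate with the last velocity
    return depth + layers[-1][0] * remaining / 2.0

# Hypothetical two-layer model: 1800 m/s over 2500 m/s, 0.5 s TWT each
model = [(1800.0, 0.5), (2500.0, 0.5)]
```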
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present a software providing a complete pipeline for automated reasoning on the response of logical signaling networks.
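The ASP encodings themselves are declarative; as a plain procedural illustration of what the "response of a logical signaling network" means, the sketch below computes the fixed point a small Boolean network reaches under a clamped stimulus or intervention. All node names and rules are invented for illustration and are not from the thesis:

```python
# Minimal sketch of a Boolean (logical) signaling network: nodes take
# values 0/1, each node has a logic rule over its regulators, and the
# network's response to a stimulus or intervention is the fixed point
# reached by synchronous updates. The thesis instead encodes such
# networks declaratively in ASP; this toy example is procedural.

def fixed_point(rules, state, clamped=(), max_steps=100):
    """Iterate synchronous updates until the state stops changing.
    rules: {node: function(state) -> 0/1}; clamped nodes keep their value
    (e.g. an experimental stimulus or a knock-out intervention)."""
    state = dict(state)
    for _ in range(max_steps):
        new = {n: (state[n] if n in clamped else f(state))
               for n, f in rules.items()}
        if new == state:
            return state
        state = new
    raise RuntimeError("no fixed point reached (network may oscillate)")

# Toy network: ligand -> receptor -> kinase -> output, inhibitor blocks kinase
rules = {
    "lig": lambda s: s["lig"],                 # input, held constant
    "rec": lambda s: s["lig"],
    "kin": lambda s: s["rec"] and not s["inh"],
    "inh": lambda s: s["inh"],                 # intervention point
    "out": lambda s: s["kin"],
}
```

Clamping "inh" to 1 plays the role of an intervention strategy: the downstream response "out" is then switched off regardless of the stimulus.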
There are two common approaches to implement a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages, but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (a double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem; it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing object models and execution models of dynamic object-oriented languages to the Maxine VM, and the application of this approach to Squeak/Smalltalk. The representation of objects and the execution of dynamic object-oriented languages pose certain challenges to the Maxine VM, which lacks the variation points necessary to enable an effortless and straightforward implementation of such languages' execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to unveil these missing variation points.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation and little anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation processes in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters impact the microorganisms and how those are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies of the microorganisms were studied by determining changes in the cell membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and they represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early stage of soil formation near the ice tongue in comparison to closely located but older and more developed soil from the forefield.
At the early stage the succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and prevalently depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Connecting soil characteristics with bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early stage of succession the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10^8 colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment. The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and PLFA analyses.
Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activity was detected in the glacier forefield soils at low temperature (14°C), catalysing the conversion of various compounds, providing necessary substrates, and potentially playing a further role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
In the context of ecological risk assessment of chemicals, individual-based population models hold great potential to increase the ecological realism of current regulatory risk assessment procedures. However, developing and parameterizing such models is time-consuming and often ad hoc. Using standardized, tested submodels of individual organisms would make individual-based modelling more efficient and coherent. In this thesis, I explored whether Dynamic Energy Budget (DEB) theory is suitable for use as a standard submodel in individual-based models, both for ecological risk assessment and theoretical population ecology. First, I developed a generic implementation of DEB theory in an individual-based modelling (IBM) context: DEB-IBM. Using the DEB-IBM framework, I tested the ability of DEB theory to predict population-level dynamics from the properties of individuals. We used Daphnia magna as a model species, for which data at the individual level were available to parameterize the model, and population-level predictions were compared against independent data from controlled population experiments. We found that DEB theory successfully predicted population growth rates and peak densities of experimental Daphnia populations in multiple experimental settings, but failed to capture the decline phase when the available food per Daphnia was low. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and will ultimately lead to theory development and the establishment of a generic basis for individual-based models and ecology.
In addition to these theoretical explorations, we tested the potential of DEB theory combined with IBMs to extrapolate effects of chemical stress from the individual to the population level. For this we used individual-level information on the effect of 3,4-dichloroaniline on Daphnia. The individual data suggested direct effects on reproduction but no significant effects on growth. Assuming such direct effects on reproduction, the model was able to accurately predict the population response to increasing concentrations of 3,4-dichloroaniline. We conclude that DEB theory combined with IBMs holds great potential for standardized ecological risk assessment based on ecological models.
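The IBM pattern underlying such models can be illustrated with a drastically simplified sketch in which each individual carries an energy reserve, assimilates from a shared food pool, pays maintenance, and reproduces above a threshold. This is not the calibrated DEB model of the thesis (no reserve kinetics, kappa rule, growth, or stage structure); all names and parameter values are invented:

```python
import random

# Toy individual-based population sketch: each animal has an energy
# reserve, assimilates from a shared food pool, pays maintenance, and
# reproduces above a threshold; it dies when the reserve is exhausted.
# Illustrative only -- NOT the DEB model; all parameters are invented.

class Animal:
    def __init__(self, reserve=1.0):
        self.reserve = reserve

def step(population, food, assim=0.5, maint=0.3, repro_at=2.0):
    """Advance one time step; returns (new_population, remaining_food)."""
    newborn = []
    survivors = []
    random.shuffle(population)              # random feeding order
    for a in population:
        intake = min(assim, food)           # feed from the shared pool
        food -= intake
        a.reserve += intake - maint         # assimilation minus maintenance
        if a.reserve >= repro_at:           # reproduce and pay the cost
            a.reserve -= 1.0
            newborn.append(Animal())
        if a.reserve > 0.0:                 # starvation mortality
            survivors.append(a)
    return survivors + newborn, food
```

Iterating `step` with the food pool renewed each step produces population growth while food is ample and decline once per-capita food becomes limiting, which is the cross-level behavior the thesis tests with the full DEB submodel.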
The adaptation of sectors to changing climatic conditions requires an understanding of regional vulnerabilities. Vulnerability is defined as a function of sensitivity and exposure, which together represent the potential impacts of climate change, and of the adaptive capacity of systems. Vulnerability studies that quantify these components have become an important tool in climate science. From a scientific perspective, however, there is disagreement about how this definition should be implemented in studies. This conflict gives rise to many challenges, above all concerning the quantification and aggregation of the individual components and their appropriate levels of complexity. This dissertation therefore aims to advance the applicability of the vulnerability concept by translating it into a systematic structure. This structure covers all components and proposes, for each climate impact (e.g. flash floods), a description of the vulnerable system (e.g. settlements) that is directly linked to a specific direction of a relevant climatic stimulus (e.g. stronger impacts with an increasing number of heavy-rain days). Regarding the challenging procedure of aggregation, two alternative methods that enable a cross-sectoral overview are presented and their advantages and disadvantages discussed. The developed structure of a vulnerability study is then applied, using an indicator-based and deductive approach, to municipalities in North Rhine-Westphalia, Germany, as an example; a transfer to other regions is nevertheless possible. The quantification for the municipalities relies on information from the literature.
Since suitable indicators were lacking for many sectors, new indicators are developed and applied in this work, for example for the forestry and health sectors. However, missing empirical data on relevant thresholds constitute a gap, for example regarding which magnitude of climate change causes a significant impact. As a consequence, the study can only make relative statements about the degree of vulnerability of each municipality compared to the rest of the federal state. To fill this gap, the present and future windthrow hazard of forests is calculated for the forestry sector as an example. For this purpose, forest characteristics are linked with empirical damage data from a past storm event. The resulting sensitivity value is then combined with the wind conditions. Cross-sectoral vulnerability studies require considerable resources, which often hampers their applicability. In a next step, the potential for reducing complexity is therefore examined using two sectoral examples. Numerous meteorological indices of varying complexity are available for predicting the occurrence of forest fires. With respect to the number of monthly forest fires, relative humidity shows better predictive power than more complex indices for most German federal states. This is the case even though it itself serves as an input variable for the more complex indices. With this single meteorological factor, forest fire danger in German regions can thus be expressed with sufficient accuracy, which increases the resource efficiency of studies. The complexity of methods is examined in a similar way with regard to the application of the eco-hydrological model SWIM for the Brandenburg region.
Die interannuellen Bodenwasserwerte, welche durch dieses Modell simuliert werden, können nur unzureichend durch ein einfacheres statistisches Modell, welches auf denselben Eingangsdaten aufbaut, abgebildet werden. Innerhalb eines Zeithorizonts von Jahrzehnten, kann der statistische Ansatz jedoch das Bodenwasser zufriedenstellend abbilden und zeigt eine Dominanz der Bodeneigenschaft Feldkapazität. Dies deutet darauf hin, dass die Komplexität im Hinblick auf die Anzahl der Eingangsvariablen für langfristige Berechnungen reduziert werden kann. Allerdings sind die Aussagen durch fehlende beobachtete Bodenwasserwerte zur Validierung beschränkt. Die vorliegenden Studien zur Vulnerabilität und ihren Komponenten haben gezeigt, dass eine Anwendung noch immer wissenschaftlich herausfordernd ist. Folgt man der hier verwendeten Vulnerabilitätsdefinition, treten zahlreiche Probleme bei der Implementierung in regionalen Studien auf. Mit dieser Dissertation wurden Fortschritte bezüglich der aufgezeigten Lücken bisheriger Studien erzielt, indem eine systematische Struktur für die Beschreibung und Aggregierung von Vulnerabilitätskomponenten erarbeitet wurde. Hierfür wurden mehrere Ansätze diskutiert, die jedoch Vor- und Nachteile besitzen. Diese sollten vor der Anwendung von zukünftigen Studien daher ebenfalls sorgfältig abgewogen werden. Darüber hinaus hat sich gezeigt, dass ein Potential besteht einige Ansätze zu vereinfachen, jedoch sind hierfür weitere Untersuchungen nötig. Insgesamt konnte die Dissertation die Anwendung von Vulnerabilitätsstudien als Werkzeug zur Unterstützung von Anpassungsmaßnahmen stärken.
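The component structure described above (potential impact from sensitivity and exposure, moderated by adaptive capacity) can be sketched as a toy aggregation. The weighting scheme, function names and indicator values below are purely illustrative assumptions, since the thesis deliberately compares alternative aggregation methods rather than prescribing one:

```python
# Illustrative sketch only: one generic multiplicative aggregation scheme
# with hypothetical, already-normalized indicator values per municipality.

def normalize(values):
    """Min-max normalize raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def vulnerability(sensitivity, exposure, adaptive_capacity):
    """Vulnerability as potential impact (sensitivity x exposure) reduced
    by adaptive capacity; all inputs assumed normalized to [0, 1]."""
    potential_impact = sensitivity * exposure
    return potential_impact * (1.0 - adaptive_capacity)

# Hypothetical (sensitivity, exposure, adaptive capacity) per municipality
munis = {
    "A": (0.8, 0.6, 0.3),
    "B": (0.4, 0.9, 0.7),
    "C": (0.5, 0.5, 0.5),
}
scores = {name: vulnerability(*v) for name, v in munis.items()}

# Only a relative ranking is meaningful here -- absolute statements would
# require empirical damage thresholds, the gap the thesis points out.
ranked = sorted(scores, key=scores.get, reverse=True)
```

This mirrors the thesis's limitation directly: without empirical thresholds the scores support only comparisons between municipalities, not absolute hazard statements.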
The potential increase in the frequency and magnitude of extreme floods is currently discussed in the context of global warming and the intensification of the hydrological cycle. A profound knowledge of the past natural variability of floods is of utmost importance for assessing future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are valuable natural geoarchives because they provide continuous records of environmental change extending back more than 10,000 years, down to seasonal resolution. Since lake basins additionally act as natural sediment traps, riverine sediment supply, preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (northeastern European Alps), which covers the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers ranging in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, Scanning Electron Microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on varve counting controlled by 14C age dates, the main goals of this thesis are (i) to identify the seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. The thesis follows a line of different time slices, presenting an integrative approach linking instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the Lake Mondsee sediments.
The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers. Two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers enriched in both siliciclastic and dolomitic material reveal sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin; these layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps; these layers are generally thicker (0.65-32 mm) and are recorded solely within the southern lake basin. Comparison with instrumental data shows that thicker graded layers result from local debris flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and during the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to cooler climate and summer flood recurrence in the northeastern Alps. In contrast, intermediate or decreased flood activity occurred during the MCA and the LIA themselves. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climatic transitions in the northeastern Alps.
The 7000-year flood chronology comprises 47 debris flows and 269 floods, with shifts to increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present; present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite the relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in the early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later dwellings were built on piles in the water, suggesting an early flood-risk adaptation by humans and/or a general change in the Late Neolithic lake-dweller culture for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
Grammatica Grandonica
(2013)
In May 2010, Johann Ernst Hanxleden's Grammatica Grandonica was rediscovered in Montecompatri (Lazio, Rome). Although historiographers attached much weight to this work, one of the oldest Western grammars of Sanskrit, the precious manuscript had been lost for several decades. The first aim of the present digital publication is to offer a photographic reproduction of the manuscript. This facsimile is accompanied by a double edition: a facing diplomatic edition with the Sanskrit in Malayāḷam script, followed by a transliterated established text.
This thesis rests on two large Active Galactic Nuclei (AGN) surveys. The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of this variability is still not well understood. Studying the properties of LLAGNs may help to better understand galaxy evolution and how AGNs transit between active and inactive states. In this thesis, we develop a method to extract the variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract the variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray selected AGN hosts at redshifts z>1.5 and compares them to quiescent galaxies. This survey aims at studying the environments, sizes and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5<z<3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package. This is the largest sample of AGN hosts at redshift z>1.5 to date. We analyzed the evolution of the structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the most probable AGN triggering mechanism in the 43.5<log_10 L_X<45 luminosity range. We also conducted mock observations of AGN and quiescent galaxies to determine errors and corrections for the derived parameters.
We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z=1.5 and z=3.5. Morphological classification of both active and quiescent galaxies shows that the majority of the AGN host galaxies are disc-dominated, with disturbance rates that are significantly lower than among the non-AGN galaxies. Such a finding suggests that Major Mergers are probably not responsible for triggering AGN accretion in most of these galaxies. Other secular mechanisms should therefore be responsible.
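The host-subtraction idea behind the variability pipeline can be sketched as follows. This is a toy illustration, not the thesis's actual implementation: the static host is modelled here as the pixel-wise median over epochs, and PSF matching, noise modelling and photometric apertures are all omitted; the grid size and fluxes are invented:

```python
import numpy as np

# Hedged sketch: the host galaxy does not vary between epochs, so a
# pixel-wise median over all epochs approximates it; subtracting this
# static model leaves only the variable nuclear (AGN) component.

size = 21
yy, xx = np.mgrid[:size, :size]
host = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / 50.0)  # static host image
nucleus = ((xx == 10) & (yy == 10)).astype(float)          # central-pixel mask

# Variable point-source flux per epoch (illustrative values)
agn_flux = np.array([0.0, 0.1, 0.4, 0.2, 0.5, 0.3, 0.0, 0.6])
epochs = np.stack([host + f * nucleus for f in agn_flux])

host_model = np.median(epochs, axis=0)   # epoch-independent component
residuals = epochs - host_model          # variability-only images
nuclear_lc = residuals[:, 10, 10]        # light curve at the nucleus
excess_variance = nuclear_lc.var()       # simple variability amplitude
```

The median absorbs any constant offset, so the recovered light curve tracks the true flux variations up to an additive constant, which is irrelevant for variance-based variability measures.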
The challenge is to provide teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers TryScience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
.NET Gadgeteer Workshop
(2013)
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners' problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
A method is presented for acquiring the principles of three sorting algorithms by developing interactive applications in Excel.
We present a concept for better integrating practical teaching into student teacher education in Computer Science. As an introduction to the workshop, different possible scenarios are discussed on the basis of examples. Afterwards, workshop participants will have the opportunity to discuss the application of the concepts in other settings.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
We shall examine the Pedagogical Content Knowledge (PCK) of Computer Science (CS) teachers concerning students’ Computational Thinking (CT) problem solving skills within the context of a CS course in Dutch secondary education and thus obtain an operational definition of CT and ascertain appropriate teaching methodology. Next we shall develop an instrument to assess students’ CT and design a curriculum intervention geared toward teaching and improving students’ CT problem solving skills and competences. As a result, this research will yield an operational definition of CT, knowledge about CT PCK, a CT assessment instrument and teaching materials and accompanying teacher instructions. It shall contribute to CS teacher education, development of CT education and to education in other (STEM) subjects where CT plays a supporting role, both nationally and internationally.
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grained epistemic modules for learning Python programming. We define the traces that need to be collected in order to study the navigation trajectories pupils draw across this hypermedia. It may be browsed by pupils either as a course support or as extra help for solving the list of exercises (mainly for discovering algorithmics). By leaving the locus of control with the learner, we want to observe the different trajectories they draw through our system. These trajectories may be abstracted, interpreted as strategies, and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method and the expected results.
The traditional purpose of algorithms in education is to prepare students for programming. In our effort to introduce the practically missing computing science into Czech general secondary education, we have revisited this purpose. We propose an approach which is in better accordance with the goals of general secondary education in Czechia. The importance of programming is diminishing, while the recognition of algorithmic procedures and the precise (yet concise) communication of algorithms are gaining importance. This includes expressing algorithms in natural language, which is more useful for most students than programming. We propose criteria to evaluate such descriptions. Finally, an idea about the limitations is required (inefficient algorithms, unsolvable problems, Turing's test). We describe these adjusted educational goals and an outline of the resulting course. Our experience with carrying out the proposed intentions is satisfactory, although we did not accomplish all the defined goals.
Japan launched its new Course of Study in April 2012, which has been implemented in elementary schools and junior high schools and will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors point out what role experts in informatics and information studies education should play in general education centered around information studies, which is meant to help citizens lead active, empowered and flexible lives.
This article is a summary of the work carried out by the Ministry of Education in Turkey, in terms of the development of a new ICT Curriculum, together with the e-Training of teachers who will play an important role in the forthcoming pilot study. Based on recent literature on the topic, the article starts by introducing the “F@tih Project”, a national project that aims to effectively integrate technology into schools. After assessing teachers’ and students’ ICT competencies, as defined internationally, the review continues with the proposed model for the e-training of teachers. Summarizing the process of development of the new ICT curriculum, researchers underline key points of the curriculum such as dimensions, levels and competencies. Then teachers’ e-training approaches, together with selected tools, are explained in line with the importance and stages of action research that will be used throughout the pilot implementation of the curriculum and e-training process.
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to and experiences of introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey was carried out among a number of teaching professionals and experts from the UK and Germany with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should be moving forward.
The aim of our article is to collect and present information about contemporary programming environments that are suitable for primary education. We studied the ways they implement (or do not implement) some programming concepts, the ways programs are represented and built in order to support young and novice programmers, as well as their suitability to allow different forms of sharing the results of pupils’ work. We present not only a short description of each considered environment and the taxonomy in the form of a table, but also our understanding and opinions on how and why the environments implement the same concepts and ideas in different ways and which concepts and ideas seem to be important to the creators of such environments.
The process of introducing compulsory ICT education at primary school level in the Czech Republic should be completed next year. Programming and Information, two topics from the basics of computer science have been included in a new textbook. The question is whether the new chapters of the textbook are comprehensible for primary school teachers, who have undergone no training in computer science. The paper reports on a pilot verification project in which pre-service primary school teachers were trained to teach these informatics topics.
In this paper, we show how the theory of NP-completeness can be introduced to students in secondary schools. The motivation of this research is that although there are difficult issues that require technical background, students are already familiar with demanding computational problems through games such as Sudoku or Tetris. Our intention is to bring together important concepts in the theory of NP-completeness in such a way that students in secondary schools can easily understand them. This is part of our ongoing research into how to teach fundamental issues of Computer Science in secondary schools. We discuss what needs to be taught in which sequence in order to introduce the ideas behind NP-completeness to students without technical backgrounds.
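The core classroom idea, that verifying a candidate solution is easy even when finding one may be hard, can be illustrated with a small Latin square (a stand-in for the Sudoku-like puzzles mentioned above). The sketch below is illustrative material, not taken from the paper:

```python
from itertools import permutations

# Verification is cheap: checking a filled grid takes polynomial time.
def is_valid(grid):
    """Check that every row and column contains each symbol exactly once."""
    n = len(grid)
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all(set(col) == symbols for col in zip(*grid))
    return rows_ok and cols_ok

# Search is expensive: this brute force tries ordered row selections and
# grows factorially with n -- the contrast pupils can experience directly.
def brute_force(n):
    rows_pool = list(permutations(range(1, n + 1)))
    for rows in permutations(rows_pool, n):
        grid = [list(r) for r in rows]
        if is_valid(grid):
            return grid
    return None

solution = brute_force(3)
```

Even for n = 4 the search space is already noticeable, while `is_valid` stays instant, which makes the easy-to-check/hard-to-find asymmetry tangible without any formal machinery.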
The development of competence-oriented curricula is still an important theme in informatics education. Unfortunately, informatics curricula that include the domain of logic programming are still input-oriented or lack detailed competence descriptions. The development of a competence model and of descriptions of learning outcomes is therefore essential for the learning process in this domain. Prior research developed both. The next research step is to formulate test items to measure the described learning outcomes. This article describes this procedure and gives example test items. It also relates a school test to the items and shows which misconceptions and typical errors are important to discuss in class. The test results can also confirm or disprove the competence model. This school test is therefore important both for theoretical research and for the concrete planning of lessons. Quantitative analysis in schools is important for the evaluation and improvement of informatics education.
Assuming that liquid iron alloy from the outer core interacts with the solid silicate-rich lower mantle, the influence on the core-mantle reflected phase PcP is studied. If the core-mantle boundary is not a sharp discontinuity, this becomes apparent in the waveform and amplitude of PcP. Iron-silicate mixing would lead to regions of partial melting with higher density, which in turn reduces the velocity of seismic waves. On the basis of the calculation and interpretation of short-period synthetic seismograms, using the reflectivity and Gaussian beam methods, a model space is evaluated for these ultra-low velocity zones (ULVZs). The aim of this thesis is to analyse the behaviour of PcP between 10° and 40° source distance for such models using different velocity and density configurations. Furthermore, the resolution limits of seismic data are discussed, and the influence of the assumed layer thickness, dominant source frequency and ULVZ topography is analysed. The Gräfenberg and NORSAR arrays are then used to investigate PcP from deep earthquakes and nuclear explosions. The seismic resolution of an ULVZ is limited both for velocity and density contrasts and for layer thicknesses. Even a very thin global core-mantle transition zone (CMTZ), rather than a discrete boundary and also with strong impedance contrasts, seems possible: if no precursor is observable but the PcP_model/PcP_smooth amplitude reduction amounts to more than 10%, a very thin ULVZ of 5 km with a first-order discontinuity may exist. Otherwise, if amplitude reductions of less than 10% are obtained, this could indicate either a moderate, thin ULVZ or a gradient mantle-side CMTZ. Synthetic computations reveal notable amplitude variations as a function of distance and impedance contrast, predicting a primary density effect in the very steep-angle range and a pronounced velocity dependence in the wide-angle region.
In view of the modelled findings, there is evidence for a 10 to 13.5 km thick ULVZ 600 km south-east of Moscow with a NW-SE extension of about 450 km. Here no single specific velocity and density anomaly can be determined, in agreement with the synthetic results in which several models create similar amplitude-waveform characteristics. For example, a ULVZ model with contrasts of -5% VP, -15% VS and +5% density explains the measured PcP amplitudes. Moreover, below SW Finland and NNW of the Caspian Sea a CMB topography can be assumed. The amplitude measurements indicate a topography with a wavelength of 200 km and a height of 1 km, as previously shown in the study by Kampfmann and Müller (1989). Better constraints might be provided by a joint analysis of seismological data, mineralogical experiments and geodynamic modelling.
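The interpretation rule stated in the abstract can be summarized as a simple classification. Only the 10% amplitude-reduction threshold and the precursor criterion come from the text; the function name, inputs and labels are illustrative:

```python
# Sketch of the stated interpretation rule for PcP observations;
# amplitude_reduction is the fractional PcP_model/PcP_smooth reduction.

def classify_cmtz(precursor_observed, amplitude_reduction):
    """Map a PcP observation to a candidate core-mantle structure."""
    if not precursor_observed and amplitude_reduction > 0.10:
        return "very thin ULVZ (~5 km) with first-order discontinuity"
    if amplitude_reduction < 0.10:
        return "moderate thin ULVZ or gradient mantle-side CMTZ"
    return "ambiguous"

label = classify_cmtz(precursor_observed=False, amplitude_reduction=0.15)
```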
The International Conference on Informatics in Schools: Situation, Evolution and Perspectives – ISSEP – is a forum for researchers and practitioners in the area of Informatics education, both in primary and secondary schools. It provides an opportunity for educators to reflect upon the goals and objectives of this subject, its curricula and various teaching/learning paradigms and topics, possible connections to everyday life and various ways of establishing Informatics Education in schools. This conference also cares about teaching/learning materials, various forms of assessment, traditional and innovative educational research designs, Informatics’ contribution to the preparation of children for the 21st century, motivating competitions, projects and activities supporting informatics education in school.
Metals are often used in environments that are conducive to corrosion, which leads to a reduction in their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only when intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction provoke an urgent need for finding environmentally friendly corrosion preventing systems. A promising approach to replace the toxic chromate coatings is to embed particles containing nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy, AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers with different sizes (d ≈ 80 and 700 nm, respectively) were investigated. The studied robust containers exhibit high surface area (≈ 1000 m² g-1), narrow pore size distribution (dpore ≈ 3 nm) and large pore volume (≈ 1 mL g-1) as determined by N2 sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor as well as its release in response to pH changes induced by the corrosion process. 
The concentration, position and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating barrier properties. This study broadens the knowledge about the main factors influencing the coating anticorrosion efficiency and assists the development of optimum active anticorrosive coatings doped with inhibitor loaded containers.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In the last years conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
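The covering/completeness idea for conditions can be sketched as follows. The measure names and toy tables are illustrative assumptions, not the paper's exact definitions or algorithms: here "precision" is the share of condition-matching tuples whose value is included in the referenced column, and "recall" is the share of included tuples that the condition covers:

```python
# Hedged sketch of precision/recall-style quality measures for a
# conditional inclusion dependency (CIND) candidate.

def cind_quality(tuples, cond_attr, cond_value, dep_attr, referenced_values):
    """Evaluate the condition cond_attr = cond_value for the inclusion
    dependency tuples[dep_attr] ⊆ referenced_values."""
    matching = [t for t in tuples if t[cond_attr] == cond_value]
    included = [t for t in tuples if t[dep_attr] in referenced_values]
    good = [t for t in matching if t[dep_attr] in referenced_values]
    precision = len(good) / len(matching) if matching else 0.0
    recall = len(good) / len(included) if included else 0.0
    return precision, recall

# Hypothetical orders table; CIND candidate: for orders with type = 'web',
# customer_id should be included in the customers table.
orders = [
    {"type": "web", "customer_id": 1},
    {"type": "web", "customer_id": 2},
    {"type": "web", "customer_id": 9},   # violation: unknown customer
    {"type": "phone", "customer_id": 1},
]
customers = {1, 2, 3}
p, r = cind_quality(orders, "type", "web", "customer_id", customers)
```

An algorithm of the kind the paper describes would search over candidate condition attributes and values, keeping those whose measures exceed given quality thresholds; the sketch shows only the per-condition evaluation step.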
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. 
This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. 
A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, covering a total study area of 5,800 km², found that these features are more common in areas with steeper slopes and larger relief gradients, whereas thermokarst development is more pronounced in flat lowlands with low relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of ice-rich permafrost degradation on the hydrology and climate of a given region.
The present work is devoted to establishing a new generation of self-healing anti-corrosion coatings for the protection of metals. The concept of self-healing anticorrosion coatings combines a passive part, represented by the matrix of a conventional coating, with an active part, represented by micron-sized capsules loaded with a corrosion inhibitor. Polymers were chosen as the class of compounds most suitable for capsule preparation. The morphology of capsules made of crosslinked polymers, however, was found to depend on the nature of the encapsulated liquid. Therefore, a systematic analysis of the morphology of capsules consisting of a crosslinked polymer and a solvent was performed. Three classes of polymers, namely polyurethane, polyurea and polyamide, were chosen. Capsules made of these polymers and eight solvents of different polarity were synthesized via interfacial polymerization. It was shown that the morphology of the resulting capsules is specific to every polymer-solvent pair. The formation of capsules with three general types of morphology, namely core-shell, compact and multicompartment, was demonstrated by means of Scanning Electron Microscopy. Compact morphology was assumed to result from specific polymer-solvent interactions and to be analogous to the process of swelling. In order to verify this hypothesis, pure polyurethane, polyurea and polyamide were synthesized and their swelling behavior in the solvents used as the encapsulated material was investigated. It was shown that the swelling behavior of the polymers in most cases correlates with the capsule morphology. The different morphologies (compact, core-shell and multicompartment) were therefore attributed to specific polymer-solvent interactions and discussed in terms of “good” and “poor” solvents.
Capsules with core-shell morphology are formed when the encapsulated liquid is a “poor” solvent for the chosen polymer, while compact morphologies are formed when the solvent is “good”. Multicompartment morphology is explained by the formation of infinite networks, i.e. gelation, of crosslinked polymers. If gelation occurs after phase separation in the system has been achieved, core-shell morphology results. If gelation of the polymer occurs well before crosslinking is complete, further condensation of the polymer due to crosslinking may lead to the formation of porous or multicompartment morphologies. It was concluded that, in general, the morphology of capsules consisting of a given polymer-solvent pair can be predicted on the basis of the polymer-solvent behavior. In some cases the swelling behavior and the morphology may not match; the reasons for this are discussed in detail in the thesis. The discussed approach, however, can only predict capsule morphology for polymer-solvent pairs that have been characterized. In practice, capsule design involves trialling a great number of polymer-solvent combinations, and more complex systems consisting of three, four or even more components are often used. Evaluating the swelling behavior of each component pair of such systems becomes impractical. Therefore, the solubility parameter approach proved more useful, as it considers the properties of each single component instead of pairs of components. Accordingly, the Hansen Solubility Parameter (HSP) approach was used for further analysis. Solubility spheres were constructed for polyurethane, polyurea and polyamide. For this, a three-dimensional graph is plotted with the dispersion, polar and hydrogen-bonding components of the solubility parameter, obtained from the literature, as orthogonal axes. The HSP of the solvents are used as the coordinates of the points on the HSP graph.
A sphere with a certain radius is then placed on the graph such that the “good” solvents lie inside the sphere while the “poor” ones lie outside. Both the location of the sphere centre and the sphere radius are fitted to data on the polymer's swelling behavior in a number of solvents. According to the observed correlation between capsule morphology and polymer swelling behavior, solvents located inside the solubility sphere of a polymer give capsules with compact morphology, while solvents located outside the solubility sphere give either core-shell or multicompartment capsules in combination with the chosen polymer. Once the solubility sphere of a polymer has been found, its solubility/swelling behavior can be extrapolated to all possible substances. HSP theory therefore allows prediction of the polymer's solubility/swelling behavior, and consequently of the capsule morphology, for any given substance with known HSP values on the basis of limited data. This is what makes the theory attractive for application in chemistry and technology, since the choice of system components is usually performed on the basis of a large number of different parameters that must mutually match. Even a slight change in the technology sometimes makes it necessary to find an analogue of a given solvent with the same solvency but a different chemistry; in such cases the HSP approach is indispensable. In the second part of the work, examples of the application of HSP for the fabrication of capsules with on-demand morphology are presented. Capsules with compact or core-shell morphology containing corrosion inhibitors were synthesized. Alkoxysilanes possessing long hydrophobic tails, combining passivating and water-repelling properties, were encapsulated in polyurethane shells. The mechanism of action of the active material required core-shell morphology of the capsules.
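The sphere test described above amounts to computing the Hansen distance Ra between a polymer's sphere centre and a solvent's HSP triple and comparing it to the fitted radius R0 (the ratio Ra/R0 is known as the RED number). A minimal sketch, with a purely illustrative polymer centre and radius rather than the fitted values from this work:

```python
import math

def hsp_distance(p1, p2):
    """Hansen distance Ra between two (dD, dP, dH) triples in MPa^0.5.
    The factor 4 on the dispersion term is part of Hansen's convention."""
    dD1, dP1, dH1 = p1
    dD2, dP2, dH2 = p2
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

def classify_solvent(polymer_center, radius, solvent_hsp):
    """RED = Ra / R0: below 1 the solvent lies inside the sphere
    ('good', compact capsules expected), above 1 outside ('poor',
    core-shell or multicompartment morphology expected)."""
    red = hsp_distance(polymer_center, solvent_hsp) / radius
    return red, ("good" if red < 1.0 else "poor")

# Hypothetical polymer sphere and a test solvent (values invented):
polymer = (18.0, 10.0, 8.0)   # sphere centre (dD, dP, dH)
R0 = 5.0                      # fitted interaction radius
red, verdict = classify_solvent(polymer, R0, (18.5, 9.0, 7.0))
```

The same one-line comparison replaces a full swelling experiment once the sphere has been fitted, which is precisely what makes the approach useful for screening many candidate solvents.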
The new hybrid corrosion inhibitor, cerium diethylhexyl phosphate, was encapsulated in polyamide shells in order to facilitate the dispersion of the substance and improve its adhesion to the coating matrix. Commercially available antifouling agents were encapsulated in polyurethane shells in order to control their release behavior and colloidal stability. Capsules with compact morphology made of polyurea and containing the liquid corrosion inhibitor 2-methyl benzothiazole were synthesized in order to improve the colloidal stability of the substance. Capsules with compact morphology allow slower release of the encapsulated liquid than core-shell ones. If in-situ encapsulation is not possible because the oil-soluble monomer reacts with the encapsulated material, a solution was proposed: capsules of the desired morphology are preformed, and loading is performed only after the monomer has been deactivated by completion of the polymerization reaction. In this way, compact polyurea capsules containing the highly effective but chemically active corrosion inhibitors 8-hydroxyquinoline and benzotriazole were fabricated. All the resulting capsules were successfully introduced into model coatings. The efficiency of the resulting “smart” self-healing anticorrosion coatings on steel and on aluminium alloy of the AA-2024 series was evaluated using characterization techniques such as the Scanning Vibrating Electrode Technique, Electrochemical Impedance Spectroscopy and salt-spray chamber tests.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
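The run-length enhancement by a passively diffusing motor can be illustrated with a small Monte Carlo sketch: the cargo advances only while the active motor pulls, and an unbound motor may rebind as long as the other motor keeps the cargo tethered to the filament. All rates and the velocity below are illustrative placeholders, not parameters from the thesis.

```python
import random

def cargo_run_length(rng, v=1.0, eps_a=1.0, eps_p=0.25, pi_a=5.0, pi_p=5.0):
    """Distance travelled before the active (a) and passive (p) motor
    are unbound simultaneously. eps_*: unbinding rates; pi_*: rebinding
    rates, available only while the other motor tethers the cargo."""
    a, p = True, True          # both motors start bound
    x = 0.0
    while True:
        rate_a = eps_a if a else pi_a
        rate_p = eps_p if p else pi_p
        total = rate_a + rate_p
        dt = rng.expovariate(total)     # waiting time to the next event
        if a:
            x += v * dt        # cargo advances only while actively pulled
        if rng.random() * total < rate_a:
            a = not a
        else:
            p = not p
        if not a and not p:
            return x           # both unbound: cargo leaves the filament

rng = random.Random(42)
trials = 2000
mean_pair = sum(cargo_run_length(rng) for _ in range(trials)) / trials
mean_single = 1.0 / 1.0        # single active motor: <x> = v / eps_a
```

With these invented rates the tethering-and-rebinding mechanism already yields a several-fold increase of the mean run length over the single-motor value v/eps_a, in line with the qualitative picture described above.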
A fine-grained slope that exhibits slow movement rates was investigated to understand how geohydrological processes contribute to the progressive development of mass movements in the Vorarlberg Alps, Austria. For that purpose, intensive hydrometeorological, hydrogeological and geotechnical observations as well as surveying of surface movement rates were conducted during 1998–2001. Subsurface water dynamics at the creeping slope turned out to be dominated by a three-dimensional pressure system. The pressure reaction is triggered by fast infiltration of surface water and subsequent lateral water flow in the south-western part of the hillslope. The related pressure signal was shown to propagate further downhill, causing fast reactions of the piezometric head at 5.5 m depth on a daily time scale. The observed pressure reactions might belong to a temporary hillslope water body that extends further downhill; the related buoyancy forces could be one of the driving forces of the mass movement. A physically based hydrological model was adopted to simulate surface and subsurface water dynamics simultaneously, including evapotranspiration and runoff production. It was possible to reproduce surface runoff and the observed pressure reactions in principle. However, as the soil hydraulic functions were only estimated from pedotransfer functions, a quantitative comparison between observed and simulated subsurface dynamics is not feasible. Nevertheless, the results suggest that it is possible to reconstruct important spatial structures from sparse field observations, which allow reasonable simulations with a physically based hydrological model. Copyright 2005 John Wiley & Sons, Ltd. Key words: rainfall-induced landslides; soil creep; hydrological modelling; Vorarlberg; Austria; pressure propagation
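The downslope propagation and attenuation of such a pressure signal can be caricatured by linear diffusion of a piezometric-head perturbation. The sketch below is a deliberately simplified stand-in, not the physically based model used in the study; grid, diffusivity and pulse amplitude are all invented.

```python
# Explicit finite-difference scheme for 1D head diffusion h_t = D h_xx.
D = 1.0                     # diffusivity (m^2/h), illustrative
dx, dt = 1.0, 0.4           # D*dt/dx**2 = 0.4 <= 0.5: scheme is stable
n = 50                      # number of nodes along the slope transect
h = [0.0] * n
h[5] = 1.0                  # infiltration-induced head pulse upslope

def step(h):
    new = h[:]
    for i in range(1, n - 1):
        new[i] = h[i] + D * dt / dx**2 * (h[i + 1] - 2.0 * h[i] + h[i - 1])
    new[0], new[-1] = new[1], new[-2]   # crude no-flux boundaries
    return new

probe = 20                  # downslope observation node, 15 m from the pulse
series = []
for _ in range(400):
    h = step(h)
    series.append(h[probe])

peak = max(series)               # attenuated amplitude at the probe
peak_step = series.index(peak)   # delayed arrival of the signal
```

The probe sees a delayed, strongly attenuated copy of the input pulse, which is the qualitative behaviour (daily-scale pressure reactions at depth) reported for the instrumented hillslope.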
This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Exploiting the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, such as the entropy in a critical process, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms not containing derivatives are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and instead to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view one would not expect differences between the two ways of writing the differential operator, but a difference is of course present at the numerical level.
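Schematically, and in a simplified notation not taken from the thesis itself (q the conserved variables, f^r and f^z the fluxes, s the sources), the two formulations differ as follows:

```latex
% standard formulation: expand the cylindrical divergence and move the
% 1/r term, which contains no derivatives, to the source side
\partial_t q + \partial_r f^r(q) + \partial_z f^z(q)
  \;=\; s(q) - \frac{1}{r}\, f^r(q)

% new formulation: keep the flux operator intact and absorb r into the
% conserved variables, fluxes and sources
\tilde q \equiv r\, q, \qquad \tilde f^i \equiv r\, f^i, \qquad
\tilde s \equiv r\, s
\quad\Longrightarrow\quad
\partial_t \tilde q + \partial_r \tilde f^r + \partial_z \tilde f^z = \tilde s
```

Both forms are algebraically equivalent; the second avoids the discrete imbalance between the expanded flux terms and the 1/r source term near the axis, which is the plausible origin of the smaller truncation error reported below.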
Our tests show that the new formulation yields results with a global truncation error that is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena in the head-on collision of neutron stars was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass neutron stars, modeled with an ideal-gas equation of state (EOS), boosted towards each other, and varied the mass of the stars, their separation, their velocity and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter.
In this case, too, they found the appearance of type-I critical behaviour, and they additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium, and in particular not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass neutron stars using a setup that is as similar as possible to the one considered above. While we confirm that the merged object exhibits type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with previous results, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
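For type-I critical phenomena, the scaling relation referred to above takes the standard form known from the critical-collapse literature (the specific constants fitted in this work are not reproduced here): the survival time τ of the near-critical configuration diverges logarithmically as the critical parameter value is approached,

```latex
\tau(P) \;\simeq\; -\,\gamma \,\ln\left|P - P_\star\right| \;+\; \mathrm{const},
```

where γ is the inverse growth rate of the single unstable mode of the critical solution. The fine structure mentioned above then corresponds to a small periodic modulation superposed on this logarithmic law.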
Sediment records of three European lakes were investigated in order to reconstruct the regional climate development during the Lateglacial and Holocene, to investigate the response of local ecosystems to climatic fluctuations and human impact and to relate regional peculiarities of past climate development to climatic changes on a larger spatial scale. The Lake Hańcza (NE Poland) sediment record was studied with a focus on reconstructing the early Holocene climate development and identifying possible differences to Western Europe. Following the initial Holocene climatic improvement, a further climatic improvement occurred between 10 000 and 9000 cal. a BP. Apparently, relatively cold and dry climate conditions persisted in NE Poland during the first ca. 1500 years of the Holocene, most likely due to a specific regional atmospheric circulation pattern. Prevailing anticyclonic circulation linked to a high-pressure cell above the remaining Scandinavian Ice Sheet (SIS) might have blocked the eastward propagation of warm and moist Westerlies and thus attenuated the early Holocene climatic amelioration in this region until the final decay of the SIS, a pattern different from climate development in Western Europe. The Lateglacial sediment record of Lake Mondsee (Upper Austria) was investigated in order to study the regional climate development and the environmental response to rapid climatic fluctuations. While the temperature rise and environmental response at the onset of the Holocene took place quasi-synchronously, major leads and lags in proxy responses characterize the onset of the Lateglacial Interstadial. In particular, the spread of coniferous woodlands and the reduction of detrital flux lagged the initial Lateglacial warming by ca. 500–750 years. Major cooling at the onset of the Younger Dryas took place synchronously with a change in vegetation, while the increase of detrital matter flux was delayed by about 150–300 years. 
Complex proxy responses are also detected for short-term Lateglacial climatic fluctuations. In summary, periods of abrupt climatic changes are characterized by complex and temporally variable proxy responses, mainly controlled by ecosystem inertia and the environmental preconditions. A second study on the Lake Mondsee sediment record focused on two small-scale climate deteriorations around 8200 and 9100 cal. a BP, which have been triggered by freshwater discharges to the North Atlantic, causing a shutdown of the Atlantic meridional overturning circulation (MOC). Combining microscopic varve counting and AMS 14C dating yielded a precise duration estimate (ca. 150 years) and absolute dating of the 8.2 ka cold event, both being in good agreement with results from other palaeoclimate records. Moreover, a sudden temperature overshoot after the 8.2 ka cold event was identified, also seen in other proxy records around the North Atlantic. This was most likely caused by enhanced resumption of the MOC, which also initiated substantial shifts of oceanic and atmospheric front systems. Although there is also evidence from other proxy records for pronounced recovery of the MOC and atmospheric circulation changes after the 9.1 ka cold event, no temperature overshoot is seen in the Lake Mondsee record, indicating the complex behaviour of the global climate system. The Holocene sediment record of Lake Iseo (northern Italy) was studied to shed light on regional earthquake activity and the influence of climate variability and anthropogenic impact on catchment erosion and detrital flux into the lake. Frequent small-scale detrital layers within the sediments reflect allochthonous sediment supply by extreme surface runoff events. During the early to mid-Holocene, increased detrital flux coincides with periods of cold and wet climate conditions, thus apparently being mainly controlled by climate variability. 
In contrast, intervals of high detrital flux during the late Holocene partly also correlate with phases of increased human impact, reflecting the complex influences on catchment erosion processes. Five large-scale event layers within the sediments, composed of mass-wasting deposits and turbidites, are presumed to have been triggered by strong local earthquakes. While the uppermost of these event layers is assigned to a documented nearby earthquake in AD 1222, the four other layers are presumably related to previously undocumented prehistoric earthquakes.
Dryland vulnerability : typical patterns and dynamics in support of vulnerability reduction efforts
(2011)
The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. Due to their worldwide coverage, the global results allow the evaluation of a specific system’s vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. 
An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. The model resulted in a cyclic trajectory encompassing four states with differing degree of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. 
The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as “Avança Brasil”. Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.
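As a toy illustration of the clustering step behind such pattern recognition, the sketch below runs a plain k-means on synthetic, normalised indicator data. The indicators, data and number of clusters are invented; the actual analyses used five global and four local indicators and identified seven and four typical patterns, respectively.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two indicator vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean_point(group):
    m = len(group)
    return tuple(sum(p[i] for p in group) / m for i in range(len(group[0])))

def kmeans(points, k, rng, iters=50):
    """Lloyd's algorithm: assign each point to its nearest centre,
    then move each centre to the mean of its assigned points."""
    centers = rng.sample(points, k)
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[j].append(p)
        centers = [mean_point(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Two synthetic "vulnerability patterns" in a space of two normalised
# indicators (e.g. poverty, water stress):
rng = random.Random(1)
low  = [(rng.gauss(0.2, 0.05), rng.gauss(0.2, 0.05)) for _ in range(30)]
high = [(rng.gauss(0.8, 0.05), rng.gauss(0.8, 0.05)) for _ in range(30)]
centers, groups = kmeans(low + high, 2, rng)
```

The cluster centres then summarise the "typical patterns", and each region or household is interpreted through the pattern of the cluster it falls into, exactly as in the validation steps described above.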
The present thesis introduces an iterative expert-based Bayesian approach for assessing greenhouse gas (GHG) emissions from the 2030 German new vehicle fleet and quantifying the impacts of their main drivers. A first set of expert interviews has been carried out in order to identify technologies which may help to lower car GHG emissions and to quantify their emission reduction potentials. Moreover, experts were asked for their probability assessments that the different technologies will be widely adopted, as well as for important prerequisites that could foster or hamper their adoption. Drawing on the results of these expert interviews, a Bayesian Belief Network (BBN) has been built which explicitly models three vehicle types: Internal Combustion Engine Vehicles (which include mild and full Hybrid Electric Vehicles), Plug-In Hybrid Electric Vehicles, and Battery Electric Vehicles. The conditional dependencies of twelve central variables within the BBN - battery energy, fuel and electricity consumption, relative costs, and sales shares of the vehicle types - have been quantified by experts from German car manufacturers in a second series of interviews. Each of the seven second-round interviews yields an expert's individually specified BBN. The BBN have been run for different hypothetical 2030 scenarios which differ, e.g., in regard to battery development, regulation, and fuel and electricity GHG intensities. The present thesis delivers results both in regard to the subject of the investigation and in regard to its method. On the subject level, it has been found that the different experts expect 2030 German new car fleet emissions to be at 50 to 65% of 2008 new fleet emissions under the baseline scenario.
They can be further reduced to 40 to 50% of the emissions of the 2008 fleet through a combination of a higher share of renewables in the electricity mix, a larger share of biofuels in the fuel mix, and a stricter regulation of car CO2 emissions in the European Union. Technically, 2030 German new car fleet GHG emissions can be reduced to a minimum of 18 to 44% of 2008 emissions, a development which cannot be triggered by any combination of measures modeled in the BBN alone but needs further commitment. Out of a wealth of existing BBN, few have been specified by individual experts through elicitation, and to my knowledge, none of them has been employed for analyzing perspectives for the future. On the level of methods, this work shows that expert-based BBN are a valuable tool for making experts' expectations for the future explicit and amenable to the analysis of different hypothetical scenarios. BBN can also be employed for quantifying the impacts of main drivers. They have been demonstrated to be a valuable tool for iterative stakeholder-based science approaches.
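The scenario logic of such a network can be sketched with a deliberately tiny discrete example. The structure and all probabilities below are invented for illustration only; the thesis' BBN has twelve expert-specified variables, not two.

```python
# Two-node toy network: regulation -> BEV sales share -> fleet emissions.
# P(bev_share = "high" | regulation), illustrative values:
p_high_share = {"strict": 0.6, "lax": 0.2}

# Expected 2030 fleet emissions relative to 2008, given the BEV share
# (again purely illustrative numbers):
rel_emissions = {"high": 0.45, "low": 0.65}

def expected_emissions(regulation):
    """Marginalise the BEV-share node out of the two-node network."""
    p_high = p_high_share[regulation]
    return p_high * rel_emissions["high"] + (1 - p_high) * rel_emissions["low"]

e_strict = expected_emissions("strict")   # 0.6*0.45 + 0.4*0.65 = 0.53
e_lax = expected_emissions("lax")         # 0.2*0.45 + 0.8*0.65 = 0.61
```

Running the network under alternative evidence ("strict" vs. "lax" regulation) and comparing the marginalised emission estimates mirrors, in miniature, how the hypothetical 2030 scenarios are compared in the thesis.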