The sea-level-rise-induced intensification of coastal floods is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level advances slowly, leaving society enough time to adapt to the changing environment. Most commonly, this is achieved by the construction or reinforcement of flood defence measures such as dykes or sea walls, but land use planning and disaster management are also widely discussed options. Overall, although the projection of sea level rise impacts and the elaboration of adequate response strategies are among the most prominent topics in climate impact research, global damage estimates remain vague and mostly rely on the same assessment models. The thesis at hand addresses this issue by presenting a distinctive approach which facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking rests on two building blocks. Firstly, we make use of macroscopic flood-damage functions, i.e. damage functions that provide the total monetary damage within a delineated region (e.g. a city) caused by a flood of a certain magnitude. After introducing a systematic methodology for the automatised derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves utilisable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we are further able to characterise the slope of the damage functions by means of a functional model. The proposed function is in general sigmoidal in shape but exhibits a power-law increase over the relevant range of flood levels, and we detect an average exponent of 3.4 for the considered cities. This finding represents an essential input for the subsequent elaborations on the general interrelations of the involved quantities.
The second basic element of this work is extreme value theory, which is employed to characterise the occurrence of flood events and, in conjunction with a damage function, provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible, as it allows for non-stationarity in all relevant parameters and can easily be applied to arbitrary regions, sea level scenarios, and adaptation scenarios. For instance, we find a doubling of expected flood damage in the city of Copenhagen for a rise in mean sea level of only 11 cm. Proceeding from more general considerations, we succeed in deducing surprisingly simple functional expressions that describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and assumed protection levels. We are thus able to project future flood damage by means of a reduced set of parameters, namely the aforementioned damage-function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections. In this regard, a decrease of (relative) uncertainty with rising mean sea levels is detected. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a cost-benefit analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
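The combination described above, an extreme value distribution for flood levels together with a power-law damage function, can be sketched as a Monte Carlo estimate of expected annual damage. All parameters below (Gumbel location and scale, protection level, damage coefficient) are illustrative assumptions, not the thesis's calibrated values; only the exponent of 3.4 is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not thesis-calibrated) parameters:
loc, scale = 1.0, 0.25   # Gumbel location/scale of annual maxima, metres (assumed)
exponent = 3.4           # average damage-function exponent reported above
protection = 1.5         # assumed protection level, metres
coeff = 1.0              # damage scale factor, arbitrary monetary units

def expected_annual_damage(mean_sea_level_rise, n=200_000):
    """Monte Carlo estimate of the expected annual flood damage."""
    h = rng.gumbel(loc + mean_sea_level_rise, scale, n)   # annual maximum levels
    over = np.clip(h - protection, 0.0, None)             # overtopping depth
    return coeff * np.mean(over ** exponent)              # power-law damage

base = expected_annual_damage(0.0)
risen = expected_annual_damage(0.11)   # +11 cm mean sea level
print(risen / base)                    # damage amplification factor
```

The amplification factor produced here depends entirely on the assumed Gumbel parameters; the point is only that a modest shift in mean sea level moves the whole extreme value distribution upward and is amplified by the power-law damage function.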
Effect of mass wasting on soil organic carbon storage and coastal erosion in permafrost environments
(2015)
Accelerated permafrost thaw under the warming Arctic climate can have a significant impact on Arctic landscapes. Areas underlain by permafrost store high amounts of soil organic carbon (SOC). Permafrost disturbances may contribute to increased release of carbon dioxide and methane to the atmosphere. Coastal erosion, amplified through a decrease in Arctic sea-ice extent, may also mobilise SOC from permafrost. Large expanses of permafrost affected land are characterised by intense mass-wasting processes such as solifluction, active-layer detachments and retrogressive thaw slumping. Our aim is to assess the influence of mass wasting on SOC storage and coastal erosion.
We studied SOC storage on Herschel Island by analysing active-layer and permafrost samples, and compared non-disturbed sites to those characterised by mass wasting. Mass-wasting sites showed decreased SOC storage and material compaction, whereas sites characterised by material accumulation showed increased storage. The SOC storage on Herschel Island is also significantly correlated with catenary position and other slope characteristics. We estimated SOC storage on Herschel Island to be 34.8 kg C m⁻². This is comparable to similar environments in northwest Canada and Alaska.
Coastal erosion was analysed using high-resolution digital elevation models (DEMs). Two LiDAR surveys of the Yukon Coast were carried out, in 2012 and 2013. Two DEMs with 1 m horizontal resolution were generated and used to analyse elevation changes along the coast. The results indicate considerable spatial variability in short-term coastline erosion and progradation. This high variability was related to the presence of mass-wasting processes. Erosion and deposition extremes were recorded where retrogressive thaw slump (RTS) activity was most pronounced. Released sediment can be transported by longshore drift and affects not only the coastal processes in situ but also those along adjacent coasts.
We also calculated volumetric coastal erosion for Herschel Island by comparing a stereo-photogrammetrically derived DEM from 2004 with LIDAR DEMs. We compared this volumetric erosion to planimetric erosion, which was based on coastlines digitised from satellite imagery. We found a complex relationship between planimetric and volumetric coastal erosion, which we attribute to frequent occurrence of mass-wasting processes along the coasts. Our results suggest that volumetric erosion corresponds better with environmental forcing and is more suitable for the estimation of organic carbon fluxes than planimetric erosion.
Mass wasting can decrease SOC storage by several mechanisms. Increased aeration following disturbance may increase microbial activity, which accelerates organic matter decomposition. New hydrological conditions that follow a mass-wasting event can cause leaching of freshly exposed material. Organic-rich material can also be removed directly into the sea or into a lake. On the other hand, the accumulation of mobilised material can result in increased SOC storage. Mass-wasting-related accumulations of mobilised material can significantly impact coastal erosion in situ or along adjacent coasts via longshore drift. Therefore, coastline movement observations cannot completely resolve the actual sediment loss due to these temporary accumulations. The predicted increase of mass-wasting activity in the course of Arctic warming may increase SOC mobilisation and coastal-erosion-induced carbon fluxes.
In this thesis, sentence processing was investigated using a psychophysiological measure known as pupillometry as well as Event-Related Potentials (ERPs). The scope of the thesis was broad, investigating the processing of several different movement constructions with native speakers of English and second-language learners of English, as well as word order and case marking in German-speaking adults and children. Pupillometry and ERPs allowed us to test competing linguistic theories and to use novel methodologies to investigate the processing of word order. In doing so, we also aimed to establish pupillometry as an effective way to investigate the processing of word order, thus broadening the methodological spectrum.
The high-latitude thermospheric processes driven by the interaction of the solar wind and Interplanetary Magnetic Field (IMF) with the Earth's magnetosphere are highly variable parts of the complex, dynamic plasma environment that constitutes the coupled Magnetosphere–Ionosphere–Thermosphere (MIT) system. The solar wind and IMF interactions transfer energy to the MIT system via reconnection processes at the magnetopause. Field-Aligned Currents (FACs) constitute the energetic links between the magnetosphere and the Earth's ionosphere. The MIT system depends on the highly variable solar wind conditions, in particular on changes in the strength and orientation of the IMF.
In my thesis, I investigate the physical background of the complex MIT system using the global, physically based, numerical, three-dimensional, time-dependent and self-consistent Upper Atmosphere Model (UAM). This model describes the thermosphere, ionosphere, plasmasphere and inner magnetosphere as well as the electrodynamics of the coupled MIT system for the altitudinal range from 80 (60) km up to 15 Earth radii.
In the present study, I developed and investigated several variants of the high-latitude electrodynamic coupling by including the IMF dependence of FACs in the UAM model. For testing, these variants were applied to simulations of the coupled MIT system for different seasons, geomagnetic activities, and various solar wind and IMF conditions. Additionally, these variants of the theoretical model with IMF dependence were compared with global empirical models. The modelling results for the most important thermospheric parameters, such as neutral wind and mass density, were compared with satellite measurements. The variants of the UAM model with IMF dependence show good agreement with the satellite observations. In comparison with the empirical models, the improved variants of the UAM model reproduce more realistic meso-scale structures and dynamics of the coupled MIT system, in particular at high latitudes. The new configurations of the UAM model with IMF dependence contribute to the improvement of space weather prediction.
This volume is a novel approach to the corpus-based variationist sociolinguistic study of contemporary urban western Irish English. Based on qualitative data as well as on linguistic features extracted from the Corpus of Galway City Spoken English, this study approaches the major sociolinguistic characteristics of (th) and (dh) variability in Galway City English. It demonstrates the diverse local patterns of variability and change in the phonetic realisation of the dental fricatives and establishes a considerable degree of divergence from traditional accounts on Irish English. This volume suggests that the linguistic stratification of variants of (th) and (dh) in Galway correlates both with the social stratification of the city itself and with the stratification of speakers by social status, sex/gender and age group.
In the last decade, the number and dimensions of catastrophic flooding events in the Niger River Basin (NRB) have markedly increased. Despite the devastating impact of the floods on the population and the mainly agriculturally based economy of the riverine nations, awareness of the hazards in policy and science is still low. The urgency of this topic and the existing research deficits are the motivation for the present dissertation.
The thesis is an initial detailed assessment of the increasing flood risk in the NRB. The research strategy is based on four questions regarding (1) features of the change in flood risk, (2) reasons for the change in the flood regime, (3) expected changes of the flood regime given climate and land use changes, and (4) recommendations from previous analysis for reducing the flood risk in the NRB.
The question examining the features of change in the flood regime is answered by means of statistical analysis. Trend, correlation, changepoint, and variance analyses show that, in addition to the factors exposure and vulnerability, the hazard itself has also increased significantly in the NRB, in accordance with the decadal climate pattern of West Africa. The northern arid and semi-arid parts of the NRB are those most affected by the changes.
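Trend analyses of the kind mentioned above are commonly carried out with the non-parametric Mann-Kendall test, a standard choice for hydrological series. The sketch below is illustrative rather than the thesis's exact procedure, and the simplified variance formula omits the correction for tied values.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, z).

    Positive z indicates an upward monotonic trend; |z| > 1.96
    is significant at the 5% level. Tie correction is omitted
    in this sketch for brevity.
    """
    n = len(x)
    # S statistic: sum of signs over all pairs (i < j)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A steadily increasing series yields a clearly positive z:
s, z = mann_kendall([3, 4, 4.5, 5, 6, 7, 8, 9])
print(s, z)
```

For real annual-maximum discharge series, a pre-whitening step is often applied first, since serial correlation inflates the test's false-positive rate.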
As potential reasons for the increase in flood magnitudes, climate and land use changes are attributed by means of a hypothesis-testing framework. Two different approaches, based on either data analysis or simulation, lead to similar results, showing that the influence of climatic changes is generally larger compared to that of land use changes. Only in the dry areas of the NRB is the influence of land use changes comparable to that of climatic alterations.
Future changes of the flood regime are evaluated using modelling results. First, ensembles of statistically and dynamically downscaled climate models based on different emission scenarios are analyzed. The models agree on a distinct increase in temperature; the precipitation signal, however, is not coherent. The climate scenarios are used to drive an eco-hydrological model. The influence of climatic changes on the flood regime is uncertain due to the unclear precipitation signal, although, in general, higher flood peaks are expected. In a next step, effects of land use changes are integrated into the model. Different scenarios show that regreening might help to reduce flood peaks, whereas an expansion of agriculture might enhance the flood peaks in the NRB. As with the analysis of observed changes in the flood regime, the impacts of climate and land use changes in the future scenarios are most severe in the dry areas of the NRB.
In order to answer the final research question, the results of the above analyses are integrated into a range of recommendations for science and policy on how to reduce flood risk in the NRB. The main recommendations include a stronger consideration of the enormous natural climate variability in the NRB, a focus on so-called “no-regret” adaptation strategies which account for high uncertainty, and a stronger consideration of regional differences. Regarding the prevention and mitigation of catastrophic flooding, the most vulnerable and sensitive areas in the basin, the arid and semi-arid Sahelian and Sudano-Sahelian regions, should be prioritized. Finally, an active, science-based and science-guided flood policy is recommended. The enormous population growth in the NRB, in connection with the expected deterioration of environmental and climatic conditions, is likely to enhance the region's vulnerability to flooding. A smart and sustainable flood policy can help mitigate these negative impacts of flooding on the development of riverine societies in West Africa.
Application of hybridisation capture to investigate complete mitogenomes from ancient samples
(2015)
Dynamics of mantle plumes
(2016)
Mantle plumes are a link between different scales in the Earth’s mantle: They are an important part of large-scale mantle convection, transporting material and heat from the core-mantle boundary to the surface, but also affect processes on a smaller scale, such as melt generation and transport and surface magmatism. When they reach the base of the lithosphere, they cause massive magmatism associated with the generation of large igneous provinces, and they can be related to mass extinction events (Wignall, 2001) and continental breakup (White and McKenzie, 1989).
Thus, mantle plumes have been the subject of many previous numerical modelling studies (e.g. Farnetani and Richards, 1995; d’Acremont et al., 2003; Lin and van Keken, 2005; Sobolev et al., 2011; Ballmer et al., 2013). However, complex mechanisms, such as the development and implications of chemical heterogeneities in plumes, their interaction with mid-ocean ridges and global mantle flow, and melt ascent from the source region to the surface are still not very well understood; and disagreements between observations and the predictions of classical plume models have led to a challenge of the plume concept in general (Czamanske et al., 1998; Anderson, 2000; Foulger, 2011). Hence, there is a need for more sophisticated models that can explain the underlying physics, assess which properties and processes are important, explain how they cause the observations visible at the Earth’s surface and provide a link between the different scales.
In this work, integrated plume models are developed that investigate the effect of dense recycled oceanic crust on the development of mantle plumes, plume–ridge interaction under the influence of global mantle flow, and melting and melt migration in the form of two-phase flow.
The presented analysis of these models leads to a new, updated picture of mantle plumes: Models considering a realistic depth-dependent density of recycled oceanic crust and peridotitic mantle material show that plumes with excess temperatures of up to 300 K can transport up to 15% of recycled oceanic crust through the whole mantle. However, due to the high density of recycled crust, plumes can only advance directly to the base of the lithosphere if they have high excess temperatures and high plume volumes and the lowermost mantle is subadiabatic, or if they rise from the top or edges of thermo-chemical piles. They might cause only minor surface uplift, and instead of the classical head–tail structure, these low-buoyancy plumes are predicted to be broad features in the lower mantle with much less pronounced plume heads. They can form a variety of shapes and regimes, including primary plumes directly advancing to the base of the lithosphere, stagnating plumes, secondary plumes rising from the core–mantle boundary or from a pool of eclogitic material in the upper mantle, and failing plumes. In the upper mantle, plumes are tilted and deflected by global mantle flow, and the shape, size and stability of the melting region are influenced by the distance from nearby plate boundaries, the speed of the overlying plate and the movement of the plume tail arriving from the lower mantle. Furthermore, the structure of the lithosphere controls where hot material is accumulated and melt is generated. In addition to melting in the plume tail at the plume arrival position, hot plume material flows upwards towards opening rifts, towards mid-ocean ridges and towards other regions of thinner lithosphere, where it produces additional melt due to decompression. This leads to the generation of either broad ridges of thickened magmatic crust or the separation into multiple thinner lines of seamount chains at the surface.
Once melt is generated within the plume, it influences the plume's dynamics, lowering the viscosity and density, and as it rises the melt volume increases by up to 20% due to decompression. Melt tends to accumulate at the top of the plume head, forming diapirs and initiating small-scale convection when the plume reaches the base of the lithosphere. Together with the unstable, high-density material produced by freezing of melt, this provides an efficient mechanism for thinning the lithosphere above plume heads.
In summary, this thesis shows that mantle plumes are more complex than previously considered, and that linking the scales and coupling the physics of the different processes occurring in mantle plumes can provide insights into how mantle plumes are influenced by chemical heterogeneities, interact with the lithosphere and global mantle flow, and are affected by melting and melt migration. Including these complexities in geodynamic models shows that plumes can also have broad plume tails, might produce only negligible surface uplift, can generate one or several volcanic island chains in interaction with a mid-ocean ridge, and can magmatically thin the lithosphere.
This study presents the development of 1D and 2D Surface Evolution Codes (SECs) and their coupling to any lithospheric-scale (thermo-)mechanical code with a quadrilateral structured surface mesh.
Both SECs involve diffusion as the approach for hillslope processes and the stream power law to reflect riverbed incision. The 1D SEC settles sediment produced by fluvial incision in the appropriate minimum, while the supply-limited 2D SEC DANSER, which is based on a cellular automaton, uses a fast filling algorithm to model sedimentation. A slope-dependent factor in the sediment flux extends the diffusion equation to nonlinear diffusion. Discharge accumulation is achieved with the D8 algorithm and an improved drainage accumulation routine. Lateral incision improves the modelling of incision: following empirical laws, it incises channels several cells wide.
The coupling method enables different temporal and spatial resolutions of the SEC and the thermo-mechanical code. It transfers vertical as well as horizontal displacements to the surface model. A weighted smoothing of the 3D surface displacements is implemented. The smoothed displacement vectors transmit the deformation by bilinear interpolation to the surface model. These interpolation methods ensure mass conservation in both directions and prevent the two surfaces from drifting apart.
The presented applications refer to the evolution of the Pamir orogen. A calibration of DANSER's parameters with geomorphological data and a DEM as initial topography highlights the advantage of lateral incision. Preserving the channel width and reflecting incision peaks in narrow channels, this closes the huge gap between current orogen-scale incision models and observed topographies.
River capture models in a system of fault-bounded block rotations reaffirm the importance of the lateral incision routine for capturing events with channel initiation. The models show a low probability of river capture with large deflection angles. While the probability of river capture depends directly on the uplift rate, the erodibility within a dip-slip fault speeds up headward erosion along the fault: the model's capturing speed increases within a fault.
Coupling DANSER with the thermo-mechanical code SLIM 3D emphasizes the versatility of the SEC. While DANSER has minor influence on the lithospheric evolution of an indenter model, the brittle surface deformation is strongly affected by its sedimentation, which widens a basin between two forming orogens as well as the southern part of the southern orogen to the south, east and west.
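The two process laws named above, hillslope diffusion and stream-power incision, can be illustrated with a minimal 1D explicit update. All parameters (kappa, K, m, n) and the toy geometry are assumptions for illustration, not DANSER's calibrated values; the real 2D code additionally handles D8 flow routing, lateral incision and sediment deposition.

```python
import numpy as np

def evolve(z, area, dx=100.0, dt=100.0, kappa=0.01, K=1e-5, m=0.5, n=1.0):
    """One explicit time step of hillslope diffusion plus stream-power incision.

    dz/dt = kappa * d2z/dx2          (linear hillslope diffusion)
    E     = K * A^m * S^n            (stream-power riverbed incision)
    Parameters are illustrative, not calibrated values.
    """
    z = z.copy()
    # linear diffusion via a centred finite difference (fixed endpoints)
    lap = np.zeros_like(z)
    lap[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / dx**2
    z += dt * kappa * lap
    # stream-power incision using the downstream slope of each cell
    slope = np.maximum((z[:-1] - z[1:]) / dx, 0.0)
    z[:-1] -= dt * K * area[:-1] ** m * slope ** n
    return z

z0 = np.linspace(10.0, 0.0, 11)   # ramp draining to the right, metres
area = np.arange(1, 12) * 1e4     # drainage area grows downstream, m^2
z1 = evolve(z0, area)             # cells with larger area incise faster
```

Because the incision term scales with A^m, downstream cells with larger drainage area lower faster than the headwaters, which is the essence of the stream-power law the SECs build on.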
Porous Membranes from Imidazolium- and Pyridinium-based Poly(ionic liquid)s with Targeted Properties
(2016)
This study presents new insights into null subjects, topic drop and the interpretation of topic-dropped elements. Besides providing an empirical data survey, it offers explanations to well-known problems, e.g. syncretisms in the context of null-subject licensing or the marginality of dropping an element which carries oblique case. The book constitutes a valuable source for both empirically and theoretically interested (generative) linguists.
F2 hybrid chlorosis in a cross between the Arabidopsis thaliana accessions Shahdara and Lovvik-5
(2015)
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms' business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today's “move fast and break things” economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or the innovation of entirely new BMs.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
In the past, floods were managed primarily by flood control mechanisms. The focus was set on the reduction of the flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk developments and the (re-)insurance industry. Therefore, it is widely accepted that risk-orientated flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities can be considered and flood risk can be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long term and continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, where the synthetic discharge time series is used as input into flood impact models and flood risk is directly derived from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against the observed data available for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of ‘derived flood risk based on continuous simulations’. For this purpose, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual time series of climate data covering 100 x 100 years was generated and served as input to RFM, providing 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and the expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analyses, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk. The spatial variability of occurrence probability is respected, and data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes like storage effects, flood attenuation or channel-floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and consequently a large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with the transfer of probabilities in rainfall or peak runoff to probabilities in damage, as often arise in traditional approaches, are bypassed.
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, re-insurance purposes and other questions where spatially consistent, large-scale assessments are required.
In this dissertation, an electric field-assisted method was developed and applied to achieve immobilization and alignment of biomolecules on metal electrodes in a simple one-step experiment. Neither modifications of the biomolecule nor of the electrodes were needed. The two major electrokinetic effects that lead to molecule motion in the chosen electrode configurations were identified as dielectrophoresis and AC electroosmotic flow. To minimize AC electroosmotic flow, a new 3D electrode configuration was designed. Thus, the influence of experimental parameters on the dielectrophoretic force and the associated molecule movement could be studied. Permanent immobilization of proteins was examined and quantified absolutely using an atomic force microscope: by measuring the volumes of the immobilized protein deposits, the maximal number of proteins contained therein was calculated. This was possible because the proteins adhered to the tungsten electrodes even after the electric field was switched off. The permanent immobilization of functional proteins on surfaces or electrodes is one crucial prerequisite for the fabrication of biosensors.
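The dielectrophoretic force referred to above is, in the standard textbook treatment for a polarizable spherical particle of radius $r$ in a medium of permittivity $\varepsilon_m$ (a general result, not one specific to this work), time-averaged as

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m r^{3}\, \operatorname{Re}\!\left[K(\omega)\right]\,
    \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                 {\varepsilon_p^{*} + 2\varepsilon_m^{*}},
```

where $K(\omega)$ is the Clausius-Mossotti factor built from the complex permittivities of particle and medium. The gradient term makes explicit why inhomogeneous fields are required and why field strength, frequency and electrode geometry are the experimental parameters of interest.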
Furthermore, the biofunctionality of the proteins must be retained after immobilization. Because immobilization can chemically or physically modify proteins, their biofunctionality is sometimes hampered. Here, however, the activity of dielectrophoretically immobilized proteins was proven for an enzyme for the first time. The enzyme horseradish peroxidase served as the example, and its activity was demonstrated by the oxidation of dihydrorhodamine 123, a non-fluorescent precursor of the fluorescent dye rhodamine 123.
Molecular alignment and immobilization - reversible and permanent - were achieved under the influence of inhomogeneous AC electric fields. For the orientational investigations, a fluorescence microscope setup, a reliable experimental procedure and an evaluation protocol were developed and validated using self-made control samples of acridine orange molecules aligned in a liquid crystal.
Lambda-DNA strands were stretched and aligned temporarily between adjacent interdigitated electrodes, and the orientation of PicoGreen molecules, which intercalate into the DNA strands, was determined. Similarly, the aligned immobilization of enhanced Green Fluorescent Protein was demonstrated exploiting the protein's fluorescence and structural properties. For this protein, the angle of the chromophore with respect to the protein's geometrical axis was determined in good agreement with X-ray crystallographic data. Permanent immobilization with simultaneous alignment of the proteins was achieved along the edges, tips and on the surface of interdigitated electrodes. This was the first demonstration of aligned immobilization of proteins by electric fields.
Thus, the presented electric field-assisted immobilization method is promising with regard to enhanced antibody binding capacities and enzymatic activities, which is a requirement for industrial biosensor production, as well as for general interaction studies of proteins.
Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising alternative for low-cost solar energy conversion. Despite significant improvements in power conversion efficiency in recent years, the fundamental working principles of these devices are not yet fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes, in competition with the loss of charge carriers through recombination.
The objective of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters that determine device performance. A new approach for analyzing the characteristic current-voltage output was developed, comprising the experimental determination of the efficiencies of charge carrier generation, recombination and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contribution of generation, recombination and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to former reports in the literature. It was found that charge generation is in general independent of the electric field, in contrast to the current view in the literature and contrary to the expectations of the Braun-Onsager model commonly used to describe the charge generation process. Even in cases where charge generation was found to be field-dependent, numerical modelling showed that this field dependence is generally not capable of accounting for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of the thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the RC time of the setup, which is mainly determined by the sample geometry, has a significant influence on the shape of the transients and has to be considered for correct data analysis.
Secondly, a complementary method was applied to characterize charge carrier recombination under steady-state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady-state carrier densities established in the active layer. It turned out that existing techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady-state illumination are extracted by applying a high reverse bias. The accelerated extraction, compared to conventional charge extraction, minimizes losses through non-geminate recombination and trapping during extraction. By performing numerical device simulations under steady state, conditions were established under which quantitative information on the dynamics can be retrieved from BACE measurements.
The applied experimental techniques made it possible to sensitively analyse and quantify geminate and non-geminate recombination losses, along with charge transport, in organic solar cells. A full analysis was demonstrated exemplarily for two prominent polymer:fullerene blends.
The model system P3HT:PCBM spincast from chloroform (as prepared) exhibits poor power conversion efficiencies (PCE) on the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and the poor interconnection between the polymer and fullerene domains lead to a hole mobility on the order of 10^-7 cm^2/Vs, several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders the extraction of both electrons and holes and promotes bimolecular recombination losses.
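As a point of reference for BMR losses of this kind, the bimolecular recombination coefficient is commonly benchmarked against the Langevin expression γ = q(μn + μp)/(ε0 εr). This is the standard estimate from the organic semiconductor literature, not the coefficient fitted in the thesis; the mobility and permittivity values below are illustrative assumptions.

```python
# Langevin estimate of the bimolecular recombination (BMR) coefficient:
# gamma = q * (mu_n + mu_p) / (eps0 * eps_r).
Q = 1.602e-19      # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.5        # typical relative permittivity of an organic blend (assumed)

def langevin_gamma(mu_n_cm2: float, mu_p_cm2: float, eps_r: float = EPS_R) -> float:
    """Langevin BMR coefficient in cm^3/s from mobilities given in cm^2/Vs."""
    mu_n = mu_n_cm2 * 1e-4            # convert to m^2/Vs
    mu_p = mu_p_cm2 * 1e-4
    gamma_m3 = Q * (mu_n + mu_p) / (EPS0 * eps_r)   # m^3/s
    return gamma_m3 * 1e6                           # cm^3/s

# As-prepared P3HT:PCBM: hole mobility ~1e-7 cm^2/Vs; the electron mobility
# is several orders of magnitude higher (value assumed here).
gamma = langevin_gamma(mu_n_cm2=1e-3, mu_p_cm2=1e-7)
```

Because the Langevin rate is set by the sum of the mobilities, the strongly imbalanced transport in the as-prepared blend ties the recombination strength to the faster carrier while extraction is throttled by the slower one, which is exactly the space-charge scenario described above.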
Thermal annealing of P3HT:PCBM blends directly after spin coating improves crystallinity and interconnection of the polymer and the fullerene phase and results in comparatively high electron and hole mobilities in the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes leads to a reduction of the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in comparatively high FF (>65%) and short circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by a rather low spectral overlap of absorption and solar emission and a small V_OC, given by the energetics of the P3HT.
From this point of view the combination of the low bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to the J_SC in the optimized blend with P3HT and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility of the order of 10^-5 cm^2/Vs and geminate recombination losses due to field dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well interconnected donor and acceptor domains of the order of a few nanometers. Our analysis shows that this results in an increase of the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. In total these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012 this polymer/fullerene combination resulted in a record PCE for a single junction OSC of 9.2%.
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not just one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if they lead to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination, as well as electron and hole mobilities. Measuring or investigating only a subset of these parameters, as frequently found in the literature, leads to an incomplete picture and possibly to misleading conclusions.
Importantly, comparing numerical device simulations employing the measured parameters with the experimental J-V characteristics makes it possible to identify loss channels and limitations of organic solar cells. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. Efficient and fast charge transport becomes ever more important with the development of new low-bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active layer thicknesses of current high-performance devices are usually limited to around 100 nm, although larger thicknesses would be favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore ideally show a high tendency to form crystalline structures, as observed in P3HT, combined with the optimized energetics and quantum efficiency of, for example, PTB7.
This thesis contains three experimental studies addressing the interplay between deformation and the mineral reaction between natural calcite and magnesite. The solid-solid mineral reaction between the two carbonates causes the formation of a magnesio-calcite precursor layer and a dolomite reaction rim in every experiment, under both isostatic annealing and deformation conditions.
CHAPTER 1 briefly introduces general aspects concerning mineral reactions in nature and diffusion pathways for mass transport. Moreover, results of previous laboratory studies on the influence of deformation on mineral reactions are summarized. In addition, the main goals of this study are pointed out.
In CHAPTER 2, the reaction between calcite and magnesite single crystals is examined under isostatic annealing conditions. Time series performed at a fixed temperature revealed diffusion-controlled dolomite rim growth. Two microstructural domains could be identified, characterized by palisade-shaped dolomite grains growing into the magnesite and granular dolomite growing towards the calcite. A model was provided for the dolomite rim growth based on the counter-diffusion of CaO and MgO. All reaction products exhibited a characteristic crystallographic relationship with respect to the calcite reactant. Moreover, kinetic parameters of the mineral reaction were determined from a temperature series at fixed time. The main goal of the isostatic test series was to gain information about the microstructure evolution, kinetic parameters, chemical composition and texture development of the reaction products. The results were used as a reference to quantify the influence of deformation on the mineral reaction.
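Diffusion-controlled rim growth of the kind observed in these time series is conventionally described by a parabolic rate law with an Arrhenius-type rate constant (the standard form from reaction-rim kinetics; the thesis's fitted expressions may differ in detail):

```latex
\Delta x^{2} = k\,t,
\qquad
k = k_{0}\,\exp\!\left(-\frac{E_{a}}{R\,T}\right),
```

where $\Delta x$ is the rim width after time $t$, $E_{a}$ the activation energy and $R$ the gas constant. The time series at fixed temperature probes the parabolic dependence $\Delta x \propto \sqrt{t}$, while the temperature series at fixed time yields $E_{a}$ and $k_{0}$ from an Arrhenius plot of $\ln k$ against $1/T$.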
CHAPTER 3 deals with the influence of non-isostatic deformation on dolomite and magnesio-calcite layer production between calcite and magnesite single crystals. Deformation was achieved by triaxial compression and by torsion. Triaxial compression up to 38 MPa axial stress at a fixed time showed no significant influence of stress and strain on dolomite formation. Time series conducted at a fixed stress yielded no change in growth rates for dolomite and magnesio-calcite at low strains. Slightly larger magnesio-calcite growth rates were observed at strains above 0.1. High strains at similar stresses were caused by the activation of additional glide systems in the calcite single crystal and more mobile dislocations in the magnesio-calcite grains, providing fast diffusion pathways. In torsion experiments, a gradual decrease in dolomite and magnesio-calcite layer thickness was observed above a critical shear strain. During deformation, the crystallographic orientations of the reaction products rearranged with respect to the external framework. A direct effect of the mineral reaction on deformation could not be recognized due to the relatively small widths of the reaction products.
In CHAPTER 4, the influence of the starting material microfabrics and of the presence of water on the reaction kinetics was evaluated. In these experimental series, polycrystalline material was in contact with single crystals, or two polycrystalline materials were used as reactants. Isostatic annealing resulted in different dolomite and magnesio-calcite layer thicknesses, depending on the starting material microfabrics. The reaction progress at the magnesite interface was faster for smaller magnesite grain sizes, because grain boundaries provided fast pathways for diffusion and multiple nucleation sites for dolomite formation. Deformation by triaxial compression and torsion yielded lower dolomite rim thicknesses than annealing over the same time, caused by grain coarsening of the polycrystalline magnesite during deformation. In contrast, magnesio-calcite layers tended to be thicker during deformation, which enhanced diffusion along grain boundaries. The presence of excess water had no significant influence on the reaction kinetics, at least when the reactants were single crystals.
In CHAPTER 5 general conclusions about the interplay between deformation and the mineral reaction in the carbonate system are presented.
Finally, CHAPTER 6 highlights possible future work in the carbonate system based on the results of this study.
Magnetite nanoparticles and their assembly comprise a new area of development for new technologies. The magnetic particles can interact and assemble into chains or networks. Magnetotactic bacteria are among the most interesting microorganisms in which such nanoparticle assembly occurs. These microorganisms are a heterogeneous group of gram-negative prokaryotes that all produce special magnetic organelles called magnetosomes, each consisting of a magnetic nanoparticle, either magnetite (Fe3O4) or greigite (Fe3S4), embedded in a membrane. The chain is assembled along an actin-like scaffold made of the protein MamK, which arranges the magnetosomes into mechanically stable chains. These chains act as a compass needle, allowing the cells to orient themselves and swim along the Earth's magnetic field.
The formation of magnetosomes is known to be controlled at the molecular level, but the physico-chemical conditions of the surrounding environment also influence biomineralization. The work presented in this manuscript aims to understand how such external conditions, in particular the extracellular oxidation-reduction potential (ORP), influence magnetite formation in the strain Magnetospirillum magneticum AMB-1. A controlled cultivation of the microorganism was developed in a bioreactor, and the formation of magnetosomes was characterized.
Different techniques were applied to characterize the amount of iron taken up by the bacteria and, in consequence, the size of the magnetosomes produced under different ORP conditions. Comparing iron uptake, bacterial morphology, and the size and number of magnetosomes per cell at different ORP values showed that magnetosome formation was inhibited at an ORP of 0 mV, whereas reducing conditions (ORP -500 mV) facilitated the biomineralization process.
The self-assembly of magnetosomes in magnetotactic bacteria became an inspiration to learn from nature and to construct nanoparticle assemblies using the bacteriophage M13 as a template. The M13 bacteriophage is an 800 nm long filament with encapsulated single-stranded DNA that has recently been used as a scaffold for nanoparticle assembly. I constructed two types of assemblies based on bacteriophages and magnetic nanoparticles. First, a chain-like assembly was formed in which magnetite nanoparticles attach along the phage filament. A sperm-like construct was also built, with a magnetic head and a tail formed by the phage filament.
The controlled assembly of magnetite nanoparticles on the phage template was possible via two different mechanisms. The first was based on electrostatic interactions between positively charged polyethylenimine-coated magnetite nanoparticles and negatively charged phages. The second phage-nanoparticle assembly was achieved through bioengineered recognition sites: an mCherry protein displayed on the phage was used as a linker to a red-binding nanobody (RBP) fused to one of the proteins surrounding the magnetite crystal of a magnetosome.
Both assemblies were actuated in water by an external magnetic field, demonstrating their swimming behavior and potentially enabling the use of such structures for medical applications. The speeds of the phage-nanoparticle assemblies are relatively low compared to those of previously published microswimmers. However, only the largest phage-magnetite assemblies could be imaged, so it remains unclear how fast smaller versions of these structures can be.
Software-as-a-Service (SaaS) offers several advantages to both service providers and users. Service providers benefit from a reduced Total Cost of Ownership (TCO), better scalability, and better resource utilization. Users, in turn, can use the service anywhere and at any time, and minimize upfront investment by following the pay-as-you-go model. Despite the benefits of SaaS, users still have concerns about the security and privacy of their data. Due to the nature of SaaS and the Cloud in general, the data and the computation are beyond the users' control, and hence data security becomes a vital factor in this new paradigm. Furthermore, in multi-tenant SaaS applications, tenants become even more concerned about the confidentiality of their data, since several tenants are co-located on a shared infrastructure.
To address those concerns, we start protecting the data at the provisioning stage by controlling how tenants are placed in the infrastructure. We present SecPlace, a resource allocation algorithm designed to minimize the risk posed by co-resident tenants. It enables the SaaS provider to control the resource (i.e., database instance) allocation process while taking the security of tenants into account as a requirement.
Due to the design principles of the multi-tenancy model, tenants share resources to some degree on both the application and infrastructure levels, so strong security isolation must be in place. Therefore, we develop SignedQuery, a technique that prevents one tenant from accessing another tenant's data. We use the signing concept to create a signature over the tenant's request; the server then verifies the signature, recognizes the requesting tenant, and thereby ensures that the data to be accessed belongs to the legitimate tenant.
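Request signing of this kind can be sketched with a keyed MAC. This is an illustrative sketch only: the function names and message layout are my own assumptions, and the thesis's actual SignedQuery scheme may differ.

```python
import hashlib
import hmac

def sign_request(tenant_id: str, query: str, tenant_key: bytes) -> str:
    """Sign a tenant's query with the tenant-specific secret key (HMAC-SHA256)."""
    message = f"{tenant_id}|{query}".encode()
    return hmac.new(tenant_key, message, hashlib.sha256).hexdigest()

def verify_request(tenant_id: str, query: str, tenant_key: bytes, signature: str) -> bool:
    """Server-side check: recompute the signature and compare in constant time."""
    expected = sign_request(tenant_id, query, tenant_key)
    return hmac.compare_digest(expected, signature)

key = b"tenant-42-secret"
sig = sign_request("tenant-42", "SELECT * FROM employees", key)
ok = verify_request("tenant-42", "SELECT * FROM employees", key, sig)
# A request replayed under another tenant's identity fails verification:
forged = verify_request("tenant-7", "SELECT * FROM employees", key, sig)
```

Binding the tenant identifier into the signed message is what lets the server reject a query whose signature was produced for a different tenant, even if the query text is identical.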
Finally, data confidentiality remains a critical concern, because data in the Cloud resides outside the users' premises and hence beyond their control. Cryptography is increasingly proposed as a potential approach to address this challenge. Therefore, we present SecureDB, a system designed to run SQL-based applications over an encrypted database. SecureDB captures the schema design and analyzes it to understand the internal structure of the data (i.e., the relationships between the tables and their attributes). Moreover, we determine the appropriate partially homomorphic encryption scheme for each attribute, so that computation is possible even while the data is encrypted.
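The per-attribute choice of a partially homomorphic scheme can be illustrated with the additively homomorphic Paillier cryptosystem, a common candidate for numeric attributes that must support encrypted summation. This is a toy sketch with insecure key sizes; whether SecureDB uses Paillier for such attributes is an assumption, and its actual scheme selection is not reproduced here.

```python
import math
import random

p, q = 293, 433                       # toy primes -- never use in practice
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
g = n + 1                             # standard generator choice
mu = pow(lam, -1, n)                  # with g = n + 1, L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt via L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so the database can sum an encrypted column without decrypting it.
c1, c2 = encrypt(17), encrypt(25)
total = decrypt((c1 * c2) % n2)       # decrypts to 17 + 25
```

The design trade-off this illustrates is that each partially homomorphic scheme supports only one operation class (here: addition), which is why SecureDB must match the scheme to the operations the schema analysis predicts for each attribute.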
To evaluate our work, we conduct extensive experiments with different settings. The main use case in our work is a popular open-source HRM application called OrangeHRM. The results show that our multi-layered approach is practical, provides enhanced security and isolation among tenants, and has moderate complexity in terms of processing encrypted data.
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of an appropriate model depends on the outcome and the research objectives: linear models for continuous outcomes, logistic models for binary outcomes, and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample, called the subcohort, from the entire cohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and for all cases (all members of the cohort who develop the event of interest during follow-up).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied, extending the approach of the binary covariate model. By proving asymptotic normality of the estimators, standard errors can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
Clearly, logistic regression is adequate when the binary outcome is available for all subjects over a fixed time interval. In practice, however, observations in clinical trials are frequently collected over different time periods, and subjects may drop out or be lost to follow-up for other reasons. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection, or when the event of interest has not occurred for an individual by the end of the study. Such observations are called censored. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed a model focused on the hazard function, assumed to be
λ(t|x) = λ0(t) exp(β^Tx)
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates, and β is a p-dimensional vector of coefficients.
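In this framework, β is estimated by maximizing Cox's partial likelihood (the standard form, stated here for orientation):

```latex
L(\beta) = \prod_{i=1}^{n}
  \left[
    \frac{\exp\!\left(\beta^{T} x_{i}\right)}
         {\sum_{j \in R(t_{i})} \exp\!\left(\beta^{T} x_{j}\right)}
  \right]^{\delta_{i}},
```

where $R(t_{i})$ is the risk set at time $t_{i}$ (all subjects still under observation and event-free just before $t_{i}$) and $\delta_{i}$ indicates whether an event was observed for subject $i$. The baseline hazard $\lambda_{0}(t)$ cancels out of each factor, which is what allows estimation of β without specifying it.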
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results of Andersen and Gill (1982) for the Cox model; in this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated; the efficiencies were then determined for neighborhoods of the exponential models. It appears that, for fixed parameters β0, the efficiencies do not change very much across different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector tends to a Chi-squared distribution under the null hypothesis. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test, defined only for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative, giving quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986.
Buyer-seller negotiations have a significant impact on a company’s profitability, which makes practitioners aim at maximizing their performance. One lever for increasing bargaining performance is to pursue a clearly defined aspiration, i.e. one’s most desired outcome. In this context, the author explores the role of such aspirations in the three negotiation phases: preparation, bargaining, and striking a deal. She investigates determinants of aspirations, unintended consequences such as unethical bargaining behavior, and the consequences of overly ambitious aspirations. As a result, she not only closes existing gaps in negotiation research, but also derives valuable implications for practitioners.
The author examines the cultural identity development of Oromo-Americans in Minnesota, an ethnic group originally located within the national borders of Ethiopia. Earlier studies on language and cultural identity have shown that the degree of ethnic orientation of minorities commonly decreases from generation to generation. Yet oppression and a visible minority status were identified as factors delaying the process of de-ethnicization. Given that Oromos fled persecution in Ethiopia and are confronted with the ramifications of a visible minority status in the U.S., it can be expected that they have retained strong ties to their ethnic culture. This study, however, came to a more complex and theory-building result.
Exhaustivity
(2016)
The dissertation proposes an answer to the question of how to model exhaustive inferences and what the meaning of the linguistic material that triggers these inferences is. In particular, it deals with the semantics of exclusive particles, clefts, and progressive aspect in Ga, an under-researched language spoken in Ghana. Based on new data coming from the author’s original fieldwork in Accra, the thesis points to a previously unattested variation in the semantics of exclusives in a cross-linguistic perspective, analyzes the connections between exhaustive interpretation triggered by clefts and the aspectual interpretation of the sentence, and identifies a cross-categorial definite determiner. By that it sheds new light on several exhaustivity-related phenomena in both the nominal and the verbal domain and shows that both domains are closely connected.
The continuously increasing demand for rare earth elements in the technical components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method for mapping important deposits globally is remote sensing, which has been used for a wide range of mineral mapping tasks in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realization of a fundamental database of the spectral characteristics of rare earth oxides, rare earth metals and rare earth element-bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth related signals using a Gaussian high-pass deconvolution technique, calibrated on the standard deviation of a Gaussian bell-shaped curve derived from the full width at half maximum of the target absorption band; 3) mathematical modeling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and to a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach in particular, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
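The FWHM-to-standard-deviation conversion that calibrates the Gaussian kernel in step 2 follows from a standard Gaussian identity. A minimal sketch, with the example band width being a hypothetical value of my own:

```python
import math

def sigma_from_fwhm(fwhm: float) -> float:
    """Standard deviation of a Gaussian with the given full width at half maximum.

    For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma ~= 2.3548 * sigma.
    """
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# Hypothetical target absorption band with a FWHM of 25 nm:
sigma = sigma_from_fwhm(25.0)
```

Tying the kernel width to the FWHM of the target absorption band means the deconvolution sharpens features at exactly the spectral scale of the rare earth signal rather than at an arbitrary one.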
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the developed REEMAP approach, which can be applied from the near-field to space. The REEMAP approach enables rare earth element mapping even for noisy images. Limiting factors are a low signal-to-noise ratio, a reduced spectral resolution, overlying materials, atmospheric absorption residuals and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium and samarium. This thesis thus presents a new methodology, REEMAP, that enables spatially wide and rapid hyperspectral detection of rare earth elements in order to meet the demand for fast, extensive and efficient rare earth exploration (from the near-field to space).
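The Gaussian high-pass sharpening in step 2 of REEMAP rests on the standard relation between the full width at half maximum of a Gaussian and its standard deviation, σ = FWHM / (2√(2 ln 2)). A minimal sketch of this idea, assuming a simple unsharp-masking form of the high-pass filter (the function names and filter form are illustrative, not the thesis’s actual implementation):

```python
import numpy as np

def fwhm_to_sigma(fwhm):
    # For a Gaussian, FWHM = 2*sqrt(2*ln(2)) * sigma  (~2.3548 * sigma)
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def gaussian_high_pass(spectrum, fwhm_samples):
    """Sharpen narrow absorption features by subtracting a
    Gaussian-smoothed (low-pass) copy of the spectrum.
    fwhm_samples: FWHM of the target absorption band, in samples."""
    sigma = fwhm_to_sigma(fwhm_samples)
    radius = int(np.ceil(3.0 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                      # normalize to unit area
    low_pass = np.convolve(spectrum, kernel, mode="same")
    return spectrum - low_pass                  # high-pass residual
```

On a featureless (constant) spectrum the residual is zero away from the array edges; narrow absorption bands survive the subtraction and are thereby enhanced.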
Development of geophysical methods to characterize methane hydrate reservoirs on a laboratory scale
(2015)
Gas hydrates are crystalline solids composed of water and gas molecules. They are stable at elevated pressures and low temperatures. Natural gas hydrate deposits therefore occur at continental margins, in permafrost areas, deep lakes, and deep inland seas. During hydrate formation, the water molecules rearrange to form cavities which host gas molecules. Due to the high pressure during hydrate formation, significant amounts of gas can be stored in hydrate structures: one volume of water can bind up to 172 volumes of gas (at 0 °C and atmospheric pressure). Natural gas hydrates predominantly contain methane. Because methane constitutes both a fuel and a greenhouse gas, gas hydrates are a potential energy resource as well as a potential source of greenhouse gas emissions.
This study investigates the physical properties of methane hydrate-bearing sediments on a laboratory scale. To this end, an electrical resistivity tomography (ERT) array was developed and mounted in a large reservoir simulator (LARS). For the first time, the ERT array was applied to hydrate-saturated sediment samples under controlled temperature, pressure, and hydrate-saturation conditions on a laboratory scale. Typically, the pore space of (marine) sediments is filled with electrically conductive brine. Because hydrates constitute an electrical insulator, significant contrasts in the electrical properties of the pore space emerge during hydrate formation and dissociation. Frequent measurements during hydrate formation experiments permit recording the spatial resistivity distribution inside LARS. These data sets are used as input for a new data processing routine which transfers the spatial resistivity distribution into the spatial distribution of hydrate saturation. Thus, changes in local hydrate saturation can be monitored in space and time.
This study shows that the developed tomography yielded good data quality and resolved even small amounts of hydrate saturation inside the sediment sample. The conversion algorithm transforming the spatial resistivity distribution into local hydrate saturation values yielded the best results using the Archie-var-phi relation. This approach considers the growing hydrate phase as part of the sediment frame, effectively reducing the sample’s porosity. In addition, the tomographic measurements showed that fast lab-based hydrate formation processes cause small crystallites to form which tend to recrystallize.
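The Archie-var-phi idea described above can be written down directly: if the hydrate is counted as part of the sediment frame, the brine-filled effective porosity becomes φ(1 − S_h), and Archie’s law for a fully brine-saturated pore space, ρ_t = a·ρ_w·[φ(1 − S_h)]^(−m), can be inverted for the hydrate saturation. A hedged sketch of this inversion (the tortuosity factor a and cementation exponent m are illustrative defaults, not the values calibrated for LARS):

```python
def hydrate_saturation_var_phi(rho_t, rho_w, phi, a=1.0, m=1.9):
    """Invert the Archie-var-phi relation for hydrate saturation S_h.
    rho_t: measured bulk resistivity (ohm m)
    rho_w: brine resistivity (ohm m)
    phi:   initial (hydrate-free) porosity
    Hydrate is treated as part of the sediment frame, so the remaining
    brine-filled porosity is phi*(1 - S_h) and the pore fluid is brine:
        rho_t = a * rho_w * (phi*(1 - S_h))**(-m)
    """
    s_h = 1.0 - (a * rho_w / rho_t) ** (1.0 / m) / phi
    return min(max(s_h, 0.0), 1.0)   # clip to the physical range [0, 1]
```

Applied voxel by voxel to the ERT inversion result, this maps the resistivity distribution into a hydrate-saturation distribution.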
Furthermore, hydrate dissociation experiments via depressurization were conducted in order to mimic the 2007/2008 Mallik field trial. Some patterns in gas and water flow could be reproduced, even though some setup-related limitations arose.
In two additional long-term experiments, the feasibility and performance of CO2–CH4 hydrate exchange reactions were studied in LARS. The tomographic system was used to monitor the spatial hydrate distribution during the hydrate formation stage. During the subsequent CO2 injection, the tomographic array made it possible to follow the CO2 migration front inside the sediment sample and helped to identify the CO2 breakthrough.
While children acquire new words and simple sentence structures extremely fast and without much effort, the ability to process complex sentences develops rather late in life. Although the conjoint occurrence of brain-structural and brain-functional changes, the decrease of plasticity, and changes in cognitive abilities suggests a certain causality among these processes, concrete evidence for the relation between brain development, language processing, and language performance is rare. Therefore, the current dissertation investigates the tripartite relationship between behavior (in the form of language performance and cognitive maturation as prerequisites for language processing), brain structure (in the form of gray matter maturation), and brain function (in the form of brain activation evoked by complex sentence processing). Previous developmental studies indicate a missing increase of activation in accordance with sentence complexity (functional selectivity) in language-relevant brain areas in children. To determine the factors contributing to the functional development of language-relevant brain areas, different methodologies and data acquisition techniques were used to investigate the processing of center-embedded sentences in 5- and 6-year-old children, 7- and 8-year-old children, and adults. Behavioral results indicate that children between 5 and 8 years have difficulties processing doubly embedded sentences and that their performance for this type of sentence is positively correlated with digit span. In 7- and 8-year-old children, especially the processing of long-distance relations between the initial phrase and its corresponding verb appears to be associated with the subject’s verbal working memory capacity. In contrast, children’s performance for doubly embedded sentences in the younger age group positively correlated with their performance in a standardized sentence comprehension test.
This finding supports the hypothesis that processing difficulties in this age group may be mainly attributed to difficulties in processing case-marking information. These findings are discussed with respect to current accounts of language and working memory development. A second study aimed at investigating the structural maturation of brain areas involved in sentence comprehension. To this end, whole-brain magnetic resonance images of 59 children between 5 and 8 years were collected and the children’s gray matter was analyzed using voxel-based morphometry. Children’s grammatical proficiency was assessed by a standardized sentence comprehension test. A confirmatory factor analysis corroborated a grammar-relevant and a verbal working memory-relevant factor underlying the measured performance. While children’s ability to assign thematic roles is positively correlated with gray matter probability (GMP) in the left inferior temporal gyrus and the left inferior frontal gyrus, verbal working memory-related performance is positively correlated with GMP in the left parietal operculum extending into the posterior superior temporal gyrus. These areas have previously been shown to be differentially engaged in adults’ complex sentence processing. Thus, the findings of the second study suggest a specific correspondence between children’s GMP in language-relevant brain regions and the differential cognitive abilities which underlie complex sentence comprehension. In a third study, functional brain activity during the processing of center-embedded sentences was investigated in three different age groups (5–6 years, 7–8 years, and adults). Although all age groups engage a qualitatively comparable network comprising the left pars opercularis (PO), the left inferior parietal lobe extending into the posterior superior temporal gyrus (IPL/pSTG), the supplementary motor area (SMA) and the cerebellum, functional selectivity of these regions was only observable in adults.
However, functional activation of the language-related regions (PO and IPL/pSTG) predicted sentence comprehension performance in all age groups. To address the question of the complex interplay between different maturational factors, a fourth study analyzed the predictive power of gray matter probability, verbal working memory capacity, and behavioral differences in performance for simple and complex sentences on the functional selectivity of each activated region. These analyses revealed that the establishment of adult-like functional selectivity for complex sentences is predicted by a reduction of the left PO’s gray matter probability across age groups, while that of the IPL/pSTG is additionally predicted by verbal working memory capacity. Taking all findings together, the current thesis provides evidence that both structural brain maturation and verbal working memory expansion provide the basis for the emergence of functional selectivity in language-related brain regions, leading to more efficient sentence processing during development.
The lives of more than one sixth of the world’s population are directly affected by the caprices of the South Asian summer monsoon rainfall. India receives around 78 % of its annual precipitation during the June–September months, the summer monsoon season of South Asia. But the monsoon circulation is not consistent throughout the entire summer season. Episodes of heavy rainfall (active periods) and low rainfall (break periods) are inherent to the intraseasonal variability of the South Asian summer monsoon. Extended breaks or long-lasting dryness can result in droughts and hence trigger crop failures and, in turn, famines. Furthermore, India's electricity generation from renewable sources (wind and hydropower), which is increasingly important in order to satisfy the rapidly rising demand for energy, is highly reliant on the prevailing meteorology. The major drought years 2002 and 2009 for the Indian summer monsoon during the last decades, which resulted from the occurrence of multiple extended breaks, illustrate that understanding the monsoon system and its intraseasonal variation is of greatest importance. Although numerous studies based on observations, reanalysis data and global model simulations have been carried out with a focus on monsoon active and break phases over India, the understanding of the monsoon's intraseasonal variability is still in its infancy. Regional climate models, with their resolution advantage, could further the comprehension of monsoon breaks.
This study investigates moist dynamical processes that initiate and maintain breaks during the South Asian summer monsoon using the atmospheric regional climate model HIRHAM5 at a horizontal resolution of 25 km, forced by the ECMWF ERA-Interim reanalysis for the period 1979–2012. By calculating moisture and moist static energy budgets, the various competing mechanisms leading to extended breaks are quantitatively estimated. Advection of dry air from the deserts of western Asia towards central India is the dominant moist dynamical process initiating extended break conditions over South Asia. Once initiated, the extended breaks are maintained by several competing mechanisms: (i) the anomalous easterlies at the southern flank of the break-related anticyclonic anomaly weaken the low-level cross-equatorial jet and thus the moisture transport into the monsoon region, (ii) differential radiative heating over the continental and the oceanic tropical convergence zone induces a local Hadley circulation with anomalous rising over the equatorial Indian Ocean and descent over central India, and (iii) a cyclonic response to positive rainfall anomalies over the near-equatorial Indian Ocean amplifies the anomalous easterlies over India and hence contributes to the low-level divergence over central India.
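The moist static energy budgets mentioned above are built on the standard quantity h = c_p·T + g·z + L_v·q, which combines sensible heat, potential energy and latent heat per unit mass. A minimal sketch with textbook constants (not the exact values used in HIRHAM5):

```python
def moist_static_energy(T, z, q):
    """Moist static energy per unit mass (J/kg).
    T: temperature (K), z: geopotential height (m),
    q: specific humidity (kg/kg)."""
    c_p = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
    g = 9.81       # gravitational acceleration, m/s^2
    L_v = 2.5e6    # latent heat of vaporization, J/kg
    return c_p * T + g * z + L_v * q
```

Budgeting the tendency of h (or of moisture) over a region such as central India allows the contributions of advection, radiative heating and surface fluxes to be compared term by term, which is how the competing break mechanisms are quantified.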
A sensitivity experiment that mimics a scenario of higher atmospheric aerosol concentrations over South Asia addresses a current issue of large uncertainty: the role aerosols play in suppressing monsoon rainfall and hence in triggering breaks. To study the indirect aerosol effects, the cloud droplet number concentration was increased to imitate the function of aerosols as cloud condensation nuclei. The sensitivity experiment with altered microphysical cloud properties shows a reduction in the summer monsoon precipitation together with a weakening of the South Asian summer monsoon. Several physical mechanisms are proposed to be responsible for the suppressed monsoon rainfall: (i) in line with the first indirect radiative forcing, the increase in the number of cloud droplets causes an increase in the cloud reflectivity of solar radiation, leading to a climate cooling over India which in turn weakens the hydrological cycle, (ii) a stabilisation of the troposphere induced by differential cooling between the surface and the upper troposphere over central India inhibits the growth of deep convective rain clouds, (iii) an increase in the amount of low- and mid-level clouds together with a decrease in high-level cloud amount amplifies the surface cooling and hence the atmospheric stability, and (iv) dynamical changes of the monsoon, manifested as an anomalous anticyclonic circulation over India, reduce the moisture transport into the monsoon region. The study suggests that the changes in the total precipitation, which are dominated by changes in the convective precipitation, mainly result from the indirect radiative forcing. Suppression of rainfall due to the direct microphysical effect is found to be negligible over India. Break statistics of the polluted cloud scenario indicate an increase in the occurrence of short breaks (3 days), while the frequency of extended breaks (> 7 days) is clearly not affected.
This disproves the hypothesis that more and smaller cloud droplets, caused by a high load of atmospheric aerosols, trigger long drought conditions over central India.
The present study addresses the question of how German vowels are perceived and produced by Polish learners of German as a Foreign Language. It comprises three main experiments: a discrimination experiment, a production experiment, and an identification experiment. With the exception of the discrimination task, the experiments further investigated the influence of orthographic marking on the perception and production of German vowel length. It was assumed that explicit markings such as the Dehnungs-h ("lengthening h") could help Polish GFL learners in perceiving and producing German words more correctly.
The discrimination experiment with manipulated nonce words showed that Polish GFL learners detect pure length differences in German vowels less accurately than German native speakers, while this was not the case for pure quality differences. The results of the identification experiment contrast with those of the discrimination task in that Polish GFL learners were better at judging incorrect vowel length than incorrect vowel quality in manipulated real words. However, orthographic marking did not turn out to be the driving factor, and it is suggested that metalinguistic awareness can explain the asymmetry between the two perception experiments. The production experiment supported the results of the identification task in that the lengthening h did not help Polish learners produce German vowel length more correctly. Yet, as far as vowel quality productions are concerned, it is argued that orthography does influence L2 sound productions, because Polish learners seem to be negatively influenced by their native grapheme-to-phoneme correspondences.
It is concluded that it is important to differentiate between the influence of the L1 and L2 orthographic system. On the one hand, the investigation of the influence of orthographic vowel length markers in German suggests that Polish GFL learners do not make use of length information provided by the L2 orthographic system. On the other hand, the vowel quality data suggest that the L1 orthographic system plays a crucial role in the acquisition of a foreign language. It is therefore proposed that orthography influences the acquisition of foreign sounds, but not in the way it was originally assumed.
The standing stock and production of organismal biomass depend strongly on the organisms’ biotic environment, which arises from trophic and non-trophic interactions among them. The trophic interactions between the different groups of organisms form the food web of an ecosystem, with the autotrophic and bacterial production at the base and potentially several levels of consumers on top of the producers. Feeding interactions can regulate communities either by severe grazing pressure or by shortage of resources or prey production, termed top-down and bottom-up control, respectively. The limitations of all communities combine in the regulation of the food web, which is subject to abiotic and biotic forcing regimes arising from external and internal constraints. This dissertation presents the effects of alterations in two abiotic, external forcing regimes: terrestrial matter input and long-lasting low temperatures in winter. Diverse methodological approaches, namely a complex ecosystem model study and the analysis of two whole-lake measurements, were employed to investigate the effects on food web regulation and the resulting consequences at the species, community and ecosystem scale. Thus, all types of organisms, autotrophs and heterotrophs, at all trophic levels were investigated to gain a comprehensive overview of the effects of the two altered forcing regimes. In addition, an extensive evaluation of the trophic interactions and resulting carbon fluxes along the pelagic and benthic food webs was performed to display the efficiencies of trophic energy transfer within the food webs. All studies were conducted in shallow lakes, which are the most abundant lake type worldwide. The specific morphology of shallow lakes allows the benthic production to contribute substantially to the whole-lake production.
Further, as shallow lakes are often small, they are especially sensitive to both changes in the input of terrestrial organic matter and the atmospheric temperature. Another characteristic of shallow lakes is their occurrence in alternative stable states. They are either in a clear-water or a turbid state, dominated by macrophytes and phytoplankton, respectively. Both states can stabilize themselves through various mechanisms.
These two alternative states and their stabilizing mechanisms are integrated in the complex ecosystem model PCLake, which was used to investigate the effects of enhanced terrestrial particulate organic matter (t-POM) input to lakes. The food web regulation was altered via three distinct pathways: (1) Zoobenthos received more food and increased in biomass, which favored benthivorous fish, which in turn reduced the available light through bioturbation. (2) Zooplankton substituted autochthonous organic matter in their diet by suspended t-POM; thus the autochthonous organic matter remaining in the water reduced its transparency. (3) t-POM suspended in the water directly reduced the available light. As macrophytes are more light-sensitive than phytoplankton, they suffered the most from the lower transparency. Consequently, the resilience of the clear-water state was reduced by enhanced t-POM inputs, which makes the turbid state more likely at a given nutrient concentration. In two subsequent winters, long-lasting low temperatures and a concurrently long duration of ice coverage were observed, which resulted in low overall adult fish biomasses in the two study lakes, Schulzensee and Gollinsee, characterized by having and not having submerged macrophytes, respectively. Before the partial winterkill of fish, Schulzensee supported a higher proportion of piscivorous fish than Gollinsee. However, the partial winterkill of fish aligned both communities, as piscivorous fish are more sensitive to low oxygen concentrations. Young-of-the-year fish benefited greatly from the absence of adult fish due to the lower predation pressure. Therefore, they could exert a strong top-down control on crustaceans, which restructured the entire zooplankton community, leading to low crustacean biomasses and a community composition characterized by copepodites and nauplii.
As a result, ciliates were released from top-down control, increased to high biomasses compared to lakes of various trophic states and depths, and dominated the zooplankton community. Being very abundant in the study lakes and having the highest weight-specific grazing rates among the zooplankton, ciliates potentially exerted a strong top-down control on small phytoplankton and particle-attached bacteria. This resulted in a higher proportion of large phytoplankton compared to other lakes. Additionally, the phytoplankton community was evenly distributed, presumably due to the numerous fast-growing and highly specific ciliate grazers. Although the pelagic food web was completely restructured after the subsequent partial winterkills of fish, both lakes were resistant to effects of this forcing regime at the ecosystem scale. The consistently high predation pressure on phytoplankton prevented Schulzensee from switching from the clear-water to the turbid state. Further mechanisms which potentially stabilized the clear-water state were allelopathic effects of macrophytes and nutrient limitation in summer. The pelagic autotrophic and bacterial production was transferred to animal consumers an order of magnitude more efficiently than the respective benthic production, despite the alterations of the food web structure after the partial winterkill of fish. Thus, the compiled mass-balanced whole-lake food webs suggested that the benthic bacterial and autotrophic production, which exceeded that of the pelagic habitat, was not used by animal consumers. This holds true even when the food quality, additional consumers such as ciliates, benthic protozoa and meiobenthos, the pelagic-benthic link and the potential oxygen limitation of macrobenthos were considered. Therefore, the low benthic efficiencies suggest that lakes are primarily pelagic systems, at least at the animal consumer level.
Overall, this dissertation gives insights into the regulation of organism groups in the pelagic and benthic habitats at each trophic level under two different forcing regimes and displays the efficiency of the carbon transfer in both habitats. The results underline that alterations of external forcing regimes affect all hierarchical levels, including the ecosystem.
Earthquake clustering has proven the most useful tool to forecast changes in seismicity rates in the short and medium term (hours to months), and efforts are currently being made to extend the scope of such models to operational earthquake forecasting. The overarching goal of the research presented in this thesis is to improve physics-based earthquake forecasts, with a focus on aftershock sequences. Physical models of triggered seismicity are based on the redistribution of stresses in the crust, coupled with the rate-and-state constitutive law proposed by Dieterich to calculate changes in seismicity rate. This type of model is known as a Coulomb rate-and-state (CRS) model. In spite of the success of the Coulomb hypothesis, CRS models have typically performed poorly in comparison to statistical ones, and they have been underrepresented in the operational forecasting context. In this thesis, I address some of these issues, and in particular these questions: (1) How can we realistically model the uncertainties and heterogeneity of the mainshock stress field? (2) What is the effect of time-dependent stresses in the postseismic phase on seismicity? I focus on two case studies from different tectonic settings: the Mw 9.0 Tohoku megathrust and the Mw 6.0 Parkfield strike-slip earthquake. I study aleatoric uncertainties using a Monte Carlo method. I find that the existence of multiple receiver faults is the most important source of intrinsic stress heterogeneity, and CRS models perform better when this variability is taken into account. Epistemic uncertainties inherited from the slip models also have a significant impact on the forecast, and I find that an ensemble model based on several slip distributions outperforms most individual models. I address the role of postseismic stresses due to aseismic slip on the mainshock fault (afterslip) and to the redistribution of stresses by previous aftershocks (secondary triggering).
I find that modeling secondary triggering improves model performance. The effect of afterslip is less clear, and difficult to assess for near-fault aftershocks due to the large uncertainties of the afterslip models. Off-fault events, on the other hand, are less sensitive to the details of the slip distribution: I find that following the Tohoku earthquake, afterslip promotes seismicity in the Fukushima region. To evaluate the performance of the improved CRS models in a pseudo-operational context, I submitted them for independent testing to a collaborative experiment carried out by CSEP for the 2010–2012 Canterbury sequence. Preliminary results indicate that physical models generally perform well compared to statistical ones, suggesting that CRS models may have a role to play in the future of operational forecasting. To facilitate efforts in this direction, and to enable future studies of earthquake triggering by time-dependent processes, I have made the code open source. In the final part of this thesis, I summarize the capabilities of the program and outline technical aspects regarding performance and parallelization strategies.
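The Dieterich rate-and-state law at the heart of CRS models has a well-known closed form for the simplest case of a sudden Coulomb stress step Δτ on a population of faults: R(t) = r / [(e^(−Δτ/Aσ) − 1) e^(−t/t_a) + 1], where r is the background rate, Aσ the constitutive parameter, and t_a = Aσ/τ̇ the aftershock decay time. A sketch under these standard assumptions (parameter values illustrative, not those calibrated in the thesis):

```python
import math

def seismicity_rate(t, dcfs, r_bg, a_sigma, t_a):
    """Dieterich (1994) seismicity rate after a Coulomb stress step.
    t:       time since the stress step (s)
    dcfs:    Coulomb stress change Delta-tau (Pa)
    r_bg:    background seismicity rate (events per unit time)
    a_sigma: constitutive parameter A*sigma (Pa)
    t_a:     aftershock decay time A*sigma / stressing rate (s)
    """
    gamma = (math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0
    return r_bg / gamma
```

At t = 0 a positive stress step raises the rate by the factor e^(Δτ/Aσ); as t grows beyond t_a the rate relaxes back to the background value r, reproducing Omori-like aftershock decay in between.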
The size and morphology control of precipitated solid particles is a major economic issue for numerous industries. For instance, it is of interest to the nuclear industry in connection with the recovery of radioactive species from used nuclear fuel.
The features of the precipitates, which are a key parameter for the post-precipitation processing, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first one corresponds to drops with a high contact angle (>90°) in oil, as a model system for flowing drops; the second one corresponds to sessile drops in air with a low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When both drops get into contact, they may coalesce; the dissolved species mix and react to produce insoluble cerium oxalate. The features of the precipitates and their effect on hydrodynamics are investigated depending on the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient which induces a Marangoni flow from the low-surface-tension drop over the high-surface-tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence could be modified. Diol/water mixtures are used as solvents in order to fix the surface tension difference between the liquids of both drops independently of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m at constant reactant concentration, density and viscosity. Three precipitation behaviors were identified for the coalescence of water/diol/reactant drops depending on the oxalic acid excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements.
In the intermediate oxalic acid excess regime, formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and “microflowers”. Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
The aim of this work is the evaluation of the geothermal potential of Luxembourg. The approach consists of a joint interpretation of the different types of information necessary for a first, rather qualitative assessment of deep geothermal reservoirs in Luxembourg and the adjoining regions in the surrounding countries of Belgium, France and Germany. For the identification of geothermal reservoirs by exploration, geological, thermal, hydrogeological and structural data are necessary. Until recently, however, reliable information about the thermal field and the regional geology, and thus about potential geothermal reservoirs, was lacking. Before a proper evaluation of the geothermal potential can be performed, a comprehensive survey of the geology and an assessment of the thermal field are required.
As a first step, the geology and basin structure of the Mesozoic Trier–Luxembourg Basin (TLB) are reviewed and updated using recently published information on the geology and structures as well as borehole data available in Luxembourg and the adjoining regions. A Bouguer map is used to gain insight into the depth, morphology and structures of the Variscan basement buried beneath the Trier–Luxembourg Basin. The geological section of the old Cessange borehole is reinterpreted and provides, in combination with the available borehole data, consistent information for the production of isopach maps. The latter visualize the synsedimentary evolution of the Trier–Luxembourg Basin. Complementary basin-wide cross sections illustrate the evolution and structure of the Trier–Luxembourg Basin. The knowledge gained does not support the old concept of the Weilerbach Mulde. The basin-wide cross sections, as well as the structural and sedimentological observations in the Trier–Luxembourg Basin, suggest that the basin probably formed above a zone of weakness related to a buried Rotliegend graben. The inferred graben structure, designated the SE-Luxembourg Graben (SELG), is located in direct southwestern continuation of the Wittlicher Rotliegend-Senke.
The lack of deep boreholes and of subsurface temperature prognoses at depth is circumvented by using thermal modelling to infer the geothermal resource at depth. For this approach, profound structural, geological and petrophysical input data are required. Conceptual geological cross sections encompassing the entire crust are constructed and further simplified and extended to the lithospheric scale for their utilization as thermal models. The 2-D steady-state, conductive models are parameterized by means of measured petrophysical properties including thermal conductivity, radiogenic heat production and density. A surface heat flow of 75 ± 7 (2σ) mW m–2 for verification of the thermal models could be determined in the area. The models are further constrained by the geophysically estimated depth of the lithosphere–asthenosphere boundary (LAB), defined by the 1300 °C isotherm. A LAB depth of 100 km, as seismically derived for the Ardennes, provides the best fit with the measured surface heat flow. The resulting mantle heat flow amounts to ∼40 mW m–2. Modelled temperatures are in the range of 120–125 °C at 5 km depth and of 600–650 °C at the crust/mantle discontinuity (Moho). Possible thermal consequences of the 10–20 Ma old Eifel plume, which apparently caused upwelling of the asthenospheric mantle to 50–60 km depth, were modelled in a steady-state thermal scenario, resulting in a surface heat flow of at least 91 mW m–2 (for a plume top at 60 km) in the Eifel region. Available surface heat-flow values are significantly lower (65–80 mW m–2) and indicate that the plume-related heating has not yet entirely reached the surface.
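In the simplest 1-D case with uniform radiogenic heat production A, the steady-state conductive temperature profile underlying such models is T(z) = T0 + (q0/k)·z − A·z²/(2k), with surface heat flow q0 and thermal conductivity k. A sketch with illustrative parameter values (not the calibrated 2-D lithospheric models of the thesis):

```python
def geotherm_T(z, T0=10.0, q0=0.075, k=3.0, A=1.5e-6):
    """Steady-state 1-D conductive geotherm.
    z:  depth (m)
    T0: surface temperature (deg C)
    q0: surface heat flow (W/m^2), e.g. 0.075 = 75 mW/m^2
    k:  thermal conductivity (W/(m K))
    A:  uniform radiogenic heat production (W/m^3)
    """
    return T0 + (q0 / k) * z - A * z**2 / (2.0 * k)
```

With these illustrative values, T(5 km) comes out near 130 °C, of the same order as the 120–125 °C quoted above; the quadratic term shows how radiogenic heat production bends the geotherm below a purely linear gradient.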
Once conceptual geological models are established and the thermal regime is assessed, the geothermal potential of Luxembourg and the surrounding areas is evaluated by additionally considering the hydrogeology, the stress field and tectonically active regions. On the one hand, low-enthalpy hydrothermal reservoirs in Mesozoic strata of the Trier–Luxembourg Embayment (TLE) are considered; on the other hand, petrothermal reservoirs in the Lower Devonian basement of the Ardennes and Eifel regions are considered for exploitation by Enhanced/Engineered Geothermal Systems (EGS). Among the Mesozoic aquifers, the Buntsandstein aquifer, characterized by temperatures of up to 50 °C, is a suitable hydrothermal reservoir that may be exploited by means of heat pumps or provide direct heat for various applications. The most promising area is the zone of the SE-Luxembourg Graben. The aquifer is warmest underneath the upper Alzette River valley and the limestone plateau in Lorraine, where the Buntsandstein aquifer lies below a thick Mesozoic cover. At the base of the inferred Rotliegend graben in the same area, temperatures of up to 75 °C are expected; the geological and hydraulic conditions there, however, are uncertain. In the Lower Devonian basement, thick sandstone- and quartzite-rich formations with temperatures >90 °C are expected at depths >3.5 km and likely offer the possibility of direct heat use. The setting of the Südeifel (South Eifel) region, including the Müllerthal region near Echternach, as a tectonically active zone may offer the possibility of deep hydrothermal reservoirs in the fractured Lower Devonian basement. Based on the recent findings on the structure of the Trier–Luxembourg Basin, the new concept presents the Müllerthal–Südeifel Depression (MSD) as a Cenozoic structure that remains tectonically active and subsiding and is therefore relevant for geothermal exploration.
Beyond the direct use of geothermal heat, the expected modest temperatures at 5 km depth (about 120 °C), combined with permeability enhancement by EGS in the quartzite-rich Lochkovian, could prospectively enable combined geothermal heat production and power generation in Luxembourg and the western realm of the Eifel region.
The adjustment of empirically derived ground motion prediction equations (GMPEs) from a data-rich region/site, for which they have been derived, to a data-poor region/site is one of the major challenges associated with the current practice of seismic hazard analysis. Owing to their frequent use in engineering design practice, GMPEs are often derived for response spectral ordinates (e.g., spectral acceleration) of a single-degree-of-freedom (SDOF) oscillator. The functional forms of such GMPEs are based on concepts borrowed from the Fourier spectral representation of ground motion. This assumption regarding the validity of Fourier spectral concepts in the response spectral domain can lead to consequences that cannot be explained physically.
In this thesis, firstly, results are presented from an investigation that explores the relationship between Fourier and response spectra and its implications for the adjustment of GMPEs. The relationship between Fourier and response spectra is explored using random vibration theory (RVT), a framework that has been used extensively in earthquake engineering, for instance within the stochastic simulation framework and in site response analysis. For a 5%-damped SDOF oscillator, the RVT perspective on the response spectrum reveals that no one-to-one correspondence exists between Fourier and response spectral ordinates except in a limited range of oscillator frequencies (i.e., below the peak of the response spectrum). Response spectral ordinates at high oscillator frequencies are dominated by contributions from Fourier spectral ordinates at frequencies well below the selected oscillator frequency. The peak ground acceleration (PGA) is found to be related to the integral over the entire Fourier spectrum of the ground motion, in contrast to the widely held perception that PGA is a high-frequency phenomenon of ground motion.
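The dominance of low-frequency Fourier content in high-oscillator-frequency response ordinates can be illustrated numerically. The sketch below uses an illustrative Brune-type omega-squared spectrum with kappa decay and a 5%-damped SDOF transfer function; all parameter values are assumptions for demonstration, not the thesis's models:

```python
import numpy as np

# Fraction of the SDOF zeroth spectral moment m0 = int |H(f) A(f)|^2 df
# contributed by Fourier frequencies at or below the oscillator frequency f0.
f = np.linspace(0.01, 100.0, 20000)
fc, kappa, zeta = 1.0, 0.04, 0.05   # assumed corner frequency, kappa, damping
A = (f**2 / (1 + (f / fc)**2)) * np.exp(-np.pi * kappa * f)  # illustrative FAS

def m0_fraction(f0):
    """Share of the response variance coming from f <= f0."""
    H2 = f0**4 / ((f0**2 - f**2)**2 + (2 * zeta * f0 * f)**2)
    y = H2 * A**2
    total = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))      # trapezoid rule
    m = f <= f0
    ym, fm = y[m], f[m]
    below = np.sum(0.5 * (ym[1:] + ym[:-1]) * np.diff(fm))
    return below / total

for f0 in (2.0, 10.0, 50.0):
    print(f0, round(m0_fraction(f0), 3))
```

For oscillator frequencies far above the spectral peak, nearly the entire moment comes from Fourier ordinates below f0, which is the "no one-to-one correspondence" effect described above.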
This thesis presents a new perspective for developing a response spectral GMPE that takes the relationship between Fourier and response spectra into account. Essentially, this framework involves a two-step method for deriving a response spectral GMPE: in the first step, two empirical models, one for the Fourier amplitude spectrum (FAS) and one for a predetermined estimate of the duration of ground motion, are derived; in the next step, predictions from the two models are combined within the same RVT framework to obtain the response spectral ordinates. In addition, a stochastic-model-based scheme for extrapolating individual acceleration spectra beyond their usable frequency limits is presented. To that end, recorded acceleration traces were inverted to obtain the stochastic model parameters that allow consistent extrapolation of individual (acceleration) Fourier spectra. Moreover, an empirical model for a duration measure that is consistent within the RVT framework is derived. As a next step, an oscillator-frequency-dependent empirical duration model is derived that allows obtaining the most reliable estimates of response spectral ordinates. The framework for deriving the response spectral GMPE presented herein becomes a self-adjusting model with the inclusion of the stress parameter (∆σ) and kappa (κ0) as predictor variables in the two empirical models. The entire analysis for developing the response spectral GMPE is performed on the recently compiled RESORCE-2012 database, which contains recordings from Europe, the Mediterranean region and the Middle East. The presented GMPE for response spectral ordinates should be considered valid in the magnitude range 4 ≤ MW ≤ 7.6 at distances ≤ 200 km.
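The second step of the two-step method, combining an FAS prediction with a duration prediction through RVT, can be sketched as follows. The peak factor here is Davenport's classical approximation, and the FAS shape and all numbers are illustrative stand-ins for the thesis's empirical models:

```python
import numpy as np

def rvt_sa(f, fas, f0, duration, zeta=0.05):
    """RVT estimate of a response spectral ordinate.
    f, fas   : one-sided frequency grid (Hz) and FAS of ground acceleration
    f0, zeta : oscillator frequency (Hz) and damping ratio
    duration : RMS duration of ground motion (s)."""
    # 5%-damped SDOF transfer function (squared magnitude)
    H2 = f0**4 / ((f0**2 - f**2)**2 + (2 * zeta * f0 * f)**2)
    y = H2 * fas**2
    m0 = 2.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))  # Parseval, two-sided
    rms = np.sqrt(m0 / duration)                            # RMS oscillator response
    n = max(2.0 * f0 * duration, 2.0)                       # expected number of peaks
    pf = np.sqrt(2 * np.log(n)) + 0.5772 / np.sqrt(2 * np.log(n))  # Davenport
    return pf * rms                                         # expected peak response

f = np.linspace(0.01, 100.0, 20000)
fas = (f**2 / (1 + f**2)) * np.exp(-np.pi * 0.04 * f)  # illustrative FAS shape
sa = rvt_sa(f, fas, f0=1.0, duration=10.0)
```

Because the peak factor depends only on f0 and duration, the predicted ordinate scales linearly with the FAS amplitude, which is what makes separate FAS and duration models composable in this way.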
By perturbing the differential of a (cochain) complex by "small" operators, one obtains what is referred to as a quasicomplex, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small". This leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces, compact operators are "small". In order to develop this theory, elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equations, homological algebra and topology, have to be combined. All essential basics are summarised in the first chapter of this thesis. These include classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover, we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals. In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces is developed. From the very beginning we consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which allows us to develop the Lefschetz theory on nonseparable Hilbert spaces. Finally, in the third chapter the abstract theory is applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We show that the Atiyah–Singer index formula holds true for these objects and, as an example, we compute the Euler characteristic of the connection quasicomplex. In addition, we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott.
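The perturbation described above can be stated compactly. The following display is a schematic sketch with illustrative notation (not taken from the thesis):

```latex
% A quasicomplex: cochain spaces H^i with maps d^i whose composition
% need not vanish, but is required to be "small" -- in the Hilbert-space
% setting, compact:
\[
  H^{0} \xrightarrow{\;d^{0}\;} H^{1} \xrightarrow{\;d^{1}\;} \cdots
        \xrightarrow{\;d^{N-1}\;} H^{N},
  \qquad
  d^{i+1} \circ d^{i} \in \mathcal{K}\bigl(H^{i}, H^{i+2}\bigr).
\]
% For a genuine complex the compositions vanish and cohomology is defined;
% when the curvature is merely compact, cohomology is lost but index-type
% invariants (Euler characteristic, Lefschetz number) can still be defined
% in the Fredholm case.
```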
The promotion of self-employment as part of active labor market policy (ALMP) is considered one of the most important unemployment support schemes in Germany. Against this background, the main part of this thesis contributes to the evaluation of start-up support schemes within ALMP. Chapters 2 and 4 focus on the evaluation of the New Start-up Subsidy (NSUS, Gründungszuschuss) in its first version (from 2006 to the end of 2011). The chapters advance the evaluation of start-up subsidies in Germany and are based on a novel data set of administrative data from the Federal Employment Agency that was enriched with information from a telephone survey. Chapter 2 provides a thorough descriptive analysis of the NSUS that consists of two parts. First, the participant structure of the program is compared with that of two predecessor programs. In a second step, the study conducts an in-depth characterization of the participants of the NSUS, focusing on founding motives, the level of start-up capital and equity used, as well as the sectoral distribution of the new businesses. Furthermore, business survival, the income situation of the founders and job creation by the new businesses are analyzed over a period of 19 months after start-up. The contribution of Chapter 4 is to introduce a new explorative data set that allows comparing subsidized start-ups out of unemployment with non-subsidized start-ups founded by individuals who were not unemployed at the time of start-up. Because previous evaluation studies commonly used eligible non-participants among the unemployed as the control group to assess the labor market effects of start-up subsidies, the corresponding results referred to the effectiveness of the ALMP measure but could not address the question whether the subsidy leads to businesses that are similarly successful and innovative as non-subsidized businesses.
An assessment of this economic/growth aspect is also important, since the subsidy might induce negative effects that could outweigh the positive effects from an ALMP perspective. The main results of Chapter 4 indicate that subsidized founders show no shortages in terms of formal education, but exhibit less employment and industry-specific experience and are less likely to benefit from the intergenerational transmission of start-ups. Moreover, the study finds evidence that necessity start-ups are over-represented among subsidized business founders, which suggests disadvantages in terms of business preparation due to possible time restrictions right before start-up. Finally, the study also detects more capital constraints among the unemployed, both in terms of the availability of personal equity and of access to loans. With respect to potential differences between the two groups in business development over time, the results indicate that subsidized start-ups out of unemployment exhibit higher business survival rates 19 months after start-up. However, they lag behind regular business founders in terms of income, business growth and innovation. The arduous process of collecting data on the start-up activities of non-subsidized founders for Chapter 4 made apparent that Germany lacks a central reporting system for business formations. Additionally, the different start-up reporting systems that do exist show substantial discrepancies in their data processing procedures, and therefore also in the absolute numbers they report for overall start-up activity. Chapter 3 therefore precedes Chapter 4 and aims to provide a comprehensive review of the most important German start-up reporting systems.
The second part of the thesis consists of Chapter 5, which contributes to the literature on the determinants of the job search behavior of unemployed individuals by analyzing the effectiveness of internet search with regard to their search behavior and subsequent job quality. The third and final part of the thesis outlines why the German labor market reacted in a very mild fashion to the Great Recession of 2008/09, especially compared to other countries. Chapter 6 describes current economic trends of the labor market in light of general trends in the European Union and reveals some of the main associated challenges. Thereafter, recent reforms of the main institutional settings of the labor market that influence labor supply are analyzed. Finally, based on the status quo of these institutional settings, the chapter gives a brief overview of strategies to adequately address the challenges in terms of labor supply and to ensure economic growth in the future.
Optical frequency combs (OFC) constitute an array of phase-correlated, equidistant spectral lines with nearly equal intensities over a broad spectral range. Adaptations of combs generated in mode-locked lasers have proved highly efficient for the calibration of high-resolution (resolving power > 50000) astronomical spectrographs. Observations of different galaxy structures or studies of the Milky Way, however, are carried out with instruments in the low- and medium-resolution range. Such instruments include, for instance, the Multi Unit Spectroscopic Explorer (MUSE) developed for the Very Large Telescope (VLT) of the European Southern Observatory (ESO) and the 4-metre Multi-Object Spectroscopic Telescope (4MOST) under development for the ESO VISTA 4.1 m Telescope. The existing adaptations of OFC from mode-locked lasers cannot be resolved by these instruments.
Within this work, a fibre-based approach for the generation of OFC specifically for the low- and medium-resolution range is studied numerically. The approach uses three optical fibres that are fed by two equally intense continuous-wave (CW) lasers. The first fibre is a conventional single-mode fibre, the second is a suitably pumped amplifying Erbium-doped fibre with anomalous dispersion, and the third is a low-dispersion highly nonlinear optical fibre. The evolution of a frequency comb in this system is governed by the following processes: as the two initial CW-laser waves with different frequencies propagate through the first fibre, they generate an initial comb via a cascade of four-wave mixing processes. The frequency components of the comb are phase-correlated with the original laser lines and have a frequency spacing equal to the initial laser frequency separation (LFS), i.e. the difference between the two laser frequencies. In the time domain, a train of pre-compressed pulses with widths of a few picoseconds arises out of the initial bichromatic, deeply modulated cosine wave. These pulses undergo strong compression in the subsequent amplifying Erbium-doped fibre, where sub-100 fs pulses with broad OFC spectra are formed. In the following low-dispersion highly nonlinear fibre, the OFC experiences further broadening and the intensities of the comb lines are fairly equalised. This approach was modelled mathematically by means of a Generalised Nonlinear Schrödinger Equation (GNLS) that contains terms describing the nonlinear optical Kerr effect, the delayed Raman response, pulse self-steepening and linear optical losses, as well as the wavelength-dependent Erbium gain profile for the second fibre. The initial condition, a deeply modulated cosine wave, mimics the radiation of the two initial CW lasers.
The numerical studies are performed with the help of Matlab scripts that were specifically developed for the integration of the GNLS and the initial condition according to the proposed approach for the OFC generation. The scripts are based on the Fourth-Order Runge-Kutta in the Interaction Picture Method (RK4IP) in combination with the local error method.
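The split-step idea underlying such integrators can be illustrated with a minimal symmetric split-step Fourier scheme. This is deliberately not the RK4IP method of the thesis: it propagates a plain normalized NLS (no Raman term, self-steepening, loss or gain), seeded with two CW lines, and shows the four-wave-mixing cascade creating new comb lines spaced by the laser frequency separation. All parameter values are illustrative:

```python
import numpy as np

# Symmetric split-step Fourier integration of a plain NLS (normalized units):
#   dA/dz = -i*(beta2/2) d^2A/dt^2 + i*gamma*|A|^2 A
n, df = 1024, 0.5                  # grid points, line spacing (normalized)
Twin = 2.0 / df                    # time window so both lines sit on the grid
t = np.arange(n) / n * Twin - Twin / 2
f = np.fft.fftfreq(n, d=Twin / n)  # grid frequencies are multiples of df/2

beta2, gamma = -1.0, 1.0           # anomalous dispersion, Kerr coefficient
A = 2.0 * np.cos(np.pi * df * t)   # two unit-power CW lines at +-df/2

dz, steps = 1e-3, 500              # propagate to z = 0.5
half_lin = np.exp(1j * (beta2 / 2) * (2 * np.pi * f)**2 * dz / 2)
for _ in range(steps):
    A = np.fft.ifft(np.fft.fft(A) * half_lin)        # half linear step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)   # full nonlinear step
    A = np.fft.ifft(np.fft.fft(A) * half_lin)        # half linear step

spec = np.abs(np.fft.fft(A))**2
# Four-wave-mixing products appear at +-3*df/2 (grid indices 3 and n-3)
print(spec[3] / spec.sum())
```

Both sub-steps are unitary, so the scheme conserves total power to machine precision; the cascade of new lines at multiples of the LFS is the comb-formation mechanism described for the first fibre.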
This work includes studies and results on the length optimisation of the first and second fibres for different values of the group-velocity dispersion of the first fibre. Such length optimisation is necessary because the OFC spectra attain their maximum bandwidth and exhibit a low noise level precisely at the optimum lengths. Further, the optical pulse build-up in the first and second fibres was studied by means of the numerical technique called Soliton Radiation Beat Analysis (SRBA). It was shown that a common soliton crystal state is formed in the first fibre at low laser input powers. The soliton crystal continuously dissolves into separate optical solitons as the input power increases. The pulse formation in the second fibre depends critically on the features of the pulses formed in the first fibre. I showed that, at low input powers, an adiabatic soliton compression delivering low-noise OFC occurs in the second fibre. At high input powers, the pulses in the first fibre have more complicated structures, which leads to pulse break-up in the second fibre with a subsequent degradation of the OFC noise performance. The pulse intensity noise studies performed within the framework of this thesis allow statements to be made about the noise performance of an OFC. They showed that the intensity noise of the whole system decreases with increasing LFS.