This work describes the realization of physically crosslinked networks based on gelatin through the introduction of functional groups enabling specific supramolecular interactions. Molecular models were developed in order to predict the material properties and to establish a knowledge-based approach to material design. The effect of additional supramolecular interactions with hydroxyapatite was then studied in composite materials. The calculated properties are compared to experimental results to validate the models, which are then used further for the study of physically crosslinked networks. Gelatin was functionalized with desaminotyrosine (DAT) and desaminotyrosyl-tyrosine (DATT) side groups, derived from the natural amino acid tyrosine. These groups can potentially undergo π-π and hydrogen-bonding interactions, also under physiological conditions. Molecular dynamics (MD) simulations were performed on models with 0.8 wt.-% or 25 wt.-% water content, using the second-generation force field CFF91. The models were validated by comparison with specific experimental data such as density, peptide conformational angles and X-ray scattering spectra. The models were then used to predict the supramolecular organization of the polymer chains, analyze the formation of physical netpoints and calculate the mechanical properties. An important finding of the simulations was that the number of observed physical netpoints increased with the number of aromatic groups. The number of relatively stable physical netpoints, on average zero for natural gelatin, increased to 1 and 6 for DAT- and DATT-functionalized gelatins, respectively. A comparison with the Flory-Rehner model suggested a sixfold reduction of the equilibrium swelling of the DATT-functionalized materials in water. The functionalized gelatins could be synthesized by chemoselective coupling of the free carboxylic acid groups of DAT and DATT to the free amino groups of gelatin.
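The Flory-Rehner comparison mentioned above can be illustrated with a small numerical sketch. The interaction parameter and crosslink densities below are hypothetical placeholders, not values from the thesis; the point is only how a higher effective netpoint density translates into lower equilibrium swelling.

```python
import math

def flory_rehner_residual(phi, chi, n_c, V1=18.0):
    """Flory-Rehner equilibrium condition for an affine network.
    phi: polymer volume fraction at swelling equilibrium,
    chi: polymer-solvent interaction parameter (hypothetical here),
    n_c: effective crosslink density in mol/cm^3 (hypothetical),
    V1: molar volume of the solvent (~18 cm^3/mol for water)."""
    return (math.log(1.0 - phi) + phi + chi * phi**2
            + n_c * V1 * (phi**(1.0 / 3.0) - phi / 2.0))

def equilibrium_swelling(chi, n_c, V1=18.0):
    """Solve the Flory-Rehner condition for phi by bisection and
    return the volumetric degree of swelling Q = 1/phi."""
    lo, hi = 1e-6, 1.0 - 1e-6  # residual is positive at lo, negative at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flory_rehner_residual(lo, chi, n_c, V1) * \
           flory_rehner_residual(mid, chi, n_c, V1) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 1.0 / (0.5 * (lo + hi))
```

A loosely crosslinked network swells far more than a densely crosslinked one, mirroring the reduced swelling predicted for the aromatic-functionalized gelatins.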
At 25 wt.-% water content, the simulated and experimentally determined elastic mechanical properties (e.g. Young's modulus) were both on the order of GPa and were not influenced by the degree of aromatic modification. The experimental equilibrium degree of swelling in water decreased with an increasing number of inserted aromatic functions (from 2800 vol.-% for pure gelatin to 300 vol.-% for the DATT-modified gelatin); at the same time, Young's modulus, elongation at break, and maximum tensile strength increased. It could be shown that functionalization with DAT and DATT, together with controlled drying conditions, influences the chain organization of gelatin-based materials. Functionalization with DAT and DATT led to a drastic reduction of helical renaturation, which could be controlled more finely by the applied drying conditions. The properties of the materials could thus be influenced by two independent methods. Composite materials of DAT- and DATT-functionalized gelatins with hydroxyapatite (HAp) showed a drastic reduction of the degree of swelling. In tensile tests and rheological measurements, the composites equilibrated in water had increased Young's moduli (from 200 kPa up to 2 MPa) and tensile strength (from 57 kPa up to 1.1 MPa) compared to the natural polymer matrix, without affecting the elongation at break. Furthermore, an increase in the thermal stability of the networks from 40 °C to 85 °C could be demonstrated. The differences in behaviour between the functionalized gelatins and pure gelatin as matrix suggested an additional stabilizing bond between the incorporated aromatic groups and the hydroxyapatite.
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the good condition of the satellite so far, the planned 5-year mission has been extended to 2009. The satellite continuously provides a large quantity of measurement data for studying the Earth. The measurements of the magnetic field are undertaken by two fluxgate magnetometers (vector magnetometers, FGM) and one Overhauser magnetometer (scalar magnetometer, OVM) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference, and its readings are compared with those of the vector magnetometer. The readings of the vector magnetometer are corrected by the parameters derived from this comparison, a procedure called scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic effects originating from the satellite itself which disturb the measurements. Most of them were characterized during tests before launch; among them are the remanent magnetization of the spacecraft and fields generated by currents. They are all considered to be constant over the mission lifetime. The 8 years of operation allow us to investigate the long-term behavior of the magnetometers and the satellite systems. It was found, for example, that the scale factors of the FGM show clear long-term changes which can be described by logarithmic functions, whereas the other parameters (offsets and angles between the three components) can be considered constant. If these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and the FGM readings is limited to ±1 nT over the whole mission.
This demonstrates that the magnetometers on CHAMP exhibit very good stability. However, a daily correction of the Z-component offset of the FGM improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality: it exhibits a linear relationship with the standard deviation of the disagreement between the OVM and the FGM readings. After the Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections for the spacecraft fields that are not taken into account in the routine processing. Such disturbance fields, e.g. from the power supply system of the satellite, cause systematic errors in the FGM data and are misinterpreted in the 9-parameter calibration, which introduces a spurious local-time-related variation of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents; this non-linear model is derived by an inversion technique. If the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains about 0.1 nT. Additionally, in order to keep the OVM readings a reliable standard, the imperfect coefficients of the torquer current correction for the OVM were redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field was also investigated. It was found that the average magnetic moment of the magneto-torquers reflects the moment of the satellite well, which allows for a continuous correction of the spacecraft field. The possible sources of remaining unexplained systematic errors are discussed in this thesis; in particular, both temperature uncertainties and timing errors influence the FGM data. Based on the results of this thesis, the data processing of future magnetic missions can be designed in an improved way.
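The idea behind scalar calibration, comparing the magnitude of the corrected vector readings against the scalar reference, can be sketched with synthetic data. The model below is deliberately simplified (per-axis scale factors and offsets only, no misalignment angles), and all numerical values are illustrative, not CHAMP parameters.

```python
import math
import random

def correct_fgm(raw, scales, offsets):
    """Apply per-axis scale factors and offsets to raw vector
    magnetometer readings (simplified: no misalignment angles)."""
    return [(r - o) / s for r, o, s in zip(raw, offsets, scales)]

def scalar_residual(raw_readings, ovm_readings, scales, offsets):
    """RMS difference between |corrected FGM| and the OVM scalar
    readings: the quantity that scalar calibration minimizes."""
    sq = 0.0
    for raw, b_ovm in zip(raw_readings, ovm_readings):
        v = correct_fgm(raw, scales, offsets)
        sq += (math.sqrt(sum(c * c for c in v)) - b_ovm) ** 2
    return math.sqrt(sq / len(raw_readings))

# synthetic truth: hypothetical scale factors and offsets (nT)
true_s, true_o = [1.002, 0.998, 1.001], [3.0, -2.0, 5.0]
rng = random.Random(0)
fields = [[rng.uniform(-3e4, 3e4) for _ in range(3)] for _ in range(500)]
raw = [[b * s + o for b, s, o in zip(f, true_s, true_o)] for f in fields]
ovm = [math.sqrt(sum(c * c for c in f)) for f in fields]

# with the correct parameters the scalar residual vanishes;
# with an uncalibrated guess it is large
calibrated = scalar_residual(raw, ovm, true_s, true_o)
uncalibrated = scalar_residual(raw, ovm, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

In practice the nine (or more) parameters are estimated by nonlinear least squares on exactly this residual; the sketch only shows why the residual is sensitive to them.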
In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
The availability of large data sets has allowed researchers to uncover complex properties of real-world systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, power grids, and ecosystems, can be represented as large complex networks. The dynamics on and of complex networks has attracted growing research interest. In this thesis, first, I introduced a simple but effective dynamical optimization coupling scheme which can realize complete synchronization in networks with undelayed and delayed couplings and enhance the synchronizability of small-world and scale-free networks. Second, I showed that the robustness of scale-free networks with community structure is enhanced due to the existence of communities, and that some of the response patterns coincide with topological communities. My results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, the detailed dynamics of human correspondence, an important example of node dynamics in complex networks, was studied using both data and a model. A new and general type of human correspondence pattern was found, and an interacting priority-queue model was introduced to explain it. The model can also embrace a range of realistic social interacting systems such as email and letter communication. My findings provide insight into various human activities both at the individual and the network level. Fourth, I present clear new evidence that human comment behavior in online social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction was introduced to explain it. These results are helpful for discovering regular patterns of human behavior in online society and for understanding the evolution of public opinion in virtual as well as real society.
Finally, conclusions and an outlook on human dynamics and complex networks are given.
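The interacting priority-queue model mentioned above builds on the single-queue model of task execution, in which heavy-tailed waiting times emerge when the highest-priority task is almost always served first. A minimal single-queue sketch (parameters hypothetical, not from the thesis):

```python
import random

def simulate_queue(steps, L=2, p=0.99999, seed=1):
    """Priority-queue model of human task execution: a queue holds L
    tasks with random priorities; at each step the highest-priority
    task is executed with probability p, otherwise a random one, and
    the executed task is replaced by a fresh task with a new random
    priority. Returns the waiting times (steps spent in the queue)."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(L)]  # (priority, birth step)
    waits = []
    for t in range(steps):
        if rng.random() < p:
            i = max(range(L), key=lambda k: tasks[k][0])
        else:
            i = rng.randrange(L)
        waits.append(t - tasks[i][1])
        tasks[i] = (rng.random(), t)
    return waits
```

For p close to 1 most tasks are executed almost immediately while a few low-priority tasks wait very long, producing the broad, non-Poissonian waiting-time distribution characteristic of human correspondence.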
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime-mapping methods. The range of methods applied in this study comprises spatio-temporal GIS analysis as well as 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance estimates of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel density estimation (KDE) techniques, LISA-based verification of KDE hotspots, geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for efficient communication of complex findings of geospatial crime scene analysis.
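Hotspot identification by kernel density estimation, the first analysis step named above, can be sketched as follows. Grid, bandwidth and coordinates are illustrative only; the thesis' actual KDE parameters and LISA verification are not reproduced here.

```python
import math

def kde_grid(points, xs, ys, bandwidth):
    """Evaluate a 2D Gaussian kernel density estimate of point events
    (e.g. crime scene coordinates) on a regular grid (rows = ys)."""
    h2 = 2.0 * bandwidth ** 2
    norm = 1.0 / (2.0 * math.pi * bandwidth ** 2 * len(points))
    return [[norm * sum(math.exp(-((x - px) ** 2 + (y - py) ** 2) / h2)
                        for px, py in points)
             for x in xs]
            for y in ys]

def hotspot_cells(grid, quantile=0.9):
    """Return (row, col) indices of cells whose density exceeds the
    given quantile: a simple threshold-based hotspot definition."""
    flat = sorted(v for row in grid for v in row)
    cut = flat[int(quantile * (len(flat) - 1))]
    return [(i, j) for i, row in enumerate(grid)
            for j, v in enumerate(row) if v >= cut]

# illustrative data: a cluster of incidents near (2, 2) plus an outlier
pts = [(2.0, 2.0), (2.1, 2.0), (1.9, 2.1), (2.0, 1.9), (8.0, 8.0)]
density = kde_grid(pts, list(range(10)), list(range(10)), 0.5)
```

The density surface peaks over the cluster, and thresholding it yields the hotspot cells that would then be verified with LISA statistics.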
In 1915, Alfred Wegener published his hypothesis of continental drift, which revolutionised the world for geologists. Since then, many scientists have studied the evolution of continents and especially the geologic structure of orogens, the most visible consequence of tectonic processes. Although the morphology and landscape evolution of mountain belts can be observed at the surface, the driving forces and dynamics at the lithospheric scale are less well understood, despite the fact that rocks from deeper levels of orogenic belts are in places exposed at the surface. In this thesis, such formerly deeply buried (ultra-)high-pressure rocks, in particular eclogite facies series, have been studied in order to reveal details about their formation and exhumation conditions and rates, and thus to provide insights into the geodynamics of the most spectacular orogenic belt in the world: the Himalaya. The specific area investigated was the Kaghan Valley in Pakistan (NW Himalaya). Following closure of the Tethyan Ocean by ca. 55-50 Ma, the northward subduction of the leading edge of India beneath the Eurasian Plate and the subsequent collision initiated a long-lived process of intracrustal thrusting that continues today. The continental crust of India – granitic basement, Paleozoic and Mesozoic cover series, and Permo-Triassic dykes, sills and lavas – has been buried partly to mantle depths. Today, these rocks crop out as eclogites, amphibolites and gneisses within the Higher Himalayan Crystalline, between low-grade metamorphosed rocks (600-640 °C / ca. 5 kbar) of the Lesser Himalaya and Tethyan sediments. Besides tectonically driven exhumation mechanisms, the channel flow model, which describes denudation-focused ductile extrusion of low-viscosity material developed in the middle to lower crust beneath the Tibetan Plateau, has been postulated.
To gain insights into the lithospheric and crustal processes that initiated and drove the exhumation of these (ultra-)high-pressure rocks, mineralogical, petrological and isotope-geochemical investigations were performed. They provide insights into 1) the depths and temperatures to which these rocks were buried, 2) the pressures and temperatures the rocks experienced during their exhumation, 3) the timing of these processes, and 4) the velocity with which these rocks were brought back to the surface. In detail, microscopic studies, the identification of key minerals, microprobe analyses, standard geothermobarometry and modelling using an effective bulk rock composition showed that published exhumation paths are incomplete. In particular, the eclogites of the northern Kaghan Valley were buried to depths of 140-100 km (36-30 kbar) at 790-640 °C. Subsequent cooling during decompression (exhumation) towards 40-35 km (17-10 kbar) and 630-580 °C was followed by a phase of reheating to about 720-650 °C at roughly the same depth before final exhumation took place. In the southernmost part of the study area, amphibolite facies assemblages with formation conditions similar to the deduced reheating phase indicate a juxtaposition of both areas after the eclogite facies stage and thus a stacking of Indian Plate units. Radiometric dating of zircon, titanite and rutile by U-Pb and of amphibole and micas by Ar-Ar reveals peak pressure conditions at 47-48 Ma. With a maximum exhumation rate of 14 cm/a, these rocks reached the crust-mantle boundary at 40-35 km within 1 Ma. Subsequent exhumation (46-41 Ma, 40-35 km) decelerated to ca. 1 mm/a at the base of the continental crust but rose again to about 2 mm/a in the period of 41-31 Ma, equivalent to 35-20 km depth.
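The exhumation rates quoted above follow from simple depth-time arithmetic, using the convenient identity 1 km/Ma = 1 mm/a. A small sketch with figures rounded from the abstract (the exact depth-time pairs used in the thesis may differ):

```python
def rate_mm_per_year(depth_change_km, duration_ma):
    """Average exhumation rate: km per Ma equals mm per year,
    since 1 km / 1 Ma = 1e6 mm / 1e6 a."""
    return depth_change_km / duration_ma

# fast stage: from ~140 km depth to the crust-mantle boundary at
# ~40 km within ~1 Ma (rounded endpoints, hence an average of
# ~10 cm/a against the quoted maximum of 14 cm/a)
fast = rate_mm_per_year(140 - 40, 1.0)   # 100 mm/a = 10 cm/a

# later stage: 35-20 km over 41-31 Ma, of the order of the
# ~2 mm/a quoted in the abstract
slow = rate_mm_per_year(35 - 20, 41 - 31)  # 1.5 mm/a
```

The average over rounded endpoints naturally comes out below the instantaneous maximum rate derived from the full pressure-temperature-time path.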
Apatite fission track (AFT) and (U-Th)/He ages from eclogites, amphibolites, micaschists and gneisses, obtained using the mineral-pair method, yielded moderate Oligocene to Miocene cooling rates of about 10 °C/Ma in the high-altitude northern parts of the Kaghan Valley. AFT ages range from 24.5±3.8 to 15.6±2.1 Ma, whereas apatite (U-Th)/He analyses yielded ages between 21.0±0.6 and 5.3±0.2 Ma. The southernmost part of the valley is dominated by younger, late Miocene to Pliocene apatite fission track ages of 7.6±2.1 and 4.0±0.5 Ma, which support earlier tectonic and petrological findings of a juxtaposition and stacking of Indian Plate units. As this nappe is tectonically lowermost, a later distinct exhumation and uplift driven by thrusting along the Main Boundary Thrust is inferred. A multi-stage exhumation path is evident from the petrological, isotope-geochemical and low-temperature thermochronology investigations. Buoyancy-driven exhumation caused an initial rapid ascent, as fast as recent normal plate movements (ca. 10 cm/a). As the exhuming units reached the crust-mantle boundary, the process slowed down due to changes in buoyancy. Most likely, this exhumation pause initiated the reheating event that is petrologically evident (e.g. glaucophane rimmed by hornblende, ilmenite overgrowth of rutile). Late-stage processes involved widespread thrusting and folding with accompanying regional greenschist facies metamorphism, whereby contemporaneous thrusting on the Batal Thrust (seen by some authors as equivalent to the MCT) and back sliding of the Kohistan Arc along the inversely reactivated Main Mantle Thrust caused the final exposure of these rocks. Similar circumstances have been observed at Tso Morari, Ladakh, India, 200 km further east, where comparable rock assemblages occur. In conclusion, as exhumation was essentially complete well before the initiation of the monsoonal system, climate-dependent effects (erosion) appear negligible in comparison to far-field tectonic effects.
Calibration of the global hydrological model WGHM with water mass variations from GRACE gravity data
(2010)
Since the start of the GRACE (Gravity Recovery And Climate Experiment) mission in 2002, time-dependent global maps of the Earth's gravity field have been available to study geophysical and climatologically driven mass redistributions on the Earth's surface. In particular, GRACE observations of total water storage variations (TWSV) provide a comprehensive data set for analysing the water cycle on large scales. They are therefore invaluable for the validation and calibration of large-scale hydrological models such as the WaterGAP Global Hydrology Model (WGHM), which simulates the continental water cycle including its most important components, such as soil, snow, canopy, surface water and groundwater. Hitherto, WGHM has exhibited significant differences from GRACE, especially in the seasonal amplitude of TWSV. The need for a validation of hydrological models is further highlighted by large differences between several global models, e.g. WGHM, the Global Land Data Assimilation System (GLDAS) and the Land Dynamics model (LaD). In this context, GRACE links geodetic and hydrological research, a link that demands the development of adequate data integration methods on both sides; these form the main objectives of this work. They include the derivation of accurate GRACE-based water storage changes, the development of strategies to integrate GRACE data into a global hydrological model as well as a calibration method, followed by the re-calibration of WGHM in order to analyse process and model responses. To achieve these aims, GRACE filter tools for the derivation of regionally averaged TWSV were evaluated for specific river basins. A decorrelation filter using GRACE orbits for its design proved most efficient among the tested methods. Consistency in data and equal spatial resolution between observed and simulated TWSV were realised by including the most important hydrological processes and by filtering both data sets equally.
Appropriate calibration parameters were derived by a WGHM sensitivity analysis against TWSV. Finally, a multi-objective calibration framework was developed to constrain model predictions by both river discharge and GRACE TWSV, realised with an evolutionary method, the ε-Non-dominated Sorting Genetic Algorithm II (ε-NSGA-II). The model was calibrated for the 28 largest river basins worldwide, and for most of them improved simulation results were achieved with regard to both objectives. The multi-objective approach yielded more reliable and consistent simulations of TWSV within the continental water cycle and allowed possible model structure errors or mis-modelled processes to be detected for specific river basins. For tropical regions in particular, the seasonal amplitude of water mass variations increased. The findings lead to an improved understanding of hydrological processes and their representation in the global model. Finally, the robustness of the results is analysed with respect to GRACE and runoff measurement errors. A main conclusion is that not only soil water and snow storage but also groundwater and surface water storage have to be included in the comparison of the modelled and GRACE-derived total water budget data. Regarding model calibration, the regionally varying distribution of parameter sensitivity suggests tuning only the parameters of important processes within each region. Furthermore, observations of single storage components besides runoff are necessary to improve the signal amplitudes and timing of simulated TWSV and to evaluate them with higher accuracy. The results of this work highlight the value of GRACE data when merged into large-scale hydrological modelling and present methods to improve large-scale hydrological models.
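Multi-objective calibration with ε-NSGA-II rests on Pareto dominance between candidate parameter sets, each evaluated against the two objectives (discharge misfit and GRACE TWSV misfit). A minimal dominance-and-front sketch, not the thesis implementation and without the ε-archiving and genetic operators of the full algorithm:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization:
    no worse in every objective, strictly better in at least one)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (discharge error, TWSV error) per candidate parameter set."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# illustrative trade-off: the last candidate is worse than (2, 2)
# in both objectives and is therefore discarded
errors = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0)]
front = pareto_front(errors)
```

The calibration then searches parameter space so that this front moves toward the origin, i.e. toward jointly small discharge and TWSV errors.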
The aim of this thesis is the design, expression and purification of human cytochrome c mutants and their characterization with regard to electrochemical and structural properties, as well as with respect to their reaction with the superoxide radical and with the selected proteins human sulfite oxidase and fungal bilirubin oxidase. All three interaction partners are studied here for the first time with human cyt c and with mutant forms of cyt c. A further aim is the incorporation of the different cyt c forms into two bioelectronic systems: an electrochemical superoxide biosensor with enhanced sensitivity, and a protein multilayer assembly with and without bilirubin oxidase on electrodes. The first part of the thesis is dedicated to the design, expression and characterization of the mutants, with a focus on the electrochemical characterization of the protein in solution and immobilized on electrodes. Furthermore, the reaction of these mutants with superoxide was investigated and the possible reaction mechanisms are discussed. In the second part of the work, an amperometric superoxide biosensor based on selected human cytochrome c mutants was constructed and the performance of the sensor electrodes was studied. The human wild-type and four of the five mutant electrodes could be applied successfully for the detection of the superoxide radical. In the third part of the thesis, the reaction of horse heart cyt c, the human wild-type and seven human cyt c mutants with the two proteins sulfite oxidase and bilirubin oxidase was studied electrochemically, and the influence of the mutations on the electron transfer reactions is discussed. Finally, protein multilayer electrodes with different cyt c forms, including the mutants G77K and N70K which exhibit different reaction rates towards BOD, were investigated, and BOD together with the wild-type and engineered cyt c was embedded in the multilayer assembly.
The relevant electron transfer steps and the kinetic behavior of the multilayer electrodes are investigated, since the functionality of electroactive multilayer assemblies with incorporated redox proteins is often limited by the electron transfer abilities of the proteins within the multilayer. The formation via the layer-by-layer technique and the kinetic behavior of the mono- and bi-protein multilayer systems are studied by SPR and cyclic voltammetry. In conclusion, this thesis shows that protein engineering is a helpful instrument for studying protein reactions as well as electron transfer mechanisms of complex bioelectronic systems (such as bi-protein multilayers). Furthermore, the possibility of designing tailored recognition elements for the construction of biosensors with improved performance is demonstrated.
This work presents the development of entropy-elastic gelatin-based networks in the form of films or scaffolds. The materials have good prospects for biomedical applications, especially in the context of bone regeneration. Entropy-elastic gelatin-based hydrogel films with varying crosslinking densities were prepared with tailored mechanical properties. Gelatin was covalently crosslinked above its sol-gel transition, which suppressed the helicity of the gelatin chains. Hexamethylene diisocyanate (HDI) or ethyl ester lysine diisocyanate (LDI) were applied as chemical crosslinkers, and the reaction was conducted either in dimethyl sulfoxide (DMSO) or water. Amorphous films, as confirmed by wide-angle X-ray scattering (WAXS), were prepared with tailorable degrees of swelling (Q: 300-800 vol.-%) and wet-state Young's moduli (E: 70-740 kPa). Model reactions showed that the crosslinking reaction resulted in a combination of direct crosslinks (3-13 mol.-%), grafting (5-40 mol.-%), and blending of oligoureas (16-67 mol.-%). The knowledge gained with this bulk material was transferred to an integrated process of foaming and crosslinking to obtain porous 3D gelatin-based scaffolds. For this purpose, a gelatin solution was foamed in the presence of a surfactant (saponin), and the resulting foam was fixed by chemical crosslinking with a diisocyanate. The amorphous crosslinked scaffolds were synthesized with varied gelatin and HDI concentrations and analyzed in the dry state by micro computed tomography (µCT, porosity: 65±11–73±14 vol.-%) and scanning electron microscopy (SEM, pore size: 117±28–166±32 µm). Subsequently, the work focused on the characterization of the gelatin scaffolds under conditions relevant to biomedical applications. The scaffolds showed high water uptake (H: 630-1680 wt.-%) with minimal changes in outer dimensions.
The form stability could be explained by the decreased scaffold pore size (115±47–130±49 µm) revealed by confocal laser scanning microscopy (CLSM) upon wetting. Shape recoverability was observed after removal of stress when compressing wet scaffolds, while dry scaffolds maintained the compressed shape. This was explained by a reduction of the glass transition temperature upon equilibration with water, as shown by dynamic mechanical analysis at varied temperature (DMTA). The composition-dependent compression moduli (Ec: 10-50 kPa) were comparable to the bulk micromechanical Young's moduli measured by atomic force microscopy (AFM). The hydrolytic degradation profile could be adjusted, and a controlled decrease of mechanical properties was observed. Partially degraded scaffolds displayed an increase in pore size, likely due to pore wall disintegration during degradation, which caused the pores to merge. The cytotoxicity and immunologic responses of the scaffolds were analyzed. The porous scaffolds enabled proliferation of human dermal fibroblasts within the implants (up to 90 µm depth). Furthermore, indirect eluate tests were carried out with L929 cells to quantify the cytotoxic response to the material. Here, the effects of the sterilization method (ethylene oxide), the crosslinker and the surfactant were analyzed. Fully cytocompatible scaffolds were obtained by using LDI as crosslinker and PEO40-PPO20-PEO40 as surfactant. These investigations were accompanied by a study of endotoxin contamination of the material. Medical-grade materials were successfully obtained (<0.5 EU/mL) by using low-endotoxin gelatin and performing all synthetic steps in a laminar flow hood.
Production of regular and non-regular verbs: evidence for a lexical entry complexity account
(2010)
The incredible productivity and creativity of language depends on two fundamental resources: a mental lexicon and a mental grammar. Rules of grammar enable us to produce and understand complex phrases we have not encountered before and at the same time constrain the computation of complex expressions. The concepts of the mental lexicon and mental grammar have been thoroughly tested by comparing the use of regular versus non-regular word forms. Regular verbs (e.g. walk-walked) are computed using a suffixation rule in a neural system for grammatical processing; non-regular verbs (run-ran) are retrieved from associative memory. The role of regularity has only been explored for the past tense, where regularity is overtly visible. To explore the representation and encoding of regularity as well as the inflectional processes involved in the production of regular and non-regular verbs, this dissertation investigated three groups of German verbs: regular, irregular and hybrid verbs. Hybrid verbs in German have completely regular conjugation in the present tense and irregular conjugation in the past tense. Articulation latencies were measured while participants named pictures of actions, producing the 3rd person singular of regular, hybrid, and irregular verbs in present and past tense. Studying the production of German verbs in past and present tense, this dissertation explored the complexity of lexical entries as a decisive factor in the production of verbs.
This thesis focuses on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories that include nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene π-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene π-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that, after intercalation of Au, graphene on Ni(111) is spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented.
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and has gained increasing interest in science and industry. Natural aqueous nanofluidic systems are very complex: there is often a predominance of liquid interfaces, or the fluid contains charged or differently shaped colloids. The effects promoted by these additives are far from being completely understood, and interesting questions arise with regard to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires designing suitable experimental model nano-channels with the required characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can be adjusted easily via the concentration of electrolyte in the film-forming solution. In this way, channel dimensions from 5-100 nm are possible, a high flexibility for an experimental system. TLFs have liquid interfaces of different charge and properties, and they offer the possibility to confine differently shaped ions and molecules to very small spaces, or to subject them to controlled forces. This makes foam films a unique "device" for obtaining information about fluidic systems of nanometer dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, to extract information about natural systems and to deepen the understanding of the physico-chemical conditions. The presented work showed that foam films can be used as experimental models to understand the behavior of liquids in nano-sized confinement.
In the first part of the thesis, we studied the thinning of thin liquid films stabilized with the non-ionic surfactant n-dodecyl-β-maltoside (β-C₁₂G₂), with primary interest in interfacial diffusion processes during thinning as a function of surfactant concentration. The surfactant concentration in the film-forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analyzed by combining previously developed theoretical approaches. Qualitative information about the mobility of the surfactant molecules at the film surfaces was obtained. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and behaved as non-deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non-deformable disks. In the second part of the thesis, we designed a TLF nanofluidic system containing rod-like multivalent ions and compared this system to films containing monovalent ions. We presented first results which identified, for the first time, the existence of an additional attractive force in foam films based on the electrostatic interaction between rod-like ions and oppositely charged surfaces. We may speculate that this is an ion-bridging component of the disjoining pressure. The results show that for films prepared in the presence of spermidine the transformation of the thicker common film (CF) to the thinnest Newton black film (NBF) is more probable than for films prepared with NaCl under similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of either of the ions at the fluid surfaces, and it does not lead to any changes in the equilibrium properties of the CF and NBF. Our hypothesis was confirmed using the trivalent ion Y³⁺, which does not show ion bridging.
The experimental results are compared to theoretical predictions, and quantitative agreement on the system’s energy gain for the transition from CF to NBF could be obtained. In the third part of the work, the behavior of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity by an unexpectedly large amount, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces due to the stabilizer molecules present in the bulk solution. Finally, the location of the particles, with respect to their lateral and vertical arrangement in the film, was studied with advanced reflectivity and scattering methods. Neutron reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the interface. For the first time, we studied TLFs using grazing-incidence small-angle X-ray scattering (GISAXS), a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on a lateral ordering of particles in the film.
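The Reynolds drainage law invoked in the first part above has a simple closed form. The following sketch evaluates it for purely hypothetical foam-film parameters; the thickness, radius, driving pressure and viscosity are assumed illustrative values, not the thesis' data:

```python
# Illustrative sketch of the Reynolds drainage law for a thin liquid film
# between two non-deformable (fully immobile) surfaces, as referenced above.
# All numerical values below are hypothetical, chosen only for demonstration.

def reynolds_velocity(h, delta_p, mu, r_film):
    """Thinning velocity V_Re = 2 h^3 dP / (3 mu R^2).

    h        -- film thickness (m)
    delta_p  -- driving pressure (Pa)
    mu       -- dynamic viscosity (Pa s)
    r_film   -- film radius (m)
    """
    return 2.0 * h**3 * delta_p / (3.0 * mu * r_film**2)

# Hypothetical foam-film parameters: 50 nm thick, 100 um radius,
# water-like viscosity, ~100 Pa capillary driving pressure.
v = reynolds_velocity(h=50e-9, delta_p=100.0, mu=1e-3, r_film=100e-6)
print(f"thinning velocity: {v:.3e} m/s")
```

Note the strong h³ dependence: drainage slows down dramatically as the film thins, which is why immobile (Reynolds-like) surfaces decelerate thinning so effectively.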
Development of techniques for earthquake microzonation studies in different urban environment
(2010)
The proliferation of megacities in many developing countries, their location in areas exposed to high risk from large earthquakes, and a widespread lack of preparation demonstrate the need for improved capabilities in hazard assessment, as well as for the rapid adjustment and development of land-use planning. In particular, within the context of seismic hazard assessment, the evaluation of local site effects and their influence on the spatial distribution of ground shaking generated by an earthquake plays an important role. It follows that carrying out earthquake microzonation studies, which aim at identifying areas within the urban environment that are expected to respond in a similar way to a seismic event, is essential for the reliable risk assessment of large urban areas. Considering the rate at which many large towns in developing countries that are prone to large earthquakes are growing, their seismic microzonation has become mandatory. Such activities are challenging, and techniques suitable for identifying site effects within such contexts are needed. In this dissertation, I develop techniques for investigating large-scale urban environments that aim at being non-invasive, cost-effective and quickly deployable. These characteristics allow one to investigate large areas over a relatively short time frame, with a spatial sampling resolution sufficient to provide reliable microzonation. Although there is a trade-off between the completeness of the available information and the extent of the investigated area, I attempt to mitigate this limitation by combining two layers of information: in the first layer, the site effects at a few calibration points are well constrained by analyzing earthquake data or using other geophysical information (e.g., shear-wave velocity profiles); in the second layer, the site effects over a larger areal coverage are estimated by means of single-station noise measurements.
The microzonation is performed in terms of problem-dependent quantities, by considering a proxy suitable for linking information from the first layer to the second one. In order to define the microzonation approach proposed in this work, different methods for estimating site effects have been combined and tested in Potenza (Italy), where a considerable amount of data was available. In particular, the horizontal-to-vertical spectral ratio computed for seismic noise recorded at different sites has been used as a proxy to combine the two levels of information and to create a microzonation map in terms of spectral intensity ratio (SIR). In the next step, I applied this two-layer approach to Istanbul (Turkey) and Bishkek (Kyrgyzstan). A similar hybrid approach, i.e., combining earthquake and noise data, has been used for the microzonation of these two different urban environments. For both cities, after having calibrated the fundamental frequencies of resonance estimated from seismic noise against those obtained by analysing earthquakes (first layer), a fundamental frequency map has been computed using the noise measurements carried out within the town (second layer). By applying this new approach, maps of the fundamental frequency of resonance for Istanbul and Bishkek have been published for the first time. In parallel, a microzonation map in terms of SIR has been incorporated into a risk scenario for the Potenza test site by means of a dedicated regression between spectral intensity (SI) and macroseismic intensity (EMS). The scenario study confirms the importance of site effects within the risk chain. In fact, their introduction into the scenario led to an increase of about 50% in estimates of the number of buildings that would be partially or totally collapsed. Last, but not least, considering that the approach developed and applied in this work is based on measurements of seismic noise, the reliability of such measurements has been assessed.
A theoretical model describing the self-noise curves of different instruments usually adopted in microzonation studies (e.g., those used in Potenza, Istanbul and Bishkek) has been considered and compared with empirical data recorded in Cologne (Germany) and Gubbio (Italy). The results show that, depending on the geological and environmental conditions, the instrumental noise can severely bias the results obtained by recording and analysing ambient noise. Therefore, in this work I also provide some guidelines for measuring seismic noise.
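As an illustration of the horizontal-to-vertical spectral ratio used as a proxy throughout this work, the following minimal sketch computes an H/V curve from three synthetic noise traces. The quadratic-mean combination of the horizontal components and the absence of window averaging and smoothing are simplifying assumptions for this sketch, not the processing actually used in the thesis:

```python
# A minimal H/V spectral-ratio sketch on synthetic data.  The synthetic
# traces carry a 2 Hz resonance on the horizontal components only, so the
# ratio should peak near that frequency.
import numpy as np

def hv_ratio(north, east, vertical, fs):
    """Return frequencies and the H/V amplitude spectral ratio."""
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # amplitude spectra of the three components
    spec_n = np.abs(np.fft.rfft(north))
    spec_e = np.abs(np.fft.rfft(east))
    spec_v = np.abs(np.fft.rfft(vertical))
    # combine the horizontals as a quadratic mean
    horiz = np.sqrt((spec_n**2 + spec_e**2) / 2.0)
    eps = 1e-12  # guard against division by zero
    return freqs, horiz / (spec_v + eps)

# 60 s of synthetic noise at 100 Hz sampling with a horizontal 2 Hz resonance
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
res = np.sin(2 * np.pi * 2.0 * t)
north = rng.standard_normal(t.size) + 5 * res
east = rng.standard_normal(t.size) + 5 * res
vert = rng.standard_normal(t.size)
f, hv = hv_ratio(north, east, vert, fs)
print(f"peak near {f[np.argmax(hv)]:.2f} Hz")
```

In practice the ratio would be averaged over many windows and smoothed before reading off the fundamental resonance frequency; this sketch only shows the core spectral-ratio idea.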
Crustal deformation can be the result of volcanic and tectonic activity such as fault dislocation and magma intrusion. It may precede and/or follow earthquake occurrence and eruption. To mitigate the associated hazard, continuous monitoring of crustal deformation has accordingly become an important task for geo-observatories and fast response systems. Because crustal deformation fields are highly non-linear in time and space, and are not always measurable using conventional geodetic methods (e.g., leveling), innovative techniques of monitoring and analysis are required. In this thesis I describe novel methods to improve the ability for precise and accurate mapping of the spatiotemporal surface deformation field using multiple acquisitions of satellite radar data. Furthermore, to better understand the sources of such spatiotemporal deformation fields, I present novel static and time-dependent model inversion approaches. Almost all interferograms include areas where the signal decorrelates and is distorted by atmospheric delay. In this thesis I detail new analysis methods to reduce the limitations of conventional InSAR by combining the benefits of advanced InSAR methods, such as permanent scatterer InSAR (PSI) and the small baseline subsets (SBAS) approach, with a wavelet-based data filtering scheme. This novel InSAR time series methodology is applied, for instance, to monitor the non-linear deformation processes at Hawaii Island. The radar phase change at Hawaii is found to be due to intrusions, eruptions, earthquakes and flank movement processes, superimposed by significant environmental artifacts (e.g., atmospheric delay). The deformation field I obtained using the new InSAR analysis method is in good agreement with continuous GPS data. This provides an accurate spatiotemporal deformation field at Hawaii, which allows time-dependent source modeling.
Conventional source modeling methods usually deal with static deformation fields, while retrieving the dynamics of the source requires more sophisticated time-dependent optimization approaches. I address this problem by combining Monte Carlo based optimization approaches with a Kalman filter, which yields model parameters of the deformation source that are consistent in time. I found that there are numerous deformation sources at Hawaii Island which interact in space and time: volcano inflation, for instance, is associated with changes in rifting behavior and temporally linked to silent earthquakes. I applied these new methods to other tectonic and volcanic terrains, most of which reveal the importance of associated or coupled deformation sources. The findings are 1) the relation between deep and shallow hydrothermal and magmatic sources underneath the Campi Flegrei volcano, 2) gravity-driven deformation at Damavand volcano, 3) fault interaction associated with the 2010 Haiti earthquake, 4) independent block-wise flank motion at the Hilina Fault system, Kilauea, and 5) interaction between a salt diapir and the 2005 Qeshm earthquake in southern Iran. This thesis, written in cumulative form and comprising 9 manuscripts published or under review in peer-reviewed journals, improves the techniques for InSAR time series analysis and source modeling and shows the mutual dependence between adjacent deformation sources. These findings allow a more realistic estimation of the hazard associated with complex volcanic and tectonic systems.
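The idea of keeping source parameters consistent in time with a Kalman filter can be illustrated with a deliberately simplified scalar example. The random-walk state model, the noise levels, and the synthetic "source rate" observations below are invented for demonstration and are not the thesis' actual formulation:

```python
# A minimal scalar Kalman-filter sketch: tracking a hypothetical constant
# deformation-source rate from noisy per-epoch estimates.
import numpy as np

def kalman_1d(observations, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Random-walk state model: x_k = x_{k-1} + w,  z_k = x_k + v.

    q -- process-noise variance, r -- observation-noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q                      # predict (state modeled as a random walk)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = 2.0                            # hypothetical constant source rate
obs = truth + 0.5 * rng.standard_normal(50)
est = kalman_1d(obs)
print(f"final estimate: {est[-1]:.2f} (truth {truth})")
```

The filtered estimate converges toward the underlying rate while damping the per-epoch observation noise; the thesis' actual implementation couples this temporal consistency with Monte Carlo optimization of full multi-parameter source models.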
In the high mountains of Asia, glaciers cover an area of approximately 115,000 km² and constitute one of the largest continental ice accumulations outside Greenland and Antarctica. Their sensitivity to climate change makes them valuable palaeoclimate archives, but also vulnerable to current and predicted global warming. This is a pressing problem, as snow and glacial melt waters are important sources for agriculture and power supply in densely populated regions of south, east, and central Asia. Successful prediction of the glacial response to climate change in Asia and mitigation of the socioeconomic impacts require profound knowledge of the climatic controls and the dynamics of Asian glaciers. However, due to their remoteness and difficult accessibility, ground-based studies are rare, as well as temporally and spatially limited. We therefore lack basic information on the vast majority of these glaciers. In this thesis, I employ different methods to assess the dynamics of Asian glaciers on multiple time scales. First, I tested a method for precise satellite-based measurement of glacier-surface velocities and conducted a comprehensive regional survey of glacial flow and terminus dynamics of Asian glaciers between 2000 and 2008. This unprecedented dataset provides unique insights into the contrasting topographic and climatic controls of glacial flow velocities across the Asian highlands. The data document disparate recent glacial behavior between the Karakoram and the Himalaya, which I attribute to the competing influence of the mid-latitude westerlies during winter and the Indian monsoon during summer. Second, I tested whether such climate-related longitudinal differences in glacial behavior also prevail on longer time scales and potentially account for observed regionally asynchronous glacial advances.
I used cosmogenic nuclide surface exposure dating of erratic boulders on moraines to obtain a glacial chronology for the upper Tons Valley, situated in the headwaters of the Ganges River. This area lies in the transition zone from monsoonal to westerly moisture supply and is therefore ideally suited to examining the influence of these two atmospheric circulation regimes on glacial advances. The new glacial chronology documents multiple glacial oscillations during the last glacial termination and during the Holocene, suggesting largely synchronous glacial changes in the western Himalayan region that are related to gradual glacial-interglacial temperature oscillations with superimposed, higher-frequency monsoonal precipitation changes. In a third step, I combine results from short-term satellite-based climate records and surface-velocity-derived ice-flux estimates with topographic analyses to deduce the erosional impact of glaciations on long-term landscape evolution in the Himalayan-Tibetan realm. The results provide evidence for the long-term effects of pronounced east-west differences in glaciation and glacial erosion, depending on climatic and topographic factors. Contrary to common belief, the data suggest that the monsoonal climate in the central Himalaya weakens glacial erosion at high elevations, helping to maintain a steep southern orographic barrier that protects the Tibetan Plateau from lateral destruction. The results of this thesis highlight how climatic and topographic gradients across the high mountains of Asia affect glacier dynamics on time scales ranging from 10^0 to 10^6 years. Glacial response times to climate changes are tightly linked to properties such as debris cover and surface slope, which are controlled by the topographic setting and need to be taken into account when reconstructing mountainous palaeoclimate from glacial histories or assessing the future evolution of Asian glaciers.
Conversely, the regional topographic differences of glacial landscapes in Asia are partly controlled by climatic gradients and the long-term influence of glaciers on the topographic evolution of the orogenic system.
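The surface-exposure dating behind the chronology above rests on a standard production-decay balance, N = (P/λ)(1 − e^(−λt)). As a hedged illustration, the sketch below inverts this relation for the apparent age, assuming zero erosion and zero inheritance; the production rate and decay constant are rough literature-style values for ¹⁰Be, and the measured concentration is invented:

```python
# A hedged sketch of the zero-erosion cosmogenic surface-exposure age
# equation.  All numbers are illustrative, not the thesis' measurements.
import math

def exposure_age(n_conc, prod_rate, decay_const):
    """Solve N = (P / lambda) * (1 - exp(-lambda * t)) for t (years)."""
    return -math.log(1.0 - n_conc * decay_const / prod_rate) / decay_const

P = 10.0          # production rate, atoms g^-1 yr^-1 (assumed)
LAMBDA = 4.99e-7  # 10Be decay constant, yr^-1
N = 1.5e5         # measured nuclide concentration, atoms g^-1 (assumed)
t = exposure_age(N, P, LAMBDA)
print(f"apparent exposure age: {t:.0f} yr")
```

For young boulders the decay correction is tiny and the age is nearly N/P; real dating additionally scales the production rate for latitude, altitude, shielding and erosion.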
Large-scale volcanic deformation recently detected by radar interferometry (InSAR) provides new information, and thus new scientific challenges, for understanding volcano-tectonic activity and magmatic systems. The destabilization of such a system at depth noticeably affects the surrounding environment through magma injection, ground displacement and volcanic eruptions. To determine the spatiotemporal evolution of the Lazufre volcanic area located in the central Andes, we combined short-term ground displacement acquired by InSAR with long-term geological observations. Ground displacement was first detected using InSAR in 1997. By 2008, this displacement affected 1800 km² of the surface, an area comparable in size to the deformation observed at caldera systems. The original displacement was followed in 2000 by a second, small-scale, neighbouring deformation located on the Lastarria volcano. We performed a detailed analysis of the volcanic structures at Lazufre and found relationships with the deformation observed with InSAR. We infer that these observations are likely to be the surface expression of a long-lived magmatic system evolving at depth. It is not yet clear whether Lazufre may produce larger unrest or volcanic eruptions; however, the second deformation detected at Lastarria and the clear increase of the large-scale deformation rate make this an area of particular interest for closer continuous monitoring.
This thesis is concerned with the extinction of populations composed of different types of individuals, and with their behavior before extinction and in the case of a very late extinction. We approach this question firstly from a strictly probabilistic viewpoint, and secondly from the standpoint of risk analysis related to the extinction of a particular model of population dynamics. In this context we propose several statistical tools. The population size is modeled by a branching process, which is either a continuous-time multitype Bienaymé-Galton-Watson process (BGWc) or its continuous-state counterpart, the multitype Feller diffusion process. We are interested in different kinds of conditioning on non-extinction, and in the associated equilibrium states. These ways of conditioning have been widely studied in the monotype case. However, the literature on multitype processes is much less extensive, and there is no systematic work establishing connections between the results for BGWc processes and those for Feller diffusion processes. In the first part of this thesis, we investigate the behavior of the population before its extinction by conditioning the associated branching process X_t on non-extinction (X_t≠0), or more generally on non-extinction in a near future 0≤θ<∞ (X_{t+θ}≠0), and by letting t tend to infinity. We prove the result, new in the multitype framework and for θ>0, that this limit exists and is non-degenerate. This reflects a stationary behavior of the population dynamics conditioned on non-extinction, and provides a generalization of the so-called Yaglom limit, corresponding to the case θ=0. In a second step we study the behavior of the population in the case of a very late extinction, obtained as the limit when θ tends to infinity of the process conditioned on X_{t+θ}≠0.
The resulting conditioned process is a known object in the monotype case (sometimes referred to as the Q-process), and has also been studied when X_t is a multitype Feller diffusion process. We investigate the not yet considered case where X_t is a multitype BGWc process and prove the existence of the associated Q-process. In addition, we examine its properties, including the asymptotic ones, and propose several interpretations of the process. Finally, we are interested in interchanging the limits in t and θ, as well as in the not yet studied commutativity of these limits with respect to the high-density-type relationship between BGWc processes and Feller processes. We establish an original and exhaustive list of all possible exchanges of limits (long-time limit in t, increasing delay of extinction θ, diffusion limit). The second part of this work is devoted to risk analysis related both to the extinction of a population and to its very late extinction. We consider a branching population model (arising notably in the epidemiological context) for which a parameter related to the first moments of the offspring distribution is unknown. We build several estimators adapted to different stages of evolution of the population (growth phase, decay phase, and decay phase when extinction is expected very late), and moreover prove their asymptotic properties (consistency, normality). In particular, we build a least-squares estimator adapted to the Q-process, allowing a prediction of the population development in the case of a very late extinction. This would correspond to the best-case or to the worst-case scenario, depending on whether the population is threatened or invasive. These tools enable us to study the extinction phase of the Bovine Spongiform Encephalopathy epidemic in Great Britain, for which we estimate the infection parameter corresponding to a possible source of horizontal infection persisting after the removal in 1988 of the major route of infection (meat and bone meal).
This allows us to predict the evolution of the spread of the disease, including the year of extinction, the number of future cases and the number of infected animals. In particular, we produce a very fine analysis of the evolution of the epidemic in the unlikely event of a very late extinction.
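The conditioning on non-extinction described above can be illustrated with a toy discrete-time, monotype analogue: a subcritical Galton-Watson process with Poisson offspring, conditioned on survival at a late generation T. The empirical law of the surviving paths approximates a Yaglom-type limit; the offspring mean, horizon and path count are arbitrary toy choices, not the thesis' continuous-time multitype BGWc setting:

```python
# Toy Galton-Watson simulation: condition a subcritical process on
# non-extinction at generation T and inspect the surviving paths.
import numpy as np

def simulate_gw(n_paths, n_gen, mean_offspring, rng):
    """Population sizes at generation n_gen for n_paths i.i.d. chains."""
    pop = np.ones(n_paths, dtype=np.int64)
    for _ in range(n_gen):
        # the sum of pop i.i.d. Poisson(m) offspring is Poisson(m * pop)
        pop = rng.poisson(mean_offspring * pop)
    return pop

rng = np.random.default_rng(42)
final = simulate_gw(n_paths=200_000, n_gen=20, mean_offspring=0.8, rng=rng)
alive = final[final > 0]                 # condition on {X_T != 0}
print(f"survival fraction: {alive.size / final.size:.3f}")
print(f"mean size given survival: {alive.mean():.2f}")
```

Although almost every path dies out (survival probability decays roughly like 0.8^T), the conditional distribution of the survivors stabilizes, which is exactly the stationary behavior that the Yaglom limit and the Q-process formalize.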
About the relation between implicit Theory of Mind & the comprehension of complement sentences
(2010)
Previous studies on the relation between language and social cognition have shown that children’s mastery of embedded sentential complements plays a causal role in the development of a Theory of Mind (ToM). Children start to succeed on complementation tasks, in which they are required to report the content of an embedded clause, in the second half of their fourth year. Traditional ToM tasks test the child’s ability to predict that a person who is holding a false belief (FB) about a situation will act "falsely". In these tasks, children do not represent FBs until the age of 4 years. According to the linguistic determinism hypothesis, only the unique syntax of complement sentences provides the format for representing FBs. However, experiments measuring children’s looking behavior instead of their explicit predictions provided evidence that 2-year-olds already possess an implicit ToM. This dissertation examined the question of whether there is also an interrelation between implicit ToM and the comprehension of complement sentences in typically developing German preschoolers. Two studies were conducted. In a correlational study (Study 1), 3-year-old children’s performance on a traditional (explicit) FB task, on an implicit FB task and on language tasks measuring children’s comprehension of tensed sentential complements was collected and tested for interdependence. Eye-tracking methodology was used to assess implicit ToM by measuring participants’ spontaneous anticipatory eye movements while they were watching FB movies. Two central findings emerged. First, predictive looking (implicit ToM) was not correlated with complement mastery, although both measures were associated with explicit FB task performance. This pattern of results suggests that explicit, but not implicit, ToM is language dependent. Second, as a group, 3-year-olds did not display implicit FB understanding. That is, previous findings on a precocious reasoning ability could not be replicated.
This indicates that the characteristics of predictive-looking tasks play a role in the elicitation of implicit FB understanding, as the current task was completely nonverbal and as complex as traditional FB tasks. Study 2 took a methodological approach by investigating whether children display an earlier comprehension of sentential complements when using the same means of measurement as in experimental tasks tapping implicit ToM, namely anticipatory looking. Two experiments were conducted. 3-year-olds were confronted either with a complement sentence expressing the protagonist’s FB (Exp. 1) or with a complex sentence expressing the protagonist’s belief without giving any information about the truth/falsity of the belief (Exp. 2). Afterwards, their expectations about the protagonist’s future behavior were measured. Overall, implicit measures revealed no considerably earlier understanding of sentential complementation. Whereas 3-year-olds did not display a comprehension of complex sentences if these embedded a false proposition, children from 3;9 years on were proficient in processing complement sentences if the truth value of the embedded proposition could not be evaluated. This pattern of results suggests that (1) the linguistic expression of a person’s FB does not elicit implicit FB understanding and that (2) the assessment of the purely syntactic understanding of complement sentences is affected by competing reality information. In conclusion, this dissertation found no evidence that implicit ToM is related to the comprehension of sentential complementation. The findings suggest that implicit ToM might be based on nonlinguistic processes. Results are discussed in the light of recently proposed dual-process models that assume two cognitive mechanisms accounting for different levels of ToM task performance.
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake hazard. Macroseismic data for the 1985 Valparaiso and the 2010 Maule events show large variations in the distribution of damage to buildings within short distances, indicating a strong influence of local sediments and of the shape of the sediment-bedrock interface on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from earthquake data and ambient noise. The analysis of earthquake data shows significant dependence on the local geological structure with regard to amplitude and duration. Moreover, the analysis of noise spectral ratios shows that they can provide a lower bound in amplitude for site amplification and, since no variability in terms of time and amplitude is observed, that it is possible to map the fundamental resonance frequency of the soil for a 26 km x 12 km area in the northern part of the Santiago de Chile basin. By inverting the noise spectral ratios, local shear-wave velocity profiles could be derived under the constraint of the thickness of the sedimentary cover, which had previously been determined by gravimetric measurements. The resulting 3D model was derived by interpolation between the single shear-wave velocity profiles and shows locally good agreement with the few existing velocity profile data, but allows the entire area, as well as deeper parts of the basin, to be represented in greater detail. The wealth of available data further allowed checking whether any correlation between the shear-wave velocity in the uppermost 30 m (vs30) and the slope of topography, a technique recently proposed by Wald and Allen (2007), exists on a local scale.
While one lithology might provide a greater scatter in the velocity values for the investigated area, almost no correlation between topographic gradient and calculated vs30 exists, whereas a better link is found between vs30 and the local geology. When comparing the vs30 distribution with the MSK intensities for the 1985 Valparaiso event, it becomes clear that high intensities are found where the expected vs30 values are low and the sedimentary cover is thick. Although this evidence cannot be generalized for all possible earthquakes, it indicates the influence of site effects in modifying the ground motion when earthquakes occur well outside of the Santiago basin. Using the attained knowledge of the basin characteristics, simulations of strong ground motion within the Santiago Metropolitan area were carried out by means of the spectral element technique. The simulation of a regional event, which had also been recorded by a dense network installed in the city of Santiago for recording aftershock activity following the 27 February 2010 Maule earthquake, shows that the model is capable of realistically calculating ground motion in terms of amplitude, duration, and frequency and, moreover, that the surface topography and the shape of the sediment-bedrock interface strongly modify ground motion in the Santiago basin. An examination of the dependency of ground motion on the hypocenter location for a hypothetical event occurring along the active San Ramón fault, which crosses the eastern outskirts of the city, shows that the unfavorable interaction between fault rupture, radiation mechanism, and complex geological conditions in the near field may give rise to large values of peak ground velocity and therefore considerably increase the level of seismic risk for Santiago de Chile.
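The vs30 values discussed above are conventionally defined as 30 m divided by the shear-wave travel time through the uppermost 30 m. The following small sketch applies that definition to a purely hypothetical layered profile; the thicknesses and velocities are illustrative, not measured Santiago data:

```python
# Time-averaged vs30 = 30 m / (sum of layer travel times over the top 30 m).
# The example profile is hypothetical: soft sediments over stiffer gravels.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) tuples, top layer first.

    Assumes the profile reaches at least 30 m depth.
    """
    depth, travel_time = 0.0, 0.0
    for h, v in layers:
        use = min(h, 30.0 - depth)       # clip the profile at 30 m depth
        travel_time += use / v
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

profile = [(5.0, 180.0), (15.0, 350.0), (40.0, 600.0)]
print(f"vs30 = {vs30(profile):.0f} m/s")
```

Because it is a harmonic (travel-time) average, vs30 is dominated by the slowest shallow layers, which is why thick soft sediments pull it down so strongly.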
The widespread usage of products containing volatile organic compounds (VOCs) has led to a general human exposure to these chemicals in workplaces and homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism for VOC action on the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 has been used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line at this level of detail, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phases of the exposure model has been analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system has been successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, cell toxicity has been assessed in order to ensure that most of the concentrations used in the subsequent proteomic approach were not cytotoxic.
Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells have been detected following styrene exposure. All proteins have been identified using mass spectrometry and their main cellular functions have been assigned. Validation experiments at the protein and transcript levels confirmed the results of the 2-DE experiments. From the results, two main cellular pathways were identified as being induced by styrene: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS), as well as the styrene-mediated induction of oxidative stress marker proteins, confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene-7,8-oxide (SO) have been identified. Especially the SO adducts observed at both reactive centers of thioredoxin reductase 1, a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach has been carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells have been detected following exposure to subtoxic concentrations of CB and 1,2-DCB. All proteins have been identified using mass spectrometry and their main cellular functions have been assigned. As in the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes: cell death signaling and oxidative stress response. The strong induction of pro-apoptotic signaling has been confirmed for both treatments by detection of the cleavage of caspase 3.
Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs were investigated (Chapter 6). A similar proportion (4.6-6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Nevertheless, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Foraging in space and time
(2010)
All animals are adapted to the environmental conditions of the habitat they live in. The aim of this PhD project was to show which behavioral strategies are expressed as mechanisms to cope with the constraints that contribute to the natural selection pressure acting on individuals. For this purpose, small mammals were exposed to different levels and types of predation risk while actively foraging. Individuals were exposed either to different predator types (airborne or ground), to combinations of both, or to indirect predators (nest predators). Risk was assumed to be distributed homogeneously, so changing habitat or temporal adaptations were not regarded as potential options. The results show that wild-caught voles have strategic answers to this homogeneously distributed risk, which is perceived through tactile, olfactory or acoustic cues. Thus, they do not have to know an absolute quality (e.g., in terms of food provisioning and risk levels of all possible habitats); instead, they can adapt their behavior to the actual circumstances. Deriving uniform risk levels from cues and adjusting activity levels to the perceived risk is an option for dealing with predators of the same size or with unforeseeable attack rates. The experiments showed that, as long as there are no safe places or times, it is best to reduce activity and behave as inconspicuously as possible, provided the costs of missed opportunities do not exceed the benefits of a higher survival probability. Tests showed that these costs apparently grow faster for males than for females, especially in times of inactivity. This is supported by strong predatory pressure on the most active groups of rodents (young males, sexually active individuals or dispersers), leading to extremely female-biased operative sex ratios in natural populations. Other groups of animals, such as those with parental duties like nest guarding, have to deal with the actual risk in their habitat as well.
Strategies against indirect predation pressure were tested using bank vole mothers confronted with a nest predator (Sorex araneus) that posed no actual threat to themselves but to their young. The mothers reduced travelling and concentrated their foraging effort in the presence of shrews, independent of the differing nutritional provisioning caused by seasonally varying resource levels. Additionally, they exhibited nest-guarding strategies by not foraging in the vicinity of the nest site, thereby reducing conspicuous scent marks. The repetition of the experiment in summer and autumn showed that changing environmental constraints can have a severe impact on the results of outdoor studies; in our case, changing resource levels changed the type of interaction between the two species. The experiments show that it is important to analyze decision making and optimality models on an individual level, and, when that is not possible (for instance because of the constraints of field work), groups of animals should be classified using the least common denominator that can be identified (such as sex, age, origin or kinship). This will control for the effects of sex, stage of life history, or the individual's reproductive and nutritional status on decision making, and will narrow the wide behavioral variability associated with the complex term of optimality.
The seismically active Alborz mountains of northern Iran are an integral part of the Arabia-Eurasia collision. Linked strike-slip and thrust/reverse-fault systems in this mountain belt are characterized by slow loading rates, and large earthquakes are highly disparate in space and time. As in other intracontinental deformation zones, such a pattern of tectonic activity is still insufficiently understood, because recurrence intervals between seismic events may be on the order of thousands of years and are thus beyond the resolution of short-term measurements based on GPS or instrumentally recorded seismicity. This study bridges the gap between deformation processes on different time scales. In particular, my investigation focuses on deformation on the Quaternary time scale, beyond present-day deformation rates, and uses present-day and paleotectonic characteristics to model fault behavior. The study includes data based on structural and geomorphic mapping, fault-kinematic analysis, DEM-based morphometry, and numerical fault-interaction modeling. In order to better understand the long- to short-term behavior of such complex fault systems, I used geomorphic surfaces as strain markers and dated fluvial and alluvial surfaces using terrestrial cosmogenic nuclides (TCN: 10Be, 26Al, 36Cl) and optically stimulated luminescence (OSL). My investigation focuses on the seismically active Mosha-Fasham fault (MFF) and the seismically virtually inactive North Tehran Thrust (NTT), adjacent to the Tehran metropolitan area. Fault-kinematic data reveal mechanical linkage of the NTT and MFF during an earlier dextral transpressional stage, when the shortening direction was oriented northwest. This regime was superseded by Pliocene to Recent NE-oriented shortening, which caused thrusting and sinistral strike-slip faulting.
In the course of this kinematic changeover, the NTT and MFF were reactivated and incorporated into a nascent transpressional duplex, which has significantly affected landscape evolution in this part of the range. Two of three distinctive features which characterize topography and relief in the study area can be directly related to their location inside the duplex array and are thus linked to interaction between the eastern MFF and the NTT, and between the western MFF and the Taleghan fault, respectively. To account for inferred topography inherited from the previous dextral-transpression regime, a new concept of tectonic landscape characterization is introduced. Accordingly, I define simple landscapes as those environments which have developed under the influence of a sustained tectonic regime. In contrast, composite landscapes contain topographic elements inherited from previous tectonic conditions that are inconsistent with the regional present-day stress field and kinematic style. Using numerical fault-interaction modeling with different tectonic boundary conditions, I calculated synoptic snapshots of artificial topography for comparison with the real topographic metrics. In the Alborz mountains, E-W faults are favorably oriented to accommodate the entire range of NW- to NE-directed compression. These faults show the highest total displacement, which might indicate sustained faulting under changing boundary conditions. In contrast to the fault system within and at the flanks of the Alborz mountains, Quaternary deformation in the adjacent Tehran plain is characterized by oblique motion and thrust and strike-slip fault systems. In this morphotectonic province, fault-propagation folding along major faults, limited strike-slip motion, and en-échelon arrays of second-order upper-plate thrusts are typical.
While the Tehran plain is characterized by young deformation phenomena, the majority of faulting took place in the early stages of the Quaternary and during late Pliocene time. TCN dating, performed for the first time on geomorphic surfaces in the Tehran plain, revealed that the oldest two phases of alluviation (units A and B) must be older than late Pleistocene. While urban development in Tehran increasingly covers and obliterates the active fault traces, the present-day kinematic style, the vestiges of formerly undeformed Quaternary landforms, and paleo-earthquake indicators from the past millennia attest to the threat that these faults and their related structures pose for the megacity.
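The TCN surface-exposure dating used above rests on a simple production-decay balance: the measured nuclide concentration N approaches its saturation value P/λ, so an exposure age follows from t = -ln(1 - Nλ/P)/λ. A minimal sketch under the deliberately crude assumptions of zero erosion and zero inheritance; the sample values below are hypothetical, not from this study:

```python
import math

HALF_LIFE_BE10 = 1.387e6                 # 10Be half-life in years
LAMBDA = math.log(2) / HALF_LIFE_BE10    # decay constant (1/yr)

def exposure_age(N, P, decay=LAMBDA):
    """Simple TCN exposure age from nuclide concentration N (atoms/g)
    and local production rate P (atoms/g/yr), neglecting erosion and
    inherited nuclides. Illustrative only."""
    return -math.log(1.0 - N * decay / P) / decay

# Hypothetical alluvial-surface sample: ~10 kyr exposure
print(round(exposure_age(N=50_000, P=5.0)))
```

For young surfaces the decay correction is tiny and the age reduces to roughly N/P; for old surfaces the logarithm diverges as N approaches saturation, which is why erosion must be treated explicitly in real studies.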
Processing negative imperatives in Bulgarian : evidence from normal, aphasic and child language
(2010)
The incremental nature of sentence processing raises questions about the way the information of incoming functional elements is accessed and subsequently employed in building the syntactic structure which sustains interpretation processes. The present work approaches these questions by investigating the negative particle ne, used for sentential negation in Bulgarian, and its impact on the overt realisation and the interpretation of imperative inflexion, bound aspectual morphemes and clitic pronouns in child, adult and aphasic language. In contrast to other Slavic languages, Bulgarian negative imperatives (NI) are grammatical only with imperfective verbs. We argue that NI are instantiations of overt aspectual coercion induced by the presence of negation as a temporally sensitive sentential operator. The scope relation between imperative mood, negation, and aspect yields the configuration of the imperfective present, which in Bulgarian has to be overtly expressed and prompts the imperfective marking of the predicate. The regular and transparent application of the imperfectivising mechanism relates to the organisation of the TAM categories in Bulgarian, which not only promotes the representation of fine perspective shifts but also provides for their distinct morphological expression. Using an elicitation task with NI, we investigated the way 3- and 4-year-old children represent negation in deontic contexts, as reflected in their use of aspectually appropriate predicates. Our findings suggest that children are sensitive to the imperfectivity requirement in NI from early on. The imperfectivisation strategies reveal some differences from the target morphological realisation. The relatively low production of target imperfectivised prefixed verbs cannot be explained by morphological processing deficits, but rather indicates that up to the age of five children experience difficulties in applying a progressive viewpoint to accomplishments.
Two self-paced reading studies present evidence that neurologically unimpaired Bulgarian speakers profit from the syntactic and prosodic properties of negation during online sentence comprehension. The imperfectivity requirement that negation imposes on the predicate speeds up lexical access to imperfective verbs. Similarly, clitic pronouns are more accessible after negation due to the phono-syntactic properties of clitic clusters. As the experimental stimuli do not provide external discourse referents, personal pronouns are parsed as object agreement markers. Without subsequent resolution, personal pronouns appear to be less resource-demanding than reflexive clitics. This finding is indicative of the syntax-driven co-reference establishment processes triggered by the lexical specification of reflexive clitics. The results obtained from Bulgarian Broca's aphasics show that they exhibit processing patterns similar to those of the control group. Notwithstanding their slow processing speed, the agrammatic group showed no impairment of negation, as reflected by their sensitivity to the aspectual requirements of NI and to the prosodic constraints on clitic placement. The aphasics were able to parse the structural dependency between mood, negation and aspect as functional categories and to represent it morphologically. The prolonged reaction times (RT) elicited by prefixed verbs indicate increased processing costs due to the semantic integration of prefixes as perfectivity markers into an overall imperfective construal. This inference is supported by the slower RTs to reflexive clitics, which undergo a structurally triggered resolution. Evaluated against cross-linguistic findings, the obtained results strongly suggest that aphasic performance with pronouns depends on the interpretation effort associated with co-reference establishment and varies with the availability of discourse referents.
The investigation of normal and agrammatic processing of Bulgarian NI presents support for the hypothesis that the comprehension deficits in Broca's aphasia result from a slowed-down implementation of syntactic operations. The protracted structure building consumes processing resources and causes temporal mismatches with other processes sustaining sentence comprehension. The investigation of the way Bulgarian children and aphasic speakers process NI reveals that both groups are highly sensitive to the imperfective constraint on the aspectual construal imposed by the presence of negation. The imperfective interpretation requires access to morphologically complex verb forms which contain aspectual morphemes with conflicting semantic information – perfective prefixes and imperfective suffixes. Across modalities, both populations exhibit difficulties in processing prefixed imperfectivised verbs which as predicates of negative imperative sentences reflect the inner perspective the speaker and the addressee need to take towards a potentially bounded situation description.
This work describes new concepts for fast switching elements based on principles of photonics. These elements are built on waveguides operating in the visible and infrared ranges, and transparent polymers doped with dye molecules possessing second-order nonlinear optical properties are proposed as the waveguide materials. The work shows how nonlinear optical processes in such structures can be driven by electro-optical and opto-optical control signals. The complete fabrication cycle for several types of integrated photonic elements is considered. A theoretical analysis of high-intensity beam propagation in media with second-order optical nonlinearity is performed, and quantitative estimates of the conditions required for second-order nonlinear optical phenomena to occur are made, taking into account the properties of the materials used. The thesis describes the various stages of manufacturing the basic structure of integrated photonics: the planar waveguide. Using the finite element method, the structure of the electromagnetic field inside the waveguide was analysed for different modes. A separate part of the work deals with the creation of composite organic materials with high optical nonlinearity. Using methods of quantum chemistry, the dependence of the nonlinear properties of dye molecules on their structure was investigated in detail. In addition, various methods of inducing optical nonlinearity in dye-doped polymer films are discussed. For the first time, spatial modulation of the waveguide's nonlinear properties according to a Fibonacci sequence is proposed; this allows several different nonlinear optical processes to be exploited simultaneously. The final part of the work describes various designs of integrated optical modulators and switches constructed from organic nonlinear optical waveguides.
A practical design of an optical modulator based on a Mach-Zehnder interferometer, fabricated by photolithography on a polymer film, is presented.
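The Mach-Zehnder geometry switches light by electro-optically detuning the phase of one interferometer arm: a drive voltage v induces a phase shift delta_phi = pi*v/v_pi, and the recombined arms interfere as cos^2(delta_phi/2). A minimal sketch of the ideal, lossless transfer function; the half-wave voltage v_pi here is an illustrative parameter, not a device value from this work:

```python
import math

def mzm_transmission(v, v_pi=1.0):
    """Intensity transmission of an ideal, lossless Mach-Zehnder
    modulator. The electro-optic phase shift in one arm is
    delta_phi = pi * v / v_pi; the two arms interfere as cos^2."""
    delta_phi = math.pi * v / v_pi
    return math.cos(delta_phi / 2.0) ** 2

print(mzm_transmission(0.0))   # constructive interference: fully 'on'
print(mzm_transmission(1.0))   # v = v_pi: destructive interference, 'off'
```

Driving the arm between 0 and v_pi thus toggles the switch; biasing at v_pi/2 (the quadrature point, 50% transmission) gives the most linear analog modulation.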
Ghrelin is a unique hunger-inducing stomach-borne hormone. It activates orexigenic circuits in the central nervous system (CNS) when acylated with a fatty acid residue by the ghrelin O-acyltransferase (GOAT). Soon after the discovery of ghrelin, a theoretical model emerged which suggests that the gastric peptide ghrelin is the first “meal initiation molecule”.
This thesis presents methods for the automated synthesis of flexible chip multiprocessor systems from parallel programs targeted at FPGAs, exploiting both task-level parallelism and architecture customization. Automated synthesis is necessitated by the complexity of the design space. A detailed description of the design space is provided in order to determine which parameters should be modeled to facilitate automated synthesis by optimizing a cost function; the emphasis is placed on inclusive modeling of parameters from the application, architectural and physical subspaces, as well as their joint coverage, in order to avoid pre-constraining the design space. Given a parallel program and an IP library, the automated synthesis problem is to simultaneously (i) select processors, (ii) map and schedule tasks onto them, and (iii) select one or several networks for inter-task communication, such that design constraints and optimization objectives are met. The research objective of this thesis is to find a suitable model for automated synthesis and to evaluate methods of using the model for architectural optimizations. Our contributions are a holistic approach to the design of such systems, corresponding models that facilitate automated synthesis, an evaluation of optimization methods using state-of-the-art integer linear programming and answer set programming, and the development of synthesis heuristics to address runtime challenges.
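The simultaneous processor-selection and task-mapping problem described above can be illustrated at toy scale by exhaustive enumeration; real instances require the ILP/ASP solvers or heuristics the thesis develops. All task and processor figures below are hypothetical:

```python
from itertools import product

# Hypothetical per-processor task execution times (cycles).
# Rows: tasks; columns: candidate processors from an IP library.
exec_time = [
    [4, 2],   # task 0 runs faster on processor 1
    [3, 6],   # task 1 runs faster on processor 0
    [5, 5],   # task 2 is indifferent
]

def best_mapping(exec_time, n_procs):
    """Enumerate every task-to-processor mapping and return the one
    minimizing the makespan (maximum processor load). Feasible only
    for tiny instances: the search space grows as n_procs**n_tasks."""
    n_tasks = len(exec_time)
    best = None
    for mapping in product(range(n_procs), repeat=n_tasks):
        load = [0] * n_procs
        for task, proc in enumerate(mapping):
            load[proc] += exec_time[task][proc]
        makespan = max(load)
        if best is None or makespan < best[0]:
            best = (makespan, mapping)
    return best

print(best_mapping(exec_time, 2))
```

Even this toy shows why the subspaces must be modeled jointly: the best mapping here assigns each task against its locally fastest processor for task 2, balancing load rather than greedily minimizing each task's own time.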
Indonesia is one of the countries most prone to natural hazards. The complex interaction of several tectonic plates with high relative velocities leads to approximately two earthquakes with magnitude Mw > 7 every year, more than 15% of such events worldwide. Earthquakes with magnitude above 9 happen far less frequently, but with catastrophic effects. The most severe consequences arise from tsunamis triggered by these subduction-related earthquakes, as the Sumatra-Andaman event of 2004 showed. In order to enable efficient tsunami early warning, which includes the estimation of wave heights and arrival times, it is necessary to combine different types of real-time sensor data with numerical models of earthquake sources and tsunami propagation. This thesis was created within the GITEWS project (German Indonesian Tsunami Early Warning System) and is based on five research papers and manuscripts. The main project-related task was the development of a database containing realistic earthquake scenarios for the Sunda Arc. This database provides initial conditions for the tsunami propagation modeling used by the simulation system at the early warning center. An accurate discretization of the subduction geometry, consisting of 25x150 subfaults, was constructed based on seismic data. Green’s functions, representing the deformational response to unit dip- and strike-slip at the subfaults, were computed using a layered half-space approach. Different scaling relations for earthquake dimensions and slip distribution were implemented. Another project-related task was the further development of the ‘GPS-shield’ concept: a constellation of near-field GPS receivers, which are shown to be very valuable for tsunami early warning. The major part of this thesis is related to the geophysical interpretation of GPS data. Coseismic surface displacements caused by the 2004 Sumatra earthquake are inverted for slip at the fault.
The effect of different Earth layering models is tested, favoring a continental structure. The possibility of splay faulting is considered and shown to be a second-order effect with respect to tsunamigenicity for this event. Tsunami models based on the source inversions are compared to satellite radar altimetry observations. Postseismic GPS time series are used to test a wide parameter range of uni- and biviscous rheological models of the asthenosphere. Steady-state Maxwell rheology is shown to be incompatible with near-field GPS data unless large afterslip, amounting to more than 10% of the coseismic moment, is assumed. In contrast, transient Burgers rheology is in agreement with the data without the need for large aseismic afterslip. Comparison with postseismic geoid observations by the GRACE satellites reveals that even with afterslip, the model implementing Maxwell rheology yields amplitudes that are too small, and thus supports a biviscous asthenosphere. A simple approach based on the assumption of quasi-static deformation propagation is introduced and proposed for the inversion of coseismic near-field GPS time series. Application of this approach to observations from the 2004 Sumatra event fails to quantitatively reconstruct the rupture propagation, since the a priori conditions are not fulfilled in this case. However, synthetic tests reveal the feasibility of such an approach for fast estimation of rupture properties.
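In its simplest linear form, the coseismic slip inversion described above is a least-squares problem d = G m, linking slip on subfaults (m) to surface displacements at GPS stations (d) via precomputed Green's functions (G). A toy, unregularized sketch with hypothetical values, far smaller than the 25x150-subfault discretization used in the thesis:

```python
def lstsq_slip(G, d):
    """Solve min ||G m - d||^2 via the normal equations G^T G m = G^T d
    for two model parameters (2x2 system solved directly). Real
    inversions are much larger and require regularization."""
    a = sum(g[0] * g[0] for g in G)
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G)
    p = sum(g[0] * di for g, di in zip(G, d))
    q = sum(g[1] * di for g, di in zip(G, d))
    det = a * c - b * b
    return [(c * p - b * q) / det, (a * q - b * p) / det]

# Hypothetical Green's functions: displacement at 3 GPS stations
# per unit slip on each of 2 subfaults.
G = [[0.5, 0.1],
     [0.2, 0.4],
     [0.1, 0.3]]
true_slip = [2.0, 1.0]
d = [sum(gi * si for gi, si in zip(row, true_slip)) for row in G]  # noise-free data

print(lstsq_slip(G, d))  # recovers the slip that generated d
```

With noise-free synthetic data the true slip is recovered exactly; with real GPS noise and many subfaults, smoothness constraints and moment scaling enter the problem.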
Lake ecosystems across the globe have responded to climate warming of recent decades. However, correctly attributing observed changes to altered climatic conditions is complicated by multiple anthropogenic influences on lakes. This thesis contributes to a better understanding of climate impacts on freshwater phytoplankton, which forms the basis of the food chain and decisively influences water quality. The analyses were, for the most part, based on a long-term data set of physical, chemical and biological variables of a shallow, polymictic lake in north-eastern Germany (Müggelsee), which was subject to a simultaneous change in climate and trophic state during the past three decades. Data analysis included constructing a dynamic simulation model, implementing a genetic algorithm to parameterize models, and applying statistical techniques of classification tree and time-series analysis. Model results indicated that climatic factors and trophic state interactively determine the timing of the phytoplankton spring bloom (phenology) in shallow lakes. Under equally mild spring conditions, the phytoplankton spring bloom collapsed earlier under high than under low nutrient availability, due to a switch from a bottom-up driven to a top-down driven collapse. A novel approach to model phenology proved useful to assess the timings of population peaks in an artificially forced zooplankton-phytoplankton system. Mimicking climate warming by lengthening the growing period advanced algal blooms and consequently also peaks in zooplankton abundance. Investigating the reasons for the contrasting development of cyanobacteria during two recent summer heat wave events revealed that anomalously hot weather did not always, as often hypothesized, promote cyanobacteria in the nutrient-rich lake studied. The seasonal timing and duration of heat waves determined whether critical thresholds of thermal stratification, decisive for cyanobacterial bloom formation, were crossed. 
In addition, the temporal patterns of heat wave events influenced the summer abundance of some zooplankton species, which as predators may serve as a buffer by suppressing phytoplankton bloom formation. This thesis adds to the growing body of evidence that lake ecosystems have strongly responded to climatic changes of recent decades. It reaches beyond many previous studies of climate impacts on lakes by focusing on underlying mechanisms and explicitly considering multiple environmental changes. Key findings show that climate impacts are more severe in nutrient-rich than in nutrient-poor lakes. Hence, to develop lake management plans for the future, limnologists need to seek a comprehensive, mechanistic understanding of overlapping effects of the multi-faceted human footprint on aquatic ecosystems.
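The genetic-algorithm model parameterization mentioned above can be sketched in miniature: an elitist GA searching a single bounded model parameter against a toy objective. This is an illustrative skeleton only, not the configuration used in the thesis:

```python
import random

def evolve(fitness, lo, hi, pop_size=30, generations=100, sigma=0.3, seed=1):
    """Minimal elitist genetic algorithm for calibrating one bounded
    model parameter: rank selection, Gaussian mutation, no crossover.
    Because the elite always survive, the best fitness never worsens."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    n_elite = max(1, pop_size // 5)
    for _ in range(generations):
        pop.sort(key=fitness)               # lower fitness = better
        elite = pop[:n_elite]
        children = [
            min(hi, max(lo, rng.choice(elite) + rng.gauss(0.0, sigma)))
            for _ in range(pop_size - n_elite)
        ]
        pop = elite + children
    return min(pop, key=fitness)

# Toy calibration: recover a hypothetical 'observed' optimum at 3.7,
# standing in for a model-vs-data error surface.
best = evolve(lambda x: (x - 3.7) ** 2, 0.0, 10.0)
print(best)
```

A real calibration replaces the quadratic with the misfit between simulated and observed lake variables, and searches many parameters at once; the selection-mutation loop stays the same.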
This thesis contains quantum chemical models and force field calculations for the RuBisCO isotope effect, the spectral characteristics of the blue-light sensor BLUF and the light harvesting complex II (LHCII). The work focuses on the influence of the environment on the corresponding systems. For RuBisCO, it was found that the isotope effect is almost unaffected by the environment. In the case of the BLUF domain, an amino acid (Ser41) was found to be important for the UV/Vis spectrum but had been unaccounted for in experiments so far. The residue was shown to be highly mobile and to have a systematic influence on the spectral shift of the BLUF domain chromophore (flavin). Finally, for LHCII it was found that small changes in the geometry of a chlorophyll b/violaxanthin chromophore pair can strongly influence the light harvesting mechanism. Here especially, a proper description of the environment proved critical. In conclusion, the environment was observed to be of often unexpected importance for molecular properties, and it does not seem possible to give a reliable a priori estimate of the changes created by the presence of the environment.
Flood design necessitates discharge estimates for large recurrence intervals. In a flood frequency analysis, however, the uncertainty of discharge estimates increases with higher recurrence intervals, particularly due to the small number of available flood data. Furthermore, traditional distribution functions increase without bound, taking no account of an upper-bound discharge. Hence, additional information representative of high recurrence intervals needs to be considered. Envelope curves, which bound the maximum observed discharges of a region, are an adequate regionalisation method for providing additional spatial information on the upper tail of a distribution function. Probabilistic regional envelope curves (PRECs) are an extension of the traditional empirical envelope curve approach, in which a recurrence interval is estimated for a regional envelope curve (REC). The REC is constructed for a homogeneous pooling group of sites, and the estimation of its recurrence interval is based on the effective sample years of data, considering the intersite dependence among all sites of the pooling group. The core idea of this thesis was to improve discharge estimates for high recurrence intervals by integrating empirical and probabilistic regional envelope curves into the flood frequency analysis. To this end, the method of probabilistic regional envelope curves was investigated in detail. Several pooling groups were derived by modifying candidate sets of catchment descriptors and the settings of two different pooling methods; these were used to construct PRECs. A sensitivity analysis shows the variability of discharges and recurrence intervals for a given site under the different assumptions. The unit flood of record, which governs the intercept of a PREC, was identified as the most influential aspect. By separating the catchments into nested and unnested pairs, the calculation algorithm for the effective sample years of data was refined.
In this way, the estimation of the recurrence intervals was improved, and therefore the use of different parameter sets for nested and unnested pairs of catchments is recommended. In the second part of this thesis, PRECs were introduced into a distribution function. Whereas in the traditional approach only discharge values are used, PRECs provide a discharge and its corresponding recurrence interval. Hence, a novel approach was developed, which allows a combination of the PREC results with the traditional systematic flood series while taking the PREC recurrence interval into consideration. An adequate mixed bounded distribution function was presented, which in addition to the PREC results also uses an upper bound discharge derived by an empirical envelope curve. By doing so, two types of additional information which are representative for the upper tail of a distribution function were included in the flood frequency analysis. The integration of both types of additional information leads to an improved discharge estimation for recurrence intervals between 100 and 1000 years.
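For contrast with the mixed bounded distribution advocated above, a conventional unbounded fit can be sketched: a method-of-moments Gumbel fit to an annual-maximum series, whose quantiles grow without bound as the recurrence interval increases. All discharge values below are hypothetical:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649

def gumbel_quantile(annual_maxima, T):
    """Method-of-moments Gumbel fit to an annual-maximum flood series,
    returning the T-year discharge Q_T = u - alpha*ln(-ln(1 - 1/T)).
    A deliberately simple sketch: its upper tail is unbounded, which is
    exactly what the mixed bounded distribution is designed to avoid."""
    m = statistics.mean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    alpha = math.sqrt(6) * s / math.pi   # scale parameter
    u = m - EULER_GAMMA * alpha          # location parameter (mode)
    return u - alpha * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual peak discharges (m^3/s)
floods = [120, 95, 180, 140, 110, 210, 130, 160, 100, 150]
for T in (10, 100, 1000):
    print(T, round(gumbel_quantile(floods, T), 1))
```

The 1000-year quantile extrapolated from ten years of data illustrates the problem the thesis addresses: without the additional information carried by PRECs and an upper-bound discharge, the tail estimate rests entirely on the fitted distribution's shape.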
Russian Jews who left the Former Soviet Union (FSU) and its successor states after 1989 are considered one of the best-qualified migrant groups worldwide. In the preferred countries of destination (Israel, the United States and Germany) they are well known for cultural self-assertion, strong upward social mobility and manifold forms of self-organisation and empowerment. Applying Suzanne Keller's sociological model of “strategic elites”, it quickly becomes clear that a large share of the Russian Jewish immigrants in Germany and Israel belong to various elites by virtue of their qualifications and former high positions in the FSU – first of all professional, cultural and intellectual elites (“Intelligentsija”). The study aimed to find out to what extent developments of cultural self-assertion, of local and transnational networking, and of ethno-cultural empowerment are supported or even initiated by the immigrated (Russian Jewish) elites. The empirical basis of this study was 35 semi-structured expert interviews with Russian Jews in both countries (Israel and Germany), most of them scholars, artists, writers, journalists/publicists, teachers, engineers, social workers, students and politicians. The qualitative analysis of the interview material from Israel and Germany revealed many commonalities but also significant differences. Almost all of the interview partners remained linked to Russian-speaking networks and communities, irrespective of their success (or failure) in integrating into the host societies. Many of them showed self-confidence with regard to the group's remarkable professional resources (70% of the adults hold an academic degree), and the cultural, professional and political potential of the FSU immigrants was usually considered equal to that of the host population(s). Accordingly, the immigrants' interest in direct societal participation and social acceptance was high. Assimilation was not an option.
For the Russian Jewish sense of community in Israel and Germany, the Russian language, arts and general Russian culture have remained of key importance. The immigrants perceive no insuperable contradiction in feeling “Russian” in cultural terms, “Jewish” in ethnic terms and “Israeli”/“German” in national terms: a typical case of additive identity formation, which is also characteristic of the elites among these immigrants. Tendencies of ethno-cultural self-organisation, which do not necessarily hinder impressive individual careers in the new surroundings, are more noticeable in Israel. Thus, part of the Russian Jewish elites has responded to social exclusion, discrimination or blocking by the local population (and by local elites) with intense efforts to build (Russian Jewish) associations, media, educational institutions and even political parties. All in all, the results of this study strongly contradict popular stereotypes of the Russian Jewish immigrant as a pragmatic, passive “Homo Sovieticus”. Among the interview partners in this study, civil-societal commitment was not the exception but the rule. The activities of the early, legendary Russian „Intelligentsija“ were marked by smooth transitions between arts, education and societal/political commitment, and there seem to be certain continuities of this self-conception in some of the Russian Jewish groups in Israel. Nothing comparable, however, could be drawn from the interviews with the immigrants in Germany; the myth and self-conception of the Russian “Intelligentsija” is thus irrelevant for collective discourses among Russian Jews in Germany.
Nanofibrous mats are interesting scaffold materials for biomedical applications such as tissue engineering, owing to their interconnectivity and their size dimensions, which mimic the native cell environment. Electrospinning provides a simple route to such fiber meshes. This thesis addresses the structural and functional control of electrospun fiber mats. In the first section, it is shown that fiber meshes with a bimodal size distribution could be obtained in a single-step electrospinning process. A standard single-syringe set-up was used to spin concentrated poly(ε-caprolactone) (PCL) and poly(lactic-co-glycolic acid) (PLGA) solutions in chloroform, and meshes with a bimodal fiber size distribution could be obtained directly by reducing the spinning rate at elevated humidity. Scanning electron microscopy (SEM) and mercury porosimetry of the meshes suggested a pore size distribution suitable for effective cell infiltration. The bimodal fiber meshes, together with unimodal fiber meshes, were evaluated for cellular infiltration. While the micrometer fibers in the mixed meshes generate an open pore structure, the submicrometer fibers support cell adhesion and facilitate cell bridging across the large pores. This was revealed by initial cell penetration studies, showing superior ingrowth of epithelial cells into the bimodal meshes compared to a mesh composed of unimodal 1.5 μm fibers. The bimodal fiber meshes, together with electrospun nano- and microfiber meshes, were further used for the fabrication of inorganic/organic hybrids of PCL with calcium carbonate or calcium phosphate, two biorelevant minerals. Such composite structures are attractive for the potential improvement of properties such as stiffness or bioactivity. Using three different mineralization methods, including the use of poly(acrylic acid), it was possible to encapsulate nano- and mixed-sized plasma-treated PCL meshes with calcium carbonate over areas > 1 mm².
The additive proved useful in stabilizing amorphous calcium carbonate so that it effectively filled the space between the electrospun fibers, resulting in composite structures. Micro-, nano- and mixed-size fiber meshes were successfully coated within hours by fiber-directed crystallization of calcium phosphate using a ten-times-concentrated simulated body fluid. It was shown that nanofibers accelerated the calcium phosphate crystallization compared to microfibers. In addition, crystallizations performed under static conditions led to hydroxyapatite formation, whereas under dynamic conditions brushite coexisted. In the second section, nanofiber functionalization strategies are investigated. First, a one-step process was introduced in which a peptide-polymer conjugate (PLLA-b-CGGRGDS) was co-spun with PLGA in such a way that the peptide is enriched at the surface. It was shown that, by adding methanol to the chloroform/blend solution, a dramatic increase of the peptide concentration at the fiber surface could be achieved, as determined by X-ray photoelectron spectroscopy (XPS). Peptide accessibility was demonstrated via a contact-angle comparison of pure PLGA and RGD-functionalized fiber meshes. In addition, the electrostatic attraction between an RGD-functionalized fiber and a silica bead at pH ~ 4 confirmed the accessibility of the peptide. The bioactivity of these RGD-functionalized fiber meshes was demonstrated using blends containing 18 wt% bioconjugate. These meshes promoted the adhesion of fibroblasts compared to pure PLGA meshes. In a second functionalization approach, a modular strategy was investigated: reactive fiber meshes were fabricated in a single step and then functionalized with bioactive molecules. While the electrospinning of the pure reactive polymer poly(pentafluorophenyl methacrylate) (PPFPMA) was feasible, the inherent brittleness of PPFPMA required blending with PCL. Blends and pure PPFPMA showed two-step functionalization kinetics.
An initial fast reaction of the pentafluorophenyl esters with aminoethanol as a model substance was followed by a slow conversion upon further hydrophilization. This was analysed by UV/Vis spectroscopy of the pentafluorophenol released upon nucleophilic substitution with the amines. The conversion was confirmed by the increased hydrophilicity of the resulting meshes. The PCL/PPFPMA fiber meshes were then used for functionalization with more complex molecules such as saccharides. Amino-functionalized D-mannose or D-galactose was reacted with the active pentafluorophenyl esters, as monitored by UV/Vis spectroscopy and XPS. The functionality was shown to be bioactive in macrophage cell culture. The meshes functionalized with D-mannose specifically stimulated the cytokine production of macrophages when lipopolysaccharides were added, in contrast to D-galactose- or aminoethanol-functionalized and unfunctionalized PCL/PPFPMA fiber mats.
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic liquid crystal (LC). The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film. They are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Due to the anchoring effect of the LC molecules, this relaxation process is slowed down considerably, when compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LCs is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it was shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful e.g. in sensing and visualization applications. 
Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface which affects the electro-optical effect of the PDLC films and prevents a larger increase in optical transmission.
Within our research group Bayesian Risk Solutions, we have coined the idea of Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov chain Monte Carlo (MCMC) simulation algorithms. They provide a full picture of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. In addition, we calculate derived risk measures (ex-ante/ex-post value-at-risk, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept, a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools that support experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for arbitrariness. Therefore, we introduce the idea of a Bayesian due diligence that judges expert assessments according to their information content and their inter-subjectivity.
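The MCMC machinery behind such calibrations can be illustrated with a minimal random-walk Metropolis sampler. This is a hypothetical toy example (estimating a single location parameter under a flat prior), not one of the twelve process calibrations from the dissertation:

```python
import math
import random

def log_likelihood(mu, data, sigma=1.0):
    # Gaussian log-likelihood of the data, up to an additive constant
    return -sum((x - mu) ** 2 for x in data) / (2.0 * sigma ** 2)

def metropolis(data, n_iter=6000, burn_in=1000, step=0.1, seed=1):
    """Random-walk Metropolis sampler for a location parameter mu
    under a flat prior; returns the post-burn-in chain."""
    rng = random.Random(seed)
    mu = 0.0
    samples = []
    for i in range(n_iter):
        proposal = mu + rng.gauss(0.0, step)
        # accept with probability min(1, posterior ratio), in log space
        if math.log(rng.random()) < (log_likelihood(proposal, data)
                                     - log_likelihood(mu, data)):
            mu = proposal
        if i >= burn_in:
            samples.append(mu)
    return samples
```

The retained chain approximates the full posterior of mu, so besides a point estimate one directly obtains credible intervals, i.e. the data-induced uncertainty emphasized above.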
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectric materials. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with a view to ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature cannot, however, be much higher than 60 °C. Recently developed polyethylene terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better thermal stability of d33, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as the foaming agent due to its good solubility in several polymers. Poly(ethylene naphthalate) (PEN) is a polyester with slightly better properties than PET. A "voiding + inflation + stretching" process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (gas-diffusion expansion, GDE) is applied in order to adjust the void dimensions.
Additional biaxial stretching decreases the void heights, since lens-shaped voids are known to lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1−12 MPa, anti-resonance frequencies of 0.2−0.8 MHz, and electromechanical coupling factors of 0.016−0.069. As expected, it is found that PEN foams show better thermal stability than PP and PET foams. Samples charged at room temperature can be utilized up to 80−100 °C. Annealing after charging, or charging at elevated temperatures, may improve the thermal stability. Samples charged at suitably elevated temperatures show working temperatures as high as 110−120 °C. Acoustic measurements at frequencies of 2 Hz−20 kHz show that PEN foams are well suited for applications in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding. Piezoelectric d33 coefficients up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable over long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e.
on the size, shape, and distribution of the voids. On the other hand, the controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-defined uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stability than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110−120 °C. Acoustic measurements at frequencies of 200 Hz−20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
Fire-prone Mediterranean-type vegetation systems like those in the Mediterranean Basin and South-Western Australia are global hotspots of plant species diversity. To ensure that management programs act to maintain these highly diverse plant communities, it is necessary to gain a profound understanding of the crucial mechanisms of coexistence. Several mechanisms are discussed in the current literature. The objective of my thesis is to systematically explore, by modelling, the importance of potential mechanisms for maintaining multi-species, fire-prone vegetation. The model I developed is spatially explicit, stochastic, rule- and individual-based. It is parameterised on data of population dynamics collected over 18 years in the Mediterranean-type shrublands of Eneabba, Western Australia. From the 156 woody species of the area, seven plant traits were identified as relevant for this study: regeneration mode, annual maximum seed production, seed size, maximum crown diameter, drought tolerance, dispersal mode and seed bank type. Trait sets are used for the definition of plant functional types (PFTs). The PFT dynamics are simulated annually by iterating life-history processes. In the first part of my thesis I investigate the importance of trade-offs for the maintenance of high diversity in multi-species systems with 288 virtual PFTs. Simulation results show that the trade-off concept can be helpful for identifying non-viable combinations of plant traits. However, the Shannon diversity index of modelled communities can be high despite the presence of ‘supertypes’. I conclude that trade-offs between two traits are less important for explaining multi-species coexistence and high diversity than more conceptual models predict. Several studies show that seed immigration from the regional seed pool is essential for maintaining local species diversity. However, systematic studies on the composition of the seed rain reaching multi-species communities are missing.
The results of the simulation experiments, presented in part two of this thesis, show clearly that without seed immigration the local species community found in Eneabba drifts towards a state with few coexisting PFTs. With increasing immigration rates, the number of simulated coexisting PFTs and the Shannon diversity quickly approach the values observed in the field. Including the regional seed input in the model helps to explain more aggregated measures of the local plant community structure such as species richness and diversity. Hence, the seed rain composition should be implemented in future studies. In the third part of my thesis I test the sensitivity of the Eneabba PFTs to four different climate-change scenarios, considering their impact on both local and regional processes. The results show that climate change clearly has the potential to alter the number of dispersed seeds for most of the Eneabba PFTs and therefore the source of the ‘immigrants’ at the community level. A classification-tree analysis shows that, in general, the response to climate change is PFT-specific. In the Eneabba sand plains, the sensitivity of a PFT to climate change depends on its specific trait combination and on the scenario of environmental change, i.e. on the development of rainfall amount and fire frequency. This result emphasizes that PFT-specific responses and the regional process of seed immigration should not be ignored in studies dealing with the impact of climate change on future species distributions. The results of the three chapters are finally analysed in a general discussion. The model is discussed, and improvements and suggestions are made for future research. My work leads to the following conclusions: i) It is necessary to support modelling with empirical work to explain coexistence in species-rich plant communities. ii) The chosen modelling approach allows the complexity of coexistence to be considered and improves the understanding of coexistence mechanisms.
iii) Assumptions based on field research, in terms of environmental conditions and plant life histories, can put the importance of more hypothetical coexistence theories in species-rich systems into perspective. In consequence, trade-offs can play a smaller role than conceptual models predict. iv) Seed immigration is a key process for local coexistence. Its alteration under climate change should be considered in prognoses of coexistence. Field studies should be carried out to obtain data on seed rain composition.
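The role of seed immigration in maintaining local richness (conclusion iv) can be illustrated with a deliberately simplified neutral-drift sketch; this is a hypothetical toy model with arbitrary parameter values, not the rule- and individual-based Eneabba model itself:

```python
import random

def simulate_richness(n_individuals=200, n_species=20, steps=20000,
                      immigration_rate=0.0, seed=42):
    """Toy neutral community: each step one individual dies and is replaced
    either by an immigrant from the regional pool (with probability
    immigration_rate) or by the offspring of a random local individual."""
    rng = random.Random(seed)
    community = [i % n_species for i in range(n_individuals)]
    for _ in range(steps):
        site = rng.randrange(n_individuals)
        if rng.random() < immigration_rate:
            community[site] = rng.randrange(n_species)   # seed immigration
        else:
            community[site] = community[rng.randrange(n_individuals)]
    return len(set(community))                           # species richness
```

Without immigration the community drifts towards a few dominant types, while even modest immigration from the regional pool keeps richness close to the pool size, mirroring the qualitative behaviour of the simulated Eneabba communities.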
Recent large earthquakes have highlighted the need to improve and develop robust and rapid procedures to properly calculate the magnitude of an earthquake within a short time after its occurrence. The most famous example is the 26 December 2004 Sumatra earthquake, for which the standard procedures adopted at that time by many agencies failed to provide accurate magnitude estimates of this exceptional event in time to launch early warnings and an appropriate response. Being related to the radiated seismic energy ES, the energy magnitude ME is a good estimator of the high-frequency content radiated by the source into the seismic waves. However, a procedure to determine ME rapidly (that is, within 15 minutes of the earthquake occurrence) was required. Here, a procedure is presented that rapidly provides the energy magnitude ME for shallow earthquakes by analyzing teleseismic P-waves in the distance range 20°–98°. To account for the energy loss experienced by the seismic waves from the source to the receivers, spectral amplitude decay functions obtained from numerical simulations of Green's functions based on the average global model AK135Q are used. The proposed method has been tested using a large global dataset (~1000 earthquakes), and the rapid ME estimates obtained have been compared to other magnitude scales from different agencies. Special emphasis is given to the comparison with the moment magnitude MW, since the latter is very popular and extensively used in common seismological practice. However, it is shown that MW alone provides only limited information about the seismic source properties, and that disaster management organizations would benefit from a combined use of MW and ME in the prompt evaluation of an earthquake’s tsunami and shaking potential.
In addition, since the proposed approach for ME is intended to work without knowledge of the fault-plane geometry (often available only hours after an earthquake occurs), the suitability of this method is discussed by grouping the analyzed earthquakes according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). No clear trend is found in the rapid ME estimates across the different fault-plane-solution groups. This is not the case for the ME routinely determined by the U.S. Geological Survey, which uses specific radiation-pattern corrections. Further studies are needed to verify the effect of such corrections on ME estimates. Finally, exploiting the redundancy of the information provided by the analyzed dataset, the components of variance of the single-station ME estimates are investigated. The largest component of variance is due to the intra-station (record-to-record) error, although the inter-station (station-to-station) error is not negligible and amounts to several magnitude units for some stations. Moreover, it is shown that the intra-station component of error is not random but depends on the travel path from a source area to a given station. Consequently, empirical corrections may be used to account for the heterogeneities of the real Earth that are not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
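For context, energy magnitudes of this kind are commonly defined from the radiated seismic energy ES via the relation ME = 2/3 (log10 ES − 4.4), with ES in joules; this is the widely used standard formula and the exact calibration applied in the thesis may differ:

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude from the radiated seismic energy ES (in joules),
    using the standard relation ME = 2/3 * (log10 ES - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)
```

For example, energy_magnitude(1e15) gives ME ≈ 7.07; reporting MW and ME together then flags events whose radiated energy deviates from what MW alone would suggest, which is the combined use advocated above.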
Temporal gravimeter observations, used in geodesy and geophysics to study variations of the Earth’s gravity field, are influenced by local water storage changes (WSC), which – from this perspective – add noise to the gravimeter records. At the same time, the part of the gravity signal caused by WSC may provide substantial information for hydrologists. Water storage is the fundamental state variable of hydrological systems, but comprehensive data on total WSC are practically inaccessible, and their quantification is associated with a high level of uncertainty at the field scale. This study investigates the relationship between temporal gravity measurements and WSC for the superconducting gravimeter (SG) of the Geodetic Observatory Wettzell, Germany, in order to reduce the hydrological interfering signal in temporal gravity measurements and to explore the value of temporal gravity measurements for hydrology. A 4D forward model with a spatially nested discretization domain was developed to simulate and calculate the local hydrological effect on the temporal gravity observations. An intensive measurement system was installed at the Geodetic Observatory Wettzell, and WSC were measured in all relevant storage components, namely groundwater, saprolite, soil, top soil and snow storage. The monitoring system also comprised a suction-controlled, weighable, monolith-filled lysimeter, allowing a first-ever comparison of a lysimeter and a gravimeter. Lysimeter data were used to estimate WSC at the field scale in combination with complementary observations and a hydrological 1D model. Total local WSC were derived, uncertainties were assessed, and the hydrological gravity response was calculated from the WSC. A simple conceptual hydrological model was calibrated and evaluated against records of the superconducting gravimeter, soil moisture and groundwater time series.
The model was evaluated by a split-sample test and validated against independently estimated WSC from the lysimeter-based approach. A simulation of the hydrological gravity effect showed that WSC of one meter height along the topography caused a gravity response of 52 µGal, whereas, on flat terrain as generally assumed in geodesy, the same water mass variation causes a gravity change of only 42 µGal (Bouguer approximation). The radius of influence of local water storage variations can be limited to 1000 m, and 50 % to 80 % of the local hydrological gravity signal is generated within a radius of 50 m around the gravimeter. At the Geodetic Observatory Wettzell, WSC in the snow pack, top soil, unsaturated saprolite and fractured aquifer are all important terms of the local water budget. With the exception of snow, all storage components have gravity responses of the same order of magnitude and are therefore relevant for gravity observations. The comparison of the total hydrological gravity response to the gravity residuals obtained from the SG showed similarities in both short-term and seasonal dynamics. However, the results demonstrated the limitations of estimating total local WSC using hydrological point measurements. The results of the lysimeter-based approach showed that gravity residuals are caused to a larger extent by local WSC than previously estimated. A comparison of the results with other methods used in the past to correct temporal gravity observations for the local hydrological influence showed that the lysimeter measurements improved the independent estimation of WSC significantly and thus provided a better way of estimating the local hydrological gravity effect. In the context of hydrological noise reduction, at sites where temporal gravity observations are used for geophysical studies beyond local hydrology, the installation of a lysimeter in combination with complementary hydrological measurements is recommended.
From the hydrological viewpoint, using gravimeter data as a calibration constraint improved the model results in comparison to hydrological point measurements. Thanks to their capacity to integrate over different storage components and a larger area, gravimeters provide generalized information on total WSC at the field scale. Due to their integrative nature, gravity data must be interpreted with great care in hydrological studies. However, gravimeters can serve as a novel measurement instrument for hydrology, and the application of gravimeters especially designed to address open research questions in hydrology is recommended.
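The 42 µGal flat-terrain figure quoted above can be checked against the Bouguer slab approximation, Δg = 2πGρh, as a short sketch:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # density of water, kg/m^3

def bouguer_slab_ugal(thickness_m):
    """Gravity effect of an infinite flat slab of water of the given
    thickness (Bouguer approximation: dg = 2*pi*G*rho*h),
    returned in microGal (1 uGal = 1e-8 m/s^2)."""
    return 2.0 * math.pi * G * RHO_WATER * thickness_m / 1e-8
```

bouguer_slab_ugal(1.0) returns ≈ 41.9 µGal, matching the flat-terrain value; the larger 52 µGal response reported for Wettzell reflects the additional effect of water distributed along the topography around the station.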
This thesis is concerned with the development of numerical methods using finite-difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as the ADM, BSSN, NOR, and generalized harmonic formulations. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including also the case in which some first-order derivatives are approximated with off-centered finite-difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analyzed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case – outflow, inflow and completely inflow – are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also approached using the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
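The trade-off between centered and off-centered FDOs can be illustrated with second-order first-derivative operators; this is a generic sketch of the idea, not the 2n-order operators analyzed in the thesis:

```python
import math

def centered_d1(f, x, h):
    # standard second-order centered first derivative
    return (f(x + h) - f(x - h)) / (2.0 * h)

def offcentered_d1(f, x, h):
    # second-order one-sided (off-centered) first derivative,
    # usable where x - h lies outside the grid
    return (-3.0 * f(x) + 4.0 * f(x + h) - f(x + 2.0 * h)) / (2.0 * h)

def convergence_order(op, f, df, x, h):
    # observed order estimated from the errors at step sizes h and h/2
    e1 = abs(op(f, x, h) - df(x))
    e2 = abs(op(f, x, h / 2.0) - df(x))
    return math.log2(e1 / e2)
```

For f = sin, both stencils show an observed order close to 2; the off-centered stencil carries a larger error constant, which is the truncation-error penalty mentioned above, but it remains applicable near boundaries where a centered stencil would reach outside the grid.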
The genome can be considered the blueprint of an organism. Composed of DNA, it harbours all organism-specific instructions for the synthesis of all structural components and their associated functions. The role of carrier of actual molecular structure and function was long believed to be assumed exclusively by proteins encoded in particular segments of the genome, the genes. In the process of converting the information stored in genes into functional proteins, RNA – a third major molecule class – was discovered early on to act as a messenger by copying the genomic information and relaying it to the protein-synthesizing machinery. Furthermore, RNA molecules were identified to assist in the assembly of amino acids into native proteins. For a long time, these rather passive roles were thought to be the sole purpose of RNA. However, in recent years, new discoveries have led to a radical revision of this view. First, RNA molecules with catalytic functions – thought to be the exclusive domain of proteins – were discovered. Then, scientists realized that much more of the genomic sequence is transcribed into RNA molecules than there are proteins in cells, raising the question of what the function of all these molecules is. Furthermore, very short and altogether new types of RNA molecules, seemingly playing a critical role in orchestrating cellular processes, were discovered. Thus, RNA has become a central research topic in molecular biology, even to the extent that some researchers dub cells “RNA machines”. This thesis aims to contribute towards our understanding of RNA-related phenomena by applying bioinformatics methods. First, we performed a genome-wide screen to identify sites at which the chemical composition of DNA (the genotype) critically influences phenotypic traits (the phenotype) of the model plant Arabidopsis thaliana. Whole-genome hybridisation arrays were used, and an informatics strategy was developed to identify polymorphic sites from hybridisation to genomic DNA.
Following this approach, genotype-phenotype associations were discovered not only across the entire Arabidopsis genome, but also in regions not currently known to encode proteins, which thus represent candidate sites for novel functional RNA molecules. By statistically associating them with phenotypic traits, clues as to their particular functions were obtained. Furthermore, these candidate regions were subjected to a novel RNA function-class prediction method developed as part of this thesis. While determining the chemical structure (the sequence) of candidate RNA molecules is relatively straightforward, elucidating their structure-function relationship is much more challenging. Towards this end, we devised and implemented a novel algorithmic approach to predict the structural and, thereby, functional class of RNA molecules. In this algorithm, the concept of treating RNA molecule structures as graphs was introduced. We demonstrate that this abstraction of the actual structure leads to meaningful results that may greatly assist in the characterization of novel RNA molecules. Furthermore, by using graph-theoretic properties as descriptors of structure, we identified particular structural features of RNA molecules that may determine their function, thus providing new insights into the structure-function relationships of RNA. The method (termed Grapple) has been made available to the scientific community as a web-based service. RNA has taken centre stage in molecular biology research, and novel discoveries can be expected to further solidify the central role of RNA in the origin and support of life on Earth. As illustrated by this thesis, bioinformatics methods will continue to play an essential role in these discoveries.
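The idea of treating RNA secondary structures as graphs can be sketched as follows; this minimal example (dot-bracket parsing plus two toy descriptors) is illustrative only and is not the Grapple implementation:

```python
def structure_graph(dot_bracket):
    """Build the edge set of an RNA secondary-structure graph:
    backbone edges between sequence neighbours plus base-pair edges
    parsed from dot-bracket notation, e.g. '((..))'."""
    n = len(dot_bracket)
    edges = set()
    for i in range(n - 1):                  # backbone edges
        edges.add((i, i + 1))
    stack = []
    for i, ch in enumerate(dot_bracket):    # base-pair edges
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            edges.add((stack.pop(), i))
    return edges

def graph_features(edges, n):
    """Simple graph-theoretic descriptors of the structure graph,
    usable as input features for a structure-class predictor."""
    degree = [0] * n
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return {'edges': len(edges), 'max_degree': max(degree)}
```

For the hairpin '((..))' this yields 7 edges (5 backbone plus 2 base pairs) and a maximum degree of 3; feature vectors of this kind are the sort of abstraction on which a graph-based classifier can operate.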
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution, near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based, near-surface images has matured significantly. At the same time, the analysis of oil- and gas-related reflection seismic data sets has experienced significant advances. Considering the sensitivity of attribute analysis with respect to data positioning in general, and of multi-trace attributes in particular, trace-positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). The combination of current GPR systems' capability of fusing global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To elaborate the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies have shown that, when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the GPR data acquired using radio communication equals that without radio communication. To address the limitations imposed by system latencies, inherent to all real-time data-fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimeter trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys.
Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface-utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by the calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that the incorporation of additional data sets (magnetic and topographic) and of attributes derived from these data sets can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface-utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently. Geometrical enhancement makes use of an improved symmetry-extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis.
The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, inclusion of cross-polarized antennae configurations into the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Based on different synthetic examples, I showed that the modified tree-based pursuit approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing. Frequency modulation of the individual atoms themselves makes it possible to correct frequency attenuation effects efficiently and to improve resolution by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the increasingly widespread realization of 3D GPR studies, there will certainly be a growing demand for improved subsurface interpretations in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, such combined parameter estimation represents another step in emphasizing the potential of attribute-driven GPR data analyses.
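For readers unfamiliar with matching pursuit, the plain greedy algorithm that the modified tree-based approach builds on can be sketched as follows; the dictionary and variable names here are illustrative assumptions, not the thesis variant.

```python
import numpy as np

# Textbook greedy matching pursuit (not the modified tree-based variant
# developed in the thesis): repeatedly pick the unit-norm atom with the
# largest correlation to the residual and subtract its contribution.
def matching_pursuit(signal, dictionary, n_iter):
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = []
    for _ in range(n_iter):
        corr = dictionary @ residual          # atoms are rows, unit norm
        k = int(np.argmax(np.abs(corr)))
        coeffs.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[k]
    return coeffs, residual

# Tiny demo: the signal is 3x a single dictionary atom, so one iteration
# recovers index and amplitude exactly and leaves a zero residual.
atoms = np.eye(64)                            # trivial Dirac dictionary
sig = 3.0 * atoms[10]
coeffs, res = matching_pursuit(sig, atoms, 1)
```

In practice a redundant Gabor-type dictionary replaces the trivial one above, and tree structuring accelerates the atom search.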
The origin and evolution of granites has been widely studied because granitoid rocks constitute a major portion of the Earth's crust. The formation of granitic magma is, besides temperature, mainly controlled by the water content of these rocks. The presence of water in magmas plays an important role due to the ability of aqueous fluids to change the chemical composition of the magma. The exsolution of aqueous fluids from melts is closely linked to a fractionation of elements between the two phases. The aqueous fluids then migrate to shallower parts of the Earth's crust because of their lower density compared to that of melts and adjacent rocks. This process separates fluids and melts; furthermore, during the ascent, aqueous fluids can react with the adjacent rocks and alter their chemical signature. This is particularly important during the formation of magmatic-hydrothermal ore deposits or in the late stages of the evolution of magmatic complexes. For a deeper insight into these processes, it is essential to improve our knowledge of element behavior in such systems. In particular, trace elements are used for these studies and petrogenetic interpretations because, unlike major elements, they are not essential for the stability of the phases involved and often reflect magmatic processes with less ambiguity. However, for the majority of important trace elements, the dependence of the geochemical behavior on temperature, pressure, and in particular on the composition of the system has been studied experimentally only incompletely or not at all. Previous studies often focused on the determination of fluid−melt partition coefficients (Df/m = cfluid/cmelt) of economically interesting elements, e.g., Mo, Sn, and Cu, and some partitioning data are available for elements that are also commonly used for petrological interpretations. 
At present, no systematic experimental data on trace element behavior in fluid−melt systems as a function of pressure, temperature, and chemical composition are available. Additionally, almost all existing data are based on the analysis of quenched phases. This results in substantial uncertainties, particularly for the quenched aqueous fluid, because trace element concentrations may change upon cooling. The objective of this PhD thesis was to study fluid−melt partition coefficients between aqueous solutions and granitic melts for different trace elements (Rb, Sr, Ba, La, Y, and Yb) as a function of temperature, pressure, salinity of the fluid, composition of the melt, and experimental and analytical approach. The latter included the refinement of an existing method to measure trace element concentrations in fluids equilibrated with silicate melts directly at elevated pressures and temperatures, using a hydrothermal diamond-anvil cell and synchrotron radiation X-ray fluorescence microanalysis. The application of this in-situ method makes it possible to avoid the main source of error in data from quench experiments, i.e., the trace element concentration in the fluid. A comparison of the in-situ results with data from conventional quench experiments allows a critical evaluation of quench data from this study and from the literature. In detail, the starting materials consisted of a suite of trace element doped haplogranitic glasses with ASI varying between 0.8 and 1.4 and H2O or a chloridic solution with m NaCl/KCl = 1 and different salinities (1.16 to 3.56 m (NaCl+KCl)). Experiments were performed at 750 to 950 °C and 0.2 or 0.5 GPa using conventional quench devices (externally and internally heated pressure vessels) with different quench rates, and at 750 °C and 0.2 to 1.4 GPa with in-situ analysis of the trace element concentration in the fluids. The fluid−melt partitioning data of all studied trace elements show 1. a preference for the melt (Df/m < 1) at all studied conditions, 2. 
one to two orders of magnitude higher Df/m using chloridic solutions compared to experiments with H2O, 3. a clear dependence on the melt composition for fluid−melt partitioning of Sr, Ba, La, Y, and Yb in experiments using chloridic solutions, 4. quench rate−related differences of fluid−melt partition coefficients of Rb and Sr, and 5. distinctly higher fluid−melt partitioning data obtained from in-situ experiments than from comparable quench runs, particularly in the case of H2O as starting solution. The data point to a preference of all studied trace elements for the melt even at fairly high salinities, which contrasts with other experimental studies but is supported by data from studies of natural co-genetically trapped fluid and melt inclusions. The in-situ measurements of trace element concentrations in the fluid verify that aqueous fluids change their composition upon cooling, which is particularly important for Cl-free systems. The distinct differences between the in-situ results and the quench data of this study, as well as data from the literature, signify the importance of careful fluid sampling and analysis. Therefore, the direct measurement of trace element contents in fluids equilibrated with silicate melts at elevated PT conditions represents an important development towards obtaining more reliable fluid−melt partition coefficients. For further improvement, both the aqueous fluid and the silicate melt need to be analyzed in-situ, because partitioning data that are based on the direct measurement of the trace element content in the fluid and the analysis of a quenched melt are still not completely free of quench effects. At present, all available data on element complexation in aqueous fluids in equilibrium with silicate melts at high PT are indirectly derived from partitioning data, which in these experiments involves assumptions about the species present in the fluid. 
However, the activities of chemical components in these partitioning experiments are not well constrained, which is required for the definition of exchange equilibria between melt and fluid species. For example, the melt-dependent variation of the partition coefficient observed for Sr implies that this element cannot be complexed solely by Cl−, as suggested previously. The data indicate a more complicated complexation of Sr in the aqueous fluid. To verify this hypothesis, the in-situ setup was also used to determine strontium complexation in fluids equilibrated with silicate melts at the PT conditions of interest by applying X-ray absorption near edge structure (XANES) spectroscopy. First results show a strong effect of both fluid and melt composition on the resulting XANES spectra, which indicates different complexation environments for Sr.
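The central quantity throughout these experiments, the fluid−melt partition coefficient Df/m = cfluid/cmelt defined above, is computed directly from paired concentration measurements. A trivial sketch with hypothetical numbers:

```python
def fluid_melt_partition(c_fluid, c_melt):
    """Fluid-melt partition coefficient D_f/m = c_fluid / c_melt."""
    return c_fluid / c_melt

# Hypothetical concentrations (ppm) for illustration only: a trace element
# at 5 ppm in the fluid coexisting with 100 ppm in the melt is
# melt-compatible (D_f/m < 1), as found for all elements studied here.
d = fluid_melt_partition(5.0, 100.0)
```

The experimental difficulty lies not in this ratio but in obtaining cfluid at run conditions rather than after quenching.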
The Antarctic plays an important role in the global climate system. On the one hand, the Antarctic Ice Sheet is the largest freshwater reservoir on Earth. On the other hand, a major proportion of the global bottom-water formation takes place in Antarctic shelf regions, forcing the global thermohaline circulation. The main goal of this dissertation is to provide new insights into the dynamics and stability of the East Antarctic Ice Sheet (EAIS) during the Quaternary. Additionally, variations in the activity of bottom-water formation and their causes are investigated. The dissertation is a German contribution to the International Polar Year 2007/2008 and was funded by the ‘Deutsche Forschungsgemeinschaft’ (DFG) within the scope of priority program 1158 ‘Antarctic research with comparative studies in Arctic ice regions’. During RV Polarstern expedition ANT-XXIII/9, glaciomarine sediments were recovered from the Prydz Bay-Kerguelen region. Prydz Bay is a key region for the study of EAIS dynamics, as 16% of the EAIS is drained through the Lambert Glacier into the bay. Thereby, the glacier transports sediment into Prydz Bay, which is then further distributed by calving icebergs or by current transport. The scientific approach of this dissertation is the reconstruction of past glaciomarine environments to infer the response of the Lambert Glacier-Amery Ice Shelf system to climate shifts during the Quaternary. To characterize the depositional setting, sedimentological methods are used and statistical analyses are applied. Mineralogical and (bio)geochemical methods provide a means to reconstruct sediment provenances and evidence of changes in the primary production in the surface water column. Age-depth models were constructed based on palaeomagnetic and palaeointensity measurements, diatom stratigraphy, and radiocarbon dating. Sea-bed surface sediments in the investigation area show distinct variations in their clay-mineral and heavy-mineral assemblages. 
Considerable differences in the mineralogical composition of surface sediments are determined on the continental shelf. Clay minerals as well as heavy minerals provide useful parameters to differentiate between sediments that originated from the erosion of crystalline rocks and sediments originating from Permo-Triassic deposits. Consequently, mineralogical parameters can be used to reconstruct the provenance of current-transported and ice-rafted material. The investigated sediment cores cover the time intervals of the last 1.4 Ma (continental slope) and the last 12.8 cal. ka BP (MacRobertson shelf). The sediment deposits were mainly influenced by glacial and oceanographic processes, and further by biological activity (continental shelf), meltwater input, and possibly gravitational transport. Sediments from the continental slope document two major deglacial events: the first deglaciation is associated with the mid-Pleistocene warming recognized around the Antarctic. In Prydz Bay, the Lambert Glacier-Amery Ice Shelf retreated far to the south, and high biogenic productivity commenced or biogenic remains were better preserved due to increased sedimentation rates. Thereafter, stable glacial conditions continued until 400 - 500 ka BP. Calving of icebergs was restricted to the western part of the Lambert Glacier. The deeper bathymetry in this area allows for a floating ice shelf even during times of decreased sea level. Between 400 - 500 ka BP and the last interglacial (marine isotope stage 5), the glacier was more dynamic. During or shortly after the last interglacial, the Lambert Glacier-Amery Ice Shelf retreated again due to a sea-level rise of 6 - 9 m. Both deglacial events correlate with a reduction in the thickness of ice masses in the Prince Charles Mountains. This indicates that a disintegration of the Amery Ice Shelf possibly led to increased drainage of ice masses from the Prydz Bay hinterland. 
A new end-member modelling algorithm was successfully applied to sediments from the MacRobertson shelf in order to unmix the sand grain-size fractions sorted by current activity and by ice transport, respectively. Ice retreat on the MacRobertson shelf commenced 12.8 cal. ka BP and ended around 5.5 cal. ka BP. During the Holocene, strong fluctuations of the bottom-water activity were observed, probably related to variations of sea-ice formation in the Cape Darnley polynya. Increased activity of bottom-water flow was reconstructed at transitions from warm to cool conditions, whereas bottom-water activity receded during the mid-Holocene climate optimum. It can be concluded that the Lambert Glacier-Amery Ice Shelf system was relatively stable with respect to climate variations during the Quaternary. In contrast, bottom-water formation due to polynya activity was very sensitive to changes in atmospheric forcing and should gain more attention in future research.
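Generic two-end-member unmixing of a measured grain-size distribution can be sketched as a least-squares problem; this is an illustrative simplification with made-up numbers, not the new algorithm developed in the thesis.

```python
import numpy as np

# Illustrative two-end-member unmixing (a simplification, not the thesis
# algorithm): express a measured grain-size distribution as a nonnegative,
# normalized mixture of a "current-sorted" and an "ice-rafted" end member.
def unmix_two_endmembers(sample, em_current, em_ice):
    A = np.column_stack([em_current, em_ice])
    p, *_ = np.linalg.lstsq(A, sample, rcond=None)
    p = np.clip(p, 0.0, None)               # proportions cannot be negative
    return p / p.sum()                      # and must sum to one

# Hypothetical end members over three grain-size classes.
em_cur = np.array([0.7, 0.2, 0.1])
em_ice = np.array([0.1, 0.3, 0.6])
mixed = 0.25 * em_cur + 0.75 * em_ice       # synthetic 25/75 mixture
props = unmix_two_endmembers(mixed, em_cur, em_ice)
```

Real end-member modelling additionally estimates the end members themselves from the full sample set rather than assuming them.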
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA is becoming the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources. 
The contribution of this research is threefold: first, we analyze and classify principles and technologies from Information Technologies (IT) and telecommunications to identify and discuss issues allowing cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model-checking of such compositions. Finally, we propose a Service Broker architecture converging Internet and Telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations (e.g., the Sarbanes-Oxley Act of 2002), internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence to compliance leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them automatically against business processes is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped onto a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes a way to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond that expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user. 
The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
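As a flavor of the pattern-to-temporal-logic idea, a single compliance pattern — "whenever A occurs, B must eventually follow", i.e., G(A → F B) — can be checked on a finite execution trace as follows. This is a toy stand-in for full model checking over process models; event names are hypothetical.

```python
# Check the 'response' compliance pattern G(trigger -> F response) on one
# finite execution trace: every occurrence of `trigger` must eventually be
# followed by `response`. Real model checking explores all process runs.
def satisfies_response(trace, trigger, response):
    pending = False
    for event in trace:
        if event == trigger:
            pending = True        # an obligation is now open
        elif event == response:
            pending = False       # the obligation is discharged
    return not pending            # no obligation may remain open at the end

# Hypothetical order-handling traces.
ok = satisfies_response(["open", "approve", "pay", "archive"], "approve", "archive")
bad = satisfies_response(["open", "approve", "pay"], "approve", "archive")
```

A model checker generalizes this from a single trace to all executions admitted by the process model, which is why violations can be reported as the responsible model parts.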
In a very simplified view, plant leaf growth can be reduced to two processes, cell division and cell expansion, accompanied by expansion of the surrounding cell walls. The vacuole, being the largest compartment of the plant cell, plays a major role in controlling the water balance of the plant. This is achieved by regulating the osmotic pressure, through import and export of solutes across the vacuolar membrane (the tonoplast), and by controlling the water channels, the aquaporins. Together with the control of cell wall relaxation, vacuolar osmotic pressure regulation is thought to play an important role in cell expansion, directly by providing cell volume and indirectly by providing ion and pH homeostasis for the cytoplasm. In this thesis, the role of tonoplast protein-coding genes in cell expansion in the model plant Arabidopsis thaliana is studied, and genes with a putative role in growth are identified. Since there is, to date, no clearly identified protein localization signal for the tonoplast, genome-wide prediction of proteins localized to this compartment is not possible. Thus, a series of recent proteomic studies of the tonoplast was used to compile a list of cross-membrane tonoplast protein-coding genes (117 genes), and other growth-related genes, notably from the growth-regulating factor (GRF) and expansin families, were included (26 genes). For these genes, a platform for high-throughput reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) was developed by selecting specific primer pairs. To this end, a software tool (called QuantPrime, see http://www.quantprime.de) was developed that automatically designs such primers and tests their specificity in silico against whole transcriptomes and genomes, to avoid cross-hybridizations causing unspecific amplification. The RT-qPCR platform was used in an expression study in order to identify candidate growth-related genes. 
Here, a growth-associative spatio-temporal leaf sampling strategy was used, targeting growing regions at high expansion developmental stages and comparing them to samples taken from non-expanding regions or stages of low expansion. Candidate growth related genes were identified after applying a template-based scoring analysis on the expression data, ranking the genes according to their association with leaf expansion. To analyze the functional involvement of these genes in leaf growth on a macroscopic scale, knockout mutants of the candidate growth related genes were screened for growth phenotypes. To this end, a system for non-invasive automated leaf growth phenotyping was established, based on a commercially available image capture and analysis system. A software package was developed for detailed developmental stage annotation of the images captured with the system, and an analysis pipeline was constructed for automated data pre-processing and statistical testing, including modeling and graph generation, for various growth-related phenotypes. Using this system, 24 knockout mutant lines were analyzed, and significant growth phenotypes were found for five different genes.
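The in-silico primer specificity test mentioned above can be caricatured by an exact-match scan against a transcript set; real tools such as QuantPrime also score mismatches and melting temperatures, and the sequences here are hypothetical.

```python
# Toy in-silico specificity check (illustrative only): flag transcripts
# containing an exact match to the primer or its reverse complement.
# Real primer-design tools also consider mismatches and thermodynamics.
def primer_hits(primer, transcripts):
    rc = primer.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    return {name: (primer in seq) or (rc in seq)
            for name, seq in transcripts.items()}

# Hypothetical primer and transcript sequences.
hits = primer_hits("ATGC", {"t1": "GGATGCAA", "t2": "CCCCGG"})
```

A primer pair is specific only if each primer hits the intended transcript and nothing else in the transcriptome.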
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and that they play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which have various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices. Here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, describing, for example, coupled optical wave-guides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method to solve the traveling wave equation. This yields a quasi-exact solution (up to numerical errors), which is the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, where the lattice is described by a continuous medium. Here, compactons are found analytically, but they are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic: a small part of the energy remains at the location of the collision. In finite lattices, this remaining part will then trigger a multiple scattering process resulting in a chaotic state.
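A minimal numerical experiment in the spirit of the granular-chain part can be sketched as follows (parameters, units, and names are illustrative): a chain with Hertzian contacts and no precompression, integrated with velocity Verlet, turns an initial impulse into a localized traveling pulse.

```python
import numpy as np

# Minimal sketch of a strongly nonlinear lattice: a granular chain with
# Hertzian contact forces F = k * max(overlap, 0)**1.5 and no
# precompression, so no linear sound waves exist. Illustrative parameters.
def forces(u, k=1.0):
    d = np.maximum(u[:-1] - u[1:], 0.0) ** 1.5  # contact overlaps only
    f = np.zeros_like(u)
    f[:-1] -= k * d       # push-back on the left bead of each contact
    f[1:] += k * d        # equal and opposite force on the right bead
    return f

def simulate(u0, v0, dt, steps, k=1.0):
    """Velocity-Verlet integration of the chain (unit masses)."""
    u, v, f = u0.copy(), v0.copy(), forces(u0, k)
    for _ in range(steps):
        v += 0.5 * dt * f
        u += dt * v
        f = forces(u, k)
        v += 0.5 * dt * f
    return u, v

# An impulse on the first bead launches a localized traveling pulse.
n = 30
u0, v0 = np.zeros(n), np.zeros(n)
v0[0] = 1.0
u, v = simulate(u0, v0, dt=1e-3, steps=20000)
```

Internal pairwise forces cancel in pairs, so total momentum is conserved by construction; the pulse localization itself reflects the fully nonlinear contact law.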
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on some general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy analogous requirements as the original ones, now on a corresponding higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another one at the conical exit to infinity. However, concerning the conical exit to infinity, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.