Background: Protein phosphorylation is an important post-translational modification influencing many aspects of dynamic cellular behavior. Site-specific phosphorylation of the amino acid residues serine, threonine, and tyrosine can have profound effects on protein structure, activity, stability, and interaction with other biomolecules. Phosphorylation sites can be affected in diverse ways in members of any species; one such way is through single nucleotide polymorphisms (SNPs). The availability of large numbers of experimentally identified phosphorylation sites and of natural variation datasets in Arabidopsis thaliana prompted us to analyze the effect of non-synonymous SNPs (nsSNPs) on phosphorylation sites.
Results: From the analyses of 7,178 experimentally identified phosphorylation sites we found that: (i) Proteins with multiple phosphorylation sites occur more often than expected by chance. (ii) Phosphorylation hotspots show a preference to be located outside conserved domains. (iii) nsSNPs affected experimental phosphorylation sites as much as the corresponding non-phosphorylated amino acid residues. (iv) Losses of experimental phosphorylation sites by nsSNPs were identified in 86 A. thaliana proteins, among which receptor proteins were overrepresented.
These results were confirmed by similar analyses of predicted phosphorylation sites in A. thaliana. In addition, predicted threonine phosphorylation sites showed a significant enrichment of nsSNPs towards asparagines and a significant depletion of the synonymous substitution. Proteins in which predicted phosphorylation sites were affected by nsSNPs (loss and gain) were determined to be mainly receptor proteins, stress response proteins and proteins involved in nucleotide and protein binding. Proteins involved in metabolism, catalytic activity and biosynthesis were less affected.
Conclusions: We analyzed more than 7,100 experimentally identified phosphorylation sites in almost 4,300 protein-coding loci in silico, thus constituting the largest phosphoproteomics dataset for A. thaliana available to date. Our findings suggest a relatively high variability in the presence or absence of phosphorylation sites between different natural accessions in receptor and other proteins involved in signal transduction. Elucidating the effect of phosphorylation sites affected by nsSNPs on adaptive responses represents an exciting research goal for the future.
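The loss of a phosphorylation site through a nsSNP, as analyzed above, can be illustrated with a minimal sketch. The sequence, site positions and substitutions below are hypothetical examples, not data from the study: a substitution counts as a candidate loss when it replaces a phosphorylatable residue (Ser, Thr or Tyr) at a known phosphosite with a non-phosphorylatable one.

```python
# Minimal sketch: flag nsSNPs that remove a phosphorylatable residue
# (Ser/Thr/Tyr) at an experimentally identified phosphosite.
# All sequences, positions, and substitutions are hypothetical examples.

PHOSPHORYLATABLE = {"S", "T", "Y"}

def phosphosite_losses(sequence, phosphosites, substitutions):
    """sequence: protein sequence (1-based positions);
    phosphosites: set of 1-based positions known to be phosphorylated;
    substitutions: list of (position, new_residue) pairs from nsSNPs.
    Returns the substitutions that abolish a phosphosite."""
    losses = []
    for pos, new_aa in substitutions:
        old_aa = sequence[pos - 1]
        if (pos in phosphosites
                and old_aa in PHOSPHORYLATABLE
                and new_aa not in PHOSPHORYLATABLE):
            losses.append((pos, old_aa, new_aa))
    return losses

seq = "MKTAYSLDPSTGRK"            # hypothetical protein
sites = {6, 11}                    # S6 and T11 are phosphosites
snps = [(6, "A"), (11, "S"), (3, "I")]
print(phosphosite_losses(seq, sites, snps))  # only S6->A is a loss
```

Note that the T11→S substitution is not counted as a loss, since the new residue is itself phosphorylatable; whether the kinase still recognizes the altered motif is a separate question that a sketch like this cannot answer.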
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for.
Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
We present an approach that provides automatic or semi-automatic support for evolution and change management in heterogeneous legacy landscapes where (1) legacy heterogeneous, possibly distributed platforms are integrated in a service oriented fashion, (2) the coordination of functionality is provided at the service level, through orchestration, (3) compliance and correctness are provided through policies and business rules, (4) evolution and correctness-by-design are supported by the eXtreme Model Driven Development paradigm (XMDD) offered by the jABC (Margaria and Steffen in Annu. Rev. Commun. 57, 2004)—the model-driven service oriented development platform we use here for integration, design, evolution, and governance. The artifacts are here semantically enriched, so that automatic synthesis plugins can field the vision of Enterprise Physics: knowledge driven business process development for the end user.
We demonstrate this vision along a concrete case study that became over the past three years a benchmark for Semantic Web Service discovery and mediation. We enhance the Mediation Scenario of the Semantic Web Service Challenge along the two central evolution paradigms that occur in practice: (a) Platform migration: platform substitution of a legacy system by an ERP system and (b) Backend extension: extension of the legacy Customer Relationship Management (CRM) and Order Management System (OMS) backends via an additional ERP layer.
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
(2010)
Background
In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space.
Results
The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches especially when studying regions with low probability.
Conclusions
While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
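The construction described in the Results section can be sketched for a one-dimensional ODE; the logistic dynamics below is a hypothetical example, not one of the paper's models. Along a trajectory of dx/dt = f(x), the transported density ρ obeys d(log ρ)/dt = −f′(x), so a single extra scalar dimension suffices:

```python
import math

# Sketch (hypothetical example): propagate a probability density value
# along the characteristics of dx/dt = f(x).  Along each trajectory the
# Liouville equation reduces to
#     d(log rho)/dt = -f'(x),
# so the original ODE is extended by one extra dimension for the density.

def f(x):                      # hypothetical dynamics: logistic growth
    return x * (1.0 - x)

def df(x):                     # derivative of f, drives the density update
    return 1.0 - 2.0 * x

def propagate(x0, log_rho0=0.0, t_end=2.0, dt=1e-4):
    x, log_rho = x0, log_rho0
    for _ in range(int(round(t_end / dt))):
        x, log_rho = x + dt * f(x), log_rho - dt * df(x)
    return x, math.exp(log_rho)

# start from density value rho = 1 at x0; the flow expands near x0
# (f'(x) > 0 for x < 1/2), so the transported density value drops below 1
x_t, rho_t = propagate(0.1)
print(x_t, rho_t)
```

A simple consistency check is mass conservation between two nearby characteristics: the product of the density value and the local trajectory spacing stays constant, which is exactly what makes the approach attractive in low-probability regions where Monte Carlo sampling is starved of points.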
Background
Micrometer-resolution placement and immobilization of probe molecules is an important step in the preparation of biochips and a wide range of lab-on-chip systems. Most known methods for such a deposition of several different substances are costly and only suitable for a limited number of probes. In this article we present a flexible procedure for the simultaneous, spatially controlled immobilization of functional biomolecules by molecular ink lithography.
Results
For the bottom-up fabrication of surface-bound nanostructures a universal method is presented that allows the immobilization of different types of biomolecules with micrometer resolution. A supporting surface is biotinylated and streptavidin molecules are deposited with an AFM (atomic force microscope) tip at distinct positions. Subsequent incubation with a biotinylated molecule species leads to binding only at these positions. After washing, streptavidin is deposited a second time with the same AFM tip, and then a second biotinylated molecule species is coupled by incubation. This procedure can be repeated several times. Here we show how to immobilize different types of biomolecules in an arbitrary arrangement, whereas most common methods can deposit only one type of molecule. The presented method works on transparent as well as on opaque substrates. The spatial resolution is better than 400 nm and is limited only by the AFM's positional accuracy after repeated z-cycles, since all steps are performed in situ without moving the supporting surface. The principle is demonstrated by hybridization to different immobilized DNA oligomers and was validated by fluorescence microscopy.
Conclusions
The immobilization of different types of biomolecules in high-density microarrays is a challenging task for biotechnology. The method presented here not only allows for the deposition of DNA at submicrometer resolution but also for proteins and other molecules of biological relevance that can be coupled to biotin.
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by relative proportions of cell types involved. Conclusions have to rely on estimation of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence activated cell sorting, or in-silico deconfounding. We studied feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm regarding detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues.
Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available.
Conclusions: The deconfounding algorithm without decorrelation using quantile normalization on non-log data is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine, in realistically noisy conditions and with moderate sample sizes.
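The proportion-prediction step described above can be illustrated by a deliberately reduced sketch. The two-cell-type signatures and the mixture below are hypothetical, and the one-dimensional grid search stands in for the actual non-negative matrix decomposition of the paper:

```python
# Sketch (assumed, simplified stand-in for the deconfounding algorithm):
# estimate the proportion p of cell type A in a two-cell-type mixture
# x ~ p*sig_a + (1-p)*sig_b by a grid search over p in [0, 1].
# Signature values below are hypothetical non-log expression levels.

def estimate_proportion(x, sig_a, sig_b, steps=1000):
    best_p, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        p = i / steps
        err = sum((xi - (p * a + (1 - p) * b)) ** 2
                  for xi, a, b in zip(x, sig_a, sig_b))
        if err < best_err:
            best_p, best_err = p, err
    return best_p

sig_a = [10.0, 2.0, 5.0, 0.5]    # pure cell type A (hypothetical)
sig_b = [1.0, 8.0, 5.0, 4.0]     # pure cell type B (hypothetical)
mix   = [0.3 * a + 0.7 * b for a, b in zip(sig_a, sig_b)]
print(estimate_proportion(mix, sig_a, sig_b))  # recovers p close to 0.3
```

In the paper's setting, both the signatures and the proportions are unknown and must be co-estimated from many samples by non-negative matrix decomposition; the sketch only shows why single-sample proportion prediction is possible once signatures are fixed.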
Two recent magnetic field models, GRIMM and xCHAOS, describe core field accelerations with similar behavior up to Spherical Harmonic (SH) degree 5, but which differ significantly for higher degrees. These discrepancies, due to different approaches in smoothing rapid time variations of the core field, have strong implications for the interpretation of the secular variation. Furthermore, the amount of smoothing applied to the highest SH degrees is essentially the modeler’s choice. We therefore investigate new ways of regularizing core magnetic field models. Here we propose to constrain field models to be consistent with the frozen flux induction equation by co-estimating a core magnetic field model and a flow model at the top of the outer core. The flow model is required to have smooth spatial and temporal behavior. The implementation of such constraints and their effects on a magnetic field model built from one year of CHAMP satellite and observatory data, are presented. In particular, it is shown that the chosen constraints are efficient and can be used to build reliable core magnetic field secular variation and acceleration model components.
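The frozen flux constraint invoked above ties the radial field $B_r$ at the core surface to the horizontal flow $\mathbf{u}_H$ at the top of the outer core; in the frozen flux approximation (magnetic diffusion neglected), the radial induction equation reads:

```latex
\frac{\partial B_r}{\partial t} + \nabla_H \cdot \left( \mathbf{u}_H \, B_r \right) = 0
```

Co-estimating the field model together with a spatially and temporally smooth flow satisfying this equation is what replaces the modeler's otherwise ad hoc choice of temporal smoothing at high spherical harmonic degrees.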
Live cell flattening
(2010)
Eukaryotic cell flattening is valuable for improving microscopic observations, ranging from bright field (BF) to total internal reflection fluorescence (TIRF) microscopy. Fundamental processes, such as mitosis and in vivo actin polymerization, have been investigated using these techniques. Here, we review the well-known agar overlayer protocol and the oil overlay method. In addition, we present more elaborate microfluidics-based techniques that provide us with a greater level of control. We demonstrate these techniques on the social amoeba Dictyostelium discoideum, comparing the advantages and disadvantages of each method.
Background: Cysteine is a component of organic compounds, including glutathione, that have been implicated in the adaptation of plants to stresses. O-acetylserine (thiol) lyase (OAS-TL) catalyses the final step of cysteine biosynthesis. OAS-TL enzyme isoforms are localised in the cytoplasm, the plastids and mitochondria, but the contribution of individual OAS-TL isoforms to plant sulphur metabolism has not yet been fully clarified.
Results: The seedling lethal phenotype of the Arabidopsis onset of leaf death3-1 (old3-1) mutant is due to a point mutation in the OAS-A1 gene, encoding the cytosolic OAS-TL. The mutation causes a single amino acid substitution from Gly(162) to Glu(162), abolishing old3-1 OAS-TL activity in vitro. The old3-1 mutation segregates as a monogenic semi-dominant trait when backcrossed to its wild type accession Landsberg erecta (Ler-0) and the Di-2 accession. Consistent with its semi-dominant behaviour, wild type Ler-0 plants transformed with the mutated old3-1 gene displayed the early leaf death phenotype. However, the old3-1 mutation segregates in an 11:4:1 (wild type : semi-dominant : mutant) ratio when backcrossed to the Columbia-0 and Wassilewskija accessions. Thus, the early leaf death phenotype depends on two semi-dominant loci. The second locus that determines the old3-1 early leaf death phenotype is referred to as odd-ler (for old3 determinant in the Ler accession) and is located on chromosome 3. The early leaf death phenotype is temperature dependent and is associated with increased expression of defence-response and oxidative-stress marker genes. Independent of the presence of the odd-ler gene, OAS-A1 is involved in maintaining sulphur and thiol levels and is required for resistance against cadmium stress.
Conclusions: The cytosolic OAS-TL is involved in maintaining organic sulphur levels. The old3-1 mutation causes genome-dependent and independent phenotypes and uncovers a novel function for the mutated OAS-TL in cell death regulation.
Left peripheral focus
(2010)
In Czech, German, and many other languages, part of the semantic focus of the utterance can be moved to the left periphery of the clause. The main generalization is that only the leftmost accented part of the semantic focus can be moved. We propose that movement to the left periphery is generally triggered by an unspecific edge feature of C (Chomsky 2008) and that its restrictions can be attributed to requirements of cyclic linearization, modifying the theory of cyclic linearization developed by Fox and Pesetsky (2005). The crucial assumption is that structural accent is a direct consequence of being linearized at merge; thus it is indirectly relevant for (locality restrictions on) movement. The absence of structural accent correlates with givenness. Given elements may later receive (topic or contrastive) accents, which accounts for fronting in multiple focus/contrastive topic constructions. Without any additional assumptions, the model can account for movement of pragmatically unmarked elements to the left periphery (‘formal fronting’, Frey 2005). Crucially, the analysis makes no reference at all to concepts of information structure in the syntax, in line with the claim of Chomsky (2008) that UG specifies no direct link between syntax and information structure.
Multi-color fluorescence imaging experiments of wave forming Dictyostelium cells have revealed that actin waves separate two domains of the cell cortex that differ in their actin structure and phosphoinositide composition. We propose a bistable model of actin dynamics to account for these experimental observations. The model is based on the simplifying assumption that the actin cytoskeleton is composed of two distinct network types, a dendritic and a bundled network. The two structurally different states that were observed in experiments correspond to the stable fixed points in the bistable regime of this model. Each fixed point is dominated by one of the two network types. The experimentally observed actin waves can be considered as trigger waves that propagate transitions between the two stable fixed points.
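The trigger-wave picture can be reproduced with a minimal one-dimensional sketch using a generic bistable reaction-diffusion model, not the authors' specific equations: with reaction term u(1−u)(u−a) and a < 1/2, a front propagates that converts the domain from one stable state (u = 0) to the other (u = 1).

```python
# Sketch (generic bistable model, all parameters hypothetical): a trigger
# wave in u_t = D*u_xx + u*(1-u)*(u-a) switches the domain from the
# stable state u = 0 to the stable state u = 1.

def simulate(n=100, dx=0.5, dt=0.05, d_coef=1.0, a=0.2, t_end=20.0):
    u = [1.0 if i < n // 4 else 0.0 for i in range(n)]  # initial front
    for _ in range(int(t_end / dt)):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else u[0]          # no-flux boundaries
            right = u[i + 1] if i < n - 1 else u[n - 1]
            lap = (left - 2.0 * u[i] + right) / dx ** 2
            new[i] = u[i] + dt * (d_coef * lap
                                  + u[i] * (1.0 - u[i]) * (u[i] - a))
        u = new
    return u

u = simulate()
# the front, initially at index 25, has advanced: u[30] has switched to
# ~1 while u[60] is still ~0
print(u[30], u[60])
```

The two homogeneous states of this toy model play the role of the two cortical network states; the propagating front is the analogue of the observed actin wave.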
It has been suggested that coronal mass ejections (CMEs) remove the magnetic helicity of their coronal source region from the Sun. Such removal is often regarded to be necessary due to the hemispheric sign preference of the helicity, which inhibits a simple annihilation by reconnection between volumes of opposite chirality. Here we monitor the relative magnetic helicity contained in the coronal volume of a simulated flux rope CME, as well as the upward flux of relative helicity through horizontal planes in the simulation box. The unstable and erupting flux rope carries away only a minor part of the initial relative helicity; the major part remains in the volume. This is a consequence of the requirement that the current through an expanding loop must decrease if the magnetic energy of the configuration is to decrease as the loop rises, to provide the kinetic energy of the CME.
The Takab complex is composed of a variety of metamorphic rocks including amphibolites, metapelites, mafic granulites, migmatites and meta-ultramafics, which are intruded by the granitoid. The granitoid magmatic activity occurred in relation to the subduction of the Neo-Tethys oceanic crust beneath the Iranian crust during Tertiary times. The granitoids are mainly granodiorite, quartz monzodiorite, monzonite and quartz diorite. Chemically, the magmatic rocks are characterized by ASI < 1.04, AI < 0.87 and high contents of CaO (up to ~14.5 wt%), which are consistent with the I-type magmatic series. Low FeOt/(FeOt+MgO) values (< 0.75) as well as low Nb, Y and K2O contents of the investigated rocks resemble the calc-alkaline series. Low SiO2, K2O/Na2O and Al2O3 accompanied by high CaO and FeO contents indicate melting of metabasites as an appropriate source for the intrusions. Negative Ti and Nb anomalies verify a metaluminous crustal origin for the protoliths of the investigated igneous rocks. These are comparable with compositions of the associated mafic migmatites in the Takab metamorphic complex, which originated from the partial melting of amphibolites. Therefore, crustal melting and a collision-related origin for the Takab calc-alkaline intrusions are proposed here on the basis of mineralogy and geochemical characteristics. The P–T evolution during magmatic crystallization and subsolidus cooling stages is determined by the study of mineral chemistry of the granodiorite and the quartz diorite. Magmatic crystallization pressure and temperature for the quartz diorite and the granodiorite are estimated to be P ~7.8 ± 2.5 kbar, T ~760 ± 75 °C and P ~5 ± 1 kbar, T ~700 °C, respectively. Subsolidus conditions are consistent with temperatures of ~620 °C and ~600 °C, and pressures of ~5 kbar and ~3.5 kbar for the quartz diorite and the granodiorite, respectively.
How much is too much?
(2010)
Although dietary nutrient intake is often adequate, nutritional supplement use is common among elite athletes. However, high-dose supplements or the use of multiple supplements may exceed the recommended daily allowance (RDA) of particular nutrients or even result in a daily intake above tolerable upper limits (UL). The present case report presents nutritional intake data and supplement use of a highly trained male swimmer competing at international level. Habitual energy and micronutrient intake were analysed by 3 d dietary reports. Supplement use and dosage were assessed, and total amount of nutrient supply was calculated. Micronutrient intake was evaluated based on RDA and UL as presented by the European Scientific Committee on Food, and maximum permitted levels in supplements (MPL) are given. The athlete’s diet provided adequate micronutrient content well above RDA except for vitamin D. Simultaneous use of ten different supplements was reported, resulting in excess intake above tolerable UL for folate, vitamin E and Zn. Additionally, daily supplement dosage was considerably above MPL for nine micronutrients consumed as artificial products. Risks and possible side effects of exceeding UL by the athlete are discussed. Athletes with high energy intake may be at risk of exceeding UL of particular nutrients if multiple supplements are added. Therefore, dietary counselling of athletes should include assessment of habitual diet and nutritional supplement intake. Educating athletes to balance their diets instead of taking supplements might be prudent to prevent health risks that may occur with long-term excess nutrient intake.
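The total-intake bookkeeping underlying the case report can be sketched as follows; nutrient names, units and every number below are placeholders, not the athlete's data or the official European reference values. For each micronutrient, diet and supplement contributions are summed and compared against RDA and UL:

```python
# Sketch with placeholder values (not the case-report data): classify
# combined dietary + supplement intake against RDA and the tolerable
# upper limit (UL) for each micronutrient.

def classify_intake(diet, supplements, rda, ul):
    """All arguments map nutrient name -> daily amount (the same unit
    must be used for each nutrient across all four mappings).
    Returns nutrient -> 'below RDA', 'adequate', or 'above UL'."""
    status = {}
    for nutrient in rda:
        total = diet.get(nutrient, 0.0) + supplements.get(nutrient, 0.0)
        if total > ul[nutrient]:
            status[nutrient] = "above UL"
        elif total < rda[nutrient]:
            status[nutrient] = "below RDA"
        else:
            status[nutrient] = "adequate"
    return status

diet = {"folate_ug": 350, "vitamin_D_ug": 3, "zinc_mg": 12}   # placeholders
supp = {"folate_ug": 800, "zinc_mg": 30}                      # placeholders
rda  = {"folate_ug": 400, "vitamin_D_ug": 15, "zinc_mg": 10}  # placeholders
ul   = {"folate_ug": 1000, "vitamin_D_ug": 100, "zinc_mg": 25}
print(classify_intake(diet, supp, rda, ul))
```

The pattern in this toy example mirrors the report's finding: a diet adequate on its own, a vitamin D shortfall, and UL exceedances that appear only once supplements are stacked on top.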
Think local, sell global
(2010)
Contents:
- Introduction: The problem at hand
- Approaches to EU’s external identity making
- Mechanisms of external identity making
- Theoretical approaches to the EU’s external identity making
- The EU’s external identity promotion
- The ENP policy instruments
- Conclusions
- References
This professorial dissertation collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses effective income taxation at the individual level, in particular with respect to the top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has significantly declined. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses, such as labor income or transfer incomes, show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax.
We find that today’s high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts hints at considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. For lack of detailed accounting data, it is hard to give precise reasons for the presumptive tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany’s ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment. The reform’s impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they widely reduce the marginal tax rates and thus the incentives for energy saving.
Although the reform of the special provisions in 2003 increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes show a clearly regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers widely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco tax revenue through an eco bonus would make the reform clearly progressive.
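The apportionment step in the second study of Chapter 2, allocating one income tax liability across income sources, can be illustrated with a simplified sketch; the numbers are synthetic and the scheme shown (proportional over positive incomes only) is just one of the alternatives the study compares:

```python
# Sketch (synthetic numbers, one illustrative apportionment scheme):
# allocate an individual's income tax liability to income sources in
# proportion to positive incomes, so loss-making sources carry no share.

def apportion_tax(tax_liability, incomes):
    """incomes: source -> amount (negative values are losses).
    Returns source -> allocated share of the tax liability.
    Assumes at least one source has positive income."""
    positive_total = sum(v for v in incomes.values() if v > 0)
    return {src: (tax_liability * v / positive_total if v > 0 else 0.0)
            for src, v in incomes.items()}

incomes = {"labor": 60000.0, "capital": 10000.0, "renting": -20000.0}
shares = apportion_tax(15000.0, incomes)
print(shares)
# implicit tax rate per source = allocated tax / source income
rates = {s: shares[s] / incomes[s] for s in shares if incomes[s] > 0}
print(rates)
```

How losses are treated is exactly the design choice the study varies: ignoring the renting loss here raises the implicit rate attributed to labor and capital, which is the mechanism behind the finding that the choice of scheme markedly shifts tax shares across income types.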
Eye movements in reading are sensitive to foveal and parafoveal word features. Whereas the influence of orthographic or phonological parafoveal information on gaze control is undisputed, there has been no reliable evidence for early parafoveal extraction of semantic information in alphabetic script. Using a novel combination of the gaze-contingent fast-priming and boundary paradigms, we demonstrate semantic preview benefit when a semantically related parafoveal word was available during the initial 125 ms of a fixation on the pre-target word (Experiments 1 and 2). When the target location was made more salient, significant parafoveal semantic priming occurred only at 80 ms (Experiment 3). Finally, with short primes only (20, 40, 60 ms) effects were not significant but numerically in the expected direction for 40 and 60 ms (Experiment 4). In all experiments, fixation durations on the target word increased with prime durations under all conditions. The evidence for extraction of semantic information from the parafoveal word favors an explanation in terms of parallel word processing in reading.
Parafoveal Load of Word N+1 Modulates Preprocessing Effectiveness of Word N+2 in Chinese Reading
(2010)
Preview benefits (PBs) from two words to the right of the fixated one (i.e., word N+2) and associated parafoveal-on-foveal effects are critical for proposals of distributed lexical processing during reading. This experiment examined parafoveal processing during reading of Chinese sentences, using a boundary manipulation of N+2-word preview with low- and high-frequency words N+1. The main findings were (a) an identity PB for word N+2 that was (b) primarily observed when word N+1 was of high frequency (i.e., an interaction between frequency of word N+1 and PB for word N+2), and (c) a parafoveal-on-foveal frequency effect of word N+1 for fixation durations on word N. We discuss implications for theories of serial attention shifts and parallel distributed processing of words during reading.
The present study explores the role of word position-in-text in sentence and paragraph reading. Three eye-movement data sets based on the reading of Dutch and German unrelated sentences reveal a sizeable, replicable increase in reading times over several words in the beginning and the end of sentences. The data from the paragraph-based English-language Dundee corpus replicate the pattern and also indicate that the increase in inspection times is driven by the visual boundaries of the text organized in lines, rather than by syntactic sentence boundaries. We argue that this effect is independent of several established lexical, contextual and oculomotor predictors of eye-movement behavior. We also provide evidence that the effect of word position-in-text has two independent components: a start-up effect arguably caused by a strategic oculomotor program of saccade planning over the line of text, and a wrap-up effect originating in cognitive processes of comprehension and semantic integration.
Within our research group Bayesian Risk Solutions we have coined the idea of a Bayesian Risk Management (BRM). It claims (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov Chain Monte Carlo (MCMC) simulation algorithm. It provides a full illustration of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. Besides, we calculate derived risk measures (ex ante/ex post value-at-risk, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept. It is a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools supporting experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for its arbitrariness. Therefore, we introduce the idea of a Bayesian due diligence, judging expert assessments according to their information content and their inter-subjectivity.
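The MCMC machinery referred to above can be illustrated by a minimal Metropolis sampler; the Gaussian-mean model and the synthetic data below are a toy example, not the thesis's CO2 price processes. The point of the sketch is that the output is a whole posterior sample exposing data-induced uncertainty, rather than a single point estimate:

```python
import math, random

# Minimal Metropolis sketch (toy example, not the thesis's models):
# sample the posterior of a Gaussian mean mu (known sigma = 1, flat
# prior) given a small synthetic data set.

random.seed(42)
data = [1.8, 2.2, 1.9, 2.4, 2.1]

def log_post(mu):
    # log posterior up to a constant: flat prior, unit-variance likelihood
    return -0.5 * sum((x - mu) ** 2 for x in data)

def metropolis(n_iter=20000, step=0.5, mu0=0.0):
    mu, lp = mu0, log_post(mu0)
    samples = []
    for _ in range(n_iter):
        cand = mu + random.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(random.random()) < lp_cand - lp:   # accept/reject
            mu, lp = cand, lp_cand
        samples.append(mu)
    return samples[n_iter // 2:]       # discard first half as burn-in

draws = metropolis()
mean = sum(draws) / len(draws)
print(mean)   # close to the data mean of 2.08
```

Any quantile or tail probability can be read off the same draws, which is how derived risk measures such as a value-at-risk inherit their uncertainty bands in the Bayesian setting.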
Development of techniques for earthquake microzonation studies in different urban environments
(2010)
The proliferation of megacities in many developing countries, their location in areas exposed to high risk from large earthquakes, and a general lack of preparation demonstrate the need for improved capabilities in hazard assessment, as well as for the rapid adjustment and development of land-use planning. In particular, within the context of seismic hazard assessment, the evaluation of local site effects and their influence on the spatial distribution of ground shaking generated by an earthquake plays an important role. It follows that carrying out earthquake microzonation studies, which aim at identifying areas within the urban environment that are expected to respond in a similar way to a seismic event, is essential for the reliable risk assessment of large urban areas. Considering the rate at which many large towns in developing countries prone to large earthquakes are growing, their seismic microzonation has become mandatory. Such activities are challenging, and techniques suitable for identifying site effects within such contexts are needed. In this dissertation, I develop techniques for investigating large-scale urban environments that aim at being non-invasive, cost-effective and quickly deployable. These characteristics allow one to investigate large areas over a relatively short time frame, with a spatial sampling resolution sufficient to provide reliable microzonation. Although there is a negative trade-off between the completeness of available information and the extent of the investigated area, I attempt to mitigate this limitation by combining two, what I term layers, of information: in the first layer, the site effects at a few calibration points are well constrained by analyzing earthquake data or using other geophysical information (e.g., shear-wave velocity profiles); in the second layer, the site effects over a larger areal coverage are estimated by means of single-station noise measurements.
The microzonation is performed in terms of problem-dependent quantities, by considering a proxy suitable for linking information from the first layer to the second one. In order to define the microzonation approach proposed in this work, different methods for estimating site effects were combined and tested in Potenza (Italy), where a considerable amount of data was available. In particular, the horizontal-to-vertical spectral ratio computed for seismic noise recorded at different sites was used as a proxy to combine the two levels of information and to create a microzonation map in terms of spectral intensity ratio (SIR). In the next step, I applied this two-layer approach to Istanbul (Turkey) and Bishkek (Kyrgyzstan). A similar hybrid approach, i.e., combining earthquake and noise data, was used for the microzonation of these two different urban environments. For both cities, after calibrating the fundamental frequencies of resonance estimated from seismic noise against those obtained by analysing earthquakes (first layer), a fundamental frequency map was computed using the noise measurements carried out within the town (second layer). By applying this new approach, maps of the fundamental frequency of resonance for Istanbul and Bishkek have been published for the first time. In parallel, a microzonation map in terms of SIR was incorporated into a risk scenario for the Potenza test site by means of a dedicated regression between spectral intensity (SI) and macroseismic intensity (EMS). The scenario study confirms the importance of site effects within the risk chain: their introduction into the scenario led to an increase of about 50% in the estimated number of buildings that would partially or totally collapse. Last, but not least, considering that the approach developed and applied in this work is based on measurements of seismic noise, the reliability of such measurements has been assessed.
A theoretical model describing the self-noise curves of different instruments usually adopted in microzonation studies (e.g., those used in Potenza, Istanbul and Bishkek) has been considered and compared with empirical data recorded in Cologne (Germany) and Gubbio (Italy). The results show that, depending on the geological and environmental conditions, instrumental noise can severely bias the results obtained by recording and analysing ambient noise. Therefore, in this work I also provide some guidelines for measuring seismic noise.
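The single-station noise technique at the heart of the second layer, the horizontal-to-vertical spectral ratio, can be sketched as follows on synthetic three-component noise with an artificial 2 Hz resonance on the horizontals. All numbers are illustrative; real processing would use recorded data and dedicated smoothing (e.g. Konno-Ohmachi), which is omitted here in favor of a simple median over windows.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, nwin, wlen = 100.0, 20, 4096      # sampling rate (Hz), windows, samples per window
t = np.arange(wlen) / fs

def component(resonant):
    """One windowed noise trace; horizontals get an artificial 2 Hz resonance."""
    x = rng.normal(size=wlen)
    if resonant:
        x += 3.0 * np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
    return x * np.hanning(wlen)

freqs = np.fft.rfftfreq(wlen, 1 / fs)
hv = []
for _ in range(nwin):
    N = np.abs(np.fft.rfft(component(True)))    # north-south
    E = np.abs(np.fft.rfft(component(True)))    # east-west
    V = np.abs(np.fft.rfft(component(False)))   # vertical
    hv.append(np.sqrt(N * E) / V)

hv_med = np.median(hv, axis=0)        # median over windows tames unstable ratios
band = (freqs > 0.5) & (freqs < 10.0)
f0 = freqs[band][np.argmax(hv_med[band])]
print(f"estimated fundamental frequency: {f0:.2f} Hz")
```

The peak of the H/V curve recovers the resonance frequency planted in the horizontal components, which is how the fundamental frequency maps for Istanbul and Bishkek are built from single-station measurements.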
Indonesia is one of the countries most prone to natural hazards. The complex interaction of several tectonic plates with high relative velocities leads to approximately two earthquakes with magnitude Mw > 7 every year, more than 15% of such events worldwide. Earthquakes with magnitude above 9 happen far more infrequently, but with catastrophic effects. The most severe consequences arise from tsunamis triggered by these subduction-related earthquakes, as the Sumatra-Andaman event in 2004 showed. In order to enable efficient tsunami early warning, which includes the estimation of wave heights and arrival times, it is necessary to combine different types of real-time sensor data with numerical models of earthquake sources and tsunami propagation. This thesis was created as a result of the GITEWS project (German Indonesian Tsunami Early Warning System). It is based on five research papers and manuscripts. The main project-related task was the development of a database containing realistic earthquake scenarios for the Sunda Arc. This database provides initial conditions for the tsunami propagation modeling used by the simulation system at the early warning center. An accurate discretization of the subduction geometry, consisting of 25x150 subfaults, was constructed based on seismic data. Green's functions, representing the deformational response to unit dip- and strike-slip at the subfaults, were computed using a layered half-space approach. Different scaling relations for earthquake dimensions and slip distribution were implemented. Another project-related task was the further development of the 'GPS-shield' concept, a constellation of near-field GPS receivers, which are shown to be very valuable for tsunami early warning. The major part of this thesis is related to the geophysical interpretation of GPS data. Coseismic surface displacements caused by the 2004 Sumatra earthquake are inverted for slip at the fault.
The effect of different Earth layer models is tested, favoring a continental structure. The possibility of splay faulting is considered and shown to be a second-order effect with respect to tsunamigenicity for this event. Tsunami models based on source inversions are compared to satellite radar altimetry observations. Postseismic GPS time series are used to test a wide parameter range of uni- and biviscous rheological models of the asthenosphere. Steady-state Maxwell rheology is shown to be incompatible with near-field GPS data, unless large afterslip, amounting to more than 10% of the coseismic moment, is assumed. In contrast, transient Burgers rheology is in agreement with the data without the need for large aseismic afterslip. Comparison to postseismic geoid observations by the GRACE satellites reveals that, even with afterslip, the model implementing Maxwell rheology yields amplitudes that are too small, and thus supports a biviscous asthenosphere. A simple approach based on the assumption of quasi-static deformation propagation is introduced and proposed for the inversion of coseismic near-field GPS time series. Application of this approach to observations from the 2004 Sumatra event fails to quantitatively reconstruct the rupture propagation, since the a priori conditions are not fulfilled in this case. However, synthetic tests reveal the feasibility of such an approach for fast estimation of rupture properties.
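The slip inversion from coseismic GPS offsets described above reduces to a linear system d = G s, with G assembled from subfault Green's functions. A damped least-squares sketch on a toy one-dimensional geometry follows; the Gaussian kernel and the dimensions are illustrative stand-ins, not the actual layered half-space functions or the 25x150 subfault grid.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_sub = 30, 8                         # toy numbers, not the real 25x150 grid

# G[i, j]: surface displacement at station i due to unit slip on subfault j.
# A smooth made-up kernel stands in for the layered half-space Green's functions.
sta = np.linspace(0.0, 1.0, n_sta)
sub = np.linspace(0.0, 1.0, n_sub)
G = np.exp(-((sta[:, None] - sub[None, :]) ** 2) / 0.005)

true_slip = 5.0 * np.sin(np.pi * sub)                # smooth slip distribution (metres)
d_obs = G @ true_slip + rng.normal(0, 0.05, n_sta)   # noisy coseismic offsets

# Damped least squares: minimize ||G s - d||^2 + eps^2 ||s||^2
eps = 0.1
A = np.vstack([G, eps * np.eye(n_sub)])
b = np.concatenate([d_obs, np.zeros(n_sub)])
slip_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

rms = np.sqrt(np.mean((slip_hat - true_slip) ** 2))
print("rms slip misfit:", rms)
```

The damping term regularizes the inversion against observation noise; in a realistic setting one would also impose positivity and smoothness constraints on the slip.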
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake hazard. Macroseismic data for the 1985 Valparaiso and the 2010 Maule events show large variations in the distribution of damage to buildings within short distances, indicating a strong influence of the local sediments and of the shape of the sediment-bedrock interface on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from earthquake data and ambient noise. The analysis of earthquake data shows significant dependence on the local geological structure with regard to amplitude and duration. Moreover, the analysis of noise spectral ratios shows that they can provide a lower bound in amplitude for site amplification and, since no variability in terms of time and amplitude is observed, that it is possible to map the fundamental resonance frequency of the soil for a 26 km x 12 km area in the northern part of the Santiago de Chile basin. By inverting the noise spectral ratios, local shear wave velocity profiles could be derived under the constraint of the thickness of the sedimentary cover, which had previously been determined by gravimetric measurements. The resulting 3D model was derived by interpolation between the single shear wave velocity profiles; it shows locally good agreement with the few existing velocity profile data, but allows the entire area, as well as deeper parts of the basin, to be represented in greater detail. The wealth of available data further allowed checking whether the correlation between the shear wave velocity in the uppermost 30 m (vs30) and the slope of topography, a technique recently proposed by Wald and Allen (2007), holds on a local scale.
While one lithology might show considerable scatter in the velocity values for the investigated area, almost no correlation between topographic gradient and calculated vs30 exists, whereas a better link is found between vs30 and the local geology. When comparing the vs30 distribution with the MSK intensities for the 1985 Valparaiso event, it becomes clear that high intensities are found where the expected vs30 values are low and the sedimentary cover is thick. Although this evidence cannot be generalized to all possible earthquakes, it indicates the influence of site effects in modifying ground motion when earthquakes occur well outside the Santiago basin. Using the attained knowledge of the basin characteristics, simulations of strong ground motion within the Santiago Metropolitan area were carried out by means of the spectral element technique. The simulation of a regional event, which was also recorded by a dense network installed in the city of Santiago to record aftershock activity following the 27 February 2010 Maule earthquake, shows that the model is capable of realistically calculating ground motion in terms of amplitude, duration, and frequency and, moreover, that the surface topography and the shape of the sediment-bedrock interface strongly modify ground motion in the Santiago basin. An examination of the dependence of ground motion on hypocenter location for a hypothetical event along the active San Ramón fault, which crosses the eastern outskirts of the city, shows that the unfavorable interaction between fault rupture, radiation mechanism, and complex geological conditions in the near-field may give rise to large values of peak ground velocity and therefore considerably increase the level of seismic risk for Santiago de Chile.
The Antarctic plays an important role in the global climate system. On the one hand, the Antarctic Ice Sheet is the largest freshwater reservoir on Earth. On the other hand, a major proportion of the global bottom-water formation takes place in Antarctic shelf regions, forcing the global thermohaline circulation. The main goal of this dissertation is to provide new insights into the dynamics and stability of the East Antarctic Ice Sheet (EAIS) during the Quaternary. Additionally, variations in the activity of bottom-water formation and their causes are investigated. The dissertation is a German contribution to the International Polar Year 2007/2008 and was funded by the Deutsche Forschungsgemeinschaft (DFG) within the scope of priority program 1158 'Antarctic research with comparative studies in Arctic ice regions'. During RV Polarstern expedition ANT-XXIII/9, glaciomarine sediments were recovered from the Prydz Bay-Kerguelen region. Prydz Bay is a key region for the study of EAIS dynamics, as 16% of the EAIS is drained through the Lambert Glacier into the bay. Thereby, the glacier transports sediment into Prydz Bay, which is then further distributed by calving icebergs or by currents. The scientific approach of this dissertation is the reconstruction of past glaciomarine environments in order to infer the response of the Lambert Glacier-Amery Ice Shelf system to climate shifts during the Quaternary. To characterize the depositional setting, sedimentological methods are used and statistical analyses are applied. Mineralogical and (bio)geochemical methods provide a means to reconstruct sediment provenances and to provide evidence on changes in primary production in the surface water column. Age-depth models were constructed based on palaeomagnetic and palaeointensity measurements, diatom stratigraphy and radiocarbon dating. Sea-bed surface sediments in the investigation area show distinct variations in their clay mineral and heavy-mineral assemblages.
Considerable differences in the mineralogical composition of surface sediments are determined on the continental shelf. Clay minerals as well as heavy minerals provide useful parameters to differentiate between sediments which originated from the erosion of crystalline rocks and sediments originating from Permo-Triassic deposits. Consequently, mineralogical parameters can be used to reconstruct the provenance of current-transported and ice-rafted material. The investigated sediment cores cover the time intervals of the last 1.4 Ma (continental slope) and the last 12.8 cal. ka BP (MacRobertson shelf). The sediment deposits were mainly influenced by glacial and oceanographic processes, and further by biological activity (continental shelf), meltwater input and possibly gravitational transport. Sediments from the continental slope document two major deglacial events. The first deglaciation is associated with the mid-Pleistocene warming recognized around the Antarctic. In Prydz Bay, the Lambert Glacier-Amery Ice Shelf retreated far to the south, and high biogenic productivity commenced, or biogenic remains were better preserved due to increased sedimentation rates. Thereafter, stable glacial conditions continued until 400-500 ka BP. Calving of icebergs was restricted to the western part of the Lambert Glacier. The deeper bathymetry in this area allows for a floating ice shelf even during times of decreased sea level. Between 400-500 ka BP and the last interglacial (marine isotope stage 5), the glacier was more dynamic. During or shortly after the last interglacial, the Lambert Glacier-Amery Ice Shelf retreated again due to a sea-level rise of 6-9 m. Both deglacial events correlate with a reduction in the thickness of ice masses in the Prince Charles Mountains. This indicates that a disintegration of the Amery Ice Shelf possibly led to increased drainage of ice masses from the Prydz Bay hinterland.
A new end-member modelling algorithm was successfully applied to sediments from the MacRobertson shelf in order to unmix the sand grain-size fractions sorted by current activity and ice transport, respectively. Ice retreat on the MacRobertson shelf commenced 12.8 cal. ka BP and ended around 5.5 cal. ka BP. During the Holocene, strong fluctuations of the bottom-water activity were observed, probably related to variations of sea-ice formation in the Cape Darnley polynya. Increased activity of bottom-water flow was reconstructed at transitions from warm to cool conditions, whereas bottom-water activity receded during the mid-Holocene climate optimum. It can be concluded that the Lambert Glacier-Amery Ice Shelf system was relatively stable with respect to climate variations during the Quaternary. In contrast, bottom-water formation due to polynya activity was very sensitive to changes in atmospheric forcing and should gain more attention in future research.
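End-member modelling of grain-size distributions is, at its core, a non-negative unmixing problem. The compact sketch below uses a generic non-negative matrix factorization on synthetic two-mode distributions; it stands in for the thesis's actual algorithm, which is not reproduced here, and all shapes and mode positions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bins, n_samples, k = 40, 60, 2
x = np.linspace(0, 1, n_bins)

# Two synthetic end-members: a narrow "current-sorted" and a broad "ice-rafted" mode
em1 = np.exp(-((x - 0.3) ** 2) / 0.005); em1 /= em1.sum()
em2 = np.exp(-((x - 0.7) ** 2) / 0.02);  em2 /= em2.sum()
mix = rng.uniform(0, 1, n_samples)
X = np.outer(em1, mix) + np.outer(em2, 1 - mix)   # observed distributions (bins x samples)

# Non-negative matrix factorization X ~ W @ H via multiplicative updates
W = rng.uniform(0.1, 1, (n_bins, k))
H = rng.uniform(0.1, 1, (k, n_samples))
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print("relative reconstruction error:", err)
```

The columns of W recover the end-member shapes and the rows of H their mixing proportions per sample, which is the information used to separate current-sorted from ice-rafted deposition histories.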
This work describes the realization of physically crosslinked networks based on gelatin through the introduction of functional groups enabling specific supramolecular interactions. Molecular models were developed in order to predict the material properties and to establish a knowledge-based approach to material design. The effect of additional supramolecular interactions with hydroxyapatite was then studied in composite materials. The calculated properties are compared to experimental results to validate the models, which are then further used for the study of physically crosslinked networks. Gelatin was functionalized with desaminotyrosine (DAT) and desaminotyrosyl-tyrosine (DATT) side groups, derived from the natural amino acid tyrosine. These groups can potentially undergo π-π and hydrogen-bonding interactions, also under physiological conditions. Molecular dynamics (MD) simulations were performed on models with 0.8 wt.-% or 25 wt.-% water content, using the second-generation forcefield CFF91. The models were validated by comparison with specific experimental data such as density, peptide conformational angles and X-ray scattering spectra. The models were then used to predict the supramolecular organization of the polymer chains, analyze the formation of physical netpoints and calculate the mechanical properties. An important finding of the simulations was that the number of observed physical netpoints increased with the number of aromatic groups. The number of relatively stable physical netpoints, on average 0 for natural gelatin, increased to 1 and 6 for DAT- and DATT-functionalized gelatins, respectively. A comparison with the Flory-Rehner model suggested a reduction of the equilibrium swelling by a factor of 6 for the DATT-functionalized materials in water. The functionalized gelatins could be synthesized by chemoselective coupling of the free carboxylic acid groups of DAT and DATT to the free amino groups of gelatin.
At 25 wt.-% water content, the simulated and experimentally determined elastic mechanical properties (e.g. Young's modulus) were both on the order of GPa and were not influenced by the degree of aromatic modification. The experimental equilibrium degree of swelling in water decreased with an increasing number of inserted aromatic functions (from 2800 vol.-% for pure gelatin to 300 vol.-% for the DATT-modified gelatin); at the same time, Young's modulus, elongation at break, and maximum tensile strength increased. It could be shown that the functionalization with DAT and DATT, together with controlled drying conditions, influences the chain organization of gelatin-based materials. Functionalization with DAT and DATT led to a drastic reduction of helical renaturation, which could be controlled more finely by the applied drying conditions. The properties of the materials could thus be influenced by two independent methods. Composite materials of DAT- and DATT-functionalized gelatins with hydroxyapatite (HAp) show a drastic reduction of the degree of swelling. In tensile tests and rheological measurements, the composites equilibrated in water had increased Young's moduli (from 200 kPa up to 2 MPa) and tensile strength (from 57 kPa up to 1.1 MPa) compared to the natural polymer matrix, without affecting the elongation at break. Furthermore, an increase in the thermal stability of the networks from 40 °C to 85 °C could be demonstrated. The differences in behaviour between the functionalized gelatins and pure gelatin as matrix suggest an additional stabilizing bond between the incorporated aromatic groups and the hydroxyapatite.
Estimation and testing of the effect of covariates in accelerated lifetime models under censoring
(2010)
The accelerated lifetime model is considered. To test the influence of the covariate, we transform the model into a regression model. Since censoring is allowed, this approach leads to a goodness-of-fit problem for regression functions under censoring. Nonparametric estimation of regression functions under censoring is therefore investigated, a limit theorem for an L2-distance is stated, and a test procedure is formulated. Finally, a Monte Carlo procedure is proposed.
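The model transformation mentioned above can be made concrete: in the accelerated lifetime model, log T = b0 + b1*z + sigma*eps is an ordinary regression problem, complicated by censoring. The sketch below handles censoring with inverse-probability-of-censoring weights for an assumed known uniform censoring law, a standard illustrative device; the paper's actual nonparametric L2-type test is not reproduced here, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Accelerated lifetime model: log T = b0 + b1*z + sigma*eps
b0, b1, sigma = 0.5, 0.8, 0.5
z = rng.uniform(0, 1, n)
T = np.exp(b0 + b1 * z + sigma * rng.normal(size=n))
C = rng.uniform(0, 10, n)                  # independent uniform censoring times
Y = np.minimum(T, C)                       # observed (possibly censored) lifetime
delta = (T <= C).astype(float)             # 1 = uncensored

# Inverse-probability-of-censoring weights, G(y) = P(C > y) = 1 - y/10
G = np.clip(1 - Y / 10, 0.05, None)        # floor avoids exploding weights
w = delta / G

# Weighted least squares recovers the regression of log T on z despite censoring
X = np.column_stack([np.ones(n), z])
sw = np.sqrt(w)
beta_hat, *_ = np.linalg.lstsq(X * sw[:, None], np.log(Y) * sw, rcond=None)
print("estimated (b0, b1):", beta_hat)
```

Censored observations receive weight zero, and uncensored ones are up-weighted by the inverse survival probability of the censoring time, so the weighted normal equations remain unbiased for the regression of log T on z.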
On the relation between implicit Theory of Mind and the comprehension of complement sentences
(2010)
Previous studies on the relation between language and social cognition have shown that children's mastery of embedded sentential complements plays a causal role in the development of a Theory of Mind (ToM). Children start to succeed on complementation tasks, in which they are required to report the content of an embedded clause, in the second half of the fourth year. Traditional ToM tasks test the child's ability to predict that a person who holds a false belief (FB) about a situation will act "falsely". In these tasks, children do not represent FBs until the age of 4 years. According to the linguistic determinism hypothesis, only the unique syntax of complement sentences provides the format for representing FBs. However, experiments measuring children's looking behavior instead of their explicit predictions have provided evidence that 2-year-olds already possess an implicit ToM. This dissertation examined whether there is also an interrelation between implicit ToM and the comprehension of complement sentences in typically developing German preschoolers. Two studies were conducted. In a correlational study (Study 1), 3-year-old children's performance on a traditional (explicit) FB task, on an implicit FB task, and on language tasks measuring comprehension of tensed sentential complements was collected and tested for interdependence. Eye-tracking methodology was used to assess implicit ToM by measuring participants' spontaneous anticipatory eye movements while they were watching FB movies. Two central findings emerged. First, predictive looking (implicit ToM) was not correlated with complement mastery, although both measures were associated with explicit FB task performance. This pattern of results suggests that explicit, but not implicit, ToM is language dependent. Second, as a group, 3-year-olds did not display implicit FB understanding; that is, previous findings of a precocious reasoning ability could not be replicated.
This indicates that the characteristics of predictive looking tasks play a role in the elicitation of implicit FB understanding, as the current task was completely nonverbal and as complex as traditional FB tasks. Study 2 took a methodological approach by investigating whether children display an earlier comprehension of sentential complements when the same means of measurement is used as in experimental tasks tapping implicit ToM, namely anticipatory looking. Two experiments were conducted. 3-year-olds were confronted either with a complement sentence expressing the protagonist's FB (Exp. 1) or with a complex sentence expressing the protagonist's belief without giving any information about the truth/falsity of the belief (Exp. 2). Afterwards, their expectations about the protagonist's future behavior were measured. Overall, implicit measures reveal no considerably earlier understanding of sentential complementation. Whereas 3-year-olds did not display a comprehension of complex sentences when these embedded a false proposition, children from 3;9 years on were proficient in processing complement sentences when the truth value of the embedded proposition could not be evaluated. This pattern of results suggests that (1) the linguistic expression of a person's FB does not elicit implicit FB understanding and that (2) the assessment of the purely syntactic understanding of complement sentences is affected by competing reality information. In conclusion, this dissertation found no evidence that implicit ToM is related to the comprehension of sentential complementation. The findings suggest that implicit ToM might be based on nonlinguistic processes. The results are discussed in the light of recently proposed dual-process models that assume two cognitive mechanisms accounting for different levels of ToM task performance.
Ghrelin is a unique hunger-inducing stomach-borne hormone. It activates orexigenic circuits in the central nervous system (CNS) when acylated with a fatty acid residue by the Ghrelin O-acyltransferase (GOAT). Soon after the discovery of ghrelin a theoretical model emerged which suggests that the gastric peptide ghrelin is the first “meal initiation molecule
In a very simplified view, plant leaf growth can be reduced to two processes, cell division and cell expansion, accompanied by expansion of the surrounding cell walls. The vacuole, being the largest compartment of the plant cell, plays a major role in controlling the water balance of the plant. This is achieved by regulating the osmotic pressure through import and export of solutes across the vacuolar membrane (the tonoplast) and by controlling the water channels, the aquaporins. Together with the control of cell wall relaxation, vacuolar osmotic pressure regulation is thought to play an important role in cell expansion, directly by providing cell volume and indirectly by providing ion and pH homeostasis for the cytoplasm. In this thesis, the role of tonoplast protein-coding genes in cell expansion in the model plant Arabidopsis thaliana is studied, and genes with a putative role in growth are identified. Since there is, to date, no clearly identified protein localization signal for the tonoplast, genome-wide prediction of proteins localized to this compartment is not possible. Thus, a series of recent proteomic studies of the tonoplast was used to compile a list of cross-membrane tonoplast protein-coding genes (117 genes); other growth-related genes, notably from the growth-regulating factor (GRF) and expansin families, were also included (26 genes). For these genes, a platform for high-throughput reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) was developed by selecting specific primer pairs. To this end, a software tool (called QuantPrime, see http://www.quantprime.de) was developed that automatically designs such primers and tests their specificity in silico against whole transcriptomes and genomes, to avoid cross-hybridizations causing unspecific amplification. The RT-qPCR platform was used in an expression study in order to identify candidate growth-related genes.
Here, a growth-associated spatio-temporal leaf sampling strategy was used, targeting growing regions at developmental stages of high expansion and comparing them to samples taken from non-expanding regions or stages of low expansion. Candidate growth-related genes were identified by applying a template-based scoring analysis to the expression data, ranking the genes according to their association with leaf expansion. To analyze the functional involvement of these genes in leaf growth on a macroscopic scale, knockout mutants of the candidate growth-related genes were screened for growth phenotypes. To this end, a system for non-invasive automated leaf growth phenotyping was established, based on a commercially available image capture and analysis system. A software package was developed for detailed developmental stage annotation of the images captured with the system, and an analysis pipeline was constructed for automated data pre-processing and statistical testing, including modeling and graph generation, for various growth-related phenotypes. Using this system, 24 knockout mutant lines were analyzed, and significant growth phenotypes were found for five different genes.
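Template-based scoring of expression data can be sketched, under simplified assumptions, as a correlation between each gene's expression profile and an idealized growth template; the gene names, template, and data below are invented for illustration and do not reflect the thesis's actual scoring scheme in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
genes = [f"gene_{i}" for i in range(6)]

# Samples ordered from high-expansion to non-expanding leaf regions/stages
template = np.array([1.0, 1.0, 0.5, 0.0, 0.0])   # idealized growth-associated profile

expr = rng.normal(0, 1, (len(genes), len(template)))
expr[0] = 4 * template + rng.normal(0, 0.2, 5)         # strongly growth-associated
expr[1] = 4 * (1 - template) + rng.normal(0, 0.2, 5)   # anti-correlated gene

# Score each gene by Pearson correlation with the template, rank descending
scores = {g: np.corrcoef(expr[i], template)[0, 1] for i, g in enumerate(genes)}
ranking = sorted(genes, key=lambda g: -scores[g])
print("top candidate:", ranking[0])
```

Genes whose profiles track the expansion gradient rise to the top of the ranking and become the candidates carried forward into the knockout screen.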
The special relationship between Humboldt and Darwin, two of the most important figures in the world of the natural sciences and of nineteenth-century biology, is analyzed in detail on the various levels of their contact: the personal meeting that actually took place, their correspondence, and the coincidence of their ideas. This reciprocal view shows us how the two scholars perceived each other, whether they really tried to break with the paradigm of their eminent predecessors, or whether they merely extended step by step the knowledge already attained, until the creation of an ingenious idea brought about a break with previous knowledge. Darwin's repeated references to Humboldt's works are well known, in particular to the diaries of the German naturalist and to his way of describing the American natural world in all its richness. Less well known, however, are other references in his autobiography, the scientific use of Humboldt's work, and the citations in his correspondence, which are presented in this contribution. In addition, Humboldt's use of Darwin's early writings in some of his publications, above all in Kosmos, is discussed.
This thesis is concerned with the extinction of populations composed of different types of individuals, their behavior before extinction, and the case of a very late extinction. We approach this question firstly from a strictly probabilistic viewpoint, and secondly from the standpoint of risk analysis related to the extinction of a particular model of population dynamics. In this context we propose several statistical tools. The population size is modeled by a branching process, which is either a continuous-time multitype Bienaymé-Galton-Watson process (BGWc) or its continuous-state counterpart, the multitype Feller diffusion process. We are interested in different kinds of conditioning on non-extinction, and in the associated equilibrium states. These ways of conditioning have been widely studied in the monotype case. However, the literature on multitype processes is much less extensive, and there is no systematic work establishing connections between the results for BGWc processes and those for Feller diffusion processes. In the first part of this thesis, we investigate the behavior of the population before its extinction by conditioning the associated branching process Xt on non-extinction (Xt ≠ 0), or more generally on non-extinction in a near future θ ≥ 0 (Xt+θ ≠ 0), and by letting t tend to infinity. We prove the result, new in the multitype framework and for θ > 0, that this limit exists and is nondegenerate. This reflects a stationary behavior for the dynamics of the population conditioned on non-extinction, and provides a generalization of the so-called Yaglom limit, corresponding to the case θ = 0. In a second step we study the behavior of the population in the case of a very late extinction, obtained as the limit when θ tends to infinity of the process conditioned on Xt+θ ≠ 0.
The resulting conditioned process is a known object in the monotype case (sometimes referred to as the Q-process), and has also been studied when Xt is a multitype Feller diffusion process. We investigate the not yet considered case where Xt is a multitype BGWc process and prove the existence of the associated Q-process. In addition, we examine its properties, including the asymptotic ones, and propose several interpretations of the process. Finally, we are interested in interchanging the limits in t and θ, as well as in the not yet studied commutativity of these limits with respect to the high-density-type relationship between BGWc processes and Feller processes. We prove an original and exhaustive list of all possible exchanges of limits (long-time limit in t, increasing delay of extinction θ, diffusion limit). The second part of this work is devoted to the risk analysis related both to the extinction of a population and to its very late extinction. We consider a branching population model (arising notably in the epidemiological context) for which a parameter related to the first moments of the offspring distribution is unknown. We build several estimators adapted to different stages of evolution of the population (growth phase, decay phase, and decay phase when extinction is expected very late), and prove their asymptotic properties (consistency, normality). In particular, we build a least squares estimator adapted to the Q-process, allowing a prediction of the population development in the case of a very late extinction. This would correspond to the best-case or to the worst-case scenario, depending on whether the population is threatened or invasive. These tools enable us to study the extinction phase of the Bovine Spongiform Encephalopathy epidemic in Great Britain, for which we estimate the infection parameter corresponding to a possible source of horizontal infection persisting after the removal, in 1988, of the major route of infection (meat and bone meal).
This allows us to predict the evolution of the spread of the disease, including the year of extinction, the number of future cases, and the number of infected animals. In particular, we provide a detailed analysis of the evolution of the epidemic in the unlikely event of a very late extinction.
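The quasi-stationary (Yaglom-type) behavior described above can be illustrated numerically. The following sketch is an assumption-laden simplification, not the thesis' setting: it uses a monotype, discrete-time Galton-Watson process with an invented subcritical offspring law, and conditions empirically on non-extinction {Xt ≠ 0}.

```python
import random

# Illustrative sketch (monotype and discrete-time for simplicity; the
# thesis treats continuous-time multitype processes): simulate a
# Galton-Watson process and condition on non-extinction {X_t != 0},
# whose limiting law is the Yaglom limit discussed above.

def step(x, rng):
    # One generation: each of the x individuals reproduces independently
    # with offspring counts 0, 1, 2 (mean 0.9, i.e. subcritical).
    return sum(rng.choices([0, 1, 2], weights=[0.35, 0.4, 0.25], k=x)) if x else 0

def yaglom_sample(t, n_runs, seed=0):
    # Empirical law of X_t given {X_t != 0}, each run starting from X_0 = 1.
    rng = random.Random(seed)
    survivors = []
    for _ in range(n_runs):
        x = 1
        for _ in range(t):
            x = step(x, rng)
            if x == 0:
                break
        if x > 0:
            survivors.append(x)
    return survivors

sizes = yaglom_sample(t=15, n_runs=5000)
# Although extinction is certain, the size distribution conditioned on
# survival stabilizes as t grows (quasi-stationarity).
```

Increasing `t` leaves the empirical distribution of `sizes` roughly unchanged, which is the quasi-stationary phenomenon the thesis generalizes to the multitype, delayed-extinction case.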
We reconsider the fundamental work of Fichtner ([2]) and exhibit the permanental structure of the ideal Bose gas again, using another approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [5, 6] and Mecke [8, 9]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [4]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain path integrals. This representation can be considered as a point process analogue of the old idea of Symanzik [11] that local times and self-crossings of Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Lévy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of higher-order moments of point processes is notoriously complicated. See for instance Krickeberg's calculations for the Poisson or the Cox process in [4].
The aim of these lectures is a reformulation and generalization of the fundamental investigations of Alexander Bach [2, 3] on the concept of probability in the work of Boltzmann [6] in the language of modern point process theory. The dominating point of view here is its subordination under the disintegration theory of Krickeberg [14]. This enables us to make Bach's considerations much more transparent. Moreover, the point process formulation turns out to be the natural framework for applications to quantum mechanical models.
The aim of this paper is to build and compare estimators of the infection parameter in the different phases of an epidemic (growth and extinction phases). The epidemic is modeled by a Markovian process of order d > 1 (allowing non-Markovian life spans), and can be written as a multitype branching process. We propose three estimators suitable for the different classes of criticality of the process, in particular for the subcritical case corresponding to the extinction phase. We prove their consistency and asymptotic normality under two asymptotics, as the number of ancestors (resp. the number of generations) tends to infinity. We illustrate the asymptotic properties with simulated examples, and finally use our estimators to study the infection intensity in the extinction phase of the BSE epidemic in Great Britain.
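The flavor of such offspring-mean estimation can be sketched in a few lines. This is not the paper's multitype estimator: as a hedged stand-in it uses the classical monotype Harris estimator, m_hat = (Z_1 + … + Z_n) / (Z_0 + … + Z_{n-1}), whose consistency as the number of ancestors grows mirrors the asymptotics described above; the offspring law is invented.

```python
import random

# Hedged sketch (the paper builds multitype estimators; this classical
# monotype Harris estimator of the offspring mean m illustrates the
# principle):  m_hat = sum_{k=1..n} Z_k / sum_{k=0..n-1} Z_k.

def simulate_generations(n_gen, n_ancestors, weights, rng):
    # Z_0 = n_ancestors; each individual reproduces independently
    # with offspring counts 0, 1, 2 drawn according to `weights`.
    z = [n_ancestors]
    for _ in range(n_gen):
        z.append(sum(rng.choices([0, 1, 2], weights=weights, k=z[-1])))
    return z

def harris_estimator(z):
    denom = sum(z[:-1])
    return sum(z[1:]) / denom if denom else float("nan")

rng = random.Random(1)
# Critical case: offspring mean 0.4*1 + 0.3*2 = 1.0.
z = simulate_generations(n_gen=10, n_ancestors=500,
                         weights=[0.3, 0.4, 0.3], rng=rng)
m_hat = harris_estimator(z)   # close to the true mean 1.0
```

With many ancestors the estimator concentrates around the true mean, which is the "number of ancestors tends to infinity" asymptotic regime of the paper.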
Estimation and testing of distributions in metric spaces are well known. R.A. Fisher, J. Neyman, W. Cochran and M. Bartlett achieved essential results on the statistical analysis of categorical data, and in the last 40 years many other statisticians have found important results in this field. Data sets often contain categorical data, e.g. levels of factors or names, for which no ordering and no distance between the categories exist. At each level, some metric or categorical values are measured. We introduce a new method of scaling based on statistical decisions. For this we define empirical probabilities for the original observations and find a class of distributions in a metric space in which these empirical probabilities can be regarded as approximations of equivalently defined probabilities. With this method we identify probabilities connected with the categorical data with probabilities in metric spaces. This yields a mapping from the levels of factors or names into points of a metric space, which provides the scale for the categorical data. From the statistical point of view, we use multivariate statistical methods, calculate maximum likelihood estimates, and compare different approaches to scaling.
The aim of this thesis is the design, expression and purification of human cytochrome c mutants and their characterization with regard to electrochemical and structural properties, as well as with respect to the reaction with the superoxide radical and with the selected proteins human sulfite oxidase and fungal bilirubin oxidase. All three interaction partners are studied here for the first time with human cyt c and with mutant forms of cyt c. A further aim is the incorporation of the different cyt c forms into two bioelectronic systems: an electrochemical superoxide biosensor with enhanced sensitivity, and a protein multilayer assembly with and without bilirubin oxidase on electrodes. The first part of the thesis is dedicated to the design, expression and characterization of the mutants. A focus here is the electrochemical characterization of the protein in solution and immobilized on electrodes. Furthermore, the reaction of these mutants with superoxide was investigated and the possible reaction mechanisms are discussed. In the second part of the work an amperometric superoxide biosensor based on selected human cytochrome c mutants was constructed and the performance of the sensor electrodes was studied. The human wild-type and four of the five mutant electrodes could be applied successfully for the detection of the superoxide radical. In the third part of the thesis the reaction of horse heart cyt c, the human wild-type and seven human cyt c mutants with the two proteins sulfite oxidase and bilirubin oxidase was studied electrochemically, and the influence of the mutations on the electron transfer reactions is discussed. Finally, protein multilayer electrodes with different cyt c forms, including the mutants G77K and N70K which exhibit different reaction rates towards BOD, were investigated, and BOD together with the wild-type and engineered cyt c was embedded in the multilayer assembly.
The relevant electron transfer steps and the kinetic behavior of the multilayer electrodes are investigated, since the functionality of electroactive multilayer assemblies with incorporated redox proteins is often limited by the electron transfer abilities of the proteins within the multilayer. The formation via the layer-by-layer technique and the kinetic behavior of the mono- and bi-protein multilayer systems are studied by SPR and cyclic voltammetry. In conclusion, this thesis shows that protein engineering is a helpful instrument for studying protein reactions as well as electron transfer mechanisms of complex bioelectronic systems (such as bi-protein multilayers). Furthermore, the possibility to design tailored recognition elements for the construction of biosensors with improved performance is demonstrated.
This thesis contains quantum chemical models and force field calculations for the RuBisCO isotope effect, the spectral characteristics of the blue-light sensor BLUF, and the light harvesting complex II. The work focuses on the influence of the environment on the corresponding systems. For RuBisCO, it was found that the isotope effect is almost unaffected by the environment. In the case of the BLUF domain, an amino acid (Ser41) was found to be important for the UV/vis spectrum, although unaccounted for in experiments so far. The residue was shown to be highly mobile and to have a systematic influence on the spectral shift of the BLUF domain chromophore (flavin). Finally, for LHCII it was found that small changes in the geometry of a chlorophyll b/violaxanthin chromophore pair can strongly influence the light harvesting mechanism. Especially here it became clear that the proper description of the environment can be critical. In conclusion, the environment was observed to be of often unexpected importance for the molecular properties, and it does not seem possible to give a reliable estimate of the changes created by the presence of the environment.
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA has become the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources.
The contribution of this research is threefold: first, we analyze and classify principles and technologies from information technology (IT) and telecommunications to identify and discuss the issues involved in cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model checking such compositions. Finally, we propose a Service Broker architecture converging Internet and telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
Crustal deformation can be the result of volcanic and tectonic activity such as fault dislocation and magma intrusion, and it may precede and/or follow earthquake occurrence and eruption. To mitigate the associated hazards, continuous monitoring of crustal deformation has accordingly become an important task for geo-observatories and fast response systems. Due to the highly non-linear behavior of crustal deformation fields in time and space, which is not always measurable using conventional geodetic methods (e.g., leveling), innovative techniques of monitoring and analysis are required. In this thesis I describe novel methods to improve the precise and accurate mapping of the spatiotemporal surface deformation field using multiple acquisitions of satellite radar data. Furthermore, to better understand the source of such spatiotemporal deformation fields, I present novel static and time-dependent model inversion approaches. Almost all interferograms include areas where the signal decorrelates and is distorted by atmospheric delay. In this thesis I detail new analysis methods that reduce the limitations of conventional InSAR by combining the benefits of advanced InSAR methods, such as permanent scatterer InSAR (PSI) and small baseline subsets (SBAS), with a wavelet-based data filtering scheme. This novel InSAR time series methodology is applied, for instance, to monitor the non-linear deformation processes at Hawaii Island. The radar phase change at Hawaii is found to be due to intrusions, eruptions, earthquakes and flank movement processes, superimposed by significant environmental artifacts (e.g., atmospheric delay). The deformation field I obtained using the new InSAR analysis method is in good agreement with continuous GPS data. This provides an accurate spatiotemporal deformation field at Hawaii, which allows time-dependent source modeling.
Conventional source modeling methods usually deal with static deformation fields, while retrieving the dynamics of the source requires more sophisticated time-dependent optimization approaches. I address this problem by combining Monte Carlo based optimization approaches with a Kalman filter, which provides deformation source model parameters that are consistent in time. I found that numerous deformation sources at Hawaii Island interact in space and time; for example, volcano inflation is associated with changes in rifting behavior and is temporally linked to silent earthquakes. I applied these new methods to other tectonic and volcanic terrains, most of which reveal the importance of associated or coupled deformation sources. The findings are 1) the relation between deep and shallow hydrothermal and magmatic sources underneath the Campi Flegrei volcano, 2) gravity-driven deformation at Damavand volcano, 3) fault interaction associated with the 2010 Haiti earthquake, 4) independent block-wise flank motion at the Hilina Fault system, Kilauea, and 5) interaction between a salt diapir and the 2005 Qeshm earthquake in southern Iran. This thesis, written in cumulative form and comprising 9 manuscripts published or under review in peer-reviewed journals, improves the techniques for InSAR time series analysis and source modeling and shows the mutual dependence between adjacent deformation sources. These findings allow a more realistic estimation of the hazard associated with complex volcanic and tectonic systems.
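The role the Kalman filter plays in such time-dependent modeling can be illustrated with a deliberately minimal sketch. This is not the thesis' implementation; the state model (a scalar random walk), the noise levels, and the "source parameter" being tracked are all invented for illustration.

```python
import random

# Hedged illustration (state model and numbers invented, not from the
# thesis): a scalar Kalman filter tracking a slowly varying source
# parameter (e.g., a volumetric change rate) from noisy, InSAR-like
# observations; this is the role the filter plays when combined with
# Monte Carlo optimization of the static source geometry.

def kalman_1d(observations, q=0.01, r=0.25, x0=0.0, p0=1.0):
    # Random-walk state:  x_k = x_{k-1} + w_k,  w_k ~ N(0, q)
    # Observation:        y_k = x_k + v_k,      v_k ~ N(0, r)
    x, p, estimates = x0, p0, []
    for y in observations:
        p = p + q                # predict
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = random.Random(2)
truth = [i / 50.0 for i in range(50)]             # parameter drifts 0 -> ~1
obs = [t + rng.gauss(0.0, 0.5) for t in truth]    # noisy observations
est = kalman_1d(obs)
```

The filtered series follows the drift with far less scatter than the raw observations, which is exactly the "consistent in time" property sought when chaining epoch-by-epoch source inversions.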
Recent large earthquakes have highlighted the need to improve and develop robust and rapid procedures to properly calculate the magnitude of an earthquake within a short time after its occurrence. The most famous example is the 26 December 2004 Sumatra earthquake, for which the standard procedures adopted at that time by many agencies failed to provide accurate magnitude estimates of this exceptional event in time to launch early warnings and an appropriate response. Being related to the radiated seismic energy ES, the energy magnitude ME is a good estimator of the high-frequency content radiated by the source into the seismic waves. However, a procedure to rapidly determine ME (that is to say, within 15 minutes after the earthquake occurrence) was still required. Here, a procedure is presented that rapidly provides the energy magnitude ME for shallow earthquakes by analyzing teleseismic P-waves in the distance range 20°-98°. To account for the energy loss experienced by the seismic waves from the source to the receivers, spectral amplitude decay functions obtained from numerical simulations of Green's functions based on the average global model AK135Q are used. The proposed method has been tested using a large global dataset (~1000 earthquakes), and the rapid ME estimates obtained have been compared to other magnitude scales from different agencies. Special emphasis is given to the comparison with the moment magnitude MW, since the latter is very popular and extensively used in common seismological practice. However, it is shown that MW alone provides only limited information about the seismic source properties, and that disaster management organizations would benefit from a combined use of MW and ME in the prompt evaluation of an earthquake's tsunami and shaking potential.
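Once ES has been estimated from the P-wave spectra, the final conversion to ME is a one-line formula. The relation below is the standard one of Choy and Boatwright, as adopted by IASPEI (ES in joules); the hard part of the thesis is the rapid teleseismic estimation of ES, not this conversion, and the example ES value is invented.

```python
import math

# Standard energy-magnitude relation (Choy & Boatwright convention,
# adopted by IASPEI): ME = 2/3 * (log10(ES) - 4.4), with ES in joules.
# Only the conversion is shown; estimating ES rapidly from teleseismic
# P-waves is the actual subject of the work above.

def energy_magnitude(es_joules):
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# Example (invented value): ES = 10**16.4 J corresponds to ME = 8.0.
me = energy_magnitude(10 ** 16.4)
```

Because ME scales with log10(ES), a one-unit increase in ME corresponds to a factor of about 31.6 in radiated energy, which is why ME complements MW for characterizing shaking potential.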
In addition, since the proposed approach for ME is intended to work without knowledge of the fault plane geometry (often available only hours after an earthquake occurrence), the suitability of the method is discussed by grouping the analyzed earthquakes according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). No clear trend of the rapid ME estimates with the different fault plane solution groups is found. This is not the case for the ME routinely determined by the U.S. Geological Survey, which uses specific radiation pattern corrections. Further studies are needed to verify the effect of such corrections on ME estimates. Finally, exploiting the redundancy of the information provided by the analyzed dataset, the components of variance of the single-station ME estimates are investigated. The largest component of variance is due to the intra-station (record-to-record) error, although the inter-station (station-to-station) error is not negligible and is of several magnitude units for some stations. Moreover, it is shown that the intra-station component of error is not random but depends on the travel path from a source area to a given station. Consequently, empirical corrections may be used to account for the heterogeneities of the real Earth that are not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
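The intra-/inter-station decomposition mentioned above can be sketched with a simple one-way variance split: the inter-station component is the variance of station-mean residuals, and the intra-station component is the average within-station (record-to-record) variance. The residual values below are invented toy data, chosen so that, as in the dataset above, the record-to-record scatter dominates.

```python
# Hedged sketch (invented data, simple population-variance split, not
# the thesis' variance-component estimation): separating single-station
# ME residuals into inter-station (station-to-station) and
# intra-station (record-to-record) components of variance.

def variance_components(residuals_by_station):
    # residuals_by_station: {station: [ME residuals of its records]}
    means = {s: sum(r) / len(r) for s, r in residuals_by_station.items()}
    grand = sum(means.values()) / len(means)
    # Inter-station: spread of the station means around the grand mean.
    inter = sum((m - grand) ** 2 for m in means.values()) / len(means)
    # Intra-station: mean within-station variance of the records.
    intra = sum(
        sum((x - means[s]) ** 2 for x in r) / len(r)
        for s, r in residuals_by_station.items()
    ) / len(residuals_by_station)
    return inter, intra

data = {"STA1": [0.30, -0.30, 0.00],    # station names are invented
        "STA2": [0.25, -0.25, 0.06],
        "STA3": [-0.20, 0.20, 0.03]}
inter, intra = variance_components(data)
# In this toy example the record-to-record component dominates.
```

Real analyses would use unbiased (ANOVA-style) estimators and unbalanced record counts, but the distinction between the two error sources is the same.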
The seismically active Alborz mountains of northern Iran are an integral part of the Arabia-Eurasia collision. Linked strike-slip and thrust/reverse-fault systems in this mountain belt are characterized by slow loading rates, and large earthquakes are highly disparate in space and time. As in other intracontinental deformation zones, such a pattern of tectonic activity is still insufficiently understood, because recurrence intervals between seismic events may be on the order of thousands of years and are thus beyond the resolution of short-term measurements based on GPS or instrumentally recorded seismicity. This study bridges the gap between deformation processes on different time scales. In particular, my investigation focuses on deformation on the Quaternary time scale, beyond present-day deformation rates, and uses present-day and paleotectonic characteristics to model fault behavior. The study includes data based on structural and geomorphic mapping, fault-kinematic analysis, DEM-based morphometry, and numerical fault-interaction modeling. In order to better understand the long- to short-term behavior of such complex fault systems, I used geomorphic surfaces as strain markers and dated fluvial and alluvial surfaces using terrestrial cosmogenic nuclides (TCN: 10Be, 26Al, 36Cl) and optically stimulated luminescence (OSL). My investigation focuses on the seismically active Mosha-Fasham fault (MFF) and the seismically virtually inactive North Tehran Thrust (NTT), adjacent to the Tehran metropolitan area. Fault-kinematic data reveal an early mechanical linkage of the NTT and MFF during a dextral transpressional stage, when the shortening direction was oriented northwest. This regime was superseded by Pliocene to Recent NE-oriented shortening, which caused thrusting and sinistral strike-slip faulting.
In the course of this kinematic changeover, the NTT and MFF were reactivated and incorporated into a nascent transpressional duplex, which has significantly affected landscape evolution in this part of the range. Two of the three distinctive features which characterize topography and relief in the study area can be directly related to their location inside the duplex array, and are thus linked to the interaction between the eastern MFF and the NTT, and between the western MFF and the Taleghan fault, respectively. To account for inferred topography inherited from the previous dextral-transpression regime, a new concept of tectonic landscape characterization has been used. Accordingly, I define simple landscapes as environments which have developed under the influence of a sustained tectonic regime. In contrast, composite landscapes contain topographic elements inherited from previous tectonic conditions that are inconsistent with the regional present-day stress field and kinematic style. Using numerical fault-interaction modeling with different tectonic boundary conditions, I calculated synoptic snapshots of artificial topography and compared them with the real topographic metrics. In the Alborz mountains, E-W striking faults are favorably oriented to accommodate the entire range of NW- to NE-directed compression. These faults show the highest total displacement, which might indicate sustained faulting under changing boundary conditions. In contrast to the fault system within and at the flanks of the Alborz mountains, Quaternary deformation in the adjacent Tehran plain is characterized by oblique motion and by thrust and strike-slip fault systems. In this morphotectonic province, fault-propagation folding along major faults, limited strike-slip motion, and en-échelon arrays of second-order upper-plate thrusts are typical.
While the Tehran plain is characterized by young deformation phenomena, the majority of faulting took place in the early stages of the Quaternary and during late Pliocene time. TCN dating, performed for the first time on geomorphic surfaces in the Tehran plain, revealed that the two oldest phases of alluviation (units A and B) must be older than late Pleistocene. While urban development in Tehran increasingly covers and obliterates the active fault traces, the present-day kinematic style, the vestiges of formerly undeformed Quaternary landforms, and paleo-earthquake indicators from the last millennia attest to the threat that these faults and their related structures pose to the megacity.
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The range of methods applied in this study spans spatio-temporal GIS analysis as well as 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance estimations of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel density estimation (KDE) techniques, LISA-based verification of KDE hotspots, as well as geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for an efficient communication of the complex findings of geospatial crime scene analysis.
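The first analysis step named above, KDE-based hotspot identification, can be sketched in a few lines. The coordinates, bandwidth, and grid below are invented for illustration; real workflows would use projected crime-incident coordinates and a data-driven bandwidth.

```python
import math

# Minimal sketch of hotspot identification via 2D Gaussian kernel
# density estimation (KDE); incident locations and the bandwidth are
# invented, and LISA-based verification is not shown.

def kde2d(points, grid_x, grid_y, bandwidth):
    h2 = 2.0 * bandwidth ** 2
    surface = []
    for gy in grid_y:
        row = []
        for gx in grid_x:
            d = sum(math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / h2)
                    for x, y in points)
            row.append(d / (len(points) * math.pi * h2))
        surface.append(row)
    return surface

# A cluster of incidents near (2, 2) plus scattered background events.
pts = [(2.0, 2.1), (1.9, 2.0), (2.1, 1.9), (2.0, 2.0), (7.0, 8.0), (0.5, 6.0)]
xs = [i * 0.5 for i in range(21)]   # grid covering 0 .. 10
ys = [i * 0.5 for i in range(21)]
dens = kde2d(pts, xs, ys, bandwidth=0.7)
# The hotspot candidate is the grid cell with the maximum density.
peak = max((v, x, y) for row, y in zip(dens, ys) for v, x in zip(row, xs))
```

The density surface would then be thresholded into hotspot polygons and, as described above, verified with local indicators of spatial association (LISA) before being draped into the 3D city model.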
Russian Jews who left the Former Soviet Union (FSU) and its successor states after 1989 are considered one of the best-qualified migrant groups worldwide. In the preferred countries of destination (Israel, the United States and Germany) they are well known for cultural self-assertion, strong upward social mobility, and manifold forms of self-organisation and empowerment. Using Suzanne Keller's sociological model of "Strategic Elites", it easily becomes clear that a large share of the Russian Jewish immigrants in Germany and Israel belong to various elites due to their qualifications and high positions in the FSU – first of all professional, cultural and intellectual elites ("Intelligentsija"). The study aimed to find out to what extent developments of cultural self-assertion, of local and transnational networking, and of ethno-cultural empowerment are supported or even initiated by the immigrated (Russian Jewish) elites. The empirical basis for this study was 35 semi-structured expert interviews with Russian Jews in both countries (Israel and Germany) – most of them scholars, artists, writers, journalists/publicists, teachers, engineers, social workers, students and politicians. The qualitative analysis of the interview material from Israel and Germany revealed many commonalities but also significant differences. Almost all of the interview partners remained linked to Russian-speaking networks and communities, irrespective of their success (or failure) in integrating into the host societies. Many of them showed self-confidence with regard to the group's remarkable professional resources (70% of the adults hold an academic degree), and the cultural, professional and political potential of the FSU immigrants was usually considered equal to that of the host population(s). Accordingly, the immigrants' interest in direct societal participation and social acceptance was high. Assimilation was no option.
For the Russian Jewish "sense of community" in Israel and Germany, the Russian language, arts and general Russian culture have remained of key importance. The immigrants feel no insuperable contradiction in being "Russian" in cultural terms, "Jewish" in ethnic terms and "Israeli"/"German" in national terms – a typical case of additive identity formation, which is also characteristic of the elites among these immigrants. Tendencies of ethno-cultural self-organisation – which do not necessarily hinder impressive individual careers in the new surroundings – are more noticeable in Israel. There, a part of the Russian Jewish elites has responded to social exclusion, discrimination or obstruction by the local population (and by local elites) with intense efforts to build Russian Jewish associations, media, educational institutions and even political parties. All in all, the results of this study clearly contradict popular stereotypes of the Russian Jewish immigrant as a pragmatic, passive "Homo Sovieticus". Among the interview partners in this study, civil-societal commitment was not the exception but rather the rule. The activities of the early, legendary Russian "Intelligentsija" were traditionally marked by smooth transitions between arts, education and societal/political commitment, and certain continuities of this self-conception appear in some of the Russian Jewish groups in Israel. Nothing comparable, however, could be drawn from the interviews with the immigrants in Germany. Thus, the myth and self-conception of the Russian "Intelligentsija" is irrelevant for collective discourses among Russian Jews in Germany.
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based near-surface images has significantly matured. At the same time, the analysis of oil- and gas-related reflection seismic data sets has experienced significant advances. Considering the sensitivity of attribute analysis to data positioning in general, and of multi-trace attributes in particular, trace positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). The combination of current GPR systems' capability of fusing global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To elaborate the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies have shown that when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the acquired GPR data using radio communication equals that without radio communication. To address the limitations imposed by system latencies, inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimeter-level trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys.
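The core idea behind such a latency calibration can be illustrated with a toy example: if two synchronized data streams record the same motion, the gross latency is the lag that maximizes their cross-correlation. The streams, delay, and sampling below are invented; the thesis' actual calibration procedure may differ.

```python
import math

# Hedged sketch (synthetic streams, invented delay): estimate the gross
# system latency between two position streams (e.g., TTS positions vs.
# GPR trace timestamps) as the lag maximizing their cross-correlation.

def best_lag(ref, delayed, max_lag):
    def corr(a, b):
        # Normalized (Pearson) correlation over the overlapping samples.
        n = min(len(a), len(b))
        ma = sum(a[:n]) / n
        mb = sum(b[:n]) / n
        num = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
        da = sum((a[i] - ma) ** 2 for i in range(n)) ** 0.5
        db = sum((b[i] - mb) ** 2 for i in range(n)) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max(range(max_lag + 1), key=lambda k: corr(ref, delayed[k:]))

ref = [math.sin(0.3 * i) for i in range(200)]   # reference motion
delayed = [0.0] * 5 + ref                       # same stream, 5 samples late
lag = best_lag(ref, delayed, max_lag=20)        # recovers the 5-sample delay
```

Dividing the recovered sample lag by the sampling rate gives the latency in seconds, which can then be subtracted from the position timestamps before fusing them with the GPR traces.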
Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by the calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that the incorporation of additional data sets (magnetic and topographic) and of attributes derived from these data sets can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently. The geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis.
The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, the inclusion of cross-polarized antenna configurations into the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Based on different synthetic examples, I showed that the modified tree-based pursuit approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing. Frequency modulation of the individual atoms themselves allows frequency attenuation effects to be corrected efficiently and the resolution to be improved by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the increasingly widespread realization of 3D GPR studies, there will certainly be a growing demand for improved subsurface interpretations in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, combined parameter estimation represents another step in emphasizing the potential of attribute-driven GPR data analyses.
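The matching pursuit principle underlying the tree-based variant above can be shown in miniature: greedily pick the dictionary atom most correlated with the residual, subtract its contribution, and repeat. The dictionary (a handful of unit-norm cosine atoms) and the test signal are invented; the thesis' tree-structured, frequency-modulated atoms are not reproduced here.

```python
import math

# Minimal matching pursuit sketch (not the thesis' tree-based variant):
# greedily decompose a signal over a small dictionary of unit-norm
# cosine atoms, illustrating sparse time-frequency decomposition.

def make_atom(freq, n):
    # Cosine atom with an integer number of cycles, normalized to unit energy.
    a = [math.cos(2 * math.pi * freq * i / n) for i in range(n)]
    norm = sum(v * v for v in a) ** 0.5
    return [v / norm for v in a]

def matching_pursuit(signal, atoms, n_iter):
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        # Choose the atom most correlated with the current residual.
        coeffs = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        j = max(range(len(atoms)), key=lambda k: abs(coeffs[k]))
        picks.append((j, coeffs[j]))
        residual = [r - coeffs[j] * a for r, a in zip(residual, atoms[j])]
    return picks, residual

n = 64
atoms = [make_atom(f, n) for f in (1, 3, 5, 9)]
# Signal = 2 * atom_0 + 0.5 * atom_1 (frequencies 5 and 9 are absent).
signal = [2 * a + 0.5 * b for a, b in zip(atoms[0], atoms[1])]
picks, residual = matching_pursuit(signal, atoms, n_iter=2)
```

Because the cosine atoms are mutually orthogonal here, two iterations recover both components exactly and leave a (numerically) zero residual; with redundant dictionaries, as in GPR practice, the greedy choice is what keeps the decomposition sparse.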
This work presents the development of entropy-elastic gelatin-based networks in the form of films or scaffolds. The materials have good prospects for biomedical applications, especially in the context of bone regeneration. Entropy-elastic gelatin-based hydrogel films with varying crosslinking densities were prepared with tailored mechanical properties. Gelatin was covalently crosslinked above its sol-gel transition, which suppressed the gelatin chain helicity. Hexamethylene diisocyanate (HDI) or ethyl ester lysine diisocyanate (LDI) were applied as chemical crosslinkers, and the reaction was conducted either in dimethyl sulfoxide (DMSO) or water. Amorphous films were obtained, as confirmed by wide-angle X-ray scattering (WAXS), with tailorable degrees of swelling (Q: 300-800 vol.-%) and wet-state Young's moduli (E: 70-740 kPa). Model reactions showed that the crosslinking reaction resulted in a combination of direct crosslinks (3-13 mol.-%), grafting (5-40 mol.-%), and blending of oligoureas (16-67 mol.-%). The knowledge gained with this bulk material was transferred to an integrated foaming and crosslinking process to obtain porous 3D gelatin-based scaffolds. For this purpose, a gelatin solution was foamed in the presence of a surfactant, saponin, and the resulting foam was fixed by chemical crosslinking with a diisocyanate. The amorphous crosslinked scaffolds were synthesized with varied gelatin and HDI concentrations and analyzed in the dry state by micro-computed tomography (µCT, porosity: 65±11–73±14 vol.-%) and scanning electron microscopy (SEM, pore size: 117±28–166±32 µm). Subsequently, the work focused on the characterization of the gelatin scaffolds under conditions relevant to biomedical applications. Scaffolds showed high water uptake (H: 630-1680 wt.-%) with minimal changes in outer dimensions.
The form stability upon wetting could be explained by a decrease in scaffold pore size (115±47–130±49 µm), as revealed by confocal laser scanning microscopy (CLSM). Shape recoverability was observed after removal of stress when compressing wet scaffolds, while dry scaffolds maintained the compressed shape. This was explained by a reduction of the glass transition temperature upon equilibration with water, as shown by dynamic mechanical analysis at varied temperature (DMTA). The composition-dependent compression moduli (Ec: 10-50 kPa) were comparable to the bulk micromechanical Young's moduli measured by atomic force microscopy (AFM). The hydrolytic degradation profile could be adjusted, and a controlled decrease of mechanical properties was observed. Partially degraded scaffolds displayed an increase in pore size, likely due to disintegration of the pore walls during degradation, which caused pores to merge. The scaffold cytotoxicity and immunologic responses were analyzed. The porous scaffolds enabled proliferation of human dermal fibroblasts within the implants (up to 90 µm depth). Furthermore, indirect eluate tests were carried out with L929 cells to quantify the material's cytotoxic response. Here, the effects of the sterilization method (ethylene oxide sterilization), crosslinker, and surfactant were analyzed. Fully cytocompatible scaffolds were obtained by using LDI as crosslinker and PEO40-PPO20-PEO40 as surfactant. These investigations were accompanied by a study of endotoxin contamination of the materials. Medical-grade materials (<0.5 EU/mL) were successfully obtained by using low-endotoxin gelatin and performing all synthetic steps in a laminar flow hood.
In the high mountains of Asia, glaciers cover an area of approximately 115,000 km² and constitute one of the largest continental ice accumulations outside Greenland and Antarctica. Their sensitivity to climate change makes them valuable palaeoclimate archives, but also vulnerable to current and predicted global warming. This is a pressing problem, as snow and glacial melt waters are important sources for agriculture and power supply in the densely populated regions of south, east, and central Asia. Successfully predicting the glacial response to climate change in Asia and mitigating the socioeconomic impacts requires profound knowledge of the climatic controls and the dynamics of Asian glaciers. However, due to their remoteness and difficult accessibility, ground-based studies are rare and both temporally and spatially limited; we therefore lack basic information on the vast majority of these glaciers. In this thesis, I employ different methods to assess the dynamics of Asian glaciers on multiple time scales. First, I tested a method for precise satellite-based measurement of glacier-surface velocities and conducted a comprehensive regional survey of glacial flow and terminus dynamics of Asian glaciers between 2000 and 2008. This novel and unprecedented dataset provides unique insights into the contrasting topographic and climatic controls on glacial flow velocities across the Asian highlands. The data document disparate recent glacial behavior between the Karakoram and the Himalaya, which I attribute to the competing influence of the mid-latitude westerlies during winter and the Indian monsoon during summer. Second, I tested whether such climate-related longitudinal differences in glacial behavior also prevail on longer time scales and potentially account for the observed regionally asynchronous glacial advances.
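Satellite-based measurement of glacier-surface velocities of the kind described typically rests on offset tracking between repeat images: a template from the first image is located in the second by normalized cross-correlation, and the peak offset times the pixel size over the time separation gives the surface velocity. The following is a minimal sketch under that general principle; function names, window sizes, and the brute-force search are illustrative assumptions, not the specific method tested in the thesis.

```python
import numpy as np

def track_offset(img1, img2, r, c, tpl=16, search=8):
    """Locate the template of img1 centred at (r, c) inside a search
    region of img2 via normalized cross-correlation; return the
    (row, column) pixel offset with the highest correlation score."""
    t = img1[r - tpl // 2: r + tpl // 2, c - tpl // 2: c + tpl // 2].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w = img2[r + dr - tpl // 2: r + dr + tpl // 2,
                     c + dc - tpl // 2: c + dc + tpl // 2].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * w).mean()            # normalized correlation
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off

# Synthetic check: a random surface texture shifted by (3, -2) pixels
rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))
dr, dc = track_offset(img1, img2, 32, 32)
# velocity = offset * pixel_size / time_separation (e.g. m/yr)
```

In practice such tracking is applied to co-registered optical or radar scenes with sub-pixel peak refinement; the sketch only shows the integer-pixel core.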
I used cosmogenic nuclide surface exposure dating of erratic boulders on moraines to obtain a glacial chronology for the upper Tons Valley, situated in the headwaters of the Ganges River. This area is located in the transition zone from monsoonal to westerly moisture supply and is therefore ideally suited to examining the influence of these two atmospheric circulation regimes on glacial advances. The new glacial chronology documents multiple glacial oscillations during the last glacial termination and during the Holocene, suggesting largely synchronous glacial changes in the western Himalayan region that are related to gradual glacial-interglacial temperature oscillations with superimposed, higher-frequency monsoonal precipitation changes. In a third step, I combined results from short-term satellite-based climate records and surface-velocity-derived ice-flux estimates with topographic analyses to deduce the erosional impact of glaciations on long-term landscape evolution in the Himalayan-Tibetan realm. The results provide evidence for the long-term effects of pronounced east-west differences in glaciation and glacial erosion, depending on climatic and topographic factors. Contrary to common belief, the data suggest that the monsoonal climate in the central Himalaya weakens glacial erosion at high elevations, helping to maintain a steep southern orographic barrier that protects the Tibetan Plateau from lateral destruction. The results of this thesis highlight how climatic and topographic gradients across the high mountains of Asia affect glacier dynamics on time scales ranging from 10^0 to 10^6 years. Glacial response times to climate change are tightly linked to properties such as debris cover and surface slope, which are controlled by the topographic setting and need to be taken into account when reconstructing mountainous palaeoclimate from glacial histories or assessing the future evolution of Asian glaciers.
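The age calculation behind surface exposure dating follows from the standard nuclide build-up equation: for a stable surface with no erosion, the concentration grows as N(t) = (P/λ)(1 − e^(−λt)), which inverts to t = −ln(1 − Nλ/P)/λ. A minimal sketch of that zero-erosion inversion follows; real chronologies additionally require production-rate scaling, shielding, erosion, and inheritance corrections, all omitted here, and the decay constant is an assumed textbook value for ¹⁰Be.

```python
import math

# 10Be decay constant [1/yr], from a half-life of ~1.387 Myr (assumed value)
LAMBDA_BE10 = math.log(2) / 1.387e6

def exposure_age(N, P, lam=LAMBDA_BE10):
    """Zero-erosion surface exposure age [yr] from nuclide concentration
    N [atoms/g] and local production rate P [atoms/g/yr]:

        N(t) = (P / lam) * (1 - exp(-lam * t))
        =>  t = -ln(1 - N * lam / P) / lam
    """
    return -math.log(1.0 - N * lam / P) / lam

# Example: N = 1.5e5 atoms/g at a local production rate of 10 atoms/g/yr
# gives an age of roughly 15 kyr (decay is nearly negligible at this age)
age = exposure_age(1.5e5, 10.0)
```

For ages much shorter than the half-life, the result is close to the simple ratio N/P, with the decay term adding only a small correction.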
Conversely, the regional topographic differences of glacial landscapes in Asia are partly controlled by climatic gradients and the long-term influence of glaciers on the topographic evolution of the orogenic system.