The origin and symmetry of the observed global magnetic fields in galaxies are not fully understood. We intend to clarify the question of the magnetic field origin and investigate the global action of the magneto-rotational instability (MRI) in galactic disks with the help of 3D global magneto-hydrodynamical (MHD) simulations. The calculations were done with the time-stepping ZEUS 3D code using massive parallelization. The alpha-Omega dynamo is known to be one of the most efficient mechanisms for reproducing the observed global galactic fields. The presence of strong turbulence is a prerequisite for the alpha-Omega dynamo generation of the regular magnetic fields. The observed magnitude and spatial distribution of turbulence in galaxies present unsolved problems to theoreticians. The MRI is known to be a fast and powerful mechanism for generating MHD turbulence and amplifying magnetic fields. We find that the critical wavelength increases as the magnetic field grows during the simulation, transporting energy from the critical scale to larger scales. The final structure, if not disrupted by supernova explosions, is a structure of 'thin layers' about 100 pc thick. An important outcome of all simulations is the magnitude of the horizontal components of the Reynolds and Maxwell stresses. The result is that the MRI-driven turbulence is magnetically dominated: its magnetic energy exceeds the kinetic energy by a factor of 4. The Reynolds stress is small, less than 1% of the Maxwell stress. The angular momentum transport is thus completely dominated by the magnetic field fluctuations. The volume-averaged pitch angle is always negative, with a magnitude of about -30 degrees. The non-saturated MRI regime lasts sufficiently long to fill the time between galactic encounters, independently of the strength and geometry of the initial field. Therefore, we may claim that the observed pitch angles can be due to MRI action in the gaseous galactic disks.
The MRI is also shown to be a very fast instability, with an e-folding time proportional to the rotation period. Steep rotation curves imply stronger growth of the magnetic energy due to the MRI. The global e-folding time is from 44 Myr to 100 Myr, depending on the rotation profile. Therefore, the MRI can explain the existence of rather large magnetic fields in very young galaxies. We have also reproduced the observed rms velocities of interstellar turbulence as observed in NGC 1058. We have shown with the simulations that an averaged velocity dispersion of about 5 km/s is typical for MRI-driven turbulence in galaxies, which agrees with observations. The dispersion increases outside the disk plane, whereas supernova-driven turbulence is found to be concentrated within the disk. In our simulations the velocity dispersion increases several-fold with height. Additional support for the dynamo alpha-effect in galaxies is the ability of the MRI to produce a mix of quadrupole and dipole symmetries from purely vertical seed fields, so it also solves the seed-field problem of galactic dynamo theory. The interaction of the magneto-rotational instability with random supernova explosions remains an open question. It would be desirable to run the simulation with supernova explosions included. They would disrupt the calm ring structure produced by the global MRI, perhaps even to the level at which the MRI could no longer be held responsible for the turbulence.
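The exponential growth quoted above can be made concrete with a short sketch (Python, purely illustrative; only the 44-100 Myr e-folding times are taken from the abstract, the 1 Gyr interval and function name are assumptions):

```python
import math

def amplification(t_myr, e_folding_myr):
    """Factor by which the field grows after t_myr of exponential
    (non-saturated) MRI growth with the given e-folding time."""
    return math.exp(t_myr / e_folding_myr)

# E-folding times from the simulations, depending on the rotation profile:
for tau in (44, 100):
    # Growth over an assumed 1 Gyr of undisturbed evolution between encounters.
    print(f"tau = {tau} Myr: amplification after 1 Gyr = "
          f"{amplification(1000, tau):.3g}")
```

Even with the slower 100 Myr e-folding time, a weak seed field is amplified by many orders of magnitude within a fraction of a galactic lifetime, which is the point of the claim about young galaxies.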
Mesoporous organosilica materials with amine functions : surface characteristics and chirality
(2005)
In this work mesoporous organosilica materials are synthesized through the silica sol-gel process. For this purpose a new class of precursors, which are also surfactants, was synthesized and self-assembled. This leads to a high density of surface functional groups, which is analysed by copper(II) and water adsorption.
During this PhD project, three technical platforms were either improved or newly established in order to identify interesting genes involved in SNF, validate their expression and functionally characterise them. An existing 5.6K cDNA array (Colebatch et al., 2004) was extended to produce the 9.6K LjNEST array, while a second array, the 11.6K LjKDRI array, was also produced. Furthermore, the protocol for array hybridisation was substantially improved (Ott et al., in press). After functional classification of all clones according to the MIPS database and annotation of their corresponding tentative consensus sequence (TIGR), these cDNA arrays were used by several international collaborators and by our group (Krusell et al., 2005; in press). To confirm results obtained from the cDNA array analysis, different sets of cDNA pools were generated that facilitate rapid qRT-PCR analysis of candidate gene expression. As stable transformation of Lotus japonicus takes several months, an Agrobacterium rhizogenes transformation system was established in the lab and growth conditions for screening transformants for symbiotic phenotypes were improved. These platforms enable us to identify genes, validate their expression and functionally characterise them in a minimum of time. The resources that I helped to establish were used in collaboration with others to characterise in more detail several genes, such as the potassium transporter LjKup and the sulphate transporter LjSst1, which were transcriptionally induced in nodules compared to uninfected roots (Desbrosses et al., 2004; Krusell et al., 2005). Another gene that was studied in detail was LjAox1. This gene was identified during cDNA array experiments, and detailed expression analysis revealed a strong and early induction of the gene during nodulation, with high expression in young nodules that declines as the nodule ages. Therefore, LjAox1 is an early nodulin.
Promoter:GUS fusions revealed LjAox1 expression around the nodule endodermis. The physiological role of LjAox1 is currently being pursued via RNAi. Using RNA interference, the synthesis of all symbiotic leghemoglobins was silenced simultaneously in Lotus japonicus. As a result, growth of LbRNAi lines was severely inhibited compared to wild-type plants when plants were grown under symbiotic conditions in the absence of mineral nitrogen. The nodules of these plants were arrested in growth 14 days post-inoculation and lacked the characteristic pinkish colour. Growing these transgenic plants under conditions where reduced nitrogen is available for the plant led to normal plant growth and development. This demonstrates that leghemoglobins are not required for plant development per se, and proves for the first time that leghemoglobins are indispensable for symbiotic nitrogen fixation. Absence of leghemoglobins in LbRNAi nodules led to significant increases in free-oxygen concentrations throughout the nodules, a decrease in energy status as reflected by the ATP/ADP ratio, and an absence of the bacterial nitrogenase protein. The bacterial population within nodules of LbRNAi plants was slightly reduced. Alterations of plant nitrogen and carbon metabolism in LbRNAi nodules were reflected in changes in amino acid composition and starch deposition (Ott et al., 2005). These data provide strong evidence that nodule leghemoglobins function as oxygen transporters that facilitate high flux rates of oxygen to the sites of respiration at low free-oxygen concentrations within the infected cells.
Wetting and phase transitions play a very important role in our daily life. Molecularly thin films of long-chain alkanes at solid/vapour interfaces (e.g. C30H62 on silicon wafers) are very good model systems for studying the relation between wetting behaviour and (bulk) phase transitions. Immediately above the bulk melting temperature the alkanes wet the surface only partially (forming drops). In this temperature range the substrate surface is covered with a molecularly thin, ordered, solid-like alkane film ("surface freezing"). Thus, the alkane melt wets its own solid only partially, which is a quite rare phenomenon in nature. The thesis treats how the alkane melt wets its own solid surface above and below the bulk melting temperature, and the corresponding melting and solidification processes. Liquid alkane drops can be undercooled to a few degrees below the bulk melting temperature without immediate solidification. This undercooling behaviour is quite common and theoretically quite well understood. In some cases, slightly undercooled drops start to build two-dimensional solid terraces without bulk solidification. The terraces grow radially from the liquid drops on the substrate surface. They consist of a few molecular layers, with thicknesses that are multiples of the all-trans length of the molecule. By analyzing the terrace growth process one finds that, both below and above the melting point, the entire substrate surface is covered with a thin film of mobile alkane molecules. The presence of this film explains how the solid terrace growth is fed: the alkane molecules flow through it from the undercooled drops to the periphery of the terrace. The study shows for the first time the coexistence of a molecularly thin film ("precursor") with a partially wetting bulk phase. The formation and growth of the terraces is observed only in a small temperature interval in which the 2D nucleation of terraces is more likely than bulk solidification.
The nucleation mechanisms for 2D solidification are also analyzed in this work. More surprising is the terrace behaviour above the bulk melting temperature. The terraces can be slightly overheated before they melt. The melting does not occur all over the surface as a single event; instead, small drops form at the terrace edge. Subsequently these drops move on the surface, "eating" the solid terraces on their way. By this they grow in size, leaving behind paths from where the material was collected. Both overheating and droplet movement can be explained by the fact that the alkane melt only partially wets its own solid. For the first time, these results explicitly confirm the supposed connection between the absence of overheating in solids and "surface melting": solids usually start to melt without an energetic barrier from the surface at temperatures below the bulk melting point. Accordingly, the surface freezing of alkanes gives rise to an energetic barrier, which leads to overheating.
In an experimental study, the attempt was made to examine the effects of the Reciprocal Teaching method on measures of metacognition and to identify the effective features of this method that are necessary for the learning gains to occur. Reciprocal Teaching, originally developed by Palincsar and Brown (1984), is a very successful training program designed to improve students' reading comprehension skills by teaching them reading strategies. In the present study, the tasks and responsibilities assumed by 5th-grade elementary students (N = 55) participating in a 16-session reading strategy training were varied systematically. The students who participated in the training program in one of three experimental conditions were compared not only with one another with respect to knowledge and performance measures, but also with control classmates who did not participate in strategy training (N = 86). Detailed analyses of video-taped sessions provided additional information. The strategy training was most beneficial for measures of knowledge and performance closely related to the content of the training program, namely knowledge about the specific reading strategies taught in training and the application of those strategies. No significant effects were observed for more distal measures (general strategy knowledge, reading comprehension). As for the features of the program, it could be shown that students in the two experimental conditions in which the students were responsible for giving each other feedback on performance (with respect to both content and strategy application) and guiding the correction of answers outperformed both the experimental condition in which the trainer was responsible for those tasks and the control group.
It is concluded that it is not merely the application of strategies, but the combination of strategy application with concurrent teaching and learning of metacognitive acquisition procedures (analysis, monitoring, evaluation, and regulation) in an inter-individual way, as the precursor of these processes occurring intra-individually, that seems to be an efficient way of acquiring metacognitive knowledge and skills. It was also shown that strategy training does not necessarily have to include the precise kind of interaction that characterizes the Reciprocal Teaching method. Instead, the tasks of monitoring, evaluating, and regulating other children's learning processes - i.e., tasks associated with the "teacher role" - are the ones that promote the acquisition of metacognitive knowledge and skills. Generally, any strategy training program that not only provides children with plentiful opportunities for practice, but also prompts them to engage in these kinds of metacognitive processes, may help children to acquire metacognitive knowledge and skills.
At present, carbon sequestration in terrestrial ecosystems slows the growth rate of atmospheric CO2 concentrations, and thereby reduces the impact of anthropogenic fossil fuel emissions on the climate system. Changes in climate and land use affect terrestrial biosphere structure and functioning at present, and will likely impact on the terrestrial carbon balance during the coming decades - potentially providing a positive feedback to the climate system due to soil carbon releases under a warmer climate. Quantifying changes, and the associated uncertainties, in regional terrestrial carbon budgets resulting from these effects is relevant for the scientific understanding of the Earth system and for long-term climate mitigation strategies. A model describing the relevant processes that govern the terrestrial carbon cycle is a necessary tool to project regional carbon budgets into the future. This study (1) provides an extensive evaluation of the parameter-based uncertainty in model results of a leading terrestrial biosphere model, the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM), against a range of observations and under climate change, thereby complementing existing studies on other aspects of model uncertainty; (2) evaluates different hypotheses to explain the age-related decline in forest growth, both from theoretical and experimental evidence, and introduces the most promising hypothesis into the model; (3) demonstrates how forest statistics can be successfully integrated with process-based modelling to provide long-term constraints on regional-scale forest carbon budget estimates for a European forest case-study; and (4) elucidates the combined effects of land-use and climate changes on the present-day and future terrestrial carbon balance over Europe for four illustrative scenarios - implemented by four general circulation models - using a comprehensive description of different land-use types within the framework of LPJ-DGVM. 
This study presents a way to assess and reduce uncertainty in process-based terrestrial carbon estimates on a regional scale. The results of this study demonstrate that simulated present-day land-atmosphere carbon fluxes are relatively well constrained, despite considerable uncertainty in modelled net primary production. Process-based terrestrial modelling and forest statistics are successfully combined to improve model-based estimates of vegetation carbon stocks and their change over time. Application of the advanced model for 77 European provinces shows that model-based estimates of biomass development with stand age compare favourably with forest inventory-based estimates for different tree species. Driven by historic changes in climate, atmospheric CO2 concentration, forest area and wood demand between 1948 and 2000, the model predicts a European-scale, present-day age structure of forests, ratio of biomass removals to increment, and vegetation carbon sequestration rates that are consistent with inventory-based estimates. Alternative scenarios of climate and land-use change in the 21st century suggest that carbon sequestration in the European terrestrial biosphere during the coming decades will likely be of a magnitude relevant to climate mitigation strategies. However, the uptake rates are small in comparison to the European emissions from fossil fuel combustion, and will likely decline towards the end of the century. Uncertainty in climate change projections is a key driver of uncertainty in simulated land-atmosphere carbon fluxes and needs to be accounted for in mitigation studies of the terrestrial biosphere.
Vitamin E : elucidation of the mechanism of side chain degradation and gene regulatory functions
(2005)
For more than 80 years vitamin E has been in the focus of scientific research. Most of the progress concerning non-antioxidant functions, nevertheless, has only arisen from publications during the last decade. Most recently, the metabolic pathway of vitamin E has been almost completely elucidated. Vitamin E is metabolized by truncation of its side chain. The initial step, an omega-hydroxylation, is carried out by cytochromes P450 (CYPs). This was evidenced by the inhibition of the metabolism of alpha-tocopherol by ketoconazole, an inhibitor of CYP3A expression, whereas rifampicin, an inducer of CYP3A expression, increased the metabolism of alpha-tocopherol. Although the degradation pathway is identical for all tocopherols and tocotrienols, there is a marked difference in the amount of metabolites released from the individual vitamin E forms in cell culture as well as in experimental animals and in humans. Recent findings not only proposed a CYP3A4-mediated degradation of vitamin E but also suggested an induction of the metabolizing enzymes by vitamin E itself. In order to investigate how vitamin E is able to influence the expression of metabolizing enzymes like CYP3A4, a pregnane X receptor (PXR)-based reporter gene assay was chosen. PXR is a nuclear receptor which regulates the transcription of genes, e.g., CYP3A4, by binding to specific DNA response elements. And indeed, as shown here, vitamin E is able to influence the expression of CYP3A via PXR in an in vitro reporter gene assay. Tocotrienols showed the highest activity, followed by delta- and alpha-tocopherol. An up-regulation of Cyp3a11 mRNA, the murine homolog of the human CYP3A4, could also be confirmed in an animal experiment. The PXR-mediated change in gene expression provided the first evidence of a direct transcriptional activity of vitamin E. PXR regulates the expression of genes involved in xenobiotic detoxification, including oxidation, conjugation, and transport.
CYP3A, e.g., is involved in the oxidative metabolism of numerous currently used drugs. This opens a discussion of possible side effects of vitamin E, but the extent to which supranutritional doses of vitamin E modulate these pathways in humans has yet to be determined. Additionally, as there is growing evidence that vitamin E's essentiality is more likely to be based on gene regulation than on antioxidant functions, it appeared necessary to further investigate the ability of vitamin E to influence gene expression. Mice were divided into three groups with diets (i) deficient in alpha-tocopherol, (ii) adequate in alpha-tocopherol supply and (iii) with a supranutritional dosage of alpha-tocopherol. After three months, half of each group was supplemented via a gastric tube with a supranutritional dosage of gamma-tocotrienol per day for 7 days. Livers were analyzed for vitamin E content, and liver RNA was prepared for hybridization using cDNA array and oligonucleotide array technology. A significant change in gene expression was observed with alpha-tocopherol but not with gamma-tocotrienol, and only using the oligonucleotide array, not the cDNA array. The latter effect is most probably due to the limited number of genes represented on a cDNA array; the lack of a gamma-tocotrienol effect is most likely caused by rapid degradation, which might prevent bioefficacy of gamma-tocotrienol. Alpha-tocopherol changed the expression of various genes. The most striking observation was an up-regulation of genes which code for proteins involved in synaptic transmitter release and calcium signal transduction. Synapsin, synaptotagmin, synaptophysin, synaptobrevin, RAB3A, complexin 1, Snap25, and ionotropic glutamate receptors (alpha 2 and zeta 1) were shown to be up-regulated in the supranutritional group compared to the deficient group.
The up-regulation of synaptic genes shown in this work is not only supported by the strong clustering of genes that are all involved in the process of vesicular transport of neurotransmitters, but was also confirmed by a recent publication. However, confirmation by real-time PCR in neuronal tissue such as brain is now required to explain the effect of vitamin E on neurological functionality. The change in expression of genes coding for synaptic proteins by vitamin E is of principal interest, since the only human disease directly originating from an inadequate vitamin E status is ataxia with isolated vitamin E deficiency. Therefore, with the results of this work, an explanation for the observed neurological symptoms associated with vitamin E deficiency can be presented for the first time.
Interpretation of and reasoning with conditionals : probabilities, mental models, and causality
(2003)
In everyday conversation, "if" is one of the most frequently used conjunctions. This dissertation investigates what meaning an everyday conditional transmits and what inferences it licenses. It is suggested that the nature of the relation between the two propositions in a conditional might play a major role for both questions. Thus, in the experiments reported here, conditional statements that describe a causal relationship (e.g., "If you touch that wire, you will receive an electric shock") were compared to arbitrary conditional statements in which there is no meaningful relation between the antecedent and the consequent proposition (e.g., "If Napoleon is dead, then Bristol is in England"). Initially, central assumptions from several approaches to the meaning of, and reasoning from, causal conditionals are integrated into a common model. In the model, the availability of exceptional situations that have the power to generate exceptions to the rule described in the conditional (e.g., the electricity is turned off) reduces the subjective conditional probability of the consequent, given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). This conditional probability determines people's degree of belief in the conditional, which in turn affects their willingness to accept valid inferences (e.g., "Peter touches the wire, therefore he receives an electric shock") in a reasoning task. In addition to this indirect pathway, the model contains a direct pathway: cognitive availability of exceptional situations directly reduces the readiness to accept valid conclusions. The first experimental series tested the integrated model for conditional statements embedded in pseudo-natural cover stories that either established a causal relation between the antecedent and the consequent event (causal conditionals) or did not connect the propositions in a meaningful way (arbitrary conditionals).
The model was supported for the causal, but not for the arbitrary conditional statements. Furthermore, participants assigned lower degrees of belief to arbitrary than to causal conditionals. Is this effect due to the presence versus absence of a semantic link between antecedent and consequent in the conditionals? This question was one of the starting points for the second experimental series. Here, the credibility of the conditionals was manipulated by adding explicit frequency information about possible combinations of presence or absence of antecedent and consequent events to the problems (i.e., frequencies of cases of 1. true antecedent with true consequent, 2. true antecedent with false consequent, 3. false antecedent with true consequent, 4. false antecedent with false consequent). This paradigm allows testing different approaches to the meaning of conditionals (Experiment 4) as well as theories of conditional reasoning against each other (Experiment 5). The results of Experiment 4 supported mainly the conditional probability approach to the meaning of conditionals (Edgington, 1995) according to which the degree of belief a listener has in a conditional statement equals the conditional probability that the consequent is true given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). Participants again assigned lower degrees of belief to the arbitrary than the causal conditionals, although the conditional probability of the consequent given the antecedent was held constant within every condition of explicit frequency information. This supports the hypothesis that the mere presence of a causal link enhances the believability of a conditional statement. In Experiment 5 participants solved conditional reasoning tasks from problems that contained explicit frequency information about possible relevant cases. The data favored the probabilistic approach to conditional reasoning advanced by Oaksford, Chater, and Larkin (2000). 
The two experimental series reported in this dissertation provide strong support for recent probabilistic theories: for the conditional probability approach to the meaning of conditionals by Edgington (1995) and the probabilistic approach to conditional reasoning by Oaksford et al. (2000). In the domain of conditional reasoning, there was additionally support for the modified mental model approaches by Markovits and Barrouillet (2002) and Schroyens and Schaeken (2003). Probabilistic and mental model approaches could be reconciled within a dual-process-model as suggested by Verschueren, Schaeken, and d'Ydewalle (2003).
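The conditional-probability reading that the data favoured can be made concrete with a small sketch (Python; the frequencies below are invented for illustration and are not the study's materials):

```python
def degree_of_belief(tt, tf, ft, ff):
    """Edgington's conditional-probability reading of 'if A then C':
    belief in the conditional = P(C | A), estimated from the four
    frequency cells of the kind given to participants.
    tt: A true, C true;  tf: A true, C false;
    ft: A false, C true; ff: A false, C false.
    """
    return tt / (tt + tf)

# Hypothetical frequencies: 40 cases of touching the wire and getting a
# shock, 10 of touching it without a shock (the ft and ff cells do not
# enter the conditional probability).
print(degree_of_belief(40, 10, 5, 45))  # P(shock | touch) = 0.8
```

Note that the false-antecedent cells are irrelevant on this reading, which is exactly what distinguishes it from, e.g., a material-conditional interpretation.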
The subject of this work is the study of applications of the Galactic microlensing effect, in which the light of a distant star (the source) is bent, according to Einstein's theory of gravity, by the gravitational field of intervening compact mass objects (the lenses), creating multiple (however not resolvable) images of the source. Relative motion of source, observer and lens leads to a variation of deflection/magnification and thus to a time-dependent observable brightness change (lightcurve), a so-called microlensing event, lasting weeks to months. The focus lies on the modeling of binary-lens events, which provide a unique tool to fully characterize the lens-source system and to detect extra-solar planets around the lens star. Making use of the ability of genetic algorithms to efficiently explore large and intricate parameter spaces in the quest for the globally best solution, a modeling software (Tango) for binary lenses is developed, presented and applied to data sets from the PLANET microlensing campaign. For the event OGLE-2002-BLG-069, the second-ever lens mass measurement was achieved, leading to a scenario in which a G5III Bulge giant at 9.4 kpc is lensed by an M-dwarf binary with a total mass of M = 0.51 solar masses at a distance of 2.9 kpc. Furthermore, a method is presented to use the absence of planetary lightcurve signatures to constrain the abundance of extra-solar planets.
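The role the genetic algorithm plays in such a fit can be illustrated with a toy sketch (Python; this is a generic elitist GA with Gaussian mutation minimizing a made-up two-parameter misfit, not the actual Tango implementation, and all names and numbers are assumptions):

```python
import random

def genetic_minimize(chi2, bounds, pop=60, gens=80, seed=1):
    """Toy elitist genetic algorithm: keep the fittest half of the
    population, refill with Gaussian-mutated copies of random elites.
    (Illustration only; not the Tango code.)"""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=chi2)           # best (lowest misfit) first
        elite = population[: pop // 2]      # survivors
        children = []
        for _ in range(pop - len(elite)):
            parent = rng.choice(elite)
            # Mutate each parameter and clamp it to its allowed range.
            children.append(
                [min(max(x + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                 for x, (lo, hi) in zip(parent, bounds)])
        population = elite + children
    return min(population, key=chi2)

# Hypothetical two-parameter "lightcurve misfit" with its minimum at (0.5, 1.2):
best = genetic_minimize(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 1.2) ** 2,
                        [(0.0, 1.0), (0.0, 2.0)])
```

The appeal for binary-lens modeling is that nothing here requires gradients or a good starting guess, which matters in a parameter space riddled with local chi-square minima.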
Even though the structure of the plant cell wall is by and large quite well characterized, its synthesis and regulation remain largely obscure. However, it is accepted that the building blocks of the polysaccharidic part of the plant cell wall are nucleotide sugars. Thus, to gain more insight into cell wall biosynthesis, in the first part of this thesis plant genes possibly involved in the nucleotide sugar interconversion pathway were identified using a bioinformatics approach and characterized in plants, mainly in Arabidopsis. For the computational identification, profile hidden Markov models were extracted from the Pfam and TIGR databases. Mainly with these, plant genes were identified using the "hmmer" program. Several gene families were identified and three were further characterized: the UDP-rhamnose synthase (RHM), UDP-glucuronic acid epimerase (GAE) and myo-inositol oxygenase (MIOX) families. For the three-membered RHM family, relatively ubiquitous expression was shown using various methods. For one of these genes, RHM2, T-DNA lines could be obtained. Moreover, the transcription of the whole family was downregulated using an RNAi approach. In both cases an alteration of typical cell wall polysaccharides and developmental changes could be shown. In the case of the rhm2 mutant these were restricted to the seed or the seed mucilage, whereas the RNAi plants showed profound changes in the whole plant. In the case of the six-membered GAE family, the most highly expressed gene (GAE6) was cloned, expressed heterologously and characterized functionally. Thus, it could be shown that GAE6 encodes an enzyme responsible for the conversion of UDP-glucuronic acid to UDP-galacturonic acid. However, a change in transcript level of various GAE family members achieved by T-DNA insertions (gae2, gae5, gae6), overexpression (GAE6) or an RNAi approach targeting the whole family did not reveal any robust changes in the cell wall.
Contrary to the other two families, the MIOX gene family had to be identified using a BLAST-based approach due to the lack of enough suitable candidate genes for building a hidden Markov model. An initial bioinformatic characterization was performed, which will lead to further insights into this pathway. In total it was possible to identify the two gene families which are involved in the synthesis of the two pectin backbone sugars, galacturonic acid and rhamnose. Moreover, with the identification of the MIOX genes, a gene family important for the supply of nucleotide sugar precursors was identified. In the second part of this thesis, publicly available microarray datasets were analyzed with respect to co-responsive behavior of transcripts on a global basis using nearly 10,000 genes. The data have been made available to the community in the form of a database providing additional statistical and visualization tools (http://csbdb.mpimp-golm.mpg.de). Using the framework of the database to identify nucleotide sugar converting genes indicated that co-response might be used to identify novel genes involved in cell wall synthesis based on already known genes.
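The co-response idea behind such a screen can be sketched as follows (Python; the gene names and expression profiles are hypothetical, and the real analysis covers nearly 10,000 genes with statistics beyond a plain Pearson correlation):

```python
import math

def pearson(x, y):
    """Pearson correlation of two expression profiles across arrays."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def co_responding(profiles, query, threshold=0.9):
    """Genes whose profiles correlate with the query gene above the
    threshold, the basic move of a co-response screen that starts
    from an already known cell-wall gene."""
    q = profiles[query]
    return sorted(g for g, p in profiles.items()
                  if g != query and pearson(q, p) >= threshold)

# Hypothetical expression levels over four experimental conditions:
profiles = {
    "RHM2": [1.0, 2.0, 3.0, 4.0],
    "GAE6": [2.1, 4.0, 6.2, 7.9],   # tracks RHM2 across conditions
    "MIOX": [4.0, 1.0, 3.5, 0.5],   # unrelated profile
}
print(co_responding(profiles, "RHM2"))  # ['GAE6']
```

A gene that co-responds with a known nucleotide sugar interconversion gene across many conditions becomes a candidate for the same pathway, which is the inference the database supports.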
This article examines the multiple governments of independent Estonia since 1992 with respect to their stability. Confronted with the immense problems of democratic transition, the multi-party governments of Estonia have changed comparatively often. Following the elections of March 2003, the ninth government since 1992 was formed. A detailed examination of government stability, using the example of Estonia, is accordingly warranted, given that the country is seen as the most successful Central Eastern European transition country in spite of its frequent changes of government. Furthermore, this article asks whether internal government stability can exist in a situation where the government changes frequently. What does stability of government mean, and what are the term's multiple facets? Before the term can be analysed, it has to be clarified and defined. It is presumed that government stability is composed of multiple variables influencing one another. Data about the average tenure of a government are not very conclusive. Rather, the deeper political causes of governmental change need to be examined. Therefore, this article first discusses the conceptual and theoretical basics of governmental stability. Secondly, it discusses the Estonian situation in detail up to the elections of 2003, including a short review of the ninth government since independence. In the conclusion, the author explains whether or not the governments of Estonia are stable. In the appendix, the reader finds all election results as well as a list of all previous ministers of Estonian governments (all data are as of July 2002).
The development of the Polish telecommunications administration in the years 1989/90 to 2003 is marked by the processes of liberalisation and privatisation the telecommunications sector underwent during that period. The gradual liberalisation of the Polish telecommunications sector started as early as 1992. In the beginning, national strategies were pursued, the most important of which was the creation of a bipolar market structure in the local area networks. In the second half of the 1990s, the approaching EU membership accelerated the process of liberalisation and, consequently, the development of a framework of regulations. EU standards are directed more towards setting out a legal framework for regulation than towards prescribing concrete details of administrative organisation. Nevertheless, the independent regulatory agencies typical of Western Europe served as a model for the introduction of a new regulatory body responsible for the telecommunications sector in Poland. The growing influence of EU legislation changed telecommunications policy as well as administrative practices. There has been a shift of responsibilities from the ministry to the regulatory agency, but the question remains whether the agency gained enough power to fulfil its regulatory function. In the following, the legislative framework created by the EU in telecommunications policy will be described and the model of independent regulatory agencies, typical for most EU countries, will be introduced. Some categories for the analysis of the Polish regulatory system will be derived from the discussion on the regulation of telecommunications in the established EU nations (see Böllhoff 2002 and 2003, Thatcher 2002a and 2002b, Thatcher/Stone Sweet 2002). Subsequently, the basic features of Polish telecommunications policy in the 1990s and its effects on the telecommunications sector will be outlined.
In the third chapter, the development of organisational structures at the ministerial level and within the regulatory agency will be examined. In the fourth chapter, I will look at the distribution of power and the coordination of the various authorities responsible for telecommunications regulation. The focus of this chapter is on the Polish regulatory agency and its relationships with the ministry, the anti-monopoly office and the Broadcasting and Television Council. In the conclusion, the main findings will be summed up.
Due to its relevance for global climate, the realistic representation of the Atlantic meridional overturning circulation (AMOC) in ocean models is a key task. In recent years, two paradigms have evolved around its driving mechanisms: diapycnal mixing and Southern Ocean winds. This work aims at clarifying what sets the strength of the Atlantic overturning components in an ocean general circulation model and discusses the role of spatially inhomogeneous mixing, numerical diffusion and winds. Furthermore, the relation of the AMOC to a key quantity, the meridional pressure difference, is analyzed. Owing to the application of a very low-diffusive tracer advection scheme, a realistic Atlantic overturning circulation can be obtained that is purely wind-driven. On top of the wind-driven circulation, changes of density gradients are caused by increasing the parameterized eddy diffusion in the North Atlantic and Southern Ocean. The linear relation between the maximum of the Atlantic overturning and the meridional pressure difference found in previous studies is confirmed, and it is shown to be due to one significant pressure gradient between the average pressure over high-latitude deep water formation regions and a relatively uniform pressure between 30°N and 30°S, which can be related directly to a zonal flow through geostrophy. Under constant Southern Ocean wind stress forcing, a South Atlantic outflow in the range of 6-16 Sv is obtained for a large variety of experiments. Overall, the circulation is wind-driven, but its strength is not uniquely determined by the Southern Ocean wind stress. The scaling of the Atlantic overturning components is linear in the background vertical diffusivity, not confirming the 2/3 power law for one-hemisphere models without wind forcing. The pycnocline depth is constant in the coarse-resolution model with large vertical grid extents.
This suggests that the ocean model operates like the Stommel box model, with a linear relation to the pressure difference and a fixed vertical scale for the volume transport. However, this seems valid only for vertical diffusivities smaller than 0.4 cm²/s, when the dominant upwelling within the Atlantic occurs along the boundaries. For larger vertical diffusivities, a significant amount of interior upwelling occurs. It is further shown that localized vertical mixing in the deep to bottom ocean cannot drive an Atlantic overturning. However, enhanced boundary mixing at thermocline depths is potentially important. The numerical diffusion is shown to have a large impact on the representation of the Atlantic overturning in the model: while the horizontal numerical diffusion tends to destabilize the Atlantic overturning, the vertical numerical diffusion acts as an amplifying mechanism.
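For context, the 2/3 power law against which the linear scaling is tested comes from a classical back-of-the-envelope argument combining advective-diffusive balance, thermal wind, and continuity (a textbook sketch, not part of the abstract itself):

```latex
% Vertical advective-diffusive balance in a pycnocline of depth D:
w\,\partial_z T = \kappa\,\partial_z^2 T \;\Rightarrow\; w \sim \kappa / D
% Thermal-wind scaling of the overturning transport:
\psi \sim \frac{g\,\Delta\rho}{\rho_0 f}\,D^2
% Continuity (\psi \sim w A) then gives D^3 \propto \kappa, hence
D \propto \kappa^{1/3}, \qquad \psi \propto \kappa^{2/3}
```

The model experiments described above instead find \(\psi \propto \kappa\), consistent with a fixed vertical scale as in the Stommel box model.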
The past decades have been characterized by various efforts to provide complete genome sequence information for a variety of organisms. The availability of full genome data triggered the development of multiplex high-throughput assays allowing simultaneous measurement of transcripts, proteins and metabolites. With genome information and profiling technologies now in hand, a highly parallel experimental biology is offering opportunities to explore and discover novel principles governing biological systems. Understanding biological complexity through modelling cellular systems represents the driving force which today allows shifting from a component-centric focus to integrative, systems-level investigations. The emerging field of systems biology integrates discovery- and hypothesis-driven science to provide comprehensive knowledge via computational models of biological systems. Within the context of evolving systems biology, large-scale computational analyses of transcript co-response data were carried out for selected prokaryotic and plant model organisms. CSB.DB, a comprehensive systems-biology database (http://csbdb.mpimp-golm.mpg.de/), was initiated to provide public and open access to the results of biostatistical analyses in conjunction with additional biological knowledge. The database tool CSB.DB enables potential users to infer hypotheses about the functional interrelation of genes of interest and may serve as a future basis for more sophisticated means of elucidating gene function. The co-response concept and the CSB.DB database tool were successfully applied to predict operons in Escherichia coli by using chromosomal distance and transcriptional co-responses. Moreover, examples were shown which indicate that transcriptional co-response analysis allows identification of differential promoter activities under different experimental conditions.
The co-response concept was successfully transferred to complex organisms, with a focus on the eukaryotic plant model organism Arabidopsis thaliana. These investigations enabled the discovery of novel genes involved in particular physiological processes and, beyond that, allowed annotation of gene functions that cannot be accessed by sequence homology. GMD, the Golm Metabolome Database, was initiated and implemented in CSB.DB to integrate metabolite information and metabolite profiles. This novel module will allow addressing complex biological questions concerning transcriptional interrelation and extend the current systems-level quest towards phenotyping.
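One simple operationalization of transcript "co-response" is rank correlation of expression profiles across experimental conditions; the sketch below (an illustration only, not the statistical pipeline actually used in CSB.DB) computes a Spearman correlation matrix with NumPy, assuming untied values:

```python
import numpy as np

def coresponse_matrix(profiles):
    """Spearman rank-correlation matrix of transcript profiles
    (rows = genes, columns = experimental conditions).
    Ranks via double argsort; no tie handling."""
    ranks = np.argsort(np.argsort(profiles, axis=1), axis=1).astype(float)
    return np.corrcoef(ranks)

profiles = np.array([[1.0, 2.0, 3.0, 4.0],    # gene A
                     [2.0, 4.0, 6.0, 8.0],    # gene B: co-responds with A
                     [9.0, 5.0, 3.0, 1.0]])   # gene C: anti-correlated
r = coresponse_matrix(profiles)
```

Gene pairs with a high (or strongly negative) coefficient would be candidate co-responders; a real analysis would additionally test significance across thousands of genes.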
Adverb positioning is guided by syntactic, semantic, and pragmatic considerations and is subject to cross-linguistic as well as language-specific variation. The goal of the thesis is to identify the factors that determine adverb placement in general (Part I) as well as in constructions in which the adverb's sister constituent is deprived of its phonetic material by movement or ellipsis (gap constructions, Part II) and to provide an Optimality Theoretic approach to the contrasts in the effects of these factors on the distribution of adverbs in English, French, and German. In Optimality Theory (Prince & Smolensky 1993), grammaticality is defined as optimal satisfaction of a hierarchy of violable constraints: for a given input, a set of output candidates is produced, out of which that candidate is selected as grammatical output which optimally satisfies the constraint hierarchy. Since grammaticality crucially relies on the hierarchic relations of the constraints, cross-linguistic variation can be traced back to differences in the language-specific constraint rankings. Part I shows how diverse phenomena of adverb placement can be captured by corresponding constraints and their relative rankings:
- contrasts in the linearization of adverbs and verbs/auxiliaries in English and French
- verb placement in German and the filling of the prefield position
- placement of focus-sensitive adverbs
- fronting of topical arguments and adverbs
Part II extends the analysis to a particular phenomenon of adverb positioning: the avoidance of adverb attachment to a phonetically empty constituent (gap). English and French are similar in that the acceptability of pre-gap adverb placement depends on the type of adverb, its scope, and the syntactic construction (English: wh-movement vs. topicalization / VP Fronting / VP Ellipsis, inverted vs. non-inverted clauses; French: CLLD vs. Cleft, simple vs. periphrastic tense).
Yet, the two languages differ in which strategies a specific type of adverb may pursue to escape placement in front of a certain type of gap. In contrast to English and French, placement of an adverb in front of a gap never gives rise to ungrammaticality in German. Rather, word ordering has to obey the syntactic, semantic, and pragmatic principles discussed in Part I; whether or not it results in adverb attachment to a phonetically empty constituent seems to be irrelevant: though constraints are active in every language, the emergence of a visible effect of their requirements in a given language depends on their relative ranking. The complex interaction of the diverse factors as well as their divergent effects on adverb placement in the various languages are accounted for by the universal constraints and their language-specific hierarchic relations in the OT framework.
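The OT evaluation scheme described above (ranked violable constraints, winner = candidate with the lexicographically minimal violation profile) can be sketched in a few lines. The two toy constraints and candidate strings below are hypothetical illustrations, not the actual constraints of the thesis:

```python
def optimal_candidate(candidates, ranked_constraints):
    """Select the candidate whose violation profile is minimal under
    the lexicographic order induced by the constraint ranking."""
    return min(candidates,
               key=lambda cand: tuple(c(cand) for c in ranked_constraints))

# Hypothetical toy constraints (illustrative only):
# ADV_GAP penalizes an adverb placed directly before a gap "_";
# ADV_LEFT penalizes an adverb that is not clause-initial.
ADV_GAP = lambda s: s.count("Adv _")
ADV_LEFT = lambda s: 0 if s.startswith("Adv") else 1

candidates = ["Adv _ V", "V _ Adv"]

# An English-like ranking (ADV_GAP >> ADV_LEFT) avoids the pre-gap adverb,
# while a German-like ranking (ADV_LEFT >> ADV_GAP) tolerates it.
print(optimal_candidate(candidates, [ADV_GAP, ADV_LEFT]))  # V _ Adv
print(optimal_candidate(candidates, [ADV_LEFT, ADV_GAP]))  # Adv _ V
```

Re-ranking the same universal constraints yields different winners, which is exactly how the OT framework derives the cross-linguistic contrasts discussed in the thesis.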
Fault planes of large earthquakes incorporate inhomogeneous structures. This can be observed in teleseismic studies through the spatial distribution of slip and seismic moment release caused by the mainshock. Both parameters are often concentrated in patches on the fault plane with much higher values of slip and moment release than in adjacent areas. These patches are called asperities and obviously have a strong influence on mainshock rupture propagation. The conditions and properties of the structures in the fault plane area that are responsible for the evolution of such asperities, and their significance for the damage distributions of future earthquakes, are still not well understood and remain the subject of current geo-scientific studies. In the presented thesis, asperity structures are identified on the fault plane of the Mw=8.0 Antofagasta earthquake in northern Chile, which occurred on 30 July 1995. It was a thrust-type event in the seismogenic zone between the subducting Pacific Nazca plate and the overriding South American plate. In a cooperation between the German Task Force for Earthquakes and the CINCA'95 project, a network of up to 44 seismic stations was set up to record the aftershock sequence. The seaward extension of the network with 9 OBH stations significantly increased the precision of hypocenter determinations. The aftershocks were distributed mainly on the fault plane itself, around the city of Antofagasta and Mejillones Peninsula. The asperity structures were recognized here by the spatial variations of local seismological parameters, first of all by the spatial distribution of the seismic b-value on the fault plane, derived from the Gutenberg-Richter magnitude-frequency relation.
The correlation of this b-value map with other parameters, such as the mainshock source time function, the gravity isostatic residual anomalies, the distribution of aftershock radiated seismic energy and the vp/vs ratios from a local earthquake tomography study, yielded insights into the composition and the asperity-generating processes. The investigation of 295 aftershock focal mechanism solutions supported the resulting fault plane structure and suggested a similar 3D stress state in the area of the Antofagasta fault plane.
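The b-value named above is the slope of the Gutenberg-Richter relation log10 N(M) = a - b·M. A minimal sketch of the standard Aki (1965) maximum-likelihood estimator is given below; the thesis's actual mapping procedure (spatial binning on the fault plane, choice of completeness magnitude) is not specified here:

```python
import math

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood estimate of the b-value in the
    Gutenberg-Richter relation log10 N(M) = a - b*M, using all events
    at or above the completeness magnitude m_min."""
    m = [x for x in magnitudes if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)
```

A b-value map is then obtained by evaluating this estimator for the events falling into each cell of a grid laid over the fault plane.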
Environmental stresses such as drought, high salt and low temperature affect plant growth and severely decrease crop productivity. Improving the stress tolerance of crop plants is therefore important for increasing crop yield under stress conditions. The Arabidopsis thaliana salt tolerance 1 gene (AtSTO1) was originally identified by Lippuner et al. (1996). In this study, around 27 members of the STO-like protein family were identified in Arabidopsis thaliana, rice and other plant species. The STO proteins have two consensus motifs (CCADEAAL and FCV(L)EDRA). The STO family members can be regarded as a distinct class of C2C2 proteins considering their low sequence similarity to other GATA-like proteins and the poor conservation of their C-terminus. AtSTO1 was found to be induced by salt, cold and drought in leaves and roots of 4-week-old Arabidopsis thaliana wild-type plants. The expression of AtSTO1 under salt and cold stress was more pronounced in roots than in leaves. The data provided here revealed that the AtSTO1 protein is localized in the nucleus, which is consistent with its proposed function as a transcription factor. AtSTO1-dependent phenotypes were observed when plants were grown at 50 mM NaCl on agar plates. Leaves of AtSTO1 overexpression lines were bigger, with dark green coloration, whereas stunted growth and yellowish leaves were observed in wild-type and RNAi plants. Also, the AtSTO1 overexpression plants, when exposed to long-term cold stress, showed a red leaf coloration which was much stronger than in wild-type and RNAi lines. Long-term growth of AtSTO1 overexpression lines under salt and cold stress was always associated with longer roots than in wild-type and RNAi lines.
Proline accumulation increased more strongly in leaves and roots of AtSTO1 overexpression lines than in tissues of wild-type and RNAi lines when plants were treated with 200 mM NaCl, exposed to cold stress, or deprived of water for one day or two weeks. Also, soluble sugar content increased to higher levels under salt, cold and drought stress in AtSTO1 overexpression lines compared to wild-type and RNAi lines. The increase in soluble sugar content was detected in AtSTO1 overexpression lines after long-term (2 weeks) growth under these stresses. Anthocyanins accumulated in leaves of AtSTO1 overexpression lines when exposed to long-term salt stress (200 mM NaCl for 2 weeks) or to 4°C for 6 and 8 weeks. Also, anthocyanin content was increased in flowers of AtSTO1 overexpression plants kept at 4°C for 8 weeks. Taken together, these data indicate that overexpression of AtSTO1 enhances abiotic stress tolerance via a more pronounced accumulation of compatible solutes under stress.
New chain transfer agents based on dithiobenzoate and trithiocarbonate for free radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT) were synthesized. The new compounds bear permanently hydrophilic sulfonate moieties which provide solubility in water independent of the pH. One of them bears a fluorophore, enabling unsymmetrical double end-group labelling as well as the preparation of fluorescently labelled polymers. Their stability against hydrolysis in water was studied and compared with that of the most frequently employed water-soluble RAFT agent, 4-cyano-4-thiobenzoylsulfanylpentanoic acid, using UV-Vis and 1H-NMR spectroscopy. An improved resistance to hydrolysis was found for the new RAFT agents, providing good stability in the pH range between 1 and 8 and up to temperatures of 70°C. Subsequently, a series of non-ionic, anionic and cationic water-soluble monomers were polymerized via RAFT in water. In these experiments, polymerizations were conducted either at 48°C or 55°C, i.e. lower than the temperatures conventionally employed (>60°C) for RAFT in organic solvents, in order to minimize hydrolysis of the active chain ends (e.g. dithioester and trithiocarbonate) and thus to obtain good control over the polymerization. Under these conditions, controlled polymerization in aqueous solution was possible with styrenic, acrylic and methacrylic monomers: molar masses increase with conversion, polydispersities are low, and the degree of end-group functionalization is high. However, polymerizations of methacrylamides were slow at temperatures below 60°C and showed only moderate control. The RAFT process in water also proved to be a powerful method for synthesizing di- and triblock copolymers, including the preparation of functional polymers with complex structure, such as amphiphilic and stimuli-sensitive block copolymers. These include polymers containing one or even two stimuli-sensitive hydrophilic blocks.
The hydrophilic character of a single block or of several blocks was switched by changing the pH, the temperature or the salt content, to demonstrate the variability of molecular designs suited for stimuli-sensitive polymeric amphiphiles and to exemplify the concept of multiple-sensitive systems. Furthermore, stable colloidal block ionomer complexes were prepared by mixing anionic surfactants in aqueous media with a double hydrophilic block copolymer synthesized via RAFT in water. The block copolymer is composed of a noncharged hydrophilic block based on polyethyleneglycol and a cationic block. The complexes prepared with perfluorodecanoate were found to be so stable that they even withstand dialysis; notably, they do not denature proteins. They are therefore potentially useful for biomedical applications in vivo.
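The hallmark of control noted above, molar mass increasing linearly with conversion, follows from every chain transfer agent (CTA) starting exactly one chain. A minimal sketch of the standard theoretical-Mn relation (generic textbook formula; the concentrations and masses below are hypothetical, not values from the thesis):

```python
def theoretical_mn(conversion, monomer_conc, cta_conc, m_monomer, m_cta):
    """Theoretical number-average molar mass (g/mol) for an ideally
    controlled RAFT polymerization: each CTA starts one chain, so Mn
    grows linearly with monomer conversion."""
    return conversion * (monomer_conc / cta_conc) * m_monomer + m_cta

# Hypothetical batch: 1 M monomer (100 g/mol), 10 mM CTA (300 g/mol).
mn_at_half_conversion = theoretical_mn(0.5, 1.0, 0.01, 100.0, 300.0)
```

Deviations of measured Mn from this line (or broad polydispersities) are the usual diagnostics for loss of control, e.g. through hydrolysis of the active chain ends.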
Taking inspiration from nature, where composite materials made of a polymer matrix and inorganic fillers are often found (e.g. bone, the shells of crustaceans, eggshells), the feasibility of making composite materials containing chitosan and nanosized hydroxyapatite was investigated. A new preparation approach based on a co-precipitation method has been developed. In its earlier stage of formation, the composite occurs as a hydrogel suspended in aqueous alkaline solution. In order to obtain solid composites, various drying procedures were used, including freeze-drying and air-drying at room temperature and at moderate temperatures between 50°C and 100°C. Physicochemical studies showed that the composites exhibit different properties with respect to their structure and composition. IR and Raman spectroscopy probed the presence of both chitosan and hydroxyapatite in the composites. Hydroxyapatite dispersed in the chitosan matrix was found to be in the nanosize range (15-50 nm) and occurs in a bimodal distribution with respect to its crystallite length. Two types of distribution domains of hydroxyapatite crystallites in the composite matrix, cluster-like (200-400 nm) and scattered-like domains, were identified by transmission electron microscopy (TEM), X-ray diffraction (XRD) and confocal scanning laser microscopy (CSLM) measurements. Relaxation NMR experiments on composite hydrogels showed the presence of two types of water sites in their gel networks, free and bound water. Mechanical tests showed that the mechanical properties of the composites are one order of magnitude lower than those of compact bone but comparable to those of porous bone. Enzymatic degradation of the composites proceeded slowly: the yields of degradation were estimated to be less than 10% by loss of mass after incubation with lysozyme for a period of 50 days.
Since the composite materials were found to be biocompatible in in vivo tests, the simple mode of their fabrication and their properties recommend them as potential candidates for non-load-bearing bone substitute materials.
This thesis describes a new experimental method for the determination of the Mode II (shear) fracture toughness, KIIC, of rock and compares the outcome to results from Mode I (tensile) fracture toughness, KIC, testing using the International Society for Rock Mechanics Chevron-Bend method. Critical Mode I fracture growth at ambient conditions was studied by carrying out a series of experiments on a sandstone at different loading rates. The mechanical and microstructural data show that time- and loading-rate-dependent crack growth occurs in the test material at constant energy requirement. The newly developed set-up for determination of the Mode II fracture toughness is called the Punch-Through Shear (PTS) test. Notches are drilled into the end surfaces of core samples; an axial load punches down the central cylinder, introducing a shear load in the remaining rock bridge, and a confining pressure may be applied to the mantle of the cores. The application of confining pressure favours the growth of Mode II fractures, as large pressures suppress the growth of tensile cracks. Variation of geometrical parameters led to an optimisation of the PTS geometry. Increasing the normal load on the shear zone increases KIIC bi-linearly: a steep slope is observed at low confining pressures, while at pressures above 30 MPa only a gentle slope is evident. The maximum confining pressure applied was 70 MPa. The evolution of fracturing and its change with confining pressure is described. The existence of Mode II fracture in rock is a matter of debate in the literature. Comparison of the results from Mode I and Mode II testing, mainly regarding the resulting fracture patterns, and correlation analysis of KIC and KIIC with physico-mechanical parameters emphasised the differences between the response of rock to Mode I and Mode II loading. On the microscale, neither the fractures resulting from Mode I nor those from Mode II loading are pure-mode fractures; on the macroscopic scale, Mode I and Mode II fractures do exist.
The role of feedback between erosional unloading and tectonics in controlling the development of the Himalaya is a matter of current debate. The distribution of precipitation is thought to control surface erosion, which in turn results in tectonic exhumation as an isostatic compensation process. Alternatively, subsurface structures can have significant influence on the evolution of this actively growing orogen. Along the southern Himalayan front, new 40Ar/39Ar white mica and apatite fission track (AFT) thermochronologic data provide the opportunity to determine the history of rock-uplift and exhumation paths along an approximately 120-km-wide NE-SW transect spanning the greater Sutlej region of the northwest Himalaya, India. The 40Ar/39Ar data indicate, consistent with earlier studies, that first the High Himalayan Crystalline and subsequently the Lesser Himalayan Crystalline nappes were exhumed rapidly during Miocene time, while the deformation front propagated to the south. In contrast, new AFT data delineate synchronous exhumation of an elliptically shaped, NE-SW-oriented ~80 x 40 km region spanning both crystalline nappes during Pliocene-Quaternary time. The AFT ages correlate with elevation but show, within the resolution of the method, no spatial relationship to preexisting major tectonic structures, such as the Main Central Thrust or the Southern Tibetan Fault System. Assuming constant exhumation rates and geothermal gradient, the rocks of two age vs. elevation transects were exhumed at ~1.4 ±0.2 and ~1.1 ±0.4 mm/a with an average cooling rate of ~50-60 °C/Ma during Pliocene-Quaternary time. The locus of pronounced exhumation defined by the AFT data coincides with a region of enhanced precipitation, high discharge, and high sediment flux rates under present conditions.
We therefore hypothesize that the distribution of AFT cooling ages might reflect the efficiency of surface processes and fluvial erosion, and thus demonstrate the influence of erosion in localizing rock-uplift and exhumation along the southern Himalayan front, rather than encompassing the entire orogen. Despite a possible feedback between erosion and exhumation along the southern Himalayan front, we observe tectonically driven crustal exhumation within the arid region behind the orographic barrier of the High Himalaya, which might be related to and driven by internal plateau forces. Several metamorphic-igneous gneiss dome complexes have been exhumed between the High Himalaya to the south and the Indus-Tsangpo suture zone to the north since the onset of Indian-Eurasian collision ~50 Ma ago. Although the overall tectonic setting is characterized by convergence, the exhumation of these domes is accommodated by extensional fault systems. Along the Indian-Tibetan border, the poorly described Leo Pargil metamorphic-igneous gneiss dome (31-34°N/77-78°E) is located within the Tethyan Himalaya. New field mapping, structural, and geochronologic data document that the western flank of the Leo Pargil dome was formed by extension along temporally linked normal fault systems. Motion on a major detachment system, referred to as the Leo Pargil detachment zone (LPDZ), has led to the juxtaposition of low-grade metamorphic, sedimentary rocks in the hanging wall and high-grade metamorphic gneisses in the footwall. The distribution of new 40Ar/39Ar white mica data indicates a regional cooling event during middle Miocene time. New apatite fission track (AFT) data demonstrate that subsequently more of the footwall was extruded along the LPDZ in a brittle stage between 10 and 2 Ma, with a minimum displacement of ~9 km. Additionally, the AFT data indicate a regional accelerated cooling and exhumation episode starting at ~4 Ma.
Thus, tectonic processes can affect the entire orogenic system, while potential feedbacks between erosion and tectonics appear to be limited to the windward side of an orogenic system.
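The exhumation rates quoted above come from age-elevation transects: under the stated assumptions (constant exhumation rate and geothermal gradient, hence a fixed closure depth), the rate is simply the slope of a linear fit of sample elevation against cooling age. A minimal sketch with hypothetical sample values (not data from the study):

```python
import numpy as np

def exhumation_rate_mm_per_a(ages_ma, elevations_m):
    """Exhumation rate from an age-elevation transect: slope of the
    linear fit of elevation (m) vs. cooling age (Ma), converted from
    m/Ma to km/Ma, which equals mm/a."""
    slope_m_per_ma = np.polyfit(ages_ma, elevations_m, 1)[0]
    return slope_m_per_ma / 1000.0

# Hypothetical transect: samples 1 km apart in elevation, AFT ages
# increasing downward by 0.7 Ma per km, giving ~1.43 mm/a.
ages = np.array([1.0, 1.7, 2.4, 3.1])
elev = np.array([4000.0, 3000.0, 2000.0, 1000.0])
rate = exhumation_rate_mm_per_a(ages, elev)
```

Note the negative slope here would indicate older ages at lower elevation, which is unphysical for a simple transect; with ages increasing upward the slope is positive, so in practice the sign convention of the sampling order must be checked.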
The India-Eurasia continental collision zone provides a spectacular example of active mountain building and climatic forcing. In order to quantify the critically important process of mass removal, I analyzed spatial and temporal precipitation patterns of the oscillating monsoon system and their geomorphic imprints. I processed passive microwave satellite data to derive high-resolution rainfall estimates for the last decade and identified an abnormal monsoon year in 2002. During this year, precipitation migrated far into the Sutlej Valley in the northwestern part of the Himalaya and reached regions behind orographic barriers that are normally arid. There, sediment flux, mean basin denudation rates, and channel-forming processes such as erosion by debris flows increased significantly. Similarly, during the late Pleistocene and early Holocene, solar forcing increased the strength of the Indian summer monsoon for several millennia and presumably led to precipitation distributions analogous to those observed during 2002. However, the persistent humid conditions in the steep, high-elevation parts of the Sutlej River resulted in deep-seated landsliding. Landslides were exceptionally large, mainly due to two processes that I infer for this time: first, at the onset of the intensified monsoon at 9.7 ka BP, heavy rainfall and high river discharge removed material stored along the river and lowered the base level; second, enhanced discharge, sediment flux, and increased pore-water pressures along the hillslopes eventually led to exceptionally large landslides that have not been observed in other periods. The excess sediments that were removed from the upstream parts of the Sutlej Valley were rapidly deposited in the low-gradient sectors of the lower Sutlej River. The timing of downcutting correlates with centuries-long weaker monsoon periods that were characterized by lower rainfall.
I explain this relationship by taking sediment flux and rainfall dynamics into account: a high sediment flux derived from the upstream parts of the Sutlej River during strong monsoon phases prevents fluvial incision due to oversaturation of the fluvial sediment-transport capacity. In contrast, weaker monsoons result in a lower sediment flux that allows incision in the low-elevation parts of the Sutlej River.
It is known that the efficiency of organic light-emitting devices (OLEDs) is strongly influenced by the 'quality' of the thin films [1]. On the basis of this conviction, the work presented in this thesis aimed to obtain a better understanding of the structure of organic thin films of general interest in the field of organic light-emitting devices by using scanning probe microscopies (SPMs). A previously unreported crystal structure of a quaterthiophene film grown on potassium hydrogen phthalate (KHP) is determined by optical measurements, a simulation program, diffraction at both normal incidence and grazing angle, and AFM. The crystal cell is triclinic with parameters a = 0.721 nm, b = 0.632 nm, c = 0.956 nm and α = 91°, β = 91.4°, γ = 91° [2]. The morphologies of four organic thin films deposited on gold are characterized by ultra-high-vacuum scanning tunneling microscopy (UHV-STM): terraces in a hexanethiol monolayer, lamellar structures in an azobenzenethiol monolayer, rods in a poly(paraphenylenevinylene) oligomer film, and a granular morphology in an oxadiazole film. The topographies of a series of poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonate) (PEDOT/PSS) films deposited on indium-tin oxide (ITO) and gold, obtained from dispersions with PEDOT:PSS weight ratios of 1:20, 1:6 and 1:1, are investigated by AFM. It is demonstrated that the films show the same topography on gold and on ITO, and that the PEDOT films eliminate the spike features of ITO. PEDOT 1:20 and 1:6 appear indistinguishable from each other but different from PEDOT 1:1 (the most conductive). Coupling STM and I-d measurements, a previously unreported structural model of PEDOT 1:1 on gold is obtained [3]: the surface presents grains, while the bulk consists of PEDOT-rich particles/domains embedded in a PEDOT-poor matrix. An equation for the conductivity is derived.
An STM investigation of four PEDOT films deposited on ITO, obtained from dispersions with the same PEDOT:PSS weight ratio of 1:1, is carried out [4]. The films differ either in the presence of sorbitol or in the synthetic route (and they exhibit different conductivities). For the first time, a quantitative and qualitative correlation between the nanometer-scale morphology of PEDOT films with and without sorbitol and their conductivity is established.
It has been known for several years that under certain conditions electrons can be confined within thin layers even if these layers consist of metal and are supported by a metal substrate. In photoelectron spectra, these layers show characteristic discrete energy levels, and it has turned out that these lead to large effects such as the oscillatory magnetic coupling technically exploited in modern hard-disk read heads. The current work asks to what extent the concepts underlying quantization in two-dimensional films can be transferred to lower dimensionality. This problem is approached by a stepwise transition from two-dimensional layers to one-dimensional nanostructures. On the one hand, these nanostructures are represented by terraces on atomically stepped surfaces, on the other hand by atom chains which are deposited onto these terraces up to complete coverage by atomically thin nanostripes. Furthermore, self-organization effects are used in order to arrive at perfectly one-dimensional atomic arrangements at surfaces. Angle-resolved photoemission is particularly suited as a method of investigation because it reveals the behavior of the electrons in these nanostructures as a function of spatial direction, which distinguishes it from, e.g., scanning tunneling microscopy. With this method, intense and at times surprisingly large effects of one-dimensional quantization are observed for various exemplary systems, partly for the first time. The essential role of bandgaps in the substrate known from two-dimensional systems is confirmed for nanostructures. In addition, we reveal an ambiguity without precedent in two-dimensional layers between spatial confinement of electrons on the one side and superlattice effects on the other, as well as between effects caused by the sample and by the measurement process. The latter effects are huge and can dominate the photoelectron spectra.
Finally, the effects of reduced dimensionality are studied in particular for the d electrons of manganese, which are additionally affected by strong correlation effects. Surprising results are obtained here as well. ---------------------------- The links to the respective sources of the publications included in the appendix can be found on page 83 of the full text.
The topic of synchronization forms a link between nonlinear dynamics and neuroscience. On the one hand, neurobiological research has shown that the synchronization of neuronal activity is an essential aspect of the working principle of the brain. On the other hand, recent advances in physical theory have led to the discovery of the phenomenon of phase synchronization. A method of data analysis motivated by this finding, phase synchronization analysis, has already been successfully applied to empirical data. The present doctoral thesis ties in with these converging lines of research. Its subjects are methodical contributions to the further development of phase synchronization analysis, as well as its application to event-related potentials, a form of EEG data that is especially important in the cognitive sciences. The methodical contributions of this work consist firstly of a number of specialized statistical tests for a difference in synchronization strength between two different states of a system of two oscillators. Secondly, in view of the many-channel character of EEG data, an approach to multivariate phase synchronization analysis is presented. For the empirical investigation of neuronal synchronization, a classic experiment on language processing was replicated, comparing the effect of a semantic violation in a sentence context with that of a manipulation of physical stimulus properties (font color). Here phase synchronization analysis detects a decrease of global synchronization for the semantic violation as well as an increase for the physical manipulation. In the latter case, the multivariate analysis allows the global synchronization effect to be traced back to an interaction of symmetrically located brain areas. The findings presented show that the physically motivated method of phase synchronization analysis is able to provide a relevant contribution to the investigation of event-related potentials in the cognitive sciences.
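For readers unfamiliar with the method, the core quantity of a bivariate phase synchronization analysis can be sketched in a few lines. The following Python snippet is not from the thesis and uses invented signal parameters; it extracts instantaneous phases via the Hilbert transform and computes the mean phase coherence R in [0, 1]:

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(x, y):
    """Mean phase coherence R in [0, 1] of two signals.

    Instantaneous phases come from the analytic signal (Hilbert
    transform); R = |<exp(i*(phi_x - phi_y))>| is a standard index.
    """
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Invented test signals: two noisy oscillators with a fixed phase
# relation, versus one with a different frequency.
t = np.linspace(0, 100, 5000)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 1.0 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
c = np.sin(2 * np.pi * 1.37 * t) + 0.3 * rng.standard_normal(t.size)

print(phase_sync_index(a, b))  # large: phase-locked
print(phase_sync_index(a, c))  # small: no fixed phase relation
```

A statistical test of the kind developed in the thesis would then compare such indices between experimental conditions.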
The overall objective of the study is the elaboration of quantitative methods for national conservation planning that are consistent with the international 'hotspots' approach. This objective requires a solution of the following problems: 1) How can large-scale vegetation diversity be estimated from abiotic factors only? 2) How can the 'global hotspots' approach be adapted to delineate national biodiversity hotspots? 3) How can conservation targets be set that account for differences in environmental conditions and human threats between national biodiversity hotspots? 4) How can a large-scale national conservation plan be designed that reflects the hierarchical nature of biodiversity? The case study for national conservation planning is Russia. Conclusions: · Large-scale vegetation diversity can be predicted to a major extent from the climatically determined latent heat of evaporation and the geometrical structure of the landscape, described as an altitudinal difference. The climate-based model reproduces the observed species numbers of vascular plants for different areas of the world with an average error of 15% · National biodiversity hotspots can be mapped from biotic or abiotic data using the quantitative criteria for plant endemism and land use from the 'global hotspots' approach, corrected for the country · Quantitative conservation targets accounting for differences in environmental conditions and human threats between national biodiversity hotspots can be set using national data for Red Data Book species · A large-scale national conservation plan reflecting the hierarchical nature of biodiversity can be designed by combining an abiotic method at the national scale (identification of large-scale hotspots) with a biotic method at the regional scale (analysis of species data from the Red Data Book)
In this thesis, magnetohydrodynamic jet formation and the effects of magnetic diffusion on the formation of axisymmetric protostellar jets have been investigated in three different simulation sets. The time-dependent numerical simulations were performed using the magnetohydrodynamic ZEUS-3D code.
Robotic telescopes & Doppler imaging : measuring differential rotation on long-period active stars
(2004)
The sun shows a wide variety of magnetic-activity-related phenomena. The magnetic field responsible for this is generated by a dynamo process believed to operate in the tachocline, located at the bottom of the convection zone. This dynamo is driven in part by differential rotation and in part by magnetic turbulence in the convection zone. The surface differential rotation, one key ingredient of dynamo theory, can be measured by tracing sunspot positions. To extend the parameter space for dynamo theories, one can extend these measurements to stars other than the sun. The primary obstacle in this endeavor is the lack of resolved surface images of other stars. This can be overcome by the Doppler imaging technique, which uses the rotation-induced Doppler broadening of spectral lines to compute the surface distribution of a physical parameter such as temperature. To obtain the surface image of a star, high-resolution spectroscopic observations, evenly distributed over one stellar rotation period, are needed. This turns out to be quite complicated for long-period stars. The upcoming robotic observatory STELLA addresses this problem with a dedicated scheduling routine tailored for Doppler imaging targets. This will make observations for Doppler imaging not only easier but also more efficient. As a preview of what can be done with STELLA, we present results of a Doppler imaging study of seven stars, all of which show evidence for differential rotation; unfortunately, the errors are of the same order of magnitude as the measurements due to unsatisfactory data quality, something that will not happen with STELLA. Both cross-correlation analysis and the sheared-image technique were used to cross-check the results where possible.
For four of these stars, weak anti-solar differential rotation was found, in the sense that the pole rotates faster than the equator; for the other three stars, weak differential rotation in the same direction as on the sun was found. Finally, these new measurements, along with other published measurements of differential rotation using Doppler imaging, were analyzed for correlations with stellar evolution, binarity, and rotation period. The total sample of stars shows a significant correlation with rotation period, but if separated into anti-solar and solar-type behavior, only the subsample showing anti-solar differential rotation shows this correlation. Additionally, there is evidence for binary stars showing less differential rotation than single stars, as is suggested by theory. All other parameter combinations fail to deliver any results due to the still small sample of stars available.
In this work, the basic principles of self-organization of diblock copolymers having the inherent property of selective or specific non-covalent binding were examined. By the introduction of electrostatic, dipole-dipole, or hydrogen bonding interactions, it was hoped to add complexity to the self-assembled mesostructures and to extend the level of ordering from the nanometer to a larger length scale. This work may be seen in the framework of biomimetics, as it combines features of synthetic polymer and colloid chemistry with basic concepts of structure formation applying in supramolecular and biological systems. The copolymer systems under study were (i) block ionomers, (ii) block copolymers with acetoacetoxy chelating units, and (iii) polypeptide block copolymers.
The following work is embedded in the multidisciplinary study DESERT (DEad SEa Rift Transect), which has been carried out in the Middle East since the beginning of the year 2000. It focuses on the structure of the southern Dead Sea Transform (DST), the transform plate boundary between Africa (Sinai) and the Arabian microplate. The left-lateral displacement along this major active strike-slip fault amounts to more than 100 km since Miocene times. The DESERT near-vertical seismic reflection (NVR) experiment crossed the DST in the Arava Valley between the Red Sea and the Dead Sea, where its main fault is called the Arava Fault. The 100 km long profile extends in a NW-SE direction from Sede Boqer/Israel to Ma'an/Jordan and coincides with the central part of a wide-angle seismic refraction/reflection line. Near-vertical seismic reflection studies are powerful tools for studying the crustal architecture down to the crust/mantle boundary. Although they cannot directly image steeply dipping fault zones, they can give indirect evidence for transform motion by offset reflectors or an abrupt change in reflectivity pattern. Since no seismic reflection profile had crossed the DST before DESERT, important aspects of this transform plate boundary and related crustal structures were not known. This study therefore aimed to resolve the DST's manifestation in both the upper and the lower crust. It was to show whether the DST penetrates into the mantle and whether it is associated with an offset of the crust/mantle boundary, as is observed at other large strike-slip zones. In this work, a short description of the seismic reflection method and the various processing steps is followed by a geological interpretation of the seismic data, taking into account relevant information from other studies. Geological investigations in the area of the NVR profile showed that the Arava Fault can partly be recognized in the field by small scarps in the Neogene sediments, small pressure ridges, or rhomb-shaped grabens.
A typical fault zone architecture with a fault gouge, a fault-related damage zone, and undeformed host rock, as has been reported from other large fault zones, could not be found. Therefore, as a complementary part to the NVR experiment, which was designed to resolve deeper crustal structures, ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) satellite images were used to analyze surface deformation and determine neotectonic activity.
The age-by-complexity effect is the dominant empirical pattern in cognitive aging. The current report investigates whether a specific high-level mechanism, an age-related decrease in the reliability of episodic accumulators, can account for the age-by-complexity effect, which is commonly assumed to be caused by an unspecific, low-level deficit. Groups of younger and older adults are compared in six reaction time experiments, using orthogonal manipulations of early cognitive difficulty (e.g., Stroop condition) and episodic demands (e.g., stimulus-response mapping). The predicted three-way interaction of age and the two factors was observed fairly consistently across experiments. A modified Brinley analysis shows that different regression slopes in old-young space are required for conditions with low and high episodic difficulty. As a methodological contribution, a Brinley regression model following from certain simple processing assumptions is developed. It is shown that, in contrast to a standard Brinley meta-analysis, the regression slopes in this model are not influenced by theoretically uninteresting between-experiment variance.
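For readers unfamiliar with the technique, a standard Brinley analysis regresses older adults' condition means on younger adults' condition means; a slope above 1 is the classical signature of generalized slowing. A minimal sketch with invented reaction times (not data from this report):

```python
import numpy as np

# Hypothetical condition means in ms (illustrative values only).
young = np.array([450.0, 520.0, 610.0, 700.0, 830.0])
old = np.array([520.0, 640.0, 780.0, 910.0, 1100.0])

# Brinley analysis: regress older adults' condition means on the
# younger adults' means; a slope > 1 indicates generalized slowing.
slope, intercept = np.polyfit(young, old, 1)
pred = slope * young + intercept
r2 = 1 - np.sum((old - pred) ** 2) / np.sum((old - old.mean()) ** 2)

print(f"slope = {slope:.2f}, intercept = {intercept:.0f} ms, R^2 = {r2:.3f}")
```

The report's point is that conditions with low and high episodic difficulty would require separate regression lines of this kind.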
The goal of this work was to study the binding of ions to polymers and lipid bilayer membranes in aqueous solutions. In the first part of this work, the influence of various inorganic salts and polyelectrolytes on the structure of water was studied using Isothermal Titration Calorimetry (ITC). The heat of dilution of the salts was used as a measure of the water-structure making and breaking of the ions. The heats of dilution could be attributed to the Hofmeister series. Following this, the binding of Ca2+ to poly(sodium acrylate) (NaPAA) was studied. ITC and a Ca2+ ion-selective electrode were used to measure the reaction enthalpy and the binding isotherm. Binding of Ca2+ ions to PAA was found to be highly endothermic and therefore solely driven by entropy. We then compared the binding of ions to the one-dimensional PAA polymer chain with their binding to lipid vesicles carrying the same functional groups. As for the polymer, Ca2+ binding was found to be endothermic. Binding of calcium to the lipid bilayer was found to be weaker than to the polymer. In the context of these experiments, it was shown that Ca2+ binds not only to charged but also to zwitterionic lipid vesicles. Finally, we studied the interaction of two salts, KCl and NaCl, with a neutral polymer gel, PNIPAAM, and with the ionic polymer PAA. Combining calorimetry and a potassium-selective electrode, we observed that the ions interact with both polymers, whether charged or not.
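As background, binding isotherms of the kind measured here are often summarized by a one-site (Langmuir-type) binding model. The sketch below is a generic illustration with a hypothetical dissociation constant; it is not the thesis's data or fitting procedure:

```python
import numpy as np

def langmuir(free_ligand, kd):
    """Fraction of binding sites occupied in a one-site binding model."""
    return free_ligand / (kd + free_ligand)

kd = 1e-3                        # hypothetical dissociation constant, M
ca = np.logspace(-5, -1, 9)      # free Ca2+ concentrations, M
theta = langmuir(ca, kd)
for c, th in zip(ca, theta):
    print(f"[Ca2+] = {c:.1e} M -> site occupancy {th:.2f}")
```

Fitting such a curve to the electrode-derived isotherm yields the binding constant, while ITC provides the enthalpy, so entropy follows from the two together.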
Adherent cells constantly collect information about the mechanical properties of their extracellular environment by actively pulling on it through cell-matrix contacts, which act as mechanosensors. In recent years, the sophisticated use of elastic substrates has shown that cells respond very sensitively to changes in the effective stiffness of their environment, reorganizing their cytoskeleton in response to the mechanical input. We develop a theoretical model to predict cellular self-organization in soft materials on a coarse-grained level. Although cell organization in principle results from complex regulatory events inside the cell, the typical response to mechanical input seems to be a simple preference for large effective stiffness, possibly because force is generated more efficiently in a stiffer environment. The term effective stiffness comprises effects of both rigidity and prestrain in the environment. This observation can be turned into an optimization principle in elasticity theory. By specifying the cellular probing force pattern and by modeling the environment as a linear elastic medium, one can predict preferred cell orientation and position. Various examples of cell organization that are of large practical interest are considered theoretically: cells in external strain fields and cells close to boundaries or interfaces, for different sample geometries and boundary conditions. For this purpose, the elastic equations are solved exactly for an infinite space, an elastic half space, and the elastic sphere. The predictions of the model are in excellent agreement with experiments on fibroblast cells, both on elastic substrates and in hydrogels. Mechanically active cells like fibroblasts could also interact elastically with each other.
We calculate the optimal structures on elastic substrates as a function of material properties, cell density, and the geometry of cell positioning that allow each cell to maximize the effective stiffness in its environment due to the traction of all the other cells. Finally, we apply Monte Carlo simulations to study the effect of noise on cellular structure formation. The model not only contributes to a better understanding of many physiological situations. In the future, it could also be used in biomedical applications to optimize protocols for artificial tissues with respect to sample geometry, boundary conditions, material properties, or cell density.
This work deals with the connection between two basic phenomena in nonlinear dynamics: synchronization of chaotic systems and recurrences in phase space. Synchronization takes place when two or more systems adapt (synchronize) some characteristic of their respective motions, due to an interaction between the systems or to a common external forcing. The appearance of synchronized dynamics in chaotic systems is rather universal but not trivial. In some sense, the possibility that two chaotic systems synchronize is counterintuitive: chaotic systems are characterized by sensitive dependence on initial conditions. Hence, two identical chaotic systems starting at two slightly different initial conditions evolve in a different manner, and after a certain time, they become uncorrelated. Therefore, at first glance, it does not seem plausible that two chaotic systems are able to synchronize. But as we will see later, synchronization of chaotic systems has been demonstrated. On the one hand, it is important to investigate the conditions under which synchronization of chaotic systems occurs; on the other hand, tests for the detection of synchronization need to be developed. In this work, I have concentrated on the second task, for the cases of phase synchronization (PS) and generalized synchronization (GS). Several measures have been proposed so far for the detection of PS and GS. However, difficulties arise with the detection of synchronization in systems subjected to rather large amounts of noise and/or nonstationarity, which are common when analyzing experimental data. The new measures proposed in the course of this thesis are rather robust with respect to these effects. They can hence be applied to data that have evaded synchronization analysis so far. The proposed tests for synchronization in this work are based on the fundamental property of recurrences in phase space.
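One recurrence-based indicator of phase synchronization compares the probability of recurrence after a lag tau for the two systems: if their phases are locked, these probabilities peak at the same lags. The following is a minimal sketch of that idea, using illustrative scalar signals rather than a full phase-space embedding:

```python
import numpy as np

def recurrence_probability(x, eps, max_lag):
    """p(tau): fraction of states that recur within eps after lag tau."""
    n = len(x)
    p = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        p[tau - 1] = np.mean(np.abs(x[tau:] - x[:n - tau]) < eps)
    return p

def cpr(x, y, eps_x, eps_y, max_lag=200):
    """Correlation of probabilities of recurrence; near 1 suggests PS."""
    px = recurrence_probability(x, eps_x, max_lag)
    py = recurrence_probability(y, eps_y, max_lag)
    return float(np.corrcoef(px, py)[0, 1])

t = np.arange(0, 200, 0.05)
x = np.sin(2 * np.pi * 0.5 * t)
y = 2.0 * np.sin(2 * np.pi * 0.5 * t + 1.0)  # same rhythm, other amplitude
z = np.sin(2 * np.pi * 0.31 * t)             # unrelated rhythm

print(cpr(x, y, 0.1, 0.2))  # high: recurrence times coincide
print(cpr(x, z, 0.1, 0.1))  # low: recurrence times differ
```

Because only recurrence times enter, such an index is insensitive to differing amplitudes, which is what makes recurrence-based tests attractive for noisy experimental data.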
Paleomagnetic dating of climatic events in Late Quaternary sediments of Lake Baikal (Siberia)
(2004)
Lake Baikal provides an excellent climatic archive for Central Eurasia, as global climatic variations are continuously recorded in its sediments. We performed continuous rock magnetic and paleomagnetic analyses on hemipelagic sequences retrieved from four underwater highs, reaching back 300 ka. The rock magnetic study, combined with TEM, XRD, XRF, and geochemical analyses, showed that magnetite of detrital origin dominates the magnetic signal in glacial sediments, whereas interglacial sediments are affected by early diagenesis. HIRM roughly quantifies the hematite and goethite contributions and remains the best proxy for estimating the detrital input into Lake Baikal. Relative paleointensity records of the Earth's magnetic field show a reproducible pattern, which allows for correlation with well-dated reference curves and thus provides an alternative age model for Lake Baikal sediments. Using the paleomagnetic age model, we observed that cooling in the Lake Baikal region and cooling of the sea surface water in the North Atlantic, as recorded in planktonic foraminifera δ18O, are coeval. On the other hand, benthic δ18O curves record mainly the global ice volume change, which occurs later than the sea surface temperature change. This shows that a dating bias results from an age model based on the correlation of Lake Baikal sedimentary records with benthic δ18O curves. The compilation of paleomagnetic curves provides a new relative paleointensity curve, “Baikal 200”. With a laser-assisted grain size analysis of the detrital input, three facies types, reflecting different sedimentary dynamics, can be distinguished. (1) Glacial periods are characterised by a high clay content, mostly due to wind activity, and by the occurrence of a coarse fraction (sand) transported over the ice by local winds. This fraction gives evidence for aridity in the hinterland.
(2) At glacial/interglacial transitions, the quantity of silt increases as the moisture increases, reflecting increased sedimentary dynamics. Wind transport and snow trapping are the dominant processes bringing silt to a hemipelagic site. (3) During the climatic optimum of the Eemian, the silt size and quantity are minimal due to blanketing of the detrital sources by the vegetation cover.
Understanding stars, their magnetic activity phenomena, and the underlying dynamo action is the foundation for understanding 'life, the universe and everything', as stellar magnetic fields play a fundamental role in star and planet formation and for the terrestrial atmosphere and climate. Starspots are the fingerprints of magnetic field lines and thereby the most important sign of activity in a star's photosphere. However, they cannot be observed directly, as it is not (yet) possible to spatially resolve the surfaces of even the nearest neighbouring stars. Therefore, an indirect approach called 'Doppler imaging' is applied, which allows the surface spot distribution on rapidly rotating, active stars to be reconstructed. In this work, data from 11 years of continuous spectroscopic observations of the active binary star EI Eridani are reduced and analysed. 34 Doppler maps are obtained, and the problem of how to parameterise the information content of Doppler maps is discussed. Three approaches for parameter extraction are introduced and applied to all maps: average temperature, separated into several latitude bands; fractional spottedness; and, for the analysis of the structural temperature distribution, longitudinal and latitudinal spot-occurrence functions. The resulting values do not show a distinct correlation with the proposed activity cycle as seen in photometric long-term observations, suggesting that the photometric activity cycle is not accompanied by a spot cycle like that seen on the Sun. The general morphology of the spot pattern on EI Eri remains persistent over the whole period of 11 years. In addition, a detailed parameter study is performed. Improved orbital parameters suggest that EI Eri might be complemented by a third star in a wide orbit of about 19 years. Preliminary differential rotation measurements are carried out, indicating an anti-solar orientation.
In this thesis, dynamical structures and manifolds in closed chaotic flows are investigated. Knowledge about the dynamical structures (and manifolds) of a system is important, since they provide first information about the dynamics of the system; with their help we are able to characterize the flow and perhaps even to forecast its dynamics. The visualization of such structures in closed chaotic flows is a difficult and often lengthy process. Here, the so-called 'leaking method' is introduced, using examples of simple mathematical maps such as the baker and sine maps, with which we are able to visualize subsets of the manifolds of the system's chaotic saddle. Comparisons between the visualized manifolds and structures traced out by chemical or biological reactions superimposed on the same flow are made using the example of a kinematic model of the Gulf Stream. It is shown that with the help of the leaking method, dynamical structures can also be visualized in environmental systems. In the example of a realistic model of the Mediterranean Sea, the leaking method is extended to the 'exchange method'. The exchange method allows us to characterize transport between two regions, to visualize transport routes and their exchange sets, and to calculate the exchange times. Exchange times and sets are shown and calculated for a northern and a southern region in the western basin of the Mediterranean Sea. Furthermore, mixing properties in the Earth's mantle are characterized, and geometrical properties of manifolds in a three-dimensional mathematical model (the ABC map) are investigated.
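The leaking method itself is easy to sketch: open an artificial hole in phase space, iterate an ensemble of points, and discard those that fall into the hole; the surviving initial conditions trace out a neighbourhood of the stable manifold of the chaotic saddle. A minimal illustration for the baker map, with an arbitrarily chosen leak region:

```python
import numpy as np

def baker(x, y):
    """Area-preserving baker map on the unit square."""
    left = x < 0.5
    xn = np.where(left, 2 * x, 2 * x - 1)
    yn = np.where(left, y / 2, y / 2 + 0.5)
    return xn, yn

# Leaking method: declare a hole, iterate an ensemble, discard every
# point that visits the hole. The leak region here is arbitrary.
rng = np.random.default_rng(1)
n_points, n_steps = 200_000, 12
x = rng.random(n_points)
y = rng.random(n_points)
alive = np.ones(n_points, dtype=bool)

for _ in range(n_steps):
    alive &= ~((x > 0.4) & (x < 0.6))   # kill points inside the leak
    x, y = baker(x, y)

# The initial conditions of the survivors approximate the stable
# manifold of the chaotic saddle; their number decays exponentially
# with n_steps (the escape rate).
print(alive.sum(), "of", n_points, "points survived", n_steps, "steps")
```

Plotting the surviving initial conditions would reveal the filamentary manifold structure discussed in the text.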
In this work, different approaches are undertaken to improve the understanding of the sucrose-to-starch pathway in developing potato tubers. At first, an inducible gene expression system of fungal origin is optimised for use in studying metabolism in the potato tuber. It is found that the alc system from Aspergillus nidulans responds more rapidly to acetaldehyde than to ethanol, and that acetaldehyde has fewer side effects on metabolism. The optimal induction conditions are then used to study the effects of temporally controlled cytosolic expression of a yeast invertase on the metabolism of potato tubers. The observed differences between induced and constitutive expression of the invertase lead to the conclusion that glycolysis is induced after an ATP demand has been created by an increase in sucrose cycling. Furthermore, the data suggest that in the potato tuber maltose is a product of glucose condensation rather than of starch degradation. In the second part of the work, it is shown that the expression of a yeast invertase in the vacuole of potato tubers has effects on metabolism similar to those of the expression of the same enzyme in the apoplast. These observations give further evidence for the presence of a mechanism by which sucrose is taken up via endocytosis into the vacuole rather than via transporters directly into the cytosol. Finally, a kinetic in silico model of sucrose breakdown is presented that is able to simulate this part of potato tuber metabolism on a quantitative level. Furthermore, it can predict the metabolic effects of the introduction of a yeast invertase into the cytosol of potato tubers with astonishing precision. In summary, these data show that inducible gene expression and kinetic computer models of metabolic pathways are useful tools for greatly improving the understanding of plant metabolism.
Polymer optical fibers (POFs) are a rather new tool for high-speed data transfer by modulated light. They allow the transport of large amounts of data over distances of up to about 100 m without being influenced by external electromagnetic fields. Due to the organic chemical nature of POFs, they are sensitive to the climate of their environment, and so, therefore, are their optical properties. Hence, optical stability is a key issue for long-term applications of POFs. The causes of the loss of optical transmission due to climatic exposure (aging/degradation) are investigated by means of chemical analytical tools such as chemiluminescence (CL) and Fourier transform infrared (FTIR) spectroscopy for five step-index multimode PMMA-based POFs from different manufacturers and for seven different climatic conditions. Three of the five POF samples are studied in more detail to identify the effects of individual parameters and to forecast long-term optical stability from short-term exposure tests. At first, the unexposed POF components (core, cladding, and bare POF as the combination of core and cladding) are characterized with respect to important physical and chemical properties. The glass transition temperature Tg and the melting temperature Tm are in the region of 120 °C to 140 °C; the molecular weight (Mw) of the cores is of the order of 10^5 g mol^-1. The POFs are found to have claddings of different chemical compositions, as detected by FTIR, but identical compositions of their cores. Two of the POFs are exposed as cables (core, cladding, and jacket) for about 3300 hours to the climate 92 °C / 95 % relative humidity (RH), resulting in different transmission decreases. Investigation of the corresponding unexposed and exposed bare POFs for degradation using CL, FTIR, thermogravimetry (TG), UV/visible transmittance, and gel permeation chromatography (GPC) suggests that the claddings of POFs are more affected than the cores.
The observed loss of transmission is probably mainly due to increased light absorption and imperfections at the core-cladding boundary caused by a large degradation of the claddings. Hence, it is highly probable that the optical transmission stability of POFs is governed mainly by the thermo-oxidative stability of the cladding and only to a minor extent by that of the core. Three bare POFs (core and cladding only) are exposed for different durations (30 hours to 4500 hours) to 92 °C / 95 %RH, 92 °C / 50 %RH, 50 °C / 95 %RH, 90 °C / low humidity, 100 °C / low humidity, 110 °C / low humidity, and 120 °C / low humidity. In these climates, their transmission variations are also found to differ from each other. The outcomes strongly indicate that under high-temperature, high-humidity climates, physical changes such as volume expansion are the main sources of the loss of optical transmission. The optical transmission stability of POFs is also found to depend on the chemical composition of the claddings. Under high-temperature, low-humidity conditions, the loss of transmission at the early stages of exposure is mainly caused by physical changes, presumably by core-cladding interface imperfections. For the later stages of exposure, an additional increase in light absorption by core and cladding, owing to degradation, is proposed. Optical simulation results obtained in parallel by Mr. L. Jankowski (a PhD student at BAM) confirm these findings. For bare POFs, too, the optical stability seems to depend on their thermo-oxidative stability. Some short-term exposure tests are conducted to identify the influence of individual climatic parameters on the transmission properties of POFs. It is found that at stationary high temperature and variable humidity, POFs display a partly reversible transmission loss due to physically absorbed water. In the case of varying temperature and constant high humidity, however, such reversibility is hardly noticeable.
However, at room temperature and varying humidity, POFs display a fully reversible transmission loss. The whole research described above has to be regarded as a starting point for further investigations. The restricted availability of fundamental POF data from the manufacturers and the time-consuming aging by climatic exposure restrict the results more or less to the samples investigated here. Significant general statements would require, for example, additional information concerning the variation of POF properties due to production. Nevertheless, the tests described here have the capability of approximating and forecasting the long-term optical transmission stability of POFs. -------------- Also published in print: Appajaiah, Anilkumar: Climatic stability of polymer optical fibers (POF) / Anilkumar Appajaiah. - Bremerhaven: Wirtschaftsverl. NW, Verl. für neue Wiss., 2005. - [approx. 175 pp.]: ill., graphs. - (BAM-Dissertationsreihe; 9) ISBN 3-86509-302-7
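Forecasting long-term stability from accelerated high-temperature tests is commonly done with an Arrhenius extrapolation. The sketch below illustrates only the principle, with an invented reference time and activation energy rather than values from this work:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def time_to_failure(t_ref, T_ref, T, Ea):
    """Arrhenius scaling of a degradation time from T_ref to T (kelvin)."""
    return t_ref * np.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

# Invented numbers: 1000 h to a given transmission loss at 92 degC,
# activation energy 90 kJ/mol -- purely illustrative.
t_ref, T_ref, Ea = 1000.0, 92 + 273.15, 90e3
for T_c in (50, 70, 92):
    t = time_to_failure(t_ref, T_ref, T_c + 273.15, Ea)
    print(f"{T_c:3d} degC -> {t:12.0f} h")
```

Such an extrapolation applies only when a single thermally activated degradation mechanism dominates, which is precisely what the combined humidity/temperature tests above probe.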
Modelling and simulation of light propagation in non-aged and aged step-index polymer optical fibres
(2004)
This thesis discusses theoretical and practical aspects of the modelling of light propagation in non-aged and aged step-index polymer optical fibres (POFs). Special attention has been paid to describing the optical characteristics of non-ideal fibres, scattering and attenuation, and to combining application-oriented and theoretical approaches. Precedence has been given to practical issues, but much effort has also been spent on the theoretical analysis of the basic mechanisms governing light propagation in cylindrical waveguides. As a result, a practically usable general POF model based on the raytracing approach has been developed and implemented. A systematic numerical optimisation of its parameters has been performed to obtain the best fit between simulated and measured optical characteristics of numerous non-aged and aged fibre samples. The model was validated by the good agreement obtained, especially for the non-aged fibres. The relations found between aging time and the optimal values of the model parameters contribute to a better understanding of the aging mechanisms of POFs.
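The core of any raytracing model of a step-index fibre is the total-internal-reflection condition together with the longer zigzag path of oblique rays. The following sketch is a strongly simplified illustration of that principle (meridional rays only, invented fibre parameters, bulk absorption only), not the general POF model developed here:

```python
import numpy as np

# Hypothetical step-index POF parameters (illustrative only)
n_core, n_clad = 1.492, 1.402   # PMMA-like core, lower-index cladding
fiber_len = 10.0                # fibre length in m
alpha = 0.04                    # bulk attenuation of the core, 1/m

# Largest axial angle of a guided meridional ray (total internal
# reflection at the core-cladding boundary).
theta_c = np.arccos(n_clad / n_core)

def transmitted_power(theta):
    """Relative output power of a meridional ray launched at `theta`.

    Rays steeper than theta_c escape into the cladding; guided rays
    travel a zigzag path longer by 1/cos(theta), so they suffer more
    bulk absorption.
    """
    if theta >= theta_c:
        return 0.0
    path = fiber_len / np.cos(theta)
    return float(np.exp(-alpha * path))

for deg in (0, 5, 10, 19):
    print(f"{deg:2d} deg -> {transmitted_power(np.radians(deg)):.3f}")
```

A full model of the kind described above would additionally trace skew rays and include scattering and interface imperfections, which is where the aging-dependent parameters enter.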
Recurrence plots, a rather promising tool of data analysis, were introduced by Eckmann et al. in 1987. They visualise recurrences in phase space and give an overview of the system's dynamics. Two features have made the method rather popular: first, recurrence plots are rather simple to compute, and second, they are putatively easy to interpret. However, the straightforward interpretation of recurrence plots yields rather surprising results for some systems. For example, indications of low-dimensional chaos have been reported for stock market data on the basis of recurrence plots. In this work we exploit recurrences, or "naturally occurring analogues" as they were termed by E. Lorenz, to obtain three key results. The first is that the most striking structures found in recurrence plots are linked to the correlation entropy and the correlation dimension of the underlying system. Even though embedding changes the structures in recurrence plots considerably, these dynamical invariants can be estimated independently of the particular parameters used for the computation. The second key result is that the attractor can be reconstructed from the recurrence plot. This means that, in the limit of long time series, the recurrence plot contains all topological information about the system in question. The graphical representation of the recurrences can also help to develop new algorithms that exploit specific structures. This feature led to the third key result of this study: based on recurrences to points which have the same "recurrence structure", it is possible to generate surrogates of the system which capture all relevant dynamical characteristics, such as the entropies, dimensions and characteristic frequencies of the system. The surrogates generated in this way are shadowed by a trajectory of the system that starts at different initial conditions than the time series in question. They can then be used to test for complex synchronisation.
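Since the abstract notes that recurrence plots are rather simple to compute, a minimal sketch may help; the function name `recurrence_matrix` and the fixed threshold `eps` are illustrative choices, not the thesis's actual implementation.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j
    are closer than the threshold eps in phase space."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]          # treat a scalar series as 1-D states
    # pairwise Euclidean distances between all pairs of states
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

# example: a short sine trajectory recurs once per period
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)
```

The matrix is symmetric with an all-ones main diagonal; rendering `R` as an image gives the recurrence plot itself.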
One of the most striking features of ecological systems is their ability to undergo sudden outbreaks in the population numbers of one or a small number of species. The similarity of outbreak characteristics exhibited in totally different and unrelated (ecological) systems naturally leads to the question of whether universal mechanisms underlie outbreak dynamics in ecology. It is shown in two case studies (dynamics of phytoplankton blooms under variable nutrient supply, and spread of epidemics in networks of cities) that one explanation for the regular recurrence of outbreaks stems from the interaction of the natural systems with periodic variations of their environment. Natural aquatic systems like lakes offer very good examples of the annual recurrence of outbreaks in ecology. The question of whether chaos is responsible for the irregular heights of outbreaks is central in the domain of ecological modeling; it is investigated here in the context of phytoplankton blooms. The dynamics of epidemics in networks of cities is a problem with many ecological and theoretical aspects. The coupling between the cities is introduced through their sizes and gives rise to a weighted network whose topology is generated from the distribution of the city sizes. We examine the dynamics in this network and classify the different possible regimes. It could be shown that a single epidemiological model can be reduced to a one-dimensional map. In this context we analyze the dynamics in networks of weighted maps. The coupling is a saturation function which possesses a parameter that can be interpreted as an effective temperature for the network. This parameter allows the network topology to be varied continuously from global coupling to a hierarchical network. We perform a bifurcation analysis of the global dynamics and succeed in constructing an effective theory that explains the behavior of the system very well.
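As a rough illustration of a network of coupled one-dimensional maps with size-dependent weights and a temperature-like parameter, the sketch below uses a logistic map as a stand-in for the reduced epidemic map and a power-law (softmax-like) weighting of city sizes; the thesis's exact reduced map and saturation function are not reproduced here, so every function and parameter is an assumption for illustration only.

```python
import numpy as np

def coupling_weights(sizes, beta):
    """Illustrative weight matrix built from city sizes; 1/beta acts
    as an effective temperature: beta = 0 gives uniform (global)
    coupling, large beta is dominated by the largest cities
    (hierarchical limit). Not the thesis's actual function."""
    w = sizes.astype(float) ** beta
    W = np.tile(w, (len(sizes), 1))
    np.fill_diagonal(W, 0.0)            # no self-coupling
    return W / W.sum(axis=1, keepdims=True)

def step(x, W, r=3.8, eps=0.2):
    """One update of a network of coupled 1-D maps; the logistic map
    stands in for the reduced single-city epidemic map."""
    f = r * x * (1.0 - x)
    return (1.0 - eps) * f + eps * W @ f
```

Iterating `step` with different `beta` values lets one probe how the transition from global to hierarchical coupling changes the collective dynamics.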
This thesis analyses synchronization phenomena occurring in large ensembles of interacting oscillatory units. In particular, the effects of nonisochronicity (the dependence of frequency on the oscillator's amplitude) on the macroscopic transition to synchronization are studied in detail. The new phenomena found (anomalous synchronization) are investigated within populations of oscillators as well as between ensembles of oscillators.
We calculate the additional carbon emissions resulting from the conversion of natural land in the process of urbanisation, and the change of carbon flows by "urbanised" ecosystems, when atmospheric carbon is exported to neighboring territories, from 1980 to 2050 for eight regions of the world. As a scenario we use combined UN and demographic-model prognoses for regional total and urban population growth. The calculations of the dynamics of urban areas are based on two models: a regression model and the Gamma-model. The urbanised area is subdivided into built-up areas, "green" areas (parks, etc.) and informal settlements (favelas). The next step is to calculate the regional and world dynamics of carbon emission and export, and the annual total carbon balance. Both models give similar results with some quantitative differences. In the first model, the world annual emissions attain a maximum of 205 MtC/year between 2020 and 2030; emissions then slowly decrease. The maximum contributions come from China and the Asia and Pacific regions. In the second model, world annual emissions increase to 1.25 GtC in 2005 and begin to decrease afterwards. If we compare this emission maximum with the annual emission caused by deforestation, 1.36 GtC per year, we can say that the role of urbanised territories (UT) is of comparable magnitude. Regarding the world annual export of carbon by UT, we observe its monotonic growth by about three times, from 24 MtC to 66 MtC in the first model, and from 249 MtC to 505 MtC in the second. The latter is therefore comparable to the amount of carbon transported by rivers into the ocean (196-537 MtC). By estimating the total balance we find that urbanisation shifts the total balance towards a "sink" state. Urbanisation is inhibited in the interval 2020-2030, and by 2050 the growth of urbanised areas would almost stop. Hence, the total emission of natural carbon at that stage will stabilise at the level of the 1980s (80 MtC per year).
As estimated by the second model, the total balance, being almost constant until 2000, then starts to decrease at an almost constant rate. We can say that by the end of the 21st century the total carbon balance will be equal to zero, when the exchange flows are fully balanced, and may even become negative, when the system begins to take up carbon from the atmosphere, i.e., becomes a "sink".
My thesis is concerned with several new noise-induced phenomena in excitable neural models, especially those with FitzHugh-Nagumo dynamics. In these effects the fluctuations intrinsically present in any complex neural network play a constructive role and improve functionality. I report the occurrence of Vibrational Resonance in excitable systems. Both in an excitable electronic circuit and in the FitzHugh-Nagumo model, I show that an optimal amplitude of high-frequency driving enhances the response of an excitable system to a low-frequency signal. Additionally, the influence of additive noise and the interplay between Stochastic and Vibrational Resonance is analyzed. Further, I study systems which combine both oscillatory and excitable properties, and hence intrinsically possess two internal frequencies. I show that in such a system the effect of Stochastic Resonance can be amplified by an additional high-frequency signal which is in resonance with the oscillatory frequency. This amplification needs much lower noise intensities than for conventional Stochastic Resonance in excitable systems. I study frequency selectivity in noise-induced subthreshold signal processing in a system with many noise-supported stochastic attractors. I show that the response of the coupled elements at different noise levels can be significantly enhanced or reduced by forcing some elements into resonance with these new frequencies which correspond to appropriate phase-relations. A noise-induced phase transition to excitability is reported in oscillatory media with FitzHugh-Nagumo dynamics. This transition takes place via noise-induced stabilization of a deterministically unstable fixed point of the local dynamics, while the overall phase-space structure of the system is maintained. The joint action of coupling and noise leads to a different type of phase transition and results in a stabilization of the system. 
The resulting noise-induced regime is shown to display properties characteristic of excitable media, such as Stochastic Resonance and wave propagation. This effect thus allows the transmission of signals through an otherwise globally oscillating medium. In particular, these theoretical findings suggest a possible mechanism for suppressing undesirable global oscillations in neural networks (which are usually characteristic of abnormal medical conditions such as Parkinson's disease or epilepsy), using the action of noise to restore excitability, which is the normal state of neuronal ensembles.
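The FitzHugh-Nagumo setting with a slow signal plus a fast auxiliary drive (the Vibrational Resonance configuration) can be sketched as follows; the explicit Euler scheme and all parameter values are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

def fhn_two_freq(T=50.0, dt=0.001, a=1.05, eps=0.08,
                 A_lo=0.3, w_lo=0.5, A_hi=1.0, w_hi=5.0):
    """Euler integration of one excitable FitzHugh-Nagumo unit driven
    by a low-frequency signal plus a fast auxiliary forcing.
    eps is the time-scale separation; a > 1 makes the unforced
    unit excitable. Parameter values are illustrative only."""
    n = int(T / dt)
    u = np.empty(n); v = np.empty(n)
    u[0], v[0] = -1.0, -0.5
    for i in range(n - 1):
        t = i * dt
        drive = A_lo * np.sin(w_lo * t) + A_hi * np.sin(w_hi * t)
        du = (u[i] - u[i] ** 3 / 3.0 - v[i]) / eps   # fast activator
        dv = u[i] + a + drive                        # slow recovery + forcing
        u[i + 1] = u[i] + dt * du
        v[i + 1] = v[i] + dt * dv
    return u, v

u, v = fhn_two_freq()
```

Sweeping `A_hi` and measuring the spectral response of `u` at `w_lo` is the usual way to locate the optimal high-frequency amplitude in this setup.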
Independent component analysis (ICA) is a tool for statistical data analysis and signal processing that is able to decompose multivariate signals into their underlying source components. Although the classical ICA model is highly useful, there are many real-world applications that require powerful extensions of ICA. This thesis presents new methods that extend the functionality of ICA: (1) reliability and grouping of independent components with noise injection, (2) robust and overcomplete ICA with inlier detection, and (3) nonlinear ICA with kernel methods.
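As a reminder of what the classical ICA model does before these extensions, here is a minimal symmetric FastICA sketch for a two-source mixture; it illustrates the baseline decomposition only, not the noise-injection, inlier-detection, or kernel methods of the thesis, and all names and parameters are illustrative.

```python
import numpy as np

def fastica_2d(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a 2-source
    demo: center, whiten, then iterate the fixed-point update with
    symmetric decorrelation. An illustrative sketch only."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T      # whitening matrix
    Z = K @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G ** 2                 # derivative of tanh
        W = G @ Z.T / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        u_, _, vt = np.linalg.svd(W)      # symmetric decorrelation
        W = u_ @ vt
    return W @ Z                          # estimated sources

# demo: unmix two linearly mixed non-Gaussian sources
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S_hat = fastica_2d(A @ S)
```

The recovered rows match the true sources up to permutation and sign, which is the usual ICA indeterminacy.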
For recombinant production of proteins for structural and functional analyses, the E. coli expression system is the most widely used due to its high yields and straightforward processing. However, the expression of eukaryotic proteins in E. coli in particular is often problematic, e.g. when the protein is not folded correctly and is deposited in insoluble inclusion bodies. In some cases it is favourable to analyse deletion constructs of a protein, or an individual protein domain, instead of the full-length protein. This implies the generation of a set of expression constructs that need to be characterised. In this work, methods to optimise and evaluate the in vitro folding of inclusion-body proteins, as well as high-throughput characterisation of expression constructs, were developed. Transferring inclusion-body proteins to their native state involves two steps: (a) solubilisation with a chaotropic reagent or a strong ionic detergent, and (b) folding of the protein by removal of the chaotrope accompanied by transfer into an appropriate buffer. The yield of natively folded protein is often substantially reduced due to aggregation or misfolding; it may, however, be improved by certain additives to the folding buffer. These additives need to be identified empirically. In this thesis a screening procedure for folding conditions was developed. To reduce the number of possible combinations of screening additives, empirical observations documented in the literature as well as well-known properties of certain additives were considered. To decrease the amount of protein and work invested, the screen was miniaturised and automated using a pipetting robot. Twenty rapid-dilution conditions for the denatured protein are tested, as well as two conditions for folding proteins using the detergent/cyclodextrin protein-folding system of Rozema et al. (1996); 100 µg of protein is used per condition. In addition, eight conditions can be tested for folding of His-tagged proteins (approx.
200 µg) immobilised on metal-chelate resins. The screen was successfully applied to fold a human protein, the p22 subunit of dynactin, which is expressed in inclusion bodies in E. coli. For p22 dynactin, as is the case for many proteins, no biological assay was available to assess the success of the folding screen. Protein solubility cannot be used as a stringent criterion because, besides natively folded protein, soluble misfolded species and microaggregates may occur. This work evaluates methods to detect small amounts of natively folded protein after automated folding screening. Before the folding screen with p22 dynactin, two model enzymes, bovine carbonic anhydrase II (CAB) and pig heart mitochondrial malate dehydrogenase, were used for evaluation. Recovered activity after refolding was correlated with different biophysical methods. 8-Anilino-1-naphthalenesulfonic acid binding experiments gave no useful information when refolding CAB, due to low sensitivity and because misfolded protein could not be readily distinguished from native protein. Tryptophan fluorescence spectra of refolded CAB were used to assess the success of refolding: the shift of the intensity maximum to a shorter wavelength, compared to the denaturant-unfolded protein, as well as the fluorescence intensity, correlated with recovered enzymatic activity. For both model enzymes, analytical hydrophobic interaction chromatography (HIC) was useful for identifying refolded samples that contain active enzyme. Compactly folded, active enzyme eluted in a distinct peak in a decreasing ammonium sulfate gradient. The detection limit of analytical HIC was approx. 5 µg. In the case of CAB, tryptophan fluorescence spectroscopy and analytical HIC showed that both methods in combination can be useful to rule out false positives or false negatives obtained with one method alone. These two methods were also useful for identifying conditions for the folding of p22 dynactin.
However, tryptophan fluorescence spectroscopy can lead to false positives because in some cases the spectra of soluble microaggregates are not well distinguishable from those of natively folded protein. In summary, a fast and reliable screening procedure was developed to make inclusion-body proteins accessible to structural or functional analyses. In a separate project, 88 different E. coli expression constructs for 17 human protein domains that had been identified by sequence analysis were analysed using high-throughput purification and folding analysis, in order to obtain candidates suitable for structural analysis. After expression in 96-deep-well microplates and automated protein purification, solubly expressed protein domains were directly analysed using 1D ¹H-NMR spectroscopy. It was found that isolated methyl-group signals below 0.5 ppm are particularly sensitive and reliable probes for folded protein. In addition, similar to the evaluation of the folding screen, analytical HIC proved to be an efficient tool for identifying constructs that yield compactly folded protein. The two methods, 1D ¹H-NMR spectroscopy and analytical HIC, provided complementary results. Six constructs, representing two domains, could be quickly identified as targets well suited for structural analysis. The structure of one of these domains was recently solved by co-workers; the other structure was published by another group during this project.
Highly collimated, high-velocity streams of hot plasma, the jets, are observed as a general phenomenon in a variety of astrophysical objects spanning a wide range of sizes and energy outputs. Known jet sources include protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model: the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from the disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity of both the physical processes believed to be responsible for jet launching and the spatial configuration of the physical components of the jet source, the enigma of jet formation is not yet completely understood. On the theoretical side, there has been substantial advancement during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have the instruments at hand to spatially resolve the very origin of the jet. It can be expected that the next years will also yield substantial improvement on both tracks of astrophysical research.
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiation-transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
In a classical context, synchronization means the adjustment of the rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported his observation of the synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, this means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Although it is probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. Van der Pol systematically, theoretically and experimentally, studied the synchronization of triode generators. Since then the theory has been well developed and has found many applications. Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by only one frequency. However, as is shown in this habilitation work, one can extend the notion of phase to systems of this class as well and observe their synchronization, i.e., an agreement of their (still irregular!) rhythms: due to very weak interaction, relations between the phases and average frequencies appear. This effect, called phase synchronization, was later confirmed in laboratory experiments by other scientific groups. The understanding of the synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal weak interaction between systems if we cannot influence them, but can only passively observe them by measuring some signals. This situation is very often encountered in biology, where synchronization phenomena appear on every level, from cells to macroscopic physiological systems, in normal states as well as in severe pathologies.
With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms; the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
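Detecting phase synchronization from passively observed signals typically starts from instantaneous phases obtained via the Hilbert transform; the sketch below computes a standard n:m synchronization index (an illustrative implementation, not the authors' exact algorithm).

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Unwrapped phase of a scalar signal via its analytic
    representation (Hilbert transform)."""
    return np.unwrap(np.angle(hilbert(x - np.mean(x))))

def phase_sync_index(x, y, n=1, m=1):
    """Synchronization index |<exp(i(n*phi_x - m*phi_y))>|:
    close to 1 for n:m phase locking, near 0 for independent rhythms."""
    dphi = n * instantaneous_phase(x) - m * instantaneous_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

# two signals sharing a rhythm lock almost perfectly;
# an unrelated pair does not
t = np.linspace(0, 100, 5000)
x = np.sin(2.0 * t)
y = np.sin(2.0 * t + 0.7)   # same frequency, shifted phase
z = np.sin(3.1 * t + 1.0)   # different frequency
```

The index is computed from measured signals alone, which is what makes this approach suitable for purely observational biological data.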
One of the classical ways to describe the dynamics of nonlinear systems is to analyze their Fourier spectra. For periodic and quasiperiodic processes the Fourier spectrum consists purely of discrete delta-functions. In contrast, the spectrum of a chaotic motion is marked by the presence of a continuous component. In this work, we describe a peculiar state, neither regular nor completely chaotic, with a so-called singular-continuous power spectrum. Our investigations concern various cases from very different fields in which singular continuous (fractal) spectra are encountered. The examples include both physical processes which can be reduced to iterated discrete mappings or even symbolic sequences, and processes whose description is based on ordinary or partial differential equations.
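A classic symbolic sequence in this context is the Thue-Morse sequence, whose power spectrum is known to contain a singular continuous component; the short sketch below generates a finite sample and its periodogram (illustrative code, not the thesis's analysis).

```python
import numpy as np

def thue_morse(n):
    """First 2**n symbols (+/-1) of the Thue-Morse sequence, built
    by the substitution s -> s, -s."""
    s = np.array([1])
    for _ in range(n):
        s = np.concatenate([s, -s])
    return s

def power_spectrum(x):
    """Normalized periodogram |FFT|^2 / N of a finite sample."""
    X = np.fft.rfft(x)
    return np.abs(X) ** 2 / len(x)

P = power_spectrum(thue_morse(12))   # 4096-symbol sample
```

On a finite sample the singular continuous character shows up as a self-similar pattern of peaks that sharpens, but never converges to isolated delta functions, as the sample length grows.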
Line driven winds are accelerated by the momentum transfer from photons to a plasma, by absorption and scattering in numerous spectral lines. Line driving is most efficient for ultraviolet radiation, and at plasma temperatures from 10^4 K to 10^5 K. Astronomical objects which show line driven winds include stars of spectral type O, B, and A, Wolf-Rayet stars, and accretion disks over a wide range of scales, from disks in young stellar objects and cataclysmic variables to quasar disks. It is not yet possible to solve the full wind problem numerically, and treat the combined hydrodynamics, radiative transfer, and statistical equilibrium of these flows. The emphasis in the present writing is on wind hydrodynamics, with severe simplifications in the other two areas. I consider three topics in some detail, for reasons of personal involvement. 1. Wind instability, as caused by Doppler de-shadowing of gas parcels. The instability causes the wind gas to be compressed into dense shells enclosed by strong shocks. Fast clouds occur in the space between shells, and collide with the latter. This leads to X-ray flashes which may explain the observed X-ray emission from hot stars. 2. Wind runaway, as caused by a new type of radiative waves. The runaway may explain why observed line driven winds adopt fast, critical solutions instead of shallow (or breeze) solutions. Under certain conditions the wind settles on overloaded solutions, which show a broad deceleration region and kinks in their velocity law. 3. Magnetized winds, as launched from accretion disks around stars or in active galactic nuclei. Line driving is assisted by centrifugal forces along co-rotating poloidal magnetic field lines, and by Lorentz forces due to toroidal field gradients. A vortex sheet starting at the inner disk rim can lead to highly enhanced mass loss rates.
The behaviour of an adhering cell is strongly influenced by the chemical, topographical and mechanical properties of the surface it attaches to. During recent years, it has been found experimentally that adhering cells actively sense the elastic properties of their environment by pulling on it through numerous sites of adhesion. The resulting build-up of force at sites of adhesion depends on the elastic properties of the environment and is converted into corresponding biochemical signals, which can trigger cellular programmes like growth, differentiation, apoptosis, and migration. In general, force is an important regulator of biological systems, for example in hearing and touch, in wound healing, and in rolling adhesion of leukocytes on vessel walls. In the habilitation thesis by Ulrich Schwarz, several theoretical projects are presented which address the role of forces and elasticity in cell adhesion. (1) A new method has been developed for calculating cellular forces exerted at sites of focal adhesion on micro-patterned elastic substrates. The main result is that cell-matrix contacts function as mechanosensors, converting internal force into protein aggregation. (2) A one-step master equation for the stochastic dynamics of adhesion clusters as a function of cluster size, rebinding rate and force has been solved both analytically and numerically. Moreover this model has been applied to the regulation of cell-matrix contacts, to dynamic force spectroscopy, and to rolling adhesion. (3) Using linear elasticity theory and the concept of force dipoles, a model has been introduced and solved which predicts the positioning and orientation of mechanically active cells in soft material, in good agreement with experimental observations for fibroblasts on elastic substrates and in collagen gels.
Concerns have been raised that anthropogenic climate change could lead to large-scale singular climate events, i.e., abrupt nonlinear climate changes with repercussions on regional to global scales. One central goal of this thesis is the development of models of two representative components of the climate system that could exhibit singular behavior: the Atlantic thermohaline circulation (THC) and the Indian monsoon. These models are conceived so as to fulfill the main requirements of integrated assessment modeling, i.e., reliability, computational efficiency, transparency and flexibility. The model of the THC is an interhemispheric four-box model calibrated against data generated with a coupled climate model of intermediate complexity. It is designed to be driven by global mean temperature change which is translated into regional fluxes of heat and freshwater through a linear down-scaling procedure. Results of a large number of transient climate change simulations indicate that the reduced-form THC model is able to emulate key features of the behavior of comprehensive climate models such as the sensitivity of the THC to the amount, regional distribution and rate of change in the heat and freshwater fluxes. The Indian monsoon is described by a novel one-dimensional box model of the tropical atmosphere. It includes representations of the radiative and surface fluxes, the hydrological cycle and surface hydrology. Despite its high degree of idealization, the model satisfactorily captures relevant aspects of the observed monsoon dynamics, such as the annual course of precipitation and the onset and withdrawal of the summer monsoon. Also, the model exhibits the sensitivity to changes in greenhouse gas and sulfate aerosol concentrations that are known from comprehensive models. A simplified version of the monsoon model is employed for the identification of changes in the qualitative system behavior against changes in boundary conditions. 
The most notable result is that under summer conditions a saddle-node bifurcation occurs at critical values of the planetary albedo or insolation. Furthermore, the system exhibits two stable equilibria: besides the wet summer monsoon, a stable state exists which is characterized by a weak hydrological cycle. These results are remarkable insofar as they indicate that anthropogenic perturbations of the planetary albedo, such as sulfur emissions and/or land-use changes, could destabilize the Indian summer monsoon. The reduced-form THC model is employed in an exemplary integrated assessment application. Drawing on the conceptual and methodological framework of the tolerable windows approach, emissions corridors (i.e., admissible ranges of CO2 emissions) are derived that limit the risk of a THC collapse while considering expectations about the socio-economically acceptable pace of emissions reductions. The results indicate, for example, a large dependency of the width of the emissions corridor on the climate and hydrological sensitivity: for low values of climate and/or hydrological sensitivity, the corridor boundaries are far from being transgressed by any plausible emissions scenario for the 21st century. In contrast, for high values of both quantities, low non-intervention scenarios leave the corridor already in the early decades of the 21st century. This implies that if the risk of a THC collapse is to be kept low, business-as-usual paths would need to be abandoned within the next two decades. All in all, this thesis highlights the value of reduced-form modeling by presenting a number of applications of this class of models, ranging from sensitivity and bifurcation analysis to integrated assessment. The results achieved and conclusions drawn provide a useful contribution to the scientific and policy debate about the consequences of anthropogenic climate change and the long-term goals of climate protection.
--- Note: The author is the recipient of the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral thesis of the year 2003/2004.
Western Anatolia, which represents the eastward lateral continuation of the Aegean domain, is composed of several tectono-metamorphic units showing occurrences of high-pressure/low-temperature (HP-LT) rocks. While some of these metamorphic rocks are vestiges of the Pan-African or Cimmerian orogenies, others are the result of the more recent Alpine orogeny. In southwest Turkey, the Menderes Massif occupies an extensive area tectonically overlain by nappe units of the Izmir-Ankara Suture Zone in the north, the Afyon Zone in the east, and the Lycian Nappes in the south. In the present study, investigations of the metasediments of the Lycian Nappes and the underlying southern Menderes Massif revealed widespread occurrences of Fe-Mg-carpholite-bearing rocks. This discovery provides the first indication that both nappe complexes recorded HP-LT metamorphic conditions during the Alpine orogeny. P-T conditions for the HP metamorphic peak are about 10-12 kbar/400°C in the Lycian Nappes, and 12-14 kbar/470-500°C in the southern Menderes Massif, documenting burial of at least 30 km during subduction and nappe stacking. Ductile deformation analysis in concert with multi-equilibrium thermobarometric calculations reveals that the metasediments of the Lycian Nappes recorded distinct exhumation patterns after a common HP metamorphic peak. The rocks located far from the contact separating the Lycian Nappes and the Menderes Massif, where HP parageneses are well preserved, retained a single HP cooling path associated with top-to-the-NNE shearing related to the Akçakaya shear zone. This zone of strain localization is an intra-nappe contact that was active in the early stages of exhumation of the HP rocks, within the stability field of Fe-Mg-carpholite. The rocks located close to the contact with the Menderes Massif, where the HP parageneses are completely retrogressed into chlorite and mica, recorded warmer exhumation paths associated with intense top-to-the-E shearing.
This deformation occurred after the southward emplacement of the Lycian Nappes, and is contemporaneous with the reactivation of the 'Lycian Nappes-Menderes Massif' contact as a major shear zone (the Gerit shear zone) that allowed late exhumation of HP parageneses under warmer conditions. The HP rocks of the southern Menderes Massif recorded a simple isothermal decompression at about 450°C during exhumation, and the deformation during the HP event and its exhumation is characterized by severe N-S to NE-SW stretching. The age of the HP metamorphism recorded in the Lycian Nappes is assumed to range between the latest Cretaceous (the age of the youngest sediments in the Lycian allochthonous unit) and the Eocene (the age of the Cycladic Blueschists); a probable Palaeocene age is suggested. The age of the HP metamorphism that affected the cover series of the Menderes Massif is constrained between the Middle Palaeocene (the age of the uppermost metaolistostrome of the Menderes 'cover') and the Middle Eocene (the age of the HP metamorphism in the Dilek-Selçuk region, which belongs to the Cycladic Complex). Apatite fission-track data for the rocks on both sides of the 'Lycian Nappes/Menderes Massif' contact suggest that these rocks were very close to the paleo-Earth surface in Late Oligocene-Early Miocene times. This study of the Lycian Nappes and the Menderes Massif establishes the existence of an extensive Alpine HP metamorphic belt in southwest Turkey. The HP rocks were involved in the accretionary complex related to the northward-verging subduction of the Neo-Tethys Ocean, Late Cretaceous obduction and subsequent Early Tertiary continental collision of the passive margin (Anatolide-Tauride block) beneath the active margin of the northern plate (Sakarya micro-continent). During the Eocene, the accretionary complex was made of three stacked HP units.
The lowermost corresponds to the imbricated 'core' and HP 'cover' of the Menderes Massif, the intermediate one consists of the Cycladic Blueschist Complex (Dilek-Selçuk unit), and the uppermost unit is made of the HP Lycian Nappes. Whereas the basement units of the Aegean and Anatolian regions underwent different pre-Mesozoic tectonic histories, they were probably juxtaposed by the end of the Paleozoic and underwent a common Mesozoic history. Subsequently, the basements and their cover, as well as the Cycladic Blueschists and the Lycian Nappes, were involved in similar evolving accretionary complexes during Eocene and Oligocene times.
A polymer is a large molecule made up of many elementary chemical units joined together by covalent bonds (for example, polyethylene). Polyelectrolytes (PELs) are polymer chains containing a certain amount of ionizable monomers. Owing to their specific properties, PELs are of great importance in molecular and cell biology as well as in technology. Compared to neutral polymers, the theory of PELs is less well understood; this is particularly true for PELs in poor solvents. A poor solvent environment causes an effective attraction between monomers, so for PELs in a poor solvent there is a competition between attraction and repulsion. Strong, or quenched, PELs are completely dissociated at any accessible pH; the position of their charges along the chain is fixed by chemical synthesis. In weak, or annealed, PELs, by contrast, the dissociation of charges depends on the solution pH. For the first time, the simulation results give direct evidence that in rather poor solvents an annealed PEL indeed undergoes a first-order phase transition when the chemical potential (solution pH) reaches a certain value. The discontinuous transition occurs between a weakly charged compact globular structure and a strongly charged stretched configuration. In moderately poor solvents, theory predicts that the globule becomes unstable with respect to the formation of pearl-necklace structures; the results show that pearl-necklaces indeed exist in annealed PELs. Furthermore, as predicted by theory, the simulations show that annealed PELs display a sharp transition from a highly charged stretched state to a weakly charged globule at a critical salt concentration.
The dissertation examines aspects of the interlingual lexical processes of word recognition and word retrieval in Hungarian-German bilinguals learning English as a foreign language, with particular respect to the role of cognates. The purpose of the study is to describe the process of lexical activation in a polyglot system and to model the mental lexicons and the ways entries in the lexicons are connected and activated (e.g. activation through direct word association or through concept mediation). Three dependent variables are studied in quantitative and qualitative analyses of empirical data taken from experiments: the rate of accurate responses, response latencies, and phonological interference. The results of the experiments are interpreted in the framework of a multiple language network model.
This thesis deals with the characterization of seismicity on the basis of earthquake catalogues. New methods of data analysis are developed that are intended to reveal whether the seismic dynamics is governed by a stochastic or a deterministic process, and what this implies for the predictability of strong earthquakes. It is shown that seismically active regions are frequently characterized by nonlinear determinism, which at least opens the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursor phenomenon of strong earthquakes. A new method is presented that enables a systematic spatio-temporal mapping of phases of seismic quiescence. Their statistical significance is determined with the help of the concept of surrogate data. As a result, clear correlations between periods of seismic quiescence and strong earthquakes are obtained. Nevertheless, the significance is not high enough to permit a prediction in the sense of a statement about the location, time, and magnitude of an expected main shock.
Recent high-throughput technologies enable the acquisition of a variety of complementary data and imply regulatory networks on the systems biology level. A common approach to the reconstruction of such networks is cluster analysis, which is based on a similarity measure. We use the information-theoretic concept of mutual information, originally defined for discrete data, as a measure of similarity, and propose an extension to a commonly applied algorithm for its calculation from continuous biological data. We compare our approach to previously existing algorithms. We develop a performance-optimised software package for the application of mutual information to large-scale datasets. Furthermore, we design and implement a web-based service for the analysis of integrated data measured with different technologies. Application to biological data reveals biologically relevant groupings, and the reconstructed signalling networks show agreement with physiological findings.
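As a minimal illustration of mutual information as a similarity measure for continuous data, the following sketch uses a simple two-dimensional histogram (binning) estimator; it is not the extended algorithm developed in the thesis, and the bin count and sample sizes are illustrative choices:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in bits between two continuous variables
    with a simple 2D histogram (binning) estimator."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
noise = rng.normal(size=5000)
# a strongly dependent pair shares far more information
# than an independent pair
mi_dep = mutual_information(a, a + 0.1 * noise)
mi_ind = mutual_information(a, noise)
```

Binning introduces a small positive bias for independent variables, which is one reason the thesis-style extensions for continuous data are needed in practice.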
Combining the magnetic properties of a given material with the tremendous advantages of colloids can greatly increase the advantages of both systems. This thesis deals with the field of magnetic nanotechnology: the design and characterization of new magnetic colloids with fascinating properties compared with the bulk materials is presented. Ferrofluids are stable aqueous or organic dispersions of superparamagnetic nanoparticles which respond to the application of an external magnetic field but lose their magnetization in its absence. In the first part of this thesis, a three-step synthesis for the fabrication of a novel water-based ferrofluid is presented. The encapsulation of high amounts of magnetite into polystyrene particles can be achieved efficiently by a new route involving two miniemulsion processes. The ferrofluids consist of novel magnetite-polystyrene nanoparticles dispersed in water, obtained by a three-step process comprising the coprecipitation of magnetite, its hydrophobization and surfactant coating to enable redispersion in water, and the subsequent encapsulation into polystyrene by miniemulsion polymerization. It is desirable to exploit potential thermodynamic control in the design of nanoparticles, following the concept of "nanoreactors" in which the essential ingredients for the formation of the nanoparticles are present from the beginning. The formulation and application of polymer particles and hybrid particles composed of polymeric and magnetic material is of high interest for biomedical applications. Ferrofluids can, for instance, be used in medicine for cancer therapy and magnetic resonance imaging. Superparamagnetic or paramagnetic colloids containing iron or gadolinium are also used as magnetic resonance imaging contrast agents, for example as an important tool in the diagnosis of cancer, since they enhance the relaxation of the water in the neighbouring zones. 
New nanostructured composites obtained by the thermal decomposition of iron pentacarbonyl in the monomer phase, and the subsequent formation of paramagnetic nanocomposites by miniemulsion polymerization, are discussed in the second part of this thesis. A two-step process was used to obtain the confined paramagnetic nanocomposites. In the first step, the thermal decomposition of iron pentacarbonyl was carried out in the monomer phase using oleic acid as a stabilizer. In the second step, this iron-containing monomer dispersion was subjected to miniemulsion polymerization. The addition of lanthanide complexes to ester-containing monomers such as butyl acrylate, and subsequent polymerization leading to the spontaneous formation of highly organized layered nanocomposites, is presented in the final part of this thesis. By a one-step miniemulsion process, the formation of a lamellar structure within the polymer nanoparticles is achieved. Magnetization and NMR relaxation measurements have shown these new layered nanocomposites to be well suited for application as contrast agents in magnetic resonance imaging.
We theoretically discuss the interaction of neutral particles (atoms, molecules) with surfaces in the regime where it is mediated by the electromagnetic field. A thorough characterization of the field at sub-wavelength distances is worked out, including energy density spectra and coherence functions. The results are applied to typical situations in integrated atom optics, where ultracold atoms are coupled to a thermal surface, and to single molecule probes in near field optics, where sub-wavelength resolution can be achieved.
This thesis presents new approaches to evolutions of binary black hole systems in numerical relativity. We analyze and compare evolutions from various physically motivated initial data sets, in particular presenting the first evolutions of Thin Sandwich data generated by the Meudon group. For the first time, two different quasi-circular orbit initial data sequences are compared through fully 3D numerical evolutions: Puncture data and Thin Sandwich data (TSD) based on a helical Killing vector ansatz. The two sets are compared in terms of the physical quantities that can be measured from the numerical data and in terms of their evolutionary behavior. The evolutions demonstrate that for the latter, "Meudon" datasets, the black holes do in fact orbit for a longer time before they merge than for Puncture data from the same separation, indicating that they are potentially better estimates of quasi-circular orbit parameters. The merger times resulting from the numerical simulations are consistent with independent post-Newtonian estimates that the final plunge phase of a black hole inspiral should take 60% of an orbit.
This MA thesis examines novels by Native American authors of the 20th century with regard to their representation of conflicts between the indigenous population of North America and the dominant Christian religion of mainstream society. Several major points can be followed throughout the century, which have been presented repeatedly and discussed from various perspectives. Historical conflicts of colonization and Christianization, as well as the perpetual question of Native American Christians -- 'How can you go to a church that killed so many Indians?' [Alexie, Reservation Blues] -- are debated in these novels and analyzed in this paper. Furthermore, I have tried to position and classify the works according to their representation of these problems within literary history. Following Charles Larson's chronological and thematic examination of American Indian Fiction, the categories rejection, (syncretic) adaptation, and postmodern-ironic revision are introduced to describe the various forms of representation. On the basis of five main examples, we can observe an evolution of contemporary Native American literature, which has liberated itself from the narrow definition of the 1960s and 1970s in favor of a broader and more varied approach. In so doing, and by means of intercultural and intertextual referencing, postmodern irony, and a new Indian self-confidence, it has also taken a new position towards the religion of the former colonizer.
This work comprises three treatises which share a common concern: a stochastic theory of the Lyapunov exponents. With the help of this theory, universal scaling laws that appear in coupled chaotic and disordered systems are investigated. First, two continuous-time stochastic models for weakly coupled chaotic systems are introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck formalism, scaling relations are derived and confirmed by results of numerical simulations. Next, coupling sensitivity is shown to exist for coupled disordered chains, where it appears as a singular increase of the localization length. Numerical findings for coupled Anderson models are confirmed by analytic results for coupled continuous-space Schrödinger equations. The resulting scaling relation of the localization length resembles the scaling of the Lyapunov exponent of coupled chaotic systems. Finally, the statistics of the exponential growth rate of the linear oscillator with parametric noise are studied. It is shown that the distribution of the finite-time Lyapunov exponent deviates from a Gaussian one. By means of the generalized Lyapunov exponents, the parameter range is determined where the non-Gaussian part of the distribution is significant and multiscaling becomes essential.
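The finite-time Lyapunov exponent of a noisy linear oscillator can be sampled with a short simulation. The sketch below is a generic Euler-Maruyama integration of x'' + (1 + sigma*xi(t)) x = 0 with white parametric noise, not the specific model or parameters of the thesis; noise strength, integration time, and ensemble size are illustrative assumptions:

```python
import numpy as np

def finite_time_exponents(n_real=50, T=200.0, dt=0.01, sigma=0.5, seed=1):
    """Finite-time growth rates lambda_T = ln|z(T)| / T for the linear
    oscillator x'' + (1 + sigma*xi(t)) x = 0 with parametric white
    noise xi, integrated by Euler-Maruyama for n_real realizations;
    z = (x, v) starts at unit norm."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.ones(n_real)
    v = np.zeros(n_real)
    log_growth = np.zeros(n_real)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_real)
        x, v = x + v * dt, v - x * dt - sigma * x * dW
        norm = np.hypot(x, v)        # renormalize to avoid overflow,
        log_growth += np.log(norm)   # accumulating the log norm
        x, v = x / norm, v / norm
    return log_growth / T

samples = finite_time_exponents()
# the mean growth rate is positive (noise-induced instability); the
# spread of the finite-time values around it is what such statistics
# of the exponent characterize
```

The histogram of `samples` is exactly the kind of finite-time distribution whose non-Gaussian tails the thesis quantifies via generalized Lyapunov exponents.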
Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that the star formation process is primarily controlled by the interplay between gravity and magnetostatic support, modulated by neutral-ion drift. Recently, however, both observational and numerical work has begun to suggest that supersonic interstellar turbulence rather than magnetic fields controls star formation. This review begins with a historical overview of the successes and problems of both the classical dynamical theory of star formation, and the standard theory of magnetostatic support from both observational and theoretical perspectives. We then present the outline of a new paradigm of star formation based on the interplay between supersonic turbulence and self-gravity. Supersonic turbulence can provide support against gravitational collapse on global scales, while at the same time it produces localized density enhancements that allow for collapse on small scales. The efficiency and timescale of stellar birth in Galactic gas clouds strongly depend on the properties of the interstellar turbulent velocity field, with slow, inefficient, isolated star formation being a hallmark of turbulent support, and fast, efficient, clustered star formation occurring in its absence. After discussing in detail various theoretical aspects of supersonic turbulence in compressible self-gravitating gaseous media relevant for star forming interstellar clouds, we explore the consequences of the new theory for both local star formation and galactic scale star formation. The theory predicts that individual star-forming cores are likely not quasi-static objects, but dynamically evolving. Accretion onto these objects will vary with time and depend on the properties of the surrounding turbulent flow. This has important consequences for the resulting stellar mass function. 
Star formation on the scale of galaxies as a whole is expected to be controlled by the balance between gravity and turbulence, just like star formation on the scale of individual interstellar gas clouds, but may be modulated by additional effects like cooling and differential rotation. The dominant mechanism for driving interstellar turbulence in star-forming regions of galactic disks appears to be supernova explosions. In the outer disk of our Milky Way or in low-surface-brightness galaxies, the coupling of rotation to the gas through magnetic fields or gravity may become important.
During the past several decades, polymer materials have become widely used as components of medical devices and implants such as hemodialysers, bioartificial organs, and devices for vascular and reconstructive surgery. Most of these devices cannot avoid blood contact in their use. When polymer materials come into contact with blood they can cause various undesired host responses such as thrombosis, inflammatory reactions, and infections. The materials must therefore be hemocompatible in order to minimize these undesired body responses. The earliest and one of the main problems in the use of blood-contacting biomaterials is surface-induced thrombosis. The sequence of thrombus formation on artificial surfaces has been well established. The first event that occurs after exposure of biomaterials to blood is the adsorption of blood proteins. Surface physicochemical properties of the materials, such as wettability, greatly influence the amount and conformational changes of adsorbed proteins. In turn, the type, amount, and conformational state of the adsorbed protein layer determine whether platelets will adhere to and become activated on the artificial surface and thus complete the thrombus formation. The adsorption of fibrinogen (FNG), which is present in plasma, has been shown to be closely related to surface-induced thrombosis through its participation in all processes of thrombus formation, such as fibrin formation, platelet adhesion, and aggregation. Therefore, studying FNG adsorption to artificial surfaces could contribute to a better understanding of the mechanisms of platelet adhesion and activation and thus to controlling surface-induced thrombosis. Endothelization of polymer surfaces is one strategy for improving the hemocompatibility of materials, and is believed to be the most promising route to truly blood-compatible materials. 
Since under physiological conditions proteins such as FNG and fibronectin (FN) form the usual extracellular matrix (ECM) for endothelial cell (EC) adhesion, precoating of materials with these proteins has been shown to improve EC adhesion and growth in vitro. ECM proteins play an essential role not only as a structural support for cell adhesion and spreading; they are also an important factor in transmitting signals for different cell functions. The ability of cells to remodel plasma proteins such as FNG and FN into matrix-like structures, together with classical cell parameters such as the actin cytoskeleton and focal adhesion formation, can be used as a criterion for proper cell functioning. The establishment and maintenance of a delicate balance between cell-cell and cell-substrate contacts is another important factor for better EC colonization of implants. The functionality of the newly established endothelium in producing antithrombotic substances should always be considered when EC seeding is used to improve the hemocompatibility of polymer materials. Controlling polymer surface properties such as wettability represents a versatile approach to manipulating the above cellular responses and can therefore be used in biomaterials and tissue engineering applications to produce more hemocompatible materials.
This thesis describes the development and application of the impacts module of the ICLIPS model, a global integrated assessment model of climate change. The presentation of the technical aspects of this model component is preceded by a discussion of the sociopolitical context for model-based integrated assessments, which defines important requirements for the specification of the model. Integrated assessment of climate change comprises a broad range of scientific efforts to support decision-making about objectives and measures for climate policy, whereby many different approaches have been followed to provide policy-relevant information about climate impacts. Major challenges in this context are the large diversity of the relevant spatial and temporal scales, the multifactorial causation of many climate impacts, considerable scientific uncertainties, and the ambiguity associated with unavoidable normative evaluations. A hierarchical framework is presented for structuring climate impact assessments that reflects the evolution of their practice and of the underlying theory. Integrated assessment models of climate change (IAMs) are scientific tools that contain simplified representations of the relevant components of the coupled society-climate system. The major decision-analytical frameworks for IAMs are evaluated according to their ability to address important aspects of the pertinent social decision problem. The guardrail approach is presented as an 'inverse' framework for climate change decision support, which aims to identify the whole set of policy strategies that are compatible with a set of normatively specified constraints ('guardrails'). This approach combines, to a certain degree, the scientific rigour and objectivity typical of predictive approaches with the ability to consider virtually all decision options that is at the core of optimization approaches. The ICLIPS model is described as the first IAM that implements the guardrail approach. 
The representation of climate impacts is a key concern in any IAM. A review of existing IAMs reveals large differences in the coverage of impact sectors, in the choice of the impact numeraire(s), in the consideration of non-climatic developments, including purposeful adaptation, in the handling of uncertainty, and in the inclusion of singular events. IAMs based on an inverse approach impose specific requirements on the representation of climate impacts. This representation needs to combine a level of detail and reliability that is sufficient for the specification of impact guardrails with the conciseness and efficiency that allow for an exploration of the complete domain of plausible climate protection strategies. Large-scale singular events can often be represented by dynamic reduced-form models. This approach, however, is less appropriate for regular impacts, where the determination of policy-relevant results generally needs to consider the heterogeneity of climatic, environmental, and socioeconomic factors at the local or regional scale. Climate impact response functions (CIRFs) are identified as the most suitable reduced-form representation of regular climate impacts in the ICLIPS model. A CIRF depicts the aggregated response of a climate-sensitive system or sector, as simulated by a spatially explicit sectoral impact model, for a representative subset of plausible futures. In the CIRFs presented here, global mean temperature and atmospheric CO2 concentration are used as predictors for global and regional impacts on natural vegetation, agricultural crop production, and water availability. Application of a pattern scaling technique makes it possible to consider the regional and seasonal patterns in the climate anomalies simulated by several general circulation models while ensuring the efficiency of the dynamic model components. 
Efforts to provide quantitative estimates of future climate impacts generally face a trade-off between the relevance of an indicator for stakeholders and the exactness with which it can be determined. A number of non-monetary aggregated impact indicators for the CIRFs are presented, which aim to strike a balance between these two conflicting goals while taking into account additional constraints of the ICLIPS modelling framework. Various types of impact diagrams are used for the visualization of CIRFs, each of which provides a different perspective on the impact result space. The sheer number of CIRFs computed for the ICLIPS model precludes their comprehensive presentation in this thesis. Selected results referring to changes in the distribution of biomes in different biogeographical regions, in the agricultural potential of various countries, and in the water availability in selected major catchments are discussed. The full set of CIRFs is accessible via the ICLIPS Impacts Tool, a graphical user interface that provides convenient access to more than 100,000 impact diagrams developed for the ICLIPS model. The technical aspects of the software are described, as well as the accompanying database of CIRFs. The most important application of CIRFs is in 'inverse' mode, where they are used to translate impact guardrails into simultaneous constraints for variables of the optimizing ICLIPS climate-economy model. This translation is facilitated by algorithms for the computation of reachable climate domains and for the parameterized approximation of admissible climate windows derived from CIRFs. The comprehensive set of CIRFs, together with these algorithms, enables the ICLIPS model to flexibly explore sets of climate policy strategies that explicitly comply with impact guardrails specified in biophysical units. This feature is not found in any other intertemporally optimizing IAM. 
A guardrail analysis with the integrated ICLIPS model is described that applies selected CIRFs for ecosystem changes. So-called 'necessary carbon emission corridors' are determined for a default choice of normative constraints that limit global vegetation impacts as well as regional mitigation costs, and for systematic variations of these constraints. A brief discussion of recent developments in integrated assessment modelling of climate change connects the work presented here with related efforts.
In this thesis, the gravitational lensing effect is used to explore a number of cosmological topics. We determine the time delay in the gravitationally lensed quasar system HE1104-1805 using different techniques, obtaining a time delay Delta_t(A-B) = -310 +- 20 days (2 sigma errors) between the two components. We also study the double quasar Q0957+561 during a three-year monitoring campaign. The fluctuations we find in the difference light curves are completely consistent with noise, and no microlensing is needed to explain them. Microlensing is also studied in the quadruple quasar Q2237+0305 during the GLITP collaboration (Oct. 1999-Feb. 2000). We use the absence of a strong microlensing signal to obtain an upper limit of v = 600 km/s for the effective transverse velocity of the lens galaxy (considering microlenses of 0.1 solar masses). The distribution of dark matter in galaxy clusters is studied in the second part of the thesis. In the cluster of galaxies Cl0024+1654 we obtain a mass-to-light ratio of M/L = 200 M_sun/L_sun (within a radius of 3 arcminutes). In the galaxy cluster RBS380 we find an X-ray luminosity of L = 2*10^(44) erg/s, relatively low for a massive cluster, but a rich distribution of galaxies in the optical band.
The development of fast and reliable biochemical tools for on-site screening in environmental analysis was the main target of the present work. Because of various hazardous effects, such as endocrine disruption and toxicity, phenolic compounds are key analytes in environmental analysis and were therefore chosen as model analytes. Three different methods were developed. For the enzymatic detection of phenols in environmental samples, an enzyme-based biosensor was developed. In contrast to reported work using tyrosinase or peroxidases, we developed a biosensor based on glucose dehydrogenase (GDH) as the biorecognition element. This biosensor was intended for application in a laboratory flow system as well as in a portable device for on-site measurements. This enzymatic detection is applicable only to a limited number of phenols because of the substrate specificity of the enzyme. For other relevant compounds with a phenolic structure (i.e. nitrophenols, alkylphenols, and alkylphenol ethoxylates), immunological methods had to be developed. The electrochemical GDH biosensor was used as the label detector in these immunoassays. Two heterogeneous immunoassays were developed in which βGal was used as the label, and an electrochemical method for the determination of the marker enzyme activity was devised. The separation step was realized with protein A/G columns (laboratory flow system) or by direct immobilization of the antibodies in small disposable capillaries (on-site analysis). All methods were targeted at the simultaneous analysis of small numbers of samples.
Chemical transformations and hydraulic processes in soil and groundwater often lead to an apparent retention of nitrate in lowland catchments. Models are needed to evaluate the interaction of these processes in space and time. The objectives of this study are i) to develop a specific modelling approach by combining selected modelling tools that simulate N-transport and turnover in the soils and groundwater of lowland catchments, and ii) to study the interactions between catchment properties and nitrogen transport. Special attention was paid to potential N-loads to surface waters. The modelling approach combines several submodels for water flow and solute transport in soil and groundwater: the soil-water and nitrogen model mRISK-N, the groundwater flow model MODFLOW, and the solute transport model RT3D. In order to investigate the interactions between N-transport and catchment characteristics, the distribution and availability of reaction partners have to be taken into account. Therefore, a special reaction module was developed which simulates various chemical processes in groundwater, such as the degradation of organic matter by oxygen, nitrate, or sulphate, and pyrite oxidation by oxygen and nitrate. The model approach is applied in different simulation studies, each focussing on specific submodels. All simulation studies are based on field data from the Schaugraben catchment, a Pleistocene catchment of approximately 25 km² close to Osterburg (Altmark) in the north of Saxony-Anhalt. 
The following modelling studies have been carried out: i) evaluation of the soil-water and nitrogen model based on lysimeter data, ii) modelling of a field-scale tracer experiment on nitrate transport and turnover in the groundwater as a first application of the reaction module, iii) evaluation of interactions between hydraulic and chemical aquifer properties in a two-dimensional groundwater transect, iv) modelling of distributed groundwater recharge and soil nitrogen leaching in the study area, to be used as input data for subsequent groundwater simulations, and v) study of the groundwater nitrate distribution and nitrate breakthrough to the surface water system in the Schaugraben catchment and a subcatchment, using three-dimensional modelling of reactive groundwater transport. The various model applications prove the model to be capable of simulating interactions between transport, turnover, and hydraulic and chemical catchment properties. The distribution of nitrate in the sediment and the resulting loads to surface waters are strongly affected by the amount of reactive substances and by the residence time within the aquifer. In the Schaugraben catchment simulations, it is found that a period of 70 years is needed to raise the average seepage concentrations of nitrate to a level corresponding to the given input situation if no reactions are considered. Under reactive transport conditions, nitrate concentrations are reduced effectively. Simulation results show that groundwater exfiltration does not contribute considerably to the nitrate pollution of surface waters, as most nitrate entering the soils and groundwater is lost by denitrification. Additional sources, such as direct inputs or tile drains, have to be taken into account to explain surface water loads. The prognostic value of the models for the study site is limited by uncertainties in the input data and in the estimation of model parameters. 
Nevertheless, the modelling approach is a useful aid for identifying source and sink areas of nitrate pollution as well as for investigating the system response to management measures or land-use changes with scenario simulations. The modelling approach also assists in the interpretation of observed data, as it allows local observations to be integrated into a spatial and temporal framework.
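The strong dependence of exfiltrating nitrate concentrations on aquifer residence time can be sketched with a minimal first-order denitrification model; the rate constant, input concentration, and travel times below are illustrative assumptions, not values from the study:

```python
import math

def nitrate_at_exfiltration(c_in, k, residence_time):
    """First-order denitrification along a groundwater flow path:
    c(t) = c_in * exp(-k * t), with k in 1/yr and t in yr."""
    return c_in * math.exp(-k * residence_time)

# illustrative values: 50 mg/l nitrate input, rate constant 0.1 per
# year, flow paths with travel times of 1, 10, and 30 years
profile = [nitrate_at_exfiltration(50.0, 0.1, t) for t in (1, 10, 30)]
```

Even this toy model reproduces the qualitative finding above: long-residence-time flow paths deliver only a small fraction of the input nitrate to the surface water system.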
Nanostructured materials are materials having structural features on the scale of nanometres, i.e. 10^-9 m. These structural features can enhance the natural properties of the materials or induce additional properties, which are useful for present-day as well as future technologies. One way to synthesize nanostructured materials is to use templating techniques. The templating process involves the use of a certain "mould" or "scaffold" to generate the structure. The mould, called the template, can be a single molecule, an assembly of molecules, or a larger object which has its own structure. The product material is obtained by filling the space around the template with a "precursor", transforming the precursor into the desired material, and then removing the template. The precursor can be any chemical moiety that can easily be transformed into the desired material. Alternatively, the desired material is processed into very tiny bricks, or "nano building blocks" (NBB), and the product is obtained by arranging the NBB with the help of a scaffold. We synthesized porous metal oxide spheres, namely TiO2-M2O3 (titanium dioxide-M-oxide, with M = aluminium, gallium, or indium) and cerium oxide-zirconium oxide solid solutions. We used porous polymeric beads, normally employed for chromatographic purposes, as templates. For the synthesis of TiO2-M2O3 we used metal alkoxides as precursors. The pores of the beads were filled with the precursor, which was then reacted with water to transform it into an amorphous oxide network. The network was crystallized and the template removed by heat treatment at high temperatures. In a similar way we obtained porous spheres of CexZr1-xO2: for this, we synthesized CexZr1-xO2 nanoparticles and then used them in the templating process. 
Additionally, using the same nanoparticles, we synthesized a nanoporous powder via a self-assembly process between a block-copolymer scaffold and the nanoparticles. The morphological and physico-chemical properties of these materials were studied systematically using various analytical techniques. The TiO2-M2O3 materials were tested for the photocatalytic degradation of 2-chlorophenol, a poisonous pollutant, while the CexZr1-xO2 spheres were tested in the methanol steam reforming reaction to generate hydrogen, a fuel for future-generation power sources such as fuel cells. All the materials showed good catalytic performance.
In this work, an approach to paleoclimate reconstruction for tropical East Africa is presented. After a short summary of modern climate conditions in the tropics and the peculiarities of the East African climate, the potential of reconstructing climate from paleolake sediments is discussed. As demonstrated, the hydrologic sensitivity of high-elevation closed-basin lakes in the Central Kenya Rift makes them valuable recorders for establishing long-term climate records. Temporal fluctuations of the limnological characteristics preserved in the lake sediments are used to trace variations in the Quaternary climate history. Based on diatom analyses in radiocarbon- and 40Ar/39Ar-dated sediments, a chronology of paleoecologic fluctuations is developed for the Central Kenya Rift lakes Nakuru, Elmenteita and Naivasha. At least during the penultimate interglacial (around 140 to 60 kyr BP) and during the last interglacial (around 12 to 4 kyr BP), these lakes experienced several transgression-regression cycles at intervals of about 11,000 years. Additionally, a long-term trend of lake evolution is found, suggesting a general succession from deep freshwater lakes towards more saline waters during the last million years. Using ecologic transfer functions and a simple lake-balance model, the observed paleohydrologic fluctuations are linked to potential precipitation-evaporation changes in the lake basins. Although tectonic influences on the drainage pattern and the effect of varying seepage are also investigated, it can be shown that even a small increase in precipitation of about 30±10 % may have affected the hydrologic budget of the intra-rift lakes within the reconstructed range. The findings of this study help to assess the natural climate variability of East Africa. They furthermore reflect the sensitivity of the Central Kenya Rift lakes to fluctuations of large-scale climate parameters, such as solar radiation and sea-surface temperatures of the Indian Ocean.
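The lake-balance reasoning above can be sketched numerically. The following toy model, with invented basin parameters and a simple runoff coefficient, balances catchment runoff against net evaporation over the lake at steady state; it is not the thesis' actual model, but it illustrates why a roughly 30 % precipitation increase can strongly affect a closed-basin lake.

```python
def steady_lake_area(P, E, basin_area, runoff_coeff):
    """Steady-state lake area of a closed basin (same units as basin_area).

    Runoff generated on the catchment (runoff_coeff * P over the land area)
    balances the net evaporative loss (E - P) over the lake surface.
    P, E in mm/yr; requires E > P, as for a closed basin.
    """
    return runoff_coeff * P * basin_area / ((E - P) + runoff_coeff * P)

# illustrative numbers only (km^2, mm/yr); not taken from the study
base = steady_lake_area(P=700.0, E=1800.0, basin_area=2000.0, runoff_coeff=0.1)
wet = steady_lake_area(P=700.0 * 1.3, E=1800.0, basin_area=2000.0, runoff_coeff=0.1)
print(base, wet, wet / base)
```

With these placeholder values, the 30 % wetter climate expands the equilibrium lake area by more than half, showing the amplified response of closed-basin lakes to modest precipitation changes.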
Red cell development in adult humans results in the mean daily production of 2×10^11 erythrocytes. Mature hemoglobinized and enucleated erythrocytes develop from multipotent hematopoietic stem/progenitor cells through more committed progenitor cell types such as BFU-E and CFU-E. Studies of the molecular mechanisms of erythropoiesis in the human system require a sufficient number of purified erythroid progenitors of the different stages of erythropoiesis. Primary human erythroid progenitors are difficult to obtain as a homogeneous population in sufficiently high cell numbers. Various culture conditions for the in vitro cell culture of primary human erythroid progenitors have been previously described. Mainly, the culture resulted in the generation of rather mature stages of Epo-dependent erythroid progenitors. In this study our efforts were directed towards the isolation and characterization of earlier red cell progenitors that are Epo-independent. To identify such progenitors, CD34+ cells were purified from cord blood and cultured under serum-free conditions in the presence of the growth factors SCF, IL-3 and hyper-IL-6, referred to as SI2 culture conditions. By immunomagnetic bead selection of E-cadherin (E-cad) positive cells, E-cad+ progenitors were isolated. These Epo-independent E-cad+ progenitors have been amplified under SI2 conditions to large cell numbers. The E-cad+ progenitors were characterized for surface antigen expression by flow cytometry, for their response to growth factors in proliferation assays, and for their differentiation potential into mature red cells. Additionally, the properties of E-cad+ progenitors were compared to those of two other erythroid progenitors: Epo-dependent progenitors described by Panzenböck et al. (referred to as SCF/Epo progenitors), and CD36+ progenitors described by Freyssinier et al. (Panzenböck et al., 1998; Freyssinier et al., 1999).
Finally, the gene expression profile of E-cad+ progenitors was compared to the profiles of SCF/Epo progenitors and CD36+ progenitors using the DNA microarray technique. Based on our studies we propose that Epo-independent E-cad+ progenitors are early-stage, BFU-E-like progenitors. They respond to Epo, despite the fact that they were generated in the absence of Epo, and can completely undergo erythroid differentiation. Furthermore, we demonstrate that the growth properties, the growth factor response and the surface marker expression of E-cad+ progenitors are similar to those of the SCF/Epo progenitors and the CD36+ progenitors. By the comparison of gene profiles, we were also able to demonstrate that the Epo-dependent and Epo-independent red cell progenitors are very similar. Analyzing the molecular differences between E-cad+ and SCF/Epo progenitors revealed several candidate genes such as galectin-3, cyclin D1, AMHR, PDF and IGFBP4, which are potential regulators involved in red cell development. We also demonstrate that the CD36+ progenitors, isolated by immunomagnetic bead selection, are a heterogeneous progenitor population containing an E-cad+ and an E-cad- subpopulation. Based on their gene expression profile, CD36+ progenitors seem to exhibit both erythroid and megakaryocytic features. These studies led to an updated model of erythroid cell development that should pave the way for further studies on molecular mechanisms of erythropoiesis.
The Dead Sea Transform (DST) is a prominent shear zone in the Middle East. It separates the Arabian plate from the Sinai microplate and stretches from the Red Sea rift in the south via the Dead Sea to the Taurus-Zagros collision zone in the north. Formed in the Miocene about 17 Ma ago and related to the breakup of the Afro-Arabian continent, the DST accommodates the left-lateral movement between the two plates. The study area is located in the Arava Valley between the Dead Sea and the Red Sea, centered across the Arava Fault (AF), which constitutes the major branch of the transform in this region. A set of seismic experiments comprising controlled sources, linear profiles across the fault, and specifically designed receiver arrays reveals the subsurface structure in the vicinity of the AF and of the fault zone itself down to about 3-4 km depth. A tomographically determined seismic P velocity model shows a pronounced velocity contrast near the fault, with lower velocities on the western side than east of it. Additionally, S waves from local earthquakes provide an average P-to-S velocity ratio in the study area, and there are indications of variations across the fault. High-resolution tomographic velocity sections and seismic reflection profiles confirm the surface trace of the AF, and observed features correlate well with fault-related geological observations. Coincident electrical resistivity sections from magnetotelluric measurements across the AF show a conductive layer west of the fault, resistive regions east of it, and a marked contrast near the trace of the AF, which seems to act as an impermeable barrier for fluid flow. The correlation of seismic velocities and electrical resistivities leads to a characterisation of subsurface lithologies based on their physical properties. Whereas the western side of the fault is characterised by a layered structure, the eastern side is rather uniform.
The vertical boundary between the western and the eastern units seems to be offset to the east of the AF surface trace. A modelling of fault-zone reflected waves indicates that the boundary between low and high velocities is possibly rather sharp but exhibits a rough surface on a length scale of a few hundred metres. This gives rise to scattering of seismic waves at this boundary. The imaging (migration) method used is based on array beamforming and coherency analysis of P-to-P scattered seismic phases. Careful assessment of the resolution ensures reliable imaging results. The western low velocities correspond to the young sedimentary fill in the Arava Valley, and the high velocities in the east reflect mainly Precambrian igneous rocks. A 7 km long subvertical scattering zone (reflector) is offset about 1 km east of the AF surface trace and can be imaged from 1 km to about 4 km depth. The reflector marks the boundary between two lithological blocks juxtaposed most probably by displacement along the DST. This interpretation as a lithological boundary is supported by the combined seismic and magnetotelluric analysis. The boundary may be a strand of the AF, which is offset from the current, recently active surface trace. The total slip of the DST may be distributed spatially and in time over these two strands and possibly other faults in the area.
The correlations between the chemical structures of the 2,5-diphenyl-1,3,4-oxadiazole compounds and their corresponding vapour-deposited film structures on Si/SiO2 were systematically investigated with AFM, XSR and IR for the first time. The results show that the film structure depends strongly on the substrate temperature (Ts). For the compounds with an ether bridge group, the film periodicity depends linearly on the length of the aliphatic chain. The films based on those oxadiazoles have an ordered structure throughout the investigated substrate temperature region, while the amide-bridged compounds form ordered films only at high Ts, due to the formation of intermolecular H-bonds. The tilt angle of most molecules is determined by the pi-pi complexes between the molecules. The intermolecular interaction between head groups leads to a structural transformation during thermal treatment after deposition. All the ether-bridged oxadiazoles form films with a bilayer structure, while the amide-bridged oxadiazoles form a bilayer structure only when the molecule has a head group.
Distributed optimality
(2001)
In this thesis I propose a synthesis (Distributed Optimality, DO) between Optimality Theory (OT, Prince & Smolensky, 1993) and a morphological framework in a genuinely derivational tradition, namely Distributed Morphology (DM) as developed by Halle & Marantz (1993). By carrying over the apparatus of OT to DM, phenomena which are captured in DM by language-specific rules or features of lexical entries are given a more principled account in terms of ranked universal constraints. On the other hand, the DM component also makes two contributions, namely strong locality and impoverishment. The first gives rise to a simple formal interpretation of DO, while the latter is shown to be indispensable in any theoretically satisfying account of agreement morphology. The empirical basis of the work is the complex agreement morphology of genetically different languages. The theoretical focus is mainly on two areas: first, so-called direction marking, which is shown to be preferably treated in terms of constraints on feature realization; second, the effects of precedence constraints, which are claimed to regulate the status of agreement affixes as prefixes or suffixes and their respective order. A universal typology for the order of agreement categories by means of OT constraints is proposed.
This thesis deals with the encoding and transmission of information through a quantum channel. A quantum channel is a quantum mechanical system whose state is manipulated by a sender and read out by a receiver. The individual state of the channel represents the message. The two topics of the thesis are 1) the possibility of compressing a message stored in a quantum channel without loss of information and 2) the possibility of communicating a message directly from one party to another in a secure manner, that is, such that a third party is not able to eavesdrop on the message without being detected. The main results of the thesis are the following. A general framework for variable-length quantum codes is worked out. These codes are necessary to make lossless compression possible. Due to the quantum nature of the channel, the encoded messages are in general in a superposition of different lengths. It is found to be impossible to compress a quantum message without loss of information if the message is not a priori known to the sender. In the other case it is shown that lossless quantum data compression is possible, and a lower bound on the compression rate is derived. Furthermore, an explicit compression scheme is constructed that works for arbitrarily given source message ensembles. A quantum cryptographic protocol - the “ping-pong protocol” - is presented that realizes the secure direct communication of classical messages through a quantum channel. The security of the protocol against arbitrary eavesdropping attacks is proven for the case of an ideal quantum channel. In contrast to other quantum cryptographic protocols, the ping-pong protocol is deterministic and can thus be used to transmit a random key as well as a composed message. The protocol is perfectly secure for the transmission of a key, and it is quasi-secure for the direct transmission of a message.
The latter means that the probability of successful eavesdropping exponentially decreases with the length of the message.
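The exponential decrease stated above can be illustrated with elementary probability. In the sketch below, c is the probability that a given transfer is a control run and d the probability that a control run exposes the eavesdropper; both values are illustrative placeholders, not parameters taken from the protocol itself.

```python
def eavesdrop_success(n_bits, c=0.5, d=0.5):
    """Probability that an eavesdropper reads all n_bits undetected,
    assuming each transfer is a control run with probability c and each
    control run exposes the attack with probability d.
    (Illustrative parameter values, not those of the ping-pong protocol.)
    """
    per_bit = (1.0 - c) + c * (1.0 - d)  # survive one transfer undetected
    return per_bit ** n_bits

for n in (1, 8, 64):
    print(n, eavesdrop_success(n))
```

Whatever the exact per-transfer survival probability, as long as it is below one the success probability decays exponentially in the message length, which is the quasi-security property quoted above.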
We study the effect on the elastic properties of lipid membranes induced by anchoring of long hydrophilic polymers. Theoretically, two limiting regimes for the membrane spontaneous curvature are expected: (i) at low surface polymer concentration (mushroom regime) the spontaneous curvature should scale linearly with the surface density of anchored polymers; (ii) at high coverage (brush regime) the dependence should be quadratic. We attempt to test the predictions for the brush regime by monitoring the morphological changes induced on giant vesicles. As long polymers we use fluorescently labeled λ-phage DNA molecules, which are attached to biotinylated lipid vesicles with a biotin-avidin-biotin linkage. By varying the amount of biotinylated lipid in the membrane we control the surface concentration of the anchors. The amount of DNA anchored to the membrane is quantified with fluorescence measurements. Changes in the elastic properties of the membrane as DNA grafts to it are monitored via analysis of the vesicle fluctuations. The spontaneous curvature of the membrane increases as a function of the surface coverage. At higher grafting concentrations the vesicles bud. The size of the buds can also be used to assess the membrane curvature. The effect on the bending stiffness is a subject of further investigation.
The theory of atomic Boson-Fermion mixtures in the dilute limit beyond mean-field is considered in this thesis. Extending the formalism of quantum field theory, we derived expressions for the quasi-particle excitation spectra, the ground state energy, and related quantities for a homogeneous system to first order in the dilute gas parameter. In the framework of density functional theory we could carry over the previous results to inhomogeneous systems. We then determined the density distributions for various parameter values and identified three different phase regions: (i) a stable mixed regime, (ii) a phase-separated regime, and (iii) a collapsed regime. We found a significant contribution of exchange-correlation effects in the latter case. Next, we determined the shift of the Bose-Einstein condensation temperature caused by Boson-Fermion interactions in a harmonic trap due to redistribution of the density profiles. We then considered Boson-Fermion mixtures in optical lattices. We calculated the criterion for stability against phase separation and identified the Mott-insulating and superfluid regimes, both analytically within a mean-field calculation and numerically by virtue of a Gutzwiller ansatz. We also found new frustrated ground states in the limit of very strong lattices. ---- Note: The author is a recipient of the Carl Ramsauer Prize 2004, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin and Universität Potsdam.
Value education of youth
(2002)
The value priorities of students and teachers were measured at eight different schools at the beginning and the end of the school year 2000/2001. This study once again confirmed the theoretical model of a universal structure of human values (Schwartz, 1992). At both measurement times, similar gender differences, as well as positive correlations between religiosity and school commitment, were found. The students from the non-religious schools showed Hedonism as their highest and Tradition as their lowest value priority. In the religious schools, Benevolence and Self-Direction were the highest values, whereas Power was found to be the lowest value priority. The change of the values Conformity, Hedonism, and Universalism was predicted both by the students' religiosity and by their type of school. The change of the values Power, Tradition, Benevolence, and Achievement, however, was mainly predicted by their religiosity. In three out of four schools the student-teacher similarity correlated positively with the school commitment of the students. Across all schools student-teacher similarity correlated positively with academic achievement.
Late Miocene to Quaternary volcanic rocks from the frontal arc to the back-arc region of the Central Volcanic Zone in the Andes show a wide range of delta 11B values (+4 to -7 ‰) and boron concentrations (6 to 60 ppm). Positive delta 11B values of samples from the volcanic front indicate involvement of a 11B-enriched slab component, most likely derived from altered oceanic crust, despite the thick Andean continental lithosphere, and rule out a pure crust-mantle origin for these lavas. The delta 11B values and B concentrations in the lavas decrease systematically with increasing depth of the Wadati-Benioff Zone. This across-arc variation in delta 11B values and decreasing B/Nb ratios from the arc to the back-arc samples are attributed to the combined effects of B-isotope fractionation during progressive dehydration in the slab and a steady decrease in slab-fluid flux towards the back arc, coupled with a relatively constant degree of crustal contamination as indicated by similar Sr, Nd and Pb isotope ratios in all samples. Modelling of fluid-mineral B-isotope fractionation as a function of temperature fits the across-arc variation in delta 11B and we conclude that the B-isotope composition of arc volcanics is dominated by changing delta 11B composition of B-rich slab-fluids during progressive dehydration. Crustal contamination becomes more important towards the back-arc due to the decrease in slab-derived fluid flux. Because of this isotope fractionation effect, high delta 11B signatures in volcanic arcs need not necessarily reflect differences in the initial composition of the subducting slab. Three-component mixing calculations for slab-derived fluid, the mantle wedge and the continental crust based on B, Sr and Nd isotope data indicate that the slab-fluid component dominates the B composition of the fertile mantle and that the primary arc magmas were contaminated by an average addition of 15 to 30 % crustal material.
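The concentration-weighted mixing underlying such three-component calculations can be written out in a few lines. The endmember fractions, boron concentrations and delta 11B values below are invented for illustration and are not the paper's fitted values.

```python
def mix_delta(fractions, concentrations, deltas):
    """Isotopic composition of a mixture as the concentration-weighted
    mean of its endmembers.  fractions: mass fractions summing to 1;
    concentrations: element concentrations (e.g. B in ppm); deltas:
    isotopic compositions in permil.
    """
    num = sum(f * c * d for f, c, d in zip(fractions, concentrations, deltas))
    den = sum(f * c for f, c in zip(fractions, concentrations))
    return num / den

# hypothetical endmembers: slab fluid, mantle wedge, continental crust
d11B = mix_delta(fractions=[0.05, 0.75, 0.20],
                 concentrations=[500.0, 0.1, 10.0],
                 deltas=[+7.0, -5.0, -10.0])
print(d11B)
```

Because the (hypothetical) slab fluid is far more B-rich than the other endmembers, even a small fluid fraction pulls the mixture's delta 11B close to the fluid value, mirroring the paper's conclusion that the slab-fluid component dominates the B budget.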
Transport processes in and of cells are of major importance for the survival of the organism. Muscles have to be able to contract, chromosomes have to be moved to opposing ends of the cell during mitosis, and organelles, which are compartments enclosed by membranes, have to be transported along molecular tracks. Molecular motors are proteins whose main task is moving other molecules. For that purpose they transform the chemical energy released in the hydrolysis of ATP into mechanical work. The motors of the cytoskeleton belong to the three superfamilies myosin, kinesin and dynein. Their tracks are filaments of the cytoskeleton, namely actin filaments and microtubules. Here, we examine stochastic models which are used for describing the movements of these linear molecular motors. The scale of the movements ranges from single steps of a motor protein up to the directed walk along a filament. A single step covers around 10 nm, depending on the protein, and takes about 10 ms, if there is enough ATP available. Our models comprise M states or conformations the motor can attain during its movement along a one-dimensional track. At K locations along the track, transitions between the states are possible. The velocity of the protein as a function of the transition rates between the individual states can be determined analytically. We calculate this velocity for systems of up to four states and locations and are able to derive a number of rules which are helpful in estimating the behaviour of an arbitrary given system. Beyond that, we consider decoupled subsystems, i.e., one or more states which have no connection to the remaining system. With a certain probability a motor undergoes a cycle of conformational changes; with another probability, a different, independent cycle. Active elements in real transport processes by molecular motors will not be limited to the transitions between the states.
In distorted networks, or starting from the discrete master equation of the system, it is possible to specify horizontal rates as well, which moreover no longer have to fulfil the conditions of detailed balance. Doing so, we obtain unique, complete paths through the respective network and rules for the dependence of the total current on all the rates of the system. In addition, we examine the time evolution for given initial distributions. In enzymatic reactions there is the idea of a main pathway these reactions preferentially follow. We determine optimal paths and the maximal flow for given networks. In order to specify the dependence of the motor's velocity on its fuel ATP, we consider possible reaction kinetics determining the connection between unbalanced transition rates and the ATP concentration. Depending on the type of reaction kinetics and the number of unbalanced rates, we obtain qualitatively different curves connecting the velocity to the ATP concentration. The molecular interaction potentials the motor experiences on its way along its track are unknown. We compare different simple potentials and examine the effects that the localization of the vertical rates in the network model has on the transport coefficients, in comparison with other models.
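As a minimal illustration of how a stationary velocity follows from the transition rates, the sketch below solves the master equation of a two-state cycle analytically. The rate values are invented, chosen only to match the order of magnitude of 10 nm steps every ~10 ms mentioned above; the thesis treats more general M-state, K-location networks.

```python
def motor_velocity(u1, u2, w1, w2, step_nm=10.0):
    """Stationary velocity of a two-state cyclic motor model.

    u_i: forward rate out of state i (1/s), w_i: backward rate out of
    state i (1/s).  The stationary solution of the master equation gives
    p1 ~ u2 + w2 and p2 ~ u1 + w1, hence the net cycle current
    J = (u1*u2 - w1*w2) / (u1 + u2 + w1 + w2) and v = step * J.
    """
    J = (u1 * u2 - w1 * w2) / (u1 + u2 + w1 + w2)
    return step_nm * J

# illustrative rates: strongly forward-biased cycle
v = motor_velocity(u1=200.0, u2=200.0, w1=2.0, w2=2.0)
print(v)  # nm/s; 990.0 for these rates, i.e. ~100 steps per second
```

Note that the velocity vanishes whenever u1*u2 = w1*w2, i.e. when the cycle satisfies detailed balance; a net current requires broken detailed balance, as the horizontal-rate discussion above emphasizes.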
The P- and S-wave velocity structure of the D” layer beneath the southwestern Pacific was investigated by using short-period data from 12 deep events in the Tonga-Fiji region recorded by the J-Array and the Hi-net in Japan. A migration method and reflected wave beamforming (RWB) were used in order to extract weak signals originating from small-scale heterogeneities in the lowermost mantle. In order to acquire high resolution, a double array method (DAM), which integrates source array beamforming with receiver array beamforming, was applied to the data. A phase-weighted stacking technique, which reduces incoherent noise by employing complex trace analysis, was also applied to the data, amplifying the weak coherent signals from the lowermost mantle. This combination greatly enhances small phases common to the source and receiver beams. The results of the RWB method indicate that seismic energy is reflected at discontinuities near 2520 km and 2650 km, which have a negative P-wave velocity contrast of at most 1 %. In addition, there is a positive seismic discontinuity at a depth of 2800 km. In the case of the S wave, reflected energy is produced at almost the same depth (about 2550 km). The 50 km difference in depth between the P-wave velocity discontinuity at 2800 km and a further S-wave velocity discontinuity at 2850 km may indicate that the S-wave velocity reduction in the lowermost mantle is about 2-3 times stronger than that of the P wave. A look at a 2D cross section, constructed with the RWB method, suggests that the observed discontinuities can be characterized as intermittent lateral heterogeneities whose lateral extent is a few hundred km, and that the CMB might have undulations on a scale of less than 10 km in amplitude. The migration shows only weak evidence for the existence of scattering objects. Heterogeneous regions in the migration belong to the detected seismic discontinuities.
These anomalous structures may represent part of a hot plume generated beneath the southwestern Pacific in the lowermost mantle.
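The idea behind the phase-weighted stacking mentioned above can be sketched compactly. The function below assumes the traces have already been converted to analytic (complex) signals, e.g. via a Hilbert transform; it is a generic textbook-style version, not the processing code used in the study.

```python
import cmath

def phase_weighted_stack(traces, nu=2.0):
    """Phase-weighted stack of analytic (complex-valued) traces.

    The linear stack is weighted, sample by sample, by the coherence of
    the instantaneous phases, |mean(exp(i*phi_k))|**nu, so incoherent
    noise is suppressed while coherent arrivals survive.
    """
    n = len(traces)
    out = []
    for t in range(len(traces[0])):
        samples = [tr[t] for tr in traces]
        linear = sum(s.real for s in samples) / n          # ordinary stack
        phases = [s / abs(s) if abs(s) > 0 else 0j for s in samples]
        coherence = abs(sum(phases)) / n                   # 0..1
        out.append(linear * coherence ** nu)
    return out

# perfectly coherent traces: the phase weight is ~1, stack is unchanged
sig = [cmath.rect(1.0, 0.1 * t) for t in range(20)]
print(phase_weighted_stack([sig, sig, sig])[0])
```

For samples with random or opposing phases the coherence term collapses towards zero, which is how weak signals common to the source and receiver beams are enhanced relative to noise.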
This study investigated the slope carbonates of two Miocene carbonate systems: the Maltese Islands (in the Central Mediterranean) and the Marion Plateau (Northeastern Australia, drilled during ODP Leg 194). The aim of the study was to trace the impact of the Miocene cooling steps (events Mi1-Mi6) in these carbonate systems, especially the Mi3 event, which took place around 13.6 Ma and deeply impacted the marine oxygen isotope record. This event also profoundly impacted oceanographic and climatic patterns, eventually leading to the establishment of the modern ice-house world. In particular, East Antarctica became ice covered at that period. The rationale behind the present study was to investigate the impact that this event had on shallow water systems in order to complement the deep-sea record and hence acquire a more global perspective on Miocene climate change. The Maltese Islands were investigated for trends in bulk-rock carbon and oxygen isotopes, as well as bulk-rock mineralogy, clay mineral analysis and organic geochemistry. Results showed that the mid-Miocene cooling event deeply impacted sedimentation at that location by changing sedimentation from carbonate to clay-rich sediments. Moreover, it was discovered that each phase of Antarctic glaciation, not just the major mid-Miocene event, resulted in higher terrigenous input on Malta. Mass accumulation rates revealed that this was linked to increased runoff during periods when Antarctica was glaciated, and thus that the carbonate sediments were “diluted” by clay-rich sediments. The model subsequently developed to explain this implies feedback from Antarctic glaciations creating cold, dense air masses that push the ITCZ northward, thus increasing precipitation on the North African subcontinent. Increased precipitation (or a stronger African monsoon) accelerated continental weathering and runoff, thus bringing more terrigenous sediment to the paleo-location of the slope sediments of Malta.
Spectral analysis of carbonate content and organic matter geochemical analysis furthermore suggest that the clay-rich intervals are similar to sapropelic deposits. On the Marion Plateau, trends in oxygen and carbon isotopes were obtained by measuring Cibicidoides spp. foraminifera. Moreover, carbonate content was reconstructed using a chemical method (coulometry). Results show that the mid-Miocene cooling step profoundly affected this system: a major drop in the accumulation rates of carbonates occurs precisely at 13.8 Ma, around the time of the East Antarctic ice sheet formation. Moreover, sedimentation changes occurred at that time: carbonate fragments coming from neritic environments became less abundant, planktonic foraminifer content increased, and quartz and reworked glauconite were deposited. Conversely, a surprising result is that the major N12-N14 sea-level fall occurring around 11.5 Ma did not impact the accumulation of carbonates on the slope. This was unexpected since carbonate platforms are very sensitive to sea-level changes. The model developed to explain why the mass accumulation rates of carbonates diminished around 13.6 Ma (Mi3 event) instead of 11.5 Ma (N12-N14 event) suggests that oceanic currents were controlling slope carbonate deposition on the Marion Plateau prior to the mid-Miocene, and that the mid-Miocene event considerably increased their strength, hence reducing the amount of carbonate deposited at slope sites. Moreover, by combining results from deep-sea oxygen isotopes with sea-level estimates based on coastal onlaps made during Leg 194, we constrain the amplitude of the N12-N14 sea-level fall to 90 meters. When integrating isotopic results from this study, this amplitude is lowered to 70 meters. A general conclusion of this work is that the mid-Miocene climatic shift did impact carbonate systems, at least at the two locations studied.
However, the nature of this response was highly dependent on the regional setting, in particular the presence of a land mass (Malta) or the absence of a barrier sheltering from the effects of the open ocean (Marion Plateau).
The present work deals with the first synthesis and characterisation of amphiphilic diblock copolymers bearing β-dicarbonyl (acetoacetoxy) chelating residues. Polymers were obtained by Group Transfer Polymerisation (GTP) followed by acetoacetylation, and by controlled radical polymerisation (RAFT). Different micellar morphologies of poly(n-butyl methacrylate)-block-poly[2-(acetoacetoxy)ethyl methacrylate] (pBuMA-b-pAEMA) were observed in cyclohexane as a selective solvent. Depending on the block length ratio, either spherical, elliptical, or cylindrical micelles were formed. The density of the polymer chains at the core/corona interface is considerably higher than in any other strongly segregating system reported in the literature. It is demonstrated that H-bond interactions exist between acetoacetoxy groups, which increase the incompatibility between the block segments. In addition, such interactions lead to the formation of secondary structures (such as β-sheets or globular structures) and larger superstructures on the micrometre length scale. The block copolymers were also used to solubilise metal ion salts of different geometries and oxidation states in organic media, in which they are otherwise insoluble. Sterically stabilised colloidal hybrid materials are formed, i.e. monodisperse micelles having the metal ion salt incorporated in their core upon complexation with the ligating pAEMA block, whereas pBuMA forms the solvating corona responsible for stabilisation in solution. Systematic studies show that the aggregation behaviour depends on different factors, such as the tautomeric form of the β-dicarbonyl ligand (keto/enol) as well as the nature and amount of added metal ion salt.
Movements of processive cytoskeletal motors are characterized by an interplay between directed motion along a filament and diffusion in the surrounding solution. In the present work, these peculiar movements are studied by modeling them as random walks on a lattice. An additional subject of our studies is the effect of motor-motor interactions on these movements. In detail, four transport phenomena are studied: (i) random walks of single motors in compartments of various geometries, (ii) stationary concentration profiles which build up as a result of these movements in closed compartments, (iii) boundary-induced phase transitions in open tube-like compartments coupled to reservoirs of motors, and (iv) the influence of cooperative effects in motor-filament binding on the movements. All these phenomena are experimentally accessible, and possible experimental realizations are discussed.
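A minimal version of such a lattice model can be written down directly. The sketch below, with invented transition probabilities, lets a single walker alternate between biased motion on a filament (the line y = 0) and unbiased diffusion off it; it illustrates the modeling idea only, not the specific models of the thesis.

```python
import random

def motor_walk(steps, p_forward=0.99, p_unbind=0.01, seed=1):
    """Lattice random walk of a motor that alternates between directed
    motion on a filament (y == 0) and unbiased diffusion off it.
    Probabilities are illustrative, not fitted to a real motor.
    """
    rng = random.Random(seed)
    x, y = 0, 0                      # y == 0 means bound to the filament
    for _ in range(steps):
        if y == 0:
            if rng.random() < p_unbind:
                y += 1               # unbind from the filament
            elif rng.random() < p_forward:
                x += 1               # directed step along the filament
            else:
                x -= 1
        else:
            # free diffusion: unbiased step in one of four directions
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x += dx
            y = max(0, y + dy)       # rebind on returning to y == 0
    return x, y

print(motor_walk(10000))
```

Averaging many such walkers over long times yields the effective drift and diffusion coefficients that the lattice models above are designed to compute.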
The present work investigates structure formation and wetting in two-dimensional (2D) Langmuir monolayer phases in local thermodynamic equilibrium. A Langmuir monolayer is an isolated 2D system of surfactants at the air/water interface. It exhibits crystalline, liquid-crystalline, liquid and gaseous phases differing in positional and/or orientational order. Permanent electric dipole moments of the surfactants lead to a long-range repulsive interaction and to the formation of mesoscopic patterns. An interaction model is used that describes structure formation as a competition between short-range attraction (bare line tension) and long-range repulsion (surface potentials) on a scale Delta, where Delta has the meaning of a dividing length between the short- and long-range interactions. In the present work the thermodynamic equilibrium conditions for the shape of two-phase boundary lines (Young-Laplace equation) and three-phase intersection points (Young's condition) are derived and applied to describe experimental data: the line tension is measured by pendant droplet tensiometry; the bubble shape and size of 2D foams are calculated numerically and compared to experimental foams; contact angles are measured by fitting numerical solutions of the Young-Laplace equation on the micron scale. The scaling behaviour of the contact angle allows a lower limit for Delta to be measured. It is further discussed whether wetting transitions in biological membranes are a way to control reaction kinetics. Studies performed in our group are discussed with respect to this question in the framework of the above-mentioned theory. Finally, the apparent violation of Gibbs' phase rule in Langmuir monolayers (non-horizontal plateau of the surface pressure/area isotherm, extended three-phase coexistence region in one-component systems) is investigated quantitatively.
It has been found that the most probable explanation is impurities within the system, whereas finite size effects or the influence of the long range electrostatics cannot explain the order of magnitude of the effect.
In recent years, there has been a dramatic increase in available compute capacities. However, these “Grid resources” are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: Migrating applications determine a new resource when old capacities are used up. Spawning simulations launch algorithms on external machines to speed up the main execution. Applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This “Grid Peer Services” infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements. It transfers the application's checkpoint and binary to the new host and resumes the simulation.
Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with our real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.
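As a rough illustration of the resource-selection step described above, a migration server might rank candidate hosts against an application's transmitted requirements as sketched below. All names here (`Host`, `select_host`, the attributes) are hypothetical, not the actual Grid Peer Services or Globus API:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu_hours: float   # remaining capacity on this resource
    bandwidth_mbps: float   # network quality for the checkpoint transfer

def select_host(hosts, required_cpu_hours):
    """Pick the candidate with the best bandwidth among those that can
    still satisfy the application's resource requirements; return None
    if no host qualifies (the application then keeps running in place)."""
    eligible = [h for h in hosts if h.free_cpu_hours >= required_cpu_hours]
    if not eligible:
        return None
    return max(eligible, key=lambda h: h.bandwidth_mbps)

hosts = [Host("alpha", 10.0, 100.0),
         Host("beta", 50.0, 300.0),
         Host("gamma", 2.0, 900.0)]
target = select_host(hosts, required_cpu_hours=20.0)
# "gamma" has the best network but lacks capacity, so "beta" is chosen
```

In the real infrastructure this decision would of course draw on live resource information (e.g. from the Application Information Server) rather than a static list.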
Our everyday experience is filled with various acoustic noises and music. Usually noise is a nuisance in any communication and destroys order in a system. Similar optical effects are known: heavy snow or rain degrades vision. In contrast to these situations, noisy stimuli can also play a positive, constructive role; a driver, for example, may concentrate better in the presence of quiet music. Transmission processes in neural systems are of special interest from this point of view: excitation or information is transmitted only if a signal exceeds a threshold. Dr. Alexei Zaikin from the University of Potsdam studies noise-induced phenomena in nonlinear systems from a theoretical point of view. He is especially interested in processes in which noise influences the behaviour of a system twice: if the noise intensity exceeds a threshold, it induces a regular structure that is then synchronized with the behaviour of neighbouring elements. To obtain such a system with a threshold, a second noise source is needed. Dr. Zaikin has analyzed further examples of such doubly stochastic effects and developed a concept of these new phenomena. These theoretical findings are important because such processes can play a crucial role in neurophysics, technical communication devices and the life sciences.
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: Metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: Processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing. These problems are, for example, heterogeneity, authentication and slow networks, to mention only a few. Some of these problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap.
In our thesis, we will
- show that an execution of classical parallel codes in Grid environments is possible but very slow,
- analyze this situation of bad performance, nail down bottlenecks in communication, and remove unnecessary overhead and other reasons for low performance,
- develop new and advanced algorithms for parallelisation that are aware of a Grid environment, in order to generalize the traditional parallelization schemes,
- implement and test these new methods, and replace and compare them with the classical ones,
- introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment.
The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation, we tried to keep the balance between high performance and generality. None of our changes directly affects code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus.
Colloidal systems are present everywhere in many varieties, such as emulsions (liquid droplets dispersed in liquid), aerosols (liquid dispersed in gas), foam (gas in liquid), etc. Among several new methods for the preparation of colloids, the so-called miniemulsion technique has been shown to be one of the most promising. Miniemulsions are defined as stable emulsions consisting of droplets with a size of 50-500 nm, prepared by shearing a system containing oil, water, a surfactant, and a highly water-insoluble compound, the so-called hydrophobe. 1. In the first part of this work, dynamic crystallization and melting experiments are described which were performed in small, stable and narrowly distributed nanodroplets (confined systems) of miniemulsions. Both regular and inverse systems were examined, characterizing, first, the crystallization of hexadecane and, secondly, the crystallization of ice. It was shown for both cases that the temperature of crystallization in such droplets is significantly decreased (or the required undercooling is increased) as compared to the bulk material. This was attributed to a very effective suppression of heterogeneous nucleation. It was also found that the required undercooling depends on the nanodroplet size: with decreasing droplet size the undercooling increases. 2. It is shown that the temperature of crystallization of other n-alkanes in nanodroplets is also significantly decreased as compared to the bulk material, due to a very effective suppression of heterogeneous nucleation. A very different behavior was detected between odd and even alkanes. In even alkanes, the confinement in small droplets changes the crystal structure from a triclinic (as seen in bulk) to an orthorhombic structure, which is attributed to finite size effects inside the droplets. An intermediate metastable rotator phase is of less relevance for the miniemulsion droplets than in the bulk.
For odd alkanes, only a strong temperature shift compared to the bulk system is observed, but no structure change. A triclinic structure is formed both in bulk and in miniemulsion droplets. 3. In the next part of the thesis it is shown how miniemulsions can be successfully applied in the development of materials with potential application in the pharmaceutical and medical fields. The production of cross-linked gelatin nanoparticles is feasible. Starting from an inverse miniemulsion, the softness of the particles can be controlled by varying the initial concentration, the amount of cross-linking agent and the time of cross-linking, among other parameters. Such particles show a thermo-reversible effect, i.e. the particles swell in water above 37 °C and shrink below this temperature. Above 37 °C the chains lose the physical cross-linking; however, the particles do not lose their integrity, because of the chemical cross-linking. These particles have potential use as drug carriers, since gelatin is a natural polymer derived from collagen. 4. The cross-linked gelatin nanoparticles have been used for the biomineralization of hydroxyapatite (HAP), a biomineral which is the major constituent of our bones. The biomineralization of HAP crystals within the gelatin nanoparticles results in a hybrid material which has potential use as a bone repair material. 5. In the last part of this work we have shown that layers of conjugated semiconducting polymers can be deposited from aqueous dispersions prepared by the miniemulsion process. Dispersions of particles of different conjugated semiconducting polymers, such as a ladder-type poly(para-phenylene) and several soluble derivatives of polyfluorene, could be prepared with well-controlled particle sizes ranging between 70 and 250 nm. Layers of polymer blends were prepared with controlled lateral dimensions of phase separation on sub-micrometer scales, utilizing either a mixture of single-component nanoparticles or nanoparticles containing two polymers.
From the results of energy transfer it is demonstrated that blending two polymers in the same particle leads to a higher efficiency, due to the better contact between the polymers. Such an effect is of great interest for the fabrication of opto-electronic devices such as light emitting diodes with nanometer-size emitting points and solar cells comprising blends of electron-donating and electron-accepting polymers.
Studies of the role of disturbance in vegetation or ecosystems showed that disturbances are an essential and intrinsic element of ecosystems that contribute substantially to ecosystem health, to the structural diversity of ecosystems and to nutrient cycling at the local as well as the global level. Fire, as a grassland, bush or forest fire, is a special disturbance agent, since it is caused by biotic as well as abiotic environmental factors. Fire affects biogeochemical cycles and plays an important role in atmospheric chemistry by releasing climate-sensitive trace gases and aerosols, and thus in the global carbon cycle by releasing approximately 3.9 Gt C p.a. through biomass burning. A combined model to describe effects and feedbacks between fire and vegetation became relevant as changes in fire regimes due to land use and land management were observed and the global dimension of biomass burnt as an important carbon flux to the atmosphere, its influence on atmospheric chemistry and climate as well as vegetation dynamics were emphasized. The existing modelling approaches would not allow these investigations. As a consequence, an optimal set of variables that best describes fire occurrence, fire spread and its effects in ecosystems had to be defined, which can simulate observed fire regimes and help to analyse interactions between fire and vegetation dynamics as well as to point to the reasons behind changing fire regimes. Especially, dynamic links between vegetation, climate and fire processes are required to analyse dynamic feedbacks and effects of changes of single environmental factors. This led us to the point where new fire models had to be developed that would allow the investigations mentioned above and could help to improve our understanding of the role of fire in global ecology. In conclusion of the thesis, one can state that moisture conditions, their persistence over time and fuel load are the important components that describe global fire patterns.
If time series of a particular region are to be reproduced, specific ignition sources, fire-critical climate conditions and vegetation composition become additional determinants. Vegetation composition changes the level of fire occurrence and spread, but has limited impact on the inter-annual variability of fire. The importance of considering the full range of major fire processes and links to vegetation dynamics becomes apparent under climate change conditions. Increases in the climate-dependent length of the fire season do not automatically imply increases in biomass burnt; they can be buffered or accelerated by changes in vegetation productivity. Changes in vegetation composition as well as enhanced vegetation productivity can intensify changes in fire and lead to even more fire-related emissions. --- Note: The author is a recipient of the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral thesis of the year 2002/2003.
In this thesis, I investigated the factors influencing the growth and vertical distribution of planktonic algae in extremely acidic mining lakes (pH 2-3). In the focal study site, Lake 111 (pH 2.7; Lusatia, Germany), the chrysophyte, Ochromonas sp., dominates in the upper water strata and the chlorophyte, Chlamydomonas sp., in the deeper strata, forming a pronounced deep chlorophyll maximum (DCM). Inorganic carbon (IC) limitation influenced the phototrophic growth of Chlamydomonas sp. in the upper water strata. Conversely, in deeper strata, light limited its phototrophic growth. When compared with published data for algae from neutral lakes, Chlamydomonas sp. from Lake 111 exhibited a lower maximum growth rate, an enhanced compensation point and higher dark respiration rates, suggesting higher metabolic costs due to the extreme physico-chemical conditions. The photosynthetic performance of Chlamydomonas sp. decreased in high-light-adapted cells when IC was limiting. In addition, the minimal phosphorus (P) cell quota was suggestive of a higher P requirement under IC limitation. Subsequently, it was shown that Chlamydomonas sp. was a mixotroph, able to enhance its growth rate by taking up dissolved organic carbon (DOC) via osmotrophy. Therefore, it could survive in deeper water strata where DOC concentrations were higher and light was limiting. However, neither IC limitation, P availability nor in situ DOC concentrations (bottom-up control) could fully explain the vertical distribution of Chlamydomonas sp. in Lake 111. Conversely, when a novel approach was adopted, the grazing influence of the phagotrophic phototroph, Ochromonas sp., was found to exert top-down control on its prey (Chlamydomonas sp.), reducing prey abundance in the upper water strata. This, coupled with the fact that Chlamydomonas sp. uses DOC for growth, leads to a pronounced accumulation of Chlamydomonas sp. cells at depth; an apparent DCM.
Therefore, grazing appears to be the main factor influencing the vertical distribution of algae observed in Lake 111. The knowledge gained from this thesis provides information essential for predicting the effects on the food web of strategies to neutralize the acidic mining lakes.
One of the rules-of-thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, in charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), or bilayers containing charged phospholipids (as in cell membranes). In this work, we look at some model systems that, although being a simplified version of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann (in the low coupling limit) and the novel strong coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that the SC theory leads in fact to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to the Poisson-Boltzmann theory. We also discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong coupling theory is shown to lead to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are shown to be well described by SC. Another experimentally relevant modification to the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities in determining the influence of the discrete nature of the charges at the wall on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with a higher coupling parameter are more subject to discretization effects than systems with a low coupling parameter. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
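For orientation, the two limiting density profiles discussed above have simple closed forms in the literature (the standard Gouy-Chapman result for PB and the leading-order SC result; this is not the thesis's simulation code). Both obey the contact-value theorem n(0) = 2*pi*l_B*sigma_s**2, while PB decays algebraically and SC exponentially on the Gouy-Chapman length mu:

```python
import math

def gouy_chapman_length(q, l_B, sigma_s):
    """mu = 1/(2*pi*q*l_B*sigma_s): thickness scale of the counterion layer
    for valence q, Bjerrum length l_B, surface charge density sigma_s."""
    return 1.0 / (2.0 * math.pi * q * l_B * sigma_s)

def density_PB(z, q, l_B, sigma_s):
    """Poisson-Boltzmann (weak-coupling) profile: algebraic decay ~ (z+mu)^-2."""
    mu = gouy_chapman_length(q, l_B, sigma_s)
    return 1.0 / (2.0 * math.pi * q**2 * l_B * (z + mu)**2)

def density_SC(z, q, l_B, sigma_s):
    """Leading-order strong-coupling profile: exponential decay exp(-z/mu)."""
    mu = gouy_chapman_length(q, l_B, sigma_s)
    return 2.0 * math.pi * l_B * sigma_s**2 * math.exp(-z / mu)

# Both limits share the same contact value n(0) = 2*pi*l_B*sigma_s^2
# (units chosen so lengths are measured in units of one lattice constant):
q, l_B, sigma_s = 1.0, 0.7, 1.0
contact = 2.0 * math.pi * l_B * sigma_s**2
```

Monte Carlo profiles interpolate between these two forms as the coupling parameter grows, which is precisely the comparison the thesis carries out.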
Today, analytical chemistry no longer consists only of large measuring devices and methods that are time-consuming and expensive, can be handled only by qualified staff, and whose results can likewise be evaluated only by such staff. Usually this technique, described in the following as the 'classic analytical measuring technique', also requires specially equipped rooms and often a relatively large quantity of the test compounds, which must be specially prepared. Beside this classic analytical measuring technique, which is limited to particular substance groups and requirements, a new measuring technique has gained acceptance, particularly within the last years, which can often be used by a layman, too. The new measuring technique often requires very little equipment. The needed sample volumes are also small, and no special sample preparation is required. In addition, the new measuring instruments are simple to handle. They are cheap both in their production and in their use, and they usually even permit continuous measurement recording. Numerous of these new measuring instruments are based on research in the field of biosensors during the last 40 years. Since Clark and Lyons in the year 1962 were able to measure glucose with a simple oxygen electrode complemented by an enzyme, the development of the new measuring technique could no longer be held back. Biosensors, special sensors consisting of a combination of a biological component (which permits specific recognition of the analyte even without prior purification of the sample) and a physical transducer (which converts the primary physicochemical effect into an electronically measurable signal), conquered the market. In the context of this thesis, different tyrosinase sensors were developed which fulfil various requirements, depending on the origin and features of the tyrosinase used.
One of the tyrosinase sensors, for example, was used for the quantification of phenolic compounds in river and sea water, and the results correlated very well with the corresponding DIN test for the determination of phenolic compounds. Another tyrosinase sensor showed a very high sensitivity for catecholamines, substances of special importance in medical diagnostics. In addition, the investigations of two different tyrosinases, which were also carried out in the context of this thesis, have shown that a particular tyrosinase (tyrosinase from Streptomyces antibioticus) will be a better choice than the tyrosinase from Agaricus bisporus used in biosensor research until now, if one wants to develop even more sensitive tyrosinase sensors in the future. Furthermore, first successes were achieved in the field of molecular biology: the production of tyrosinase mutants with special, deliberately designed features. These successes can be used to develop a new generation of tyrosinase sensors, in which the tyrosinase can be bound in a directed orientation both to the corresponding physical transducer and to another enzyme. This is expected to minimize the distances that the substance to be determined (or its product) must otherwise cover. Finally, this should result in a clearly visible increase in the sensitivity of the biosensor.
Combined structural and magnetotelluric investigation across the West Fault Zone in northern Chile
(2002)
The characterisation of the internal architecture of large-scale fault zones is usually restricted to the outcrop-based investigation of fault-related structural damage on the Earth's surface. A method to obtain information on the downward continuation of a fault is to image the subsurface electrical conductivity structure. This work deals with such a combined investigation of a segment of the West Fault, which itself is a part of the more than 2000 km long trench-linked Precordilleran Fault System in the northern Chilean Andes. Activity on the fault system lasted from Eocene to Quaternary times. In the working area (22°04'S, 68°53'W), the West Fault exhibits a clearly defined surface trace with a constant strike over many tens of kilometers. The outcrop conditions and morphology of the study area ideally allow for a combination of structural geology investigations and magnetotelluric (MT) / geomagnetic depth sounding (GDS) experiments. The aim was to achieve an understanding of the correlation of the two methods and to obtain a comprehensive view of the West Fault's internal architecture. Fault-related brittle damage elements (minor faults and slip-surfaces with or without striation) record prevalent strike-slip deformation on subvertically oriented shear planes. Dextral and sinistral slip events occurred within the fault zone and indicate reactivation of the fault system. The youngest deformation increments mapped in the working area are extensional, and the findings suggest a different orientation of the extension axes on either side of the fault. Damage element density increases with approach to the fault trace and marks an approximately 1000 m wide damage zone around the fault. A region of profound alteration and comminution of rocks, about 400 m wide, is centered in the damage zone. Damage elements in this central part predominantly dip steeply towards the east (70-80°).
Within the same study area, the electrical conductivity image of the subsurface was measured along a 4 km long MT/GDS profile. This main profile trends perpendicular to the West Fault trace. The MT stations of the central 2 km were 100 m apart from each other. A second profile with 300 m site spacing and 9 recording sites crosses the fault a few kilometers away from the main study area. Data were recorded in the frequency range from 1000 Hz to 0.001 Hz with four real time instruments S.P.A.M. MkIII. The GDS data reveal the fault zone for both profiles at frequencies above 1 Hz. Induction arrows indicate a zone of enhanced conductivity several hundred meters wide, that aligns along the WF strike and lies mainly on the eastern side of the surface trace. A dimensionality analysis of the MT data justifies a two dimensional model approximation of the data for the frequency range from 1000 Hz to 0.1 Hz. For this frequency range a regional geoelectric strike parallel to the West Fault trace could be recovered. The data subset allows for a resolution of the conductivity structure of the uppermost crust down to at least 5 km. Modelling of the MT data is based on an inversion algorithm developed by Mackie et al. (1997). The features of the resulting resistivity models are tested for their robustness using empirical sensitivity studies. This involves variation of the properties (geometry, conductivity) of the anomalies, the subsequent calculation of forward or constrained inversion models and check for consistency of the obtained model results with the data. A fault zone conductor is resolved on both MT profiles. The zones of enhanced conductivity are located to the east of the West Fault surface trace. On the dense MT profile, the conductive zone is confined to a width of about 300 m and the anomaly exhibits a steep dip towards the east (about 70°). 
Modelling implies that the conductivity increase reaches to a depth of at least 1100 m and indicates a depth extent of less than 2000 m. Further conductive features are imaged, but their geometry is less well constrained. The fault zone conductors of both MT profiles coincide in position with the alteration zone. For the dense profile, the dip of the conductive anomaly and the dip of the damage elements of the central part of the fault zone correlate. This suggests that the electrical conductivity enhancement is causally related to a mesh of minor faults and fractures, which is a likely pathway for fluids. The interconnected rock porosity that is necessary to explain the observed conductivity enhancement by means of fluids is estimated on the basis of the salinity of several ground water samples (Archie's Law). The deeper the source of the water sample, the more saline it is, due to longer exposure to fluid-rock interaction, and the lower is the fluid's resistivity. A rock porosity in the range of 0.8% - 4% would be required at a depth of 200 m. This indicates that fluids penetrating the damaged fault zone from close to the surface are sufficient to explain the conductivity anomalies. This is also supported by the preserved geochemical signature of rock samples in the alteration zone. Late-stage alteration processes were active in a low temperature regime (<95°C) and the involvement of ascending brines from greater depth is not indicated. The limited depth extent of the fault zone conductors is a likely result of sealing and cementation of the fault fracture mesh due to dissolution and precipitation of minerals at greater depth and increased temperature. Comparison of the results of the apparently inactive West Fault with published studies on the electrical conductivity structure of the currently active San Andreas Fault suggests that the depth extent and conductivity of the fault zone conductor may be correlated with fault activity.
Ongoing deformation will keep the fault/fracture mesh permeable for fluids and impede cementation and sealing of fluid pathways.
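The porosity estimate via Archie's Law mentioned above amounts to a one-line inversion. The numbers below are purely illustrative, since the abstract does not quote the measured resistivities or the cementation exponent:

```python
def archie_porosity(rho_rock, rho_fluid, m=2.0):
    """Invert Archie's law, rho_rock = rho_fluid * phi**(-m), for the
    interconnected porosity phi of a fully saturated rock. The cementation
    exponent m is an assumed value, not one reported in the study."""
    return (rho_fluid / rho_rock) ** (1.0 / m)

# Hypothetical example: a 100 Ohm*m bulk resistivity saturated with
# 0.3 Ohm*m ground water and m = 2 implies a porosity of about 5.5%.
phi = archie_porosity(rho_rock=100.0, rho_fluid=0.3, m=2.0)
```

Because the deeper, more saline water samples have lower resistivity, the same bulk resistivity can be explained with a smaller porosity, which is why the study's estimate of 0.8% - 4% at 200 m depth is compatible with near-surface fluids alone.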
Semi-arid areas are, due to their climatic setting, characterized by small water resources. An increasing water demand as a consequence of population growth and economic development, as well as a decreasing water availability in the course of possible climate change, may aggravate the water scarcity which often already exists in these areas under present-day conditions. Understanding the mechanisms and feedbacks of complex natural and human systems, together with the quantitative assessment of future changes in volume, timing and quality of water resources, are prerequisites for the development of sustainable measures of water management to enhance the adaptive capacity of these regions. For this task, dynamic integrated models, containing a hydrological model as one component, are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability in view of environmental change over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceará (150 000 km2) in the semi-arid north-east of Brazil. Mean annual precipitation in this area is 850 mm, falling in a rainy season of about five months' duration. As the area is mainly characterized by crystalline bedrock and shallow soils, surface water provides the largest part of the water supply. The area has recurrently been affected by droughts, which caused serious economic losses and social impacts like migration from the rural regions. The hydrological model Wasa (Model of Water Availability in Semi-Arid Environments) developed in this study is a deterministic, spatially distributed model composed of conceptual, process-based approaches. Water availability (river discharge, storage volumes in reservoirs, soil moisture) is determined with daily resolution. Sub-basins, grid cells or administrative units (municipalities) can be chosen as spatial target units.
The administrative units enable the coupling of Wasa within the framework of an integrated model containing modules that do not operate on natural spatial units. Within a new multi-scale, hierarchical approach, the target units mentioned above are disaggregated in Wasa into smaller modelling units. The landscape units defined in this scheme capture in particular the effect of the structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus be represented in simplified form even within the large-scale model. Depending on the resolution of the available data, small-scale variability is represented in Wasa not explicitly with geographic reference, but through the distribution of sub-scale units and through statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa that account for specific features of semi-arid hydrology are: (1) A two-layer evapotranspiration model comprises energy transfer at the soil surface (including soil evaporation), which is important given the mainly sparse vegetation cover; additionally, vegetation parameters are differentiated in space and time according to the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach is applied that accounts for different reservoir size classes and their interaction via the river network. (4) A model quantifying water withdrawal by the different water-use sectors is coupled to Wasa.
(5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied to generate rainfall time series of higher temporal resolution. All model parameters of Wasa can be derived from physiographic information on the study area; model calibration is therefore largely unnecessary. Applications of Wasa to historical time series generally yield good model performance when the simulated river discharge and reservoir storage volumes are compared with observed data for river basins of various sizes. The mean water balance as well as the high interannual and intra-annual variability are reasonably reproduced by the model. The limitations of the modelling concept are most marked for sub-basins with a runoff component from deep groundwater bodies, whose dynamics cannot be satisfactorily represented without calibration. Further results of the model applications are: (1) Lateral redistribution of runoff and soil moisture at the hillslope scale, in particular the reinfiltration of surface runoff, leads to markedly smaller discharge volumes at the basin scale than the simple sum of the runoff of the individual sub-areas; these processes therefore need to be captured in large-scale models as well. Their varying relevance under different conditions is demonstrated by a larger percentage decrease of discharge volumes in dry years compared with wet ones. (2) Precipitation characteristics have a major impact on the hydrological response of semi-arid environments. In particular, rainfall intensities that are underestimated in the rainfall input, owing to the coarse temporal resolution of the model and to interpolation effects, and the consequently underestimated runoff volumes have to be compensated in the model. A scaling factor in the infiltration module or the use of disaggregated hourly rainfall data gives good results in this respect.
The simulation results of Wasa are subject to large uncertainties. These arise, on the one hand, from uncertainty in whether the model structure adequately represents the relevant hydrological processes and, on the other hand, from uncertainties in input data and parameters, particularly in view of the low data availability. Of major importance are: (1) The uncertainty of the rainfall data with regard to their spatial and temporal pattern has a large impact on the simulation results because of the strongly non-linear hydrological response. (2) The uncertainty of soil parameters generally contributes more to model uncertainty than that of vegetation or topographic parameters. (3) The effect of the uncertainty of individual model components or parameters usually differs between years with above-average and below-average rainfall, because individual hydrological processes differ in relevance in the two cases. The uncertainty of individual model components or parameters therefore matters differently for scenario simulations with increasing or decreasing precipitation trends. (4) The most important source of uncertainty for scenarios of water availability in the study area is the uncertainty in the results of the global climate models on which the regional climate scenarios are based: both a marked increase and a marked decrease in precipitation are plausible given the available data. Model simulations for climate scenarios up to the year 2050 show that a possible future change in precipitation volumes causes a percentage change in runoff volumes that is larger by a factor of two to three. In the case of a decreasing precipitation trend, the efficiency of new reservoirs in securing water availability tends to decrease in the study area, because the large number of reservoirs interact in retaining the overall decreasing runoff volumes.
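The temporal disaggregation in component (5) above can be illustrated with a minimal multiplicative random cascade: a daily total is recursively split into halves with random weights so that mass is conserved at every level. The branching probability and weight range below are purely illustrative stand-ins, not the parameters fitted in the study:

```python
import random

def disaggregate(total, levels, p_one_side=0.2, rng=random):
    """Split a daily rainfall total into 2**levels sub-intervals with a
    simple multiplicative random cascade.  At each branching, a fraction
    w of the mass goes to the first half and 1-w to the second; with
    probability p_one_side all mass goes to one side, mimicking the
    intermittency of convective rain.  Parameter values are illustrative.
    """
    series = [total]
    for _ in range(levels):
        nxt = []
        for mass in series:
            if rng.random() < p_one_side:   # all rain falls in one half
                w = rng.choice([0.0, 1.0])
            else:                           # uneven but positive split
                w = rng.uniform(0.2, 0.8)
            nxt.extend([mass * w, mass * (1.0 - w)])
        series = nxt
    return series

# Disaggregate a 24 mm day into eight 3-hour intervals;
# the cascade conserves the daily total exactly.
subdaily = disaggregate(24.0, levels=3)
print(len(subdaily), round(sum(subdaily), 6))
```

Because every split conserves mass, the generated fine-resolution series always sums back to the observed daily value, which is the defining property of cascade disaggregation schemes.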
Deep convection is an essential part of the circulation in the North Atlantic Ocean. It influences the northward heat transport achieved by the thermohaline circulation. Understanding its stability and variability is therefore necessary for assessing climatic changes in the North Atlantic area. This thesis aims at improving the conceptual understanding of the stability and variability of deep convection. Observational data from the Labrador Sea show phases with and without deep convection. A simple two-box model is fitted to these data. The results suggest that the Labrador Sea has two coexisting stable states, one with regular deep convection and one without. This bistability arises from a positive salinity feedback due to the net freshwater input into the surface layer. The convecting state can easily become unstable if the mean forcing shifts to warmer or less saline conditions. The weather-induced variability of the external forcing is incorporated into the box model by adding a stochastic forcing term. Deep convection is then switched "on" and "off" frequently. The mean residence time in either state is a measure of its stochastic stability. The stochastic stability depends smoothly on the forcing parameters, in contrast to the deterministic (non-stochastic) stability, which may change abruptly. The mean and the variance of the stochastic forcing both affect the frequency of deep convection. For instance, a decline in convection frequency due to surface freshening may be compensated for by increased heat-flux variability. With a further simplified box model, some stochastic stability features are studied analytically. A new effect is described, called wandering monostability: even if deep convection is no longer a stable state because of changed forcing parameters, the stochastic forcing can still trigger convection events frequently.
The analytical expressions show explicitly how wandering monostability and other effects depend on the model parameters. This dependence is always exponential for the mean residence times, but for the probability of long non-convecting phases it is exponential only if this probability is small. Wandering monostability can be expected to be relevant in other parts of the climate system as well. All in all, the results demonstrate that the stability of deep convection in the Labrador Sea reacts very sensitively to the forcing, and the presence of variability is crucial for understanding this sensitivity. Small changes in the forcing can already significantly lower the frequency of deep convection events, which presumably strongly affects the regional climate. ---- Note: The author is a recipient of the 2003 Carl Ramsauer Prize, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin and Universität Potsdam.
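The noise-induced switching between a convecting and a non-convecting state can be caricatured with a one-variable double-well Langevin model. This is a generic stochastic-bistability sketch, not the fitted two-box model of the thesis, and all parameter values are hypothetical:

```python
import math
import random

def simulate(mu=0.0, sigma=0.4, dt=0.01, steps=200_000, seed=1):
    """Overdamped Langevin dynamics in the double-well potential
    V(x) = x**4/4 - x**2/2 + mu*x.  The left well (x < 0) stands in
    for the 'convecting' state, the right well for the 'non-convecting'
    one; mu biases the mean forcing and sigma sets the noise level.
    Returns the number of transitions between the wells."""
    rng = random.Random(seed)
    x, sign, switches = -1.0, -1, 0
    for _ in range(steps):
        drift = -(x**3 - x + mu)
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if sign * x < 0 and abs(x) > 0.5:   # crossed into the other well
            sign = -sign
            switches += 1
    return switches

# Larger forcing variance -> more frequent switching, i.e. shorter
# mean residence times in each state.
print(simulate(sigma=0.3), simulate(sigma=0.6))
```

The mean residence time is the simulated time divided by the switch count; its roughly exponential dependence on the noise variance (a Kramers-type escape law) mirrors the exponential parameter dependence stated in the abstract.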
Motivated by recent proposals on the experimental detectability of quantum gravity effects, the present thesis investigates assumptions and methods which might be used for the prediction of such effects within the framework of loop quantum gravity. To this end, a scalar field coupled to gravity is considered as a model system. Starting from certain assumptions about the dynamics of the coupled gravity-matter system, a quantum theory for the scalar field is proposed. Then, assuming that the gravitational field is in a semiclassical state, a "QFT on curved space-time limit" of this theory is defined. In contrast to ordinary quantum field theory on curved space-time, however, in this limit the theory describes a quantum scalar field propagating on a (classical) random lattice. Methods to obtain the low-energy limit of such a lattice theory, especially the resulting modified dispersion relations, are then discussed and applied to simple model systems. Finally, under certain simplifying assumptions, corrections to the dispersion relations for the scalar and the electromagnetic field are computed within the framework of loop quantum gravity, using the methods developed before as well as a specific class of semiclassical states. These calculations are of a preliminary character, as they involve many assumptions whose validity remains to be studied more thoroughly. However, they exemplify the problems and possibilities of making predictions based on loop quantum gravity that are in principle testable by experiment.
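The kind of modified dispersion relation meant here can be illustrated with the textbook case of a massless scalar on a regular 1-D lattice of spacing a, where omega(k) = (2/a)|sin(ka/2)| replaces the continuum relation omega = k. The thesis itself treats random lattices and semiclassical states, so this is only a structural analogue:

```python
import math

def lattice_omega(k, a):
    """Dispersion relation of a massless scalar field on a regular 1-D
    lattice of spacing a: omega(k) = (2/a) * |sin(k*a/2)|.  A textbook
    stand-in for the random-lattice dispersion relations of the thesis."""
    return (2.0 / a) * abs(math.sin(k * a / 2.0))

def leading_correction(k, a):
    """Low-energy expansion omega ~ k * (1 - (k*a)**2 / 24): the leading
    lattice correction to the continuum relation omega = k."""
    return k * (1.0 - (k * a) ** 2 / 24.0)

# For k*a << 1 the lattice relation approaches omega = k from below;
# the deviation grows with the dimensionless combination k*a.
a = 1e-2
for k in (0.1, 1.0, 3.0):
    print(k, lattice_omega(k, a), leading_correction(k, a))
```

The qualitative point carries over: discreteness at a small scale a shows up at low energies only through suppressed corrections of order (ka)², which is the kind of signature the detectability proposals aim at.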
Structural and spectroscopical study of crystals of 1,3,4-oxadiazole derivatives at high pressure
(2002)
In recent years the search for new materials of technological interest has given new impetus to the study of organic compounds. Organic substances offer numerous advantages, such as the possibility of tailoring their properties for a given purpose by different chemical and physical techniques in the preparation process. Oxadiazole derivatives are interesting because of their use as materials for light-emitting diodes (LEDs) as well as scintillators. The physical properties of a solid depend on its structure, and different structures induce different intra- and intermolecular interactions. An advantageous method of modifying both the intra- and the intermolecular interactions of a given substance is the application of high pressure; moreover, this method leaves the chemical features of the compound unchanged. We have investigated the influence of high pressure and high temperature on the supramolecular structure of several oxadiazole derivatives in the crystalline state. From the results of this investigation, an equation of state for these crystals was determined. Furthermore, the spectroscopic features of these materials under high pressure were characterized.
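An equation of state determined from such compression data relates pressure to unit-cell volume. As a sketch only, the snippet below evaluates the widely used third-order Birch-Murnaghan form; the abstract does not say which EOS was actually fitted, and all parameter values are hypothetical:

```python
def birch_murnaghan(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan equation of state P(V).
    V0: zero-pressure volume, K0: bulk modulus, K0p: its pressure
    derivative.  Both the EOS form and the parameters below are
    illustrative choices, not values from the study."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return (1.5 * K0 * (eta**7 - eta**5)
            * (1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0)))

# Hypothetical soft molecular crystal: V0 = 500 A^3, K0 = 8 GPa, K0' = 7.
for V in (500.0, 450.0, 400.0):
    print(V, birch_murnaghan(V, 500.0, 8.0, 7.0))
```

Fitting K0 and K0p to measured P-V points is what "determining an equation of state" amounts to in practice; the small bulk modulus assumed here reflects the high compressibility typical of molecular crystals.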