We present an application of imprecise probability theory to the quantification of uncertainty in the integrated assessment of climate change. Our work is motivated by the fact that uncertainty about climate change is pervasive and therefore requires a thorough treatment in the integrated assessment process. Classical probability theory faces severe difficulties in this respect, since it cannot capture very poor states of information in a satisfactory manner. A more general framework is provided by imprecise probability theory, which offers a similarly firm evidential and behavioural foundation, while at the same time allowing us to capture more diverse states of information. An imprecise probability describes the information in terms of lower and upper bounds on probability. For the purpose of our imprecise probability analysis, we construct a diffusion ocean energy balance climate model that parameterises the global mean temperature response to secular trends in the radiative forcing in terms of climate sensitivity and effective vertical ocean heat diffusivity. We compare the model behaviour to the 20th-century temperature record in order to derive a likelihood function for these two parameters and for the forcing strength of anthropogenic sulphate aerosols. Results show a strong positive correlation between climate sensitivity and ocean heat diffusivity, and between climate sensitivity and the absolute strength of the sulphate forcing. We identify two suitable imprecise probability classes for an efficient representation of the uncertainty about the climate model parameters and provide an algorithm to construct a belief function for the prior parameter uncertainty from a set of probability constraints that can be deduced from the literature or from observational data.
For the purpose of updating the prior with the likelihood function, we establish a methodological framework that allows us to perform the updating procedure efficiently for two different updating rules: Dempster's rule of conditioning and the Generalised Bayes' rule. Dempster's rule yields a posterior belief function in good qualitative agreement with previous studies that tried to constrain climate sensitivity and sulphate aerosol cooling. In contrast, we are not able to produce meaningful imprecise posterior probability bounds from the application of the Generalised Bayes' rule. We attribute this result mainly to our choice of representing the prior uncertainty by a belief function. We project the Dempster-updated belief function for the climate model parameters onto estimates of future global mean temperature change under several emissions scenarios for the 21st century and several long-term stabilisation policies. Within the limitations of our analysis, we find that a stringent stabilisation level of around 450 ppm carbon dioxide equivalent concentration is required to obtain a non-negligible lower probability of limiting the warming to 2 degrees Celsius. We discuss several frameworks of decision-making under ambiguity and show that they can lead to a variety of, possibly imprecise, climate policy recommendations. We find, however, that poor states of information do not necessarily impede useful policy advice. We conclude that imprecise probabilities indeed constitute a promising candidate for the adequate treatment of uncertainty in the integrated assessment of climate change. We have constructed prior belief functions that require much weaker assumptions on the prior state of information than a prior probability would and can nevertheless be propagated through the entire assessment process. As a caveat, the updating issue needs further investigation.
Belief functions constitute a sensible choice for the prior uncertainty representation only if more restrictive updating rules than the Generalised Bayes' rule are available.
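The belief-function machinery described above (lower/upper probability bounds and Dempster's rule of conditioning) can be sketched in a few lines of Python; the frame of discernment and mass numbers below are purely hypothetical illustrations, not values from the thesis:

```python
def bel(m, A):
    """Lower probability (belief): total mass of focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Upper probability (plausibility): total mass of focal sets meeting A."""
    return sum(v for B, v in m.items() if B & A)

def dempster_condition(m, E):
    """Dempster's rule of conditioning: intersect focal sets with the
    evidence E, drop empty intersections, renormalise by Pl(E)."""
    out = {}
    for B, v in m.items():
        C = B & E
        if C:
            out[C] = out.get(C, 0.0) + v
    norm = sum(out.values())  # equals pl(m, E)
    return {B: v / norm for B, v in out.items()}

# Hypothetical coarse frame for a climate-sensitivity judgement
omega = frozenset({"low", "mid", "high"})
m = {frozenset({"low"}): 0.2,
     frozenset({"mid", "high"}): 0.3,
     omega: 0.5}                 # mass on the whole frame expresses ignorance

A = frozenset({"mid", "high"})
lower, upper = bel(m, A), pl(m, A)   # imprecise probability interval for A
m_post = dempster_condition(m, A)    # posterior after observing A
```

Note how the mass 0.5 placed on the whole frame widens the interval [Bel, Pl] without committing to any particular distribution; this is exactly the "very poor state of information" that a single prior probability cannot express.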
We study a natural Dirac operator on a Lagrangian submanifold of a Kähler manifold. We first show that its square coincides with the Hodge-de Rham Laplacian provided the complex structure identifies the Spin structures of the tangent and normal bundles of the submanifold. We then give extrinsic estimates for the eigenvalues of that operator and discuss some examples.
Adhesion of biological cells to their environment is mediated by two-dimensional clusters of specific adhesion molecules which are assembled in the plasma membrane of the cells. Due to the activity of the cells or external influences, these adhesion sites are usually subject to physical forces. In recent years, the influence of such forces on the stability of cellular adhesion clusters has been increasingly investigated. In particular, experimental methods that were originally designed for the investigation of single-bond rupture under force have been applied to investigate the rupture of adhesion clusters. The transition from single to multiple bonds, however, is not trivial and requires theoretical modelling. Rupture of biological adhesion molecules is a thermally activated, stochastic process. In this work, a stochastic model for the rupture and rebinding dynamics of clusters of parallel adhesion molecules under force is presented. In particular, the influence of (i) a constant force, as may be assumed for cellular adhesion clusters, and (ii) a linearly increasing force, as commonly used in experiments, is investigated. Special attention is paid to the force-mediated cooperativity of parallel adhesion bonds. Finally, the influence of a finite distance between receptors and ligands on the binding dynamics is investigated; this distance can be bridged by polymeric linker molecules which tether the ligands to a substrate.
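The kind of stochastic cluster dynamics described here can be illustrated with a minimal Gillespie-style sketch; it assumes a Bell-type, exponentially force-dependent rupture rate and equal sharing of a constant total force among the closed bonds, with all parameter values hypothetical rather than taken from the work itself:

```python
import math
import random

def cluster_lifetime(n, f_tot, k0=1.0, f_b=1.0, gamma=1.0, rng=random):
    """Gillespie simulation of n parallel bonds under a constant total
    force f_tot shared equally by the closed bonds.  Each closed bond
    ruptures at the Bell rate k0*exp(f/f_b); each open bond rebinds at
    rate gamma.  Returns the time at which the last bond has ruptured."""
    closed, t = n, 0.0
    while closed > 0:
        r_off = closed * k0 * math.exp(f_tot / (closed * f_b))
        r_on = (n - closed) * gamma
        total = r_off + r_on
        t += rng.expovariate(total)          # waiting time to next event
        if rng.random() < r_off / total:
            closed -= 1                      # one bond ruptures
        else:
            closed += 1                      # one bond rebinds
    return t

random.seed(0)
# Force-mediated cooperativity: each rupture raises the load on the
# survivors, so the mean cluster lifetime drops sharply with total force.
low  = sum(cluster_lifetime(10, 2.0) for _ in range(200)) / 200
high = sum(cluster_lifetime(10, 20.0) for _ in range(200)) / 200
```

The positive feedback visible here (fewer closed bonds, higher load per bond, faster rupture) is what makes the many-bond problem qualitatively different from single-bond rupture.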
This paper focuses on mysteries written by the Afro-American women authors Barbara Neely and Valerie Wilson Wesley. Both authors place a black woman in the role of the detective - an innovative feature not only in the realm of female detective literature of the past two decades but also with regard to the current discourse about race and class in US-American society. This discourse is important because detective novels are considered popular literature and thus a mass product designed to favor commercial rather than literary claims. The focus is therefore placed on the development of the two protagonists, on their lives as detectives and as black women, in order to find out whether and how the genre influences the depiction of Afro-American experiences. It appears that the two detective series represent Afro-American culture in different ways, which confirms a heterogeneous development of this ethnic group. However, the protagonists' search for identity and their relationships to white people could be identified as a major unifying claim of Afro-American literature. With differing intensity, the authors Neely and Wesley provide the white or mainstream reader with insight into their culture and confront the reader's ignorance of black culture. In light of this, it is a great achievement that Neely and Wesley have reached not only a black audience but also a growing number of white readers.
Nitrogen is an essential macronutrient for plants, and nitrogen fertilizers are indispensable for modern agriculture. Unfortunately, we know too little about how plants regulate their use of soil nitrogen to maximize fertilizer-N use by crops and pastures. This project took a dual approach, involving forward and reverse genetics, to identify N-regulators in plants, which may prove useful in the future to improve nitrogen-use efficiency in agriculture. To identify nitrogen-regulated transcription factor (TF) genes in Arabidopsis that may control N-use efficiency, we developed a unique resource for qRT-PCR measurements on all Arabidopsis transcription factor genes. Using closely spaced, gene-specific primer pairs and SYBR® Green to monitor amplification of double-stranded DNA, transcript levels of 83% of all target genes could be measured in roots or shoots of young Arabidopsis wild-type plants. Only 4% of reactions produced non-specific PCR products, and 13% of TF transcripts were undetectable in these organs. Measurements of transcript abundance were quantitative over six orders of magnitude, with a detection limit equivalent to one transcript molecule in 1000 cells. Transcript levels for different TF genes ranged between 0.001 and 100 copies per cell. Real-time RT-PCR revealed 26 root-specific and 39 shoot-specific TF genes, most of which had not previously been identified as organ-specific. An enlarged and improved version of the TF qRT-PCR platform now contains primer pairs for 2256 Arabidopsis TF genes, representing 53 gene families and sub-families arrayed on six 384-well plates. Set-up of real-time PCR reactions is now fully robotized. One researcher is able to measure expression of all 2256 TF genes in a single biological sample in just one working day. The Arabidopsis qRT-PCR platform was successfully used to identify 37 TF genes which responded transcriptionally to N-deprivation or to nitrate per se.
Most of these genes have not been characterized previously. Further selection of TF genes based on the responses of selected candidates to other macronutrients and abiotic stresses allowed us to distinguish between TFs (i) regulated specifically by nitrogen (29 genes), (ii) regulated by general macronutrient status or by salt and osmotic stress (6 genes), and (iii) responding to all major macronutrients and to abiotic stresses. Most of the N-regulated TF genes were also regulated by carbon. Further characterization of sixteen selected TF genes revealed: (i) a lack of transcriptional response to organic nitrogen, (ii) two major types of kinetics of induction by nitrate, and (iii) for the majority of the genes, specific responses to nitrate but not to downstream products of nitrate assimilation. All sixteen TF genes were cloned into binary vectors for constitutive and ethanol-inducible overexpression, and the first generation of transgenic plants was obtained for almost all of them. Some of the plants constitutively overexpressing TF genes under control of the 35S promoter showed visible phenotypes in the T1 generation. Homozygous T-DNA knockout lines were also obtained for many of the candidate TF genes. So far, one knockout line has revealed a visible phenotype: retardation of flowering time. A forward genetic approach using an Arabidopsis ATNRT2.1 promoter:Luciferase reporter line resulted in the identification of eleven EMS mutant reporter lines affected in the induction of ATNRT2.1 expression by nitrate. These lines could be divided into the following classes according to the expression of other genes involved in primary nitrogen and carbon metabolism: (i) lines affected exclusively in nitrate transport, (ii) lines affected in nitrate transport and acquisition, but also in glycolysis and the oxidative pentose phosphate pathway, and (iii) mutants affected moderately in nitrate transport, the oxidative pentose phosphate pathway and glycolysis, but not in primary nitrate assimilation.
Thus, several different N-regulatory genes may have been mutated in this set of mutants. Map-based cloning has begun to identify the genes affected in these mutants.
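The copies-per-cell figures quoted above rest on standard relative-quantification arithmetic: with amplification efficiency E per cycle, a difference of d threshold cycles (Ct) corresponds to an E**d fold difference in starting template. A minimal sketch with made-up Ct values, not data from the project:

```python
def copies_per_cell(ct_target, ct_ref, copies_ref, efficiency=2.0):
    """Relative qPCR quantification: assuming equal amplification
    efficiency for both amplicons, scale a calibrated reference
    (copies_ref at cycle ct_ref) by efficiency**(ct_ref - ct_target)."""
    return copies_ref * efficiency ** (ct_ref - ct_target)

# Hypothetical calibration: a reference transcript at 100 copies/cell
# crosses threshold at cycle 18; a target crossing 17 cycles later is
# ~1e5-fold rarer, spanning roughly the 0.001-100 copies/cell range:
abundant = copies_per_cell(ct_target=18, ct_ref=18, copies_ref=100.0)
rare     = copies_per_cell(ct_target=35, ct_ref=18, copies_ref=100.0)
```

The six-orders-of-magnitude dynamic range quoted above thus corresponds to a spread of roughly twenty threshold cycles at ideal efficiency.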
In this thesis, we give two constructions for Riemannian metrics on Seiberg-Witten moduli spaces. Both constructions are naturally induced from the L2-metric on the configuration space. The construction of the so-called quotient L2-metric is very similar to the construction of an L2-metric on Yang-Mills moduli spaces as given by Groisser and Parker. To construct a Riemannian metric on the total space of the Seiberg-Witten bundle in a similar way, we define the reduced gauge group as a subgroup of the gauge group. We show that the quotient of the premoduli space by the reduced gauge group is isomorphic as a U(1)-bundle to the quotient of the premoduli space by the based gauge group. The total space of this new representation of the Seiberg-Witten bundle carries a natural quotient L2-metric, and the bundle projection is a Riemannian submersion with respect to these metrics. We compute explicit formulae for the sectional curvature of the moduli space in terms of Green operators of the elliptic complex associated with a monopole. Further, we construct a Riemannian metric on the cobordism between moduli spaces for different perturbations. The second construction of a Riemannian metric on the moduli space uses a canonical global gauge fixing, which represents the total space of the Seiberg-Witten bundle as a finite-dimensional submanifold of the configuration space. We consider the Seiberg-Witten moduli space on a simply connected Kähler surface. We show that the moduli space (when nonempty) is a complex projective space if the perturbation does not admit reducible monopoles, and that the moduli space consists of a single point otherwise. The Seiberg-Witten bundle can then be identified with the Hopf fibration. On the complex projective plane with a special Spin-C structure, our Riemannian metrics on the moduli space are Fubini-Study metrics. Correspondingly, the metrics on the total space of the Seiberg-Witten bundle are Berger metrics.
We show that the diameter of the moduli space shrinks to 0 when the perturbation approaches the wall of reducible perturbations. Finally, we show that the quotient L2-metric on the Seiberg-Witten moduli space on a Kähler surface is a Kähler metric.
Diagenetic studies of carbonate rocks focused for a long time on photozoan carbonate assemblages deposited in tropical climates. The results of these investigations were taken as models for the diagenetic evolution of many fossil carbonates. Only in recent years was the importance of heterozoan carbonates, generally formed outside the tropics or in deeper waters, realized. Diagenetic studies focusing on this kind of rock are still scarce, but they indicate that the diagenetic evolution of these rocks might be a better model for many fossil carbonate settings ("calcite-sea" carbonates) than the photozoan model used before. This study deals with the determination of the diagenetic pathways and environments in such shallow-water heterozoan carbonate assemblages. Special emphasis is put on the identification of early, near-seafloor diagenetic processes and on the evaluation of the amount of constructive diagenesis, in the form of cementation, in this diagenetic environment. The Central Mediterranean, namely the Maltese Islands and Sicily, was chosen as the study area. Here, two sections were logged in Oligo-Miocene shallow-water carbonates consisting of different kinds of heterozoan assemblages. The study area is very suitable for the investigation of constructive early diagenetic processes, as the rocks were never deeply buried, and burial diagenetic pressure solution and cementation could be ruled out as the cause of lithification. Nevertheless, the carbonate rocks are well lithified and form steep cliffs, implying cementation/lithification in another, shallower diagenetic environment. To determine the diagenetic pathways and environments, detailed transmitted-light and cathodoluminescence petrography was carried out on thin sections. Furthermore, the stable isotope (δ18O and δ13C) composition of the bulk rock, single biota and single cement phases was determined, as well as the major and trace element composition of the single cement phases.
Petrographically, three (Sicily) to four (Maltese Islands) cementation phases, two phases of fabric-selective and one of non-fabric-selective dissolution, one phase of neomorphism and one of chemical compaction could be distinguished. The stable isotope measurements of the single cement phases pointed to cement precipitation from marine, marine-derived and meteoric waters. The trace element analysis indicated precipitation under reducing conditions, (A) in an open system with low rock-water interaction on the Maltese Islands and (B) in a closed system with high rock-water interaction on Sicily. For the closed-system case, aragonite could be inferred as the cement source, because its chemical composition was preserved in the newly formed cements. By integrating these results, diagenetic pathways and environments for the investigated locations were established, and the cement source(s) in the different environments were determined. The diagenetic evolution started in the marine environment with the precipitation of fibrous/fibrous-bladed and epitaxial cement I. These cements formed as High Mg Calcite (HMC) directly out of marine waters. The paleoenvironmentally shallowest part of the section on the Maltese Islands was also exposed to meteoric diagenetic fluids. This meteoric influence led to the dissolution of aragonitic and HMC skeletons, which sourced the cementation by Low Mg Calcitic (LMC) epitaxial cement II in this part of the Maltese section. On entering the burial-marine environment, the main part of dissolution, cementation and neomorphism started to take place. The elevated CO2 content in this environment, caused by the decay of organic matter, led to the dissolution of aragonitic skeletons, which sourced the cementation by LMC epitaxial cement II and by bladed and blocky cements. The earlier precipitated HMC cement phases were either partly dissolved (epitaxial cement I) or neomorphosed to LMC (fibrous/fibrous-bladed and epitaxial cement I).
In the burial environment, weak chemical compaction took place without sourcing significant amounts of cementation. In a last phase the rocks entered the meteoric realm by uplift, which caused non-fabric-selective dissolution. This study shows that early diagenetic processes, taking place at or just below the sediment-water interface, are very important for the mineralogical stabilization of heterozoan carbonate strata. The main amount of constructive diagenesis, in the form of cementation, takes place in this environment, sourced by dissolution of aragonitic and, to a lesser degree, HMC skeletons. The results of this study imply that the primary amount of aragonitic skeletons in heterozoan carbonate sediments must be carefully assessed, as they are the main early diagenetic cement source. In fossil heterozoan carbonate rocks, aragonitic skeletons might be the cement source even when no relict structures such as micritic envelopes or biomolds are preserved. In general, the diagenetic evolution of heterozoan carbonate rocks is a good model for the diagenesis of "calcite-sea" time carbonate rocks.
Stochastic information, to be understood as "information gained by the application of stochastic methods", is proposed as a tool in the assessment of changes in climate. This thesis aims at demonstrating that stochastic information can improve the consideration and reduction of uncertainty in the assessment of changes in climate. The thesis consists of three parts. In part one, an indicator is developed that allows the determination of the proximity to a critical threshold. In part two, the tolerable windows approach (TWA) is extended to a probabilistic TWA. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted within the TWA. The thermohaline circulation (THC) is a circulation system in the North Atlantic, where the circulation may break down in a saddle-node bifurcation under the influence of climate change. Due to uncertainty in ocean models, it is currently very difficult to determine the distance of the THC to the bifurcation point. We propose a new indicator to determine the system's proximity to the bifurcation point by considering the THC as a stochastic system and using the information contained in the fluctuations of the circulation around the mean state. As the system is moved closer to the bifurcation point, the power spectrum of the overturning becomes "redder", i.e. more energy is contained in the low frequencies. Since the spectral changes are a generic property of the saddle-node bifurcation, the method is not limited to the THC, but it could also be applicable to other systems, e.g. transitions in ecosystems. In part two, a probabilistic extension to the tolerable windows approach (TWA) is developed. In the TWA, the aim is to determine the complete set of emission strategies that are compatible with so-called guardrails. Guardrails are limits to impacts of climate change or to climate change itself. 
Therefore, the TWA determines the "maneuvering space" humanity has if certain impacts of climate change are to be avoided. Due to uncertainty it is not possible to definitely exclude the impacts of climate change considered; there will always be a certain probability of violating a guardrail. Therefore the TWA is extended to a probabilistic TWA that is able to consider "probabilistic uncertainty", i.e. uncertainty that can be expressed as a probability distribution or that arises through natural variability. As a first application, temperature guardrails are imposed, and the dependence of emission reduction strategies on probability distributions for climate sensitivity is investigated. The analysis suggests that it will be difficult to observe a temperature guardrail of 2°C with a high probability of actually meeting the target. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted. A simple hydrological model is presented, as well as a downscaling scheme that allows the reconstruction of the spatio-temporal natural variability of temperature and precipitation. These are used to determine a probabilistic climate impact response function (CIRF), a function that allows the assessment of changes in the probability of certain flood events under conditions of a changed climate. The assessment of changes in flooding probability is conducted for 83 major river basins. Not all floods can be considered: events that happen very fast or affect only a very small area are excluded, but large-scale flooding due to strong, longer-lasting precipitation events is captured. Finally, the probabilistic CIRFs obtained are used to determine emission corridors, where the guardrail is a limit on the fraction of the world population affected by a predefined shift in the probability of the 50-year flood event. This latter analysis has two main results.
The uncertainty about regional changes in climate is still very high, and even small amounts of further climate change may lead to large changes in flooding probability in some river systems.
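The spectral indicator developed in part one can be sketched on the saddle-node normal form dx = (a - x²)dt + σ dW: the stable state x* = √a relaxes at rate 2√a, so fluctuations become more persistent (the spectrum "redder") as the bifurcation at a = 0 is approached. A minimal sketch with illustrative parameter values, not those of the THC model:

```python
import math
import random

def lag1_autocorr(a, sigma=0.01, dt=0.01, steps=200_000, rng=random):
    """Euler-Maruyama simulation of dx = (a - x^2) dt + sigma dW started
    at the stable fixed point x* = sqrt(a); returns the lag-1
    autocorrelation of the fluctuations, which approaches 1 as the
    relaxation rate 2*sqrt(a) vanishes near the bifurcation."""
    x = math.sqrt(a)
    xs = []
    for _ in range(steps):
        x += (a - x * x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    mean = sum(xs) / len(xs)
    d = [v - mean for v in xs]
    var = sum(v * v for v in d) / len(d)
    cov = sum(d[i] * d[i + 1] for i in range(len(d) - 1)) / (len(d) - 1)
    return cov / var

random.seed(1)
far  = lag1_autocorr(1.0)    # far from the bifurcation: fast relaxation
near = lag1_autocorr(0.04)   # close to it: critical slowing down
```

A rising lag-1 autocorrelation is the time-domain counterpart of the reddening power spectrum used as the proximity indicator; since it follows from the normal form alone, the same signal applies to any system approaching a saddle-node bifurcation.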
The multidrug and toxic compound extrusion (MATE) family includes hundreds of functionally uncharacterised proteins from bacteria and all eukaryotic kingdoms except the animal kingdom, which function as drug/toxin::Na+ or H+ antiporters. In Arabidopsis thaliana the MATE family comprises 56 members, one of which is NIC2 (Novel Ion Carrier 2). Using heterologous expression systems, including Escherichia coli and Saccharomyces cerevisiae, and the homologous expression system of Arabidopsis thaliana, the functional characterisation of NIC2 was performed. It was demonstrated that NIC2 confers resistance of E. coli towards chemically diverse compounds such as tetraethylammonium chloride (TEACl), tetramethylammonium chloride (TMACl) and a toxic analogue of indole-3-acetic acid, 5-fluoro-indole-acetic acid (F-IAA). Therefore, NIC2 may be able to transport a broad range of drugs and toxic compounds. In wild-type yeast the expression of NIC2 increased the tolerance towards lithium and sodium, but not towards potassium and calcium. In A. thaliana, the overexpression of NIC2 led to strong phenotypic changes. Under normal growth conditions overexpression caused an extremely bushy phenotype with no apical dominance but an enhanced number of lateral flowering shoots. The numbers of rosette leaves and of flowers with accompanying siliques were also much higher than in wild-type plants, and senescence occurred earlier in the transgenic plants. In contrast, RNA interference (RNAi) used to silence NIC2 expression induced early flower stalk development and flowering compared with wild-type plants. In addition, the main flower stalks were not able to grow vertically, but instead had a strong tendency to bend towards the ground.
While NIC2 RNAi seedlings produced many lateral roots growing out from the primary root and the root-shoot junction, NIC2 overexpression seedlings displayed longer primary roots that were characterised by a 2 to 4 h delay in the gravitropic response. In addition, these lines exhibited an enhanced resistance to exogenously applied auxins, i.e. indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA), when compared with wild-type roots. Based on these results, it is suggested that the NIC2 overexpression and NIC2 RNAi phenotypes were due to decreased or increased levels of auxin, respectively. The ProNIC2:GUS fusion gene revealed that NIC2 is expressed in the stele of the elongation zone, in the lateral root cap, in new lateral root primordia, and in pericycle cells of the root system. In the vascular tissue of rosette leaves and inflorescence stems, expression was observed in the xylem parenchyma cells, while in siliques it was found in the vascular tissue as well as in the dehiscence and abscission zones. The organ- and tissue-specific expression sites of NIC2 correlate with the sites of auxin action in mature Arabidopsis plants. Further experiments using ProNIC2:GUS indicated that NIC2 is an auxin-inducible gene. Additionally, during the gravitropic response, when an endogenous auxin gradient forms across the root tip, the GUS activity pattern of the ProNIC2:GUS fusion gene changed markedly at the upper side of the root tip, while that at the lower side stayed unchanged. Finally, at the subcellular level the NIC2-GFP fusion protein localised to the peroxisomes of Nicotiana tabacum BY2 protoplasts. Considering the experimental results, it is proposed that the hypothetical function of NIC2 is efflux transport that takes part in auxin homeostasis in plant tissues, probably by removing auxin conjugates from the cytoplasm into peroxisomes.
The protection of species is one major focus of conservation biology. The basis for any management concept is knowledge of the species' autecology. In my thesis, I studied the life-history traits and population dynamics of the endangered Lesser Spotted Woodpecker (Picoides minor) in Central Europe. I combine a range of approaches: empirical investigation of a Lesser Spotted Woodpecker population in the Taunus low mountain range in Germany, analysis of the empirical data, and development of an individual-based stochastic model simulating the population dynamics. In the field studies I collected basic demographic data on reproductive success and mortality. Moreover, breeding biology and behaviour were investigated in detail. My results showed a significant decrease in reproductive success with later timing of breeding, caused by a deterioration in food supply. Moreover, mate fidelity was of benefit, since pairs composed of individuals that had bred together the previous year started egg laying earlier and obtained a higher reproductive success. Both sexes were involved in parental care, but the care was shared equally only during incubation and the early nestling stage. In the late nestling stage, parental care strategies differed between the sexes: females considerably decreased their feeding rate with decreasing number of nestlings and even completely deserted small broods. Males fed their nestlings irrespective of brood size and compensated for the females' absence. The organisation of parental care in the Lesser Spotted Woodpecker is discussed as providing the opportunity for females to mate with two males with separate nests, and indeed, polyandry was confirmed. To investigate the influence of the observed flexibility in the social mating system on population persistence, a stochastic individual-based model simulating the population dynamics of the Lesser Spotted Woodpecker was developed, based on the empirical results.
However, pre-breeding survival rates could not be obtained empirically, and in this thesis I present a pattern-oriented modelling approach to estimate pre-breeding survival rates by comparing simulation results with empirical patterns of population structure and reproductive success at the population level. Here, I estimated pre-breeding survival for two Lesser Spotted Woodpecker populations at different latitudes to test the reliability of the results. Finally, I used the same simulation model to investigate the effect of flexibility in the mating system on the persistence of the population. With increasing rate of polyandry in the population, persistence increased, and even low rates of polyandry had a strong influence. Even when presuming only a low polyandry rate and costs of polyandry in terms of higher mortality and lower reproductive success for the secondary male, the positive effect of polyandry on the persistence of the population was still strong. This thesis has greatly increased knowledge of the autecology of an endangered woodpecker species. Beyond the relevance for the species, I could demonstrate that, in general, flexibility in mating systems acts as a buffer mechanism and reduces the impact of environmental and demographic noise.
The origin and symmetry of the observed global magnetic fields in galaxies are not fully understood. We intend to clarify the question of the magnetic field origin and investigate the global action of the magneto-rotational instability (MRI) in galactic disks with the help of 3D global magneto-hydrodynamical (MHD) simulations. The calculations were done with the time-stepping ZEUS-3D code using massive parallelization. The alpha-Omega dynamo is known to be one of the most efficient mechanisms to reproduce the observed global galactic fields. The presence of strong turbulence is a prerequisite for the alpha-Omega dynamo generation of the regular magnetic fields. The observed magnitude and spatial distribution of turbulence in galaxies present unsolved problems to theoreticians. The MRI is known to be a fast and powerful mechanism to generate MHD turbulence and to amplify magnetic fields. We find that the critical wavelength increases as the magnetic field grows during the simulation, transporting energy from the critical scale to larger scales. The final structure, if not disrupted by supernova explosions, is one of 'thin layers' with a thickness of about 100 pc. An important outcome of all simulations is the magnitude of the horizontal components of the Reynolds and Maxwell stresses. The result is that the MRI-driven turbulence is magnetically dominated: its magnetic energy exceeds the kinetic energy by a factor of 4. The Reynolds stress is small, less than 1% of the Maxwell stress. The angular momentum transport is thus completely dominated by the magnetic field fluctuations. The volume-averaged pitch angle is always negative, with a value of about -30°. The non-saturated MRI regime lasts sufficiently long to fill the time between galactic encounters, independently of the strength and geometry of the initial field. Therefore, we may claim that the observed pitch angles can be due to MRI action in the gaseous galactic disks.
The MRI is also shown to be a very fast instability, with an e-folding time proportional to the rotation period. Steeper rotation curves imply stronger growth of the magnetic energy due to the MRI. The global e-folding time ranges from 44 Myr to 100 Myr, depending on the rotation profile. Therefore, the MRI can explain the existence of rather large magnetic fields in very young galaxies. We have also reproduced the rms velocities of interstellar turbulence as observed in NGC 1058. The simulations show that an averaged velocity dispersion of about 5 km/s is typical for MRI-driven turbulence in galaxies, which agrees with observations. The dispersion increases outside of the disk plane, whereas supernova-driven turbulence is found to be concentrated within the disk; in our simulations the velocity dispersion increases severalfold with height. Additional support for the dynamo alpha-effect in galaxies is the ability of the MRI to produce a mix of quadrupole and dipole symmetries from purely vertical seed fields, which also solves the seed-field problem of galactic dynamo theory. The interaction of the magneto-rotational instability with random supernova explosions remains an open question. It would be desirable to run the simulation with supernova explosions included: they would disrupt the calm ring structure produced by the global MRI, possibly to the point where the MRI can no longer be held responsible for the turbulence.
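The quoted growth times can be compared against the textbook estimate for the fastest-growing MRI mode, whose growth rate is q*Omega/2 (with shear q = -dlnOmega/dlnR; q = 1 for a flat rotation curve), giving an e-folding time of 2/(q*Omega), proportional to the rotation period. The function below is only a back-of-the-envelope check under that standard estimate, not the thesis's numerical result.

```python
def mri_efolding_time_myr(v_rot_kms, radius_kpc, shear_q=1.0):
    """E-folding time t_e = 2 / (q * Omega) of the fastest-growing MRI
    mode, for angular velocity Omega = v_rot / R and local shear
    q = -dlnOmega/dlnR (q = 1 for a flat rotation curve)."""
    km_per_kpc = 3.086e16
    omega = v_rot_kms / (radius_kpc * km_per_kpc)  # rad/s
    t_e_seconds = 2.0 / (shear_q * omega)
    return t_e_seconds / 3.156e13                  # seconds per Myr
```

For a flat rotation curve of 200 km/s at R = 10 kpc this gives roughly 100 Myr, at the upper end of the 44-100 Myr range quoted above; steeper shear (larger q) shortens the growth time.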
Mesoporous organosilica materials with amine functions : surface characteristics and chirality
(2005)
In this work mesoporous organosilica materials are synthesized through the silica sol-gel process. For this, a new class of precursors that also act as surfactants is synthesized and self-assembled. This leads to a high density of surface functionality, which is analysed by copper(II) and water adsorption.
During this PhD project, three technical platforms were either improved or newly established in order to identify interesting genes involved in SNF, validate their expression and functionally characterise them. An existing 5.6K cDNA array (Colebatch et al., 2004) was extended to produce the 9.6K LjNEST array, while a second array, the 11.6K LjKDRI array, was also produced. Furthermore, the protocol for array hybridisation was substantially improved (Ott et al., in press). After functional classification of all clones according to the MIPS database and annotation of their corresponding tentative consensus sequences (TIGR), these cDNA arrays were used by several international collaborators and by our group (Krusell et al., 2005; in press). To confirm results obtained from the cDNA array analysis, different sets of cDNA pools were generated that facilitate rapid qRT-PCR analysis of candidate gene expression. As stable transformation of Lotus japonicus takes several months, an Agrobacterium rhizogenes transformation system was established in the lab and growth conditions for screening transformants for symbiotic phenotypes were improved. These platforms enable us to identify genes, validate their expression and functionally characterise them in a minimum of time. The resources that I helped to establish were used in collaboration with others to characterise in more detail several genes, such as the potassium transporter LjKup and the sulphate transporter LjSst1, which were transcriptionally induced in nodules compared to uninfected roots (Desbrosses et al., 2004; Krusell et al., 2005). Another gene that was studied in detail was LjAox1. This gene was identified during the cDNA array experiments, and detailed expression analysis revealed a strong and early induction during nodulation, with high expression in young nodules that declines with nodule age. LjAox1 is therefore an early nodulin.
Promoter:GUS fusions revealed LjAox1 expression around the nodule endodermis. The physiological role of LjAox1 is currently being pursued via RNAi. Using RNA interference, the synthesis of all symbiotic leghemoglobins was silenced simultaneously in Lotus japonicus. As a result, growth of LbRNAi lines was severely inhibited compared to wild-type plants when grown under symbiotic conditions in the absence of mineral nitrogen. The nodules of these plants were arrested in growth 14 days post inoculation and lacked the characteristic pinkish colour. Growing these transgenic plants under conditions where reduced nitrogen is available to the plant led to normal plant growth and development. This demonstrates that leghemoglobins are not required for plant development per se, and proves for the first time that leghemoglobins are indispensable for symbiotic nitrogen fixation. Absence of leghemoglobins in LbRNAi nodules led to significant increases in free-oxygen concentrations throughout the nodules, a decrease in energy status as reflected by the ATP/ADP ratio, and an absence of the bacterial nitrogenase protein. The bacterial population within nodules of LbRNAi plants was slightly reduced. Alterations of plant nitrogen and carbon metabolism in LbRNAi nodules were reflected in changes in amino acid composition and starch deposition (Ott et al., 2005). These data provide strong evidence that nodule leghemoglobins function as oxygen transporters that facilitate high flux rates of oxygen to the sites of respiration at low free-oxygen concentrations within the infected cells.
Wetting and phase transitions play a very important role in our daily life. Molecularly thin films of long-chain alkanes at solid/vapour interfaces (e.g. C30H62 on silicon wafers) are very good model systems for studying the relation between wetting behaviour and (bulk) phase transitions. Immediately above the bulk melting temperature the alkanes wet the surface only partially (forming drops). In this temperature range the substrate surface is covered with a molecularly thin, ordered, solid-like alkane film ("surface freezing"). Thus, the alkane melt wets its own solid only partially, which is a quite rare phenomenon in nature. This thesis addresses how the alkane melt wets its own solid surface above and below the bulk melting temperature, and the corresponding melting and solidification processes. Liquid alkane drops can be undercooled by a few degrees below the bulk melting temperature without immediate solidification. This undercooling behaviour is quite common and theoretically well understood. In some cases, slightly undercooled drops start to build two-dimensional solid terraces without bulk solidification. The terraces grow radially from the liquid drops on the substrate surface. They consist of a few molecular layers with a thickness that is a multiple of the all-trans length of the molecule. By analyzing the terrace growth process one finds that, both below and above the melting point, the entire substrate surface is covered with a thin film of mobile alkane molecules. The presence of this film explains how the solid terrace growth is fed: the alkane molecules flow through it from the undercooled drops to the periphery of the terrace. The study shows for the first time the coexistence of a molecularly thin film ("precursor") with a partially wetting bulk phase. The formation and growth of the terraces is observed only in a small temperature interval in which the 2D nucleation of terraces is more likely than bulk solidification.
The nucleation mechanisms for 2D solidification are also analyzed in this work. More surprising is the terrace behaviour above the bulk melting temperature. The terraces can be slightly overheated before they melt. Melting does not occur all over the surface as a single event; instead, small drops form at the terrace edge. Subsequently these drops move across the surface, "eating" the solid terraces on their way. In doing so they grow in size, leaving behind paths from where the material was collected. Both the overheating and the droplet movement can be explained by the fact that the alkane melt only partially wets its own solid. For the first time, these results explicitly confirm the supposed connection between the absence of overheating in solids and "surface melting": solids usually start to melt without an energetic barrier from the surface, at temperatures below the bulk melting point. Accordingly, the surface freezing of alkanes gives rise to an energetic barrier, which allows overheating.
In an experimental study, the effects of the Reciprocal Teaching method on measures of metacognition were examined, with the aim of identifying which features of the method are necessary for the learning gains to occur. Reciprocal Teaching, originally developed by Palincsar and Brown (1984), is a very successful training program designed to improve students' reading comprehension skills by teaching them reading strategies. In the present study, the tasks and responsibilities assumed by 5th-grade elementary students (N = 55) participating in a 16-session reading strategy training were varied systematically. The students who participated in the training program in one of three experimental conditions were compared with respect to knowledge and performance measures, both among themselves and with their control classmates who did not participate in strategy training (N = 86). Detailed analyses of video-taped sessions provided additional information. The strategy training was most beneficial for measures of knowledge and performance closely related to the content of the training program, namely knowledge about the specific reading strategies taught in training and the application of those strategies. No significant effects were observed for more distal measures (general strategy knowledge, reading comprehension). As for the features of the program, students in the two experimental conditions in which students were responsible for giving each other feedback on performance (with respect to both content and strategy application) and for guiding the correction of answers outperformed both the experimental condition in which the trainer was responsible for those tasks and the control group.
It is concluded that it is not merely the application of strategies, but the combination of strategy application with the concurrent teaching and learning of metacognitive acquisition procedures (analysis, monitoring, evaluation, and regulation) in an inter-individual setting, as the precursor of these processes occurring intra-individually, that seems to be an efficient way of acquiring metacognitive knowledge and skills. It was also shown that strategy training does not necessarily have to include the precise kind of interaction that characterizes the Reciprocal Teaching method. Instead, the tasks of monitoring, evaluating, and regulating other children's learning processes - i.e., the tasks associated with the "teacher role" - are the ones that promote the acquisition of metacognitive knowledge and skills. Generally, any strategy training program that not only provides children with plentiful opportunities for practice, but also prompts them to engage in these kinds of metacognitive processes, may help children to acquire metacognitive knowledge and skills.
At present, carbon sequestration in terrestrial ecosystems slows the growth rate of atmospheric CO2 concentrations, and thereby reduces the impact of anthropogenic fossil fuel emissions on the climate system. Changes in climate and land use affect terrestrial biosphere structure and functioning at present, and will likely impact on the terrestrial carbon balance during the coming decades - potentially providing a positive feedback to the climate system due to soil carbon releases under a warmer climate. Quantifying changes, and the associated uncertainties, in regional terrestrial carbon budgets resulting from these effects is relevant for the scientific understanding of the Earth system and for long-term climate mitigation strategies. A model describing the relevant processes that govern the terrestrial carbon cycle is a necessary tool to project regional carbon budgets into the future. This study (1) provides an extensive evaluation of the parameter-based uncertainty in model results of a leading terrestrial biosphere model, the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM), against a range of observations and under climate change, thereby complementing existing studies on other aspects of model uncertainty; (2) evaluates different hypotheses to explain the age-related decline in forest growth, both from theoretical and experimental evidence, and introduces the most promising hypothesis into the model; (3) demonstrates how forest statistics can be successfully integrated with process-based modelling to provide long-term constraints on regional-scale forest carbon budget estimates for a European forest case-study; and (4) elucidates the combined effects of land-use and climate changes on the present-day and future terrestrial carbon balance over Europe for four illustrative scenarios - implemented by four general circulation models - using a comprehensive description of different land-use types within the framework of LPJ-DGVM. 
This study presents a way to assess and reduce uncertainty in process-based terrestrial carbon estimates on a regional scale. The results demonstrate that simulated present-day land-atmosphere carbon fluxes are relatively well constrained, despite considerable uncertainty in modelled net primary production. Process-based terrestrial modelling and forest statistics are successfully combined to improve model-based estimates of vegetation carbon stocks and their change over time. Application of the advanced model to 77 European provinces shows that model-based estimates of biomass development with stand age compare favourably with forest inventory-based estimates for different tree species. Driven by historic changes in climate, atmospheric CO2 concentration, forest area and wood demand between 1948 and 2000, the model predicts a European-scale, present-day forest age structure, ratio of biomass removals to increment, and vegetation carbon sequestration rates that are consistent with inventory-based estimates. Alternative scenarios of climate and land-use change in the 21st century suggest that carbon sequestration in the European terrestrial biosphere during the coming decades will likely be of a magnitude relevant to climate mitigation strategies. However, the uptake rates are small in comparison to the European emissions from fossil fuel combustion, and will likely decline towards the end of the century. Uncertainty in climate change projections is a key driver of uncertainty in simulated land-atmosphere carbon fluxes and needs to be accounted for in mitigation studies of the terrestrial biosphere.
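The parameter-based uncertainty assessment in point (1) amounts to propagating prior parameter ranges through the model and inspecting the spread of the simulated fluxes. A minimal Monte Carlo sketch, with a toy flux model and made-up parameter names and ranges standing in for LPJ-DGVM and its actual parameters:

```python
import random
import statistics

def toy_nep(params, temperature=10.0, co2=370.0):
    """Toy stand-in for a terrestrial biosphere model: net ecosystem
    production as NPP minus heterotrophic respiration.  The functional
    forms and parameter names are illustrative, not LPJ-DGVM's."""
    npp = params["npp_max"] * (co2 / (co2 + params["k_co2"]))
    resp = params["resp_10"] * params["q10"] ** ((temperature - 10.0) / 10.0)
    return npp - resp

def parameter_uncertainty(n=1000, seed=0):
    """Propagate prior parameter uncertainty by Monte Carlo sampling
    and summarize the resulting flux distribution."""
    rng = random.Random(seed)
    fluxes = []
    for _ in range(n):
        p = {"npp_max": rng.uniform(8, 12),
             "k_co2": rng.uniform(100, 300),
             "resp_10": rng.uniform(4, 8),
             "q10": rng.uniform(1.5, 2.5)}
        fluxes.append(toy_nep(p))
    return statistics.mean(fluxes), statistics.stdev(fluxes)
```

The study's point that fluxes can be well constrained despite uncertain NPP corresponds, in this toy setting, to compensating uncertainties in the production and respiration terms.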
Vitamin E : elucidation of the mechanism of side chain degradation and gene regulatory functions
(2005)
For more than 80 years vitamin E has been in the focus of scientific research. Nevertheless, most of the progress concerning its non-antioxidant functions has arisen only from publications of the last decade. Most recently, the metabolic pathway of vitamin E has been almost completely elucidated. Vitamin E is metabolized by truncation of its side chain. The initial step, an omega-hydroxylation, is carried out by cytochromes P450 (CYPs). This was evidenced by the inhibition of alpha-tocopherol metabolism by ketoconazole, an inhibitor of CYP3A expression, whereas rifampicin, an inducer of CYP3A expression, increased the metabolism of alpha-tocopherol. Although the degradation pathway is identical for all tocopherols and tocotrienols, there is a marked difference in the amount of metabolites released from the individual vitamin E forms, in cell culture as well as in experimental animals and in humans. Recent findings not only proposed a CYP3A4-mediated degradation of vitamin E but also suggested an induction of the metabolizing enzymes by vitamin E itself. In order to investigate how vitamin E is able to influence the expression of metabolizing enzymes like CYP3A4, a pregnane X receptor (PXR)-based reporter gene assay was chosen. PXR is a nuclear receptor that regulates the transcription of genes, e.g., CYP3A4, by binding to specific DNA response elements. And indeed, as shown here, vitamin E is able to influence the expression of CYP3A via PXR in an in vitro reporter gene assay. Tocotrienols showed the highest activity, followed by delta- and alpha-tocopherol. An up-regulation of Cyp3a11 mRNA, the murine homolog of the human CYP3A4, could also be confirmed in an animal experiment. The PXR-mediated change in gene expression provided the first evidence of a direct transcriptional activity of vitamin E. PXR regulates the expression of genes involved in xenobiotic detoxification, including oxidation, conjugation, and transport.
CYP3A, for example, is involved in the oxidative metabolism of numerous currently used drugs. This raises the question of possible side effects of vitamin E, but the extent to which supranutritional doses of vitamin E modulate these pathways in humans has yet to be determined. Additionally, as there is growing evidence that vitamin E's essentiality is more likely based on gene regulation than on antioxidant functions, it appeared necessary to further investigate the ability of vitamin E to influence gene expression. Mice were divided into three groups with diets (i) deficient in alpha-tocopherol, (ii) adequate in alpha-tocopherol supply and (iii) with a supranutritional dosage of alpha-tocopherol. After three months, half of each group was supplemented via a gastric tube with a supranutritional dosage of gamma-tocotrienol per day for 7 days. Livers were analyzed for vitamin E content, and liver RNA was prepared for hybridization using cDNA array and oligonucleotide array technology. A significant change in gene expression was observed for alpha-tocopherol but not for gamma-tocotrienol, and only with the oligonucleotide array, not with the cDNA array. The latter effect is most probably due to the limited number of genes represented on a cDNA array; the lack of a gamma-tocotrienol effect is most likely caused by rapid degradation, which might prevent the bioefficacy of gamma-tocotrienol. Alpha-tocopherol changed the expression of various genes. The most striking observation was an up-regulation of genes coding for proteins involved in synaptic transmitter release and calcium signal transduction. Synapsin, synaptotagmin, synaptophysin, synaptobrevin, RAB3A, complexin 1, Snap25, and ionotropic glutamate receptors (alpha 2 and zeta 1) were shown to be up-regulated in the supranutritional group compared to the deficient group.
The up-regulation of synaptic genes shown in this work is supported not only by the fact that the affected genes are all involved in the process of vesicular neurotransmitter transport, but also by a recent publication. However, confirmation by real-time PCR in neuronal tissue such as brain is now required to explain the effect of vitamin E on neurological functionality. The change in the expression of genes coding for synaptic proteins by vitamin E is of principal interest, since the only human disease directly originating from an inadequate vitamin E status is ataxia with isolated vitamin E deficiency. Therefore, with the results of this work, an explanation for the observed neurological symptoms associated with vitamin E deficiency can be presented for the first time.
Interpretation of and reasoning with conditionals : probabilities, mental models, and causality
(2003)
In everyday conversation, "if" is one of the most frequently used conjunctions. This dissertation investigates what meaning an everyday conditional transmits and what inferences it licenses. It is suggested that the nature of the relation between the two propositions in a conditional might play a major role for both questions. Thus, in the experiments reported here, conditional statements that describe a causal relationship (e.g., "If you touch that wire, you will receive an electric shock") were compared to arbitrary conditional statements in which there is no meaningful relation between the antecedent and the consequent proposition (e.g., "If Napoleon is dead, then Bristol is in England"). Initially, central assumptions from several approaches to the meaning of, and reasoning from, causal conditionals are integrated into a common model. In the model, the availability of exceptional situations that have the power to generate exceptions to the rule described in the conditional (e.g., the electricity being turned off) reduces the subjective conditional probability of the consequent given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). This conditional probability determines people's degree of belief in the conditional, which in turn affects their willingness to accept valid inferences (e.g., "Peter touches the wire, therefore he receives an electric shock") in a reasoning task. In addition to this indirect pathway, the model contains a direct pathway: the cognitive availability of exceptional situations directly reduces the readiness to accept valid conclusions. The first experimental series tested the integrated model for conditional statements embedded in pseudo-natural cover stories that either established a causal relation between the antecedent and consequent events (causal conditionals) or did not connect the propositions in a meaningful way (arbitrary conditionals).
The model was supported for the causal, but not for the arbitrary conditional statements. Furthermore, participants assigned lower degrees of belief to arbitrary than to causal conditionals. Is this effect due to the presence versus absence of a semantic link between antecedent and consequent in the conditionals? This question was one of the starting points for the second experimental series. Here, the credibility of the conditionals was manipulated by adding explicit frequency information about possible combinations of presence or absence of antecedent and consequent events to the problems (i.e., frequencies of cases of 1. true antecedent with true consequent, 2. true antecedent with false consequent, 3. false antecedent with true consequent, 4. false antecedent with false consequent). This paradigm allows testing different approaches to the meaning of conditionals (Experiment 4) as well as theories of conditional reasoning against each other (Experiment 5). The results of Experiment 4 supported mainly the conditional probability approach to the meaning of conditionals (Edgington, 1995) according to which the degree of belief a listener has in a conditional statement equals the conditional probability that the consequent is true given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). Participants again assigned lower degrees of belief to the arbitrary than the causal conditionals, although the conditional probability of the consequent given the antecedent was held constant within every condition of explicit frequency information. This supports the hypothesis that the mere presence of a causal link enhances the believability of a conditional statement. In Experiment 5 participants solved conditional reasoning tasks from problems that contained explicit frequency information about possible relevant cases. The data favored the probabilistic approach to conditional reasoning advanced by Oaksford, Chater, and Larkin (2000). 
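Under the conditional probability hypothesis tested in Experiment 4, the degree of belief in "if A then C" computed from the four explicit case frequencies is simply P(C|A); the false-antecedent cells do not enter the evaluation. A small sketch (the function name and example frequencies are illustrative):

```python
def belief_in_conditional(n_tt, n_tf, n_ft, n_ff):
    """Degree of belief in 'if A then C' as the conditional probability
    P(C|A) = n_TT / (n_TT + n_TF).  The false-antecedent cells
    (n_ft, n_ff) are accepted but deliberately ignored: on this account
    they do not bear on the evaluation of the conditional."""
    return n_tt / (n_tt + n_tf)
```

For example, 40 shock cases out of 50 wire touches give a degree of belief of 0.8, however many false-antecedent cases are added to the problem.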
The two experimental series reported in this dissertation provide strong support for recent probabilistic theories: for the conditional probability approach to the meaning of conditionals by Edgington (1995) and the probabilistic approach to conditional reasoning by Oaksford et al. (2000). In the domain of conditional reasoning, there was additionally support for the modified mental model approaches by Markovits and Barrouillet (2002) and Schroyens and Schaeken (2003). Probabilistic and mental model approaches could be reconciled within a dual-process-model as suggested by Verschueren, Schaeken, and d'Ydewalle (2003).
The subject of this work is the study of applications of the Galactic microlensing effect, in which the light of a distant star (the source) is bent, according to Einstein's theory of gravity, by the gravitational field of intervening compact mass objects (the lenses), creating multiple (though not resolvable) images of the source. Relative motion of source, observer and lens leads to a variation of the deflection/magnification and thus to a time-dependent observable brightness change (lightcurve), a so-called microlensing event, lasting weeks to months. The focus lies on the modeling of binary-lens events, which provide a unique tool to fully characterize the lens-source system and to detect extra-solar planets around the lens star. Making use of the ability of genetic algorithms to efficiently explore large and intricate parameter spaces in the quest for the globally best solution, a modeling software package (Tango) for binary lenses is developed, presented and applied to data sets from the PLANET microlensing campaign. For the event OGLE-2002-BLG-069, only the second lens mass measurement ever was achieved, leading to a scenario in which a G5III Bulge giant at 9.4 kpc is lensed by an M-dwarf binary with a total mass of M = 0.51 solar masses at a distance of 2.9 kpc. Furthermore, a method is presented to use the absence of planetary lightcurve signatures to constrain the abundance of extra-solar planets.
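The genetic-algorithm strategy can be illustrated on the simplest (single-lens) case, where the magnification follows the analytic Paczynski curve A(u) = (u^2+2)/(u*sqrt(u^2+4)). The toy GA below (elitist selection, blend crossover, Gaussian mutation) only sketches the approach; Tango's actual operators, parameterization and binary-lens magnification computation are far more involved.

```python
import math
import random

def paczynski(t, t0, u0, tE):
    """Single-lens magnification A(u), u(t) = sqrt(u0^2 + ((t-t0)/tE)^2)."""
    u = math.hypot(u0, (t - t0) / tE)
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def chi2(params, data):
    """Sum of squared residuals of a model lightcurve against the data."""
    t0, u0, tE = params
    return sum((paczynski(t, t0, u0, tE) - a) ** 2 for t, a in data)

def genetic_fit(data, pop_size=60, generations=80, seed=2):
    """Toy genetic algorithm: keep the best third, refill the population
    with blended, mutated offspring, and return the best individual."""
    rng = random.Random(seed)
    def random_individual():
        return [rng.uniform(0, 100), rng.uniform(0.01, 1.0), rng.uniform(5, 60)]
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: chi2(p, data))
        parents = pop[:pop_size // 3]
        offspring = list(parents)
        while len(offspring) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [x + rng.uniform(-0.5, 0.5) * (y - x) +
                     rng.gauss(0, 0.05 * abs(x) + 1e-3)
                     for x, y in zip(a, b)]
            child[1] = max(abs(child[1]), 1e-3)  # keep u0 > 0
            child[2] = max(child[2], 0.5)        # keep tE away from 0
            offspring.append(child)
        pop = offspring
    return min(pop, key=lambda p: chi2(p, data))
```

On noiseless synthetic data generated from known parameters, the GA recovers the peak region of the lightcurve without any gradient information, which is the property exploited for the far rougher binary-lens chi-square landscape.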
Even though the structure of the plant cell wall is by and large well characterized, its synthesis and regulation remain largely obscure. It is, however, accepted that the building blocks of the polysaccharide part of the plant cell wall are nucleotide sugars. Thus, to gain more insight into cell wall biosynthesis, in the first part of this thesis plant genes possibly involved in the nucleotide sugar interconversion pathway were identified using a bioinformatics approach and characterized in plants, mainly in Arabidopsis. For the computational identification, profile hidden Markov models were extracted from the Pfam and TIGR databases, and plant genes were identified with these using the "hmmer" program. Several gene families were identified, and three were further characterized: the UDP-rhamnose synthase (RHM), UDP-glucuronic acid epimerase (GAE) and myo-inositol oxygenase (MIOX) families. For the three-membered RHM family, relatively ubiquitous expression was shown using various methods. For one of these genes, RHM2, T-DNA lines could be obtained. Moreover, the transcription of the whole family was down-regulated using an RNAi approach. In both cases, an alteration of typical cell wall polysaccharides and developmental changes could be shown. In the rhm2 mutant these were restricted to the seed or the seed mucilage, whereas the RNAi plants showed profound changes in the whole plant. In the case of the six-membered GAE family, the most highly expressed gene (GAE6) was cloned, expressed heterologously and functionally characterized. It could thus be shown that GAE6 encodes an enzyme responsible for the conversion of UDP-glucuronic acid to UDP-galacturonic acid. However, changes in the transcript levels of various GAE family members, achieved by T-DNA insertions (gae2, gae5, gae6), overexpression (GAE6) or an RNAi approach targeting the whole family, did not reveal any robust changes in the cell wall.
In contrast to the other two families, the MIOX gene family had to be identified using a BLAST-based approach, owing to the lack of sufficient suitable candidate genes for building a hidden Markov model. An initial bioinformatic characterization was performed, which will lead to further insights into this pathway. In total, it was possible to identify the two gene families involved in the synthesis of the two pectin backbone sugars, galacturonic acid and rhamnose. Moreover, with the identification of the MIOX genes, a gene family important for the supply of nucleotide sugar precursors was identified. In the second part of this thesis, publicly available microarray data sets were analyzed with respect to the co-responsive behavior of transcripts on a global basis, using nearly 10,000 genes. The data have been made available to the community in the form of a database providing additional statistical and visualization tools (http://csbdb.mpimp-golm.mpg.de). Using the framework of the database to identify nucleotide sugar converting genes indicated that co-response might be used to identify novel genes involved in cell wall synthesis on the basis of already known genes.
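The co-response query underlying such a database can be reduced to correlating expression profiles across experiments and returning the genes that exceed a threshold, as in this sketch ("guilt by association"; the gene names, profiles and plain Pearson threshold are illustrative, and the actual database offers richer statistics):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def co_responding(profiles, query_gene, threshold=0.8):
    """Return genes whose expression across the microarray experiments
    correlates with the query gene above the threshold."""
    q = profiles[query_gene]
    return sorted(g for g, p in profiles.items()
                  if g != query_gene and pearson(q, p) >= threshold)
```

Starting from a known cell-wall gene as the query, the genes returned this way are the candidates for novel involvement in the same pathway.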
This article examines the successive governments of independent Estonia since 1992 with respect to their stability. Confronted with the immense problems of democratic transition, the multi-party governments of Estonia have changed comparatively often; following the elections of March 2003, the ninth government since 1992 was formed. A detailed examination of government stability using the example of Estonia is accordingly warranted, given that the country is seen as the most successful Central Eastern European transition country in spite of its frequent changes of government. Furthermore, this article asks whether internal government stability can exist in a situation where the government changes frequently. What does stability of government mean, and what are the different facets of the term? Before it can be analysed, the term has to be clarified and defined. It is presumed that government stability is composed of multiple variables influencing one another. Data on the average tenure of a government are not very conclusive; rather, the deeper political causes of governmental change need to be examined. Therefore, this article first discusses the conceptual and theoretical basics of governmental stability. Secondly, it discusses the Estonian situation in detail up to the elections of 2003, including a short review of the ninth government since independence. In the conclusion, the author assesses whether the governments of Estonia are stable. In the appendix, the reader finds all election results as well as a list of all previous ministers of Estonian governments (all data as of July 2002).
The development of the Polish telecommunications administration from 1989/90 to 2003 is marked by the processes of liberalisation and privatisation that the telecommunications sector underwent during that period. The gradual liberalisation of the Polish telecommunications sector started as early as 1992. In the beginning, national strategies were pursued, the most important of which was the creation of a bipolar market structure in the local area networks. In the second half of the 1990s, the approaching EU membership accelerated the process of liberalisation and, consequently, the development of a regulatory framework. EU standards are directed more towards setting out a legal framework for regulation than towards prescribing concrete details of administrative organisation. Nevertheless, the independent regulatory agencies typical of Western Europe served as a model for the introduction of a new regulatory body responsible for the telecommunications sector in Poland. The growing influence of EU legislation changed telecommunications policy as well as administrative practices. There has been a shift of responsibilities from the ministry to the regulatory agency, but the question remains whether the agency has gained enough power to fulfil its regulatory function. In the following, the legislative framework created by the EU in telecommunications policy will be described and the model of independent regulatory agencies, typical of most EU countries, will be introduced. Some categories for the analysis of the Polish regulatory system will be deduced from the discussion of telecommunications regulation in the established EU nations (see Böllhoff 2002 and 2003, Thatcher 2002a and 2002b, Thatcher/Stone Sweet 2002). Subsequently, the basic features of Polish telecommunications policy in the 1990s and its effects on the telecommunications sector will be outlined.
In the third chapter, the development of organisational structures at the ministerial level and within the regulatory agency will be examined. In the fourth chapter, I will look at the distribution of power and the coordination of the various authorities responsible for telecommunications regulation. The focus of this chapter is on the Polish regulatory agency and its relationships with the ministry, the anti-monopoly office and the Broadcasting and Television Council. In the conclusion, the main findings will be summed up.
Due to its relevance for global climate, the realistic representation of the Atlantic meridional overturning circulation (AMOC) in ocean models is a key task. In recent years, two paradigms have evolved around its driving mechanisms: diapycnal mixing and Southern Ocean winds. This work aims at clarifying what sets the strength of the Atlantic overturning components in an ocean general circulation model and discusses the role of spatially inhomogeneous mixing, numerical diffusion and winds. Furthermore, the relation of the AMOC to a key quantity, the meridional pressure difference, is analyzed. Owing to the application of a very low-diffusive tracer advection scheme, a realistic Atlantic overturning circulation can be obtained that is purely wind-driven. On top of the wind-driven circulation, changes in density gradients are caused by increasing the parameterized eddy diffusion in the North Atlantic and Southern Ocean. The linear relation between the maximum of the Atlantic overturning and the meridional pressure difference found in previous studies is confirmed, and it is shown to be due to one significant pressure gradient between the average pressure over high-latitude deep water formation regions and a relatively uniform pressure between 30°N and 30°S, which can be related directly to a zonal flow through geostrophy. Under constant Southern Ocean wind stress forcing, a South Atlantic outflow in the range of 6-16 Sv is obtained for a large variety of experiments. Overall, the circulation is wind-driven, but its strength is not uniquely determined by the Southern Ocean wind stress. The scaling of the Atlantic overturning components is linear with the background vertical diffusivity, not confirming the 2/3 power law for one-hemisphere models without wind forcing. The pycnocline depth is constant in the coarse-resolution model with large vertical grid extents.
This suggests that the ocean model operates like the Stommel box model, with a linear relation to the pressure difference and a fixed vertical scale for the volume transport. However, this seems valid only for vertical diffusivities smaller than 0.4 cm²/s, when the dominant upwelling within the Atlantic occurs along the boundaries. For larger vertical diffusivities, a significant amount of interior upwelling occurs. It is further shown that localized vertical mixing in the deep to bottom ocean cannot drive an Atlantic overturning; however, enhanced boundary mixing at thermocline depths is potentially important. The numerical diffusion is shown to have a large impact on the representation of the Atlantic overturning in the model. While the horizontal numerical diffusion tends to destabilize the Atlantic overturning, the vertical numerical diffusion acts as an amplifying mechanism.
The past decades have been characterized by various efforts to provide complete genome sequence information for various organisms. The availability of full genome data triggered the development of multiplex high-throughput assays allowing the simultaneous measurement of transcripts, proteins and metabolites. With genome information and profiling technologies now at hand, a highly parallel experimental biology offers opportunities to explore and discover novel principles governing biological systems. Understanding biological complexity through modelling cellular systems represents the driving force that today allows a shift from a component-centric focus to integrative, systems-level investigations. The emerging field of systems biology integrates discovery- and hypothesis-driven science to provide comprehensive knowledge via computational models of biological systems. Within the context of evolving systems biology, large-scale computational analyses of transcript co-response data were carried out for selected prokaryotic and plant model organisms. CSB.DB - a comprehensive systems-biology database (http://csbdb.mpimp-golm.mpg.de/) - was initiated to provide public and open access to the results of biostatistical analyses in conjunction with additional biological knowledge. The database tool CSB.DB enables users to infer hypotheses about the functional interrelation of genes of interest and may serve as a future basis for more sophisticated means of elucidating gene function. The co-response concept and the CSB.DB database tool were successfully applied to predict operons in Escherichia coli by using chromosomal distance and transcriptional co-responses. Moreover, examples were shown which indicate that transcriptional co-response analysis allows the identification of differential promoter activities under different experimental conditions.
The co-response concept was successfully transferred to complex organisms, with a focus on the eukaryotic plant model organism Arabidopsis thaliana. The investigations enabled the discovery of novel genes involved in particular physiological processes and, beyond that, allowed the annotation of gene functions that cannot be accessed by sequence homology. GMD - the Golm Metabolome Database - was initiated and implemented in CSB.DB to integrate metabolite information and metabolite profiles. This novel module will allow complex biological questions on transcriptional interrelation to be addressed and will extend the recent systems-level quest towards phenotyping.
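Co-response analysis of the kind described above rests on correlating transcript profiles across experimental conditions. As a minimal sketch of the underlying statistic only (not the actual CSB.DB pipeline, and with invented expression values), a Pearson correlation between hypothetical gene expression profiles could look like this:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression profiles measured over five conditions
gene_a = [1.0, 2.1, 3.2, 4.0, 5.1]
gene_b = [0.9, 2.0, 2.9, 4.2, 5.0]   # co-responds with gene_a
gene_c = [5.0, 1.2, 3.9, 0.5, 2.2]   # unrelated profile

print(round(pearson(gene_a, gene_b), 2))   # → 0.99
print(round(pearson(gene_a, gene_c), 2))   # → -0.52
```

A high correlation across many conditions is then taken as evidence of a functional interrelation, e.g. shared operon membership when combined with chromosomal distance.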
Adverb positioning is guided by syntactic, semantic, and pragmatic considerations and is subject to cross-linguistic as well as language-specific variation. The goal of the thesis is to identify the factors that determine adverb placement in general (Part I) as well as in constructions in which the adverb's sister constituent is deprived of its phonetic material by movement or ellipsis (gap constructions, Part II), and to provide an Optimality Theoretic approach to the contrasts in the effects of these factors on the distribution of adverbs in English, French, and German. In Optimality Theory (Prince & Smolensky 1993), grammaticality is defined as optimal satisfaction of a hierarchy of violable constraints: for a given input, a set of output candidates are produced, out of which that candidate is selected as grammatical output which optimally satisfies the constraint hierarchy. Since grammaticality crucially relies on the hierarchic relations of the constraints, cross-linguistic variation can be traced back to differences in the language-specific constraint rankings. Part I shows how diverse phenomena of adverb placement can be captured by corresponding constraints and their relative rankings:
- contrasts in the linearization of adverbs and verbs/auxiliaries in English and French
- verb placement in German and the filling of the prefield position
- placement of focus-sensitive adverbs
- fronting of topical arguments and adverbs
Part II extends the analysis to a particular phenomenon of adverb positioning: the avoidance of adverb attachment to a phonetically empty constituent (gap). English and French are similar in that the acceptability of pre-gap adverb placement depends on the type of adverb, its scope, and the syntactic construction (English: wh-movement vs. topicalization / VP Fronting / VP Ellipsis, inverted vs. non-inverted clauses; French: CLLD vs. Cleft, simple vs. periphrastic tense).
Yet the two languages differ in the strategies a specific type of adverb may pursue to escape placement in front of a certain type of gap. In contrast to English and French, placement of an adverb in front of a gap never gives rise to ungrammaticality in German. Rather, word ordering has to obey the syntactic, semantic, and pragmatic principles discussed in Part I; whether or not it results in adverb attachment to a phonetically empty constituent seems to be irrelevant: though constraints are active in every language, the emergence of a visible effect of their requirements in a given language depends on their relative ranking. The complex interaction of the diverse factors as well as their divergent effects on adverb placement in the various languages are accounted for by the universal constraints and their language-specific hierarchic relations in the OT framework.
Fault planes of large earthquakes incorporate inhomogeneous structures. This can be observed in teleseismic studies through the spatial distribution of slip and seismic moment release caused by the mainshock. Both parameters are often concentrated on patches of the fault plane with much higher values of slip and moment release than their adjacent areas. These patches are called asperities, and they evidently have a strong influence on the mainshock rupture propagation. The conditions and properties of structures in the fault plane area that are responsible for the evolution of such asperities, and their significance for the damage distributions of future earthquakes, are still not well understood and are the subject of current geoscientific studies. In this thesis, asperity structures are identified on the fault plane of the Mw=8.0 Antofagasta earthquake in northern Chile, which occurred on 30 July 1995. It was a thrust-type event in the seismogenic zone between the subducting Pacific Nazca plate and the overriding South American plate. In a cooperation between the German Task Force for Earthquakes and the CINCA'95 project, a network of up to 44 seismic stations was set up to record the aftershock sequence. The seaward extension of the network with 9 OBH stations significantly increased the precision of hypocenter determinations. The aftershocks were distributed mainly on the fault plane itself, around the city of Antofagasta and Mejillones Peninsula. The asperity structures were identified here by the spatial variations of local seismological parameters; at first by the spatial distribution of the seismic b-value on the fault plane, derived from the Gutenberg-Richter magnitude-frequency relation.
The correlation of this b-value map with other parameters, such as the mainshock source time function, the gravity isostatic residual anomalies, the distribution of radiated seismic energy of the aftershocks and the vp/vs ratios from a local earthquake tomography study, provided insight into the composition of the fault plane and the processes generating asperities. The investigation of 295 aftershock focal mechanism solutions supported the resulting fault plane structure and suggested a similar 3D stress state in the area of the Antofagasta fault plane.
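The b-value mapping described above rests on the Gutenberg-Richter relation log10 N = a - b·M. As an illustration only (the thesis's actual mapping procedure is not reproduced here, and the magnitudes below are invented), a b-value for a magnitude sample can be estimated with Aki's (1965) maximum-likelihood formula:

```python
import math

def b_value(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for events at or above the
    completeness magnitude m_c of the Gutenberg-Richter relation
    log10 N = a - b*M:  b = log10(e) / (mean(M) - m_c)."""
    sample = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(sample) / len(sample) - m_c)

# Hypothetical aftershock magnitudes for one cell of a fault plane grid
mags = [3.1, 3.3, 3.0, 3.8, 3.2, 4.1, 3.5, 3.0, 3.4, 3.6]
print(round(b_value(mags, 3.0), 2))   # → 1.09
```

Computing this estimate cell by cell over a gridded fault plane yields the kind of b-value map whose spatial variations are correlated with other parameters in the study.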
Environmental stresses such as drought, high salinity and low temperature affect plant growth and severely decrease crop productivity. Improving the stress tolerance of crop plants is therefore important for increasing crop yield under stress conditions. The Arabidopsis thaliana salt tolerance 1 gene (AtSTO1) was originally identified by Lippuner et al. (1996). In this study, around 27 members of the STO-like protein family were identified in Arabidopsis thaliana, rice and other plant species. The STO proteins have two consensus motifs (CCADEAAL and FCV(L)EDRA). The STO family members can be regarded as a distinct class of C2C2 proteins, considering their low sequence similarity to other GATA-like proteins and the poor conservation of their C-terminus. AtSTO1 was found to be induced by salt, cold and drought in leaves and roots of 4-week-old Arabidopsis thaliana wild-type plants. The expression of AtSTO1 under salt and cold stress was more pronounced in roots than in leaves. The data provided here revealed that the AtSTO1 protein is localized in the nucleus, an observation consistent with its proposed function as a transcription factor. AtSTO1-dependent phenotypes were observed when plants were grown at 50 mM NaCl on agar plates. Leaves of AtSTO1 overexpression lines were bigger and dark green, whereas stunted growth and yellowish leaves were observed in wild-type and RNAi plants. Also, when exposed to long-term cold stress, the AtSTO1 overexpression plants showed a red leaf coloration that was much stronger than in wild-type and RNAi lines. Long-term growth of AtSTO1 overexpression lines under salt and cold stress was always associated with longer roots than in wild-type and RNAi lines.
Proline accumulation increased more strongly in leaves and roots of AtSTO1 overexpression lines than in tissues of wild-type and RNAi lines when the plants were treated with 200 mM NaCl, exposed to cold stress, or deprived of water for one day or two weeks. Also, the soluble sugar content increased to higher levels under salt, cold and drought stress in AtSTO1 overexpression lines than in wild-type and RNAi lines. The increase in soluble sugar content was detected in AtSTO1 overexpression lines after long-term (2 weeks) growth under these stresses. Anthocyanins accumulated in leaves of AtSTO1 overexpression lines when exposed to long-term salt stress (200 mM NaCl for 2 weeks) or to 4°C for 6 and 8 weeks. Also, the anthocyanin content was increased in flowers of AtSTO1 overexpression plants kept at 4°C for 8 weeks. Taken together, these data indicate that overexpression of AtSTO1 enhances abiotic stress tolerance via a more pronounced accumulation of compatible solutes under stress.
New chain transfer agents based on dithiobenzoate and trithiocarbonate for free radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT) were synthesized. The new compounds bear permanently hydrophilic sulfonate moieties, which provide solubility in water independent of the pH. One of them bears a fluorophore, enabling unsymmetrical double end-group labelling as well as the preparation of fluorescently labelled polymers. Their stability against hydrolysis in water was studied and compared with that of the most frequently employed water-soluble RAFT agent, the dithiobenzoate 4-cyano-4-thiobenzoylsulfanylpentanoic acid, using UV-Vis and 1H-NMR spectroscopy. An improved resistance to hydrolysis was found for the new RAFT agents, which showed good stability in the pH range between 1 and 8 and at temperatures up to 70°C. Subsequently, a series of non-ionic, anionic and cationic water-soluble monomers were polymerized via RAFT in water. In these experiments, the polymerizations were conducted at either 48°C or 55°C, i.e. below the temperatures conventionally employed for RAFT in organic solvents (>60°C), in order to minimize hydrolysis of the active chain ends (e.g. dithioester and trithiocarbonate) and thus to obtain good control over the polymerization. Under these conditions, controlled polymerization in aqueous solution was possible with styrenic, acrylic and methacrylic monomers: molar masses increase with conversion, polydispersities are low, and the degree of end-group functionalization is high. However, polymerizations of methacrylamides were slow at temperatures below 60°C and showed only moderate control. The RAFT process in water also proved to be a powerful method for synthesizing di- and triblock copolymers, including functional polymers with complex structure, such as amphiphilic and stimuli-sensitive block copolymers. These include polymers containing one or even two stimuli-sensitive hydrophilic blocks.
The hydrophilic character of a single block or of several blocks was switched by changing the pH, the temperature or the salt content, to demonstrate the variability of the molecular designs suited for stimuli-sensitive polymeric amphiphiles and to exemplify the concept of multiple-sensitive systems. Furthermore, stable colloidal block ionomer complexes were prepared by mixing anionic surfactants in aqueous media with a double-hydrophilic block copolymer synthesized via RAFT in water. The block copolymer is composed of an uncharged hydrophilic block based on polyethyleneglycol and a cationic block. The complexes prepared with perfluoro decanoate were found to be so stable that they even withstand dialysis; notably, they do not denature proteins. They are therefore potentially useful for biomedical applications in vivo.
Taking inspiration from nature, where composite materials made of a polymer matrix and inorganic fillers are often found (e.g. bone, the shells of crustaceans, egg shells), the feasibility of making composite materials containing chitosan and nanosized hydroxyapatite was investigated. A new preparation approach based on a co-precipitation method was developed. In its early stage of formation, the composite occurs as a hydrogel suspended in aqueous alkaline solution. To obtain solid composites, various drying procedures were used, including freeze-drying, air-drying at room temperature, and drying at moderate temperatures between 50°C and 100°C. Physicochemical studies showed that the composites exhibit different properties with respect to their structure and composition. IR and Raman spectroscopy probed the presence of both chitosan and hydroxyapatite in the composites. The hydroxyapatite dispersed in the chitosan matrix was found to be in the nanosize range (15-50 nm) and shows a bimodal distribution with respect to its crystallite length. Two types of distribution domains of hydroxyapatite crystallites in the composite matrix, namely cluster-like (200-400 nm) and scattered-like domains, were identified by transmission electron microscopy (TEM), X-ray diffraction (XRD) and confocal scanning laser microscopy (CSLM) measurements. Relaxation NMR experiments on the composite hydrogels showed the presence of two types of water sites in their gel networks, namely free and bound water. Mechanical tests showed that the mechanical properties of the composites are one order of magnitude lower than those of compact bone but comparable to those of porous bone. Enzymatic degradation of the composites proceeded slowly: the yields of degradation were estimated to be less than 10% by loss of mass after incubation with lysozyme for a period of 50 days.
Since the composite materials were found to be biocompatible in in vivo tests, the simple mode of their fabrication and their properties recommend them as potential candidates for non-load-bearing bone substitute materials.
This thesis describes a new experimental method for the determination of the Mode II (shear) fracture toughness KIIC of rock and compares the outcome to results from Mode I (tensile) fracture toughness (KIC) testing using the International Society for Rock Mechanics Chevron-Bend method. Critical Mode I fracture growth at ambient conditions was studied by carrying out a series of experiments on a sandstone at different loading rates. The mechanical and microstructural data show that time- and loading-rate-dependent crack growth occurs in the test material at constant energy requirement. The newly developed set-up for the determination of the Mode II fracture toughness is called the Punch-Through Shear (PTS) test. Notches are drilled into the end surfaces of core samples, and an axial load punches down the central cylinder, introducing a shear load in the remaining rock bridge. A confining pressure may be applied to the mantle of the cores; its application favours the growth of Mode II fractures, as large pressures suppress the growth of tensile cracks. Variation of the geometrical parameters led to an optimisation of the PTS geometry. KIIC increases bi-linearly with the normal load on the shear zone: a steep slope is observed at low confining pressures, while at pressures above 30 MPa the slope is low. The maximum confining pressure applied was 70 MPa. The evolution of fracturing and its change with confining pressure is described. The existence of Mode II fractures in rock is a matter of debate in the literature. Comparison of the results from Mode I and Mode II testing, mainly regarding the resulting fracture pattern, and correlation analysis of KIC and KIIC with physico-mechanical parameters emphasised the differences between the response of rock to Mode I and Mode II loading. On the microscale, neither the fractures resulting from Mode I nor those from Mode II loading are pure-mode fractures; on the macroscopic scale, Mode I and Mode II fractures do exist.
The role of feedback between erosional unloading and tectonics in controlling the development of the Himalaya is a matter of current debate. The distribution of precipitation is thought to control surface erosion, which in turn results in tectonic exhumation as an isostatic compensation process. Alternatively, subsurface structures can have a significant influence on the evolution of this actively growing orogen. Along the southern Himalayan front, new 40Ar/39Ar white mica and apatite fission track (AFT) thermochronologic data provide the opportunity to determine the history of rock uplift and the exhumation paths along an approximately 120-km-wide NE-SW transect spanning the greater Sutlej region of the northwest Himalaya, India. The 40Ar/39Ar data indicate, consistent with earlier studies, that first the High Himalayan Crystalline and subsequently the Lesser Himalayan Crystalline nappes were exhumed rapidly during Miocene time, while the deformation front propagated to the south. In contrast, the new AFT data delineate synchronous exhumation of an elliptically shaped, NE-SW-oriented ~80 x 40 km region spanning both crystalline nappes during Pliocene-Quaternary time. The AFT ages correlate with elevation but show, within the resolution of the method, no spatial relationship to preexisting major tectonic structures, such as the Main Central Thrust or the Southern Tibetan Fault System. Assuming constant exhumation rates and a constant geothermal gradient, the rocks of two age vs. elevation transects were exhumed at ~1.4 ±0.2 and ~1.1 ±0.4 mm/a, with an average cooling rate of ~50-60 °C/Ma, during Pliocene-Quaternary time. The locus of pronounced exhumation defined by the AFT data coincides with a region of enhanced precipitation, high discharge, and high sediment flux rates under present conditions.
We therefore hypothesize that the distribution of AFT cooling ages might reflect the efficiency of surface processes and fluvial erosion, and thus demonstrate the influence of erosion in localizing rock uplift and exhumation along the southern Himalayan front rather than across the entire orogen. Despite a possible feedback between erosion and exhumation along the southern Himalayan front, we observe tectonically driven crustal exhumation within the arid region behind the orographic barrier of the High Himalaya, which might be related to and driven by internal plateau forces. Several metamorphic-igneous gneiss dome complexes have been exhumed between the High Himalaya to the south and the Indus-Tsangpo suture zone to the north since the onset of the Indian-Eurasian collision ~50 Ma ago. Although the overall tectonic setting is characterized by convergence, the exhumation of these domes is accommodated by extensional fault systems. Along the Indian-Tibetan border, the poorly described Leo Pargil metamorphic-igneous gneiss dome (31-34°N/77-78°E) is located within the Tethyan Himalaya. New field mapping, structural, and geochronologic data document that the western flank of the Leo Pargil dome was formed by extension along temporally linked normal fault systems. Motion on a major detachment system, referred to as the Leo Pargil detachment zone (LPDZ), has led to the juxtaposition of low-grade metamorphic, sedimentary rocks in the hanging wall and high-grade metamorphic gneisses in the footwall. However, the distribution of new 40Ar/39Ar white mica data indicates a regional cooling event during middle Miocene time. New apatite fission track (AFT) data demonstrate that subsequently more of the footwall was extruded along the LPDZ in a brittle stage between 10 and 2 Ma, with a minimum displacement of ~9 km. Additionally, the AFT data indicate a regional episode of accelerated cooling and exhumation starting at ~4 Ma.
Thus, tectonic processes can affect the entire orogenic system, while potential feedbacks between erosion and tectonics appear to be limited to the windward side of an orogenic system.
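The exhumation rates quoted above are derived from age vs. elevation transects. A minimal sketch of that standard procedure, assuming steady exhumation and a fixed closure depth, and using invented sample data rather than the study's measurements, is a least-squares fit of elevation against AFT age, whose slope is the apparent exhumation rate:

```python
def fit_slope(ages_ma, elev_m):
    """Least-squares slope of elevation vs. AFT age, i.e. the apparent
    exhumation rate under the steady-exhumation assumption."""
    n = len(ages_ma)
    mx = sum(ages_ma) / n
    my = sum(elev_m) / n
    num = sum((x - mx) * (y - my) for x, y in zip(ages_ma, elev_m))
    den = sum((x - mx) ** 2 for x in ages_ma)
    return num / den  # metres of elevation gained per Ma of age

# Hypothetical age-elevation transect
ages = [1.0, 1.5, 2.0, 2.5, 3.0]        # AFT ages in Ma
elev = [1600, 2300, 3000, 3700, 4400]   # sample elevations in m

rate_mm_per_a = fit_slope(ages, elev) / 1000.0  # km/Ma is numerically mm/a
print(round(rate_mm_per_a, 2))   # → 1.4
```

In practice the quoted uncertainties (e.g. ±0.2 mm/a) would come from the regression statistics and the analytical age errors.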
The India-Eurasia continental collision zone provides a spectacular example of active mountain building and climatic forcing. In order to quantify the critically important process of mass removal, I analyzed spatial and temporal precipitation patterns of the oscillating monsoon system and their geomorphic imprints. I processed passive microwave satellite data to derive high-resolution rainfall estimates for the last decade and identified an abnormal monsoon year in 2002. During this year, precipitation migrated far into the Sutlej Valley in the northwestern part of the Himalaya and reached regions behind orographic barriers that are normally arid. There, sediment flux, mean basin denudation rates, and channel-forming processes such as erosion by debris flows increased significantly. Similarly, during the late Pleistocene and early Holocene, solar forcing increased the strength of the Indian summer monsoon for several millennia and presumably led to precipitation distributions analogous to those observed during 2002. However, the persistent humid conditions in the steep, high-elevation parts of the Sutlej River resulted in deep-seated landsliding. Landslides were exceptionally large, mainly due to two processes that I infer for this time: first, at the onset of the intensified monsoon at 9.7 ka BP, heavy rainfall and high river discharge removed material stored along the river and lowered the base level. Second, enhanced discharge, sediment flux, and increased pore-water pressures along the hillslopes eventually led to exceptionally large landslides that have not been observed in other periods. The excess sediments removed from the upstream parts of the Sutlej Valley were rapidly deposited in the low-gradient sectors of the lower Sutlej River. The timing of downcutting correlates with centennial-long weaker monsoon periods that were characterized by lower rainfall.
I explain this relationship by taking sediment flux and rainfall dynamics into account: high sediment flux derived from the upstream parts of the Sutlej River during strong monsoon phases prevents fluvial incision by oversaturating the fluvial sediment-transport capacity. In contrast, weaker monsoons result in a lower sediment flux that allows incision in the low-elevation parts of the Sutlej River.
It is known that the efficiency of organic light-emitting devices (OLEDs) is strongly influenced by the 'quality' of the thin films [1]. On this basis, the work presented in this thesis aimed to obtain a better understanding of the structure of organic thin films of general interest in the field of organic light-emitting devices by using scanning probe microscopies (SPMs). A previously unreported crystal structure of a quaterthiophene film grown on potassium hydrogen phthalate (KHP) is determined by optical measurements, a simulation program, diffraction at both normal incidence and grazing angle, and AFM. The crystal cell is triclinic with parameters a = 0.721 nm, b = 0.632 nm, c = 0.956 nm and α = 91°, β = 91.4°, γ = 91° [2]. The morphologies of four organic thin films deposited on gold are characterized by ultra-high-vacuum scanning tunneling microscopy (UHV-STM): terraces in a hexanethiol monolayer, lamellar structures in an azobenzenethiol monolayer, rods in a poly(paraphenylenevinylene) oligomer film, and a granular morphology in an oxadiazole film. The topographies of a series of poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonate) (PEDOT/PSS) films deposited on indium-tin oxide (ITO) and gold, obtained from dispersions with PEDOT:PSS weight ratios of 1:20, 1:6 and 1:1, are investigated by AFM. It is demonstrated that the films show the same topography on gold as on ITO and that the PEDOT films eliminate the spike features of ITO. PEDOT 1:20 and 1:6 appear indistinguishable from each other but different from PEDOT 1:1 (the most conductive). By coupling STM and I-d measurements, a previously unreported structural model of PEDOT 1:1 on gold is obtained [3]: the surface presents grains, while the bulk contains particles/domains rich in PEDOT embedded in a PEDOT-poor matrix. The equation of conductivity is derived.
An STM investigation of four PEDOT films deposited on ITO, obtained from dispersions with the same PEDOT:PSS weight ratio of 1:1, is carried out [4]. The films differ either in the presence of sorbitol or in the synthetic route (and they exhibit different conductivities). For the first time, a quantitative and qualitative correlation between the nanometer-scale morphology of PEDOT films with and without sorbitol and their conductivity is established.
It has been known for several years that, under certain conditions, electrons can be confined within thin layers even if these layers consist of metal and are supported by a metal substrate. In photoelectron spectra, such layers show characteristic discrete energy levels, and it has turned out that these lead to large effects like the oscillatory magnetic coupling technically exploited in modern hard disk read heads. The current work asks to what extent the concepts underlying quantization in two-dimensional films can be transferred to lower dimensionality. This problem is approached by a stepwise transition from two-dimensional layers to one-dimensional nanostructures. On the one hand, these nanostructures are represented by terraces on atomically stepped surfaces; on the other hand, by atom chains which are deposited onto these terraces, up to complete coverage by atomically thin nanostripes. Furthermore, self-organization effects are used in order to arrive at perfectly one-dimensional atomic arrangements at surfaces. Angle-resolved photoemission is particularly suited as the method of investigation because it reveals the behavior of the electrons in these nanostructures in dependence on the spatial direction, which distinguishes it from, e.g., scanning tunneling microscopy. With this method, intense and at times surprisingly large effects of one-dimensional quantization are observed for various exemplary systems, partly for the first time. The essential role of band gaps in the substrate, known from two-dimensional systems, is confirmed for nanostructures. In addition, we reveal an ambiguity without precedent in two-dimensional layers between spatial confinement of electrons on the one side and superlattice effects on the other, as well as between effects caused by the sample and by the measurement process. The latter effects are huge and can dominate the photoelectron spectra.
Finally, the effects of reduced dimensionality are studied in particular for the d electrons of manganese, which are additionally affected by strong correlation effects. Surprising results are also obtained here. (Links to the respective sources of the publications included in the appendix can be found on page 83 of the full text.)
The topic of synchronization forms a link between nonlinear dynamics and neuroscience. On the one hand, neurobiological research has shown that the synchronization of neuronal activity is an essential aspect of the working principle of the brain. On the other hand, recent advances in physical theory have led to the discovery of the phenomenon of phase synchronization. A method of data analysis motivated by this finding - phase synchronization analysis - has already been successfully applied to empirical data. The present doctoral thesis ties in with these converging lines of research. Its subject is methodical contributions to the further development of phase synchronization analysis, as well as its application to event-related potentials, a form of EEG data that is especially important in the cognitive sciences. The methodical contributions of this work consist firstly of a number of specialized statistical tests for a difference in synchronization strength between two different states of a system of two oscillators. Secondly, in view of the many-channel character of EEG data, an approach to multivariate phase synchronization analysis is presented. For the empirical investigation of neuronal synchronization, a classic experiment on language processing was replicated, comparing the effect of a semantic violation in a sentence context with that of a manipulation of physical stimulus properties (font color). Here, phase synchronization analysis detects a decrease of global synchronization for the semantic violation as well as an increase for the physical manipulation. In the latter case, by means of the multivariate analysis, the global synchronization effect can be traced back to an interaction of symmetrically located brain areas. The findings presented show that the physically motivated method of phase synchronization analysis is able to provide a relevant contribution to the investigation of event-related potentials in the cognitive sciences.
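A common statistic in phase synchronization analysis is the phase-locking value: the modulus of the mean phase-difference vector of two signals. The sketch below uses synthetic phase series, not the EEG data of the thesis, and assumes the instantaneous phases have already been extracted (e.g. via the Hilbert transform):

```python
import cmath
import math
import random

def plv(phases_a, phases_b):
    """Phase-locking value: modulus of the mean unit vector of the phase
    difference; 1 means perfect phase locking, values near 0 mean none."""
    z = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(z) / len(phases_a)

random.seed(0)
n = 2000
# Hypothetical instantaneous phases of two channels
locked_a = [0.01 * k for k in range(n)]
locked_b = [0.01 * k + 0.4 + random.gauss(0, 0.1) for k in range(n)]  # near-constant lag
free_b = [random.uniform(-math.pi, math.pi) for _ in range(n)]        # unrelated phases

print(round(plv(locked_a, locked_b), 2))   # close to 1
print(round(plv(locked_a, free_b), 2))     # close to 0
```

Comparing such indices between experimental conditions, and testing the differences statistically, is the kind of analysis the thesis develops and applies channel-pair by channel-pair.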
The overall objective of the study is the elaboration of quantitative methods for national conservation planning, consistent with the international 'hotspots' approach. This objective requires solving the following problems: 1) How can large-scale vegetation diversity be estimated from abiotic factors alone? 2) How can the 'global hotspots' approach be adapted to delimit national biodiversity hotspots? 3) How can conservation targets be set that account for differences in environmental conditions and human threats between national biodiversity hotspots? 4) How can a large-scale national conservation plan be designed that reflects the hierarchical nature of biodiversity? The case study for national conservation planning is Russia. Conclusions: · Large-scale vegetation diversity can be predicted to a major extent from the climatically determined latent heat for evaporation and the geometrical structure of the landscape, described as an altitudinal difference; the climate-based model reproduces the observed numbers of vascular plant species for different areas of the world with an average error of 15% · National biodiversity hotspots can be mapped from biotic or abiotic data using the quantitative criteria for plant endemism and land use from the 'global hotspots' approach, adjusted for the country · Quantitative conservation targets accounting for differences in environmental conditions and human threats between national biodiversity hotspots can be set using national Red Data Book species data · A large-scale national conservation plan reflecting the hierarchical nature of biodiversity can be designed by combining an abiotic method at the national scale (identification of large-scale hotspots) with a biotic method at the regional scale (analysis of species data from the Red Data Book)
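A minimal sketch of how such a climate-based richness model and its quoted skill measure might look. The log-linear functional form and all coefficients are hypothetical placeholders, not the fitted model from the study; only the average-relative-error measure corresponds directly to the 15% figure in the abstract.

```python
def predicted_species_richness(latent_heat, altitude_range_m,
                               a=2.0, b=0.0008, c=0.0004):
    """Illustrative log-linear model: log10(richness) increases with the
    climatically available latent heat for evaporation and with landscape
    relief (altitudinal range). Coefficients a, b, c are hypothetical."""
    log_s = a + b * latent_heat + c * altitude_range_m
    return 10 ** log_s

def mean_relative_error(observed, predicted):
    """Average relative error, the skill measure quoted in the abstract."""
    return sum(abs(o - p) / o
               for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical observed vs modelled richness for three regions:
obs = [120.0, 300.0, 80.0]
pred = [100.0, 330.0, 90.0]
print(mean_relative_error(obs, pred))
```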
In this thesis, magnetohydrodynamic jet formation and the effects of magnetic diffusion on the formation of axisymmetric protostellar jets have been investigated in three different simulation sets. The time-dependent numerical simulations were performed using the magnetohydrodynamic ZEUS-3D code.
Robotic telescopes & Doppler imaging : measuring differential rotation on long-period active stars
(2004)
The Sun shows a wide variety of magnetic-activity-related phenomena. The magnetic field responsible for this is generated by a dynamo process that is believed to operate in the tachocline, located at the bottom of the convection zone. This dynamo is driven in part by differential rotation and in part by magnetic turbulence in the convection zone. The surface differential rotation, one key ingredient of dynamo theory, can be measured by tracing sunspot positions. To extend the parameter space for dynamo theories, one can extend these measurements to stars other than the Sun. The primary obstacle in this endeavor is the lack of resolved surface images of other stars. This can be overcome by the Doppler imaging technique, which uses the rotation-induced Doppler broadening of spectral lines to compute the surface distribution of a physical parameter such as temperature. To obtain the surface image of a star, high-resolution spectroscopic observations, evenly distributed over one stellar rotation period, are needed. This turns out to be quite complicated for long-period stars. The upcoming robotic observatory STELLA addresses this problem with a dedicated scheduling routine tailored for Doppler imaging targets. This will make observations for Doppler imaging not only easier but also more efficient. As a preview of what can be done with STELLA, we present results of a Doppler imaging study of seven stars, all of which show evidence for differential rotation; unfortunately, owing to unsatisfactory data quality, the errors are of the same order of magnitude as the measurements, a limitation that will not occur with STELLA. Both cross-correlation analysis and the sheared-image technique were used to cross-check the results where possible.
For four of these stars, weak anti-solar differential rotation was found, in the sense that the pole rotates faster than the equator; for the other three stars, weak differential rotation in the same direction as on the Sun was found. Finally, these new measurements, along with other published measurements of differential rotation using Doppler imaging, were analyzed for correlations with stellar evolution, binarity, and rotation period. The total sample of stars shows a significant correlation with rotation period, but when separated into anti-solar and solar-type behavior, only the subsample showing anti-solar differential rotation retains this correlation. Additionally, there is evidence that binary stars show less differential rotation than single stars, as suggested by theory. All other parameter combinations fail to deliver any results because of the still small sample of stars available.
In this work, the basic principles of self-organization of diblock copolymers having the inherent property of selective or specific non-covalent binding were examined. By the introduction of electrostatic, dipole–dipole, or hydrogen bonding interactions, it was hoped to add complexity to the self-assembled mesostructures and to extend the level of ordering from the nanometer to a larger length scale. This work may be seen in the framework of biomimetics, as it combines features of synthetic polymer and colloid chemistry with basic concepts of structure formation applying in supramolecular and biological systems. The copolymer systems under study were (i) block ionomers, (ii) block copolymers with acetoacetoxy chelating units, and (iii) polypeptide block copolymers.
The following work is embedded in the multidisciplinary study DESERT (DEad SEa Rift Transect), which has been carried out in the Middle East since the beginning of the year 2000. It focuses on the structure of the southern Dead Sea Transform (DST), the transform plate boundary between Africa (Sinai) and the Arabian microplate. The left-lateral displacement along this major active strike-slip fault amounts to more than 100 km since Miocene times. The DESERT near-vertical seismic reflection (NVR) experiment crossed the DST in the Arava Valley between the Red Sea and the Dead Sea, where its main fault is called the Arava Fault. The 100 km long profile extends in a NW-SE direction from Sede Boqer/Israel to Ma'an/Jordan and coincides with the central part of a wide-angle seismic refraction/reflection line. Near-vertical seismic reflection studies are powerful tools for studying the crustal architecture down to the crust/mantle boundary. Although they cannot directly image steeply dipping fault zones, they can give indirect evidence of transform motion through offset reflectors or an abrupt change in the reflectivity pattern. Since no seismic reflection profile had crossed the DST before DESERT, important aspects of this transform plate boundary and the related crustal structures were not known. This study therefore aimed to resolve the DST's manifestation in both the upper and the lower crust. It was to show whether the DST penetrates into the mantle and whether it is associated with an offset of the crust/mantle boundary, as observed at other large strike-slip zones. In this work, a short description of the seismic reflection method and the various processing steps is followed by a geological interpretation of the seismic data, taking into account relevant information from other studies. Geological investigations in the area of the NVR profile showed that the Arava Fault can partly be recognized in the field by small scarps in the Neogene sediments, small pressure ridges or rhomb-shaped grabens.
A typical fault-zone architecture with a fault gouge, a fault-related damage zone, and undeformed host rock, as reported from other large fault zones, could not be found. Therefore, as a complement to the NVR experiment, which was designed to resolve deeper crustal structures, ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) satellite images were used to analyze surface deformation and determine neotectonic activity.
The age-by-complexity effect is the dominant empirical pattern in cognitive aging. The current report investigates whether a specific high-level mechanism, an age-related decrease in the reliability of episodic accumulators, can account for the age-by-complexity effect, which is commonly assumed to be caused by an unspecific, low-level deficit. Groups of younger and older adults are compared in six reaction-time experiments, using orthogonal manipulations of early cognitive difficulty (e.g., Stroop condition) and episodic demands (e.g., stimulus-response mapping). The predicted three-way interaction of age and the two factors was observed fairly consistently across experiments. A modified Brinley analysis shows that different regression slopes in old-young space are required for conditions with low and high episodic difficulty. As a methodological contribution, a Brinley regression model following from certain simple processing assumptions is developed. It is shown that, in contrast to a standard Brinley meta-analysis, the regression slopes in this model are not influenced by theoretically uninteresting between-experiment variance.
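The basic idea of a Brinley analysis, regressing older adults' condition means on younger adults' means and comparing slopes between low- and high-episodic-difficulty conditions, can be sketched as follows. The reaction times are invented for illustration and the model here is a plain least-squares fit, not the modified regression model developed in the report.

```python
def fit_line(x, y):
    """Ordinary least squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical condition means (ms): young RTs and corresponding old RTs.
young = [400.0, 500.0, 600.0, 700.0]
old_low_episodic = [480.0, 610.0, 740.0, 870.0]    # mild slowing
old_high_episodic = [560.0, 740.0, 920.0, 1100.0]  # disproportionate slowing

slope_low, _ = fit_line(young, old_low_episodic)
slope_high, _ = fit_line(young, old_high_episodic)
print(slope_low, slope_high)
```

A steeper slope for the high-episodic conditions is the pattern that motivates separate regression lines in old-young space.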
The goal of this work was to study the binding of ions to polymers and to lipid bilayer membranes in aqueous solutions. In the first part of this work, the influence of various inorganic salts and polyelectrolytes on the structure of water was studied using Isothermal Titration Calorimetry (ITC). The heat of dilution of the salts was used as a measure of the water-structure making and breaking of the ions. The heats of dilution could be ordered according to the Hofmeister series. Following this, the binding of Ca2+ to poly(sodium acrylate) (NaPAA) was studied. ITC and a Ca2+ ion-selective electrode were used to measure the reaction enthalpy and the binding isotherm. Binding of Ca2+ ions to PAA was found to be highly endothermic and therefore solely driven by entropy. We then compared the binding of ions to the one-dimensional PAA polymer chain with the binding to lipid vesicles carrying the same functional groups. As for the polymer, Ca2+ binding was found to be endothermic. Binding of calcium to the lipid bilayer was found to be weaker than to the polymer. In the context of these experiments, it was shown that Ca2+ binds not only to charged but also to zwitterionic lipid vesicles. Finally, we studied the interaction of two salts, KCl and NaCl, with a neutral polymer gel, PNIPAAM, and with the ionic polymer PAA. Combining calorimetry and a potassium-selective electrode, we observed that the ions interact with both polymers, whether charged or not.
Adherent cells constantly collect information about the mechanical properties of their extracellular environment by actively pulling on it through cell-matrix contacts, which act as mechanosensors. In recent years, the sophisticated use of elastic substrates has shown that cells respond very sensitively to changes in the effective stiffness of their environment and reorganize their cytoskeleton in response to mechanical input. We develop a theoretical model to predict cellular self-organization in soft materials on a coarse-grained level. Although cell organization in principle results from complex regulatory events inside the cell, the typical response to mechanical input seems to be a simple preference for large effective stiffness, possibly because force is generated more efficiently in a stiffer environment. The term effective stiffness comprises the effects of both rigidity and prestrain in the environment. This observation can be turned into an optimization principle in elasticity theory. By specifying the cellular probing-force pattern and by modeling the environment as a linear elastic medium, one can predict preferred cell orientation and position. Various examples of cell organization that are of great practical interest are considered theoretically: cells in external strain fields and cells close to boundaries or interfaces, for different sample geometries and boundary conditions. For this purpose, the elastic equations are solved exactly for an infinite space, an elastic half space and the elastic sphere. The predictions of the model are in excellent agreement with experiments on fibroblast cells, both on elastic substrates and in hydrogels. Mechanically active cells like fibroblasts could also interact elastically with each other.
We calculate the optimal structures on elastic substrates as a function of material properties, cell density, and the geometry of cell positioning that allow each cell to maximize the effective stiffness in its environment due to the traction of all the other cells. Finally, we apply Monte Carlo simulations to study the effect of noise on cellular structure formation. The model not only contributes to a better understanding of many physiological situations; in the future it could also be used in biomedical applications to optimize protocols for artificial tissues with respect to sample geometry, boundary conditions, material properties, or cell density.
This work deals with the connection between two basic phenomena in nonlinear dynamics: synchronization of chaotic systems and recurrences in phase space. Synchronization takes place when two or more systems adapt (synchronize) some characteristic of their respective motions, due to an interaction between the systems or to a common external forcing. The appearance of synchronized dynamics in chaotic systems is rather universal but not trivial. In some sense, the possibility that two chaotic systems synchronize is counterintuitive: chaotic systems are characterized by their sensitivity to initial conditions. Hence, two identical chaotic systems starting at two slightly different initial conditions evolve differently and, after a certain time, become uncorrelated. Therefore, at first glance, it does not seem plausible that two chaotic systems are able to synchronize. But as we will see later, synchronization of chaotic systems has been demonstrated. On the one hand it is important to investigate the conditions under which synchronization of chaotic systems occurs, and on the other hand to develop tests for the detection of synchronization. In this work, I have concentrated on the second task for the cases of phase synchronization (PS) and generalized synchronization (GS). Several measures have been proposed so far for the detection of PS and GS. However, difficulties arise in the detection of synchronization in systems subjected to rather large amounts of noise and/or nonstationarity, which are common when analyzing experimental data. The new measures proposed in the course of this thesis are rather robust with respect to these effects. They can hence be applied to data that have evaded synchronization analysis so far. The proposed tests for synchronization in this work are based on the fundamental property of recurrences in phase space.
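The recurrence-based idea can be illustrated with a deliberately simplified joint-recurrence index: two time series are compared through the overlap of their recurrence pairs. The logistic-map example and the threshold eps are illustrative, and this Jaccard-style overlap is only a sketch, not the refined PS/GS measures developed in the thesis.

```python
def recurrences(x, eps):
    """Set of index pairs (i, j), i < j, with |x_i - x_j| < eps."""
    n = len(x)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(x[i] - x[j]) < eps}

def recurrence_sync_index(x, y, eps):
    """Fraction of recurrences shared by both series (1 = identical
    recurrence structure, near 0 = unrelated structure)."""
    rx, ry = recurrences(x, eps), recurrences(y, eps)
    return len(rx & ry) / max(1, len(rx | ry))

def logistic(x0, n, r=4.0):
    """Orbit of the fully chaotic logistic map."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic(0.2, 300)
b = logistic(0.2000001, 300)  # nearby start: the orbits decorrelate
print(recurrence_sync_index(a, a, 0.05))  # identical series: 1.0
print(recurrence_sync_index(a, b, 0.05))  # decorrelated: well below 1
```

This also shows the counterintuitive point made above: despite identical dynamics, a tiny difference in initial conditions destroys the shared recurrence structure.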
Paleomagnetic dating of climatic events in Late Quaternary sediments of Lake Baikal (Siberia)
(2004)
Lake Baikal provides an excellent climatic archive for Central Eurasia, as global climatic variations are continuously recorded in its sediments. We performed continuous rock magnetic and paleomagnetic analyses on hemipelagic sequences retrieved from four underwater highs, reaching back 300 ka. The rock magnetic study, combined with TEM, XRD, XRF and geochemical analyses, showed that magnetite of detrital origin dominates the magnetic signal in glacial sediments, whereas interglacial sediments are affected by early diagenesis. HIRM roughly quantifies the hematite and goethite contributions and remains the best proxy for estimating the detrital input in Lake Baikal. Relative paleointensity records of the Earth's magnetic field show a reproducible pattern, which allows correlation with well-dated reference curves and thus provides an alternative age model for Lake Baikal sediments. Using the paleomagnetic age model, we observed that cooling in the Lake Baikal region and cooling of the sea surface water in the North Atlantic, as recorded in planktonic foraminiferal δ18O, are coeval. On the other hand, benthic δ18O curves mainly record the global ice-volume change, which occurs later than the sea surface temperature change. This shows that a dating bias results from an age model based on the correlation of Lake Baikal sedimentary records with benthic δ18O curves. The compilation of paleomagnetic curves provides a new relative paleointensity curve, “Baikal 200”. With a laser-assisted grain-size analysis of the detrital input, three facies types, reflecting different sedimentary dynamics, can be distinguished. (1) Glacial periods are characterised by a high clay content, mostly due to wind activity, and by the occurrence of a coarse fraction (sand) transported over the ice by local winds. This fraction gives evidence of aridity in the hinterland.
(2) At glacial/interglacial transitions, the quantity of silt increases as the moisture increases, reflecting increased sedimentary dynamics; wind transport and snow trapping are the dominant processes bringing silt to a hemipelagic site. (3) During the climatic optimum of the Eemian, silt size and quantity are minimal due to blanketing of the detrital sources by the vegetation cover.
Understanding stars, their magnetic activity phenomena and the underlying dynamo action is the foundation for understanding 'life, the universe and everything', as stellar magnetic fields play a fundamental role in star and planet formation and for the terrestrial atmosphere and climate. Starspots are the fingerprints of magnetic field lines and thereby the most important sign of activity in a star's photosphere. However, they cannot be observed directly, as it is not (yet) possible to spatially resolve the surfaces of even the nearest neighbouring stars. Therefore, an indirect approach called 'Doppler imaging' is applied, which allows the surface spot distribution of rapidly rotating, active stars to be reconstructed. In this work, data from 11 years of continuous spectroscopic observations of the active binary star EI Eridani are reduced and analysed. 34 Doppler maps are obtained, and the problem of how to parameterise the information content of Doppler maps is discussed. Three approaches for parameter extraction are introduced and applied to all maps: average temperature, separated into several latitude bands; fractional spottedness; and, for the analysis of the structural temperature distribution, longitudinal and latitudinal spot-occurrence functions. The resulting values do not show a distinct correlation with the activity cycle proposed from photometric long-term observations, suggesting that the photometric activity cycle is not accompanied by a spot cycle as seen on the Sun. The general morphology of the spot pattern on EI Eri remains persistent over the whole period of 11 years. In addition, a detailed parameter study is performed. Improved orbital parameters suggest that EI Eri might be accompanied by a third star in a wide orbit of about 19 years. Preliminary differential rotation measurements are carried out, indicating an anti-solar orientation.
In this thesis, dynamical structures and manifolds in closed chaotic flows are investigated. Knowledge of the dynamical structures (and manifolds) of a system is important, since they provide first information about its dynamics; with their help we are able to characterize the flow and perhaps even forecast its dynamics. The visualization of such structures in closed chaotic flows is a difficult and often lengthy process. Here the so-called 'leaking method' is introduced, using examples of simple mathematical maps such as the baker or sine map, with which we are able to visualize subsets of the manifolds of the system's chaotic saddle. Comparisons between the visualized manifolds and structures traced out by chemical or biological reactions superimposed on the same flow are made using the example of a kinematic model of the Gulf Stream. It is shown that with the help of the leaking method dynamical structures can also be visualized in environmental systems. Using the example of a realistic model of the Mediterranean Sea, the leaking method is extended to the 'exchange method'. The exchange method allows us to characterize transport between two regions, to visualize transport routes and their exchange sets, and to calculate exchange times. Exchange times and sets are calculated and shown for a northern and a southern region in the western basin of the Mediterranean Sea. Furthermore, mixing properties in the Earth's mantle are characterized, and geometrical properties of manifolds in a three-dimensional mathematical model (the ABC map) are investigated.
In this work, different approaches are undertaken to improve the understanding of the sucrose-to-starch pathway in developing potato tubers. First, an inducible gene expression system of fungal origin is optimised for use in studying metabolism in the potato tuber. It is found that the alc system from Aspergillus nidulans responds more rapidly to acetaldehyde than to ethanol, and that acetaldehyde has fewer side effects on metabolism. The optimal induction conditions are then used to study the effects of temporally controlled cytosolic expression of a yeast invertase on the metabolism of potato tubers. The observed differences between induced and constitutive expression of the invertase lead to the conclusion that glycolysis is induced after an ATP demand has been created by an increase in sucrose cycling. Furthermore, the data suggest that in the potato tuber maltose is a product of glucose condensation rather than of starch degradation. In the second part of the work, it is shown that the expression of a yeast invertase in the vacuole of potato tubers has effects on metabolism similar to those of the expression of the same enzyme in the apoplast. These observations give further evidence for the presence of a mechanism by which sucrose is taken up into the vacuole via endocytosis rather than directly into the cytosol via transporters. Finally, a kinetic in silico model of sucrose breakdown is presented that is able to simulate this part of potato tuber metabolism on a quantitative level. Furthermore, it can predict the metabolic effects of the introduction of a yeast invertase into the cytosol of potato tubers with astonishing precision. In summary, these data show that inducible gene expression and kinetic computer models of metabolic pathways are useful tools for greatly improving the understanding of plant metabolism.
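The simplest form such a kinetic in silico model could take is a single Michaelis-Menten step for invertase-catalysed sucrose cleavage, integrated by forward Euler. The rate law and all parameter values below are placeholders for illustration, not the fitted constants of the tuber model, which couples many such reactions.

```python
def simulate_invertase(s0=100.0, vmax=2.0, km=10.0, dt=0.01, t_end=200.0):
    """Forward-Euler integration of Michaelis-Menten sucrose cleavage by
    invertase: dS/dt = -vmax * S / (km + S); each cleaved sucrose yields
    one glucose (fructose, produced in parallel, is not tracked).
    Parameter values are illustrative."""
    s, g, t = s0, 0.0, 0.0
    while t < t_end:
        rate = vmax * s / (km + s)  # saturating enzyme kinetics
        s -= rate * dt
        g += rate * dt
        t += dt
    return s, g

s_end, glc_end = simulate_invertase()
print(s_end, glc_end)  # nearly all sucrose converted by t_end
```

A full pathway model stacks several such rate laws and solves them simultaneously, which is what allows quantitative predictions of the invertase perturbation.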
Polymer optical fibers (POFs) are a rather new tool for high-speed data transfer by modulated light. They allow the transport of large amounts of data over distances of up to about 100 m without being influenced by external electromagnetic fields. Due to the organic chemical nature of POFs, they are sensitive to the climate of their environment, and so are their optical properties. Hence, optical stability is a key issue for long-term applications of POFs. The causes of the loss of optical transmission due to climatic exposure (aging/degradation) are investigated by means of chemical analytical tools such as chemiluminescence (CL) and Fourier transform infrared (FTIR) spectroscopy for five step-index multimode PMMA-based POFs from different manufacturers and for seven different climatic conditions. Three of the five POF samples are studied in more detail to identify the effects of individual parameters and to forecast long-term optical stability from short-term exposure tests. At first, the unexposed POF components (core, cladding, and bare POF as the combination of core and cladding) are characterized with respect to important physical and chemical properties. The glass transition temperature Tg and the melting temperature Tm lie in the region of 120 °C to 140 °C; the molecular weight (Mw) of the cores is of the order of 10^5 g mol^-1. The POFs are found to have chemically different cladding compositions, as detected by FTIR, but identical core compositions. Two of the POFs are exposed as cables (core, cladding and jacket) for about 3300 hours to a climate of 92 °C / 95 % relative humidity (RH), resulting in different transmission decreases. Investigation of the corresponding unexposed and exposed bare POFs for degradation using CL, FTIR, thermogravimetry (TG), UV/visible transmittance and gel permeation chromatography (GPC) suggests that the claddings of POFs are more affected than the cores.
The observed loss of transmission is probably due mainly to increased light absorption and to imperfections at the core-cladding boundary caused by extensive degradation of the claddings. Hence, it is quite possible that the optical transmission stability of POFs is governed mainly by the thermo-oxidative stability of the cladding and to a lesser extent by that of the core. Three bare POFs (core and cladding only) are exposed for different durations (30 hours to 4500 hours) to 92 °C / 95 %RH, 92 °C / 50 %RH, 50 °C / 95 %RH, 90 °C / low humidity, 100 °C / low humidity, 110 °C / low humidity and 120 °C / low humidity. In these climates, their transmission variations are also found to differ from each other. The outcomes strongly indicate that under high-temperature, high-humidity climates physical changes, such as volume expansion, are the main sources of the loss of optical transmission. The optical transmission stability of POFs is also found to depend on the chemical composition of the claddings. Under high-temperature, low-humidity conditions, the loss of transmission at the early stages of exposure is mainly caused by physical changes, presumably core-cladding interface imperfections. For the later stages of exposure, an additional increase of light absorption by core and cladding owing to degradation is proposed. Optical simulation results obtained in parallel by L. Jankowski (a PhD student at BAM) confirm these findings. For bare POFs, too, the optical stability seems to depend on their thermo-oxidative stability. Some short-term exposure tests are conducted to identify the influences of individual climatic parameters on the transmission properties of POFs. It is found that under stationary high-temperature, variable-humidity conditions POFs display a partially reversible transmission loss due to physically absorbed water. In the case of varying temperature and constant high humidity, however, such reversibility is hardly noticeable.
At room temperature and varying humidity, however, POFs display a fully reversible transmission loss. The research described above has to be regarded as a starting point for further investigations. The manufacturers' restricted release of fundamental POF data and the time-consuming aging by climatic exposure restrict the results more or less to the samples investigated here. Significant general statements would require, for example, additional information on the variation of POF properties due to production. Nevertheless, the tests described here have the capability of approximating and forecasting the long-term optical transmission stability of POFs. -------------- Also published in print: Appajaiah, Anilkumar: Climatic stability of polymer optical fibers (POF) / Anilkumar Appajaiah. - Bremerhaven : Wirtschaftsverl. NW, Verl. für neue Wiss., 2005. - Various pagination [ca. 175 pp.] : ill., graphs. - (BAM-Dissertationsreihe ; 9) ISBN 3-86509-302-7
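One common way to extrapolate short-term, high-temperature exposure tests to service conditions, not necessarily the procedure used in this work, is an Arrhenius-type acceleration factor. The activation energy below is a placeholder, not a value measured in the thesis.

```python
import math

R = 8.314  # gas constant, J / (mol K)

def acceleration_factor(t_test_c, t_use_c, ea_j_mol=80e3):
    """Arrhenius acceleration factor between an accelerated-aging
    temperature and a service temperature, assuming a single thermally
    activated degradation process with activation energy ea_j_mol
    (a hypothetical placeholder value)."""
    t_test = t_test_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp(ea_j_mol / R * (1.0 / t_use - 1.0 / t_test))

# Under these assumptions, 1000 h of aging at 100 C would correspond
# to this many hours of degradation at 25 C:
equivalent_hours = 1000.0 * acceleration_factor(100.0, 25.0)
print(equivalent_hours)
```

Humidity effects, which the thesis shows to be important, would need a separate acceleration model; a pure Arrhenius factor covers only the thermal part.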
This thesis discusses theoretical and practical aspects of modelling light propagation in non-aged and aged step-index polymer optical fibres (POFs). Special attention has been paid to describing the optical characteristics of non-ideal fibres, scattering and attenuation, and to combining application-oriented and theoretical approaches. Precedence has been given to practical issues, but much effort has also been spent on the theoretical analysis of the basic mechanisms governing light propagation in cylindrical waveguides. As a result, a practically usable general POF model based on the raytracing approach has been developed and implemented. A systematic numerical optimisation of its parameters has been performed to obtain the best fit between simulated and measured optical characteristics of numerous non-aged and aged fibre samples. The model was validated by the good agreement obtained, especially for the non-aged fibres. The relations found between aging time and the optimal values of the model parameters contribute to a better understanding of the aging mechanisms of POFs.
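The raytracing approach can be illustrated for the simplest case, a meridional ray in an idealized step-index fibre: the ray is guided only if it hits the core-cladding interface beyond the critical angle, and its path length and reflection count grow with its inclination. The refractive indices and geometry are typical PMMA-POF values assumed for illustration; the thesis's model additionally handles skew rays, scattering and attenuation.

```python
import math

def meridional_ray(theta_deg, n_core=1.49, n_clad=1.40,
                   core_radius_mm=0.49, length_mm=1000.0):
    """Trace one meridional ray through an idealized step-index fibre.
    theta_deg is the ray's angle to the fibre axis. Returns
    (guided, path_length_mm, reflections). Values are illustrative."""
    theta = math.radians(theta_deg)
    # Guidance requires the angle of incidence at the core/cladding
    # interface (measured from the normal) to exceed the critical angle.
    incidence = math.pi / 2 - theta
    critical = math.asin(n_clad / n_core)
    if incidence < critical:
        return False, 0.0, 0  # refracts into the cladding and is lost
    path = length_mm / math.cos(theta)  # zigzag path is longer than the fibre
    if theta == 0.0:
        return True, path, 0
    zigzag = 2.0 * core_radius_mm / math.tan(theta)  # axial run per bounce
    return True, path, int(length_mm / zigzag)

print(meridional_ray(5.0))   # shallow ray: guided, many reflections
print(meridional_ray(30.0))  # steep ray: lost into the cladding
```

Summing contributions of many such rays, each weighted by interface and bulk losses, yields the simulated fibre characteristics that are fitted to measurements.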