This thesis provides a novel view of the early stage of crystallization, using calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance: it is a major contributor to water hardness, it is the most abundant biomineral, and it forms huge amounts of geological sediments, thereby binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. Time-dependent measurement of the Ca2+ potential and concurrent constant-pH titration allow calculation of the amount of calcium and carbonate ions bound in pre-nucleation-stage clusters, which had not previously been detected experimentally, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation-stage clusters and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of 70 calcium and carbonate ions in total. Both experiments show that pre-nucleation-stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster-formation equilibrium is characterized physico-chemically as a multiple-binding equilibrium of calcium ions to a ‘lattice’ of carbonate ions. The evaluation gives the standard Gibbs energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximum of approximately 17.2 kJ mol^-1 at pH = 9.75, corresponding to a minimum binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline.
At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are obtained as well. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low and at high binding strength in clusters (ACC I: 3.1 × 10^-8 M^2, ACC II: 3.8 × 10^-8 M^2), indicating the precipitation of at least two different ACC species, with the clusters providing the precursor species of ACC. It is plausible that ACC I relates to calcitic ACC, i.e. ACC exhibiting short-range order similar to the long-range order of calcite, and ACC II to vateritic ACC, each subsequently transforming into the corresponding crystalline polymorph as discussed in the literature. Detailed analysis of nucleated particles forming at minimum binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite, with traces of calcite, forms. The crystalline particles at early stages are composed of nano-crystallites of approximately 5 to 10 nm in size, aligned in high mutual order as in mesocrystals. Precipitation at pH = 9.75 was also analysed in the presence of additives: polyacrylic acid (pAA) as a model compound for scale inhibitors, and peptides exhibiting calcium carbonate binding affinity as model compounds for crystal modifiers. These analyses show that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution in favour of crystals that form from ACC I, and exclusively calcite is finally obtained.
Concurrently, the peptide additives analogously inhibit the formation of calcite, and in the case of one of the peptide additives exclusively vaterite is finally obtained. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate. The metastable system is remarkably stabilized by cluster formation, and the clusters forming by means of equilibrium thermodynamics, rather than individual ions, are the nucleation-relevant species. Most likely, cluster formation is a common phenomenon in the precipitation of sparingly soluble compounds, as shown qualitatively for calcium oxalate and calcium phosphate. This finding is important for the fundamental understanding of crystallization and of nucleation inhibition and modification by additives, with impact on materials of great scientific and industrial importance, as well as for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and of scale inhibition, need to be considered on the basis of the newly reported reaction channel.
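The bookkeeping behind the titration experiments (ions bound in clusters equal dosed minus free ions, and an effective equilibrium constant translates into a standard Gibbs energy via dG = -RT ln K) can be sketched numerically. All concentrations below are hypothetical illustration values, not data from the thesis, and activities and reference states are glossed over:

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # temperature, K

# Hypothetical titration readings (not from the thesis): total Ca2+ dosed
# versus free Ca2+ measured with the ion-selective electrode, in mol L^-1.
ca_total = 5.0e-4
ca_free  = 3.2e-4

# Ions bound in pre-nucleation clusters = dosed minus free.
ca_bound = ca_total - ca_free

# Treating cluster formation as an equilibrium, an effective ion-pair
# formation constant follows from the bound/free ratio (assumed free
# carbonate concentration; illustrative value only).
co3_free = 1.0e-4
K = ca_bound / (ca_free * co3_free)

# Standard Gibbs energy of ion-pair formation in clusters.
dG = -R * T * math.log(K)   # J mol^-1
print(f"bound Ca2+: {ca_bound:.2e} M, K = {K:.3g}, dG = {dG/1000:.1f} kJ/mol")
```

With real data this calculation would be repeated at each pH value to obtain the pH dependence of the binding strength described above.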
Multiplexed diagnostics and spectroscopic ruler applications with terbium to quantum dots FRET
(2008)
Human transformation of the Earth’s land surface has far-reaching and important consequences for the functioning of hydrological and hydrochemical processes in watersheds. Today, land-use change from forest to pasture is a major issue, in particular in the tropics. Sustainable management of deforested areas requires an in-depth understanding of the water and nutrient cycles. On this basis, we compared the hydrological pathways by which rainfall reaches streams, and the nutrient budgets, of a tropical rainforest and a pasture. In addition, we studied how hydrochemical differences are linked to differences in the relative importance of flowpaths. The study was conducted in the southwestern part of the Brazilian Amazon basin, where an intensive hydrological and hydrochemical sampling and monitoring network was set up. The results indicate that the hydrology was modified in many ways by the land-use change. The most important alteration was the increased importance of the fast flowpath of overland flow. Solute exports were linked in particular to the increased volume of overland flow resulting from the land-use change. An additional reason for the increased nutrient exports from the pasture is the high concentration of these nutrients in pasture overland flow, probably due to cattle excrement. Tight nutrient cycles with minimal nutrient losses could not be maintained after the land-use change. This study provides the first attempt to quantify the respective nutrient losses.
Editorial
(2008)
This paper highlights the different ways of perceiving video games and video game content, incorporating interactive and non-interactive methods. It examines the varying cognitive and emotive reactions of people who regularly play video games as well as of people who are unfamiliar with the aesthetics and the most basic gameplay rules of video games. Additionally, the principle of “Flow” serves as a theoretical and philosophical foundation. A small case study featuring two games was conducted to illustrate the numerous possible ways of perceiving video games.
In recent years, statistical machine translation has demonstrated its usefulness within a wide variety of translation applications. In this line, phrase-based alignment models have become the reference to follow in order to build competitive systems. Finite-state models are always an interesting framework because well-known efficient algorithms exist for their representation and manipulation. This document is a contribution to the evolution of finite-state models towards a phrase-based approach. The inference of stochastic transducers based on bilingual phrases is carefully analysed from a finite-state point of view. The algorithmic issues that have to be taken into account in order to deal with such phrase-based finite-state models at decoding time are also detailed in depth.
Supernovae are known to be the dominant energy source for driving turbulence in the interstellar medium. Yet their effect on magnetic field amplification in spiral galaxies is still poorly understood. Analytical models based on the uncorrelated-ensemble approach predicted that any created field would be expelled from the disk before significant amplification could occur. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for vertical stratification and galactic differential rotation, we find an exponential amplification of the mean field on timescales of 100 Myr. The self-consistent numerical verification of such a “fast dynamo” is highly valuable for explaining the observed strong magnetic fields in young galaxies. We furthermore highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to sustained amplification of the mean magnetic field. This finding confirms the classical picture of a dynamo based on cyclonic turbulence.
This study investigated whether older adults could acquire the ability to perform 2 cognitive operations in parallel in a paradigm in which young adults had been shown to be able to do so (K. Oberauer & R. Kliegl, 2004). Twelve young and 12 older adults practiced a numerical and a visuospatial continuous memory updating task in single-task and dual-task conditions for 16 to 24 sessions. After practice, 9 young adults were able to process the 2 tasks without dual-task costs, but none of the older adults had reached the criterion of parallel processing. The results suggest a qualitative difference between young and older adults in how they approach dual-task situations.
The space-image
(2008)
In recent computer game research a paradigm shift is observable: games today are first and foremost conceived as a new medium characterized by their status as an interactive image. The shift in attention towards this aspect becomes apparent in a new approach that is aware, first and foremost, of the spatiality of games and their spatial structures. It rejects traditional approaches on the grounds that the medial specificity of games can no longer be reduced to textual or ludic properties but has to be located in medially constituted spatiality. For this purpose, seminal studies on the spatiality of computer games are reviewed and their advantages and disadvantages discussed. Building on this, and against the background of the philosophical method of phenomenology, we propose three steps for describing computer games as space-images. With this method it is possible to describe games with respect to the possible appearance of spatiality in a pictorial medium.
The quantification of phosphate bound to the C6 and C3 positions of glucose residues in starch has received increasing interest since the importance of starch phosphorylation for plant metabolism was discovered. The method described here is based on the observation that the isobaric compounds glucose-6-phosphate (Glc6P) and glucose-3-phosphate (Glc3P) exhibit significantly different fragmentation patterns in negative ion electrospray tandem mass spectrometry (MS/MS). A simple experiment involving collision-induced dissociation (CID) MS2 spectra of the sample and the two reference substances Glc3P and Glc6P permitted the quantification of the relative amounts of the two compounds in monosaccharide mixtures generated by acid hydrolysis of starch. The method was tested on well-characterized potato tuber starch. The results are consistent with those obtained by NMR analysis. In contrast to NMR, however, the presented method is fast and can be performed on less than 1 mg of starch. Starch samples of other origins exhibiting a variety of phosphorylation degrees were analyzed to assess the sensitivity and robustness of the method.
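The quantification step, expressing the sample’s CID spectrum as a linear combination of the two reference spectra, can be illustrated with a toy two-fragment system. All intensities below are made up; the fragment choices and values are assumptions for illustration, not data from the paper:

```python
# Toy sketch: relative quantification of two isobaric species from two
# diagnostic CID fragment intensities, assuming the sample spectrum is a
# linear combination of the reference spectra.

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 linear system A x = b by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)

# Normalized reference fragment intensities (fragment x species); hypothetical.
glc6p  = (0.80, 0.20)
glc3p  = (0.30, 0.70)
sample = (0.55, 0.45)   # observed mixture spectrum

x6, x3 = solve2(glc6p[0], glc3p[0], glc6p[1], glc3p[1], *sample)
print(f"Glc6P fraction: {x6:.2f}, Glc3P fraction: {x3:.2f}")
```

With more than two fragments the same idea becomes an overdetermined least-squares fit rather than an exact solve.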
Focus asymmetries in Bura
(2008)
This article investigates focus marking in Bura (Chadic), which exhibits a number of asymmetries: grammatical focus marking is obligatory only with focused subjects, where focus is marked by the particle án following the subject. Focused subjects remain in situ and the complement of án is a regular VP. With non-subject foci, án appears in a cleft structure between the fronted focus constituent and a relative clause. We present a semantically unified analysis of focus marking in Bura that treats the particle as a focus-marking copula in T that takes a property-denoting expression (the background) and an individual-denoting expression (the focus) as arguments. The article also investigates the realization of predicate and polarity focus, which are almost never marked. The upshot of the discussion is that Bura shares many characteristic traits of focus marking with other Chadic languages, but crucially differs in exhibiting a structural difference between the marking of focus on subjects and on non-subject constituents.
Using ESTs for phylogenomics
(2008)
Background
While full genome sequences are still only available for a handful of taxa, large collections of partial gene sequences are available for many more. The alignment of partial gene sequences results in a multiple sequence alignment containing large gaps that are arranged in a staggered pattern. The consequences of this pattern of missing data on the accuracy of phylogenetic analysis are not well understood. We conducted a simulation study to determine the accuracy of phylogenetic trees obtained from gappy alignments using three commonly used phylogenetic reconstruction methods (Neighbor Joining, Maximum Parsimony, and Maximum Likelihood) and studied ways to improve the accuracy of trees obtained from such datasets.
Results
We found that the pattern of gappiness in multiple sequence alignments derived from partial gene sequences substantially compromised phylogenetic accuracy even in the absence of alignment error. The decline in accuracy was beyond what would be expected from the amount of missing data alone, and was particularly dramatic for Neighbor Joining and Maximum Parsimony, where the majority of gappy alignments contained 25% to 40% incorrect quartets. To improve the accuracy of the trees obtained from a gappy multiple sequence alignment, we examined two approaches. In the first approach, alignment masking, potentially problematic columns and input sequences are excluded from the dataset. Even in the absence of alignment error, masking improved phylogenetic accuracy up to 100-fold; however, masking retained, on average, only 83% of the input sequences. In the second approach, alignment subdivision, the missing data is statistically modelled in order to retain as many sequences as possible in the phylogenetic analysis. Subdivision resulted in more modest improvements to accuracy, but succeeded in including almost all of the input sequences.
Conclusion
These results demonstrate that partial gene sequences and gappy multiple sequence alignments can pose a major problem for phylogenetic analysis. The concern will be greatest for high-throughput phylogenomic analyses, in which Neighbor Joining is often the preferred method due to its computational efficiency. Both approaches can be used to increase the accuracy of phylogenetic inference from a gappy alignment. The choice between the two approaches will depend upon how robust the application is to the loss of sequences from the input set, with alignment masking generally giving a much greater improvement in accuracy but at the cost of discarding a larger number of the input sequences.
Introduction
(2008)
Heterophase polymerization is a technique widely used for the synthesis of high performance polymeric materials with applications including paints, inks, adhesives, synthetic rubber, biomedical applications and many others. Due to the heterogeneous nature of the process, many different relevant length and time scales can be identified. Each of these scales has a direct influence on the kinetics of polymerization and on the physicochemical and performance properties of the final product. Therefore, from the point of view of product and process design and optimization, the understanding of each of these relevant scales and their integration into one single model is a very promising route for reducing the time-to-market in the development of new products, for increasing the productivity and profitability of existing processes, and for designing products with improved performance or cost/performance ratio. The process considered is the synthesis of structured or composite polymer particles by multi-stage seeded emulsion polymerization. This type of process is used for the preparation of high performance materials where a synergistic behavior of two or more different types of polymers is obtained. Some examples include the synthesis of core-shell or multilayered particles for improved impact strength materials and for high resistance coatings and adhesives. The kinetics of the most relevant events taking place in an emulsion polymerization process has been investigated using suitable numerical simulation techniques at their corresponding time and length scales. 
These methods, which include Molecular Dynamics (MD) simulation, Brownian Dynamics (BD) simulation and kinetic Monte Carlo (kMC) simulation, have been found to be very powerful and highly useful for gaining a deeper insight and achieving a better understanding and a more accurate description of all phenomena involved in emulsion polymerization processes, and can be potentially extended to investigate any type of heterogeneous process. The novel approach of using these kinetic-based numerical simulation methods can be regarded as a complement to the traditional thermodynamic-based macroscopic description of emulsion polymerization. The particular events investigated include molecular diffusion, diffusion-controlled polymerization reactions, particle formation, absorption/desorption of radicals and monomer, and the colloidal aggregation of polymer particles. Using BD simulation it was possible to precisely determine the kinetics of absorption/desorption of molecular species by polymer particles, and to simulate the colloidal aggregation of polymer particles. For diluted systems, a very good agreement between BD simulation and the classical theory developed by Smoluchowski was obtained. However, for concentrated systems, significant deviations from the ideal behavior predicted by Smoluchowski were evidenced. BD simulation was found to be a very valuable tool for the investigation of emulsion polymerization processes especially when the spatial and geometrical complexity of the system cannot be neglected, as is the case of concentrated dispersions, non-spherical particles, structured polymer particles, particles with non-uniform monomer concentration, and so on. In addition, BD simulation was used to describe non-equilibrium monomer swelling kinetics, which is not possible using the traditional thermodynamic approach because it is only valid for systems at equilibrium. 
The description of diffusion-controlled polymerization reactions was successfully achieved using a new stochastic algorithm for the kMC simulation of imperfectly mixed systems (SSA-IM). In contrast to the traditional stochastic simulation algorithm (SSA) and the deterministic rate equations, instead of assuming perfect mixing in the whole reactor, the new SSA-IM determines the volume perfectly mixed between two consecutive reactions as a function of the diffusion coefficient of the reacting species. Using this approach it was possible to describe, with a single set of kinetic parameters, typical mass-transfer limitation effects during free-radical batch polymerization such as the cage effect, the gel effect and the glass effect. Using multiscale integration it was possible to investigate the formation of secondary particles during the seeded emulsion polymerization of vinyl acetate over a polystyrene seed. Three different cases of radical generation were considered: thermal decomposition of water-soluble initiating compounds, a redox reaction at the surface of the particles, and thermal decomposition of surface-active initiators ("inisurfs") attached to the surface of the particles. The simulation results demonstrated the satisfactory reduction in secondary particle formation achieved when the locus of radical generation is controlled close to the particle surface.
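For reference, the perfectly mixed baseline that the SSA-IM generalizes is Gillespie’s classical stochastic simulation algorithm. A minimal sketch for a toy first-order reaction A → B follows; the rate constant and molecule counts are arbitrary illustration values, not parameters from the thesis:

```python
import math
import random

# Classical Gillespie SSA for a toy first-order reaction A -> B.
# This is the perfectly mixed baseline; the SSA-IM additionally restricts
# the mixed volume between reactions based on diffusion coefficients.

def ssa(n_a, k, t_end, seed=42):
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        propensity = k * n_a                             # total reaction propensity
        t += -math.log(1.0 - rng.random()) / propensity  # exponential waiting time
        if t > t_end:
            break
        n_a -= 1                                         # fire one A -> B event
    return n_a

# After t_end = 1/k the expected survival fraction is exp(-1), about 0.37.
remaining = ssa(n_a=1000, k=0.1, t_end=10.0)
print(remaining)
```

Extending this loop to several competing reactions (propagation, transfer, termination) gives the standard SSA that the abstract contrasts with the SSA-IM.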
Duplicate detection consists of determining which representations in a database refer to the same real-world object. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case, where relationships form a graph, research has mainly focused on detection quality and effectiveness; scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (the amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computations can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
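One standard way to keep the number of pairwise comparisons tractable is blocking: compare only records that share a blocking key. This is a generic sketch of that idea, not the specific DDG algorithms of the paper, and the records and key function are made up:

```python
from itertools import combinations

# Generic blocking sketch: group records by a blocking key and generate
# candidate pairs only within each block, avoiding the full n*(n-1)/2
# comparison space.

def candidate_pairs(records, key):
    blocks = {}
    for r in records:
        blocks.setdefault(key(r), []).append(r)
    for block in blocks.values():
        yield from combinations(block, 2)

people = [
    {"id": 1, "name": "John Smith"},
    {"id": 2, "name": "Jon Smith"},
    {"id": 3, "name": "Mary Jones"},
]
# Hypothetical key: first letter of the surname, so only the Smith
# variants are compared.
pairs = list(candidate_pairs(people, key=lambda r: r["name"].split()[-1][0]))
print(len(pairs))  # 1 candidate pair instead of 3 full pairwise comparisons
```

In a database-backed setting, the same grouping is typically expressed as a self-join on the blocking key so that candidate pairs never need to reside in main memory at once.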
Through the cyclization of 1-(;-hydroxynaphthyl)-1,2,3,4-tetrahydroisoquinoline and 1-(;-hydroxynaphthyl)-1,2,3,4-tetrahydroisoquinoline with formaldehyde, phosgene, p-nitrobenzaldehyde or p-chlorophenyl isothiocyanate, 8-substituted 10,11-dihydro-8H,15bH-naphth[1,2-e][1,3]oxazino[4,3-a]isoquinolines (3 and 4) and 10,11-dihydro-8H,15bH-naphth[2,1-e][1,3]oxazino[4,3-a]isoquinolines (15 and 16) were prepared. Conformational analysis of both the piperidine and the 1,3-oxazine moieties of these heterocycles by NMR spectroscopy and an accompanying theoretical study revealed that these two conformationally flexible six-membered ring moieties prefer twisted chair conformers.
pH sensing in living cells is one of the most prominent topics in biochemistry and physiology. In this study we performed one-photon and two-photon time-domain fluorescence lifetime imaging with a laser-scanning microscope, using the time-correlated single-photon counting technique to image intracellular pH. The suitability of different commercial fluorescent dyes for lifetime-based pH sensing is discussed on the basis of in vitro as well as in situ measurements. Although the tested dyes are suitable for intensity-based ratiometric measurements, for lifetime-based techniques in the time domain so far only BCECF seems to meet the requirements of reliable intracellular pH recordings in living cells.
MMORPGs such as WORLD OF WARCRAFT can be understood as interactive representations of war. Within the frame provided by the program, players experience martial conflicts and thus a “virtual war.” The game world, however, requires a technical and, as far as possible, invisible infrastructure which has to be protected against attacks: infrastructure here means, for example, the servers on which the data of the player characters and the game’s world are stored, as well as the user accounts, which have to be protected, among other things, from “identity theft.” Besides the war on the virtual surface of the program, we therefore describe the invisible war over the infrastructure, whose outbreak is always feared by the developers and operators of online worlds, requiring them to take precautions. Furthermore, we focus on “virtual game worlds” as places of complete surveillance. Since action in these worlds is always associated with the production of data, total observation is theoretically possible, and it is put into practice by the so-called “game master.” The observation of different communication channels (including user forums) serves to monitor and subtly direct the actions on the virtual battlefield, without the player feeling that his freedom is being limited. Finally, we compare the fictional theater of war in WORLD OF WARCRAFT to the vision of “Network-Centric Warfare,” since it has often been observed that the analysis of MMORPGs is useful to the real trade of war. However, we point out what an unrealistic theater of war WORLD OF WARCRAFT really is.
Self-Structuring of functionalized micro- and mesoporous organosilicas using boron-silane-precursors
(2008)
The structuring of porous silica materials at the nanometer scale and their surface functionalization are important issues in current materials research. Many innovations in chromatography, catalysis and electronic devices benefit from this knowledge. The work at hand is dedicated to the targeted design of functional organosilica materials. In this context a new precursor concept based on boron silanes is presented. These precursors combine the properties of a structure-directing group and a silica source through a covalent borane linkage. The precursor is easily formed by a sequential two-step hydroboration, first on bis(triethoxysilyl)ethene and second on an unsaturated structure-directing moiety such as an alkene or a polymer. The precursors prepared in this way self-organize when their inorganic moiety is hydrolyzed, via aggregation of their organic side chains into hydrophobic domains. In this way, the additional use of a surfactant as a template is unnecessary. Chemical cleavage of these moieties (e.g. by ammonolysis or oxidative saponification) yields an organosilica in which all functionalities are located exclusively at the pore wall and are therefore accessible. The accessibility of the functionalities is a vital point for applications and is not necessarily granted by common silica functionalization approaches. Further advantages of the boron-silane concept are the possibility of introducing a variety of surface functionalities by heterolytic cleavage of the boron linker, and control over the pore morphology. For this purpose the covalent linkage of different alkyl groups and polymers was studied. Another aspect is the access to chiral boron-silane precursors, yielding functionalized mesoporous organosilicas with chiral functionalities located exclusively at the pore walls after condensation and removal of the structure-directing moiety.
These materials possess great potential for applications, as documented by preliminary investigations on the chiral resolution of a racemic mixture by HPLC and on asymmetric catalysis. In the course of this work valuable insights into the targeted structuring and surface functionalization of organosilicas were gained. A promising outlook for further investigations is the extension of this concept by altering the structure-directing moieties of the precursor. In this way the morphology of the final organosilica might be controlled by, for example, mesogens. Furthermore, the use of the boron linker enables the introduction of multiple functionalities into organosilicas, making the obtained material unique in its performance.
This paper presents a system for the detection and correction of syntactic errors. It combines a robust morphosyntactic analyser with two groups of finite-state transducers specified using the Xerox Finite State Tool (xfst). One group describes syntactic error patterns, while the second corrects the detected errors. The system has been tested on a corpus of real texts, containing both correct and incorrect sentences, with good results.
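The two-group design, detection patterns paired with correction rules, can be mimicked in a toy Python sketch. The actual system uses xfst finite-state transducers; the repeated-word rule below is a made-up example, not one of the system’s rules:

```python
import re

# Toy analogue of the two transducer groups: each detection pattern is
# paired with its correction. (The real system compiles such rules into
# xfst finite-state transducers; this rule is a hypothetical example.)
RULES = [
    # Detect an immediately repeated word; correct by keeping one copy.
    (re.compile(r"\b(\w+)\s+\1\b"), r"\1"),
]

def correct(sentence):
    for pattern, repl in RULES:
        sentence = pattern.sub(repl, sentence)
    return sentence

print(correct("the the cat sat on the mat"))  # "the cat sat on the mat"
```

A finite-state implementation composes the detection and correction relations into a single transducer, which is what makes the approach efficient on long texts.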
For the first time, stabilizer-free vinylidene fluoride (VDF) polymerizations were carried out in homogeneous phase with supercritical CO₂. Polymerizations were carried out at 140 °C and 1500 bar and were initiated with di-tert-butyl peroxide (DTBP). In-line FT-NIR (Fourier-transform near-infrared) spectroscopy showed that complete monomer conversion may be obtained. Molecular weights were determined via size-exclusion chromatography (SEC) and polymer end-group analysis by 1H-NMR spectroscopy. The number-average molecular weights were below 10⁴ g·mol⁻¹ and polydispersities ranged from 3.1 to 5.7 depending on DTBP and VDF concentration. To allow for isothermal reactions, high CO₂ contents ranging from 61 to 83 wt.% were used. The high-temperature, high-pressure conditions were required for homogeneous-phase polymerization. These conditions did not alter the amount of defects in the VDF chaining. Scanning electron microscopy (SEM) indicated that regular stack-type particles were obtained upon expansion of the homogeneous polymerization mixture. To reduce the required amount of initiator, further VDF polymerizations using chain-transfer agents (CTAs) to control molecular weights were carried out in homogeneous phase with supercritical carbon dioxide (scCO₂) at 120 °C and 1500 bar. Using perfluorinated hexyl iodide as CTA, polymers of low polydispersity, ranging from 1.5 down to 1.2 at the highest iodide concentration of 0.25 mol·L⁻¹, were obtained. Electrospray ionization mass spectrometry (ESI-MS) indicates the absence of initiator-derived end groups, supporting the livingness of the system. The “livingness” is based on the labile C–I bond. However, due to the weakness of the C–I bond, perfluorinated hexyl iodide also contributes to initiation. To allow for kinetic analyses of VDF polymerizations, the CTA should not contribute to initiation. Therefore, additional CTAs were applied: BrCCl3, C6F13Br and C6F13H. It was found that C6F13H does not contribute to initiation.
At 120 °C and 1500 bar, kp/kt^0.5 ≈ 0.64 (L·mol⁻¹·s⁻¹)^0.5 was derived. The chain-transfer constant (CT) at 120 °C was determined to be 8·10⁻¹, 9·10⁻² and 2·10⁻⁴ for C6F13I, C6F13Br and C6F13H, respectively. These CT values correlate with the bond energy of the C–X bond. Moreover, the labile C–I bond allows for functionalization of the polymer with triazole end groups via click reactions. After substitution of the iodide end group by an azide group, 1,3-dipolar cycloadditions with alkynes yield polymers with 1,2,3-triazole end groups. Using symmetrical alkynes, the reactions may be carried out in the absence of any catalyst. This end-functionalized poly(vinylidene fluoride) (PVDF) has higher thermal stability compared to normal PVDF. PVDF samples from homogeneous-phase polymerizations in supercritical CO₂ and subsequent expansion to ambient conditions were analyzed with respect to polymer end groups, crystallinity, type of polymorph and morphology. Upon expansion the polymer was obtained as a white powder. Scanning electron microscopy (SEM) showed that DTBP-derived polymer end groups led to stack-type particles, whereas sponge- or rose-type particles were obtained in the case of CTA fragments as end groups. Fourier-transform infrared spectroscopy and wide-angle X-ray diffraction indicated that the type of polymorph, α or β crystal phase, was significantly affected by the type of end group. The content of β-phase material, which is responsible for the piezoelectricity of PVDF, is highest for polymer with DTBP-derived end groups. In addition, the crystallinity of the material, as determined via differential scanning calorimetry, is affected by the end groups and the polymer molecular weight. For example, crystallinity ranges from around 26% for DTBP-derived end groups to a maximum of 62% for end groups originating from perfluorinated hexyl iodide, for polymers with Mn ≈ 2200 g·mol⁻¹.
Expansion of the homogeneous polymerization mixture results in particle formation by a non-optimized RESS (Rapid Expansion from Supercritical Solution) process. Thus, it was tested how polymer end groups affect the particle size distribution obtained from the RESS process under controlled conditions (T = 50 °C and p = 200 bar). In all RESS experiments, small primary PVDF particles with diameters below 100 nm were produced without the use of liquid solvents, surfactants, or other additives. Particle size and particle size distribution correlated strongly with the polymer end groups and the molecular weight of the original material. The smallest particles were found for RESS of PVDF with Mn ≈ 4000 g·mol⁻¹ and PFHI (C6F13I)-derived end groups.
Chitooligosaccharides are composed of linear β-(1→4)-linked 2-acetamido-2-deoxy-β-D-glucopyranose (GlcNAc) and/or 2-amino-2-deoxy-β-D-glucopyranose (GlcN) units. They are of interest due to their remarkable biological properties, including antibacterial, antitumor, antifungal and elicitor activities. They can be obtained from the aminoglucan chitosan by chemical or enzymatic degradation, which affords rather heterogeneous mixtures. Chemical synthesis, on the other hand, provides pure compounds with defined sequences of GlcNAc and GlcN monomers. The synthesis of homo- and hetero-chitobioses and hetero-chitotetraoses is described in this thesis. Dimethylmaleoyl and phthaloyl groups were used for protection of the amines. The donor was activated as the trichloroacetimidate in order to form the β-linkages. Glycosylation in the presence of trimethylsilyl trifluoromethanesulfonate, followed by N- and O-deprotection, furnished chitobioses and chitotetraoses in good yields.
Plant population modelling has been around since the 1970s, providing a valuable approach to understanding plant ecology from a mechanistic standpoint. It is surprising, then, that this area of research has not grown in prominence with respect to other approaches employed in modelling plant systems. In this review, we provide an analysis of the development and role of modelling in the field of plant population biology through an exploration of where it has been, where it is now and, in our opinion, where it should be headed. We focus, in particular, on the role plant population modelling could play in ecological forecasting, an urgent need given current rates of regional and global environmental change. We suggest that a critical element limiting the current application of plant population modelling in environmental research is the trade-off between the resolution and detail required to accurately characterize ecological dynamics and the goal of generality, particularly at broad spatial scales. In addition to suggesting how to overcome the current shortage of process-level data, we discuss two emerging strategies that may offer a way past the described limitation: (1) application of a modern approach to spatial scaling from local processes to broader levels of interaction, and (2) plant functional-type modelling. Finally, we outline what we believe is needed to develop these approaches towards a 'science of forecasting'.
Being "in the game"
(2008)
When people describe themselves as being “in the game” this is often thought to mean they have a sense of presence, i.e. they feel like they are in the virtual environment (Brown/Cairns 2004). Presence research traditionally focuses on user experiences in virtual reality systems (e.g. head-mounted displays, CAVE-like systems). In contrast, the experience of gaming is very different. Gamers willingly submit to the rules of the game, learn arbitrary relationships between the controls and the screen output, and take on the persona of their game character. Also, whereas presence in VR systems is immediate, presence in gaming is gradual. Due to these differences, one can question the extent to which people feel present during gaming. A qualitative study was conducted to explore what gamers actually mean when they describe themselves as being “in the game.” Thirteen gamers were interviewed, and the resulting grounded theory suggests being “in the game” does not necessarily mean presence (i.e. feeling like you are the character and present in the VE). Some people use this phrase merely to emphasize their high involvement in the game. These findings differ from Brown and Cairns in suggesting that at the highest state of immersion not everybody experiences presence. Furthermore, the experience of presence does not appear to depend on the game being in the first-person perspective or on the gamer being able to empathize with the character. Future research should investigate why some people experience presence and others do not. Possible explanations include: use of language, perception of presence, personality traits, and types of immersion.
In a common description, to play a game is to step inside a concrete or metaphorical magic circle where special rules apply. In video game studies, this description has received an inordinate amount of criticism, which the paper argues has two primary sources: 1. a misreading of the basic concept of the magic circle and 2. a somewhat rushed application of traditional theoretical concerns onto games. The paper argues that game studies must move beyond conventional criticisms of binary distinctions and rather look at the details of how games are played. Finally, the paper proposes an alternative metaphor for game-playing, the puzzle piece.
Thermal radiation processes
(2008)
We discuss the different physical processes that are important to understand the thermal X-ray emission and absorption spectra of the diffuse gas in clusters of galaxies and the warm-hot intergalactic medium. The ionisation balance, line and continuum emission and absorption properties are reviewed and several practical examples are given that illustrate the most important diagnostic features in the X-ray spectra.
A woman and a language
(2008)
We have developed a microfluidic mixer optimized for rapid measurements of protein folding kinetics using synchrotron radiation circular dichroism (SRCD) spectroscopy. The combination of fabrication in fused silica and synchrotron radiation allows measurements at wavelengths below 220 nm, the typical limit of commercial instrumentation. At these wavelengths, the discrimination between the different types of protein secondary structure increases sharply. The device was optimized for rapid mixing at moderate sample consumption by employing a serpentine channel design, resulting in a dead time of less than 200 μs. Here, we discuss the design and fabrication of the mixer and quantify the mixing efficiency using wide-field and confocal epi-fluorescence microscopy. We demonstrate the performance of the device in SRCD measurements of the folding kinetics of cytochrome c, a small, fast-folding protein. Our results show that the combination of SRCD with microfluidic mixing opens new possibilities for investigating rapid conformational changes in biological macromolecules that have previously been inaccessible.
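For orientation, the dead time of such a continuous-flow mixer is the swept channel volume between the mixing point and the observation point divided by the volumetric flow rate. The sketch below uses purely illustrative channel dimensions and flow rate, not the actual parameters of the device described above:

```python
# Dead time of a continuous-flow mixer: t_dead = V_swept / Q.
# All numbers are illustrative assumptions, not the device's geometry.
width_m = 100e-6     # channel width (assumed)
depth_m = 10e-6      # channel depth (assumed)
length_m = 500e-6    # mixing point to observation point (assumed)
flow_m3s = (1e-3 * 1e-3) / 60   # 1 mL/min converted to m^3/s (assumed)

volume_m3 = width_m * depth_m * length_m
t_dead_s = volume_m3 / flow_m3s
print(f"dead time = {t_dead_s * 1e6:.0f} us")   # -> dead time = 30 us
```

Shrinking the cross-section or raising the flow rate shortens the dead time, at the cost of higher sample consumption, which is the trade-off the serpentine design addresses.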
Sequence learning at optimal stimulus-response mapping: evidence from a serial reaction-time task
(2008)
We propose a new version of the serial reaction time (SRT) task in which participants merely looked at the target instead of responding manually. As response locations were identical to target locations, stimulus-response compatibility was maximal in this task. We demonstrated that saccadic response times decreased during training and increased again when a new sequence was presented. It is unlikely that this effect was caused by stimulus-response (S-R) learning because bonds between (visual) stimuli and (oculomotor) responses were already well established before the experiment started. Thus, the finding shows that the building of S-R bonds is not essential for learning in the SRT task.
We study resonances for the generator of a diffusion with small noise in R^d: L = -ε∆ + ∇F·∇, when the potential F grows slowly at infinity (typically as the square root of the norm). The case when F grows fast is well known, and under suitable conditions one can show that there exists a family of exponentially small eigenvalues, related to the wells of F. We show that, for an F with slow growth, the spectrum is R+, but we can find a family of resonances whose real parts behave as the eigenvalues of the "quick growth" case, and whose imaginary parts are small.
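In standard notation (our rendering, not the paper's exact statement), the operator and the fast-growth eigenvalue picture the abstract alludes to read:

```latex
L_\varepsilon = -\varepsilon\,\Delta + \nabla F \cdot \nabla
  \quad \text{on } \mathbb{R}^d,
\qquad
\lambda_k(\varepsilon) \asymp e^{-H_k/\varepsilon}
  \quad (\varepsilon \to 0),
```

where the $H_k$ denote the barrier heights associated with the wells of $F$. The result described above shows that for slowly growing $F$ the same exponential scales reappear, not as eigenvalues, but as the real parts of resonances with small imaginary parts.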
The through space NMR shielding (TSNMRS) values of adamantane, of the 2(N+1)² spherical (4c,2e) homoaromatic compounds 1,3-dehydro-5,7-adamantandiyl dication (C10H12²⁺) and 1,3-dehydro-5,7-cubandiyl dication (C8H4²⁺), and of the (6c,8e) homoaromatic compound 2,2′,4,4′,6,6′,8,8′,10,10′-dehydroadamantane tetracation (C10H4⁴⁺) have been ab initio calculated, employing the NICS concept, and visualized as iso-chemical shielding surfaces (ICSSs). TSNMRS values can be successfully employed to study both the endohedral and exohedral aromaticity/antiaromaticity of the compounds studied.
Quantification of the (Anti)Aromaticity of Fulvalenes Subjected to π-Electron Cross-Delocalization
(2008)
Fulvalenes 3-12 were theoretically studied at the ab initio level of theory. For the global minimum structures, the occupation of the bonding π(C=C) orbital of the interring C=C double bond obtained by NBO analysis quantitatively proves π-electron cross-delocalization resulting in, at least partially, 2π- or 6π-electron aromaticity and 8π-electron antiaromaticity for appropriate moieties. The cross-conjugation was quantified by the corresponding occupation numbers and the lengths of the interring C=C double bonds, while the aromaticity or antiaromaticity due to cross-delocalization of the π-electrons was visualized and quantified by through-space NMR shielding surfaces.
Together with the nonsubstituted reference compound, para-methoxy- and para-nitro cyclohexyl benzoates have been synthesized and their conformational equilibria studied by low-temperature NMR spectroscopy and theoretical DFT calculations. The free energy differences ΔG° between axial and equatorial conformers were examined with respect to polar substituent influences on the conformational equilibrium of O-mono-substituted cyclohexane.
Tria-, penta-, hepta- and nonafulvenes (1-4) have been studied theoretically at the MP2 ab initio level of theory. For the global minimum structures, the occupation of the bonding π(C=C) orbital of the exocyclic C=C double bond, obtained by NBO analysis, quantitatively proves π-electron delocalization which can reveal partial 2-, 6- and 10-π-electron aromaticity, and 4-, 8- and 12-π-electron antiaromaticity of the ring moieties. Besides the corresponding occupation number, this conjugation was quantified by the length of the exocyclic C=C double bond, whilst the (anti)aromaticity of the ring moieties of 1-4 was visualized and quantified by through-space NMR shielding surfaces (TSNMRS).
Endohedral and external through-space NMR shieldings (TSNMRS) and the magnetic susceptibilities of the fullerene carbon cages of C50, C60, C60⁶⁻, C70, and C70⁶⁻ were assessed by ab initio molecular orbital calculations. Employing the nucleus-independent chemical shift (NICS) concept, these TSNMRS were visualized as iso-chemical shielding surfaces (ICSS) and were applied to quantitatively estimate either the aromaticity or the anti-aromaticity on the fullerene surface pertaining to the five- or six-membered ring moieties, and the shielding of any nuclei enclosed within the carbon cages. Differences between the NICSs calculated at the center of the fullerene carbon cages and the experimental chemical shifts of encapsulated NMR-active nuclei, as well as experimental shieldings observed for different encapsulated nuclei, could be readily understood for the first time.
The through space NMR shielding (TSNMRS) values of two tricyclobutabenzene (TCBB) derivatives 2, of the corresponding hexamethylene and hexaoxo TCBB derivatives 3, of [4n]annuleno[4n + 2]annulene 5 and of its tricyclobutadiene parent compound 4 have been ab initio calculated by the GIAO perturbation method employing the nucleus-independent chemical shift (NICS) concept of Paul von Ragué Schleyer, and visualized as iso-chemical shielding surfaces (ICSS). TSNMRS values can be successfully employed to quantify and visualize the aromaticity of the central, and in 5 also of the terminal, benzene ring moieties.
The push-pull character of a series of para-phenyl-substituted isophorone chromophores has been quantified by the 13C chemical shift difference of the three conjugated partial C=C double bonds and by the quotient of the occupations of the bonding and anti-bonding orbitals of these C=C double bonds. The correlations of the two push-pull parameters with each other and with the corresponding bond lengths strongly recommend π*C=C/πC=C as the general parameter to estimate charge alternation and as a very useful indicator of the molecular hyperpolarizabilities for NLO application of the compounds studied.
The anisotropic effect of the planar nitrate anion NO3⁻ has been ab initio calculated employing the Nucleus-Independent Chemical Shift (NICS) concept of von Ragué Schleyer and visualized as Iso-Chemical-Shielding Surfaces (ICSSs) of various (de)shieldings. Complexation-induced shifts in the 1H NMR spectra of nitrate/metal complexes or nitrate/receptor supramolecules can now be separated into anisotropic influences of the suitably coordinated nitrate anions and effects originating from other sources.
The anisotropic effect of the olefinic C=C double bond has been calculated by employing the NICS (nucleus independent chemical shift) concept and visualized as an anisotropic cone by a through space NMR shielding grid. Sign and size of this spatial effect on 1H chemical shifts of protons in norbornene, exo- and endo-2-methylnorbornenes, and in three highly congested tetracyclic norbornene analogs have been compared with the experimental 1H NMR spectra as far as published. 1H NMR spectra have also been calculated at the HF/6-31G* level of theory to obtain a full, comparable set of proton chemical shifts. Differences between δ(1H)/ppm and the calculated anisotropic effect of the C=C double bond are discussed in terms of the steric compression that occurs in the compounds studied.
The spatial magnetic properties (Through Space NMR Shieldings - TSNMRS) of two cyclobutadiene derivatives (2 and 5) and of a number of cyclobutadiene dianion derivatives (3, 4 and 6-8) have been calculated by the GIAO perturbation method employing the Nucleus-Independent Chemical Shift (NICS) concept of P. v. Ragué Schleyer, and visualized as Iso-Chemical-Shielding Surfaces (ICSS) of various size and direction. TSNMRS values can be successfully employed to quantify and visualize the (anti)aromaticity of the compounds studied and to discuss the influence of Li+ complexation to cyclobutadiene dianion (4a, 7 and 8) on planar 4c,6e or three-dimensional 6c,6e aromaticity.
The push-pull character of two series of donor-acceptor triazenes has been quantified by the C-13 and N-15 chemical shift differences of the partial N(1)=N(2) and N(3)=C(4) double bonds in the central C=N-N=N-C linking unit and by the quotient of the occupations of the bonding π and antibonding π* orbitals of these partial double bonds. Excellent correlations of the two estimates with the bond lengths strongly recommend the occupation quotients π*/π, the N-15 chemical shift differences Δδ[N(1),N(2)], and the corresponding bond lengths as reasonable sensors for quantifying charge alternation along the C=N-N=N-C linking unit, for the donor-acceptor quality of the triazenes 1 and 2, and for the molecular hyperpolarizability β(0) of these compounds. Within this context, certain substances can be strongly recommended for NLO application.
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. As running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates have yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics.
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while those alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
Modeling the impacts of grazing land management on land-use change for the Jordan River region
(2008)
In this article, we describe a simulation method for investigating the impacts of different grazing land management strategies on the productivity of (semi-)natural vegetation and the resulting feedback on land-use change. In a first application, we analyze the effects of sustainable and intensive grazing land management in the Jordan River region. For this purpose, we adapt and use the regional version of the spatially explicit modeling framework LandSHIFT. Our simulation experiments indicate that the modeled feedback mechanism has a strong effect on the spatial extent of grazing land. Consequently, the results of our study underline that the inclusion of such feedback mechanisms in land-use models can help to represent and analyze the complex interactions between humans and the environment in a more differentiated and realistic way, but they also identify the demand for more detailed empirical data on grazing land degradation in order to further improve the explanatory power of the model.
The effect of moderate rates of nitrogen deposition on ground-floor vegetation is poorly predicted by uncontrolled surveys or by fertilization experiments using high rates of nitrogen (N) addition. We compared the temporal trends of ground-floor vegetation in permanent plots with moderate (7-13 kg/ha/yr) and lower bulk N deposition (4-6 kg/ha/yr) in southern Sweden during 1982-1998. We examined whether trends differed between growth forms (vascular plants and bryophytes) and vegetation types (three types of coniferous forest, deciduous forest, and bog). Trends of site-standardized cover and richness varied among growth forms, vegetation types, and deposition regions. Cover of vascular plants in spruce forests decreased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs it decreased faster with low deposition. Cover of bryophytes in spruce forests increased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs and deciduous forests there was a strong non-linear increase with moderate deposition. The number of vascular plant species remained constant with moderate deposition and decreased with low deposition. We found no trend in the number of bryophyte species. We propose that the decrease of cover and species number with low deposition was related to normal ecosystem development (increased shading), suggesting that N deposition maintained or increased the competitiveness of some species in the moderate-deposition region. Deposition had no consistent negative effect on vegetation, suggesting that it is less important than normal successional processes.
Orbits of charged particles under the effect of a magnetic field are mathematically described by magnetic geodesics. They arise as solutions of a system of (nonlinear) ordinary differential equations of second order, but we are only interested in periodic solutions. To this end, we study the corresponding system of (nonlinear) parabolic equations for closed magnetic geodesics and, as a main result, eventually prove the existence of long-time solutions. As a generalization one can consider a system of elliptic nonlinear partial differential equations whose solutions describe the orbits of closed p-branes under the effect of a "generalized physical force". For the corresponding evolution equation, which is a system of parabolic nonlinear partial differential equations associated to the elliptic PDE, we can establish the existence of short-time solutions.
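In common differential-geometric notation (our labels, not necessarily the thesis's), the second-order system and its parabolic counterpart can be sketched as:

```latex
\nabla_{\dot\gamma}\,\dot\gamma = Y(\gamma)\,\dot\gamma ,
\qquad
\partial_t u = \nabla_{u'}\,u' - Y(u)\,u' ,
```

where $Y$ is the skew-symmetric endomorphism (Lorentz force) determined by the magnetic field. A closed curve $\gamma$ solving the first equation is a closed magnetic geodesic; the second equation is the associated heat-type flow, whose long-time behavior is the route to periodic solutions described above.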
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics from the formation process, and whether they show a behavior similar to the Sun's or something very different, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides us with the possibility to indirectly observe surface topologies on distant stars by means of the Doppler shift and polarization of light, though not without its challenges. Based on these effects, the so-called Zeeman-Doppler Imaging technique is a powerful method to retrieve the magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions could be reconstructed by Zeeman-Doppler Imaging (ZDI). However, the implementation of this method often relies on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of a newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis and Artificial Neural Networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as the one we are dealing with are potentially ill-posed and require a regularization method. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of the reconstruction of localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on Principal Component Analysis. In contrast to other multi-line techniques, which extract a sort of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validate the capability of our inversion code and also assess limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This gives, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
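The PCA-based denoising step can be sketched generically as a principal-component truncation: stack many observed line profiles, keep only the leading components, and reconstruct. The example below is an illustration on synthetic profiles, not the iMap implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectral line" profiles: 300 noisy realizations that are
# linear combinations of two underlying shapes (rank-2 signal).
x = np.linspace(-1, 1, 200)
basis = np.vstack([np.exp(-x**2 / 0.02), x * np.exp(-x**2 / 0.05)])
coeffs = rng.normal(size=(300, 2))
clean = coeffs @ basis
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# PCA denoising: project the mean-centered data onto the k leading
# principal components and reconstruct.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 2
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # truncation removes most of the noise
```

Because the coherent line signal concentrates in a few components while noise spreads over all of them, the truncated reconstruction retains individual line shapes, which is what makes line-by-line inversion possible.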
The uptake of nutrients and their subsequent chemical conversion by reactions which provide energy and building blocks for growth and propagation is a fundamental property of life. This property is termed metabolism. In the course of evolution life has been dependent on chemical reactions which generate molecules that are common and indispensable to all life forms. These molecules are the so-called primary metabolites. In addition, life has evolved highly diverse biochemical reactions. These reactions allow organisms to produce unique molecules, the so-called secondary metabolites, which provide a competitive advantage for survival. The sum of all metabolites produced by the complex network of reactions within an organism has since 1998 been called the metabolome. The size of the metabolome can only be estimated and may range from less than 1,000 metabolites in unicellular organisms to approximately 200,000 in the whole plant kingdom. In current biology, three additional types of molecules are thought to be important to the understanding of the phenomena of life: (1) the proteins, in other words the proteome, including enzymes which perform the metabolic reactions, (2) the ribonucleic acids (RNAs) which constitute the so-called transcriptome, and (3) all genes of the genome which are encoded within the double strands of deoxyribonucleic acid (DNA). Investigations of each of these molecular levels of life require analytical technologies which should best enable the comprehensive analysis of all proteins, RNAs, et cetera. At the beginning of this thesis such analytical technologies were available for DNA, RNA and proteins, but not for metabolites. Therefore, this thesis was dedicated to the implementation of the gas chromatography–mass spectrometry technology, in short GC-MS, for the in-parallel analysis of as many metabolites as possible.
Today GC-MS is one of the most widely applied technologies and indispensable for the efficient profiling of primary metabolites. The main achievements and research topics of this work can be divided into technological advances and novel insights into the metabolic mechanisms which allow plants to cope with environmental stresses. Firstly, the GC-MS profiling technology has been highly automated and standardized. The major technological achievements were (1) substantial contributions to the development of automated and, within the limits of GC-MS, comprehensive chemical analysis, (2) contributions to the implementation of time-of-flight mass spectrometry for GC-MS based metabolite profiling, (3) the creation of a software platform for reproducible GC-MS data processing, named TagFinder, and (4) the establishment of an internationally coordinated library of mass spectra which allows the identification of metabolites in diverse and complex biological samples. In addition, the Golm Metabolome Database (GMD) has been initiated to harbor this library and to cope with the increasing amount of generated profiling data. This database makes publicly available all chemical information essential for GC-MS profiling and has been extended to a global resource of GC-MS based metabolite profiles. Querying the concentration changes of hundreds of known and yet non-identified metabolites has recently been enabled by uploading standardized, TagFinder-processed data. Long-term technological aims have been pursued with the central aims (1) to enhance the precision of absolute and relative quantification and (2) to enable the combined analysis of metabolite concentrations and metabolic flux. In contrast to concentrations which provide information on metabolite amounts, flux analysis provides information on the speed of biochemical reactions or reaction sequences, for example on the rate of CO2 conversion into metabolites.
This conversion is an essential function of plants which is the basis of life on earth. Secondly, GC-MS based metabolite profiling technology has been continuously applied to advance plant stress physiology. These efforts have yielded a detailed description of and new functional insights into metabolic changes in response to high and low temperatures as well as common and divergent responses to salt stress among higher plants, such as Arabidopsis thaliana, Lotus japonicus and rice (Oryza sativa). Time course analysis after temperature stress and investigations into salt dosage responses indicated that metabolism changed in a gradual manner rather than by stepwise transitions between fixed states. In agreement with these observations, metabolite profiles of the model plant Lotus japonicus, when exposed to increased soil salinity, were demonstrated to have a highly predictive power for both NaCl accumulation and plant biomass. Thus, it may be possible to use GC-MS based metabolite profiling as a breeding tool to support the selection of individual plants that cope best with salt stress or other environmental challenges.
A- vs. B-languages or 2nd position vs. verb-adjacent clitics in West and South Slavic languages?
(2008)
Rapid and robust characterization of large earthquakes in terms of their spatial extent and temporal duration is of high importance for disaster mitigation and early warning applications. Backtracking of seismic P-waves has been used successfully by several authors to image the rupture process of the great Sumatra earthquake (26.12.2004) using short-period and broadband arrays. We follow here an approach of Walker et al. to backtrack and stack broadband waveforms from global network stations, using traveltimes for a global Earth model, to obtain the overall spatio-temporal development of the energy radiation of large earthquakes in a quick and robust way. We present results for selected events with well-studied source processes (Kokoxili 14.11.2001, Tokachi-Oki 25.09.2003, Nias 28.03.2005). Further, we apply the technique in a semi-real-time fashion to broadband data of earthquakes with a broadband magnitude >= 7 (roughly corresponding to Mw 6.5). Processing is based on first automatic detection messages from the GEOFON extended virtual network (GEVN).
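The shift-and-stack idea behind such backprojection can be sketched in one dimension with delta-like synthetic arrivals; the geometry, velocity and sampling below are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

# Toy 1-D medium: stations on a line, constant P velocity, delta pulses.
v = 8.0                                                 # km/s (assumed)
stations = np.array([0.0, 40.0, 90.0, 130.0, 200.0])    # km (assumed)
true_src = 75.0                                         # km
dt = 0.1                                                # s, sample interval
t = np.arange(0.0, 60.0, dt)

# Synthetic records: one pulse at each station's P arrival time.
traces = np.zeros((len(stations), len(t)))
for i, xs in enumerate(stations):
    arrival = abs(xs - true_src) / v
    traces[i, int(round(arrival / dt))] = 1.0

# Backprojection: for each candidate source, remove the predicted
# traveltimes and stack; the stack is most coherent at the true source.
candidates = np.arange(0.0, 200.0, 1.0)
stack_power = np.zeros(len(candidates))
for j, xc in enumerate(candidates):
    shifted = [np.roll(traces[i], -int(round(abs(stations[i] - xc) / v / dt)))
               for i in range(len(stations))]
    stack_power[j] = np.max(np.sum(shifted, axis=0))

best = candidates[np.argmax(stack_power)]
print(best)   # -> 75.0
```

In the real application the candidate points cover a grid around the epicenter, the traveltimes come from a global Earth model, and the time at which the stack peaks traces the rupture's spatio-temporal energy release.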
Content: 1 The Development of the Estonian Gender Policy Machinery 1.1 Initiation of Institutionalisation as a Result of International Commitments 1.2 Institutional Measures Facilitating EU Membership 1.3 Assessment of the Gender Equality Machinery 2 Conditions for Gender Mainstreaming in Estonia 2.1 Social Conditions 2.2 Administrative Conditions 3 Gender Mainstreaming Activities in the Estonian Public Administration 3.1 The Legal Foundations 3.2 Inter-ministerial Cooperation 3.3 Gender Mainstreaming Training 3.4 Knowledge Basis 3.5 Lack of Standards for Data and Statistics 3.6 Non-administrative Liaisons 4 Conclusion
Ignorance and Vulnerability: the 2002 Mulde flood in the city of Eilenburg (Saxony, Germany)
(2008)
Assessing the risk of gene flow from genetically modified trees carrying mitigation transgenes
(2008)
Paleoenvironmental records provide ample information on the Late Quaternary climatic evolution. Due to the great diversity of continental mid-latitude environments, however, the synthetic picture of past mid-latitudinal climate changes is far from complete. Owing to its significant size and landlocked setting, the Black Sea constitutes a perfect location to study patterns and mechanisms of climate change along the continental interior of Central and Eastern Europe and Asia Minor. Presently, the southern drainage area of the Black Sea is characterized by a Mediterranean-type climate, while the northern drainage is under the influence of the Central and Northern European climate. During the Last Glacial, a decrease in the global sea level disconnected the Black Sea from the Mediterranean Sea, transforming it into a giant closed lake. At that time, atmospheric precipitation and the related river run-off were the most important factors driving sediment supply and water chemistry of the Black ‘Lake’. Therefore, studying properties of the Black Sea sediments provides important information on the interactions and development of the Mediterranean and the Central and North European climate in the past. One significant outcome of my thesis is an improved chronostratigraphic framework for the glacial lacustrine unit of the Black Sea sediment cores, which made it possible to refine the environmental history of the Black Sea region and enabled a reliable correlation with data from other marine and terrestrial archives. Data gathered along a N-S transect and presented on a common time scale revealed coherent changes in the basin and its surroundings. During the glacial, the southward-shifted Polar Front reduced moisture transport to the northern drainage of the Black Sea and let the southern drainage become dominant in freshwater and sediment supply into the basin.
Changes in NW Anatolian precipitation reconstructed from the variability of the terrigenous input imply that during the glacial the regional rainfall variability was strongly influenced by Mediterranean sea surface temperatures and decreased in response to the cooling associated with the North Atlantic Heinrich Events H1 and H2. In contrast to regional precipitation changes, the hydrological properties of the Black Sea remained relatively stable under full glacial conditions. The first significant modification in the freshwater/sediment sources, reconstructed from changes in the sediment composition, lithology, and δ18O of ostracods, took place at around 16.4 cal ka BP, simultaneously with the early deglacial northward retreat of the oceanic and atmospheric polar fronts. Meltwater pulses, most probably derived from the disintegrating European ice sheets, changed the isotopic composition of the Black Sea and increased the supply from northern sediment sources. While these changes signaled an amelioration of the Northern European and Mediterranean climate, a decisive increase in local temperature was indicated only later, at the transition from the Oldest Dryas to the Bølling around 14.6 cal ka BP. At that time the warming of the Black Sea surface initiated massive phytoplankton blooms, which in turn induced the precipitation of inorganic carbonates. This biologically triggered process significantly changed the water chemistry and was recorded by simultaneous shifts in the elemental composition of ostracod shells and in the isotopic composition of the inorganically precipitated carbonates. Starting with the Bølling/Allerød warming and continuing through the Younger Dryas cold interval and the Early Holocene warming, the Black Sea temperature signal corresponds to the precipitation and temperature changes recorded in the wider Mediterranean region.
Early Holocene conditions, similar to those of the Bølling/Allerød, were interrupted by the marine inflow from the Mediterranean at ~9.3 cal ka BP, which terminated the lacustrine phase of the Black Sea and had a substantial impact on the chemical and physical properties of its water.
Background: To improve the understanding of consequences of climate change for annual plant communities, I used a detailed, grid-based model that simulates the effect of daily rainfall variability on individual plants in five climatic regions on a gradient from 100 to 800 mm mean annual precipitation (MAP). The model explicitly considers moisture storage in the soil. I manipulated daily rainfall variability by changing the daily mean rain (DMR, rain volume on rainy days averaged across years for each day of the year) by ± 20%. At the same time I adjusted the intervals between rainy days to keep the mean annual volume constant. In factorial combination with changing DMR I also changed MAP by ± 20%. Results: Increasing MAP generally increased water availability, establishment, and peak shoot biomass. Increasing DMR increased the time that water was continuously available to plants in the upper 15 to 30 cm of the soil (longest wet period, LWP). The effect of DMR diminished with increasing humidity of the climate. An interaction between water availability and density-dependent germination increased the establishment of seedlings in the arid region, but in the more humid regions the establishment of seedlings decreased with increasing DMR. As plants matured, competition among individuals and their productivity increased, but the size of these effects decreased with the humidity of the regions. Therefore, peak shoot biomass generally increased with increasing DMR, but the effect size diminished from the semiarid to the mesic Mediterranean region. Via its effect on LWP, increasing DMR reduced the annual variability of biomass in the semiarid and dry Mediterranean regions. Conclusion: More rainstorms (greater DMR) increased the recharge of soil water reservoirs in more arid sites, with consequences for germination, establishment, productivity, and population persistence.
The effects of DMR and MAP partially overlapped in magnitude, so their combined effect is important for projections of climate-change impacts on annual vegetation.
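In the mean, the ±20% DMR treatment at constant annual volume reduces to simple arithmetic on the number of rainy days: with MAP fixed, raising the volume per rainy day necessarily stretches the intervals between rain events. A minimal sketch of that bookkeeping follows; the function names and the example values are illustrative, not taken from the model.

```python
def rainy_day_count(map_mm, dmr_mm):
    """Mean number of rainy days per year implied by MAP and DMR.

    map_mm : mean annual precipitation (mm)
    dmr_mm : daily mean rain, i.e. rain volume per rainy day (mm)

    Keeping MAP fixed while raising DMR leaves fewer, heavier rain
    events per year: n_days = MAP / DMR.
    """
    return map_mm / dmr_mm

def manipulate_dmr(map_mm, dmr_mm, change=0.20):
    """Apply the +/-20% DMR treatment at constant annual volume."""
    return {
        "control": rainy_day_count(map_mm, dmr_mm),
        "dmr+20%": rainy_day_count(map_mm, dmr_mm * (1.0 + change)),
        "dmr-20%": rainy_day_count(map_mm, dmr_mm * (1.0 - change)),
    }
```

For example, a hypothetical 300 mm region with 10 mm per rainy day has 30 rainy days per year; raising DMR by 20% at constant MAP leaves 25 heavier events, and lowering it by 20% spreads the same volume over 37.5 lighter events.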
The effect of moderate rates of nitrogen deposition on ground-floor vegetation is poorly predicted by uncontrolled surveys or by fertilization experiments using high rates of nitrogen (N) addition. We compared the temporal trends of ground-floor vegetation in permanent plots with moderate (7–13 kg ha−1 year−1) and lower bulk N deposition (4–6 kg ha−1 year−1) in southern Sweden during 1982–1998. We examined whether trends differed between growth forms (vascular plants and bryophytes) and vegetation types (three types of coniferous forest, deciduous forest, and bog). Trends of site-standardized cover and richness varied among growth forms, vegetation types, and deposition regions. Cover of vascular plants in spruce forests decreased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs it decreased faster with low deposition. Cover of bryophytes in spruce forests increased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs and deciduous forests there was a strong non-linear increase with moderate deposition. The number of vascular plant species remained constant with moderate deposition and decreased with low deposition. We found no trend in the number of bryophyte species. We propose that the decrease in cover and species number with low deposition was related to normal ecosystem development (increased shading), suggesting that N deposition maintained or increased the competitiveness of some species in the moderate-deposition region. Deposition had no consistent negative effect on vegetation, suggesting that it is less important than normal successional processes.
Small livestock is an important resource for rural human populations in dry climates. How strongly will climate change affect the capacity of the rangeland? We used hierarchical modelling to scale quantitatively the growth of shrubs and annual plants, the main food of sheep and goats, to the landscape extent in the eastern Mediterranean region. Without grazing, productivity increased in a sigmoid way with mean annual precipitation. Grazing reduced productivity more strongly the drier the landscape. At a point just under the stocking capacity of the vegetation, productivity declined precipitously with more intense grazing due to a lack of seed production of annuals. We repeated simulations with precipitation patterns projected by two contrasting IPCC scenarios. Compared to results based on historic patterns, productivity and stocking capacity did not differ in most cases. Thus, grazing intensity remains the stronger impact on landscape productivity in this dry region even in the future.
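The shape of the response described above — sigmoid growth of ungrazed productivity with MAP, and a grazing reduction that bites harder the drier the site — can be sketched with a logistic curve. All parameter values and the functional form below are illustrative assumptions for the sake of the sketch, not the parameterisation of the hierarchical model.

```python
import math

def productivity(map_mm, grazing=0.0,
                 p_max=250.0, map_half=400.0, steepness=0.015):
    """Sketch of landscape productivity vs. mean annual precipitation.

    map_mm  : mean annual precipitation (mm)
    grazing : grazing intensity in [0, 1]

    Ungrazed productivity rises sigmoidally with MAP; the grazing
    reduction is scaled up with aridity so that the same grazing
    intensity removes a larger share of productivity at drier sites.
    """
    ungrazed = p_max / (1.0 + math.exp(-steepness * (map_mm - map_half)))
    # drier sites (small MAP) suffer proportionally more from grazing
    aridity = max(0.0, 1.0 - map_mm / 800.0)
    reduction = grazing * (0.5 + 0.5 * aridity)
    return ungrazed * (1.0 - min(1.0, reduction))
```

Under these assumptions, the same grazing intensity removes a larger fraction of productivity at 100 mm MAP than at 800 mm MAP, mirroring the qualitative result of the abstract.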
Background: For omics experiments, detailed characterisation of experimental material with respect to its genetic features, its cultivation history and its treatment history is a requirement for analyses with bioinformatics tools and for publication. Furthermore, meta-analysis of several experiments in systems-biology-based approaches makes it necessary to store this information in a standardised manner, preferably in relational databases. In the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System combined with web-based user interfaces for data entry and retrieval to collect this information in an academic environment.
Results: The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows like genetic modification (transformation) and vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate ACCESS database of the plant cultivation unit. For all cultures, and thus every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner-based system. Thus, for each plant object, all site-related parameters, e.g. automatically logged climate data, are available. These life history data and genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses controlled vocabulary for organs and treatments. Unique names generated by the system and barcode labels facilitate identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web-based search tools are the basis for joint use of the material by all researchers of the institute.
Conclusion: The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thus facilitates collaborative work and allows efficient queries in data analysis for systems biology research.
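The linkage the abstract describes — germplasm lines with pedigrees, plant objects assigned to cultures, and samples pointing back to plant objects so that analytical results inherit the full life history — can be sketched as a minimal relational schema. Table and column names below are illustrative only, not the actual Golm Plant Database schema.

```python
import sqlite3

# Minimal relational sketch of the described linkage; a self-reference
# on `line` models pedigrees created by the reproduction workflows.
schema = """
CREATE TABLE line (
    line_id   INTEGER PRIMARY KEY,
    name      TEXT UNIQUE,          -- system-generated unique name
    parent_id INTEGER REFERENCES line(line_id)  -- pedigree link
);
CREATE TABLE culture (
    culture_id INTEGER PRIMARY KEY,
    conditions TEXT                 -- cultivation conditions
);
CREATE TABLE plant_object (
    plant_id   INTEGER PRIMARY KEY,
    line_id    INTEGER NOT NULL REFERENCES line(line_id),
    culture_id INTEGER NOT NULL REFERENCES culture(culture_id),
    barcode    TEXT UNIQUE          -- barcode label on the material
);
CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    plant_id  INTEGER NOT NULL REFERENCES plant_object(plant_id),
    organ     TEXT,                 -- controlled vocabulary
    treatment TEXT
);
"""

con = sqlite3.connect(":memory:")
con.executescript(schema)
```

A join from `sample` through `plant_object` to `line` and `culture` then retrieves, for any analytical result, both the genetic identity and the cultivation conditions of the sampled plant, which is the kind of standardised sample annotation the Conclusion refers to.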