The Sun is a star which, due to its proximity, has a tremendous influence on Earth. Since its earliest days, mankind has tried to "understand the Sun", and especially in the 20th century science uncovered many of the Sun's secrets through high-resolution observations and models. As an active star, the Sun exhibits activity which, expressed in its magnetic cycle, is closely related to the sunspot number. Flares play a special role because they release large amounts of energy on very short time scales. They are correlated with enhanced electromagnetic emission across the entire spectrum. Furthermore, flares are sources of energetic particles. Hard X-ray observations (e.g., by NASA's RHESSI spacecraft) reveal that a large fraction of the energy released during a flare is transferred into the kinetic energy of electrons. However, the mechanism that accelerates a large number of electrons to high energies (beyond 20 keV) within fractions of a second is not yet understood. The thesis at hand presents a model for the generation of energetic electrons during flares that explains the electron acceleration using parameters obtained from actual ground- and space-based observations. According to this model, photospheric plasma flows build up electric potentials in the active regions of the photosphere. Usually these electric potentials are associated with electric currents closed within the photosphere. However, as a result of magnetic reconnection, a magnetic connection between regions of different magnetic polarity on the photosphere can be established through the corona. Due to the significantly higher electric conductivity in the corona, the photospheric electric power supply can be closed via the corona. Subsequently, a high electric current is formed, which leads to the generation of hard X-ray radiation in the dense chromosphere. This idea is modelled and investigated by means of electric circuits.
For this, the microscopic plasma parameters, the magnetic field geometry and hard X-ray observations are used to derive parameters for macroscopic electric components, such as resistors, which are connected with each other. This model demonstrates that such a coronal electric current is accompanied by large-scale electric fields, which can quickly accelerate electrons up to relativistic energies. The results of these calculations are encouraging: the electron fluxes predicted by the model agree with the electron fluxes deduced from the measured photon fluxes. Additionally, the model developed in this thesis proposes a new way to understand the observed double-footpoint hard X-ray sources.
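The circuit analogy described above lends itself to a back-of-the-envelope calculation. The sketch below treats the flare loop as a simple series circuit: a photospheric dynamo (EMF) driving a current through photospheric, coronal and chromospheric resistors. All numerical values are hypothetical placeholders chosen only to illustrate the structure of such an estimate; they are not the observationally derived parameters of the thesis.

```python
# Back-of-the-envelope sketch of a flare circuit model.
# All numbers are hypothetical placeholders for illustration only.

def flare_circuit(emf_volts, r_photosphere, r_corona, r_chromosphere):
    """Treat the flare loop as a series circuit: a photospheric dynamo (EMF)
    drives a current through photospheric, coronal and chromospheric resistors."""
    r_total = r_photosphere + r_corona + r_chromosphere
    current = emf_volts / r_total                 # Ohm's law
    power_chromo = current ** 2 * r_chromosphere  # dissipated where the HXR forms
    return current, power_chromo

# Hypothetical values: a 1e10 V potential drop and ohm-scale resistances.
i_loop, p_chromo = flare_circuit(emf_volts=1e10, r_photosphere=1.0,
                                 r_corona=0.1, r_chromosphere=0.5)
print(f"current = {i_loop:.2e} A, chromospheric power = {p_chromo:.2e} W")
```

The fraction of the total power dissipated in each resistor follows directly from the ratio of its resistance to the total loop resistance, which is why the comparatively well-conducting corona dissipates little while the dense chromosphere radiates hard X-rays.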
Microfabricated solid-state surfaces, so-called 'atom chips', have become a well-established technique to trap and manipulate atoms. This has simplified applications in atom interferometry, quantum information processing, and studies of many-body systems. Magnetic trapping potentials with arbitrary geometries are generated with atom chips by miniaturized current-carrying conductors integrated on a solid substrate. Atoms can be trapped and cooled to microkelvin and even nanokelvin temperatures in such microchip traps. However, cold atoms can be significantly perturbed by the chip surface, typically held at room temperature. The magnetic field fluctuations generated by thermal currents in the chip elements may induce spin flips of atoms and result in loss, heating and decoherence. In this thesis, we extend previous work on spin flip rates induced by magnetic noise and consider the more complex geometries that are typically encountered in atom chips: layered structures and metallic wires of finite cross-section. We also discuss a few aspects of atom chip traps built with superconducting structures, which have been suggested as a means to suppress magnetic field fluctuations. The thesis describes calculations of spin flip rates based on magnetic Green functions that are computed analytically and numerically. For a chip with a top metallic layer, the magnetic noise depends essentially on the thickness of that layer, as long as the layers below have a much smaller conductivity. Based on this result, scaling laws for loss rates above a thin metallic layer are derived. Good agreement with experiments is obtained in the regime where the atom-surface distance is comparable to the skin depth of the metal. Since in the experiments metallic layers are always etched to separate wires carrying different currents, the impact of the finite lateral wire size on the magnetic noise has been taken into account.
The local spectrum of the magnetic field near a metallic microstructure has been investigated numerically with the help of boundary integral equations. The magnetic noise depends significantly on polarization above flat wires of finite lateral width, in stark contrast to an infinitely wide wire. Correlations between multiple wires are also taken into account. In the last part, superconducting atom chips are considered. Magnetic traps generated by superconducting wires in the Meissner state and the mixed state are studied analytically by a conformal mapping method and also numerically. The properties of the traps created by superconducting wires are investigated and compared to normal conducting wires: they behave qualitatively quite similarly and, thanks to their low magnetic noise, open a route to further trap miniaturization. We discuss critical currents and fields for several geometries.
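The skin depth that sets the relevant distance scale above follows from the standard electromagnetic formula. The snippet below, a minimal sketch with illustrative material values (not necessarily those studied in the thesis), evaluates it for a copper layer at a typical trap-relevant spin-flip frequency:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def skin_depth(conductivity, frequency_hz):
    """Standard skin-depth formula: delta = sqrt(2 / (mu0 * sigma * omega))."""
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 / (MU0 * conductivity * omega))

# Copper layer probed at a ~1 MHz spin-flip (Larmor) frequency.
# Illustrative values only.
sigma_copper = 5.8e7  # conductivity of copper, S/m
delta = skin_depth(sigma_copper, 1e6)
print(f"skin depth ~ {delta * 1e6:.0f} micrometres")
```

For a good metal at megahertz frequencies the skin depth comes out at tens of micrometres, which is indeed comparable to typical atom-surface distances in chip traps.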
Inositol phosphates (IPs) and their turnover products have been implicated in important roles in stress signaling in eukaryotic cells. In higher plants, genes encoding inositol polyphosphate kinases have been identified previously, but their physiological functions have not been fully resolved. Here we expressed the Arabidopsis inositol polyphosphate 6-/3-kinase (AtIpk2β) in two heterologous systems, i.e. the yeast Saccharomyces cerevisiae and tobacco (Nicotiana tabacum), and tested the effect on abiotic stress tolerance. Expression of AtIpk2β rescued the salt-, osmotic- and temperature-sensitive growth defects of a yeast mutant strain (arg82Δ) that lacks the inositol polyphosphate multikinase activity encoded by the ARG82/IPK2 gene. Transgenic tobacco plants constitutively expressing AtIpk2β under the control of the Cauliflower Mosaic Virus 35S promoter were generated and found to exhibit improved tolerance to diverse abiotic stresses compared to wild-type plants. The expression of various stress-responsive genes was enhanced and the activities of anti-oxidative enzymes were elevated in the transgenic plants, suggesting a possible involvement of AtIpk2β in plant stress responses.
Background: An increasing number of studies demonstrate that genetic differentiation and speciation in the sea occur over much smaller spatial scales than previously appreciated given the wide distribution range of many morphologically defined coral reef invertebrate species and the presumed dispersal-enhancing qualities of ocean currents. However, knowledge about the processes that lead to population divergence and speciation is often lacking despite being essential for the understanding, conservation, and management of marine biodiversity. Sponges, a highly diverse, ecologically and economically important reef-invertebrate taxon, exhibit spatial trends in the Indo-West Pacific that are not universally reflected in other marine phyla. So far, however, processes generating those unexpected patterns are not understood.
Results: We unraveled the phylogeographic structure of the widespread Indo-Pacific coral reef sponge Leucetta chagosensis across its known geographic range using two nuclear markers: the rDNA internal transcribed spacers (ITS 1&2) and a fragment of the 28S gene, as well as the second intron of the ATP synthetase beta subunit gene (ATPSb-iII). This enabled the detection of several deeply divergent clades congruent over both loci, one containing specimens from the Indian Ocean (Red Sea and Maldives), another one from the Philippines, and two other large and substructured NW Pacific and SW Pacific clades with an area of overlap in the Great Barrier Reef/Coral Sea. Reciprocally monophyletic populations were observed from the Philippines, Red Sea, Maldives, Japan, Samoa, and Polynesia, demonstrating long-standing isolation. Populations along the South Equatorial Current in the south-western Pacific showed isolation-by-distance effects. Overall, the results pointed towards stepping-stone dispersal with some putative long-distance exchange, consistent with expectations from low dispersal capabilities.
Conclusion: We argue that both founder and vicariance events during the late Pliocene and Pleistocene were responsible, to varying degrees, for generating the deep phylogeographic structure. This structure was perpetuated largely as a result of the life history of L. chagosensis, resulting in high levels of regional isolation. Reciprocally monophyletic populations constitute putative sibling (cryptic) species, while population para- and polyphyly may indicate incipient speciation processes. The genetic diversity and biodiversity of tropical Indo-Pacific sponges appear to be substantially underestimated, since the high level of genetic divergence is not necessarily manifested at the morphological level.
Questions: 1. Are there differences among species in their preference for coniferous vs. deciduous forest? 2. Are tree and shrub species better colonizers of recent forest stands than herbaceous species? 3. Do colonization patterns of plant species groups depend on tree species composition? Location: Three deciduous and one coniferous recent forest areas in Brandenburg, NE Germany. Methods: In 34 and 21 transects in coniferous and deciduous stands, respectively, we studied the occurrence and percentage cover of vascular plants in a total of 150 plots in ancient stands, 315 in recent stands and 55 at the ecotone. Habitat preference, diaspore weight, generative dispersal potential and clonal extension were used to explain mechanisms of local migration. Regression analysis was conducted to test whether migration distance was related to species’ life-history traits. Results: 25 species were significantly associated with ancient stands and ten species were significantly more frequent in recent stands. Tree and shrub species were good colonizers of recent coniferous and deciduous stands. In the coniferous stands, all herbaceous species showed a strong dispersal limitation during colonization, whereas in the deciduous stands generalist species may have survived in the grasslands which were present prior to afforestation. Conclusions: The fast colonization of recent stands by trees and shrubs can be explained by their effective dispersal via wind and animals. This, and the comparably efficient migration of herbaceous forest specialists into recent coniferous stands, implies that the conversion of coniferous into deciduous stands adjacent to ancient deciduous forests is promising even without planting of trees.
People engage in a multitude of different relationships. Relatives, spouses, and friends are modestly to moderately similar in various characteristics, e.g., personality, interests, and appearance. The role of psychological (e.g., skills, global appraisal) and social (e.g., gender, familial status) similarities in personal relationships, and their association with relationship quality (emotional closeness and reciprocity of support), were examined in four independent studies. Young adults (N = 456; M = 27 years) and middle-aged couples from four different family types (N = 171 couples, M = 38 years) answered a computer-aided questionnaire regarding their ego-centered networks. A subsample of 175 middle-aged adults (77 couples and 21 individuals) participated in a one-year follow-up assessment. Two experimental studies (N = 470; N = 802), both including two assessments separated by an interval of five weeks, were conducted to examine causal relationships among similarity, closeness, and reciprocity expectations. The results underline the role of psychological and social similarities as covariates of emotional closeness and reciprocity of support on the between-relationship level, but indicate a relatively weak effect within established relationships. In specific relationships, such as parent-child relationships and friendships, psychological similarity partly compensates for the lack of genetic relatedness. Individual differences moderate these between-relationship effects. In all, the results combine evolutionary and social psychological perspectives on similarity in personal relationships and extend previous findings by means of a network approach and an experimental manipulation of existing relationships. The findings further show that psychological and social similarity have different implications for the study of personal relationships depending on the phase of the developmental process of the relationship.
Background
Epidemiological data indicate elevated psychosocial health risks for physicians, e.g., burnout, depression, marital disturbances, alcohol and substance abuse, and suicide. The purpose of this study was to identify psychosocial health resources and risk factors in profession-related behaviour and experience patterns of medical students and physicians that may serve as a basis for appropriate health promoting interventions.
Methods
The questionnaire "Work-Related Behaviour and Experience" was administered in cross-sectional surveys to students in the first (n = 475) and in the fifth year of studies (n = 355) in required courses at three German universities, and to physicians in early professional life in the vicinity of these universities (n = 381).
Results
Scores reflecting a healthy behaviour pattern were less common in physicians (16.7%) than in 5th-year (26.0%) and 1st-year students (35.1%), while scores representing unambitious and resigned patterns were more common among physicians (43.4% vs. 24.4% vs. 41.0%, and 27.3% vs. 17.2% vs. 23.3%, respectively). Female and male responders differed in the domains of professional commitment, resistance to stress and emotional well-being. Female physicians on average scored higher in the dimensions resignation tendencies, satisfaction with life and experience of social support, and lower in career ambition.
Conclusion
The results show distinct psychosocial stress patterns among medical students and physicians. Health promotion and prevention of psychosocial symptoms and impairments should be integrated as a required part of the medical curriculum and be considered an important issue during the further training of physicians.
The three major biopolymers, proteins, nucleic acids and glycoconjugates, are mainly responsible for information transfer, a fundamental process of life. The biological importance of proteins and nucleic acids is well explored, and oligosaccharides in the form of glycoconjugates have recently gained importance. The β-(1→4) linked N-acetylglucosamine (GlcNAc) moiety is a frequently occurring structural unit in various natural and biologically important oligosaccharides and related conjugates. Chitin, the most abundant polymer of GlcNAc, is widely distributed in nature, whereas the related polysaccharide chitosan (a polymer of GlcN and GlcNAc) occurs in certain fungi. Chitooligosaccharides of mixed acetylation patterns are of interest for determining the substrate specificities and mechanisms of chitinases. In this report, we describe the chemical synthesis of three chitotetraoses, namely GlcNAc-GlcN-GlcNAc-GlcN, GlcN-GlcNAc-GlcNAc-GlcN and GlcN-GlcN-GlcNAc-GlcNAc. The benzyloxycarbonyl (Z) and p-nitrobenzyloxycarbonyl (PNZ) groups were used to protect the amino functionality, owing to their ability to direct β-linkage formation during the glycosylation reactions through neighboring-group participation, and the trichloroacetimidate approach was utilized for the donor. Monomeric and dimeric acceptors and donors were prepared utilizing the Z and PNZ groups, and coupling between the appropriate donor and acceptors in the presence of a Lewis acid yielded the protected tetrasaccharides. Finally, cleavage of the PNZ groups followed by reacetylation and deblocking of the other protecting groups afforded the N,N′-diacetyl chitotetraoses in good yield. Successful syntheses of the protected diacetyl chitotetraoses by solid-phase synthesis are also described.
Nowadays, reactions on surfaces attract great scientific interest because of their diverse applications. Well-known examples are the production of ammonia on metal surfaces for fertilizers and the reduction of poisonous gases from automobiles using catalytic converters. More recently, photoinduced reactions at surfaces, useful, e.g., for photocatalysis, have also been studied in detail. Often, very short laser pulses are used for this purpose. Some of these reactions occur on femtosecond (1 fs = 10⁻¹⁵ s) time scales, since the motion of atoms (which leads to bond breaking and new bond formation) belongs to this time range. This thesis investigates the femtosecond laser induced associative photodesorption of hydrogen, H₂, and deuterium, D₂, from a ruthenium metal surface. Many interesting features of this reaction were explored by experimentalists: (i) a huge isotope effect in the desorption probability of H₂ and D₂, (ii) a non-linear increase of the desorption yield with the applied visible (vis) laser fluence, and (iii) unequal energy partitioning among different degrees of freedom. These peculiarities are due to the fact that an ultrashort vis pulse creates hot electrons in the metal. These hot electrons then transfer energy to adsorbate vibrations, which leads to desorption. In fact, adsorbate vibrations are strongly coupled to metal electrons through non-adiabatic couplings. This means that surfaces introduce additional channels for energy exchange, which makes the control of surface reactions more difficult than the control of reactions in the gas phase; indeed, the quantum yield of surface photochemical reactions is often notoriously small. One of the goals of the present thesis is to suggest, on the basis of theoretical simulations, strategies to control and enhance the photodesorption yield of H₂ and D₂ from Ru(0001).
For this purpose, we suggest a hybrid scheme to control the reaction, in which the adsorbate vibrations are initially excited by an infrared (IR) pulse prior to the vis pulse. Both adiabatic and non-adiabatic representations of the photoinduced desorption problem are employed. The adiabatic representation is realized within a classical picture using Molecular Dynamics (MD) with electronic friction. In the quantum mechanical description, non-adiabatic representations are employed within open-system density matrix theory. The time evolution of the desorption process is studied using a two-mode reduced-dimensionality model with one vibrational and one translational coordinate of the adsorbate. The ground and excited electronic state potentials and the dipole function for the IR excitation are taken from first principles. IR-driven vibrational excitation of adsorbate modes with moderate efficiency is achieved by (modified) π-pulses and/or optimal control theory. The fluence dependence of the desorption reaction is computed by including the electronic temperature of the metal calculated from the two-temperature model. Here, our theoretical results show good agreement with experimental and previous theoretical findings. We then employed the IR+vis strategy in both models and found that vibrational excitation indeed promotes the desorption of hydrogen and deuterium. To summarize, we conclude that photocontrol of this surface reaction can be achieved by our IR+vis scheme.
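The two-temperature model mentioned above couples the electron and lattice temperatures through a single electron-phonon coupling constant. A minimal zero-dimensional sketch with explicit Euler time stepping could look as follows; all material parameters here are order-of-magnitude placeholders, not the values used in the thesis:

```python
import math

def two_temperature(t_end=5e-12, dt=1e-15, gamma=400.0, c_lattice=2.9e6,
                    g=1.85e18, t0=1e-12, tau=50e-15, s_peak=2e21):
    """Explicit-Euler integration of a zero-dimensional two-temperature model:
        gamma*Te * dTe/dt = -g*(Te - Tl) + S(t)
        c_lattice * dTl/dt =  g*(Te - Tl)
    with a Gaussian laser source S(t). All parameters are order-of-magnitude
    placeholders (SI units), not the values used in the thesis."""
    te = tl = 100.0  # initial electron and lattice temperatures (K)
    te_max = te
    t = 0.0
    while t < t_end:
        s = s_peak * math.exp(-((t - t0) / tau) ** 2)  # laser source term (W/m^3)
        dte = (-g * (te - tl) + s) / (gamma * te)      # electron heat capacity ~ gamma*Te
        dtl = g * (te - tl) / c_lattice
        te += dte * dt
        tl += dtl * dt
        te_max = max(te_max, te)
        t += dt
    return te_max, te, tl

te_max, te_final, tl_final = two_temperature()
print(f"peak Te = {te_max:.0f} K, final Te = {te_final:.0f} K, final Tl = {tl_final:.0f} K")
```

The qualitative behaviour is generic: the electron temperature spikes far above the lattice temperature during the pulse (because the electronic heat capacity is small) and the two temperatures equilibrate within a few picoseconds via the coupling term.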
In the present dissertation an approach is developed which ensures efficient control of such diverse systems as noisy or chaotic oscillators and neural ensembles. This approach is implemented by a simple linear feedback loop. The dissertation consists of two main parts. One part of the work is dedicated to the application of the suggested technique to a population of neurons, with the goal of suppressing their synchronous collective dynamics. The other part investigates linear feedback control of the coherence of a noisy or chaotic self-sustained oscillator. First, we start with the problem of suppressing synchronization in a large population of interacting neurons. The importance of this task is based on the hypothesis that the emergence of pathological brain activity in Parkinson's disease and other neurological disorders is caused by the synchrony of many thousands of neurons. The established therapy for patients with such disorders is permanent high-frequency electrical stimulation via depth microelectrodes, called Deep Brain Stimulation (DBS). In spite of the efficiency of such stimulation, it has several side effects, and the mechanisms underlying DBS remain unclear. In the present work an efficient and simple control technique is suggested. It is designed to ensure suppression of synchrony in a neural ensemble by a minimized stimulation that vanishes as soon as the tremor is suppressed. This vanishing-stimulation technique would be a useful tool for experimental neuroscience; on the other hand, control of the collective dynamics in a large population of units represents an interesting physical problem. The main idea of the suggested approach is related to a classical problem of oscillation theory, namely the interaction between a self-sustained (active) oscillator and a passive load (resonator). It is known that under certain conditions the passive oscillator can suppress the oscillations of the active one.
In this thesis a much more complicated case is considered: an active medium which itself consists of thousands of oscillators. By coupling this medium to a specially designed passive oscillator, one can control the collective motion of the ensemble, specifically enhance or suppress it. Having in mind a possible application in neuroscience, we concentrate on the problem of suppression. Second, the efficiency of the suggested suppression scheme is illustrated for a more complex case, i.e., when the population of neurons generating the undesired rhythm consists of two non-overlapping subpopulations: the first one is affected by the stimulation, while the collective activity is registered from the second one. Generally speaking, the second population can itself be either active or passive; both cases are considered here. Possible applications of the suggested technique are discussed. Third, the influence of external linear feedback on the coherence of a noisy or chaotic self-sustained oscillator is considered. Coherence is one of the main properties of self-oscillating systems and plays a key role in the construction of clocks, electronic generators, lasers, etc. The coherence of a noisy limit-cycle oscillator is quantified, in the context of phase dynamics, by the phase diffusion constant, which is in turn proportional to the width of the spectral peak of the oscillations. Many chaotic oscillators can be described within the framework of phase dynamics, and therefore their coherence can also be quantified by the phase diffusion constant. An analytical theory for general linear feedback, treating noisy systems in the linear and Gaussian approximation, is developed and validated by numerical results.
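The phase diffusion constant referred to above can be estimated directly from simulated phase trajectories. As a minimal illustration (a plain Euler-Maruyama scheme with arbitrary parameter choices, without the feedback loop of the thesis), one can check that the variance of the free-running phase grows as 2Dt:

```python
import math
import random

def estimate_phase_diffusion(omega=1.0, d_true=0.05, t_end=100.0, dt=0.01,
                             n_traj=200, seed=1):
    """Euler-Maruyama simulation of the noisy phase equation
       dphi/dt = omega + xi(t),  <xi(t) xi(t')> = 2*D*delta(t - t'),
    followed by an estimate of D from Var[phi(t)] = 2*D*t.
    All parameter values are arbitrary illustrative choices."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d_true * dt)  # std. deviation of one noise increment
    n_steps = int(t_end / dt)
    finals = []
    for _ in range(n_traj):
        phi = 0.0
        for _ in range(n_steps):
            phi += omega * dt + sigma * rng.gauss(0.0, 1.0)
        finals.append(phi)
    mean = sum(finals) / n_traj
    var = sum((p - mean) ** 2 for p in finals) / (n_traj - 1)
    return var / (2.0 * t_end)  # estimated phase diffusion constant

d_est = estimate_phase_diffusion()
print(f"estimated D = {d_est:.3f} (input value: 0.05)")
```

A feedback scheme that improves coherence would show up in such a simulation as a reduced estimate of D, i.e. a narrower spectral peak of the oscillation.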
Understanding the interactions of predators and their prey, and their responses to environmental changes, is one of the central aims of ecological research. In this thesis, the spring dynamics of phytoplankton and its consumers, the zooplankton, were studied in relation to environmental conditions in a deep lake (Lake Constance) and a shallow marine system (mesocosms from Kiel Bight), using descriptive statistics, multiple regression models, and process-oriented dynamic simulation models. The development of the spring phytoplankton bloom, a dominant feature of plankton dynamics in temperate and cold oceans and lakes, may depend on temperature, light, mixing intensity, and the success of over-wintering phyto- and zooplankton. These factors are often correlated in the field. Unexpectedly, irradiance often dominated algal net growth rather than vertical mixing, even in deep Lake Constance. Algal net losses from the euphotic layer to larger depths were induced by vertical mixing, but were compensated by the input from larger depths when algae were uniformly distributed over the water column. The dynamics of small, fast-growing algae were well predicted by abiotic variables such as surface irradiance, vertical mixing intensity, and temperature. A simulation model additionally revealed that even in late winter, grazing may represent an important loss factor for phytoplankton during calm periods, when losses due to mixing are small. The relative importance of losses by mixing and grazing changed rapidly, as it depended on the variable mixing intensity. Higher temperature, lower global irradiance and enhanced mixing generated lower algal biomass and primary production in the dynamic simulation model. This suggests that potential consequences of climate change may partly counteract each other. The negative effect of higher temperatures on phytoplankton biomass was due to enhanced temperature-sensitive grazing losses.
Comparing the results from deep Lake Constance to those of the shallow mesocosm experiments and simulations confirmed the strong direct effect of light, in contrast to temperature, and the importance of grazing already in early spring, as soon as moderate algal biomasses had developed. In Lake Constance, ciliates dominated the herbivorous zooplankton in spring. The start of ciliate net growth in spring was closely linked to that of edible algae, chlorophyll a, and the vertical mixing intensity, but was independent of water temperature. The duration of ciliate dominance in spring was largely controlled by the highly variable onset of the phytoplankton bloom, and little by the less variable termination of the ciliate bloom by metazooplankton grazing. During years with an extended spring bloom of algae and ciliates, the two groups coexisted at relatively high biomasses over 15-30 generations, and internally forced species shifts were observed in both communities. Interception feeders alternated with filter feeders, and cryptomonads with non-cryptomonads, in their relative importance. These dynamics were not captured by classical 1-predator-1-prey models, which consistently predict pronounced predator-prey cycles or equilibria with either the predator or the prey dominating or suppressed. A multi-species predator-prey model with predator species differing in their food selectivity, and prey species in their edibility, reproduced the observed patterns. Food selectivity and edibility were related to the feeding and growth characteristics of the species, which represented ecological trade-offs. For example, the prey species with the highest edibility also had the highest maximum growth rate. Data and model revealed endogenously driven, ongoing species alternations, which yielded a higher variability in species-specific biomasses than in total predator and prey biomass. This holds for a broad parameter space, as long as the species differ functionally.
A more sophisticated model approach enabled the simulation of a continuum of different functional types and adaptability of predator and prey communities to altered environmental conditions, and the maintenance of a rather low model complexity, i.e., low number of equations and free parameters. The community compositions were described by mean functional traits --- prey edibility and predator food-selectivity --- and their variances. The latter represent the functional diversity of the communities and thus, the potential for adaptation. Oscillations in the mean community trait values indicated species shifts. The community traits were related to growth and grazing characteristics representing similar trade-offs as in the multi-species model. The model reproduced the observed patterns, when nonlinear relationships between edibility and capacity, and edibility and food availability for the predator were chosen. A constant minimum amount of variance represented ongoing species invasions and thus, preserved a diversity which allows adaptation on a realistic time-span.
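The structure of such a multi-species predator-prey model, in which prey species trade edibility against maximum growth rate and predators differ in food selectivity, can be sketched with two prey and two predator species. All parameters below are invented for illustration; this is not the parameterization used in the thesis:

```python
# Minimal two-prey / two-predator sketch with an edibility/growth trade-off.
# All parameter values are invented for illustration only.

edibility = [1.0, 0.5]        # prey trait: how easily each prey is grazed
growth = [1.2, 0.8]           # trade-off: the more edible prey grows faster
selectivity = [[1.0, 0.2],    # predator 1 prefers prey 1
               [0.3, 1.0]]    # predator 2 prefers prey 2
ATTACK, CONVERT, MORTALITY, CAPACITY = 1.0, 0.3, 0.4, 10.0

def derivatives(prey, pred):
    """Logistic prey growth minus selective grazing; predator growth from intake."""
    dprey, dpred = [], []
    for i, n in enumerate(prey):
        grazing = sum(ATTACK * selectivity[j][i] * edibility[i] * pred[j]
                      for j in range(len(pred)))
        dprey.append(n * (growth[i] * (1.0 - sum(prey) / CAPACITY) - grazing))
    for j, p in enumerate(pred):
        intake = sum(ATTACK * selectivity[j][i] * edibility[i] * prey[i]
                     for i in range(len(prey)))
        dpred.append(p * (CONVERT * intake - MORTALITY))
    return dprey, dpred

prey, pred = [1.0, 1.0], [0.1, 0.1]
DT = 0.01
for _ in range(20000):  # explicit Euler integration to t = 200
    dn, dp = derivatives(prey, pred)
    prey = [max(n + DT * d, 1e-9) for n, d in zip(prey, dn)]
    pred = [max(p + DT * d, 1e-9) for p, d in zip(pred, dp)]
print("prey:", [round(n, 3) for n in prey], "pred:", [round(p, 3) for p in pred])
```

Because each predator grazes most strongly on a different prey, the species-specific biomasses can alternate while the total biomass varies much less, which is the qualitative pattern the abstract describes.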
New ABC triblock copolymers were synthesized by controlled free-radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT). Compared to amphiphilic diblock copolymers, the prepared materials formed more complex self-assembled structures in water owing to their three different functional blocks. Two strategies were followed. The first approach relied on double-thermoresponsive triblock copolymers exhibiting Lower Critical Solution Temperature (LCST) behavior in water. While the first phase transition triggers the self-assembly of the triblock copolymers upon heating, the second one allows the self-assembled state to be modified. The stepwise self-assembly was followed by turbidimetry, dynamic light scattering (DLS) and 1H NMR spectroscopy, as these methods reflect the behavior on the macroscopic, mesoscopic and molecular scales. Although the first phase transition could be easily monitored due to the onset of self-assembly, it was difficult to identify the second phase transition unambiguously, as the changes are either marginal or coincide with the slow response of the self-assembled system to relatively fast changes of temperature. The second approach towards advanced polymeric micelles exploited the thermodynamic incompatibility of “triphilic” block copolymers, namely polymers bearing a hydrophilic, a lipophilic and a fluorophilic block, as the driving force for self-assembly in water. The self-assembly of these polymers in water produced polymeric micelles comprising a hydrophilic corona and a microphase-separated micellar core with lipophilic and fluorophilic domains, so-called multi-compartment micelles. The association of the triblock copolymers in water was studied by 1H NMR spectroscopy, DLS and cryogenic transmission electron microscopy (cryo-TEM). Direct imaging of the polymeric micelles in solution by cryo-TEM revealed different morphologies depending on the block sequence and the preparation conditions.
While polymers with the sequence hydrophilic-lipophilic-fluorophilic formed core-shell-corona micelles with a fluorinated core, block copolymers with the hydrophilic block in the middle formed spherical micelles in which single or multiple fluorinated domains “float” as disks on the surface of the lipophilic core. Increasing the temperature during micelle preparation, or annealing the aqueous solutions at higher temperatures after preparation, occasionally induced a change of the micelle morphology or the particle size distribution. RAFT polymerization not only made the desired polymer architectures accessible, but also provided a valuable tool for molar mass characterization. The thiocarbonylthio moieties, which are present at the chain ends of polymers prepared by RAFT, absorb light in the UV and visible range and were employed for end-group analysis by UV-vis spectroscopy. A variety of dithiobenzoate and trithiocarbonate RAFT agents with differently substituted initiating R groups were synthesized. The investigation of their absorption characteristics showed that the intensity of the absorptions depends sensitively on the substitution pattern next to the thiocarbonylthio moiety and on the solvent polarity. Based on these results, the conditions for a reliable and convenient end-group analysis by UV-vis spectroscopy were optimized. As end-group analysis by UV-vis spectroscopy is insensitive to the potential association of polymers in solution, it was advantageously exploited for the molar mass characterization of the prepared amphiphilic block copolymers.
The present dissertation focuses on the question of whether and under which conditions infants recognise clauses in fluent speech, and on the role a prosodic marker such as a pause may play in the segmentation process. In the speech signal, syntactic clauses often coincide with intonational phrases (IPhs) (Nespor & Vogel, 1986, p. 190), the boundaries of which are marked by changes in fundamental frequency (e.g., Price, Ostendorf, Shattuck-Hufnagel & Fong, 1991), lengthening of the final syllable (e.g., Cooper & Paccia-Cooper, 1980) and the occurrence of a pause (Nespor & Vogel, 1986, p. 188). Thus, IPhs seem to be reliably marked in the speech stream, and infants may use these cues to recognise them. Furthermore, corpus studies on the occurrence and distribution of pauses have revealed a strong correlation between the duration of a pause and the type of boundary it marks (e.g., Butcher, 1981, for German). Pauses between words are either non-existent or short, pauses between phrases are somewhat longer, and pauses between clauses and at sentence boundaries further increase in duration. This suggests the existence of a natural pause hierarchy that complements the prosodic hierarchy described by Nespor and Vogel (1986). These hierarchies on the side of the speech signal correspond to the syntactic hierarchy of a language. In the present study, five experiments using the Headturn preference paradigm (Hirsh-Pasek, Kemler Nelson, Jusczyk, Cassidy, Druss & Kennedy, 1987) were conducted to investigate German-learning 6- and 8-month-olds’ use of pauses to recognise clauses in the signal and their sensitivity to the natural pause hierarchy. Previous studies on English-learning infants’ recognition of clauses (Hirsh-Pasek et al., 1987; Nazzi, Kemler Nelson, Jusczyk & Jusczyk, 2000) have found that infants as young as 6 months recognise clauses in fluent speech.
Recently, Seidl and colleagues have begun to investigate the status the pause may have in this process (Seidl, 2007; Johnson & Seidl, 2008; Seidl & Cristià, 2008). However, none of these studies investigated infants’ sensitivity to the natural pause hierarchy, and especially the sensitivity to the correlation between pause durations and the respective within-sentence clause boundaries / sentence boundaries. To address these questions, highly controlled stimuli were used. In all five experiments the stimuli were sentences consisting of two IPhs, each of which coincided with a syntactic clause. In the first three experiments pauses were inserted either at clause and sentence boundaries or within the first clause and at the sentence boundaries. The duration of the pauses varied between the experiments. The results show that German-learning 6-month-olds recognise clauses in the speech stream, but only in a condition in which the duration of the pauses conforms to the mean duration of pauses found at the respective boundaries in German. Experiments 4 and 5 explicitly addressed the question of infants’ sensitivity to the natural pause hierarchy by inserting pauses at the clause and sentence boundaries only. Their durations either conformed to the natural pause hierarchy or were reversed. The results of these experiments provide evidence that 8-, but not 6-month-olds seem to be sensitive to the correlation between the duration of pauses and the type of boundary they demarcate. The present study provides the first evidence that infants not only use pauses to recognise clause and sentence boundaries, but are also sensitive to the duration and distribution of pauses in their native language, as reflected in the natural pause hierarchy.
Background
Serotonin induces fluid secretion from Calliphora salivary glands by the parallel activation of the InsP3/Ca2+ and cAMP signaling pathways. We investigated whether cAMP affects 5-HT-induced Ca2+ signaling and InsP3-induced Ca2+ release from the endoplasmic reticulum (ER).
Results
Increasing the intracellular cAMP level by bath application of forskolin, IBMX or cAMP in the continuous presence of threshold 5-HT concentrations converted oscillatory [Ca2+]i changes into a sustained increase. Intraluminal Ca2+ measurements in the ER of β-escin-permeabilized glands with mag-fura-2 revealed that cAMP augmented InsP3-induced Ca2+ release in a concentration-dependent manner. This indicated that cAMP sensitized the InsP3 receptor Ca2+ channel to InsP3. By using cAMP analogs that activate either protein kinase A (PKA) or Epac, and by applying PKA inhibitors, we found that the cAMP-induced augmentation of InsP3-induced Ca2+ release was mediated by PKA, not by Epac. Recordings of the transepithelial potential of the glands suggested that cAMP sensitized the InsP3/Ca2+ signaling pathway to 5-HT, because IBMX potentiated Ca2+-dependent Cl- transport activated by a threshold 5-HT concentration.
Conclusion
This report shows, for the first time for an insect system, that cAMP can potentiate InsP3-induced Ca2+ release from the ER in a PKA-dependent manner, and that this crosstalk between cAMP and InsP3/Ca2+ signaling pathways enhances transepithelial electrolyte transport.
Whether fiscal transfers have positive or negative implications depends upon the incentives that transfer systems create for both central and local governments. The complexity and ambiguity of the relationship between fiscal transfers and the tax revenues of local governments is one of the main reasons why research projects, even in the same country, come to different results. This investigation seriously questions the often-stated substitution effect based only on an analysis of aggregated data, and in the qualitative part of this research (using survey techniques) it ultimately rejects a substitution effect in the majority of the assessed municipalities. While most theories model governments as tax-maximizers (Leviathan) or as prone to fiscal laziness, this investigation shows that mayors react to a whole set of incentives. Most mayors react rationally and rather pragmatically to the incentives and constraints established by the particular context of a municipality, the central government and their own personality/identity/interests. While the yield on property tax in Peru is low, there are no signs that increases in transfers have had, on average, a negative impact on revenue generation. On an individual basis there are mayors who are revenue maximizers, others who substitute revenues and others who show apathy. Many engage in property taxation. While rural or small municipalities have limited potential, property taxes are the main revenue source for Peruvian urban municipalities, rising on average by 10% during the last five years. The property tax in Peru accounts for less than 0.2% of GDP, which is extremely low compared to the Latin American average. In 2002, the property tax nationwide contributed about 10% of the overall budget of local governments. In 2006, the share was closer to 6% due to windfall transfers.
The property tax can enhance accountability at the local level and has important impacts on urban spatial development. It is also important because most charges and transfers are earmarked, so that property tax yields can cover discretionary spending. The intergovernmental fiscal transfers can be described as a patchwork of political liabilities of the past rather than being connected with thorough compensation or service-improvement functions. The fiscal base of local governments in Peru remains small, and the incentive structure to enhance property tax revenues is far from optimal. The central government and sector institutions, which in the Peruvian institutional design of the property tax are responsible for the enabling environment, can reinforce local tax efforts. In the past the central government permanently changed the rules of the game, leaving municipalities with reduced predictability of policy choices. There are no relevant signs that a stronger property tax would be captured by Peruvian interest groups. Since the central government is responsible for tax regulation and partly for valuation, there has been little debate about financial issues on the local political agenda. Most council members are therefore not familiar with tax issues. If the central government did not set the tax rate and valuation, there would probably be a more vigorous public debate and an electorate that was better informed about local politics. Elected mayors (as political and administrative leaders) are not counterbalanced and held in check by an active council and/or by vigorous local political parties. Local politics are concentrated on the mayor; electoral rules, the institutional design and the political culture are all unhelpful in increasing the degree of influence that citizens and associations have upon collective decision-making at the local level.
The many alternations between democracy and autocracy have not been helpful in building strong institutions at the local level. Property tax revenues react slowly, and the institutional context matters because an effective tax system, as a public good, can only be created if actors have long time horizons. The property tax has substantial revenue potential; however, since municipalities are going through a transfer bonanza, it is especially difficult to make a plea for increasing their own revenue base. Local governments should be the proponents of property tax reform, but in Peru they have little policy clout because the municipal associations are dispersed and little relevant information exists concerning important local policy issues.
The Warm-Hot Intergalactic Medium (WHIM) arises from shock-heated gas collapsing in large-scale filaments and probably harbours a substantial fraction of the baryons in the local Universe. Absorption-line measurements in the ultraviolet (UV) and in the X-ray band currently represent the best method to study the WHIM at low redshifts. We here describe the physical properties of the WHIM and the concepts behind WHIM absorption line measurements of H I and high ions such as O VI, O VII, and O VIII in the far-ultraviolet and X-ray band. We review results of recent WHIM absorption line studies carried out with UV and X-ray satellites such as FUSE, HST, Chandra, and XMM-Newton and discuss their implications for our knowledge of the WHIM.
In order to function properly, organisms have a complex control mechanism by which a given gene is expressed at a particular time and place. One way to achieve this control is to regulate the initiation of transcription. This step requires the assembly of several components, i.e., a basal/general machinery common to all expressed genes, and a specific/regulatory machinery, which differs among genes and is responsible for proper gene expression in response to environmental or developmental signals. This specific machinery is composed of transcription factors (TFs), which can be grouped into evolutionarily related gene families that possess characteristic protein domains. In this work we have exploited the presence of protein domains to create rules that serve for the identification and classification of TFs. We have modelled such rules as a bipartite graph, where families and protein domains are represented as nodes. A connection between nodes indicates that a protein domain should (required rule) or should not (forbidden rule) be present in a protein for it to be assigned to a TF family. Following this approach we have identified putative complete sets of TFs in plant species whose genomes are completely sequenced: Cyanidioschyzon merolae (red alga), Chlamydomonas reinhardtii (green alga), Ostreococcus tauri (green alga), Physcomitrella patens (moss), Arabidopsis thaliana (thale cress), Populus trichocarpa (black cottonwood) and Oryza sativa (rice). The identification of the complete sets of TFs in the above-mentioned species, as well as additional information and reference literature, is available at http://plntfdb.bio.uni-potsdam.de/. The availability of such sets allowed us to perform detailed evolutionary studies at different levels, from a single family to all TF families in different organisms, in a comparative genomics context.
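The rule-based classification described above can be illustrated with a minimal sketch. The family names, domain identifiers and the forbidden-domain example below are hypothetical placeholders for illustration, not the actual rule set used in this work:

```python
# Sketch of domain-based TF classification via a bipartite rule graph:
# each family node is linked to domain nodes by "required" or "forbidden"
# edges. The entries below are illustrative assumptions only.
RULES = {
    # family: (required domains, forbidden domains)
    "WRKY":    ({"WRKY"}, set()),
    "G2-like": ({"Myb_DNA-binding"}, {"SWIRM"}),  # hypothetical forbidden rule
    "bZIP":    ({"bZIP_1"}, set()),
}

def classify(protein_domains):
    """Assign a protein (given as a set of detected domains) to every family
    whose required domains are all present and whose forbidden domains are
    all absent."""
    hits = []
    for family, (required, forbidden) in RULES.items():
        if required <= protein_domains and not (forbidden & protein_domains):
            hits.append(family)
    return hits

print(classify({"WRKY"}))                      # ['WRKY']
print(classify({"Myb_DNA-binding", "SWIRM"}))  # [] (forbidden domain present)
```

In practice the domain content of each protein would come from a domain-detection tool run against the predicted proteome; the forbidden rules are what keep families with overlapping domains apart.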
Notably, we uncovered preferential expansions in different lineages, paving the way to discovering the specific biological roles of these proteins under different conditions. For the basic leucine zipper (bZIP) family of TFs we were able to infer that in the most recent common ancestor (MRCA) of all green plants there were at least four bZIP genes functionally involved in oxidative stress and unfolded protein responses, which are bZIP-mediated processes in all eukaryotes, as well as in light-dependent regulation. The four founder genes amplified and diverged significantly, generating traits that benefited the colonization of new environments. Currently, following the approach described above, up to 57 TF and 11 TR families can be identified, which are among the most numerous transcription regulatory families in plants. Three families of putative TFs predate the split between rhodophyta (red algae) and chlorophyta (green algae), i.e., G2-like, PLATZ, and RWPRK, and may have been of particular importance for the evolution of eukaryotic photosynthetic organisms. Nine additional families, i.e., ABI3/VP1, AP2-EREBP, ARR-B, C2C2-CO-like, C2C2-Dof, PBF-2-like/Whirly, Pseudo ARR-B, SBP, and WRKY, predate the split between green algae and streptophytes. The identification of putative complete lists of TFs has also allowed the delineation of lineage-specific regulatory families. The families SBP, bHLH, SNF2, MADS, WRKY, HMG, AP2-EREBP and FHA differ significantly in size between algae and land plants. The SBP family of TFs is significantly larger in C. reinhardtii than in land plants, and appears to have been lost in the prasinophyte O. tauri. The families bHLH, SNF2, MADS, WRKY, HMG, AP2-EREBP and FHA preferentially expanded with the colonisation of land, and might have played an important role in this major evolutionary transition.
Later, after the split of bryophytes and tracheophytes, the families MADS, AP2-EREBP, NAC, AUX/IAA, PHD and HRT reached significantly larger numbers in the lineage leading to seed plants. We identified 23 families that are restricted to land plants and that might have played an important role in the colonization of this new habitat. Based on the lists of TFs in different species we have started to develop high-throughput experimental platforms (in rice and C. reinhardtii) to monitor gene expression changes of TF genes under different genetic, developmental or environmental conditions. In this work we present the monitoring of Arabidopsis thaliana TFs during the onset of senescence, a process that leads to cell and tissue disintegration in order to redistribute nutrients (e.g., nitrogen) from leaves to reproductive organs. We show that the expression of 185 TF genes changes as leaves develop from half to fully expanded and finally enter partial senescence. Of these TFs, 76% are down-regulated during senescence; the remainder are up-regulated. The identification of TFs in plants in a comparative genomics setup has proven fruitful for the understanding of evolutionary processes and contributes to the elucidation of complex developmental programs.
The vacuolar H+-ATPase (V-ATPase) in the apical membrane of blowfly (Calliphora vicina) salivary gland cells energizes the secretion of a KCl-rich saliva in response to the neurohormone serotonin (5-HT). We have shown previously that exposure to 5-HT induces a cAMP-mediated reversible assembly of V0 and V1 subcomplexes into V-ATPase holoenzymes and increases V-ATPase-driven proton transport. Here, we analyze whether the effect of cAMP on V-ATPase is mediated by protein kinase A (PKA) or exchange protein directly activated by cAMP (Epac), the cAMP target proteins that are present within the salivary glands. Immunofluorescence microscopy shows that PKA activators, but not Epac activators, induce the translocation of V1 components from the cytoplasm to the apical membrane, indicative of an assembly of V-ATPase holoenzymes. Measurements of transepithelial voltage changes and microfluorometric pH measurements at the luminal surface of cells in isolated glands demonstrate further that PKA-activating cAMP analogs increase cation transport to the gland lumen and induce a V-ATPase-dependent luminal acidification, whereas activators of Epac do not. Inhibitors of PKA block the 5-HT-induced V1 translocation to the apical membrane and the increase in proton transport. We conclude that cAMP exerts its effects on V-ATPase via PKA.
Landscapes evolve in a complex interplay between climate and tectonics. Thus, the geomorphic characteristics of a landscape can only be understood if both climatic and tectonic signals of past and ongoing processes can be identified. In order to evaluate the impact of both forcing factors it is crucial to quantify the evolution of geomorphic markers in natural environments. The Cenozoic Andes are an ideal setting in which to evaluate tectonic and climatic aspects of landscape evolution at different time and length scales in different natural compartments. The Andean Cordillera constitutes the type subduction orogen and has been associated with the subduction of the oceanic Nazca Plate beneath the South American continent for at least 200 million years. In Chile and the adjacent regions this convergent margin is characterized by active tectonics, volcanism, and mountain building. Importantly, along the coast of Chile megathrust earthquakes occur frequently and influence landscape evolution. In fact, the largest earthquake ever recorded occurred in south-central Chile in 1960 and comprised a rupture zone of ~1000 km length. However, on time scales beyond the historic documentation of seismicity it is not well known how such seismotectonic segments have behaved and how they influence the geomorphic evolution of the coastal realms. With several semi-independent morphotectonic segments, recurrent megathrust earthquakes, and a plethora of geomorphic features indicating sustained tectonism, the margin of Chile is thus a key area in which to study relationships between surface processes and tectonics. In this study, I combined geomorphology, geochronology, sedimentology, and morphometry to quantify the Pliocene-Pleistocene landscape evolution of the tectonically active south-central Chile forearc.
Thereby, I provide (1) new results on the influence of seismotectonic forearc segmentation on the geomorphic evolution and (2) new insights into the interaction between climate and tectonics with respect to the morphology of the Chilean forearc region. In particular, I show that the forearc is characterized by three long-term segments that do not correlate with short-lived earthquake-rupture zones. These segments are the Nahuelbuta, Toltén, and Bueno segments, each recording a distinct geomorphic and tectonic evolution. The Nahuelbuta and Bueno segments are undergoing active tectonic uplift. The long-term behavior of these two segments is manifested in the form of two doubly plunging, growing antiforms that constitute an integral part of the Coastal Cordillera and record the uplift of marine and river terraces. In addition, these uplifting areas have caused major changes in the flow directions of rivers. In contrast, the Toltén segment, situated between the two other segments, appears to be quasi-stable. In order to further quantify uplift and incision in the actively deforming Nahuelbuta segment, I dated an erosion surface and fluvial terraces in the Coastal Cordillera with cosmogenic 10Be and 26Al and optically stimulated luminescence, respectively. According to my results, late Pleistocene uplift rates of 0.88 mm a-1 are faster than surface-uplift rates averaged over the last 5 Ma, which are in the range of 0.21 mm a-1. This discrepancy suggests that surface uplift is highly variable in time and space and might preferentially concentrate along reverse faults, as indicated by a late Pleistocene flow reversal.
In addition, the results of exposure dating with cosmogenic 10Be and 26Al indicate that the morphotectonic segmentation of this region of the forearc was established in Pliocene time, coeval with the initiation of uplift of the Coastal Cordillera about 5 Ma ago, which is inferred to be related to a shift in subduction mode from erosion to accretion. Finally, I dated volcanic clasts obtained from alluvial surfaces in the Central Depression, a low-relief sector separating the Coastal from the Main Cordillera, with stable cosmogenic 3He and 21Ne, in order to reveal the controls on sediment accumulation in the forearc. My results document that these gently sloping surfaces were deposited 150 to 300 ka ago. This deposition may be related to changes in the erosional regime during glacial episodes. Taken together, these data indicate that the overall geomorphic expression of the forearc is of post-Miocene age and may be intimately related to a climatic overprint of the tectonic system. This climatic forcing is also reflected in the topography and local relief of the Central and Southern Andes, which vary considerably along the margin, determined by the dominant surface process, which in turn is ultimately controlled by climate. However, relief also partly reflects surface processes that took place under past climatic conditions. This emphasizes that due care has to be exercised when interpreting landscapes as mirrors of modern climates.
The comprehension of figurative language: electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERPs), brain activity during the comprehension of ironic utterances relative to equivalent non-ironic utterances was measured and analyzed. Moreover, the impact of various language-accompanying cues, e.g., prosody or the use of punctuation marks, as well as non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e., the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.
The Ginibre gas is a Poisson point process defined on a space of loops related to the Feynman-Kac representation of the ideal Bose gas. Here we study thermodynamic limits of different ensembles via the Martin-Dynkin boundary technique and show in which way infinitely long loops occur. This effect is the so-called Bose-Einstein condensation.
It has always been enigmatic which processes control the accretion of the North American terranes to the Pacific plate and the landward migration of the San Andreas plate boundary. One theory suggests that the Pacific plate first cools and captures the uprising mantle in the slab window, and then causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires 3D thermomechanical numerical modeling. However, a software tool suitable for such modeling is not currently available in the geodynamic modeling community. The presented work therefore comprises two interconnected tasks. The first task is the development and testing of a research Finite Element code with sufficiently advanced facilities to perform three-dimensional, geological time scale simulations of lithospheric deformation. The second task consists of applying the developed tool to the Neogene deformation of the crust and the mantle along the San Andreas Fault System in central and northern California. The geological time scale modeling of lithospheric deformation poses numerous conceptual and implementation challenges for software tools. Among them are the necessity to handle the brittle-ductile transition within a single computational domain, to adequately represent the rock rheology over a broad range of temperatures and stresses, and to resolve the extreme deformations of the free surface and internal boundaries. In the framework of this thesis the new Finite Element code (SLIM3D) has been successfully developed and tested.
This code includes a coupled thermomechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. The code incorporates an Arbitrary Lagrangian Eulerian formulation with a free surface and Winkler boundary conditions. The modeling technique developed is used to study the factors influencing the Neogene lithospheric deformation in central and northern California. The model setup focuses on the interaction between the three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenosphere upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, a simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones, together with the transpression related to fault stepping, renders cooling in the slab window alone incapable of explaining the eastward migration of the plate boundary. From the viewpoint of thermomechanical modeling, the results confirm the geological concept which assumes that a series of microplate capture events has been the primary reason for the inland migration of the San Andreas plate boundary over the last 20 Ma. The remnants of the Farallon slab, stalled in the fossil subduction zone, create a much stronger heterogeneity in the mantle than the cooling of the uprising asthenosphere, providing a more efficient and direct way of transferring the North American terranes to the Pacific plate. The models demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization in the brittle crust.
The magnitude of the friction coefficient inferred from the modeling is about 0.075, which is far less than the typical values of 0.6-0.8 obtained from a variety of borehole stress measurements and laboratory data. Therefore, the model results presented in this thesis provide an additional independent constraint that supports the “weak-fault” hypothesis in the long-standing debate over the strength of major faults in the SAFS.
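The weak- versus strong-fault contrast follows directly from the Mohr-Coulomb criterion. In the sketch below, only the friction coefficients (0.075 vs. 0.6) come from the text; the effective normal stress and zero cohesion are illustrative assumptions:

```python
# Mohr-Coulomb failure criterion: tau = C + mu * sigma_n.
# Only the friction coefficients are taken from the modeling results;
# the normal stress and cohesion below are assumed values for illustration.

def fault_strength(mu, sigma_n_mpa, cohesion_mpa=0.0):
    """Shear stress (MPa) required for frictional sliding on a fault plane."""
    return cohesion_mpa + mu * sigma_n_mpa

sigma_n = 100.0  # MPa, an assumed effective normal stress at seismogenic depth

weak = fault_strength(0.075, sigma_n)  # inferred from the SLIM3D models
strong = fault_strength(0.6, sigma_n)  # lower end of borehole-derived values

print(weak, strong)  # the "weak" fault fails at ~1/8 of the shear stress
```

At any given depth, the modeled friction coefficient thus implies a fault roughly an order of magnitude weaker than laboratory friction values would predict, which is the essence of the weak-fault hypothesis.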
Contents: Artem Polyvanny, Sergey Smirnow, and Mathias Weske: The Triconnected Abstraction of Process Models
1 Introduction
2 Business Process Model Abstraction
3 Preliminaries
4 Triconnected Decomposition
4.1 Basic Approach for Process Component Discovery
4.2 SPQR-Tree Decomposition
4.3 SPQR-Tree Fragments in the Context of Process Models
5 Triconnected Abstraction
5.1 Abstraction Rules
5.2 Abstraction Algorithm
6 Related Work and Conclusions
The paper sheds some light on the returns to education in Germany in the post-war period. After describing higher education in Germany, the current state of higher education financing within the individual states is presented. In six states tuition fees will be introduced in 2007/08, and discussions are ongoing in several more. In the second part of the paper an empirical analysis is conducted using longitudinal data from the German social pension system. The analysis over the whole life cycle yields results which prove that the advantages of higher education are quite remarkable and might justify more intensive financing through tuition fees. However, all this has to be embedded in an encompassing strategy of tax and social policy, especially to prevent an intensified process of social selection, which would be counterproductive to building a larger stock of highly qualified human capital in Germany.
A production study is presented that investigates the effects of word order and information structural context on the prosodic realization of declarative sentences in Hindi. Previous work on Hindi intonation has shown that: (i) non-final content words bear rising pitch accents (Moore 1965, Dyrud 2001, Nair 1999); (ii) focused constituents show greater pitch excursion and longer duration, and post-focal material undergoes pitch range reduction (Moore 1965, Harnsberger 1994, Harnsberger and Judge 1996); and (iii) focused constituents may be followed by a phrase break (Moore 1965). By means of a controlled experiment, we investigated the effect of focus in relation to word order variation using 1200 utterances produced by 20 speakers. Fundamental frequency (F0) and duration of constituents were measured in Subject-Object-Verb (SOV) and Object-Subject-Verb (OSV) sentences in different information structural conditions (wide focus, subject focus and object focus). The analyses indicate that (i) regardless of word order and focus, the constituents are in a strict downstep relationship; (ii) focus is mainly characterized by post-focal pitch range reduction rather than pitch raising of the element in focus; (iii) given expressions that occur pre-focally appear to undergo no reduction; (iv) pitch excursion and duration of the constituents are greater in OSV than in SOV sentences. A phonological analysis suggests that focus affects pitch scaling and that word order influences the prosodic phrasing of the constituents.
The modern foreland basin straddling the eastern margin of the Andean orogen is the prime example of a retro-arc foreland basin system adjacent to a subduction orogen. While widely studied in the central and southern Andes, the spatial and temporal evolution of the Cenozoic foreland basin system in the northern Andes has received considerably less attention. This is in part due to the complex geodynamic boundary conditions, such as the oblique subduction and accretion of the Caribbean plates in addition to the already complex interaction between the Nazca and South American plates. In the Colombian Andes, for example, a foreland basin system has been forming since ~80 Ma over an area previously affected by rift tectonics during the Mesozoic. This setting of Cenozoic contractional deformation superposed on continental crust pre-strained by extensional processes thus represents a natural, yet poorly studied, experimental set-up in which the role of tectonic inheritance on the development of foreland basin systems can be evaluated. However, a detailed documentation of the early foreland basin evolution in this part of the Andes has thus far only been accomplished in the more internal sectors of the orogen. In this study, I integrate new structural, sedimentological and biostratigraphic data with low-temperature thermochronology from the eastern sector of the Colombian Andes, in order to provide the first comprehensive account of mountain building and related foreland basin sedimentation in this part of the orogen, and to assess to what extent pre-existing basement anisotropies have conditioned the locus of foreland deformation in space and time.
In the Medina Basin, along the eastern flank of the Eastern Cordillera, I integrated detailed structural mapping and new sedimentological data with a new chronostratigraphic framework based on detailed palynology, linking an eastward-thinning early Oligocene to early Miocene syntectonic wedge containing rapid facies changes with an episode of fast tectonic subsidence starting at ~30 Ma. This record represents the first evidence of topographic loading generated by slip along the principal basement-bounding thrusts in the Eastern Cordillera to the west of the basin and thus constrains the onset of mountain building in this area. A comprehensive assessment of exhumation patterns based on zircon fission-track (ZFT) and apatite fission-track (AFT) analysis and thermal modelling reveals that these thrust loads were located along the contractionally reactivated Soapaga Fault in the axial sector of the Eastern Cordillera. Farther to the east, AFT and ZFT data also document the onset of thrust-induced exhumation associated with contractional reactivation of the main range-bounding Servitá Fault at ~20 Ma. Associated with this episode of orogenic growth, peak burial temperature estimates based on vitrinite reflectance data in the Cenozoic sedimentary record of the adjacent Medina Basin document the earlier incorporation of the western sector of the basin into the advancing fold and thrust belt. I combined these new thermochronological data with published AFT analyses and known chronologic indicators of brittle deformation in order to evaluate the patterns of orogenic-front migration in the Andes of central Colombia. This spatiotemporal analysis of deformation reveals an episodic pattern of eastward migration of the orogenic front at an average rate of 2.5-2.7 mm/yr during the Late Cretaceous-Cenozoic. I identified three major stages of orogen propagation.
First, following the initiation of mountain building in the Central Cordillera during the Late Cretaceous, the orogenic front propagated eastward at slow rates (0.5-3.1 mm/yr) until early Eocene times. Such slow orogenic advance would have resulted from a limited accretionary flux related to slow and oblique (SW-NE-oriented) convergence of the Farallon and South American plates during that time. A second stage of rapid orogenic advance (4.0-18.0 mm/yr) during the middle-late Eocene, locally reaching at least 100 mm/yr in the middle Eocene, resulted from the initial tectonic inversion of the Eastern Cordillera. I correlate this episode of rapid orogen-front migration with an increase in the accretionary flux triggered by an acceleration in convergence and a rotation of the convergence vector to a more orogen-perpendicular direction. Finally, stagnation of the Miocene deformation front along reactivated former rift-bounding faults on the eastern flank of the Eastern Cordillera led to a decrease in the rates of orogenic advance. Post-late Miocene-Pliocene thrusting along the actively deforming front of the Eastern Cordillera at this latitude suggests averaged Miocene-Holocene orogen propagation rates of 1.2-2.1 mm/yr. In addition, ZFT data suggest that exhumation along the eastern flank of the orogen occurred at moderate rates of ~0.3 mm/yr during the Miocene, prior to an acceleration of exhumation since the Pliocene, as suggested by recently published AFT data. In order to evaluate the relations between thrust loading and sedimentary facies evolution in the foreland, I analyzed gravel progradation in the foreland basin system. In particular, I compared one-dimensional Eocene to Pliocene sediment accumulation rates in the Medina Basin with a three-dimensional sedimentary budget based on the interpretation of ~1800 km of industry-style seismic reflection profiles and borehole data tied to the new chronostratigraphic framework. 
The sedimentological data from the Medina Basin reveal rapid accumulation of fluvial and lacustrine sediments at rates of up to ~0.5 mm/yr during the Miocene. Provenance data based on gravel petrography and paleocurrents reveal that these Miocene fluvial systems were sourced by Upper Cretaceous and Paleocene sedimentary units exposed to the west, in the Eastern Cordillera. Peak sediment-accumulation rates in the upper Carbonera Formation and the Guayabo Group occur during episodes of gravel progradation in the proximal foredeep in the early and late Miocene. I interpret this positive correlation between sediment accumulation and gravel deposition as a direct consequence of thrust activity along the Servitá-Lengupá Fault. This contrasts with current models relating gravel progradation to episodes of tectonic quiescence in more distal portions of foreland basin systems and calls for a re-evaluation of tectonic histories inferred from sedimentary units in other mountain belts. In summary, my results document a late Eocene-early Miocene eastward advance of the topographic loads associated with the leading edge of deformation in the northern Andes of Colombia. Crustal thickening of the Eastern Cordillera associated with the initiation of thrusting along the Servitá Fault illustrates that this sector of the Andean orogen had already acquired ~90% of its present width by the early Miocene (~20 Ma). My data thus demonstrate that inherited crustal anisotropies, such as the former rift-bounding faults of the Eastern Cordillera, favour a non-systematic progression of foreland deformation through time by preferentially concentrating the accommodation of slip and thrust loading. 
This new chronology of exhumation and deformation associated with specific structures in the Colombian Andes also constitutes an important advance in understanding hydrocarbon maturation, migration and trap formation in the prolific petroleum province of the Llanos Basin in the modern foredeep area.
Chloroplasts as bioreactors : high-yield production of active bacteriolytic protein antibiotics
(2008)
Plants, or more precisely their chloroplasts with the bacterial-like expression machinery inherited from their cyanobacterial ancestors, can potentially offer a cheap expression system for proteinaceous pharmaceuticals. This system would be easily scalable and provides appropriate safety owing to the maternal inheritance of chloroplasts. In this work it was shown that three phage lytic enzymes (Pal, Cpl-1 and PlyGBS) can be expressed at very high levels and with high stability in tobacco chloroplasts. PlyGBS expression reached a level of foreign protein accumulation (>70% of total soluble protein, TSP) never obtained before. Although the high expression level of PlyGBS caused a pale-green phenotype with retarded growth, presumably due to exhaustion of the plastid protein synthesis capacity, development and seed production were not impaired under greenhouse conditions. Since Pal and Cpl-1 showed toxic effects when expressed in E. coli, a special plastid transformation vector (pTox) was constructed to allow DNA amplification in bacteria. The pTox vector enables recombinase-mediated deletion of an E. coli transcription block in the chloroplast, which increased foreign protein accumulation to up to 40% of TSP for Pal and 20% of TSP for Cpl-1. High, dose-dependent bactericidal efficiency was shown for all three plant-derived lytic enzymes against their pathogenic target bacteria S. pyogenes and S. pneumoniae. Specificity was confirmed for the endotoxic proteins Pal and Cpl-1 by application to E. coli cultures. These results establish tobacco chloroplasts as a new cost-efficient and convenient production platform for phage lytic enzymes and address the greatest obstacle to their clinical application. The present study is the first report of lysin production in a non-bacterial system. 
The properties of the chloroplast-produced lysins described in this work, namely their stability, high accumulation and biological activity, make them highly attractive candidates for future antibiotics.
The paper studies regional integration as a unique process that depends on the degree of cooperation and interchange among regions. Existing approaches to regional integration are generalised and classified according to a set of criteria. Data on the main economic indicators have been analysed. The economic analysis reveals differences in production endowments, asymmetry in fixed-capital investment, and the disproportional distribution of income and foreign direct investment across Ukrainian regions in 2001-2005. Econometric modelling confirms a division between highly urbanised industrial regions and backward agrarian regions in Ukraine, industrial development disparities among regions, insufficient infrastructure (telecommunications, roads, hotels, services, etc.), low labour productivity in the industrial sector, and insufficient interregional trade.
Background: Haplotype inference based on unphased SNP markers is an important task in population genetics. Although there are different approaches to the inference of haplotypes in diploid species, the existing software is not suitable for inferring haplotypes from unphased SNP data in polyploid species, such as the cultivated potato (Solanum tuberosum). Potato species are tetraploid and highly heterozygous.
Results: Here we present the software SATlotyper, which is able to handle polyploid and polyallelic data. SATlotyper formulates Haplotype Inference by Pure Parsimony as a Boolean satisfiability (SAT) problem. The software can exclude previously computed haplotype inferences, thus allowing the calculation of alternative inferences. As it is not known which of the multiple haplotype inferences is best supported by the given unphased data set, we use a bootstrapping procedure that allows alternative inferences to be scored. Finally, by means of the bootstrapping scores, it is possible to optimise the phased genotypes belonging to a given haplotype inference. The program was evaluated with simulated and experimental SNP data generated for heterozygous tetraploid populations of potato. We show that, instead of taking the first haplotype inference reported by the program, we can significantly improve the quality of the final result by applying additional methods that include scoring of the alternative haplotype inferences and genotype optimisation. For a sub-population of nineteen individuals, the results predicted by SATlotyper were directly compared with results obtained by experimental haplotype inference via sequencing of cloned amplicons. Prediction and experiment gave similar results regarding the inferred haplotypes and phased genotypes.
Conclusion: Our results suggest that Haplotype Inference by Pure Parsimony can be solved efficiently by the SAT approach, even for unphased SNP data sets from heterozygous polyploids. SATlotyper is freeware and is distributed as a Java JAR file. The software can be downloaded from the webpage of the GABI Primary Database at http://www.gabipd.org/projects/satlotyper/. The application of SATlotyper will provide haplotype information which can be used in haplotype association mapping studies of polyploid plants.
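The pure-parsimony objective that SATlotyper encodes as a SAT instance can be illustrated with a deliberately naive sketch for the diploid case (the actual software handles polyploids and delegates the search to a SAT solver; the brute-force enumeration, the 0/1/2 genotype coding and the toy genotypes below are illustrative assumptions, not the published method):

```python
from itertools import combinations, product

def explains(h1, h2, genotype):
    # two haplotypes explain a genotype if their allele counts sum to it
    return all(a + b == g for a, b, g in zip(h1, h2, genotype))

def pure_parsimony(genotypes):
    """Brute-force Haplotype Inference by Pure Parsimony (diploid toy):
    return the smallest haplotype set that explains every genotype."""
    n_sites = len(genotypes[0])
    candidates = list(product((0, 1), repeat=n_sites))
    for k in range(1, len(candidates) + 1):
        for hap_set in combinations(candidates, k):
            phasing = {}
            for g in genotypes:
                pair = next(((h1, h2) for h1 in hap_set for h2 in hap_set
                             if explains(h1, h2, g)), None)
                if pair is None:
                    break
                phasing[g] = pair
            else:  # every genotype was explained by this haplotype set
                return list(hap_set), phasing
    return None, None

# genotypes coded as per-site counts of allele 1 (0, 1 or 2)
haps, phasing = pure_parsimony([(2, 0, 1), (1, 1, 1), (0, 2, 2)])
```

Enumerating all subsets is exponential, which is exactly why a SAT formulation with a modern solver, as in SATlotyper, is the practical route for real marker data.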
For more than 70 years, understanding the mechanism of particle nucleation in emulsion polymerization has been one of the most challenging issues in heterophase polymerization research. Within this work a comprehensive experimental study of particle nucleation in the emulsion polymerization of styrene at 70 °C and under a variety of conditions has been performed. To follow the onset of nucleation, on-line conductivity measurements were applied. This technique is highly sensitive to the mobility of conducting species and hence can be employed to follow the aggregation processes leading to particle formation. In addition, particle growth was followed by recording the optical transmission (turbidity) of the reaction mixture. Complementary to the on-line investigations, off-line characterizations of the particle morphology and the molecular weight have been performed. The aim was to achieve a better insight into the processes taking place from the start of the reaction, via particle nucleation, to the formation of colloidally stable latex particles. With this experimental protocol the initial period of styrene emulsion polymerization was investigated in the absence as well as in the presence of various surfactants (at concentrations above and below the critical micellization concentration) and also in the presence of seed particles. Ionic and non-ionic initiators (hydrophilic and hydrophobic types) were applied to start the polymerizations. Following this protocol, experimental evidence has been obtained showing the possibility of performing surfactant-free emulsion polymerization of styrene with oil-soluble initiators. The duration of the pre-nucleation period (that is, the time between the start of the polymerization and nucleation) can be precisely adjusted via the initiator hydrophobicity, the equilibration time of styrene in water, and the surfactant concentration. 
Spontaneous emulsification of the monomer in water, as soon as both phases are brought into contact, is a key factor in explaining the experimental results. The equilibration time of the monomer in water as well as the type and concentration of other materials in the water phase (surfactants, seed particles, etc.) control the formation rate and the size of the emulsified droplets and thus have a strong influence on particle nucleation and particle morphology. One of the main tasks was to investigate the effect of surfactant molecules, and especially micelles, on the nucleation mechanism. Experimental results revealed that in the presence of emulsifier micelles the conductivity pattern does not change essentially. This means that the presence of emulsifiers does not change the mechanism of particle formation qualitatively. However, surfactants assist the nucleation process, as they lower the activation free energy of particle formation. In contrast, seed particles influence particle nucleation substantially. In the presence of seed particles above a critical volume fraction the formation of new particles can be suppressed. However, micelles and seed particles acting as absorbers exhibit a common behavior under conditions where monomer equilibration is not allowed. The results prove that the nucleation mechanism comprises the initiation of water-soluble oligomers in the aqueous phase followed by their aggregation. The process is heterogeneous in nature due to the presence of monomer droplets.
This first volume of the DIGAREC Series holds the proceedings of the conference “The Philosophy of Computer Games”, held at the University of Potsdam from May 8-10, 2008. The contributions of the conference address three fields of computer game research that are philosophically relevant and, likewise, to which philosophical reflection is crucial. These are: ethics and politics, the action-space of games, and the magic circle. All three topics are interlinked and constitute the paradigmatic object of computer games: Whereas the first describes computer games on the outside, looking at the cultural effects of games as well as on moral practices acted out with them, the second describes computer games on the inside, i.e. how they are constituted as a medium. The latter finally discusses the way in which a border between these two realms, games and non-games, persists or is already transgressed in respect to a general performativity.
In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
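The jamming of unidirectional motor traffic in a closed tube can be caricatured with a lattice-gas sketch in the spirit of such Monte Carlo simulations (this is an illustrative toy, not the thesis model: the lattice size, injection rate, random-sequential update rule and closed-tip boundary are all assumptions made for the sketch):

```python
import random

def simulate_tube(length=50, steps=5000, inject=0.2, seed=1):
    """Toy lattice gas: plus-end-directed motors hop rightwards on a 1-D
    tube with hard-core exclusion; the right boundary (the tip) is closed,
    so motors pile up there. Returns the time-averaged density profile."""
    random.seed(seed)
    lattice = [0] * length          # 1 = site occupied by a motor
    occupancy = [0] * length
    for _ in range(steps):
        # attempt injection at the cell-body end
        if lattice[0] == 0 and random.random() < inject:
            lattice[0] = 1
        # random-sequential update: hop right if the next site is free
        for i in random.sample(range(length - 1), length - 1):
            if lattice[i] == 1 and lattice[i + 1] == 0:
                lattice[i], lattice[i + 1] = 0, 1
        for i, site in enumerate(lattice):
            occupancy[i] += site
    return [count / steps for count in occupancy]

profile = simulate_tube()
# the time-averaged density is highest at the closed tip: a 'motor traffic jam'
```

Because the tip absorbs motors without releasing them, the jam nucleates there and grows backwards, which is the qualitative behaviour the thesis resolves analytically and with simulations.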
Workplace-related anxieties and workplace phobia : a concept of domain-specific mental disorders
(2008)
Background: Anxiety in the workplace is a special problem, as workplaces are especially prone to provoke anxiety: there are social hierarchies, rivalries between colleagues, sanctioning by superiors, danger of accidents, failure, and worries about job security. Workplace phobia is a phobic anxiety reaction, with symptoms of panic occurring when thinking of or approaching the workplace and with a clear tendency towards avoidance. Objectives: What characterizes workplace-related anxieties and workplace phobia as domain-specific mental disorders in contrast to conventional anxiety disorders? Method: 230 patients from an inpatient psychosomatic rehabilitation center were interviewed with the semi-structured Mini-Work-Anxiety-Interview and the Mini International Neuropsychiatric Interview concerning workplace-related anxieties and conventional mental disorders. Additionally, the patients filled in the self-rating questionnaires Job-Anxiety-Scale (JAS) and Symptom Checklist (SCL-90-R), measuring job-related and general psychosomatic symptom load. Results: Workplace-related anxieties occurred together with conventional anxiety disorders in 35% of the patients, but also alone in others (23%). Workplace phobia was found in 17% of the interviewed patients; a diagnosis of workplace-related anxiety of any kind was stated in 58%. Workplace-phobic patients had significantly higher job-anxiety scores than patients without workplace phobia. Patients with workplace phobia had also been on sick leave significantly longer in the past 12 months (23.5 weeks) than patients without workplace phobia (13.4 weeks). Different qualities of workplace-related anxiety lead to work-participation disorders at different frequencies. Conclusion: Workplace phobia cannot be described by assessing only the general level of psychosomatic symptom load and conventional mental disorders. 
Workplace-related anxieties and workplace phobia have a clinical value of their own, defined mainly by specific workplace-related symptom load and work-participation disorders. They require special therapeutic attention and treatment instead of a “sick leave” certification by the general health physician. Workplace phobia should be named with a proper diagnosis according to ICD-10 chapter V, F 40.8: “workplace phobia”.
Erster Deutscher IPv6 Gipfel (First German IPv6 Summit)
(2008)
Contents: Communiqué; Welcome Address; Programme; Background and Facts; Speakers: Biographies & Talk Abstracts. 1.) The First German IPv6 Summit at the Hasso Plattner Institute in Potsdam - Prof. Dr. Christoph Meinel - Viviane Reding 2.) IPv6, Its Time Has Come - Vinton Cerf 3.) The Significance of IPv6 for Public Administration in Germany - Martin Schallbruch 4.) Towards the Future of the Internet - Prof. Dr. Lutz Heuser 5.) IPv6 Strategy & Deployment Status in Japan - Hiroshi Miyata 6.) IPv6 Strategy & Deployment Status in China - Prof. Wu Hequan 7.) IPv6 Strategy and Deployment Status in Korea - Dr. Eunsook Kim 8.) IPv6 Deployment Experiences in the Greek School Network - Athanassios Liakopoulos 9.) IPv6 Network Mobility and Its Usage - Jean-Marie Bonnin 10.) IPv6 - A Toolkit for Operators & ISPs: IPv6 Deployment and Strategies of Deutsche Telekom - Henning Grote 11.) View from the IPv6 Deployment Frontline - Yves Poppe 12.) Deploying IPv6 in Mobile Environments - Wolfgang Fritsche 13.) Production-Ready IPv6 from the Customer LAN to the Internet - Lutz Donnerhacke 14.) IPv6 - The Basis for Network-Centric Operations (NetOpFü) in the German Federal Armed Forces: Challenges, Use Cases, Activities - Carsten Hatzig 15.) Windows Vista & IPv6 - Bernd Ourghanlian 16.) IPv6 & Home Networking: Technical and Business Challenges - Dr. Tayeb Ben Meriem 17.) DNS and DHCP for Dual-Stack Networks - Lawrence Hughes 18.) Car Industry: German Experience with IPv6 - Amardeo Sarma 19.) IPv6 & Autonomic Networking - Ranganai Chaparadza 20.) P2P & Grid Using IPv6 and Mobile IPv6 - Dr. Latif Ladid
Bio-jETI
(2008)
Background: With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout.
Methods: Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution.
Conclusions: As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.
We give the explicit solution for the minimax linear estimate. For scale-dependent models an empirical minimax linear estimate is defined, and we prove that these estimates are Stein estimates.
Plants are the primary producers of biomass and thereby the basis of all life. Many varieties are cultivated, mainly to produce food, but to an increasing extent as a source of renewable energy. Because of the limited acreage available, further improvements of cultivated species with respect to both yield and composition are inevitable. One approach towards improved plant cultivars is a systems-biology-oriented one. This work aimed to investigate the primary metabolism of the model plant A. thaliana and its relation to plant growth using quantitative genetics methods. A special focus was set on the characterization of heterosis, the deviation of hybrids from their parental means for certain traits, at the metabolic level. More than 2000 samples of recombinant inbred lines (RILs) and introgression lines (ILs) developed from the two accessions Col-0 and C24 were analyzed for 181 metabolic traces using gas chromatography/mass spectrometry (GC-MS). The observed variance allowed the detection of 157 metabolic quantitative trait loci (mQTL), genetic regions carrying genes that are relevant for metabolite abundance. By analyzing several hundred test crosses of RILs and ILs it was further possible to identify 385 heterotic metabolic QTL (hmQTL). Within the scope of this work a robust method for large-scale GC-MS analyses was developed. A highly significant canonical correlation between biomass and metabolic profiles (r = 0.73) was found. A comparison of the results of the two independent experiments using RILs and ILs showed large agreement. The confirmation rate for RIL QTL in ILs was 56% for mQTL and 23% for hmQTL. Candidate genes from available databases could be identified for 67% of the mQTL. To validate some of these candidates, eight genes were re-sequenced and in total 23 polymorphisms were found. In the hybrids, heterosis is small for most metabolites (<20%). 
Heterotic QTL gave rise to fewer candidate genes and a lower overlap between the two populations than was determined for mQTL. This suggests that regulatory loci and epistatic effects contribute to metabolite heterosis. The data described in this thesis provide a rich source for further investigation and annotation of relevant genes and may pave the way towards a better understanding of plant biology on a systems level.
Although educational content in electronic form is increasing dramatically, its usage in educational environments is poor, mainly because there is too much unreliable, redundant, and irrelevant information. Finding appropriate answers is a rather difficult task that relies on the user to filter the pertinent information from the noise. Turning knowledge bases like the online tele-TASK archive into useful educational resources requires identifying correct, reliable, and "machine-understandable" information, as well as developing simple but efficient search tools with the ability to reason over this information. Our vision is to create an E-Librarian Service which is able to retrieve multimedia resources from a knowledge base in a more efficient way than by browsing through an index or by using a simple keyword search. In our E-Librarian Service, the user can enter a question in a simple and human way: in natural language (NL). Our premise is that more pertinent results would be retrieved if the search engine understood the sense of the user's query. The returned results are then logical consequences of an inference rather than of keyword matching. Our E-Librarian Service does not return the answer to the user's question; rather, it retrieves the most pertinent document(s) in which the user finds the answer to his or her question. Among all the documents that have some information in common with the user query, our E-Librarian Service identifies the most pertinent match(es), keeping in mind that the user expects an exhaustive answer while preferring a concise answer with little or no information overhead. Also, our E-Librarian Service always proposes a solution to the user, even if the system concludes that there is no exhaustive answer. Our E-Librarian Service was implemented prototypically in three different educational tools. 
A first prototype is CHESt (Computer History Expert System); it has a knowledge base with 300 multimedia clips that cover the main events in computer history. A second prototype is MatES (Mathematics Expert System); it has a knowledge base with 115 clips that cover the topic of fractions in mathematics for secondary school with respect to the official school programme. All clips were recorded mainly by pupils. The third and most advanced prototype is the "Lecture Butler's E-Librarian Service"; it has a Web service interface to respect a service-oriented architecture (SOA), and was developed in the context of the Web-University project at the Hasso-Plattner-Institute (HPI). Two major experiments in an educational environment - at the Lycée Technique Esch/Alzette in Luxembourg - were made to test the pertinence and reliability of our E-Librarian Service as a complement to traditional courses. The first experiment (in 2005) was made with CHESt in different classes and covered a single lesson. The second experiment (in 2006) covered a period of six weeks of intensive use of MatES in one class. There was no classical mathematics lesson in which the teacher gave explanations; the students had to learn in an autonomous and exploratory way. They had to ask their questions to the E-Librarian Service just as they would ask a human teacher.
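The difference between keyword matching and retrieval as a logical consequence of an inference can be illustrated with a toy subsumption check (the taxonomy, the clip identifiers and the tagging scheme below are invented for illustration and are not the actual knowledge base or reasoner of the E-Librarian Service):

```python
# Toy taxonomy: child concept -> parent concept
TAXONOMY = {
    "transistor": "hardware",
    "vacuum tube": "hardware",
    "hardware": "computer history",
    "compiler": "software",
    "software": "computer history",
}

# Hypothetical clips, each annotated with the concepts it covers
DOCUMENTS = {
    "clip-12": {"transistor"},
    "clip-07": {"compiler"},
    "clip-31": {"vacuum tube"},
}

def subsumes(general, specific):
    """True if `specific` is the same concept as, or a descendant of,
    `general` in the taxonomy."""
    while specific is not None:
        if specific == general:
            return True
        specific = TAXONOMY.get(specific)
    return False

def retrieve(query_concept):
    # a clip matches if any of its concepts is subsumed by the query concept
    return sorted(doc for doc, tags in DOCUMENTS.items()
                  if any(subsumes(query_concept, t) for t in tags))

# a query about "hardware" retrieves the transistor and vacuum-tube clips,
# although neither clip carries the literal keyword "hardware"
```

A plain keyword search for "hardware" would return nothing here; the inference over the concept hierarchy is what makes the two specific clips logical matches for the general query.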
Background
Many animals live in environments where different types of predators pose a permanent threat and call for predator-specific strategies. When foraging, animals have to balance the competing needs for food and safety in order to survive. While animals can sometimes choose between microhabitats that differ in their risk of predation, many habitats are uniform in their risk distribution. So far, little is known about adaptive antipredator behavior under uniform risk. We simulated two predator types, avian and mammalian, each representing a spatially uniform risk in artificial resource landscapes. Voles served as experimental foragers.
Results
Animals were exposed to factorial combinations of weasel odour and ground cover to simulate avian and/or mammalian predation. We measured short- and long-term responses with video analysis and giving-up densities. The results show that previously experienced conditions cause delayed effects. After these effects had ceased, both types of predation risk caused a reduction in food intake. Avian predation induced a concentration on a smaller number of feeding patches. While higher avian risk delayed the onset of activity, the weasel odour shortened the latency until the voles became active.
Conclusion
We show that the voles discriminated between risk types and adjusted their feeding strategies accordingly. Responses to avian and mammalian risk differed in both strength and time scale. Uniformity of risk resulted in a concentration of foraging investment and lower foraging efficiency.
Giant vesicles may contain several spatial compartments formed by phase separation within their enclosed aqueous solution. This phenomenon might be related to molecular crowding, fractionation and protein sorting in cells. To elucidate this process we used two chemically dissimilar polymers, polyethylene glycol (PEG) and dextran, encapsulated in giant vesicles. The dynamics of the phase separation of this polymer solution enclosed in vesicles was studied by a concentration quench, i.e. by exposing the vesicles to hypertonic solutions. The excess membrane area produced by dehydration can either form tubular structures (also known as tethers) or be utilized to perform morphological changes of the vesicle, depending on the interfacial tension between the coexisting phases and the tensions between the membrane and the two phases. Membrane tube formation is coupled to the phase separation process. Apparently, the energy released by the phase separation is utilized to overcome the energy barrier for tube formation. The tubes may be adsorbed at the interface to form a two-dimensional structure. The membrane stored in the form of tubes can be retracted under small tension perturbations. Furthermore, a wetting transition, which has been reported in only a few experimental systems, was discovered in this system. With increasing polymer concentration, the PEG-rich phase changed from complete to partial wetting of the membrane. If sufficient excess membrane area is available in a vesicle in which both phases wet the membrane, one of the phases will bud off from the vesicle body, which leads to the separation of the two phases. This wetting-induced budding is governed by the surface energy and modulated by the membrane tension. This was demonstrated by micropipette aspiration experiments on vesicles encapsulating two phases. The budding of one phase can significantly decrease the surface energy by decreasing the contact area between the coexisting phases. 
The elasticity of the membrane allows it to adjust its tension automatically to balance the pulling force exerted by the interfacial tension of the two liquid phases at the three-phase contact line. The budding of the phase enriched in one polymer may be relevant to selective protein transport between lumens by means of vesicles in cells.
The Arctic region is undergoing the most rapid environmental change experienced on Earth, and the rate of change is expected to increase over the coming decades. Arctic coasts are particularly vulnerable because they lie at the interface between terrestrial systems dominated by permafrost and marine systems dominated by sea ice. An increased rise in sea level and the degradation of sea ice, as predicted by the Intergovernmental Panel on Climate Change in its most recent report and as observed recently in the Arctic, will likely result in greater rates of coastal retreat. An increase in coastal erosion would result in dramatic increases in the volumes of sediment, organic carbon and contaminants delivered to the Arctic Ocean. These in turn have the potential to create dramatic changes in the geochemistry and biodiversity of the nearshore zone and to affect the Arctic Ocean carbon cycle. To estimate the organic carbon input from coastal erosion to the Arctic Ocean, current methods rely on the length of the coastline in the form of non-self-similar line datasets. This thesis, however, emphasizes that using shorelines drawn at different scales can change the estimated amount of sediment released by 30% in some cases. It proposes an alternative method of computing erosion based on areas instead of lengths (i.e. buffers instead of shoreline lengths), which can easily be implemented at the circum-Arctic scale. Using this method, the variations in the quantities of eroded sediment are, on average, 70% less affected by scale changes, making it a more reliable method of calculation. Current estimates of coastal erosion rates in the Arctic are scarce and long-term datasets number only a handful, which complicates the assessment and prognosis of coastal processes, in particular the occurrence of coastal hazards. This thesis aims at filling this gap by providing the first long-term dataset (1951-2006) of coastal erosion on the Bykovsky Peninsula, North-East Siberia. 
This study shows that the coastline, which consists of ice-rich permafrost, retreated at a mean annual rate of 0.59 m/yr between 1951 and 2006. Rates were highly variable: 97.0% of the rates observed were less than 2 m/yr and 81.6% were less than 1 m/yr. However, no significant trend in erosion could be detected despite the study of five temporal sub-periods within 1951-2006. The juxtaposition of wind records could not explain the erosion record either, and this thesis therefore emphasizes the local controls on erosion, in particular the cryostratigraphy, the proximity of the Peninsula to the Lena River Delta freshwater plume, and the local topographical constraints on swell development. On ice-rich coastal stretches of the Arctic, the interaction of coastal dynamics and permafrost leads to the occurrence of spectacular “C-shaped” depressions termed retrogressive thaw slumps, which can reach lengths of up to 650 m. On Herschel Island and at King Point (Yukon Coastal Plain, northern Canada), topographical, sedimentological and biogeochemical surveys were conducted to investigate the present and past activity of these landforms. In particular, undisturbed tundra areas were compared with zones of former slump activity, now stabilized and re-vegetated. This thesis shows that stabilized areas are drier and less prone to plant growth than undisturbed areas and feature fundamentally different geotechnical properties. Radiocarbon dating and topographical surveys indicated a likely period of dramatic slump activity on Herschel Island until about 300 BP, similar to the one currently observed, which led to the creation of these surfaces. This thesis hypothesizes the occurrence of a ~250-year cycle of slump activity on the Herschel Island shoreline, based on the surveyed topography and cryostratigraphy, and anticipates a higher frequency of slump activity in the future. 
The variety of processes described in this thesis highlights the changing intensity and frequency of the physical processes acting upon the Arctic coast. It also challenges current perceptions of the threats to existing industry and community infrastructure in the Arctic. The increasing human presence on Arctic coasts, coupled with the expected development of shipping, will drive an increase in economic and industrial activity on these coasts which remains to be addressed scientifically.
The South Chilean subduction zone between 41° and 43.5°S : seismicity, structure and state of stress
(2008)
While the northern and central parts of the South American subduction zone have been studied intensively, the southern part has attracted less attention, which may be due to its difficult accessibility and lower seismic activity. However, the southern part exhibits strong seismic and tsunamigenic potential, the prominent example being the Mw = 9.5 Valdivia earthquake of May 22, 1960. In this study, data from an amphibious seismic array (project TIPTEQ) are presented. The network reached from the trench to the active magmatic arc, incorporating the island of Chiloé and the north-south trending Liquiñe-Ofqui fault zone (LOFZ). 364 local events were observed in an 11-month period from November 2004 until October 2005. The observed seismicity allows the current state of stress of the subducting plate and magmatic arc, as well as the local seismic velocity structure, to be constrained for the first time. The downgoing Benioff zone is readily identifiable as an eastward-dipping plane with an inclination of ~30°. Seismic activity occurred predominantly in a belt parallel to the coast of Chiloé Island at depths of 12-30 km, presumably related to the plate interface. The down-dip termination of abundant intermediate-depth seismicity at approximately 70 km depth seems to be related to the young age (and high temperature) of the oceanic plate. A high-quality subset of events was inverted for a 2-D velocity model. The vp model resolves the sedimentary basins and the downgoing slab. Increased velocities below the longitudinal valley and the eastern part of Chiloé Island suggest the existence of a mantle bulge. Apart from the events in the Benioff zone, shallow crustal events were observed mainly in several clusters along the magmatic arc. These crustal clusters of seismicity are related to the LOFZ, as well as to the volcanoes Chaitén, Michinmahuida and Corcovado. Seismic activity with magnitudes of up to Mw 3.8 reveals the recent activity of the fault zone. 
Focal mechanisms for the events along the LOFZ were calculated using a moment tensor inversion of body-wave amplitude spectra; they mostly yield strike-slip mechanisms, indicating a SW-NE orientation of sigma_1 for the LOFZ. A stress inversion of the focal mechanisms indicates a strike-slip regime along the arc and a thrust regime in the Benioff zone. The observed deformation - which is also revealed by teleseismic observations - supports the proposed northward movement of a forearc sliver acting as a detached continental micro-plate.
This research is about local actors' responses to problems of uneven development and unemployment. Policies to combat these problems are usually connected to socio-economic regeneration in England and to economic and employment promotion (Wirtschafts- und Beschäftigungsförderung) in Germany. The main result of this project is a description of the factors which support the emergence of local socio-economic initiatives aimed at job creation. Eight social- and formal-economy initiatives have been examined, and the ways in which their emergence has been influenced by institutional factors have been analysed. The role of local actors and forms of governance, as well as wider regional and national policy frameworks, has been taken into account. Socio-economic initiatives have been defined as non-routine local projects or schemes with the objective of direct job creation. Such initiatives often focus on specific local assets for the formal or the social economy. Socio-economic initiatives are grounded in ideas of local economic development and the creation of local jobs for local people. The adopted understanding of governance focuses on the processes of decision-taking. It is thus broadly construed to include the ways in which actors beyond traditional government manage urban development, with a focus on 'strategic' forms of decision-taking about both long-term objectives and short-term action linked to socio-economic regeneration. Four old industrial towns in northern England and eastern Germany were selected for case studies because of their particular socio-economic background. These towns, with between 10,000 and 70,000 inhabitants, are located outside the main agglomerations and serve central functions for their hinterland. The approach has been comparative, with a focus on examining common themes rather than gaining in-depth knowledge of a single case. 
Until now, most urban governance studies have analysed the impacts of particular forms of governance, such as regeneration partnerships. This project looks at particular initiatives and asks to what extent their emergence can be understood as a result of particular forms of governance, local institutional factors, or regional and national contexts.
GeneFisher-P
(2008)
Background: PCR primer design is an everyday but non-trivial task requiring state-of-the-art software. We describe the popular tool GeneFisher and explain its recent restructuring using workflow techniques. We apply a service-oriented approach to model and implement GeneFisher-P, a process-based version of the GeneFisher web application, as part of the Bio-jETI platform for service modeling and execution. We show how to introduce a flexible process layer to meet the growing demand for improved user-friendliness and flexibility.
Results: Within Bio-jETI, we model the process using the jABC framework, a mature model-driven, service-oriented process definition platform. We encapsulate remote legacy tools and integrate web services using jETI, an extension of the jABC for seamless integration of remote resources as basic services, ready to be used in the process. Some of the basic services used by GeneFisher are in fact already provided as individual web services at BiBiServ and can be directly accessed. Others are legacy programs, and are made available to Bio-jETI via the jETI technology.
The full power of service-based process orientation becomes apparent when additional bioinformatics tools, available as web services or via jETI, enable easy extensions or variations of the basic process. This concerns, for instance, variations of the data retrieval or alignment tools provided by the European Bioinformatics Institute (EBI).
Conclusions: The resulting service- and process-oriented GeneFisher-P demonstrates how basic services from heterogeneous sources can easily be orchestrated in the Bio-jETI platform, leading to a flexible family of specialized processes tailored to specific tasks.
Background: For omics experiments, detailed characterisation of the experimental material with respect to its genetic features, cultivation history and treatment history is a requirement for analyses with bioinformatics tools and for publication. Furthermore, the meta-analysis of several experiments in systems-biology approaches makes it necessary to store this information in a standardised manner, preferably in relational databases. In the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System, combined with web-based user interfaces for data entry and retrieval, to collect this information in an academic environment.
Results: The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows such as genetic modification (transformation) and vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate Access database of the plant cultivation unit. For all cultures, and thus for every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner-based system. Thus, for each plant object, all site-related parameters, e.g. automatically logged climate data, are available. These life-history data and the genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses a controlled vocabulary for organs and treatments. Unique names generated by the system and barcode labels facilitate the identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web-based search tools are the basis for the joint use of the material by all researchers of the institute.
Conclusion: The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thus facilitates collaborative work and allows efficient queries in data analysis for systems biology research.
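The line → plant object → culture → sample linkage described above can be sketched with an in-memory SQLite database. The table layout, column names and example records below are hypothetical simplifications for illustration, not the actual Golm schema:

```python
import sqlite3

# Hypothetical, heavily simplified sketch of the linkage:
# germplasm lines -> plant objects -> cultures (cultivation) -> samples.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE line    (line_id INTEGER PRIMARY KEY, name TEXT, parent_line INTEGER);
CREATE TABLE culture (culture_id INTEGER PRIMARY KEY, conditions TEXT);
CREATE TABLE plant   (plant_id INTEGER PRIMARY KEY,
                      line_id INTEGER REFERENCES line,
                      culture_id INTEGER REFERENCES culture,
                      barcode TEXT UNIQUE);
CREATE TABLE sample  (sample_id INTEGER PRIMARY KEY,
                      plant_id INTEGER REFERENCES plant,
                      organ TEXT, treatment TEXT);
""")
con.execute("INSERT INTO line VALUES (1, 'Col-0', NULL)")
# a transformation workflow links lines and thus creates a pedigree:
con.execute("INSERT INTO line VALUES (2, 'Col-0 35S::GFP', 1)")
con.execute("INSERT INTO culture VALUES (1, 'long day, 22 C')")
con.execute("INSERT INTO plant VALUES (1, 2, 1, 'PL-000001')")
con.execute("INSERT INTO sample VALUES (1, 1, 'leaf', 'control')")

# a 'Sample'-style query: genetic and cultivation history for one sample
row = con.execute("""
    SELECT s.organ, s.treatment, l.name, c.conditions
    FROM sample s JOIN plant p   ON p.plant_id   = s.plant_id
                  JOIN line l    ON l.line_id    = p.line_id
                  JOIN culture c ON c.culture_id = p.culture_id
""").fetchone()
print(row)   # ('leaf', 'control', 'Col-0 35S::GFP', 'long day, 22 C')
```

The point of the relational layout is that any analytical result, joined through its sample, carries the full genetic and cultivation provenance.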
Paleoenvironmental records provide ample information on the Late Quaternary climatic evolution. Due to the great diversity of continental mid-latitude environments, however, the synthetic picture of past mid-latitudinal climate changes is far from complete. Owing to its significant size and landlocked setting, the Black Sea is a perfect location to study the patterns and mechanisms of climate change along the continental interior of Central and Eastern Europe and Asia Minor. Presently, the southern drainage area of the Black Sea is characterized by a Mediterranean-type climate, while the northern drainage is under the influence of the Central and Northern European climate. During the Last Glacial, a decrease in global sea level disconnected the Black Sea from the Mediterranean Sea, transforming it into a giant closed lake. At that time, atmospheric precipitation and the related river run-off were the most important factors driving the sediment supply and water chemistry of the Black ‘Lake’. Studying the properties of Black Sea sediments therefore provides important information on the interactions and development of the Mediterranean and the Central and North European climate in the past. One significant outcome of my thesis is an improved chronostratigraphic framework for the glacial lacustrine unit of the Black Sea sediment cores, which allowed the environmental history of the Black Sea region to be refined and enabled a reliable correlation with data from other marine and terrestrial archives. Data gathered along a N-S transect and presented on a common time scale revealed coherent changes in the basin and its surroundings. During the glacial, the southward-shifted Polar Front reduced moisture transport to the northern drainage of the Black Sea and let the southern drainage become dominant in the freshwater and sediment supply into the basin. 
Changes in NW Anatolian precipitation reconstructed from the variability of the terrigenous input imply that during the glacial the regional rainfall variability was strongly influenced by Mediterranean sea surface temperatures and decreased in response to the cooling associated with the North Atlantic Heinrich events H1 and H2. In contrast to the regional precipitation changes, the hydrological properties of the Black Sea remained relatively stable under full glacial conditions. The first significant modification of the freshwater/sediment sources, reconstructed from changes in sediment composition, lithology and the 18O of ostracods, took place at around 16.4 cal ka BP, simultaneous with the early deglacial northward retreat of the oceanic and atmospheric polar fronts. Meltwater pulses, most probably derived from the disintegrating European ice sheets, changed the isotopic composition of the Black Sea and increased the supply from northern sediment sources. While these changes signaled a mitigation of the Northern European and Mediterranean climate, a decisive increase in local temperature is indicated only later, at the transition from the Oldest Dryas to the Bølling around 14.6 cal ka BP. At that time the warming of the Black Sea surface initiated massive phytoplankton blooms, which in turn induced the precipitation of inorganic carbonates. This biologically triggered process significantly changed the water chemistry and was recorded by simultaneous shifts in the elemental composition of ostracod shells and in the isotopic composition of the inorganically precipitated carbonates. Starting with the B/A warming and continuing through the YD cold interval and the Early Holocene warming, the Black Sea temperature signal corresponds to the precipitation and temperature changes recorded in the wider Mediterranean region. 
Early Holocene conditions, similar to those of the Bølling/Allerød, were punctuated by the marine inflow from the Mediterranean at ~9.3 cal ka BP, which terminated the lacustrine phase of the Black Sea and had a substantial impact on the chemical and physical properties of its water.
The uptake of nutrients and their subsequent chemical conversion by reactions which provide energy and building blocks for growth and propagation is a fundamental property of life. This property is termed metabolism. In the course of evolution, life has depended on chemical reactions which generate molecules that are common and indispensable to all life forms. These molecules are the so-called primary metabolites. In addition, life has evolved highly diverse biochemical reactions. These reactions allow organisms to produce unique molecules, the so-called secondary metabolites, which provide a competitive advantage for survival. The sum of all metabolites produced by the complex network of reactions within an organism has since 1998 been called the metabolome. The size of the metabolome can only be estimated and may range from fewer than 1,000 metabolites in unicellular organisms to approximately 200,000 in the whole plant kingdom. In current biology, three additional types of molecules are thought to be important to the understanding of the phenomena of life: (1) the proteins, in other words the proteome, including the enzymes which perform the metabolic reactions, (2) the ribonucleic acids (RNAs), which constitute the so-called transcriptome, and (3) all genes of the genome, which are encoded within the double strands of deoxyribonucleic acid (DNA). Investigations of each of these molecular levels of life require analytical technologies which should ideally enable the comprehensive analysis of all proteins, RNAs, et cetera. At the beginning of this thesis, such analytical technologies were available for DNA, RNA and proteins, but not for metabolites. Therefore, this thesis was dedicated to the implementation of gas chromatography-mass spectrometry (GC-MS) technology for the in-parallel analysis of as many metabolites as possible. 
Today GC-MS is one of the most widely applied technologies and is indispensable for the efficient profiling of primary metabolites. The main achievements and research topics of this work can be divided into technological advances and novel insights into the metabolic mechanisms which allow plants to cope with environmental stresses. Firstly, the GC-MS profiling technology has been highly automated and standardized. The major technological achievements were (1) substantial contributions to the development of automated and, within the limits of GC-MS, comprehensive chemical analysis, (2) contributions to the implementation of time-of-flight mass spectrometry for GC-MS based metabolite profiling, (3) the creation of a software platform for reproducible GC-MS data processing, named TagFinder, and (4) the establishment of an internationally coordinated library of mass spectra which allows the identification of metabolites in diverse and complex biological samples. In addition, the Golm Metabolome Database (GMD) was initiated to harbor this library and to cope with the increasing amount of generated profiling data. This database makes publicly available all chemical information essential for GC-MS profiling and has been extended into a global resource of GC-MS based metabolite profiles. Querying the concentration changes of hundreds of known and as yet unidentified metabolites has recently been enabled by the upload of standardized, TagFinder-processed data. Long-term technological goals have also been pursued, namely (1) to enhance the precision of absolute and relative quantification and (2) to enable the combined analysis of metabolite concentrations and metabolic fluxes. In contrast to concentrations, which provide information on metabolite amounts, flux analysis provides information on the speed of biochemical reactions or reaction sequences, for example on the rate of CO2 conversion into metabolites. 
This conversion is an essential function of plants and the basis of life on Earth. Secondly, GC-MS based metabolite profiling has been continuously applied to advance plant stress physiology. These efforts have yielded a detailed description of, and new functional insights into, the metabolic changes in response to high and low temperatures, as well as the common and divergent responses to salt stress among higher plants such as Arabidopsis thaliana, Lotus japonicus and rice (Oryza sativa). Time-course analyses after temperature stress and investigations into salt dosage responses indicated that metabolism changed in a gradual manner rather than by stepwise transitions between fixed states. In agreement with these observations, metabolite profiles of the model plant Lotus japonicus exposed to increased soil salinity were demonstrated to have high predictive power for both NaCl accumulation and plant biomass. Thus, it may be possible to use GC-MS based metabolite profiling as a breeding tool to support the selection of the individual plants that cope best with salt stress or other environmental challenges.
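Metabolite identification against a mass-spectral library, as enabled by the coordinated library mentioned above, typically works by comparing an observed spectrum with each reference spectrum. A common way to score such a comparison (a generic sketch, not TagFinder's actual algorithm; the spectra and compound names below are invented for illustration) is cosine similarity on the fragment-intensity vectors:

```python
import math

def cosine_match(observed, reference):
    """Cosine similarity between two spectra given as {m/z: intensity} dicts."""
    mzs = set(observed) | set(reference)
    dot = sum(observed.get(m, 0.0) * reference.get(m, 0.0) for m in mzs)
    na = math.sqrt(sum(v * v for v in observed.values()))
    nb = math.sqrt(sum(v * v for v in reference.values()))
    return dot / (na * nb)

def identify(observed, library):
    """Return the name of the best-scoring library spectrum."""
    return max(library, key=lambda name: cosine_match(observed, library[name]))

# Invented example spectra (m/z: relative intensity)
library = {
    "metabolite A": {116: 100, 147: 35, 73: 90, 190: 20},
    "metabolite B": {174: 100, 86: 40, 73: 95, 248: 15},
}
observed = {116: 98, 147: 30, 73: 92, 190: 18, 45: 5}   # noisy A-like spectrum

print(identify(observed, library))   # -> 'metabolite A'
```

A score close to 1 indicates near-identical fragmentation patterns; real workflows additionally constrain the match with the chromatographic retention index.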
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics of the formation process, and whether they behave similarly to the Sun's field or very differently, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides the possibility to indirectly observe surface topologies on distant stars by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method to retrieve the magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions have been reconstructed by ZDI. However, implementations of this method often rely on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of the newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star. 
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis (PCA) and artificial neural networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as the one at hand are potentially ill-posed and require regularization. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of reconstructing localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on PCA. In contrast to other multi-line techniques, which extract a kind of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validate the capability of our inversion code and also assess the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This yields, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
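The idea behind PCA-based multi-line denoising can be sketched as follows: many spectral lines share a common underlying profile shape, so projecting each noisy line onto the leading principal component(s) retains the shared signal while discarding most of the uncorrelated noise. This toy version (synthetic Gaussian line profiles, a single retained component, and a pure-Python power iteration; not the actual implementation of the thesis) illustrates the effect:

```python
import math, random

random.seed(1)
NPIX, NLINES = 40, 60
grid = [(i - NPIX / 2) / 6.0 for i in range(NPIX)]
shape = [math.exp(-x * x) for x in grid]              # common line shape

depths = [random.uniform(0.3, 1.0) for _ in range(NLINES)]
clean = [[d * s for s in shape] for d in depths]      # noiseless lines
noisy = [[v + random.gauss(0, 0.05) for v in row] for row in clean]

# mean-centre the line profiles
mean = [sum(row[j] for row in noisy) / NLINES for j in range(NPIX)]
centred = [[row[j] - mean[j] for j in range(NPIX)] for row in noisy]

# leading principal component via power iteration on the covariance matrix
cov = [[sum(r[i] * r[j] for r in centred) for j in range(NPIX)]
       for i in range(NPIX)]
v = [1.0] * NPIX
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(NPIX)) for i in range(NPIX)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# denoise: keep only the projection onto the first principal component
denoised = []
for row in centred:
    score = sum(r * vi for r, vi in zip(row, v))
    denoised.append([mean[j] + score * v[j] for j in range(NPIX)])

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

err_noisy = sum(rms(a, b) for a, b in zip(noisy, clean)) / NLINES
err_pca = sum(rms(a, b) for a, b in zip(denoised, clean)) / NLINES
print(round(err_noisy, 4), round(err_pca, 4))  # PCA reconstruction is cleaner
```

Unlike a mean-profile technique, each line keeps its own score along the principal component, so individual line profiles survive the denoising step.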
The orbits of charged particles under the influence of a magnetic field are mathematically described by magnetic geodesics. These appear as solutions of a system of (nonlinear) ordinary differential equations of second order, of which we are interested only in the periodic solutions. To this end, we study the corresponding system of (nonlinear) parabolic equations for closed magnetic geodesics and, as a main result, prove the existence of long-time solutions. As a generalization, one can consider a system of elliptic nonlinear partial differential equations whose solutions describe the orbits of closed p-branes under the effect of a "generalized physical force". For the corresponding evolution equation, a system of parabolic nonlinear partial differential equations associated with the elliptic PDE, we establish the existence of short-time solutions.
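In one common formulation (the notation here is a standard choice, since the abstract does not fix one), a closed magnetic geodesic γ on a Riemannian manifold (M, g) with Lorentz force Y, defined from a closed 2-form Ω by g(Y(X), Z) = Ω(X, Z), solves a second-order ODE system, and the parabolic approach deforms a closed curve u(s, ·) toward such a solution:

```latex
% closed magnetic geodesic:
\nabla_{\dot\gamma}\dot\gamma = Y(\gamma)\,\dot\gamma,
\qquad \gamma(t + 2\pi) = \gamma(t)
% associated parabolic flow for a family u(s,t), periodic in t:
\partial_s u = \nabla_{\partial_t u}\,\partial_t u - Y(u)\,\partial_t u
```

Stationary points of the flow (∂_s u = 0) are exactly the closed magnetic geodesics, which is why long-time existence of the parabolic system is the key step.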
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and to focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. As running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates has yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics. 
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while these alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade, simulation codes and computer hardware have advanced to a stage where it has become possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and of the development of the debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their host may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
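To connect a simulated particle distribution to “objects” such as halos, an object finder groups particles into structures. A frequently used baseline is the friends-of-friends algorithm; the sketch below (a generic illustration with invented particle positions, not the halo finder developed in this thesis) links any two particles closer than a linking length b and returns the resulting groups:

```python
import itertools, math, random

def friends_of_friends(pts, b):
    """Group particles: i and j end up in the same group if dist(i, j) < b,
    directly or through a chain of linked particles (simple union-find)."""
    parent = list(range(len(pts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j in itertools.combinations(range(len(pts)), 2):
        if math.dist(pts[i], pts[j]) < b:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

random.seed(0)
# two invented 'halos': particle clouds around well-separated centres
halo_a = [(random.gauss(0, 0.3), random.gauss(0, 0.3), random.gauss(0, 0.3))
          for _ in range(50)]
halo_b = [(10 + random.gauss(0, 0.3), 10 + random.gauss(0, 0.3),
           10 + random.gauss(0, 0.3)) for _ in range(50)]
groups = friends_of_friends(halo_a + halo_b, b=1.0)
print([len(g) for g in sorted(groups, key=len)])
```

Production halo finders replace the O(N²) pair loop with spatial trees and add unbinding steps, but the grouping idea is the same.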
We study resonances for the generator of a diffusion with small noise in R^d: L = -εΔ + ∇F · ∇, when the potential F grows slowly at infinity (typically as the square root of the norm). The case when F grows fast is well known, and under suitable conditions one can show that there exists a family of exponentially small eigenvalues related to the wells of F. We show that, for an F with slow growth, the spectrum is R_+, but we can find a family of resonances whose real parts behave like the eigenvalues of the "quick growth" case and whose imaginary parts are small.
Thermal radiation processes
(2008)
We discuss the different physical processes that are important to understand the thermal X-ray emission and absorption spectra of the diffuse gas in clusters of galaxies and the warm-hot intergalactic medium. The ionisation balance, line and continuum emission and absorption properties are reviewed and several practical examples are given that illustrate the most important diagnostic features in the X-ray spectra.
Chitooligosaccharides are composed of linear β-(1→4)-linked 2-acetamido-2-deoxy-β-D-glucopyranose (GlcNAc) and/or 2-amino-2-deoxy-β-D-glucopyranose (GlcN) units. They are of interest due to their remarkable biological properties, including antibacterial, antitumor, antifungal and elicitor activities. They can be obtained from the aminoglucan chitosan by chemical or enzymatic degradation, which, however, affords rather heterogeneous mixtures. Chemical synthesis, on the other hand, provides pure compounds with defined sequences of GlcNAc and GlcN monomers. The synthesis of homo- and hetero-chitobioses and hetero-chitotetraoses is described in this thesis. Dimethylmaleoyl and phthaloyl groups were used for the protection of the amines. The donor was activated as the trichloroacetimidate in order to form the β-linkages. Glycosylation in the presence of trimethylsilyl trifluoromethanesulfonate, followed by N- and O-deprotection, furnished chitobioses and chitotetraoses in good yields.
For the first time, stabilizer-free vinylidene fluoride (VDF) polymerizations were carried out in homogeneous phase with supercritical CO₂. Polymerizations were carried out at 140°C and 1500 bar and were initiated with di-tert-butyl peroxide (DTBP). In-line FT-NIR (Fourier-transform near-infrared) spectroscopy showed that complete monomer conversion may be obtained. Molecular weights were determined via size-exclusion chromatography (SEC) and polymer end-group analysis by ¹H-NMR spectroscopy. The number-average molecular weights were below 10⁴ g·mol⁻¹ and polydispersities ranged from 3.1 to 5.7, depending on the DTBP and VDF concentrations. To allow for isothermal reactions, high CO₂ contents ranging from 61 to 83 wt.% were used. The high-temperature, high-pressure conditions were required for homogeneous-phase polymerization. These conditions did not alter the amount of defects in the VDF chaining. Scanning electron microscopy (SEM) indicated that regular stack-type particles were obtained upon expansion of the homogeneous polymerization mixture. To reduce the required amount of initiator, further VDF polymerizations using chain transfer agents (CTAs) to control molecular weights were carried out in homogeneous phase with supercritical carbon dioxide (scCO₂) at 120°C and 1500 bar. Using perfluorinated hexyl iodide as CTA, polymers of low polydispersity, ranging from 1.5 down to 1.2 at the highest iodide concentration of 0.25 mol·L⁻¹, were obtained. Electrospray ionization mass spectrometry (ESI-MS) indicates the absence of initiator-derived end groups, supporting the livingness of the system. The "livingness" is based on the labile C-I bond. However, due to the weakness of the C-I bond, perfluorinated hexyl iodide also contributes to initiation. To allow for kinetic analyses of VDF polymerizations, the CTA should not contribute to initiation. Therefore, additional CTAs were applied: BrCCl₃, C₆F₁₃Br and C₆F₁₃H. It was found that C₆F₁₃H does not contribute to initiation. 
At 120°C and 1500 bar, k_p/k_t^0.5 ≈ 0.64 (L·mol⁻¹·s⁻¹)^0.5 was derived. The chain transfer constant (C_T) at 120°C was determined to be 8·10⁻¹, 9·10⁻² and 2·10⁻⁴ for C₆F₁₃I, C₆F₁₃Br and C₆F₁₃H, respectively. These C_T values correlate with the bond energy of the C-X bond. Moreover, the labile C-I bond allows for functionalization of the polymer to triazole end groups via click reactions. After substitution of the iodide end group by an azide group, 1,3-dipolar cycloadditions with alkynes yield polymers with 1,2,3-triazole end groups. Using symmetrical alkynes, the reactions may be carried out in the absence of any catalyst. This end-functionalized poly(vinylidene fluoride) (PVDF) has a higher thermal stability than normal PVDF. PVDF samples from homogeneous-phase polymerizations in supercritical CO₂ and subsequent expansion to ambient conditions were analyzed with respect to polymer end groups, crystallinity, type of polymorph and morphology. Upon expansion, the polymer was obtained as a white powder. Scanning electron microscopy (SEM) showed that DTBP-derived polymer end groups led to stack-type particles, whereas sponge- or rose-type particles were obtained in the case of CTA fragments as end groups. Fourier-transform infrared spectroscopy and wide-angle X-ray diffraction indicated that the type of polymorph, α or β crystal phase, was significantly affected by the type of end group. The content of β-phase material, which is responsible for the piezoelectricity of PVDF, is highest for polymer with DTBP-derived end groups. In addition, the crystallinity of the material, as determined via differential scanning calorimetry, is affected by the end groups and polymer molecular weights. For example, the crystallinity ranges from around 26% for DTBP-derived end groups to a maximum of 62% for end groups originating from perfluorinated hexyl iodide, for polymers with Mn ≈ 2200 g·mol⁻¹. 
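Chain transfer constants of the kind quoted above are conventionally obtained from the Mayo equation, which relates the number-average degree of polymerization to the CTA concentration (this is the standard textbook relation; the abstract does not state which variant was used):

```latex
\frac{1}{\overline{DP}_n}
  \;=\; \frac{1}{\overline{DP}_{n,0}}
  \;+\; C_T\,\frac{[\mathrm{CTA}]}{[\mathrm{M}]},
\qquad
C_T = \frac{k_{tr}}{k_p}
```

Here DP_{n,0} is the degree of polymerization without CTA, so C_T follows as the slope of a plot of 1/DP_n against [CTA]/[M]; the three values quoted above then reflect how easily each C-X bond is abstracted.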
Expansion of the homogeneous polymerization mixture results in particle formation by a non-optimized RESS (Rapid Expansion from Supercritical Solution) process. Thus, it was tested how polymer end groups affect the particle size distribution obtained from the RESS process under controlled conditions (T = 50 °C and p = 200 bar). In all RESS experiments, small primary PVDF particles with diameters below 100 nm were produced without the use of liquid solvents, surfactants, or other additives. A strong correlation of particle size and particle size distribution with polymer end groups and the molecular weight of the original material was observed. The smallest particles were found for RESS of PVDF with Mn ≈ 4000 g·mol⁻¹ and PFHI (C₆F₁₃I)-derived end groups.
Self-Structuring of functionalized micro- and mesoporous organosilicas using boron-silane-precursors
(2008)
The structuring of porous silica materials at the nanometer scale and their surface functionalization are important issues in current materials research. Many innovations in chromatography, catalysis and electronic devices benefit from this knowledge. The work at hand is dedicated to the targeted design of functional organosilica materials. In this context a new precursor concept based on boron-silanes is presented. These precursors combine the properties of a structure-directing group and a silica source through a covalent borane linkage. The precursor is easily formed by a sequential two-step hydroboration, first on bis(triethoxysilyl)ethene and second on an unsaturated structure-directing moiety such as an alkene or a polymer. The precursors prepared in this way self-organize when their inorganic moiety is hydrolyzed, as their organic side chains aggregate into hydrophobic domains. In this way, no additional surfactant template is necessary. Chemical cleavage of these moieties (e.g. by ammonolysis or oxidative saponification) yields an organosilica in which all functionalities are located exclusively at the pore wall and are therefore accessible. The accessibility of the functionalities is a vital point for applications and is not necessarily guaranteed by common silica functionalization approaches. Further advantages of the boron-silane concept are the possibility to introduce a variety of surface functionalities by heterolytic cleavage of the boron linker and the control of the pore morphology. For this purpose the covalent linkage of different alkyl groups and polymers was studied. Another aspect is the access to chiral boron-silane precursors, yielding functionalized mesoporous organosilicas with chiral functionalities located exclusively at the pore walls after condensation and removal of the structure-directing moiety.
These materials possess great potential for applications documented by preliminary investigations on chiral resolution of a racemic mixture by HPLC and asymmetric catalysis. In the course of this work valuable insights into the targeted structuring and surface functionalization of organosilicas were gained. A promising outlook for further investigations is the extension of this concept by altering the structure directing moieties of the precursor. That way the morphology of the final organosilica might be controlled by for example mesogens. Furthermore, the use of the boron linker enables the introduction of multiple functionalities into organosilicas, making the obtained material unique in its performance.
Duplicate detection is the task of identifying multiple representations of the same real-world object in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality and effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computation can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
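The core idea behind DDG is that confirming one duplicate pair can change the similarity of related pairs, so detection must iterate to a fixed point. A minimal sketch of that iteration (a generic illustration, not the paper's algorithm; the names `candidates` and `similarity` are hypothetical):

```python
def detect_duplicates(candidates, similarity, threshold=0.8):
    """Fixed-point duplicate detection in graph data (DDG), sketched
    generically. `similarity(a, b, found)` may consult the set of
    already-confirmed duplicate pairs, so confirming one pair can raise
    the score of related pairs; we iterate until no new pair is found."""
    found = set()
    changed = True
    while changed:
        changed = False
        for a, b in candidates:
            pair = frozenset((a, b))
            if pair not in found and similarity(a, b, found) >= threshold:
                found.add(pair)
                changed = True
    return found
```

For example, two paper records whose similarity depends on their authors only become duplicates in the second pass, after the author pair has been confirmed in the first.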
Heterophase polymerization is a technique widely used for the synthesis of high performance polymeric materials with applications including paints, inks, adhesives, synthetic rubber, biomedical applications and many others. Due to the heterogeneous nature of the process, many different relevant length and time scales can be identified. Each of these scales has a direct influence on the kinetics of polymerization and on the physicochemical and performance properties of the final product. Therefore, from the point of view of product and process design and optimization, the understanding of each of these relevant scales and their integration into one single model is a very promising route for reducing the time-to-market in the development of new products, for increasing the productivity and profitability of existing processes, and for designing products with improved performance or cost/performance ratio. The process considered is the synthesis of structured or composite polymer particles by multi-stage seeded emulsion polymerization. This type of process is used for the preparation of high performance materials where a synergistic behavior of two or more different types of polymers is obtained. Some examples include the synthesis of core-shell or multilayered particles for improved impact strength materials and for high resistance coatings and adhesives. The kinetics of the most relevant events taking place in an emulsion polymerization process has been investigated using suitable numerical simulation techniques at their corresponding time and length scales. 
These methods, which include Molecular Dynamics (MD) simulation, Brownian Dynamics (BD) simulation and kinetic Monte Carlo (kMC) simulation, have been found to be very powerful and highly useful for gaining a deeper insight and achieving a better understanding and a more accurate description of all phenomena involved in emulsion polymerization processes, and can be potentially extended to investigate any type of heterogeneous process. The novel approach of using these kinetic-based numerical simulation methods can be regarded as a complement to the traditional thermodynamic-based macroscopic description of emulsion polymerization. The particular events investigated include molecular diffusion, diffusion-controlled polymerization reactions, particle formation, absorption/desorption of radicals and monomer, and the colloidal aggregation of polymer particles. Using BD simulation it was possible to precisely determine the kinetics of absorption/desorption of molecular species by polymer particles, and to simulate the colloidal aggregation of polymer particles. For diluted systems, a very good agreement between BD simulation and the classical theory developed by Smoluchowski was obtained. However, for concentrated systems, significant deviations from the ideal behavior predicted by Smoluchowski were evidenced. BD simulation was found to be a very valuable tool for the investigation of emulsion polymerization processes especially when the spatial and geometrical complexity of the system cannot be neglected, as is the case of concentrated dispersions, non-spherical particles, structured polymer particles, particles with non-uniform monomer concentration, and so on. In addition, BD simulation was used to describe non-equilibrium monomer swelling kinetics, which is not possible using the traditional thermodynamic approach because it is only valid for systems at equilibrium. 
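The Smoluchowski baseline against which the dilute-system BD results agreed is the classical diffusion-limited rate of rapid coagulation of equal-sized spheres; this is textbook background rather than a result of the thesis:

```latex
k_{\mathrm{S}} \;=\; \frac{8\,k_{\mathrm{B}} T}{3\eta},
\qquad
\frac{\mathrm{d}n}{\mathrm{d}t} \;=\; -\tfrac{1}{2}\,k_{\mathrm{S}}\,n^{2}
```

where η is the viscosity of the medium and n the particle number concentration; the factor 1/2 avoids double counting of identical collision partners. The deviations reported for concentrated systems are deviations from exactly this second-order decay.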
The description of diffusion-controlled polymerization reactions was successfully achieved using a new stochastic algorithm for the kMC simulation of imperfectly mixed systems (SSA-IM). In contrast to the traditional stochastic simulation algorithm (SSA) and the deterministic rate equations, instead of assuming perfect mixing in the whole reactor, the new SSA-IM determines the volume perfectly mixed between two consecutive reactions as a function of the diffusion coefficient of the reacting species. Using this approach it was possible to describe, with a single set of kinetic parameters, typical mass transfer limitation effects during free radical batch polymerization such as the cage effect, the gel effect and the glass effect. Using multiscale integration it was possible to investigate the formation of secondary particles during the seeded emulsion polymerization of vinyl acetate over a polystyrene seed. Three different cases of radical generation were considered: generation of radicals by thermal decomposition of water-soluble initiating compounds, generation of radicals by a redox reaction at the surface of the particles, and generation of radicals by thermal decomposition of surface-active initiators ("inisurfs") attached to the surface of the particles. The simulation results demonstrated the satisfactory reduction in secondary particle formation achieved when the locus of radical generation is controlled close to the particle surface.
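The perfectly mixed baseline that SSA-IM modifies is Gillespie's stochastic simulation algorithm. A minimal sketch of that baseline (generic, not the SSA-IM implementation; the callback names are illustrative):

```python
import math
import random

def gillespie(propensities, update, state, t_end, seed=42):
    """Standard stochastic simulation algorithm (Gillespie SSA), the
    perfectly mixed baseline that SSA-IM extends. `propensities(state)`
    returns the rate of each reaction channel; `update(state, j)`
    applies reaction j to the state in place."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a = propensities(state)
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire any more
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        r = rng.random() * a0                    # pick channel j with prob a_j/a0
        j, acc = 0, a[0]
        while acc < r:
            j += 1
            acc += a[j]
        update(state, j)
    return state
```

For a first-order decay A → ∅ with unit rate, the propensity list is simply `[1.0 * state["A"]]`; SSA-IM would additionally restrict the reaction volume between steps according to the diffusion coefficients of the species.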
Using ESTs for phylogenomics
(2008)
Background
While full genome sequences are still only available for a handful of taxa, large collections of partial gene sequences are available for many more. The alignment of partial gene sequences results in a multiple sequence alignment containing large gaps that are arranged in a staggered pattern. The consequences of this pattern of missing data on the accuracy of phylogenetic analysis are not well understood. We conducted a simulation study to determine the accuracy of phylogenetic trees obtained from gappy alignments using three commonly used phylogenetic reconstruction methods (Neighbor Joining, Maximum Parsimony, and Maximum Likelihood) and studied ways to improve the accuracy of trees obtained from such datasets.
Results
We found that the pattern of gappiness in multiple sequence alignments derived from partial gene sequences substantially compromised phylogenetic accuracy even in the absence of alignment error. The decline in accuracy was beyond what would be expected based on the amount of missing data. The decline was particularly dramatic for Neighbor Joining and Maximum Parsimony, where the majority of gappy alignments contained 25% to 40% incorrect quartets. To improve the accuracy of the trees obtained from a gappy multiple sequence alignment, we examined two approaches. In the first approach, alignment masking, potentially problematic columns and input sequences are excluded from the dataset. Even in the absence of alignment error, masking improved phylogenetic accuracy up to 100-fold. However, masking retained, on average, only 83% of the input sequences. In the second approach, alignment subdivision, the missing data is statistically modelled in order to retain as many sequences as possible in the phylogenetic analysis. Subdivision resulted in more modest improvements to accuracy, but succeeded in including almost all of the input sequences.
Conclusion
These results demonstrate that partial gene sequences and gappy multiple sequence alignments can pose a major problem for phylogenetic analysis. The concern will be greatest for high-throughput phylogenomic analyses, in which Neighbor Joining is often the preferred method due to its computational efficiency. Both approaches can be used to increase the accuracy of phylogenetic inference from a gappy alignment. The choice between the two approaches will depend upon how robust the application is to the loss of sequences from the input set, with alignment masking generally giving a much greater improvement in accuracy but at the cost of discarding a larger number of the input sequences.
The space-image
(2008)
In recent computer game research a paradigm shift is observable: games today are first and foremost conceived as a new medium characterized by their status as an interactive image. The shift in attention towards this aspect becomes apparent in a new approach that is, above all, aware of the spatiality of games and their spatial structures. It rejects traditional approaches on the grounds that the medial specificity of games can no longer be reduced to textual or ludic properties, but has to be located in medially constituted spatiality. For this purpose, seminal studies on the spatiality of computer games are reviewed and their advantages and disadvantages are discussed. Building on this, and against the background of the philosophical method of phenomenology, we propose three steps for describing computer games as space-images: with this method it is possible to describe games with respect to the possible appearance of spatiality in a pictorial medium.
Supernovae are known to be the dominant energy source for driving turbulence in the interstellar medium. Yet their effect on magnetic field amplification in spiral galaxies is still poorly understood. Analytical models based on the uncorrelated-ensemble approach predicted that any created field will be expelled from the disk before significant amplification can occur. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for vertical stratification and galactic differential rotation, we find an exponential amplification of the mean field on timescales of 100 Myr. The self-consistent numerical verification of such a "fast dynamo" is highly beneficial in explaining the observed strong magnetic fields in young galaxies. We furthermore highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to a sustained amplification of the mean magnetic field. This finding strongly supports the classical picture of a dynamo based on cyclonic turbulence.
Human transformation of the Earth's land surface has far-reaching and important consequences for the functioning of hydrological and hydrochemical processes in watersheds. Nowadays, land-use change from forest to pasture is a major issue, in particular in the tropics. A sustainable management of deforested areas requires an in-depth understanding of the water and nutrient cycle. On this basis we compared the hydrological pathways by which rainfall reaches streams and the nutrient budgets of a tropical rainforest and a pasture. In addition we studied how hydrochemical differences are linked to differences in the relative importance of flowpaths. This study was conducted in the southwestern part of the Brazilian Amazon basin. An intensive hydrological and hydrochemical sampling and monitoring network was set up. The results indicate that the hydrology was modified in many ways by the land-use change. The most important alteration was the increased importance of the fast flowpath overland flow. Solute exports were in particular linked to the increased volume of overland flow that resulted from the land-use change. An additional reason for the increased nutrient exports from the pasture is the high concentration of these nutrients in pasture overland flow, probably due to cattle excrement. Tight nutrient cycles with minimal nutrient losses could not be maintained after the land-use change. This study provides the first attempt to quantify the respective nutrient losses.
This thesis provides a novel view on the early stage of crystallization, utilizing calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance, because it is a major contributor to water hardness, the most abundant biomineral, and forms huge amounts of geological sediments, thus binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. The time-dependent measurement of the Ca²⁺ potential and concurrent pH = constant titration facilitate the calculation of the amount of calcium and carbonate ions bound in pre-nucleation stage clusters, which had never been detected experimentally before, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation stage clusters, and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of altogether 70 calcium and carbonate ions. Both experiments show that pre-nucleation stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster formation equilibrium is physico-chemically characterized by means of a multiple-binding equilibrium of calcium ions to a 'lattice' of carbonate ions. The evaluation yields the standard Gibbs energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximal value of approximately 17.2 kJ·mol⁻¹ at pH = 9.75 and relates to a minimal binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline.
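The quoted Gibbs standard energy translates into an equilibrium constant for ion-pair formation via the standard thermodynamic relation; the numerical estimate below is ours, assumes T = 298 K, and is only meant to indicate the order of magnitude:

```latex
\Delta G^{\circ} = -RT \ln K
\;\;\Rightarrow\;\;
K = \exp\!\left(\frac{\lvert\Delta G^{\circ}\rvert}{RT}\right)
\approx \exp\!\left(\frac{17\,200\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}K^{-1}} \times 298\ \mathrm{K}}\right)
\approx 10^{3}
```

so even the weakest binding reported (at pH = 9.75) corresponds to a strongly bound ion pair within the cluster, consistent with the described stabilization of the metastable system.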
At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength in clusters, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are obtained additionally. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low and at high binding strength in clusters (ACC I: 3.1·10⁻⁸ M², ACC II: 3.8·10⁻⁸ M²), respectively, indicating the precipitation of at least two different ACC species, with the clusters providing the precursor species of ACC. It is plausible that ACC I relates to calcitic ACC, i.e. ACC exhibiting short-range order similar to the long-range order of calcite, and that ACC II relates to vateritic ACC, each subsequently transforming into the corresponding crystalline polymorph as discussed in the literature. Detailed analysis of nucleated particles forming at minimal binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite with traces of calcite forms. The crystalline particles of early stages are composed of nano-crystallites of approximately 5 to 10 nm in size, which are aligned in high mutual order as in mesocrystals. The analysis of precipitation at pH = 9.75 in the presence of additives, namely polyacrylic acid (pAA) as a model compound for scale inhibitors and peptides exhibiting calcium carbonate binding affinity as model compounds for crystal modifiers, shows that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution for the benefit of crystals that form from ACC I, and exclusively calcite is finally obtained.
Concurrently, the peptide additives analogously inhibit the formation of calcite, and in the case of one of the peptide additives exclusively vaterite is finally obtained. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate. The metastable system is stabilized remarkably due to cluster formation, and the clusters forming by means of equilibrium thermodynamics, rather than the ions, are the nucleation-relevant species. Most likely, cluster formation is a common phenomenon occurring during the precipitation of sparingly soluble compounds, as qualitatively shown for calcium oxalate and calcium phosphate. This finding is important for the fundamental understanding of crystallization and of nucleation inhibition and modification by additives, with impact on materials of great scientific and industrial importance, as well as for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and also of scale inhibition on the basis of the newly reported reaction channel, need to be considered.
Flood polders are part of the flood risk management strategy for many lowland rivers. They are used for the controlled storage of flood water so as to lower peak discharges of large floods. Consequently, the flood hazard in adjacent and downstream river reaches is decreased in the case of flood polder utilisation. Flood polders are usually dry storage reservoirs that are typically characterised by agricultural activities or other land use of low economic and ecological vulnerability. The objective of this thesis is to analyse hydraulic, environmental and economic impacts of the utilisation of flood polders in order to draw conclusions for their management. For this purpose, hydrodynamic and water quality modelling as well as an economic vulnerability assessment are employed in two study areas on the Middle Elbe River in Germany. One study area is an existing flood polder system on the tributary Havel, which was put into operation during the Elbe flood in summer 2002. The second study area is a planned flood polder, which is currently in the early planning stages. Furthermore, numerical models of different spatial dimensionality, ranging from zero- to two-dimensional, are applied in order to evaluate their suitability for hydrodynamic and water quality simulations of flood polders in regard to performance and modelling effort. The thesis concludes with overall recommendations on the management of flood polders, including operational schemes and land use. In view of future changes in flood frequency and further increasing values of private and public assets in flood-prone areas, flood polders may be effective and flexible technical flood protection measures that contribute to a successful flood risk management for large lowland rivers.
The influence of information structure on tonal scaling in German is examined experimentally. Eighteen speakers uttered a total of 2277 sentences of the same syntactic structure, but with a varying number of constituents, word order and focus-given structure. The quantified results for German support findings for other Germanic languages that the scaling of high tones, and thus the entire melodic pattern, is influenced by information structure. Narrow focus raised the high tones of pitch accents, while givenness lowered them in prenuclear position and canceled them out postnuclearly. The effects of focus and givenness are calculated against all-new sentences as a baseline, which we expected to be characterized by downstep, a significantly lower scaling of high tones as compared to declination. The results further show that information structure alone cannot account for all variations. We therefore assume that dissimilatory tonal effects play a crucial role in the tonal scaling of German. The effects consist of final f0 drop, a steep fall from a raised high tone to the bottom line of the speaker, H-raising before a low tone, and H-lowering before a raised high tone. No correlation between word order and tone scaling could be established. © 2008 Elsevier Ltd. All rights reserved.
Over the last decades Britain's ethnic minorities have successfully established themselves in a multicultural society. In particular, Indian Hindu communities have generally improved their social and economic situation. In this context, the third generation of British Indians is now growing up. In contrast to the previous generation of the Indian diaspora, these children grow up in an established ethnic community, which has learned to retain its religion, traditions and culture in a foreign environment. At the same time, these children are part of the multicultural British society. Based on the academic discussion about the second generation of immigrant ethnic communities, whose youth often suffered from cultural differences, racism and discrimination and therefore rejected aspects of their culture of origin, this paper assumes that the loss of the culture of origin increases further in the third generation. This thesis follows the main theories about the connection between generation and integration. It is believed that the preference for western culture influences the personal, ethnic and cultural identity of young people. This leads to the rejection of traditional bonds. Before this thesis is introduced, various theoretical concepts are discussed which are indispensable for an understanding of the diasporic situation in which British Indian youngsters grow up. As part of the worldwide Asian Indian diaspora, Indian families in Britain maintain manifold links to Indian communities in various countries. The link to India plays a particularly decisive role; the subcontinent is referred to as an abstract homeland, especially by the first generation. While the grandparents strongly adhere to their Indian culture and Hindu religion, the second generation has already generated cultural change. In this process various cultural values of the Indian ethnic community have been questioned and modified.
Further, the second generation pushed integration into British society by giving up its dependence on the ethnic network. This paper is based on a hybrid and fluid definition of culture. This definition also applies to the underlying understanding of identity and ethnicity. Due to migration, cultural contact and the multilocality of the diaspora, diasporic and post-diasporic identities and cultures are characterized by hybridity, heterogeneity, fragmentation and flexibility. Particularly in the younger generation, though dependent on a number of social and structural factors, cultural change and mixture happen; in this process new ethnicities and identities evolve. In the second and third part of this paper the thesis of the loss of the culture of origin is refuted on the basis of findings from empirical research. British Indian youngsters in London were questioned for the study. Half of the youngsters are affiliated with a sampradaya, a Hindu sect. This enables the author to compare youngsters who do not belong to a particular religious group with those who are included in a religious and/or ethnic community through a sampradaya. The analysis of the findings, which are based on qualitative and quantitative social research, shows that the young people have great interest in their culture of origin and that they aim to maintain this culture in the diaspora. They identify as Indian and are proud of their cultural differences. In this, they differ from the second generation. In contrast to the generation of their grandparents, the Indian identity of the third generation is not based on nostalgic memories. They confirm and emphasize their post-diasporic difference in a western multicultural society. The findings from the survey thereby go beyond Hansen's thesis of the rediscovery of the culture of origin in the third generation.
The comparison of both groups shows that, in the context of the differentiation of postmodern and postcolonial communities, ethnic groups also become increasingly differentiated. Therefore, Indian heritage and culture do not play the same role for every young British Indian.
Music is a powerful and reliable means to stimulate the percept of both intense pleasantness and unpleasantness in the perceiver. However, everyone’s social experiences with music suggest that the same music piece may elicit a very different valence percept in different individuals. A comparison of music from different historical periods suggests that enculturation modulates the valence percept of intervals and harmonies, and thus possibly also of relatively basic feature extraction processes. Strikingly, it is still largely unknown how much the valence percept is dependent on physical properties of the stimulus and thus mediated by a universal perceptual mechanism, and how much it is dependent on cultural imprinting. The current thesis investigates the neurophysiology of the valence percept, and the modulating influence of culture on several distinguishable sub-processes of music processing, so-called functional modules of music processing, engaged in the mediation of the valence percept.
Recently, several faint ringlets in the Saturnian ring system were found to maintain a peculiar orientation relative to the Sun. The Encke gap ringlets as well as the ringlet in the outer rift of the Cassini division were found to have distinct spatial displacements of several tens of kilometers away from Saturn towards the Sun, referred to as heliotropicity (Hedman et al., 2007). This is quite exceptional, since dynamically one would expect eccentric features in the Saturnian rings to precess around Saturn over periods of months. In our study we address this exceptional behavior by investigating the dynamics of circumplanetary dust particles with sizes in the range of 1-100 µm. These small particles are perturbed by non-gravitational forces, in particular solar radiation pressure, the Lorentz force, and planetary oblateness, on time scales of the order of days. The combined influence of these forces causes periodic evolution of the grains’ orbital eccentricities as well as precession of their pericenters, which can be shown by secular perturbation theory. We show that this interaction results in a stationary eccentric ringlet, oriented with its apocenter towards the Sun, which is consistent with observational findings. By applying this heliotropic dynamics to the central Encke gap ringlet, we can give a limit for the expected smallest grain size in the ringlet of about 8.7 microns, and constrain the minimal lifetime to lie on the order of months. Furthermore, our model matches fairly well the observed ringlet eccentricity in the Encke gap, which supports recent estimates of the size distribution of the ringlet material (Hedman et al., 2007). The ringlet width that results from our modeling based on heliotropic dynamics, however, overestimates the observed confined ringlet width by a factor of 3 to 10, depending on the width measure being used.
This is indicative of mechanisms not included in the heliotropic model which potentially confine the ringlet to its observed width, including shepherding and scattering by embedded moonlets in the ringlet region. Based on these results, early investigations (Cuzzi et al., 1984; Spahn and Wiebicke, 1989; Spahn and Sponholz, 1989), and recent work published on the F ring (Murray et al., 2008), with which the Encke gap ringlets are found to share similar morphological structures, we model the maintenance of the central ringlet by embedded moonlets. These moonlets, believed to be hundreds of meters across, release material into space as they are eroded by micrometeoroid bombardment (Divine, 1993). We further argue that Pan, one of Saturn's moons, which shares its orbit with the central ringlet of the Encke gap, is a rather weak source of ringlet material but efficiently confines the ringlet sources (moonlets) to move on horseshoe-like orbits. Moreover, we suppose that most of the narrow heliotropic ringlets are fed by a moonlet population, which is held together on horseshoe-like orbits by its largest member. Modeling the equilibrium between particle sources and sinks with a primitive balance equation based on photometric observations (Porco et al., 2005), we find a minimal effective source mass of the order of 3·10⁻² M_Pan, which is needed to keep the central ringlet from disappearing.
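The grain-size dependence behind the 8.7 µm limit can be made explicit with the standard ratio of solar radiation pressure to solar gravity for a spherical grain (Burns, Lamy and Soter, 1979); this formula is general background, not taken from the thesis:

```latex
\beta \;=\; \frac{F_{\mathrm{rad}}}{F_{\mathrm{grav}}}
\;=\; \frac{3\,L_{\odot}\,Q_{\mathrm{pr}}}{16\pi\,G M_{\odot}\,c\,\rho\,s}
```

where s is the grain radius, ρ its bulk density and Q_pr the radiation-pressure efficiency. Since β ∝ 1/(ρs), radiation pressure perturbs the smallest grains most strongly, which is why matching the observed ringlet eccentricity constrains a minimum grain size.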
Nanostructured inorganic materials are routinely synthesized by the use of templates. Depending on the synthesis conditions of the product material, either "soft" or "hard" templates can be applied. For sol-gel processes, usually "soft" templating techniques are employed, while "hard" templates are used for high temperature synthesis pathways. In classical templating approaches, the template has the unique role of structure directing agent, in the sense that it does not participate in the chemical formation of the resulting material. This work investigates a new templating pathway to nanostructured materials, where the template is also a reagent in the formation of the final material. This concept is described as "reactive templating" and opens a synthetic path toward materials which cannot be synthesized on a nanometre scale by classical templating approaches. Metal nitrides are such materials. They are usually produced by the conversion of metals or metal oxides in ammonia flow at high temperature (T > 1000 °C), which makes the application of classical templating techniques difficult. Graphitic carbon nitride, g-C3N4, despite its fundamental and theoretical importance, is probably one of the most promising materials to complement carbon in materials science, and many efforts are put into the synthesis of this material. A simple polyaddition/elimination reaction path at high temperature (T = 550 °C) allows the polymerization of cyanamide toward graphitic carbon nitride solids. By hard templating, using nanostructured silica or aluminium oxide as nanotemplates, a variety of nanostructured graphitic carbon nitrides such as nanorods, nanotubes, and meso- and macroporous powders could be obtained by nanocasting or nanocoating.
Owing to the special semiconducting properties of the graphitic carbon nitride matrix, the nanostructured graphitic carbon nitrides show unexpected catalytic activity for the activation of benzene in Friedel-Crafts-type reactions, making this material an interesting metal-free catalyst. Furthermore, owing to the chemical composition of g-C3N4 and the fact that it decomposes completely at temperatures between 600°C and 800°C even under an inert atmosphere, g-C3N4 was shown to be a good nitrogen donor for the synthesis of early transition metal nitrides at high temperatures. Thus, using the nanostructured carbon nitrides as “reactive templates” or “nanoreactors”, various metal nitride nanostructures, such as nanoparticles and porous frameworks, could be obtained at high temperature. In this approach the carbon nitride nanostructure played the roles of both nitrogen source and exotemplate, imprinting its size and shape on the resulting metal nitride nanostructure.
Bridehood revisited
(2008)
We describe an approach to modeling biological networks with action languages via answer set programming. To this end, we propose an action language for modeling biological networks, building on previous work by Baral et al. We introduce its syntax and semantics along with a translation into answer set programming, an efficient Boolean constraint programming paradigm. Finally, we describe one of its applications, namely the sulfur starvation response pathway of the model plant Arabidopsis thaliana, and sketch the functionality of our system and its usage.
The study of biological interaction networks is a central theme in systems biology. Here, we investigate common as well as differentiating principles of molecular interaction networks associated with different levels of molecular organization. These include metabolic pathway maps, protein-protein interaction networks and kinase interaction networks. First, we present an integrated analysis of metabolic pathway maps and protein-protein interaction networks (PINs). It has long been established that successive enzymatic steps are often catalyzed by physically interacting proteins forming permanent or transient multi-enzyme complexes. Inspecting high-throughput PIN data, it has recently been shown that, indeed, enzymes involved in successive reactions are generally more likely to interact than other protein pairs. In this study, we expanded this line of research to compare the respective underlying network topologies and to investigate whether the spatial organization of enzyme interactions correlates with metabolic efficiency. Analyzing yeast data, we detected long-range correlations between the shortest path distances of protein pairs in both network types, suggesting a mutual correspondence of the two network architectures. We discovered that the organizing principles of physical interactions between metabolic enzymes differ from those of the general PIN of all proteins. While physical interactions between proteins are generally disassortative, enzyme interactions were observed to be assortative. Thus, enzymes frequently interact with other enzymes of similar rather than different degree. Enzymes carrying high flux loads are more likely to interact physically than enzymes with lower metabolic throughput. In particular, enzymes associated with catabolic pathways as well as enzymes involved in the biosynthesis of complex molecules were found to exhibit high degrees of physical clustering.
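Degree assortativity, the property that distinguishes the enzyme-enzyme interactions from the overall PIN here, is simply the Pearson correlation between the degrees at the two ends of each edge. The following is a minimal sketch (illustrative only, not the analysis code of the thesis; numpy assumed):

```python
import numpy as np

def degree_assortativity(edges):
    """Pearson correlation between the degrees of the endpoints of each edge.
    Positive: assortative (nodes link to nodes of similar degree);
    negative: disassortative (hubs link preferentially to low-degree nodes)."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    xs, ys = [], []
    for a, b in edges:  # count each undirected edge in both directions
        xs += [deg[a], deg[b]]
        ys += [deg[b], deg[a]]
    return float(np.corrcoef(xs, ys)[0, 1])

# A star is maximally disassortative: the hub (degree 4) meets only leaves (degree 1).
print(degree_assortativity([(0, 1), (0, 2), (0, 3), (0, 4)]))  # approximately -1.0
```

In an assortative subnetwork, such as the enzyme interactions described above, this correlation comes out positive.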
Single proteins were identified that connect major components of cellular metabolism and hence might be essential for the structural integrity of several biosynthetic systems. Besides metabolic aspects of PINs, we investigated the characteristic topological properties of protein interactions involved in signaling and regulatory functions mediated by kinase interactions. Characteristic topological differences between PINs associated with metabolism and those describing phosphorylation networks were revealed and shown to reflect the different modes of biological operation of the two network types. The construction of phosphorylation networks is based on the identification of specific kinase-target relations, including the determination of the actual phosphorylation sites (P-sites). The computational prediction of P-sites, as well as the identification of the kinases involved, still suffers from the insufficient accuracy and specificity of the underlying prediction algorithms, and experimental identification on a genome-wide scale is not yet feasible. Computational prediction methods have focused primarily on extracting predictive features from the local, one-dimensional sequence information surrounding P-sites. However, the recognition of such motifs by the respective kinases is a spatial event. We therefore characterized the spatial distributions of amino acid residue types around P-sites and extracted signature 3D profiles. We then tested the added value of spatial information for prediction performance. Compared to sequence-only predictors, a consistent performance gain was obtained. The availability of reliable training data of experimentally determined P-sites is critical for the development of computational prediction methods. As part of this thesis, we provide an assessment of the false-positive rates of phosphoproteomic data.
Phase space reconstruction is a method that allows the phase space of a system to be reconstructed using only a one-dimensional time series as input. It can be used to calculate Lyapunov exponents and to detect chaos, it helps to understand complex dynamics and their behavior, and it can reproduce datasets that were not measured. There are many different methods that produce correct reconstructions, such as time delay, Hilbert transformation, derivation and integration. The most widely used is time delay, but all methods have special properties that are useful in different situations. Hence, every reconstruction method has situations in which it is the best choice. Looking at all these different methods, the questions are: Why can all these different-looking methods be used for the same purpose? Is there any connection between these functions? The answer is found in the frequency domain: after a Fourier transformation, all these methods take a similar shape. Every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series: it contains the original data but applies a new focus, amplifying some parts and reducing others. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified that are mandatory for a reconstruction function. Under these restrictions one obtains a whole family of new reconstruction functions. It thus becomes possible to reduce noise within the reconstruction process itself, or to exploit the advantages of already known reconstruction methods while suppressing their unwanted characteristics.
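The filter view can be made concrete in a few lines. The sketch below (illustrative only, not the thesis code; numpy assumed) builds a standard time-delay reconstruction and then verifies that, for a periodic signal, a delayed coordinate is exactly a multiplication of the spectrum by exp(-2πifτ):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay reconstruction: row i is [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# Filter view: delaying by tau multiplies the spectrum by exp(-2*pi*1j*f*tau)
# (exact for a periodic signal, where circular and ordinary delays coincide).
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64)
tau = 16
f = np.fft.fftfreq(len(x))                    # frequencies in cycles per sample
delayed = np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau)))
print(np.allclose(delayed, np.roll(x, tau)))  # True: same coordinate, built as a filter
```

Other reconstruction functions fit the same template: the Hilbert transform is a multiplication by -i·sign(f), and differentiation is a multiplication by 2πif.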
Nanostructured materials are materials consisting of nanoparticulate building blocks on the scale of nanometers (i.e. 10⁻⁹ m). Composition, crystallinity and morphology can enhance or even induce new properties of the materials, which are desirable for today's and future technological applications. In this work, we have shown new strategies to synthesise metal oxide and metal nitride nanomaterials. The first part of the work deals with the study of the nonaqueous synthesis of metal oxide nanoparticles. We succeeded in the synthesis of In2O3 nanoparticles, where we could clearly influence the morphology by varying the type of precursor and solvent; of ZnO mesocrystals, by using acetonitrile as a solvent; of transition metal oxides (Nb2O5, Ta2O5 and HfO2) that are particularly hard to obtain on the nanoscale; and of other technologically important materials. Solvothermal synthesis, however, is not restricted to the formation of oxide materials. In the second part we show examples of nonaqueous, solvothermal syntheses of metal nitrides, with the main focus on the influence of different morphologies of the metal oxide precursors on the formation of the metal nitride nanoparticles. In spite of various reports, the number and variety of nanocrystalline metal nitrides remains small in comparison to metal oxides; hence preformed metal oxides as precursors for the preparation of metal nitrides are a logical choice. By reacting oxide nanoparticles with cyanamide, urea or melamine at temperatures of 800 to 900 °C under nitrogen flow, metal nitrides could be obtained. We studied the influence of the starting material in detail and found that size, crystallinity, the type of nitrogen source and the temperature play the most important roles. We were able to propose and verify a dissolution-recrystallisation model as the formation mechanism.
Furthermore, we could show that the initial morphology of the oxides could be retained when an ammonia flow was used instead.
Streamflow dynamics in mountainous environments are controlled by runoff generation processes in the basin upstream. Runoff generation processes are thus a major control of the terrestrial part of the water cycle, influencing both water quality and water quantity as well as their dynamics. The understanding of these processes becomes especially important for the prediction of floods, erosion, and dangerous mass movements, in particular as hydrological systems often show threshold behavior. In the case of extensive environmental change, be it in climate or in land use, understanding runoff generation processes will allow us to better anticipate the consequences and can thus lead to a more responsible management of resources as well as risks. In this study the runoff generation processes in a small undisturbed catchment in the Chilean Andes were investigated. The research area is characterized by steep hillslopes, volcanic ash soils, undisturbed old-growth forest and high rainfall amounts. The investigation of runoff generation processes in this data-scarce area is of special interest because a) little is known about the hydrological functioning of the young volcanic ash soils, which are characterized by extremely high porosities and hydraulic conductivities, b) no process studies have been carried out in this area at either slope or catchment scale, and c) understanding the hydrological processes in undisturbed catchments provides a basis for improving our understanding of disturbed systems, of the shift in processes that followed the disturbance, and perhaps also of the future process evolution necessary to reach a new steady state. The catchment studied here thus has the potential to serve as a reference catchment for future investigations. As no long-term rainfall and runoff data exist, it was necessary to replace long time series with a multitude of experimental methods, the so-called "multi-method approach".
These methods cover as many aspects of runoff generation as possible and include not only the measurement of time series such as discharge, rainfall, soil water dynamics and groundwater dynamics, but also various short-term measurements and experiments such as the determination of throughfall amounts and variability, water chemistry, soil physical parameters, soil mineralogy, geo-electrical soundings and tracer techniques. Assembling the results like pieces of a puzzle produces an incomplete but nevertheless useful picture of the dynamic ensemble of runoff generation processes in this catchment. The employed methods were then evaluated for their usefulness versus their expenditures (labour and financial costs). Finally, the hypotheses - the perceptual model of runoff generation derived from the experimental findings - were tested with the physically based model Catflow. Additionally, the process-based model Wasim-ETH was used to investigate the influence of land use on runoff generation at the catchment scale. An initial assessment of the hydrologic response of the catchment was achieved with a linear statistical model for the prediction of event runoff coefficients; the parameters identified as the best predictors give a first indication of the important processes. Various results acquired with the "multi-method approach" show that the response to rainfall is generally fast. Preferential vertical flow is of major importance and is reinforced by hydrophobicity during the summer months. Rapid lateral water transport is necessary to produce the fast response signal; however, while lateral subsurface flow was observed at several soil moisture profiles, the location and type of the structures causing fast lateral flow at the hillslope scale are still not clear and need to be investigated in more detail. Surface runoff has not been observed and is unlikely, given the high hydraulic conductivities of the volcanic ash soils.
Additionally, a large subsurface storage retains most of the incident rainfall during events (>90%, often even >95%) and sustains streamflow even after several weeks of drought. Several findings suggest a shift in processes from summer to winter, causing changes in flow patterns, in the response of stream chemistry to rainfall events, and in groundwater-surface water interactions. The results of the modelling study confirm the importance of rapid and preferential flow processes; however, owing to the limited knowledge of subsurface structures, the model still does not fully capture the runoff response. Investigating the importance of land use for runoff generation showed that while peak runoff generally increased with deforested area, the location of these areas also had an effect. Overall, the "multi-method approach" of replacing long time series with a multitude of experimental methods was successful in identifying the dominant hydrological processes and thus proved its applicability for data-scarce catchments under the constraint of limited resources.
This work analyzes the saving and consumption behavior of agents faced with the possibility of unemployment in a dynamic and stochastic life cycle model. The intertemporal optimization is based on dynamic programming with a backward recursion algorithm. The implemented uncertainty is not based on income shocks, as in traditional life cycle models, but uses Markov probabilities in which the probability of the agent's next employment status depends on the current status. The utility function used is a CRRA (constant relative risk aversion) function combined with a CES (constant elasticity of substitution) function, and the model includes several consumption goods, a subsistence level, money and a bequest function.
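A minimal sketch of such a backward recursion, with an illustrative two-state employment Markov chain and plain CRRA utility, can look as follows. All parameter values are invented for illustration, and the thesis model is richer (CES aggregation over several goods, a subsistence level, money and a bequest function):

```python
import numpy as np

# Illustrative parameters only - not the thesis calibration.
beta, gamma = 0.96, 2.0              # discount factor, relative risk aversion
P = np.array([[0.9, 0.1],            # P[s, s']: employed -> {employed, unemployed}
              [0.4, 0.6]])           #           unemployed -> {employed, unemployed}
y = np.array([1.0, 0.3])             # income in each employment state
assets = np.linspace(0.0, 4.0, 81)   # discretized asset grid
T = 50                               # number of periods

def u(c):
    """CRRA felicity; the thesis combines this with a CES aggregator over goods."""
    return c ** (1.0 - gamma) / (1.0 - gamma)

V = np.zeros((2, assets.size))       # terminal value (no bequest motive in this sketch)
for _ in range(T):                   # backward recursion over the life cycle
    V_new = np.empty_like(V)
    for s in range(2):
        EV = P[s] @ V                # expected continuation value per savings choice
        for i, a in enumerate(assets):
            c = a + y[s] - assets    # consumption implied by each savings choice
            vals = np.where(c > 0, u(np.maximum(c, 1e-12)) + beta * EV, -np.inf)
            V_new[s, i] = vals.max()
    V = V_new
```

Being employed, or holding more assets, raises the value function; the saving and consumption policies follow from the maximizing savings choice at each grid point.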
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data is governed by exactly the same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population. For example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive, while data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating training and test distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
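The rescaling-weight idea - matching the training sample to the test distribution without estimating either density separately - can be sketched with a discriminative reweighting scheme on synthetic one-dimensional data. This is illustrative only; the thesis derives integrated models rather than this plain two-step logistic variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: training inputs from N(0,1), test inputs from N(1,1).
x_tr = rng.normal(0.0, 1.0, 500)
x_te = rng.normal(1.0, 1.0, 500)

# Train a logistic discriminator separating test (label 1) from training (label 0)
# examples; its odds p/(1-p) are proportional to the density ratio
# p_test(x)/p_train(x), so no separate density estimates are needed.
X = np.concatenate([x_tr, x_te])
z = np.concatenate([np.zeros(500), np.ones(500)])
w, b = 0.0, 0.0
for _ in range(2000):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * ((p - z) * X).mean()
    b -= 0.1 * (p - z).mean()

# Importance weight for each training point, up to a constant factor.
p_tr = 1.0 / (1.0 + np.exp(-(w * x_tr + b)))
weights = p_tr / (1.0 - p_tr)

# Sanity check: the reweighted training mean moves toward the test mean (near 1).
print(np.average(x_tr, weights=weights))
```

Training a classifier with these per-example weights then approximates training on the test distribution itself, which is the role the rescaling weights play in the models above.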
The selective infrared (IR) excitation of molecular vibrations is a powerful tool to control photoreactivity prior to electronic excitation in the ultraviolet/visible (UV/Vis) light regime ("vibrationally mediated chemistry"). For adsorbates on surfaces it has been theoretically predicted that IR pre-excitation leads to higher UV/Vis photodesorption yields and larger cross sections for other photoreactions. In a recent experiment, IR-mediated desorption of molecular hydrogen from a Si(111) surface on which atomic hydrogen and deuterium were co-adsorbed was achieved, following a vibrational mechanism as indicated by the isotope selectivity. In the present work, the selective vibrational IR excitation of adsorbate molecules, treated as multi-dimensional oscillators on dissipative surfaces, has been simulated within the framework of open-system density matrix theory. Not only does potential-mediated inter-mode coupling pose an obstacle to selective excitation, but so does the coupling of the adsorbate ("system") modes to the electronic and phononic degrees of freedom of the surface ("bath"). Vibrational relaxation thereby takes place on time scales ranging from milliseconds to several hundred femtoseconds, depending on the availability of energetically matching electron-hole (e/h) pairs and/or phonons (lattice vibrations) in the surface. On metal surfaces, where relaxation of the adsorbate via the e/h pair mechanism dominates, vibrational lifetimes are usually shorter than on insulator or semiconductor surfaces - in the range of picoseconds, which is also the time scale of the IR pulses used here. Further factors inhibiting selectivity can be the harmonicity of a mode and weak dipole activities ("dark modes"), which render vibrational excitation with moderate field intensities difficult.
In addition to simple analytical pulses, optimal control theory (OCT) has been employed here to generate a suitable electric field that maximally populates the target state/mode. The complex OCT fields were analyzed by Husimi transformation, resolving the control field in time and energy. The adsorbate/surface systems investigated were CO/Cu(100), H/Si(100) and 2H/Ru(0001); these proved to be suitable models for studying the effects mentioned above. Further, the effects of temperature, pure dephasing (elastic scattering processes), pulse duration and dimensionality (up to four degrees of freedom) were studied. It was possible to excite single vibrational modes selectively, often even state-selectively. Special processes such as hot-band excitation, vibrationally mediated desorption and the excitation of "dark modes" were simulated. Finally, a novel OCT algorithm in density matrix representation has been developed that allows for time-dependent target operators and thus makes it possible to control the excitation mechanism instead of only the final state. The algorithm is based on a combination of global (iterative) and local (non-iterative) OCT schemes, such that short, globally controlled time intervals are coupled locally in time. Its numerical performance and accuracy were tested and verified, and it was successfully applied to stabilize a two-state linear combination and to enforce successive "ladder climbing" in a rather harmonic system, where monochromatic analytical pulses would simultaneously excite several states, leading to population loss from the target state.
Chitooligosaccharides are composed of glucosamine and N-acetylglucosamine residues. Gel permeation chromatography is employed for the separation of oligomers; cation exchange chromatography is used for the separation of homologues and isomers. Trideuterioacetylation of the chitooligosaccharides followed by MALDI-TOF mass spectrometry allows the quantitation of mixtures of homologues. vMALDI LTQ multiple-stage MS is employed for the quantitative sequencing of complex mixtures of heterochitooligosaccharides. Pure homologues and isomers are applied in biological assays. Chitooligosaccharides form high-affinity non-covalent complexes with HC gp-39 (human cartilage glycoprotein of 39 kDa). The affinity of the chitooligosaccharides depends on DP, FA and the sequence of glucosamine and N-acetylglucosamine moieties. (+)nanoESI Q-TOF MS/MS is used for the identification of a high-affinity binding chitooligosaccharide from a non-covalent chitinase B - chitooligosaccharide complex. DADAA is identified as the heterochitoisomer binding with the highest affinity and biostability to HC gp-39. Fluorescence-based enzyme assays confirm the results.
QuantPrime
(2008)
Background
Medium- to large-scale expression profiling using quantitative polymerase chain reaction (qPCR) assays is becoming increasingly important in genomics research. A major bottleneck in experiment preparation is the design of specific primer pairs, where researchers have to make several informed choices, often outside their area of expertise. With currently available primer design tools, several interactive decisions have to be made, resulting in lengthy design processes with varying assay quality.
Results
Here we present QuantPrime, an intuitive and user-friendly, fully automated tool for primer pair design in small- to large-scale qPCR analyses. QuantPrime can be used online at http://www.quantprime.de/ or on a local computer after download; it offers design and specificity checking with highly customizable parameters and is ready to use with many publicly available transcriptomes of important higher eukaryotic model organisms and plant crops (currently 295 species in total), while benefiting from exon-intron border and alternative splice variant information in available genome annotations. Experimental results with the model plant Arabidopsis thaliana, the crop Hordeum vulgare and the model green alga Chlamydomonas reinhardtii show success rates of designed primer pairs exceeding 96%.
Conclusion
QuantPrime constitutes a flexible, fully automated web application for reliable primer design for use in larger qPCR experiments, as proven by experimental data. The flexible framework is also open for simple use in other quantification applications, such as hydrolysis probe design for qPCR and oligonucleotide probe design for quantitative in situ hybridization. Future suggestions made by users can easily be implemented, thus allowing QuantPrime to be developed into a broad-range platform for the design of RNA expression assays.
We propose a network-structure-based model for heterosis and investigate it using metabolite profiles from Arabidopsis. A simple feed-forward two-layer network model (the Steinbuch matrix) is used in our conceptual approach. It allows structural network properties to be related directly to biological function. Interpreting heterosis as increased adaptability, our model predicts that the biological networks involved show increased connectivity of regulatory interactions. A detailed analysis of metabolite profile data reveals that the increased-connectivity prediction holds for graphical Gaussian models of our data from early development. This mirrors properties of observed heterotic Arabidopsis phenotypes. Furthermore, the model predicts a limit to increasing hybrid vigor with increasing heterozygosity - a phenomenon known in the literature.
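A graphical Gaussian model of metabolite data draws an edge between two metabolites when their partial correlation is non-negligible, so connectivity can be read off the partial correlation matrix. A minimal sketch (not the analysis pipeline of the study; the data here are synthetic):

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse covariance (precision)
    matrix - the quantity underlying a graphical Gaussian model."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Synthetic chain x -> y -> z: x and z are marginally correlated, but their
# partial correlation pc[0, 2] is near zero, so the graphical Gaussian model
# keeps only the x-y and y-z edges.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = x + 0.5 * rng.normal(size=2000)
z = y + 0.5 * rng.normal(size=2000)
pc = partial_correlations(np.column_stack([x, y, z]))
```

Counting the entries of such a matrix exceeding a threshold gives the connectivity measure whose increase the model predicts for heterotic genotypes.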
Contents
1. Styling for Service-Based 3D Geovisualization (Benjamin Hagedorn)
2. The Windows Monitoring Kernel (Michael Schöbel)
3. A Resource-Oriented Information Network Platform for Global Design Processes (Matthias Uflacker)
4. Federation in SOA – Secure Service Invocation across Trust Domains (Michael Menzel)
5. KStruct: A Language for Kernel Runtime Inspection (Alexander Schmidt)
6. Deconstructing Resources (Hagen Overdick)
7. FMC-QE – Case Studies (Stephan Kluth)
8. A Matter of Trust (Rehab Al-Nemr)
9. From Semi-automated Service Composition to Semantic Conformance (Harald Meyer)