Resolute readings of later Wittgenstein and the challenge of avoiding hierarchies in philosophy
(2011)
This dissertation addresses the question: How did later Wittgenstein aim to achieve his goal of putting forward a way of dissolving philosophical problems which centered on asking ourselves what we mean by our words – yet which did not entail any claims about the essence of language and meaning? This question is discussed with reference to “resolute” readings of Wittgenstein. I discuss the readings of James Conant, Oskari Kuusela, and Martin Gustafsson. I follow Oskari Kuusela’s claim that in order to fully appreciate how later Wittgenstein meant to achieve his goal, we need to clearly see how he aimed to do away with hierarchies in philosophy: Not only is the dissolution of philosophical problems via the method of clarifying the grammar of expressions to be taken as independent from any theses about what meaning must be – but furthermore, it is to be taken as independent from the dissolution of any particular problem via this method. As Kuusela stresses, this also holds for the problems involving rule-following and meaning: the clarification of the grammar of “rule” and “meaning” has no foundational status – it is nothing on which the method of clarifying the grammar of expressions as such is meant to rely. The lead question of this dissertation then is: What does it mean to come to see that the method of dissolving philosophical problems by asking “How is this word actually used?” does not in any way rely on the results of our having investigated the grammar of the particular concepts “rule” and “meaning”? What is the relation of such results – results such as “To follow a rule, [...], to obey an order, [...] are customs (uses, institutions)” or “The meaning of a word is its use in the language” – to this method? From this vantage point, I concern myself with two aspects of the readings of Gustafsson and Kuusela.
In Gustafsson, I concern myself with his idea that the dissolution of philosophical problems in general “relies on” the very agreement which – during the dissolution of the rule-following problem – comes out as a presupposition for our talk of “meaning” in terms of rules. In Kuusela, I concern myself with his idea that Wittgenstein, in adopting a way of philosophical clarification which investigates the actual use of expressions, is following the model of “meaning as use” – a model he had previously introduced in order to perspicuously present an aspect of the actual use of the word “meaning”. This dissertation aims to show how these two aspects of Gustafsson’s and Kuusela’s readings still fail to live up to the vision of Wittgenstein as a philosopher who aimed to do away with any hierarchies in philosophy. I base this conclusion on a detailed analysis of which of the occasions where Wittgenstein invokes the notions of “use” and “application” (as well as “agreement”) have to do with the dissolution of a specific problem only, and which have to do with the dissolution of philosophical problems in general. I discuss Wittgenstein’s remarks on rule-following, showing how in the dissolution of the rule-following paradox, notions such as “use”, “application”, and “practice” figure on two distinct logical levels. I then discuss an example of what happens when this distinction is not duly heeded: Gordon Baker and Peter Hacker’s idea that the rule-following remarks have a special significance for Wittgenstein’s project of dissolving philosophical problems as such. I furnish an argument to the effect that their idea that the clarification of the rules of grammar of the particular expression “following a rule” could answer a question about rules of grammar in general rests on a conflation of the two logical levels on which “use” occurs in the rule-following remarks, and that it leads into a regress.
I then show that Gustafsson’s view – despite its decisive advance over Baker and Hacker – contains a version of that same idea, and that it likewise leads into a regress. Finally, I show that Kuusela’s idea of a special significance of the model “meaning as use” for the whole of the method of stating rules for the use of words is open to a regress argument of a similar kind to the one he himself advances against Baker and Hacker. I conclude that in order to avoid such a regress, we need to reject the idea that the grammatical remark “The meaning of a word is its use in the language” – because of the occurrence of “use” in it – stands in any special relation to the method of dissolving philosophical problems by describing the use of words. Rather, we need to take this method as independent from this outcome of the investigation of the use of the particular word “meaning”.
Fusarium spp. infection of cereal grain is a common problem that leads to a dramatic loss of grain quality. The aim of the present study was to investigate the effect of Fusarium infection on the wheat storage protein gluten and its fractions, the gliadins and glutenins, in an in vitro model system. Gluten proteins were digested by F. graminearum proteases for 2, 4, 8 and 24 h, separated by Osborne fractionation and characterised by chromatographic (RP-HPLC) and electrophoretic (SDS-PAGE) analysis. Digestion of gluten by F. graminearum proteases showed a preference for the glutenins over the gliadins, with the HMW subfraction being most affected. In comparison with an untreated control, the HMW subfraction was degraded by about 97% after 4 h of incubation with Fusarium proteases. Separate digestion of gliadin and glutenin underlined the preference for HMW-GS. Analogous to the observed change in gluten composition, the yield of the extracted proteins changed. A higher amount of glutenin fragments was found in the gliadin extraction solution after digestion, which could at the same time mask gliadin degradation. This observation may help to explain the frequently reported decrease in glutenin content paralleled by an increase in gliadin content after Fusarium infection of grain.
Background
In many species males face a higher predation risk than females because males display elaborate traits that evolved under sexual selection, which may attract not only females but also predators. Females are, therefore, predicted to avoid such conspicuous males under predation risk. The present study was designed to investigate predator-induced changes of female mating preferences in Atlantic mollies (Poecilia mexicana). Males of this species show a pronounced polymorphism in body size and coloration, and females prefer large, colorful males in the absence of predators.
Results
In dichotomous choice tests, predator-naïve (lab-reared) females altered their initial preference for larger males in the presence of the cichlid Cichlasoma salvini, a natural predator of P. mexicana, and preferred small males instead. This effect was considerably weaker when females were confronted visually with the non-piscivorous cichlid Vieja bifasciata or the introduced non-piscivorous Nile tilapia (Oreochromis niloticus). In contrast, predator-experienced (wild-caught) females did not respond to the same extent to the presence of a predator, most likely due to a learned ability to evaluate their predators' motivation to prey.
Conclusions
Our study highlights that (a) predatory fish can have a profound influence on the expression of mating preferences of their prey (thus potentially affecting the strength of sexual selection), and females may alter their mate choice behavior strategically to reduce their own exposure to predators. (b) Prey species can evolve visual predator recognition mechanisms and alter their mate choice only when a natural predator is present. (c) Finally, experiential effects can play an important role, and prey species may learn to evaluate the motivational state of their predators.
Many organisms have developed defences to avoid predation by species at higher trophic levels. The capability of primary producers to defend themselves against herbivores affects their own survival, can modulate the strength of trophic cascades and changes rates of competitive exclusion in aquatic communities. Algal species are highly flexible in their morphology, growth form, biochemical composition and production of toxic and deterrent compounds. Several of these variable traits in phytoplankton have been interpreted as defence mechanisms against grazing. Zooplankton feed with differing success on various phytoplankton species, depending primarily on size, shape, cell wall structure and the production of toxins and deterrents. Chemical cues associated with (i) mechanical damage, (ii) herbivore presence and (iii) grazing are the main factors triggering induced defences in both marine and freshwater phytoplankton, but most studies have failed to disentangle the exact mechanism(s) governing defence induction in any particular species. Induced defences in phytoplankton include changes in morphology (e.g. the formation of spines, colonies and thicker cell walls), biochemistry (such as production of toxins, repellents) and in life history characteristics (formation of cysts, reduced recruitment rate). Our categorization of inducible defences in terms of the responsible induction mechanism provides guidance for future work, as hardly any of the available studies on marine or freshwater plankton have performed all the treatments that are required to pinpoint the actual cue(s) for induction. We discuss the ecology of inducible defences in marine and freshwater phytoplankton with a special focus on the mechanisms of induction, the types of defences, their costs and benefits, and their consequences at the community level.
We report on the gamma-ray activity of the blazar Mrk 501 during the first 480 days of Fermi operation. We find that the average Large Area Telescope (LAT) gamma-ray spectrum of Mrk 501 can be well described by a single power-law function with a photon index of 1.78 +/- 0.03. While we observe relatively mild flux variations with the Fermi-LAT (within less than a factor of two), we detect remarkable spectral variability where the hardest observed spectral index within the LAT energy range is 1.52 +/- 0.14, and the softest one is 2.51 +/- 0.20. These unexpected spectral changes do not correlate with the measured flux variations above 0.3 GeV. In this paper, we also present the first results from the 4.5 month long multifrequency campaign (2009 March 15-August 1) on Mrk 501, which included the Very Long Baseline Array (VLBA), Swift, RXTE, MAGIC, and VERITAS, the F-GAMMA, GASP-WEBT, and other collaborations and instruments which provided excellent temporal and energy coverage of the source throughout the entire campaign. The extensive radio to TeV data set from this campaign provides us with the most detailed spectral energy distribution yet collected for this source during its relatively low activity. The average spectral energy distribution of Mrk 501 is well described by the standard one-zone synchrotron self-Compton (SSC) model. In the framework of this model, we find that the dominant emission region is characterized by a size less than or similar to 0.1 pc (comparable within a factor of few to the size of the partially resolved VLBA core at 15-43 GHz), and that the total jet power (similar or equal to 10^44 erg s^-1) constitutes only a small fraction (similar to 10^-3) of the Eddington luminosity. The energy distribution of the freshly accelerated radiating electrons required to fit the time-averaged data has a broken power-law form in the energy range 0.3 GeV-10 TeV, with spectral indices 2.2 and 2.7 below and above the break energy of 20 GeV.
We argue that such a form is consistent with a scenario in which the bulk of the energy dissipation within the dominant emission zone of Mrk 501 is due to relativistic, proton-mediated shocks. We find that the ultrarelativistic electrons and mildly relativistic protons within the blazar zone, if comparable in number, are in approximate energy equipartition, with their energy dominating the jet magnetic field energy by about two orders of magnitude.
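The broken power-law electron spectrum quoted above (indices 2.2 and 2.7, break at 20 GeV) is easy to sketch numerically. Only the indices and break energy come from the abstract; the normalization and the continuity condition at the break are illustrative assumptions.

```python
def electron_distribution(E_GeV, E_break=20.0, p1=2.2, p2=2.7, norm=1.0):
    """Broken power-law N(E) ~ E^-p with indices p1/p2 below/above the
    break energy, as in the SSC fit quoted in the abstract.

    `norm` is an arbitrary illustrative constant; the two segments are
    matched at E_break so that N(E) is continuous (an assumption).
    """
    if E_GeV <= E_break:
        return norm * E_GeV ** (-p1)
    # rescale the steeper segment so both branches agree at E_break
    return norm * E_break ** (p2 - p1) * E_GeV ** (-p2)
```

Halving the energy below the break raises N(E) by a factor 2^2.2, doubling it above the break lowers N(E) by 2^-2.7, which is all a broken power law encodes.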
Computing the magnitude of an earthquake requires correcting for the propagation effects from the source to the receivers. This is often accomplished by performing numerical simulations using a suitable Earth model. In this work, the energy magnitude Me is considered and its determination is performed using theoretical spectral amplitude decay functions over teleseismic distances based on the global Earth model AK135Q. Since the high frequency part (above the corner frequency) of the source spectrum has to be considered in computing Me, the influence of propagation and site effects may not be negligible and they could bias the single station Me estimations. Therefore, in this study we assess the inter- and intrastation distributions of errors by considering the Me residuals computed for a large data set of earthquakes recorded at teleseismic distances by seismic stations deployed worldwide. To separate the inter- and intrastation contribution of errors, we apply a maximum likelihood approach to the Me residuals. We show that the interstation errors (describing a sort of site effect for a station) are within +/- 0.2 magnitude units for most stations and their spatial distribution reflects the expected lateral variation affecting the velocity and attenuation of the Earth's structure in the uppermost layers, not accounted for by the 1-D AK135Q model. The variance of the intrastation error distribution (describing the record-to-record component of variability) is larger than the interstation one (0.240 against 0.159), and the spatial distribution of the errors is not random but shows specific patterns depending on the source-to-station paths. The set of coefficients empirically determined may be used in the future to account for the heterogeneities of the real Earth not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
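The split of magnitude residuals into an interstation (site) and an intrastation (record-to-record) component can be illustrated with a simple method-of-moments sketch; the study itself uses a maximum likelihood estimator, and the station codes and residual values below are invented.

```python
from collections import defaultdict
from statistics import mean, pvariance

def decompose_residuals(residuals):
    """Split (station, residual) pairs into an interstation term
    (each station's mean residual, a 'site effect') and an
    intrastation term (record-to-record scatter about that mean).

    Moment-based sketch of the inter/intrastation decomposition;
    not the maximum likelihood estimator used in the study.
    """
    by_station = defaultdict(list)
    for station, r in residuals:
        by_station[station].append(r)
    station_means = {s: mean(v) for s, v in by_station.items()}
    # deviations from each station's own mean = intrastation errors
    intra = [r - station_means[s] for s, r in residuals]
    inter_var = pvariance(list(station_means.values()))
    intra_var = pvariance(intra)
    return station_means, inter_var, intra_var

# hypothetical data: two stations, two records each
data = [("STA1", 0.1), ("STA1", 0.3), ("STA2", -0.2), ("STA2", 0.0)]
means, inter_var, intra_var = decompose_residuals(data)
```

The two variances returned play the roles of the 0.159 (interstation) and 0.240 (intrastation) values reported in the abstract.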
Swedish long-term soil fertility experiments were used to investigate the effect of texture and fertilization regime on soil electrical conductivity. In one geophysical approach, fields were mapped to characterize the horizontal variability in apparent electrical conductivity down to 1.5 m soil depth using an electromagnetic induction meter (EM38 device). The data obtained were geo-referenced by dGPS. The other approach consisted of measuring the vertical variability in electrical conductivity along transects using a multi-electrode apparatus for electrical resistivity tomography (GeoTom RES/IP device) down to 2 m depth. Geophysical field work was complemented by soil analyses. The results showed that despite 40 years of different fertilization regimes, treatments had no significant effects on the apparent electrical conductivity. Instead, the comparison of sites revealed high and low conductivity soils, with gradual differences explained by soil texture. A significant, linear relationship found between apparent electrical conductivity and soil clay content explained 80% of the variability measured. In terms of soil depth, both low and high electrical conductivity values were measured. Abrupt changes in electrical conductivity within a field revealed the presence of 'deviating areas'. Higher values corresponded well with layers with a high clay content, while local inclusions of coarse-textured materials caused a high variability in conductivity in some fields. The geophysical methods tested provided useful information on the variability in soil texture at the experimental sites. The use of spatial EC variability as a co-variable in statistical analysis could be a complementary tool in the evaluation of experimental results.
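The reported linear relationship between apparent electrical conductivity and clay content (explaining about 80% of the variability) is an ordinary least-squares fit with its coefficient of determination; a minimal sketch, with invented data:

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a*x + b and the coefficient of
    determination R^2 (the 'fraction of variability explained', cf.
    the ~80% reported for EC vs. clay content). Pure Python."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# illustrative call: clay content (%) vs. apparent EC (invented values)
slope, intercept, r2 = linear_fit_r2([5, 10, 20, 40], [3, 6, 11, 22])
```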
The Western Alpine Sesia-Lanzo Zone (SLZ) is a sliver of eclogite-facies continental crust exhumed from mantle depths in the hanging wall of a subducted oceanic slab. Eclogite-facies felsic and basic rocks sampled across the internal SLZ show different degrees of retrograde metamorphic overprint associated with fluid influx. The weakly deformed samples preserve relict eclogite-facies mineral assemblages that show partial fluid-induced compositional re-equilibration along grain boundaries, brittle fractures and other fluid pathways. Multiple fluid influx stages are indicated by replacement of primary omphacite by phengite, albitic plagioclase and epidote as well as partial re-equilibration and/or overgrowths in phengite and sodic amphibole, producing characteristic step-like compositional zoning patterns. The observed textures, together with the map-scale distribution of the samples, suggest open-system, pervasive and reactive fluid flux across large rock volumes above the subducted slab. Thermodynamic modelling indicates a minimum amount of fluid of 0.1-0.5 wt % interacting with the wall-rocks. Phase relations and reaction textures indicate mobility of K, Ca, Fe and Mg, whereas Al is relatively immobile in these medium-temperature-high-pressure fluids. Furthermore, the thermodynamic models show that recycling of previously fractionated material, such as in the cores of garnet porphyroblasts, largely controls the compositional re-equilibration of the exhumed rock body.
The amount and composition of subduction zone fluids and the effect of fluid-rock interaction at a slab-mantle interface have been constrained by thermodynamic and trace element modelling of partially overprinted blueschist-facies rocks from the Sesia Zone (Western Alps). Deformation-induced differences in fluid flux led to a partial preservation of pristine mineral cores in weakly deformed samples that were used to quantify Li, B, Sr and Pb distribution during mineral growth, breakdown and modification induced by fluid-rock interaction. Our results show that Li and B budgets are fluid-controlled, thus acting as tracers for fluid-rock interaction processes, whereas Sr and Pb budgets are mainly controlled by the fluid-induced formation of epidote. Our calculations show that fluid-rock interaction caused significant Li and B depletion in the affected rocks due to leaching effects, which in turn can lead to a drastic enrichment of these elements in the percolating fluid. Depending on the available fluid-mineral trace element distribution coefficients, modelled fluid-rock ratios were up to 0.06 in weakly deformed samples and at least 0.5 to 4 in shear zone mylonites. These amounts lead to time-integrated fluid fluxes of up to 1.4 × 10^2 m^3 m^-2 in the weakly deformed rocks and 1.8 × 10^3 m^3 m^-2 in the mylonites. Combined thermodynamic and trace element models can be used to quantify metamorphic fluid fluxes and the associated element transfer in complex, reacting rock systems and help to better understand commonly observed fluid-induced trace element trends in rocks and minerals from different geodynamic environments.
Eclogites from the main borehole of the Chinese Continental Scientific Drilling project yield highly precise Lu-Hf garnet-clinopyroxene ages of 216.9 +/- 1.2 Ma (four samples) and 220.5 +/- 2.7 Ma (one sample). The spatial distribution of the rare earth elements in garnet is consistent with the preservation of primary growth zoning, unmodified by diffusion, which supports the interpretation that the Lu-Hf ages date the time of formation of garnet, the major rock-forming mineral in the eclogites. The preservation of primary REE zoning, despite peak metamorphic temperatures around 800-850 degrees C, indicates that the Lu-Hf chronometer is perfectly suitable to date garnet-forming reactions in high-grade rocks. The range of Lu-Hf ages for eclogites in the Dabie-Sulu UHP terrane points to episodic rather than continuous growth of garnets and thus punctuated metamorphism during the collision of the North China Block and the Yangtze Block. The U-Pb ages and Hf-isotope systematics of zircon grains from one eclogite sample imply a protracted geologic history of the eclogite precursors that started around 2 Ga and culminated in the UHP metamorphism around 220 Ma.
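Lu-Hf ages such as those above are derived from isochron slopes via t = ln(slope + 1)/λ. A sketch, assuming the commonly used 176Lu decay constant λ ≈ 1.867 × 10^-11 yr^-1 (the value adopted in this study may differ):

```python
import math

LAMBDA_LU176 = 1.867e-11  # 176Lu decay constant in 1/yr (assumed value)

def isochron_age_ma(slope):
    """Age in Ma from a Lu-Hf isochron slope: t = ln(slope + 1) / lambda."""
    return math.log(slope + 1.0) / LAMBDA_LU176 / 1e6

def isochron_slope(age_ma):
    """Inverse relation: isochron slope expected for a given age in Ma."""
    return math.exp(LAMBDA_LU176 * age_ma * 1e6) - 1.0
```

For a ~217 Ma eclogite the expected isochron slope is only about 0.004, which is why the quoted precision of about half a percent requires very radiogenic garnet.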
The pressures required for diamond and coesite formation far exceed conditions reached by even the deepest present-day orogenic crustal roots. Therefore the occurrence of metamorphosed continental crust containing these minerals requires processes other than crustal thickening to have operated in the past. Here we report the first in situ finding of diamond and coesite, characterized by micro-Raman spectroscopy, in high-pressure granulites otherwise indistinguishable from granulites found associated with garnet peridotite throughout the European Variscides. Our discovery confirms the provenance of Europe's first reliable diamond, the "Bohemian diamond," found in A.D. 1870, and also represents the first robust evidence for ultrahigh-pressure conditions in a major Variscan crustal rock type. A process of deep continental subduction is required to explain the metamorphic pressures and the granulite-garnet peridotite association, and thus tectonometamorphic models for these rocks involving a deep orogenic crustal root need to be significantly modified.
Background: Although it is nowadays broadly accepted that mitochondrial DNA (mtDNA) may undergo recombination, the frequency of such recombination remains controversial. Its estimation is not straightforward, as recombination under homoplasmy (i.e., among identical mt genomes) is likely to be overlooked. In species with tandem duplications of large mtDNA fragments, the detection of recombination can be facilitated, as it can lead to gene conversion among duplicates. Although the mechanisms for concerted evolution in mtDNA are not yet fully understood, recombination rates have been estimated at anywhere from "one per speciation event" down to once every 850 years, or even "during every replication cycle".
Results: Here we present the first complete mt genomes of the avian family Bucerotidae, i.e., those of two Philippine hornbills, Aceros waldeni and Penelopides panini. The mt genomes are characterized by a tandemly duplicated region encompassing part of cytochrome b, 3 tRNAs, NADH6, and the control region. The duplicated fragments are identical to each other except for a short section in domain I and for the length of repeat motifs in domain III of the control region. Due to the heteroplasmy with regard to the number of these repeat motifs, there is some size variation in both genomes; at around 21,657 bp (A. waldeni) and 22,737 bp (P. panini), they significantly exceed the hitherto longest known avian mt genomes, those of the albatrosses. We discovered concerted evolution between the duplicated fragments within individuals. The existence of differences between individuals in coding genes as well as in the control region, which are maintained between duplicates, indicates that recombination apparently occurs frequently, i.e., in every generation.
Conclusions: The homogenised duplicates are interspersed by a short fragment which shows no sign of recombination. We hypothesize that this region corresponds to the so-called Replication Fork Barrier (RFB), which has been described from the chicken mitochondrial genome. As this RFB is supposed to halt replication, it offers a potential mechanistic explanation for frequent recombination in mitochondrial genomes.
The meadow grasshopper, Chorthippus parallelus (Zetterstedt), is common and widespread in Central Europe, with a low dispersal range per generation. A population study in Central Germany (Frankenwald and Thuringer Schiefergebirge) showed strong interpopulation differences in abundance and individual fitness. We examined genetic variability using microsatellite markers within and between 22 populations in a short- to long-distance sampling (19 populations, Frankenwald, Schiefergebirge, as well as a southern transect), and in the Erzgebirge region (three populations), with the latter aiming to check for effects as a result of historical forest cover. Of the 671 C. parallelus captured, none was macropterous (functionally winged). All populations showed a high level of expected and observed heterozygosity (mean 0.80-0.90 and 0.60-0.75, respectively), although there was evidence of inbreeding (FIS values all positive). Allelic richness for all locus-population combinations was high (mean 9.3-11.2), while the number of alleles per locus ranged from 15 to 62. At a local level, genic and genotypic differences were significant. Pairwise FST values were in the range 0.00-0.04, indicating little interpopulation genetic differentiation. Similarly, the calculated gene flow was very high, based on the respective FST (19.5) and using private alleles (7.7). A neighbour-joining tree using Nei's DA and a principal coordinate analysis separated two populations that were collected in the Erzgebirge region. Populations from this region may have escaped the effects of the historical forest cover. The visualization of the spatial arrangement of genotypes revealed one geographical barrier to gene flow in the short-distance sampling.
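Under Wright's island model, an FST-based gene-flow estimate like the 19.5 quoted above follows from Nm = (1 − FST)/(4 FST); a minimal sketch (the private-alleles estimator behind the 7.7 value is a different method):

```python
def nm_from_fst(fst):
    """Effective number of migrants per generation under Wright's
    island model: Nm = (1 - FST) / (4 * FST)."""
    if not 0.0 < fst < 1.0:
        raise ValueError("FST must lie strictly between 0 and 1")
    return (1.0 - fst) / (4.0 * fst)
```

An Nm of 19.5 corresponds to FST = 1/79 ≈ 0.013, squarely inside the 0.00-0.04 range of pairwise values reported above.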
Lake Naivasha, Kenya, is one of a number of freshwater lakes in the East African Rift System. Since the beginning of the twentieth century, it has experienced greater anthropogenic influence as a result of increasingly intensive farming of coffee, tea, flowers, and other horticultural crops within its catchment. The water-level history of Lake Naivasha over the past 200 years was derived from a combination of instrumental records and sediment data. In this study, we analysed diatoms in a lake sediment core to infer past lacustrine conductivity and total phosphorus concentrations. We also measured total nitrogen and carbon concentrations in the sediments. Core chronology was established by 210Pb dating and covered a ~186-year history of natural (climatic) and human-induced environmental changes. Three stratigraphic zones in the core were identified using diatom assemblages. There was a change from littoral/epiphytic diatoms such as Gomphonema gracile and Cymbella muelleri, which occurred during a prolonged dry period from ca. 1820 to 1896 AD, through a transition period, to the present planktonic Aulacoseira sp. that favors nutrient-rich waters. This marked change in the diatom assemblage was caused by climate change, and later a strong anthropogenic overprint on the lake system. Increases in sediment accumulation rates since 1928, from 0.01 to 0.08 g cm^-2 year^-1, correlate with an increase in diatom-inferred total phosphorus concentrations since the beginning of the twentieth century. The increase in phosphorus accumulation suggests increasing eutrophication of freshwater Lake Naivasha. This study identified two major periods in the lake's history: (1) the period from 1820 to 1950 AD, during which the lake was affected mainly by natural climate variations, and (2) the period since 1950, during which the effects of anthropogenic activity overprinted those of natural climate variation.
Rubisco (ribulose-1,5-bisphosphate carboxylase/oxygenase; EC 4.1.1.39), the most abundant protein in nature, catalyzes the assimilation of CO2 (worldwide about 10^11 t each year) by carboxylation of ribulose-1,5-bisphosphate. It is a hexadecamer consisting of eight large and eight small subunits. Although the Rubisco large subunit (rbcL) is encoded by a single gene on the multicopy chloroplast genome, the Rubisco small subunits (rbcS) are encoded by a family of nuclear genes. In Arabidopsis thaliana, the rbcS gene family comprises four members, that is, rbcS-1a, rbcS-1b, rbcS-2b, and rbcS-3b. We sequenced all Rubisco genes in 26 worldwide distributed A. thaliana accessions. In three of these accessions, we detected a gene duplication/loss event, where rbcS-1b was lost and substituted by a duplicate of rbcS-2b (called rbcS-2b*). By screening 74 additional accessions using a specific polymerase chain reaction assay, we detected five additional accessions with this duplication/loss event. In summary, we found the gene duplication/loss in 8 of 100 A. thaliana accessions, namely, Bch, Bu, Bur, Cvi, Fei, Lm, Sha, and Sorbo. We sequenced an about 1-kb promoter region for all Rubisco genes as well. This analysis revealed that the gene duplication/loss event was associated with promoter alterations (two insertions of 450 and 850 bp, one deletion of 730 bp) in rbcS-2b and a promoter deletion (2.3 kb) in rbcS-2b* in all eight affected accessions. The substitution of rbcS-1b by a duplicate of rbcS-2b (i.e., rbcS-2b*) might be caused by gene conversion. All four Rubisco genes evolve under purifying selection, as expected for central genes of the highly conserved photosystem of green plants. We inferred a single positively selected site, a tyrosine to aspartic acid substitution at position 72 in rbcS-1b. Exactly the same substitution compromises carboxylase activity in the cyanobacterium Anacystis nidulans. In A. thaliana, this substitution is associated with an inferred recombination. Functional implications of the substitution remain to be evaluated.
Intraspecific brood parasitism (IBP) is a remarkable phenomenon by which parasitic females can increase their reproductive output by laying eggs in conspecific females' nests in addition to incubating eggs in their own nest. Kin selection could explain the tolerance, or even the selective advantage, of IBP, but different models of IBP based on game theory yield contradicting predictions. Our analyses of seven polymorphic autosomal microsatellites in two eider duck colonies indicate that relatedness between host and parasitizing females is significantly higher than the background relatedness within the colony. This result is unlikely to be a by-product of relatives nesting in close vicinity, as nest distance and genetic identity are not correlated. For eider females that had been ring-marked during the decades prior to our study, our analyses indicate that (i) the average age of parasitized females is higher than the age of nonparasitized females, (ii) the percentage of nests with alien eggs increases with the age of nesting females, (iii) the level of IBP increases with the host females' age, and (iv) the number of own eggs in the nest of parasitized females significantly decreases with age. IBP may allow those older females unable to produce as many eggs as they can incubate to gain indirect fitness without impairing their direct fitness: genetically related females specialize in their energy allocation, with young females producing more eggs than they can incubate and entrusting these to their older relatives. Intraspecific brood parasitism in ducks may constitute cooperation among generations of closely related females.
Here we present a protocol to genetically detect diatoms in sediments of the Kenyan tropical Lake Naivasha, based on taxon-specific PCR amplification of short fragments (approximately 100 bp) of the small subunit ribosomal (SSU) gene and subsequent separation of species-specific PCR products by PCR-based denaturing high-performance liquid chromatography (DHPLC). An evaluation of amplicons differing in primer specificity to diatoms and length of the fragments amplified demonstrated that the number of different diatom sequence types detected after cloning of the PCR products critically depended on the specificity of the primers to diatoms and the length of the amplified fragments whereby shorter fragments yielded more species of diatoms. The DHPLC was able to discriminate between very short amplicons based on the sequence difference, even if the fragments were of identical length and if the amplicons differed only in a small number of nucleotides. Generally, the method identified the dominant sequence types from mixed amplifications. A comparison with microscopic analysis of the sediment samples revealed that the sequence types identified in the molecular assessment corresponded well with the most dominant species. In summary, the PCR-based DHPLC protocol offers a fast, reliable and cost-efficient possibility to study DNA from sediments and other environmental samples with unknown organismic content, even for very short DNA fragments.
Laura Pavesi, Elvira De Matthaeis, Ralph Tiedemann, and Valerio Ketmaier (2011) Temporal population genetics and COI phylogeography of the sandhopper Macarorchestia remyi (Amphipoda: Talitridae). Zoological Studies 50(2): 220-229. In this study we assessed levels of genetic divergence and variability in 208 individuals of the supralittoral sandhopper Macarorchestia remyi, a species strictly associated with rotted wood stranded on sand beaches, by analyzing sequence polymorphisms in a fragment of the mitochondrial DNA (mtDNA) gene coding for cytochrome oxidase subunit I (COI). The geographical distribution and ecology of the species are poorly known. The study includes 1 Tyrrhenian and 2 Adriatic populations sampled along the Italian peninsula, plus a single individual found on Corfu Is. (Greece). The Tyrrhenian population was sampled monthly for 1 yr. Genetic data revealed a deep phylogeographic break between the Tyrrhenian and Adriatic populations, with no shared haplotypes. The single individual collected on Corfu Is. carried the most common haplotype found in the Tyrrhenian population. A mismatch analysis could not reject the hypothesis of a sudden demographic expansion in all but 2 monthly samples. When compared to previous genetic data on a variety of Mediterranean talitrids, our results place M. remyi among those species with profound intraspecific divergence (sandhoppers), and set it apart from beachfleas, which generally display little population genetic structuring.
Preface
(2011)
We analyzed mtDNA polymorphisms (a total of 741 bp from parts of the conserved control region, ND5, ND2, Cyt b and 12S) in 91 scats and 12 tissue samples of Bengal tiger (Panthera tigris tigris) populations across the Terai Arc Landscape (TAL), located in the foothills of the Himalayas in northwestern India, Buxa Tiger Reserve (BTR), and North East India. In TAL and BTR, we found a specific haplotype at high frequency, which was absent elsewhere, indicating a genetically distinct population in these regions. Within the TAL region, there is some evidence for genetic isolation of the tiger populations west of the river Ganges, i.e., in the western part of Rajaji National Park (RNP). Although the river itself might not constitute a significant barrier for tigers, recent human-induced changes in habitat and degradation of the Motichur-Chilla Corridor connecting the two sides of the tiger habitat of RNP might effectively prevent genetic exchange. A cohesive population is observed for the rest of the TAL. Even the more eastern BTR belongs genetically to this unit, despite the present lack of a migration corridor between BTR and TAL. Despite close geographic proximity, Chitwan (Nepal) constitutes a tiger population genetically different from TAL. Moreover, the North East India tiger populations are genetically different from TAL and BTR, as well as from the other Bengal tiger populations in India.
Sexual selection often leads to sexual dimorphism, where secondary sexual traits are more expressed in the male sex. This may be due, for example, to increased fighting or mate-guarding abilities of males expressing those traits. We investigated sexually dimorphic traits in four populations of a marine amphipod, Pontogammarus maeoticus (Gammaridea: Pontogammaridae), the most abundant amphipod species in the sublittoral zone along the southern shoreline of the Caspian Sea. Male amphipods are typically larger in body size than females, and have relatively larger posterior gnathopods and antennae. However, it remains to be studied for most other body appendages whether or not, and to what extent, they are sexually dimorphic. Using Analysis of Covariance (ANCOVA), we compared the relationships between body size and trait expression for 35 metric characters between males and females; we further compared the four populations by performing three different Discriminant Function Analyses (DFA). We detected several thus far undescribed sexually dimorphic traits, such as the seventh peraeopods or the epimeral plates. We also found that the size of the propodus of the first and second gnathopods increases with increasing body size, and this allometric increase was stronger in males than in females. Finally, we found that the degree of sexual dimorphism in the expression of the width of the third epimeral plate varies across sites, suggesting that differences in ecology might affect the strength of sexual selection in different populations.
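The slope-comparison logic of such an ANCOVA, testing whether the allometric slope of a trait differs between the sexes via a sex-by-size interaction term, can be sketched in a few lines. This is a generic illustration with synthetic data; the variable names and simulated effect sizes are mine, not values from the study:

```python
import numpy as np

def ancova_interaction(size, trait, is_male):
    """Ordinary-least-squares fit of
        trait ~ intercept + size + sex + size:sex
    and return the interaction coefficient, i.e. the difference
    in allometric slope between males and females."""
    X = np.column_stack([
        np.ones_like(size),   # intercept
        size,                 # common body-size slope
        is_male,              # sex main effect
        size * is_male,       # sex-by-size interaction
    ])
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
    return beta[3]

# Synthetic data: males receive a steeper allometric slope (+0.5).
rng = np.random.default_rng(0)
size = rng.uniform(5.0, 15.0, 200)
is_male = (np.arange(200) % 2).astype(float)
trait = 1.0 + 0.8 * size + 0.5 * size * is_male + rng.normal(0.0, 0.1, 200)
slope_difference = ancova_interaction(size, trait, is_male)
```

An interaction coefficient significantly greater than zero corresponds to the stronger male allometry reported for the gnathopod propodus.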
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its center. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, such as its stellar velocity dispersion, luminosity or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improve our knowledge of the census of SMBHs and of the coevolution of black holes and galaxies. The first route I follow is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars, which are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, i.e. the space density of AGN as a function of luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad band B_J magnitudes and AGN broad Halpha emission line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshifts, while the space density of less luminous AGN peaks at lower redshifts.
However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate of the black hole, and of the respective distribution functions, the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique that fully accounts for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower-mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects, which can bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g. an AGN luminosity bias, an active fraction bias and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for these biases. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects.
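The selection-corrected likelihood behind such an approach can be written schematically as follows. The notation is mine and this is a standard construction from AGN demographics, not necessarily the exact formulation of the thesis: Phi denotes the bivariate distribution function over black hole mass and Eddington ratio, Omega the survey selection function.

```latex
% \Phi(M_\bullet,\lambda): bivariate distribution function of black-hole
% mass M_\bullet and Eddington ratio \lambda; \Omega: selection function.
\mathcal{L} \;=\; \prod_{i=1}^{N}
  \frac{\Omega(M_{\bullet,i},\lambda_i)\,\Phi(M_{\bullet,i},\lambda_i)}
       {\int\!\!\int \Omega(M_\bullet,\lambda)\,\Phi(M_\bullet,\lambda)\,
        \mathrm{d}M_\bullet\,\mathrm{d}\lambda}
```

Maximizing this likelihood over the parameters of Phi yields estimates of the active black hole mass function and the Eddington ratio distribution that are not biased by the survey flux limit, provided Omega is known.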
Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques for determining the black hole mass in quiescent galaxies is stellar dynamical modeling, which employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. This method can reveal the presence of the black hole and yield its mass, provided the sphere of the black hole's gravitational influence is spatially resolved. However, the presence of a dark matter halo is usually ignored in the dynamical modeling, potentially biasing the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, this time including a dark matter halo. For galaxies whose black hole sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well-resolved sphere of influence.
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments, which make it difficult to compute such data numerically along similar lines. A quasilinear elliptic system of equations is presented which we expect can be used to construct vacuum initial data which are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions. It is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed, in a way that opens up the possibility of performing numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications.
The question whether the results obtained so far can again be used to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity to any given order thus remains open. The answer requires further analysis and perhaps new methods.
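For reference, the Helmholtz decomposition used as a model problem above splits a vector field v into a gradient part and a divergence-free part (standard notation, mine):

```latex
v = \nabla\phi + w , \qquad \operatorname{div} w = 0 ,
\qquad \text{where } \Delta\phi = \operatorname{div} v .
```

The delicate point on asymptotically flat manifolds is the decay of the potential at space-like infinity: demanding fast decay rates for the summands is what leads to the solvability issues discussed above.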
The essay compares the dichotomous concepts of corporeality and spirituality in Judaism and Christianity. Through the ages, deviations from normative principles of belief can be discerned in both religions. These can be attributed either to the somewhat confrontational interaction between Jews and Christians in the medieval urban environment or to the impact of Hellenic civilization on both monotheistic religions. Out of this dynamic emerged Christian art with a predilection for expressed corporeality, whereas Jewish religiosity found its artistic expression in a spiritual, noniconographic mode. A genuine Jewish art and iconography could develop only after a certain degree of assimilation and secularization. Marc Chagall was the first protagonist of a mature expression of Jewish iconography.
This thesis covers the topic "Thinning and Turbulence in Aqueous Films". Experimental studies in two-dimensional systems have gained increasing attention during the last decade. Thin liquid films serve as paradigms of atmospheric convection, thermal convection in the Earth's mantle, and turbulence in magnetohydrodynamics. Recent research on colloids, interfaces and nanofluids has led to advances in the development of micro-mixers (lab-on-a-chip devices). In this project a detailed description of a thin film experiment with focus on the particular surface forces is presented. The impact of turbulence on the thinning of liquid films oriented parallel to the gravitational force is studied. An experimental setup was developed which permits the capture of thin film interference patterns under controlled surface and atmospheric conditions. The measurement setup also serves as a prototype of a mixer based on thermally induced turbulence in liquid thin films with thicknesses in the nanometer range. The convection is realized by placing a cooled copper rod in the center of the film. The temperature gradient between the rod and the atmosphere results in a density gradient in the liquid film, so that differences in buoyancy generate turbulence. In the work at hand the thermally driven convection is characterized by a newly developed algorithm, named Cluster Imaging Velocimetry (CIV). This routine determines the flow-relevant vector fields (velocity and deformation). On the basis of these insights the flow in the experiment was investigated with respect to its mixing properties; the mixing characteristics were compared to theoretical models, and the mixing efficiency of the flow scheme was calculated. The gravitationally driven thinning of the liquid film was analyzed under the influence of turbulence. Strong shear forces lead to the generation of ultra-thin domains consisting of Newton black film.
Due to the exponential expansion of the thin areas and the efficient mixing, this two-phase flow rapidly turns into convection of only ultra-thin film. This turbulence-driven transition was observed and quantified for the first time, and the existence of stable convection in liquid nanofilms was proven in the context of this work.
The modeling and evaluation calculus FMC-QE, the Fundamental Modeling Concepts for Quantitative Evaluation [1], extends the Fundamental Modeling Concepts (FMC) for performance modeling and prediction. In this new methodology, the hierarchical service requests are the main focus, because they are the origin of every service provisioning process. Similar to physics, these service requests are a tuple of value and unit, which enables hierarchical service request transformations at the hierarchical borders and therefore hierarchical modeling. By reducing model complexity through decomposition of the system into different hierarchical views, distinguishing between operational and control states, and calculating the performance values under the steady-state assumption, FMC-QE is scalably applicable to complex systems. According to FMC, the system is modeled in a 3-dimensional hierarchical representation space, where system performance parameters are described in three arbitrarily fine-grained hierarchical bipartite diagrams. The hierarchical service request structures are modeled in Entity Relationship Diagrams. The static server structures, divided into logical and real servers, are described as Block Diagrams. The dynamic behavior and the control structures are specified as Petri Nets, more precisely Colored Time Augmented Petri Nets. From the structures and parameters of the performance model, a hierarchical set of equations is derived. The calculation of the performance values is done on the assumption of stationary processes and is based on fundamental laws of performance analysis: Little's Law and the Forced Traffic Flow Law. Little's Law is used within the different hierarchical levels (horizontal), and the Forced Traffic Flow Law is the key to the dependencies among the hierarchical levels (vertical).
This calculation is suitable for complex models and allows a fast (re-)calculation of different performance scenarios in order to support development and configuration decisions. Within the Research Group Zorn at the Hasso Plattner Institute, the work is embedded in broader research on the development of FMC-QE. While this work concentrates on the theoretical background, description and definition of the methodology as well as the extension and validation of its applicability, other topics are the development of an FMC-QE modeling and evaluation tool and the usage of FMC-QE in the design of an adaptive transport layer in order to fulfill Quality of Service and Service Level Agreements in volatile service-based environments. This thesis contains a state-of-the-art survey, the description of FMC-QE, as well as extensions of FMC-QE in representative general models and case studies. In the state-of-the-art part of the thesis in chapter 2, an overview of existing Queueing Theory and Time Augmented Petri Net models and of other quantitative modeling and evaluation languages and methodologies is given; other hierarchical quantitative modeling frameworks are also considered. The description of FMC-QE in chapter 3 consists of a summary of the foundations of FMC-QE, basic definitions, the graphical notations, the FMC-QE Calculus, and the modeling of open queueing networks as an introductory example. The extensions of FMC-QE in chapter 4 consist of the integration of the summation method in order to support the handling of closed networks and the modeling of multiclass and semaphore scenarios. Furthermore, FMC-QE is compared to other performance modeling and evaluation approaches. In the case study part in chapter 5, proof-of-concept examples, such as the modeling of a service-based search portal, a service-based SAP NetWeaver application and the Axis2 Web service framework, are provided.
Finally, conclusions are given by a summary of contributions and an outlook on future work in chapter 6. [1] Werner Zorn. FMC-QE - A New Approach in Quantitative Modeling. In Hamid R. Arabnia, editor, Proceedings of the International Conference on Modeling, Simulation and Visualization Methods (MSV 2007) within WorldComp '07, pages 280-287, Las Vegas, NV, USA, June 2007. CSREA Press. ISBN 1-60132-029-9.
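The calculation scheme described above, Little's Law within a hierarchical level (horizontal) and the Forced Traffic Flow Law between levels (vertical), can be sketched for a simple open queueing network. This is a generic illustration assuming M/M/1 stations; the function and parameter names are mine, not part of the FMC-QE tooling:

```python
def open_network_metrics(lam0, visits, service):
    """Steady-state metrics of an open network of M/M/1 stations.

    Forced Traffic Flow Law: station throughput = visit ratio * arrival rate.
    Little's Law: mean number in station = throughput * mean response time.
    """
    metrics = []
    for v, s in zip(visits, service):
        lam = v * lam0          # Forced Traffic Flow Law
        rho = lam * s           # utilization
        if rho >= 1.0:
            raise ValueError("station unstable (utilization >= 1)")
        r = s / (1.0 - rho)     # M/M/1 mean response time
        n = lam * r             # Little's Law
        metrics.append({"throughput": lam, "utilization": rho,
                        "response": r, "queue_len": n})
    return metrics

# Two-station example: each request visits station 1 once, station 2 twice.
m = open_network_metrics(lam0=2.0, visits=[1.0, 2.0], service=[0.2, 0.1])
```

The visit ratios carry the vertical dependency between hierarchical levels, while Little's Law closes the horizontal balance within each level.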
Does it have to be trees? : Data-driven dependency parsing with incomplete and noisy training data
(2011)
We present a novel approach to training data-driven dependency parsers on incomplete annotations. Our parsers are simple modifications of two well-known dependency parsers, the transition-based Malt parser and the graph-based MST parser. While previous work on parsing with incomplete data has typically couched the task in frameworks of unsupervised or semi-supervised machine learning, we essentially treat it as a supervised problem. In particular, we propose what we call agnostic parsers which hide all fragmentation in the training data from their supervised components. We present experimental results with training data that was obtained by means of annotation projection. Annotation projection is a resource-lean technique which allows us to transfer annotations from one language to another within a parallel corpus. However, the output tends to be noisy and incomplete due to cross-lingual non-parallelism and error-prone word alignments. This makes the projected annotations a suitable test bed for our fragment parsers. Our results show that (i) dependency parsers trained on large amounts of projected annotations achieve higher accuracy than the direct projections, and that (ii) our agnostic fragment parsers perform roughly on a par with the original parsers which are trained only on strictly filtered, complete trees. Finally, (iii) when our fragment parsers are trained on artificially fragmented but otherwise gold standard dependencies, the performance loss is moderate even with up to 50% of all edges removed.
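The idea of hiding fragmentation from the supervised components can be illustrated schematically: unannotated tokens, i.e. fragment roots, are attached to an artificial root node so that every training instance looks like a complete tree. This is a generic sketch of one simple instantiation of the idea, not the actual mechanism of the parsers described above; the function name and encoding are mine:

```python
def hide_fragmentation(heads):
    """Given a partial head assignment (head index per token, or None
    for tokens whose head was lost during projection), return a fully
    connected tree in which every fragment root is attached to the
    artificial root node 0. Tokens are 1-indexed; heads[i-1] is the
    head of token i."""
    return [h if h is not None else 0 for h in heads]

# Tokens 1-5; tokens 3 and 5 lost their heads during annotation projection.
full = hide_fragmentation([2, 0, None, 3, None])
```

After this transformation, an off-the-shelf supervised dependency parser can be trained on the data without any change to its learning component.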
Rainfall, snowmelt, and glacial melt throughout the Himalaya control river discharge, which is vital for maintaining agriculture, drinking water and hydropower generation. However, the spatiotemporal contribution of these discharge components to Himalayan rivers is not well understood, mainly because of the scarcity of ground-based observations. Consequently, little is known about the triggers and sources of peak sediment flux events, which account for extensive hydropower reservoir filling and turbine abrasion. We therefore lack basic information on the distribution of water resources and the controls of erosion processes. In this thesis, I employ various methods to assess and quantify general characteristics of, and links between, precipitation, river discharge, and sediment flux in the Sutlej Valley. First, I analyze daily precipitation data (1998-2007) from 80 weather stations in the western Himalaya to decipher the distribution of rain- and snowfall. Rainfall magnitude-frequency analyses indicate that 40% of the summer rainfall budget is attributed to monsoonal rainstorms, which show higher variability in the orogenic interior than in frontal regions. Combined analysis of rainstorms and sediment flux data of a major Sutlej River tributary indicates that monsoonal rainfall exerts a first-order control on erosion processes in the orogenic interior, despite the dominance of snowfall in this region. Second, I examine the contribution of rainfall, snow and glacial melt to river discharge in the Sutlej Valley (~55,000 km²), based on a distributed hydrological model covering the period 2000-2008. To achieve high spatial and daily resolution despite limited ground-based observations, the hydrological model is forced by daily remote sensing data, which I adjusted and calibrated with ground station data.
The calibration shows that the Tropical Rainfall Measuring Mission (TRMM) 3B42 rainfall product systematically overestimates rainfall in semi-arid and arid regions, increasingly so with aridity. The model results indicate that snowmelt-derived discharge (74%) is most important during the pre-monsoon season (April to June), whereas rainfall (56%) and glacial melt (17%) dominate the monsoon season (July-September). Therefore, climate change most likely causes a reduction in river discharge during the pre-monsoon season, which especially affects the orogenic interior. Third, I investigate the controls on suspended sediment flux in different parts of the Sutlej catchments, based on daily gauging data from the past decade. In conjunction with meteorological data, earthquake records, and rock strength measurements, I find that rainstorms are the most frequent trigger of high-discharge events with peaks in suspended sediment concentrations (SSC) that account for the bulk of the suspended sediment flux. The suspended sediment flux increases downstream, mainly due to increases in runoff. Pronounced erosion along the Himalayan Front occurs throughout the monsoon season, whereas efficient erosion of the orogenic interior is confined to single extreme events. The results of this thesis highlight the importance of snow- and glacially derived melt waters in the western Himalaya, where extensive regions receive only limited amounts of monsoonal rainfall. These regions are therefore particularly susceptible to global warming, with major implications for the hydrological cycle. However, the sediment discharge data show that infrequent monsoonal rainstorms that pass the orographic barrier of the Higher Himalaya are still the primary trigger of the highest-impact erosion events, despite being subordinate to snow- and glacially derived discharge.
These findings may help to predict peak sediment flux events and could underpin the strategic development of preventative measures for hydropower infrastructures.
This paper discusses a hitherto undescribed usage of the particle so as a dedicated focus marker in contemporary German. I discuss grammatical and pragmatic characteristics of this focus marker, supporting my account with natural linguistic data and with controlled experimental evidence showing that so has a significant influence on speakers’ understanding of what the focus expression in a sentence is. Against this background, I sketch a possible pragmaticalization path from referential usages of so via hedging to a semantically bleached focus marker, which, unlike particles such as auch ‘also’/‘too’ or nur ‘only’, does not contribute any additional meaning.
There has been a substantial increase in the percentage of publications with co-authors located in departments in different countries in 12 major journals of psychology. The results are evidence of a remarkable internationalization of psychological research, starting in the mid-1970s and increasing in rate at the beginning of the 1990s. This growth occurs against a constant number of articles with authors from the same country; it is not due to a concomitant increase in the number of co-authors per article. Thus, international collaboration in psychology is clearly on the rise.
The East African Plateau provides a spectacular example of geodynamic plateau uplift, active continental rifting, and associated climatic forcing. It is an integral part of the East African Rift System and has an average elevation of approximately 1,000 m. Its location coincides with a negative Bouguer gravity anomaly of semi-circular shape, closely related to a mantle plume that has influenced Cenozoic crustal development since its impingement in Eocene-Oligocene time. The uplift of the East African Plateau, which preceded volcanism and rifting, formed an important orographic barrier and a tectonically controlled environment that is profoundly influenced by climate-driven processes. Its location within the equatorial realm supports recently proposed hypotheses that topographic changes in this region must be considered the dominant forcing factor influencing atmospheric circulation patterns and rainfall distribution. The uplift of this region has therefore often been associated with fundamental climatic and environmental changes in East Africa and adjacent regions. While the far-reaching influence of the plateau uplift is widely accepted, the timing and the magnitude of the uplift are ambiguous and still subject to ongoing discussion. This dilemma stems from the lack of datable, geomorphically meaningful reference horizons that could record surface uplift. In order to quantify the amount of plateau uplift and to find evidence for the existence of significant relief along the East African Plateau prior to rifting, I analyzed and modeled one of the longest terrestrial lava flows: the 300-km-long Yatta phonolite flow in Kenya. This lava flow is 13.5 Ma old and originated in the region that now corresponds to the eastern rift shoulders. The phonolitic flow utilized an old riverbed that once drained the eastern flank of the plateau.
Due to differential erosion, this lava flow now forms a positive relief above the parallel-flowing Athi River, which mimics the course of the paleo-river. My approach is lava-flow modeling, based on an improved composition- and temperature-dependent method to parameterize the flow of an arbitrary lava in a rectangular channel. The essential growth pattern is described by a one-dimensional model, in which Newtonian rheological flow advance is governed by the development of viscosity and/or velocity in the internal parts of the lava-flow front. Comparing assessments of different magma compositions reveals that length-dominated, channelized lava flows are characterized by high effusion rates, rapid emplacement under approximately isothermal conditions, and laminar flow. By integrating the Yatta lava flow dimensions and the covered paleo-topography (slope angle) into the model, I was able to determine the pre-rift topography of the East African Plateau. The modeling results yield a pre-rift slope of at least 0.2°, suggesting that the lava flow must have originated at a minimum elevation of 1,400 m. Hence, high topography in the region of the present-day Kenya Rift must have existed by at least 13.5 Ma. This inferred mid-Miocene uplift coincides with the two-step expansion of grasslands, as well as important radiation and speciation events in tropical Africa. Accordingly, the combination of my results regarding the Yatta lava flow emplacement history, its location, and its morphologic character validates it as a suitable “paleo-tiltmeter”, and it must thus be considered an important topographic and volcanic feature in the topographic evolution of East Africa.
This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in a critical process, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level.
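The two formulations can be contrasted explicitly in schematic notation (mine: q the conserved variables, F^r and F^z the fluxes, S the sources):

```latex
% Standard formulation: expand the operator and move the
% non-differentiated 1/r term to the source,
\partial_t q + \partial_r F^r + \partial_z F^z = S - \frac{F^r}{r} ,
% New formulation: rescale the conserved variables by r and
% keep the operator in flux-conservative form,
\partial_t (r q) + \partial_r (r F^r) + \partial_z (r F^z) = r\,S .
```

Both are analytically equivalent, but the rescaled form avoids the explicit 1/r source term near the axis, which is where the numerical difference arises.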
Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena in the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter.
Also in this case they found type-I critical behaviour, and in addition they performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, is not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup that is as similar as possible to the one considered above. While we confirm that the merged object exhibits type-I critical behaviour, we argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with earlier results, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
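For type-I critical behaviour, the quantity usually extracted from such simulations is the survival time of the metastable near-critical solution; in generic notation (λ the Lyapunov exponent of the single unstable mode, f a periodic function), the standard scaling law and its fine-structure modulation read:

```latex
\tau(P) \simeq -\frac{1}{\lambda}\,\ln\left|P - P_{\star}\right| + \text{const},
\qquad
\tau(P) \simeq -\frac{1}{\lambda}\,\ln\left|P - P_{\star}\right|
  + f\!\left(\ln\left|P - P_{\star}\right|\right),
\quad f(x + \varpi) = f(x).
```

The oscillations mentioned above are the periodic component f superposed on the linear dependence of τ on ln|P − P⋆|.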
In this work, the development of a new molecular building block, based on synthetic peptides derived from decorin, is presented. These peptides represent a promising basis for the design of polymer-based biomaterials that mimic the ECM on a molecular level and exploit specific biological recognition for technical applications. Multiple sequence alignments of the internal repeats of decorin that form the inner and outer surface of the arch-shaped protein were used to derive consensus sequences. These sequences contain conserved sequence motifs that are likely related to structural and functional features of the protein. Peptides representative of the consensus sequences were synthesized by microwave-assisted solid-phase peptide synthesis and purified by RP-HPLC, with purities higher than 95 mol%. After confirming the expected masses by MALDI-TOF-MS, the primary structure of each peptide was investigated by 1H and 2D NMR, from which a full assignment of the chemical shifts was obtained. The conformation of the peptides in solution was characterized by CD spectroscopy, which demonstrated that, in TFE, the peptides from the outer surface of decorin show a high propensity to fold into helical structures, as observed in the original protein. In contrast, the peptides from the inner surface showed no propensity to form stable secondary structure. The binding of the peptides to collagen I was investigated by surface plasmon resonance analyses, in which all but one of the peptides representing the inner surface of decorin showed binding affinity to collagen, with dissociation constants between 2·10⁻⁷ M and 2.3·10⁻⁴ M. On the other hand, the peptides representative of the outer surface of decorin did not show any significant interaction with collagen.
This information was then used to develop experimental demonstrations of the binding capabilities of the peptides from the inner surface of decorin to collagen in more complex situations close to possible applications. For this purpose, the peptide (LRELHLNNN) that showed the highest binding affinity to collagen (2·10⁻⁷ M) was functionalized with an N-terminal triple bond in order to obtain a peptide dimer via copper(I)-catalyzed cycloaddition with 4,4'-diazidostilbene-2,2'-disulfonic acid. Rheological measurements showed that the presence of the peptide dimer enhanced the elastic modulus (G') of a collagen gel from ~ 600 Pa (collagen alone) to ~ 2700 Pa (collagen and peptide dimer). Moreover, it was shown that the mechanical properties of a collagen gel can be tailored by using different molar ratios of peptide dimer with respect to collagen. The same peptide, functionalized with the triple bond, was used to obtain a peptide-dye conjugate by coupling it with N-(5'-azidopentanoyl)-5-aminofluorescein. An aqueous solution (5 vol% methanol) of the peptide-dye conjugate was injected into a collagen and a hyaluronic acid (HA) gel, and fluorescence imaging showed that the diffusion of the peptide was slower in the collagen gel than in the HA gel. The third experimental demonstration used the peptide (LSELRLHNN) that showed the lowest binding affinity (2.3·10⁻⁴ M) to collagen. This peptide was grafted to hyaluronic acid via EDC chemistry, with a degree of functionalization of 7 ± 2 mol% as calculated by 1H-NMR. The grafting was further confirmed by FTIR and TGA measurements, which showed that the onset of decomposition for the HA-g-peptide decreased by 10 °C compared to native HA. Rheological measurements showed that the elastic modulus of a system based on collagen and HA-g-peptide increased by almost two orders of magnitude (G' = 200 Pa) compared to a system based on collagen and HA (G' = 0.9 Pa).
Overall, this study showed that the synthetic peptides, which were identified from decorin, can be applied as potential building blocks for biomimetic materials that function via biological recognition.
The inspiral and merger of two black holes is among the most exciting and extreme events in our universe. As one of the loudest sources of gravitational waves, such mergers provide a unique dynamical probe of strong-field general relativity and a fertile ground for the observation of fundamental physics. While the detection of gravitational waves alone will allow us to observe our universe through an entirely new window, combining the information obtained from gravitational-wave and electromagnetic observations will allow us to gain even greater insight into some of the most exciting astrophysical phenomena. In addition, binary black-hole mergers serve as an intriguing tool to study the geometry of space-time itself. In this dissertation we study the merger process of binary black holes in a variety of conditions. Our results show that asymmetries in the curvature distribution on the common apparent horizon are correlated with the linear momentum acquired by the merger remnant. We propose useful tools for the analysis of black holes in the dynamical and isolated horizon frameworks and shed light on how the final merger of apparent horizons proceeds after a common horizon has already formed. We connect mathematical theorems with data obtained from numerical simulations and provide a first glimpse of the behavior of these surfaces in situations not accessible to analytical tools. We study electromagnetic counterparts of supermassive binary black-hole mergers with fully 3D general-relativistic simulations of binary black holes immersed both in a uniform magnetic field in vacuum and in a tenuous plasma. We find that while a direct detection of merger signatures with current electromagnetic telescopes is unlikely, secondary emission, produced either by an altered accretion rate of the circumbinary disk or by synchrotron radiation from accelerated charges, may be detectable.
We propose a novel approach to measure the electromagnetic radiation in these simulations and find a non-collimated emission that dominates over the collimated one, which appears in the form of dual jets associated with each of the black holes. Finally, we provide an optimized gravitational-wave detection pipeline using phenomenological waveforms for signals from compact binary coalescence and show that by including spin effects in the waveform templates, the detection efficiency is drastically improved and the bias on the recovered source parameters is reduced. On the whole, this dissertation provides evidence that a multi-messenger approach to binary black-hole merger observations offers an exciting prospect for understanding these sources and, ultimately, our universe.
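The detection statistic underlying such template-based pipelines is the standard matched filter; in generic notation (s the detector data, h a waveform template, S_n the one-sided noise power spectral density), the noise-weighted inner product and the signal-to-noise ratio read:

```latex
\langle a, b \rangle
  = 4\,\mathrm{Re}\int_{0}^{\infty}
    \frac{\tilde{a}(f)\,\tilde{b}^{*}(f)}{S_{n}(f)}\,\mathrm{d}f,
\qquad
\rho = \frac{\langle s, h \rangle}{\sqrt{\langle h, h \rangle}} .
```

Including spin effects enlarges the template family h, which is how the detection efficiency and the parameter bias discussed above are affected.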
Biology has made great progress in identifying and measuring the building blocks of life. The availability of high-throughput methods in molecular biology has dramatically accelerated the growth of biological knowledge for various organisms. The advancements in genomic, proteomic and metabolomic technologies allow for constructing complex models of biological systems. An increasing number of biological repositories are available on the web, incorporating thousands of biochemical reactions and genetic regulations. Systems Biology is a recent research trend in the life sciences, which fosters a systemic view on biology. In Systems Biology one is interested in integrating the knowledge from all these different sources into models that capture the interactions of these entities. By studying these models one wants to understand the emergent properties of the whole system, such as robustness. However, both measurements and biological networks are prone to considerable incompleteness, heterogeneity and mutual inconsistency, which makes it highly non-trivial to draw biologically meaningful conclusions in an automated way. Therefore, we promote Answer Set Programming (ASP) as a tool for discrete modeling in Systems Biology. ASP is a declarative problem-solving paradigm, in which a problem is encoded as a logic program such that its answer sets represent solutions to the problem. ASP has intrinsic features to cope with incompleteness, offers a rich modeling language and provides highly efficient solving technology. We present ASP solutions for the analysis of genetic regulatory networks, determining consistency with observed measurements and identifying minimal causes of inconsistency. We extend this approach to compute minimal repairs on model and data that restore consistency. This method allows unobserved data to be predicted even in the case of inconsistency. Further, we present an ASP approach to metabolic network expansion.
This approach exploits the easy characterization of reachability in ASP and its various reasoning modes to explore the biosynthetic capabilities of metabolic reaction networks and to generate hypotheses for extending the network. Finally, we present the BioASP library, a Python library that encapsulates our ASP solutions in the imperative programming paradigm. The library allows for easy integration of ASP solutions into rich software environments, as they exist in Systems Biology.
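The reachability computation at the heart of metabolic network expansion can be illustrated in plain Python — a minimal sketch of the well-known "scope" fixpoint, not the thesis's ASP encoding; the toy reactions are invented for illustration:

```python
def network_expansion(seeds, reactions):
    """Scope of a seed set: repeatedly fire every reaction whose
    substrates are all reachable, collecting the producible metabolites."""
    reachable, fired = set(seeds), set()
    changed = True
    while changed:
        changed = False
        for name, (substrates, products) in reactions.items():
            if name not in fired and set(substrates) <= reachable:
                fired.add(name)
                reachable |= set(products)
                changed = True
    return reachable, fired

# toy network: r2 is blocked unless the cofactor C is also provided
reactions = {"r1": (["A"], ["B"]),
             "r2": (["B", "C"], ["D"]),
             "r3": (["D"], ["E"])}
scope_A = network_expansion({"A"}, reactions)[0]        # only A and B reachable
scope_AC = network_expansion({"A", "C"}, reactions)[0]  # all five metabolites
```

A network-completion query would then ask for a minimal set of additional reactions whose inclusion brings a target metabolite into the scope — the kind of hypothesis generation described above.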
Parsing approaches for several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. Formal properties of these mechanisms are investigated when they are used as language acceptors. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
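As an illustration of the expressive power at stake, {aⁿbⁿcⁿ : n ≥ 1} is the textbook language beyond context-free power that such formalisms can generate; a direct recognizer for it (a hedged sketch for illustration, not the parsing algorithm developed in the work) is nevertheless deterministic and runs in linear time without backtracking:

```python
def is_anbncn(w):
    """Accept exactly the words a^n b^n c^n with n >= 1."""
    n, r = divmod(len(w), 3)
    return n >= 1 and r == 0 and w == "a" * n + "b" * n + "c" * n

accepted = is_anbncn("aaabbbccc")   # member of the language
rejected = is_anbncn("aabbbcc")     # block lengths disagree
```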
Migration and development in Senegal : a system dynamics analysis of the feedback relationships
(2011)
This thesis investigates the reciprocal relationship between migration and development in Senegal. In doing so, it contributes to the debate as to whether migration in developing countries enhances or rather impedes the development process. Even though extensive and controversial discussions of the impact of migration on development can be found in the scientific literature, research has scarcely examined the feedback relationships between migration and development. There is, however, agreement both that migration affects development and that the level of development in a country determines migration behaviour. Thus, neither variable is simply dependent or independent; rather, both are endogenous variables that influence each other and produce behavioural patterns that cannot be investigated using a static and unidirectional approach. On account of this, the thesis studies the feedback mechanisms between migration and development and the behavioural patterns generated by their high interdependence, in order to draw conclusions concerning the impact of changes in migration behaviour on the development process. To explore these research questions, the study applies the computer simulation method 'System Dynamics' and extends the simulation model for national development planning called 'Threshold 21' (T21), which represents development processes endogenously and integrates economic, social and environmental aspects, with a structure that portrays the causes and consequences of migration. The model has been customised to Senegal, an appropriate representative of the theoretically interesting universe of cases. The comparison of model-generated scenarios - in which the intensity of emigration, the loss and gain of education, the remittances or the level of dependence changes - facilitates the analysis. The present study produces two important results.
The first outcome is the development of an integrative framework that represents migration and development in an endogenous way and incorporates aspects of several different theories. This model can be used as a starting point for further discussion and improvement, and it is a highly relevant and useful result given that migration is not integrated into most development planning tools despite its significant impact. The second outcome is the insights gained concerning the feedback relations between migration and development and the impact of changes in migration on development. To give two examples: migration was found to impact development positively, as indicated by the HDI, but the dominant behaviour of migration and development is a counteracting one. That means that an increase in emigration leads to an improvement in development, while this in turn causes a decline in emigration, counterbalancing the initial increase. Another insight concerns the discovery that migration causes a decline in education in the short term, but leads to an increase in the long term, after approximately 25 years - a typical worse-before-better behaviour. From these and further observations, important policy implications can be derived for the sending and receiving countries. Hence, by overcoming the unidirectional perspective, this study contributes to an improved understanding of the highly complex relationship between migration and development and their feedback relations.
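The counteracting (balancing) feedback loop described above can be caricatured with two coupled stocks integrated by the Euler method; all coefficients below are invented for illustration and bear no relation to the calibrated T21 model:

```python
def simulate(steps=400, dt=0.1):
    """Toy balancing loop: emigration raises development (remittances),
    while higher development lowers the pressure to emigrate."""
    dev, emig = 0.5, 0.5                    # normalized stocks
    history = []
    for _ in range(steps):
        d_dev = 0.3 * emig - 0.1 * dev      # remittance-driven gains minus depreciation
        d_emig = 0.4 * (1.0 - dev) - 0.2 * emig
        dev += dt * d_dev
        emig += dt * d_emig
        history.append((dev, emig))
    return history

final_dev, final_emig = simulate()[-1]
```

With these coefficients the run spirals into the equilibrium (dev, emig) = (6/7, 2/7): a rise in emigration improves development, which in turn damps emigration — the counterbalancing pattern described in the text.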
During reading, oculomotor processes guide the eyes over the text. The visual information recorded is accessed, evaluated and processed. Only by retrieving the meaning of a word from long-term memory, and by connecting and storing the information about each individual word, is it possible to access the semantic meaning of a sentence. Memory, and in particular working memory, therefore plays a pivotal role in the basic processes of reading. The present dissertation investigates to what extent different demands on memory and memory capacity affect eye movement behavior during reading. The frequently used paradigm of the reading span task, in which participants read and evaluate individual sentences, was used for the experimental investigation of the research questions. The results indicate that working memory processes have a direct effect on various eye movement measures. For example, a high working memory load reduced the perceptual span during reading, and the lower the individual working memory capacity of the reader, the stronger the influence of the working memory load on sentence processing.
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous, altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Particular attention was paid to the analysis of periods with a high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both the years 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months, subsequent to the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapor profiles within the lower troposphere. Combining KARL data with data from other instruments on site, namely radiosondes, a sun photometer, a Micro Pulse LIDAR, and a tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere.
The two spring periods March and April 2007 and 2009 were first analyzed on the basis of meteorological parameters, such as local temperature and relative humidity profiles as well as large-scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origin: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and an enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed, all of which revealed rather large indices of refraction of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but also on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data from other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
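The retrieval behind such backscatter and extinction profiles rests on the elastic lidar equation; schematically (generic notation: C an instrument constant, β the backscatter and α the extinction coefficient):

```latex
P(z) = \frac{C}{z^{2}}\,\beta(z)\,
       \exp\!\left(-2\int_{0}^{z}\alpha(z')\,\mathrm{d}z'\right)
```

A single elastic channel leaves β and α entangled in one equation; the additional inelastic (Raman) channels of a Raman lidar such as KARL supply an independent equation, which is what allows extinction and backscatter to be retrieved separately.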
CSOM/PL is a software product line (SPL) derived from applying multi-dimensional separation of concerns (MDSOC) techniques to the domain of high-level language virtual machine (VM) implementations. For CSOM/PL, we modularised CSOM, a Smalltalk VM implemented in C, using VMADL (virtual machine architecture description language). Several features of the original CSOM were encapsulated in VMADL modules and composed in various combinations. In an evaluation of our approach, we show that applying MDSOC and SPL principles to a domain as complex as that of VMs is not only feasible but beneficial, as it improves understandability, maintainability, and configurability of VM implementations without harming performance.
In the present work, synchronization phenomena in complex dynamical systems exhibiting multiple time scales have been analyzed. Multiple time scales can be active in different manners. Three different systems have been analyzed with different methods from data analysis. The first system studied is a large heterogeneous network of bursting neurons, that is, a system with two predominant time scales: the fast firing of action potentials (spikes) and the bursts of repetitive spikes followed by a quiescent phase. This system has been integrated numerically and analyzed with methods based on recurrence in phase space. An interesting result is the different transitions to synchrony found in the two distinct time scales. Moreover, an anomalous synchronization effect can be observed in the fast time scale, i.e. there is a range of the coupling strength where desynchronization occurs. The second system, analyzed numerically as well as experimentally, is a pair of coupled CO₂ lasers in a chaotic bursting regime. This system is interesting due to its similarity with epidemic models. We explain the bursts by different time scales generated from unstable periodic orbits embedded in the chaotic attractor and perform a synchronization analysis of these different orbits utilizing the continuous wavelet transform. We find a diverse route to synchrony of these different observed time scales. The last system studied is a small network motif of limit-cycle oscillators. Specifically, we have studied a hub motif, which serves as an elementary building block for scale-free networks, a type of network found in many real-world applications. These hubs are of special importance for communication and information transfer in complex networks. Here, a detailed study of the mechanism of synchronization in oscillatory networks with a broad frequency distribution has been carried out. In particular, we find a remote synchronization of nodes in the network which are not directly coupled.
We also explain the responsible mechanism together with its limitations and constraints. Furthermore, we derive an analytic expression for it and show that information transmission in pure phase oscillators, such as those of the Kuramoto type, is limited. In addition to the numerical and analytic analysis, an experiment consisting of electrical circuits has been designed. The obtained results confirm the former findings.
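The phase-oscillator setting referred to here is the Kuramoto model; below is a minimal sketch of a hub (star) motif with identical natural frequencies, which relaxes to full phase synchrony. All parameters are illustrative only — the remote-synchronization regime discussed in the text involves frequency detuning between hub and leaves, which this baseline sketch does not reproduce:

```python
import numpy as np

def kuramoto(adj, omega, K, theta0, dt=0.01, steps=20000):
    """Euler integration of d(theta_i)/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)."""
    theta = theta0.copy()
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]       # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + K * (adj * np.sin(diff)).sum(axis=1))
    return theta

n = 5                                    # node 0 is the hub, nodes 1..4 the leaves
adj = np.zeros((n, n))
adj[0, 1:] = adj[1:, 0] = 1.0            # star topology
rng = np.random.default_rng(0)
theta = kuramoto(adj, np.zeros(n), K=1.0, theta0=rng.uniform(0, 2*np.pi, n))
order = abs(np.exp(1j * theta).mean())   # Kuramoto order parameter, -> 1 at synchrony
```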
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of a true partnership. At the core of the conflict is a struggle for resource rents between energy-producing, energy-consuming and transit countries. Supposedly secondary aspects, however, are also of great importance. They comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies through cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In this respect, it opts for a different globalisation approach, refusing the role of a mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue his own goals, which might differ from one's own interests. The question is: how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be a preferable solution.
Spatial and temporal temperature and moisture patterns across the Tibetan Plateau are very complex. The onset and magnitude of the Holocene climate optimum in the Asian monsoon realm, in particular, is a subject of considerable debate, as this time period is often used as an analogue for recent global warming. In the light of contradictory inferences regarding past climate and environmental change on the Tibetan Plateau, I have attempted to explain mismatches in the timing and magnitude of change. To this end, I analysed the temporal variation of fossil pollen and diatom spectra and the geochemical record from palaeo-ecological records covering different time scales (late Quaternary and the last 200 years) from two core regions in the NE and SE Tibetan Plateau. For interpretation purposes I combined my data with other available palaeo-ecological data to set up corresponding aquatic and terrestrial proxy data sets of two lake pairs and two sets of sites. I focused on the direct comparison of proxies representing the lacustrine response to climate signals (e.g., diatoms, ostracods, the geochemical record) and proxies representing changes in the terrestrial environment (i.e., terrestrial pollen), in order to assess whether a lake and its catchment respond at similar times and with similar magnitudes to environmental changes. Therefore, I introduced the established numerical technique of Procrustes rotation as a new approach in palaeoecology to quantitatively compare the raw data of any two sedimentary records of interest and assess their degree of concordance. Focusing on the late Quaternary, sediment cores from two lakes (Kuhai Lake 35.3°N, 99.2°E, 4150 m asl; and Koucha Lake 34.0°N, 97.2°E, 4540 m asl) on the semi-arid northeastern Tibetan Plateau were analysed to identify post-glacial vegetation and environmental changes, and to investigate the responses of lake ecosystems to such changes.
Based on the pollen record, five major vegetation and climate changes could be identified: (1) A shift from alpine desert to alpine steppe indicates a change from cold, dry conditions to warmer and more moist conditions at 14.8 cal. ka BP, (2) alpine steppe with tundra elements points to conditions of higher effective moisture and a stepwise warming climate at 13.6 cal. ka BP, (3) the appearance of high-alpine meadow vegetation indicates a further change towards increased moisture, but with colder temperatures, at 7.0 cal. ka BP, (4) the reoccurrence of alpine steppe with desert elements suggests a return to a significantly colder and drier phase at 6.3 cal. ka BP, and (5) the establishment of alpine steppe-meadow vegetation indicates a change back to relatively moist conditions at 2.2 cal. ka BP. To place the reconstructed climate inferences from the NE Tibetan Plateau into the context of Holocene moisture evolution across the Tibetan Plateau, I applied a five-scale moisture index and average-link clustering to all available continuous pollen and non-pollen palaeoclimate records from the Tibetan Plateau, in an attempt to detect coherent regional and temporal patterns of moisture evolution on the Plateau. However, no common temporal or spatial pattern of moisture evolution during the Holocene could be detected, which can be attributed to the complex responses of different proxies to environmental changes in an already very heterogeneous mountain landscape, where minor differences in elevation can result in marked variations in microenvironments. Focusing on the past 200 years, I analysed the sedimentary records (LC6 Lake 29.5°N, 94.3°E, 4132 m asl; and Wuxu Lake 29.9°N, 101.1°E, 3705 m asl) from the southeastern Tibetan Plateau. I found that despite presumed significant temperature increases over that period, pollen and diatom records from the SE Tibetan Plateau reveal only very subtle changes throughout their profiles.
The compositional species turnover over the last 200 years appears relatively low in comparison to the species reorganisations during the Holocene. The results indicate that climatically induced ecological thresholds have not yet been crossed, but that human activity has an increasing influence, particularly on the terrestrial ecosystem. Forest clearances and reforestation have not caused forest decline in our study area, but rather a conversion of natural forests into semi-natural secondary forests. The results of the numerical proxy comparison of the two sets of Tibetan lake pairs indicate that the use of different proxies, and work with palaeo-ecological records from different lake types, can yield divergent accounts of inferred change. Irrespective of the timescale (Holocene or last 200 years) or region (SE or NE Tibetan Plateau) analysed, the agreement in terms of the direction, timing, and magnitude of change between the corresponding terrestrial data sets is generally better than the match between the corresponding lacustrine data sets, suggesting that lacustrine proxies may be partly influenced by in-lake or local catchment processes, whereas the terrestrial proxy reflects a more regional climatic signal. The current disagreement about coherent temporal and spatial climate patterns on the Tibetan Plateau can partly be ascribed to the complexity of proxy responses and lake systems on the Tibetan Plateau. Therefore, a multi-proxy, multi-site approach is important in order to gain a reliable climate interpretation for the complex mountain landscape of the Tibetan Plateau.
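The Procrustes comparison used above can be sketched with NumPy (ordinary Procrustes analysis; this mirrors the standardization and disparity computed by e.g. scipy.spatial.procrustes, with each proxy record treated as a samples-by-variables matrix; the data below are synthetic):

```python
import numpy as np

def procrustes_disparity(X, Y):
    """Translate, scale, and rotate Y onto X; return the residual
    sum of squares (0 = perfect concordance, larger = less agreement)."""
    X0 = X - X.mean(axis=0)
    Y0 = Y - Y.mean(axis=0)
    X0 = X0 / np.linalg.norm(X0)           # remove location and scale
    Y0 = Y0 / np.linalg.norm(Y0)
    s = np.linalg.svd(X0.T @ Y0, compute_uv=False)
    return 1.0 - s.sum() ** 2              # optimal-rotation residual

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
c, sn = np.cos(0.7), np.sin(0.7)
Y = 2.0 * X @ np.array([[c, -sn], [sn, c]]) + 3.0   # rotated, scaled, shifted copy
d_same = procrustes_disparity(X, Y)                 # near zero: full concordance
```

Two sedimentary records that tell the same story up to location, scale, and rotation thus yield a disparity near zero, while divergent records yield larger values.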
In the present thesis, the self-assembly of multi thermoresponsive block copolymers in dilute aqueous solution was investigated by a combination of turbidimetry, dynamic light scattering, TEM measurements, NMR and fluorescence spectroscopy. The successive conversion of such block copolymers from a hydrophilic into a hydrophobic state includes intermediate amphiphilic states with a variable hydrophilic-to-lipophilic balance. As a result, the self-organization does not follow an all-or-none principle; instead, a multistep aggregation in dilute solution was observed. The synthesis of double thermoresponsive diblock copolymers as well as triple thermoresponsive triblock copolymers was realized using twofold TMS-labeled RAFT agents, which provide direct information about the average molar mass as well as the residual end-group functionality from a routine proton NMR spectrum. First, a set of double thermosensitive diblock copolymers, poly(N-n-propylacrylamide)-b-poly(N-ethylacrylamide), was synthesized, whose members differed only in the relative size of the two blocks. Depending on the relative block lengths, different aggregation pathways were found. Furthermore, the complementary TMS-labeled end groups served as NMR probes for the self-assembly of these diblock copolymers in dilute solution. Reversible, temperature-sensitive peak splitting of the TMS signals in NMR spectroscopy was indicative of the formation of mixed star-/flower-like micelles in some cases. Moreover, triple thermoresponsive triblock copolymers of poly(N-n-propylacrylamide) (A), poly(methoxydiethylene glycol acrylate) (B) and poly(N-ethylacrylamide) (C) were obtained from sequential RAFT polymerization in all possible block sequences (ABC, BAC, ACB). Their self-organization behavior in dilute aqueous solution was found to be rather complex and dependent on the positioning of the different blocks within the terpolymers.
The localization of the low-LCST block (A) in particular had a large influence on the aggregation behavior. Above the first cloud point, aggregates were only observed when the A block was located at one terminus. When it was placed in the middle, unimolecular micelles were observed, which aggregated only above the second phase transition temperature of the B block. The carrier abilities of such triple thermosensitive triblock copolymers, tested by fluorescence spectroscopy using the solvatochromic dye Nile Red, suggested that the hydrophobic probe is less efficiently incorporated by the polymer with the BAC sequence than by the ABC or ACB polymers above the first phase transition temperature. In addition, owing to the increasing loss of end-group functionality during the subsequent polymerization steps, a novel concept for the one-step synthesis of multi-thermoresponsive block copolymers was developed. This allowed the synthesis of double thermoresponsive di- and triblock copolymers in a single polymerization step. The copolymerization of different N-substituted maleimides with a thermosensitive styrene derivative (4-vinylbenzyl methoxytetrakis(oxyethylene) ether) led to alternating copolymers with variable LCST. Consequently, an excess of this styrene-based monomer allowed the synthesis of double thermoresponsive tapered block copolymers in a single polymerization step.
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, these molecules act on a length scale for which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description of the energy conversion by the molecular motor myosin V, drawing on many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single-molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process to describe the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and of treating the mechanical and chemical processes on an equal footing. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles.
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
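Dwell-time distributions of the kind mentioned above can be illustrated with a toy kinetic scheme. The sketch below is not the thesis's actual myosin V network: it assumes a hypothetical motor whose step requires two sequential exponential transitions (e.g. ATP binding followed by the mechanical step), so dwell times follow a non-exponential distribution whose shape encodes the underlying kinetics.

```python
import random

def sample_dwell_times(k1=10.0, k2=5.0, n=10000, seed=42):
    """Dwell times of a toy motor whose step requires two sequential
    transitions with rates k1 and k2 (both in 1/s).

    Each transition time is exponentially distributed, so the dwell
    time is a sum of two exponentials with mean 1/k1 + 1/k2.
    """
    rng = random.Random(seed)
    return [rng.expovariate(k1) + rng.expovariate(k2) for _ in range(n)]

dwells = sample_dwell_times()
mean_dwell = sum(dwells) / len(dwells)
print(f"mean dwell: {mean_dwell:.3f} s (theory: {1/10.0 + 1/5.0:.3f} s)")
```

A single-exponential dwell distribution would indicate one rate-limiting transition; deviations from it are what make dwell-time distributions informative about the coordination of kinetic steps.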
Space is understood best through movement, and complex spaces require not only movement but navigation. The theorization of navigable space requires a conceptual representation of space which is adaptable to the great malleability of video game spaces, a malleability which allows for designs that combine spaces of differing dimensionality and even involve non-Euclidean configurations with contingent connectivity. This essay attempts to describe the structural elements of video game space and to define them in such a way as to make them applicable to all video game spaces, including potential ones still undiscovered, and to provide analytical tools for their comparison and examination. Along with the consideration of space, there will be a brief discussion of navigational logic, which arises from detectable regularities in a spatial structure that allow players to understand and form expectations regarding a game’s spaces.
This paper addresses a theoretical reconfiguration of experience, a repositioning of the techno-social within the domains of mobility, games and play, and embodiment. The ideas aim to counter the notion that our experience with videogames (and digital media more generally) is largely “virtual” and disembodied – or at most exclusively audiovisual. Notions of the virtual and disembodied support an often-tacit belief that technologically mediated experiences count for nothing if not perceived and valued as human. It is here where play in particular can be put to work, be made to highlight and clarify, for it is in play that we find this value of humanity most wholly embodied. Further, it is in considering the design of the metagame that questions regarding the play experience can be most powerfully engaged. While most of any given game’s metagame emerges from play communities and their larger social worlds (putting it out of reach of game design proper), mobile platforms have the potential to enable a stitching together of these experiences: experiences held across time, space, communities, and bodies. This coming together thus represents a convergence not only of media, participants, contexts, and technologies, but of human experience itself. This coming together is hardly neat, nor fully realized. It is, if nothing else, multifaceted and worthy of further study. It is a convergence in which the dynamics of screen play are reengaged.
This co-authored paper is based on research that originated in 2003 when our team started a series of extensive field studies into the character of gameplay experiences. Originally within the Children as the Actors of Game Cultures research project, our aim was to better understand why young people in particular enjoy playing games, while also asking their parents how they perceive gaming as playing partners or as close observers. Gradually our in-depth interviews started to reveal a complex picture of more general relevance, where personal experiences, social contexts and cultural practices all came together to frame gameplay within something we called game cultures. Culture was the keyword, since we were not interested in studying games and play experiences in isolation, but rather as part of the rich meaning-making practices of lived reality.
We study a new approach to determine the asymptotic behaviour of quantum many-particle systems near coalescence points of particles which interact via singular Coulomb potentials. This problem is of fundamental interest in electronic structure theory in order to establish accurate and efficient models for numerical simulations. Within our approach, coalescence points of particles are treated as embedded geometric singularities in the configuration space of electrons. Based on a general singular pseudo-differential calculus, we provide a recursive scheme for the calculation of the parametrix and corresponding Green operator of a nonrelativistic Hamiltonian. In our singular calculus, the Green operator encodes all the asymptotic information of the eigenfunctions. Explicit calculations and an asymptotic representation for the Green operator of the hydrogen atom and isoelectronic ions are presented.
Verification tests are increasingly used to confirm the attainment of maximal oxygen uptake (VO2max). However, the timing and methods of assessment vary from one research group to another. The objectives of this study were to determine whether a verification test can be administered after a graded exercise test or whether it is preferable to do so on a separate day, and whether VO2max can nevertheless be determined during the first session in subjects who do not meet the verification criterion. Forty subjects (age, 24 ± 4 years; VO2max, 50 ± 7 mL·min⁻¹·kg⁻¹) performed a graded exercise test on a treadmill and, 10 min later, a verification test (VerifDay1) at 110% of maximal velocity (v_max). The verification criterion was a VerifDay1 peak VO2 within 5.5% of the value obtained in the graded exercise test. Subjects who did not meet the verification criterion performed another verification test 10 min later at 115% (VerifDay1') to confirm the VerifDay1 peak VO2 as VO2max. All other subjects repeated VerifDay1 on a different day (VerifDay2). Six of the forty subjects did not meet the verification criterion; in four of them, attainment of VO2max was confirmed in VerifDay1'. Peak VO2 in VerifDay1 was equivalent to that in VerifDay2 (3722 ± 991 mL·min⁻¹ vs. 3752 ± 995 mL·min⁻¹, p = 0.56), but time to exhaustion was significantly longer in VerifDay2 (2:06 ± 0:22 min:s vs. 2:42 ± 0:38 min:s, p < 0.001, n = 34). The peak VO2 obtained in the verification test does not appear to be affected by a preceding maximal graded exercise test. The graded exercise test and the verification test can therefore be performed in the same testing session. In almost all subjects who do not meet the verification criterion, VO2max can be determined by means of an additional, more intense verification test.
Background: Athletes may differ in their resting metabolic rate (RMR) from the general population. However, to estimate the RMR in athletes, prediction equations that have not been validated in athletes are often used. The purpose of this study was therefore to verify the applicability of commonly used RMR predictions for use in athletes. Methods: The RMR was measured by indirect calorimetry in 17 highly trained rowers and canoeists of the German national teams (BMI 24 ± 2 kg/m², fat-free mass 69 ± 15 kg). In addition, the RMR was predicted using Cunningham (CUN) and Harris-Benedict (HB) equations. A two-way repeated measures ANOVA was calculated to test for differences between predicted and measured RMR (α = 0.05). The root mean square percentage error (RMSPE) was calculated and the Bland-Altman procedure was used to quantify the bias for each prediction. Results: Prediction equations significantly underestimated the RMR in males (p < 0.001). The RMSPE was calculated to be 18.4% (CUN) and 20.9% (HB) in the entire group. The bias was 133 kcal/24 h for CUN and 202 kcal/24 h for HB. Conclusions: Predictions significantly underestimate the RMR in male heavyweight endurance athletes but not in females. In athletes with a high fat-free mass, prediction equations might therefore not be applicable to estimate energy requirements. Instead, measurement of the resting energy expenditure or specific prediction equations might be needed for the individual heavyweight athlete.
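The two prediction equations compared here are standard published formulas and can be written out directly. A minimal sketch using the commonly cited coefficients for the Cunningham equation and the Harris-Benedict equation for men; the example inputs are hypothetical and not data from this study.

```python
def rmr_cunningham(ffm_kg: float) -> float:
    """Cunningham equation: RMR in kcal/24 h from fat-free mass (kg)."""
    return 500 + 22 * ffm_kg

def rmr_harris_benedict_male(weight_kg: float, height_cm: float, age_y: float) -> float:
    """Harris-Benedict equation for men, RMR in kcal/24 h."""
    return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_y

# hypothetical heavyweight rower (not a study participant)
ffm, weight, height, age = 85.0, 95.0, 195.0, 25.0
print(round(rmr_cunningham(ffm)))                              # 2370 kcal/24 h
print(round(rmr_harris_benedict_male(weight, height, age)))    # 2179 kcal/24 h
```

In the study's direction of bias, measured RMR in the male athletes exceeded both predictions, which is why such equations may underestimate the requirements of athletes with a high fat-free mass.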
Neuromuscular control in functional situations and possible impairments due to Achilles tendinopathy are not well understood.
Thirty controls (CO) and 30 runners with Achilles tendinopathy (AT) were tested on a treadmill at 3.33 m·s⁻¹ (12 km·h⁻¹). Neuromuscular activity of the lower leg (tibialis anterior, peroneal, and gastrocnemius muscle) was measured by surface electromyography. Mean amplitude values (MAV) for the gait cycle phases preactivation, weight acceptance and push-off were calculated and normalised to the mean activity of the entire gait cycle.
MAVs of the tibialis anterior did not differ between CO and AT in any gait cycle phase. The activation of the peroneal muscle was lower in AT in weight acceptance (p = 0.006), whereas no difference between CO and AT was found in preactivation (p = 0.71) and push-off (p = 0.83). Also, MAVs of the gastrocnemius muscle did not differ between AT and CO in preactivity (p = 0.71) but were reduced in AT during weight acceptance (p = 0.001) and push-off (p = 0.04).
Achilles tendinopathy does not seem to alter pre-programmed neural control but might induce mechanical deficits of the lower extremity during weight bearing (joint stability). This should be addressed in the therapy process of AT.
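The MAV normalisation described in the methods above (phase mean divided by the mean over the whole gait cycle) can be sketched as follows; the rectified-EMG samples and phase boundaries below are hypothetical placeholders, not study data.

```python
def normalised_mav(emg, phases):
    """Mean amplitude per gait-cycle phase, normalised to the
    mean activity of the entire gait cycle.

    emg    : rectified EMG samples for one gait cycle
    phases : dict mapping phase name -> (start, end) sample indices
    """
    cycle_mean = sum(emg) / len(emg)
    out = {}
    for name, (start, end) in phases.items():
        segment = emg[start:end]
        out[name] = (sum(segment) / len(segment)) / cycle_mean
    return out

# toy one-cycle signal: 100 samples, hypothetical phase boundaries
emg = [1.0] * 30 + [3.0] * 40 + [2.0] * 30
phases = {"preactivation": (0, 30), "weight_acceptance": (30, 70), "push_off": (70, 100)}
print(normalised_mav(emg, phases))
```

Normalising to the cycle mean makes amplitudes comparable across subjects and electrodes, which is what allows the between-group phase comparisons reported above.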
How much is too much? - a case report of nutritional supplement use of a high-performance athlete
(2011)
Although dietary nutrient intake is often adequate, nutritional supplement use is common among elite athletes. However, high-dose supplements or the use of multiple supplements may exceed the recommended daily allowance (RDA) of particular nutrients or even result in a daily intake above the tolerable upper limit (UL). This case report presents nutritional intake data and supplement use of a highly trained male swimmer competing at international level. Habitual energy and micronutrient intake were analysed from 3 d dietary reports. Supplement use and dosage were assessed, and the total nutrient supply was calculated. Micronutrient intake was evaluated against the RDA and UL presented by the European Scientific Committee on Food, and maximum permitted levels in supplements (MPL) are given. The athlete's diet provided adequate micronutrient content well above the RDA except for vitamin D. Simultaneous use of ten different supplements was reported, resulting in intakes above the tolerable UL for folate, vitamin E and Zn. Additionally, the daily supplement dosage was considerably above the MPL for nine micronutrients consumed as artificial products. Risks and possible side effects of the athlete exceeding the UL are discussed. Athletes with high energy intake may be at risk of exceeding the UL of particular nutrients if multiple supplements are added. Therefore, dietary counselling of athletes should include assessment of the habitual diet and of nutritional supplement intake. Educating athletes to balance their diets instead of taking supplements might be prudent to prevent the health risks that may accompany long-term excess nutrient intake.
Background: The elderly need strength training more and more as they grow older to stay mobile for their everyday activities. The goal of training is to reduce the loss of muscle mass and the resulting loss of motor function. The dose-response relationship of training intensity to training effect has not yet been fully elucidated.
Methods: PubMed was selectively searched for articles that appeared in the past 5 years about the effects and dose-response relationship of strength training in the elderly.
Results: Strength training in the elderly (> 60 years) increases muscle strength by increasing muscle mass, improving the recruitment of motor units, and increasing their firing rate. Muscle mass can be increased through training at an intensity corresponding to 60% to 85% of the individual maximum voluntary strength. Improving the rate of force development requires training at a higher intensity (above 85%), in the elderly just as in younger persons. It is now recommended that healthy older people train 3 or 4 times weekly for the best results; persons with poor performance at the outset can achieve improvement even with less frequent training. Side effects are rare.
Conclusion: Progressive strength training in the elderly is efficient, even with higher intensities, to reduce sarcopenia, and to retain motor function.
Adequate energy intake in adolescent athletes is considered important. Total energy expenditure (TEE) can be calculated from resting energy expenditure (REE) and physical activity level (PAL). However, validated PAL recommendations are available for adult athletes only. The purpose of this study was to compile physical activity data on adolescent athletes and to establish PAL recommendations for this population. In 64 competitive athletes (15.3 ± 1.5 yr, 20.5 ± 2.0 kg/m²) and 14 controls (15.1 ± 1.1 yr, 21 ± 2.1 kg/m²), TEE was calculated using 7-day activity protocols validated against doubly labeled water. REE was estimated by the Schofield-HW equation, and PAL was calculated as TEE:REE. The observed PAL in adolescent athletes (1.90 ± 0.35) did not differ from that of controls (1.84 ± 0.32, p = .582) and was lower than the WHO recommendation for adult athletes. In conclusion, the applicability of PAL values recommended for adult athletes to estimate energy requirements in adolescent athletes must be questioned. Instead, a PAL range of 1.75-2.05 is suggested.
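The PAL calculation underlying the study is a simple ratio of the two expenditure measures. A minimal sketch with hypothetical values, not study data; the 1.75-2.05 range is the one suggested in the abstract.

```python
def physical_activity_level(tee_kcal: float, ree_kcal: float) -> float:
    """PAL = total energy expenditure / resting energy expenditure."""
    return tee_kcal / ree_kcal

def within_suggested_range(pal: float, low: float = 1.75, high: float = 2.05) -> bool:
    """Check a PAL value against the range suggested for adolescent athletes."""
    return low <= pal <= high

# hypothetical adolescent athlete (TEE and REE in kcal/day)
pal = physical_activity_level(tee_kcal=3400, ree_kcal=1790)
print(round(pal, 2), within_suggested_range(pal))
```

Conversely, multiplying an estimated REE by a PAL from this range yields the TEE estimate used for dietary planning.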
Antiplatelet therapy in the era of percutaneous coronary intervention with drug-eluting balloons
(2011)
The high rate of restenosis associated with percutaneous coronary intervention (PCI) procedures can be reduced by the implantation of metallic stents into the stenotic vessels. The insight that neointimal formation can result in restenosis after stent implantation led to the development of drug-eluting stents (DES), which require long-lasting antiplatelet therapy to avoid thrombotic complications. In recent years, the drug-eluting balloon (DEB) technology has emerged as an alternative option for the treatment of coronary and peripheral arteries. Clinical studies demonstrated the safety and effectiveness of DEB in various clinical scenarios and support the use of paclitaxel-eluting balloons for the treatment of in-stent restenosis, of small coronary arteries, and of bifurcation lesions. The protocols of DEB studies suggest that four weeks of dual antiplatelet therapy with aspirin and clopidogrel after DEB is safe and effective.
The armed conflict in Afghanistan since 2001 has raised manifold questions pertaining to the humanitarian rules relative to the conduct of hostilities. In Afghanistan, as is often the case in so-called asymmetric conflicts, the geographical and temporal boundaries of the battlefield, and the distinction between civilians and fighters, are increasingly blurred. As a result, the risks for both civilians and soldiers operating in Afghanistan are high. The objective of this article is to assess whether - and if so how much - the armed conflict in Afghanistan has affected the application and interpretation of the principles of distinction, proportionality, and precaution - principles that form the core of legal rules pertaining to the conduct of hostilities.
This review addresses the functional organization of the mammalian cochlea from a comparative and evolutionary perspective. A comparison of the monotreme cochlea with that of marsupial and placental mammals highlights important evolutionary steps towards a hearing organ dedicated to processing higher frequencies and a larger frequency range than found in non-mammalian vertebrates. Among placental mammals, there are numerous cochlear specializations, superimposed on a common basic design, which relate to hearing range in adaptation to specific habitats. These are illustrated by examples of specialist ears which evolved excellent high-frequency hearing and echolocation (bats and dolphins) and by the example of subterranean rodents with ears devoted to processing low frequencies. Furthermore, structure-function correlations important for tonotopic cochlear organization and predictions of hearing capabilities are discussed.
Target-distance computation by cortical neurons sensitive to echo delay is an essential characteristic of the auditory system of insectivorous bats. To assess whether functional requirements such as the detection of small insects versus larger stationary surfaces of plants are reflected in cortical properties, we compare delay-tuned neurons in a frugivorous (C. perspicillata, CP) and an insectivorous (P. parnellii, PP) bat species that belong to related families within the superfamily Noctilionoidea. The bandwidth and shape of delay-tuning curves and the range of characteristic delays are similar in both species and hence are not related to the different echolocation strategies. Most units respond at 2-6 ms echo delay with most sensitive thresholds of 20-30 dB SPL. In CP, units tuned to delays > 12 ms are slightly more abundant and more sensitive than in PP. All delay-tuned neurons in CP reliably respond to single pure-tone stimuli, whereas such responses are observed in only 49% of delay-tuned units in PP. The cortical representation of echo delay (chronotopy) covers a larger area in CP but is less precise than that described in PP. Since chronotopy is absent in certain other insectivorous bat species, it remains open whether these differences in topography are related to echolocation behaviour.
Lissencephaly is a severe brain developmental disease in human infants, which is usually caused by mutations in either of two genes, LIS1 and DCX. These genes encode proteins interacting with both the microtubule and the actin systems. Here, we review the implications of data on Dictyostelium LIS1 for the elucidation of LIS1 function in higher cells and emphasize the role of LIS1 and nuclear envelope proteins in nuclear positioning, which is also important for coordinated cell migration during neocortical development. Furthermore, for the first time we characterize Dictyostelium DCX, the only bona fide orthologue of human DCX outside the animal kingdom. We show that DCX functionally interacts with LIS1 and that both proteins have a cytoskeleton-independent function in chemotactic signaling during development. Dictyostelium LIS1 is also required for proper attachment of the centrosome to the nucleus and, thus, nuclear positioning, where the association of these two organelles has turned out to be crucial. It involves not only dynein and dynein-associated proteins such as LIS1 but also SUN proteins of the nuclear envelope. Analyses of Dictyostelium SUN1 mutants have underscored the importance of these proteins for the linkage of centrosomes and nuclei and for the maintenance of chromatin integrity. Taken together, we show that Dictyostelium amoebae, which provide a well-established model to study the basic aspects of chemotaxis, cell migration and development, are well suited for the investigation of the molecular and cell biological basis of developmental diseases such as lissencephaly.
We have localized TACC to the microtubule-nucleating centrosomal corona and to microtubule plus ends. Using RNAi, we showed that Dictyostelium TACC promotes microtubule growth during interphase and mitosis. For the first time, we show in vivo that both TACC and XMAP215 family proteins can be differentially localized to microtubule plus ends during interphase and mitosis, and that TACC is mainly required for the recruitment of an XMAP215-family protein to interphase microtubule plus ends but not for its recruitment to centrosomes and kinetochores. Moreover, we now have a marker to study the dynamics and behavior of microtubule plus ends in living Dictyostelium cells. Combining live-cell imaging of microtubule plus ends with fluorescence recovery after photobleaching (FRAP) experiments on GFP-alpha-tubulin cells, we show that Dictyostelium microtubules are dynamic only in the cell periphery, while they remain stable at the centrosome, which also appears to harbor a dynamic pool of tubulin dimers.
Functional analyses of microtubule and centrosome-associated proteins in Dictyostelium discoideum
(2011)
Understanding the role of microtubule-associated proteins is the key to understanding the complex mechanisms regulating microtubule dynamics. This study employs the model system Dictyostelium discoideum to elucidate the role of the microtubule-associated protein TACC (transforming acidic coiled-coil) in promoting microtubule growth and stability. Dictyostelium TACC was localized at the centrosome throughout the entire cell cycle. The protein was also detected at microtubule plus ends, though, unexpectedly, only during interphase and not during mitosis. The same cell cycle-dependent localization pattern was observed for CP224, the Dictyostelium XMAP215 homologue. These ubiquitous MAPs have been found to interact directly with TACC proteins and are known to act as microtubule polymerases and nucleators. This work shows for the first time in vivo that both a TACC and an XMAP215 family protein can differentially localize to microtubule plus ends during interphase and mitosis. RNAi knockdown mutants revealed that TACC promotes microtubule growth during interphase and is essential for the proper formation of astral microtubules in mitosis. In many organisms, impaired microtubule stability upon TACC depletion was explained by the failure to efficiently recruit the TACC-binding XMAP215 protein to centrosomes or spindle poles. By contrast, fluorescence recovery after photobleaching (FRAP) analyses conducted in this study demonstrate that in Dictyostelium the recruitment of CP224 to centrosomes or spindle poles is not perturbed in the absence of TACC. Instead, CP224 could no longer be detected at the tips of microtubules in TACC mutant cells. This finding demonstrates for the first time in vivo that a TACC protein is essential for the association of an XMAP215 protein with microtubule plus ends. The GFP-TACC strains generated in this work also turned out to be a valuable tool to study the unusual microtubule dynamics in Dictyostelium.
Here, microtubules exhibit a high degree of lateral bending movement but, in contrast to most other organisms, they do not obviously undergo any growth or shrinkage events during interphase. Nevertheless, they are affected by microtubule-depolymerizing drugs such as thiabendazole or nocodazole, which are thought to act solely on dynamic microtubules. Employing 5D fluorescence live-cell microscopy and FRAP analyses, this study suggests that Dictyostelium microtubules are dynamic only in the periphery, while they are stable at the centrosome. In recent years, tremendous progress has been made in identifying previously unknown components of the Dictyostelium centrosome. A proteomic approach previously conducted by our group disclosed several uncharacterized candidate proteins, which remained to be verified as genuine centrosomal components. The second part of this study focuses on the investigation of three such candidate proteins: Cenp68, CP103 and the putative spindle assembly checkpoint protein Mad1. While a GFP-CP103 fusion protein could clearly be localized to isolated centrosomes that are free of microtubules, Cenp68 and Mad1 were found to associate with the centromeres and kinetochores, respectively. The investigation of Cenp68 included the generation of a polyclonal anti-Cenp68 antibody, a screen for interacting proteins and the generation of knockout mutants, which, however, did not display any obvious phenotype. Yet Cenp68 has turned out to be a very useful marker to study centromere dynamics throughout the entire cell cycle. During mitosis, GFP-Mad1 localization strongly resembled the behavior of other Mad1 proteins, suggesting the existence of a yet uncharacterized spindle assembly checkpoint in Dictyostelium.
Cognitive psychology is traditionally interested in the interaction of perception, cognition, and behavioral control. Investigating eye movements in reading constitutes a field of research in which the processes and interactions of these subsystems can be studied in a well-defined environment. The following questions are pursued: How much information is visually perceived during a fixation, how is processing achieved and temporally coordinated from visual letter encoding to final sentence comprehension, and how are such processes reflected in behavior, such as the control of eye movements during reading? Various theoretical models have been proposed to account for the specific eye-movement behavior in reading (for a review see Reichle, Rayner, & Pollatsek, 2003). Some models are based on the idea of shifting attention serially from one word to the next within the sentence, whereas others propose distributed attention allocating processing resources to more than one word at a time. As attention is assumed to drive word-recognition processes, one major difference between these models is that word processing must either occur in strict serial order or be achieved in parallel. In spite of this crucial difference in the time course of word processing, both model classes perform well in explaining many of the benchmark effects in reading. In fact, there seems to be little empirical evidence that challenges the models to a point at which their basic assumptions could be falsified. One issue often perceived as decisive in the debate on serial versus parallel word processing is how not-yet-fixated words to the right of fixation affect eye movements. Specifically, evidence is discussed as to what spatial extent such parafoveal words are previewed and how this influences current and subsequent word processing. Four experiments investigated parafoveal processing close to the spatial limits of the perceptual span.
The present work aims to go beyond mere existence proofs of previewing words at such spatial distances. Introducing a manipulation that dissociates the sources of long-range preview effects, benefits and costs of parafoveal processing can be investigated in a single analysis and the differing impact is tracked across a three-word target region. In addition, the same manipulation evaluates the role of oculomotor error as the cause of non-local distributed effects. In this respect, the results contribute to a better understanding of the time course of word processing inside the perceptual span and attention allocation during reading.
Following up on research suggesting an age-related reduction in the rightward extent of the perceptual span during reading (Rayner, Castelhano, & Yang, 2009), we compared old and young adults in an N+2-boundary paradigm in which a nonword preview of word N+2 or word N+2 itself is replaced by the target word once the eyes cross an invisible boundary located after word N. The intermediate word N+1 was always three letters long. Gaze durations on word N+2 were significantly shorter for identical than nonword N+2 preview both for young and for old adults with no significant difference in this preview benefit. Young adults, however, did modulate their gaze duration on word N more strongly than old adults in response to the difficulty of the parafoveal word N+1. Taken together, the results suggest a dissociation of preview benefit and parafoveal-on-foveal effect. Results are discussed in terms of age-related decline in resilience towards distributed processing while simultaneously preserving the ability to integrate parafoveal information into foveal processing. As such, the present results relate to proposals of regulatory compensation strategies older adults use to secure an overall reading speed very similar to that of young adults.
Dutch allows for variation as to whether the first position in the sentence is occupied by the subject or by some other constituent, such as the direct object. In particular situations, however, this commonly observed variation in word order is ‘frozen’ and only the subject appears in first position. We hypothesize that this partial freezing of word order in Dutch can be explained from the dependence of the speaker’s choice of word order on the hearer’s interpretation of this word order. A formal model of this interaction between the speaker’s perspective and the hearer’s perspective is presented in terms of bidirectional Optimality Theory. Empirical predictions of this model regarding the interaction between word order and definiteness are confirmed by a quantitative corpus study.
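The strong bidirectional evaluation underlying such a model can be sketched in a few lines: a form–meaning pair survives only if no cheaper pair shares its meaning (speaker direction) and no cheaper pair shares its form (hearer direction). The forms, meanings, and costs below are invented for illustration and do not reproduce the paper's actual constraint rankings for Dutch.

```python
# Toy sketch of strong bidirectional Optimality Theory evaluation.
# A pair is bidirectionally optimal iff it is cheapest among all pairs
# with the same meaning AND cheapest among all pairs with the same form.
# All candidates and costs are hypothetical, not the paper's analysis.

def bidirectionally_optimal(pairs):
    """pairs: dict mapping (form, meaning) -> cost (lower is better)."""
    optimal = []
    for (f, m), c in pairs.items():
        blocked_speaker = any(c2 < c for (f2, m2), c2 in pairs.items() if m2 == m)
        blocked_hearer = any(c2 < c for (f2, m2), c2 in pairs.items() if f2 == f)
        if not blocked_speaker and not blocked_hearer:
            optimal.append((f, m))
    return sorted(optimal)

# 'Frozen' word order: with these illustrative costs, only the
# subject-first form paired with the agent-first reading survives.
pairs = {
    ("subj-first", "agent-first"): 0,
    ("subj-first", "patient-first"): 2,
    ("obj-first", "agent-first"): 0.5,
    ("obj-first", "patient-first"): 1,
}
print(bidirectionally_optimal(pairs))  # -> [('subj-first', 'agent-first')]
```

The object-first candidates are each blocked from one direction, so the variation collapses to the subject-first order, mirroring the freezing effect described in the abstract.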
Due to their optical and electro-conductive attributes, carbazole derivatives are interesting materials for a wide range of biosensor applications. In this study, we present the synthesis routes and fluorescence evaluation of newly designed carbazole fluorosensors that, by modification with uracil, have a special affinity for antiretroviral drugs via either Watson–Crick or Hoogsteen base pairing. To an N-octylcarbazole-uracil compound, four different groups were attached, namely thiophene, furan, ethylenedioxythiophene, and another uracil, yielding four different derivatives. The photophysical properties of these newly obtained derivatives are described, as are their interactions with reverse transcriptase inhibitors such as abacavir, zidovudine, lamivudine and didanosine. The influence of each analyte on biosensor fluorescence was assessed on the basis of the Stern–Volmer equation and quantified by Stern–Volmer constants. Consequently, we have demonstrated that these carbazole-based structures with a uracil group may be successfully incorporated into alternative carbazole derivatives to form biosensors for the molecular recognition of antiretroviral drugs.
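The Stern–Volmer analysis used to quantify quenching can be sketched numerically: the fluorescence ratio F0/F grows linearly with quencher concentration, and the slope is the Stern–Volmer constant. The concentrations and the constant K_SV = 250 L/mol below are illustrative values, not the measured data from the study.

```python
import numpy as np

# Stern-Volmer relation: F0/F = 1 + K_SV * [Q]
# K_SV is recovered as the slope of a through-origin fit of
# (F0/F - 1) against quencher concentration [Q].
# All numbers here are synthetic, for illustration only.

def stern_volmer_constant(conc, f0, f):
    """Least-squares estimate of K_SV from a quenching series."""
    ratio = f0 / f - 1.0
    # fit ratio = K_SV * conc through the origin
    return float(np.dot(conc, ratio) / np.dot(conc, conc))

# synthetic quenching series generated with K_SV = 250 L/mol
conc = np.array([0.0, 0.001, 0.002, 0.004, 0.008])  # mol/L
f0 = 1.0
f = f0 / (1.0 + 250.0 * conc)

print(stern_volmer_constant(conc, f0, f))  # ≈ 250
```

A larger recovered K_SV indicates a stronger influence of the analyte on the sensor's fluorescence, which is how the different antiretroviral drugs would be compared.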
Background
More than in other domains, the heterogeneous services landscape in bioinformatics demands a methodology for classifying and relating resources in a manner accessible to both humans and machines. The Semantic Web, which is meant to address exactly this challenge, is currently one of the most ambitious projects in computer science. Collective efforts within the community have already led to a basis of standards for semantic service descriptions and meta-information. In combination with process synthesis and planning methods, such knowledge about types and services can facilitate the automatic composition of workflows for particular research questions.
Results
In this study we apply the synthesis methodology that is available in the Bio-jETI workflow management framework for the semantics-based composition of EMBOSS services. EMBOSS (European Molecular Biology Open Software Suite) is a collection of 350 tools (March 2010) for various sequence analysis tasks, and thus a rich source of services and types that imply comprehensive domain models for planning and synthesis approaches. We use and compare two different setups of our EMBOSS synthesis domain: 1) a manually defined domain setup where an intuitive, high-level, semantically meaningful nomenclature is applied to describe the input/output behavior of the single EMBOSS tools and their classifications, and 2) a domain setup where this information has been automatically derived from the EMBOSS Ajax Command Definition (ACD) files and the EMBRACE Data and Methods ontology (EDAM). Our experiments demonstrate that these domain models in combination with our synthesis methodology greatly simplify working with the large, heterogeneous, and hence manually intractable EMBOSS collection. However, they also show that with the information that can be derived from the (current) ACD files and EDAM ontology alone, some essential connections between services cannot be recognized.
Conclusions
Our results show that adequate domain modeling requires incorporating as much domain knowledge as possible, far beyond the mere technical aspects of the different types and services. Finding or defining semantically appropriate service and type descriptions is a difficult task, but the bioinformatics community appears to be on the right track towards a Life Science Semantic Web, which will eventually allow automatic service composition methods to unfold their full potential.
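The type-driven synthesis idea can be illustrated in miniature: each service is annotated with an input and an output type, and a workflow is a chain of services connecting the available data type to the requested one. In the hypothetical domain below, transeq and pepstats are real EMBOSS tool names, but fetch_seq and the type labels are invented; this is a toy search, not the Bio-jETI synthesis methodology itself.

```python
from collections import deque

# Toy type-driven service composition: breadth-first search over data
# types, where each service consumes one type and produces another.
# Real domain models (ACD/EDAM-based) are far richer: multiple inputs,
# ontology subsumption, and tool classifications.

def synthesize(services, start, goal):
    """services: list of (name, in_type, out_type); returns a tool chain."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        t, path = queue.popleft()
        if t == goal:
            return path
        for name, t_in, t_out in services:
            if t_in == t and t_out not in seen:
                seen.add(t_out)
                queue.append((t_out, path + [name]))
    return None  # no workflow connects start to goal

services = [
    ("fetch_seq", "accession", "dna_sequence"),      # hypothetical retrieval step
    ("transeq",   "dna_sequence", "protein_sequence"),
    ("pepstats",  "protein_sequence", "protein_report"),
]
print(synthesize(services, "accession", "protein_report"))
# -> ['fetch_seq', 'transeq', 'pepstats']
```

The abstract's observation that some connections "cannot be recognized" corresponds here to a missing or mislabeled type annotation: if transeq's output type did not match pepstats' input type, the search would return no chain even though the tools are in fact compatible.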
Phase behaviour and the mesoscopic structure of zwitanionic surfactant mixtures based on the zwitterionic tetradecyldimethylamine oxide (TDMAO) and anionic lithium perfluoroalkyl carboxylates have been investigated for various chain lengths of the perfluoro surfactant with an emphasis on spontaneously forming vesicles. These mixtures were studied at a constant total concentration of 50 mM and characterised by means of dynamic light scattering (DLS), electric conductivity, small-angle neutron scattering (SANS), viscosity, and cryo-scanning electron microscopy (Cryo-SEM). No vesicles are formed for relatively short perfluoro surfactants. The extension of the vesicle phase becomes substantially larger with increasing chain length of the perfluoro surfactant, while at the same time the size of these vesicles increases. Head group interactions in these systems play a central role in the ability to form vesicles, as already protonating 10 mol% of the TDMAO largely enhances the propensity for vesicle formation. The range of vesicle formation in the phase diagram is not only substantially enlarged but also extends to shorter perfluoro surfactants, where without protonation no vesicles would be formed. The size and polydispersity of the vesicles are related to the chain length of the perfluoro surfactant, the vesicles becoming smaller and more monodisperse with increasing perfluoro surfactant chain length. The ability of the mixed systems to form well-defined unilamellar vesicles accordingly can be controlled by the length of the alkyl chain of the perfluorinated surfactant and depends strongly on the charge conditions, which can be tuned easily by pH-variation.
The enzyme diisopropyl fluorophosphatase (DFPase) from the squid Loligo vulgaris is of great interest because of its ability to catalyze the hydrolysis of highly toxic organophosphates. In this work, the enzyme structure in solution (native state) was studied by use of different scattering methods. The results are compared with those from hydrodynamic model calculations based on the DFPase crystal structure. Bicontinuous microemulsions made of sugar surfactants are discussed as host systems for the DFPase. The microemulsion remains stable in the presence of the enzyme, which is shown by means of scattering experiments. Moreover, activity assays reveal that the DFPase still has high activity in this complex reaction medium. To complement the scattering experiments cryo-SEM was also employed to study the microemulsion structure.
Cryo-electron microscopy, atomic force microscopy, and light microscopy investigations provide experimental evidence that amphiphilic emulsion copolymerization particles change their morphology depending on concentration. The shape of the particles is spherical at solids contents above 1%, but it changes to rod-like, ring-like, and web-like structures at lower concentrations. In addition, the shape and morphology of these particles at low concentrations are not fixed but highly flexible, varying with time between spheres, flexible pearl-necklace structures, and stretched rods.
Among cationic polyelectrolytes with different molecular architectures, only hyperbranched poly(ethyleneimine) with a maltose shell is suited to tailoring the morphological transformation of anionic vesicles into tube-like networks. The interaction features of these materials partly mimic the biological features of tubular proteins in nature.
Background
Riociguat is the first of a new class of drugs, the soluble guanylate cyclase (sGC) stimulators. Riociguat has a dual mode of action: it sensitizes sGC to the body’s own NO and can also increase sGC activity in the absence of NO. The NO-sGC pathway is impaired in many cardiovascular diseases such as heart failure, pulmonary hypertension and diabetic nephropathy (DN). DN leads to high cardiovascular morbidity and mortality, and there is still a high unmet medical need. The urinary albumin excretion rate is a predictive biomarker for these clinical events. Therefore, we investigated the effect of riociguat, alone and in combination with the angiotensin II receptor blocker (ARB) telmisartan, on the progression of DN in diabetic eNOS knockout mice, a new model closely resembling human pathology.
Methods
Seventy-six male eNOS knockout C57BL/6J mice were divided into 4 groups after receiving intraperitoneal high-dose streptozotocin: telmisartan (1 mg/kg), riociguat (3 mg/kg), riociguat+telmisartan (3 and 1 mg/kg), and vehicle. Fourteen mice were used as non-diabetic controls. After 12 weeks, urine and blood were obtained and blood pressure measured. Glucose concentrations were highly increased and similar in all diabetic groups.
Results
Riociguat, alone (105.2 ± 2.5 mmHg; mean ± SEM; n = 14) and in combination with telmisartan (105.0 ± 3.2 mmHg; n = 12), significantly reduced blood pressure versus diabetic controls (117.1 ± 2.2 mmHg; n = 14; p = 0.002 and p = 0.004, respectively), whereas telmisartan alone (111.2 ± 2.6 mmHg) showed a modest blood-pressure-lowering trend (p = 0.071; n = 14). Single treatment with either riociguat (97.1 ± 15.7 µg/d; n = 13) or telmisartan (97.8 ± 26.4 µg/d; n = 14) did not significantly lower albumin excretion (p = 0.067 and p = 0.101, respectively). However, the combined treatment led to significantly lower urinary albumin excretion (47.3 ± 9.6 µg/d; n = 12) compared to diabetic controls (170.8 ± 34.2 µg/d; n = 13; p = 0.004), reaching levels similar to those of non-diabetic controls (31.4 ± 10.1 µg/d, n = 12).
Conclusion
Riociguat significantly reduced urinary albumin excretion in diabetic eNOS knockout mice that were refractory to treatment with ARBs alone. Patients with diabetic nephropathy refractory to treatment with ARBs have the worst prognosis among all patients with diabetic nephropathy. Our data indicate that additional stimulation of sGC on top of standard treatment with ARBs may offer a new therapeutic approach for patients with diabetic nephropathy resistant to ARB treatment.
Background and aims
Tumor suppressor genes are often located in frequently deleted chromosomal regions of colorectal cancers (CRCs). In contrast to microsatellite stable (MSS) tumors, only a few loss of heterozygosity (LOH) studies have been performed in microsatellite unstable (MSI) tumors, because MSI carcinomas are generally considered to be chromosomally stable and classical LOH studies are not feasible due to MSI. The single nucleotide polymorphism (SNP) array technique enables LOH studies also in MSI CRC. The aim of our study was to analyse tissue from MSI and MSS CRCs for the existence of (frequently) deleted chromosomal regions and the tumor suppressor genes located therein.
Methods and results
We analyzed tissues from 32 sporadic CRCs and their corresponding normal mucosa (16 MSS and 16 MSI tumors) by means of 50K SNP array analysis. MSS tumors displayed chromosomal instability that resulted in multiple deleted (LOH) and amplified regions and led to the identification of MTUS1 (8p22) as a candidate tumor suppressor gene in this region. Although the MSI tumors were chromosomally stable, we found several copy-neutral LOHs (cnLOH) in the MSI tumors; these appear to be instrumental in the inactivation of the tumor suppressor gene hMLH1 and of a gene located in chromosomal region 6pter-p22.
Discussion
Our results suggest that, in addition to classical LOH, cnLOH is an important mutational event in the carcinogenesis of MSS and MSI tumors, causing the inactivation of a tumor suppressor gene without copy number alteration of the respective region; this is crucial for the development of MSI tumors and for some chromosomal regions in MSS tumors.
Plant growth and survival depend on photosynthesis in the leaves. This involves the uptake of carbon dioxide from the atmosphere and the simultaneous capture of light energy to produce organic molecules, which enter metabolism and are converted to many other compounds which then serve as building blocks for biomass growth. Leaves are organs specialised for photosynthetic carbon dioxide fixation. The function of leaves involves many trade-offs which must be optimised in order to achieve effective use of resources and maximum photosynthesis. It is known that the morphology of leaves adjusts to the growth environment of plants and this is important for optimising their function for photosynthesis. However, it is unclear how this adjustment is regulated. The general aim of the work presented in this thesis is to understand how leaf growth and morphology are regulated in the model species Arabidopsis thaliana. Special attention was dedicated to the possibility that there might be internal metabolic signals within the plant which affect the growth and development of leaves. In order to investigate this question, leaf growth and development must be considered beyond the level of the single organ and in the context of the whole plant because leaves do not grow autonomously but depend on resources and regulatory influences delivered by the rest of the plant. Due to the complexity of this question, three complementary approaches were taken. In the first and most specific approach it was asked whether a proposed down-stream component of sucrose signalling, trehalose-6-phosphate (Tre-6-P), might influence leaf development and growth. To investigate this question, transgenic Arabidopsis lines with perturbed levels of Tre-6-P were generated using the constitutive 35S promoter to express bacterial enzymes involved in trehalose metabolism. 
These experiments also led to an unanticipated project concerning a possible role for Tre-6-P in stomatal function, which is another very important function in leaves. In a second and more general approach it was investigated whether changes in sugar levels in plants affect the morphogenesis of leaves in response to light. For this, a series of metabolic mutants impaired in central metabolism were grown in one light environment and their leaf morphology was analysed. In a third and even more general approach the natural variation in leaf and rosette morphological traits was investigated in a panel of wild Arabidopsis accessions with the aim of understanding how leaf morphology affects leaf function and whole plant growth and how different traits relate to each other. The analysis included measurements of leaf morphological traits as well as the number of leaves in the plant to put leaf morphology in a whole plant context. The variance in plant growth could not be explained by variation in photosynthetic rates and only to a small degree by variation in rates of dark respiration. There were four key axes of variation in rosette and leaf morphology – leaf area growth, leaf thickness, cell expansion and leaf number. These four processes were integrated in the context of whole plant growth by models that employed a multiple linear regression approach. This then led to a theoretical approach in which a simple allometric mathematical model was constructed, linking leaf number, leaf size and plant growth rate together in a whole plant context in Arabidopsis.
Regulation of gene transcription plays a major role in mediating cellular responses and physiological behavior in all known organisms. The finding that similar genes are often regulated in a similar manner (co-regulated or "co-expressed") has motivated several "guilt-by-association" approaches to reverse-engineering cellular transcriptional networks using gene expression data as a compass. This kind of study has been considerably assisted in recent years by the development of high-throughput transcript measurement platforms, specifically gene microarrays and next-generation sequencing. In this thesis, I describe several approaches for improving the extraction and interpretation of the information contained in microarray-based gene expression data, through four steps: (1) microarray platform design, (2) microarray data normalization, (3) gene network reverse engineering based on expression data and (4) experimental validation of expression-based guilt-by-association inferences. In the first part, a test case is presented aimed at the generation of a microarray for Thellungiella salsuginea, a salt- and drought-resistant close relative of the model plant Arabidopsis thaliana; the transcript models for this organism are generated from a combination of publicly available ESTs and newly generated ad hoc next-generation sequencing data. Since the design of a microarray platform requires the availability of highly reliable and non-redundant transcript models, these issues are addressed consecutively, and several different technical solutions are proposed. In the second part I describe how inter-array correlation artifacts are generated by the common microarray normalization methods RMA and GCRMA, together with the technical and mathematical characteristics underlying the problem. A solution is proposed in the form of a novel normalization method, called tRMA. The third part of the thesis deals with the field of expression-based gene network reverse engineering.
It is shown how different centrality measures in reverse-engineered gene networks can be used to distinguish specific classes of genes, in particular essential genes in Arabidopsis thaliana, and how the use of conditional correlation can add a layer of understanding over the information flow processes underlying transcript regulation. Furthermore, several network reverse engineering approaches are compared, with a particular focus on the LASSO, a linear regression derivative rarely applied before in global gene network reconstruction, despite its theoretical advantages in robustness and interpretability over more standard methods. The performance of the LASSO is assessed through several in silico analyses dealing with the reliability of the inferred gene networks. In the final part, the LASSO and other reverse engineering methods are used to experimentally identify novel genes involved in two independent scenarios: the seed coat mucilage pathway in Arabidopsis thaliana and hypoxic tuber development in Solanum tuberosum. In both cases an interesting complementarity between the methods is shown, which strongly suggests a general use of hybrid approaches for transcript-expression-based inferences. In conclusion, this work has helped to improve our understanding of gene transcription regulation through a better interpretation of high-throughput expression data. Part of the network reverse engineering methods described in this thesis have been included in a tool (CorTo) for gene network reverse engineering and annotated visualization from custom transcription datasets.
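The per-gene LASSO regressions at the core of such network reverse engineering can be sketched as follows: each gene's expression is regressed on all other genes, and nonzero coefficients are read as candidate regulatory edges. The coordinate-descent solver, the penalty value, and the synthetic data below are illustrative, not the thesis's actual pipeline.

```python
import numpy as np

# Minimal sketch of LASSO-based gene network reverse engineering.
# For each gene i, solve min ||y - Xw||^2 / (2n) + lam * ||w||_1
# where y is gene i's expression and X the other genes' expression.

def lasso_cd(X, y, lam, n_iter=200):
    """Plain cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

def infer_edges(expr, lam=0.1):
    """Return a (genes x genes) matrix; entry [i, j] is the LASSO
    coefficient of gene j in the regression for gene i."""
    n, p = expr.shape
    W = np.zeros((p, p))
    for i in range(p):
        mask = np.arange(p) != i
        W[i, mask] = lasso_cd(expr[:, mask], expr[:, i], lam)
    return W

# Synthetic data with one planted regulatory edge: gene 0 -> gene 1.
rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 5))
expr[:, 1] = 0.8 * expr[:, 0] + 0.1 * rng.normal(size=100)
W = infer_edges(expr)
print(abs(W[1, 0]) > abs(W[1, 2]))  # the planted edge dominates
```

The L1 penalty drives most coefficients exactly to zero, which is why the LASSO yields sparse, interpretable edge lists rather than the dense matrices produced by plain correlation.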
Hepatic insulin resistance is a major contributor to hyperglycemia in metabolic syndrome and type II diabetes. It is caused in part by the low-grade inflammation that accompanies both diseases, leading to elevated local and circulating levels of cytokines and cyclooxygenase (COX) products such as prostaglandin E-2 (PGE(2)). In a recent study, PGE(2) produced in Kupffer cells attenuated insulin-dependent glucose utilization by interrupting the intracellular signal chain downstream of the insulin receptor in hepatocytes. In addition to directly affecting insulin signaling in hepatocytes, PGE(2) in the liver might affect insulin resistance by modulating cytokine production in non-parenchymal cells. In accordance with this hypothesis, PGE(2) stimulated oncostatin M (OSM) production by Kupffer cells. OSM in turn attenuated insulin-dependent Akt activation and, as a downstream target, glucokinase induction in hepatocytes, most likely by inducing suppressor of cytokine signaling 3 (SOCS3). In addition, it inhibited the expression of key enzymes of hepatic lipid metabolism. COX-2 and OSM mRNA were induced early in the course of the development of non-alcoholic steatohepatitis (NASH) in mice. Thus, induction of OSM production in Kupffer cells by an autocrine PGE(2)-dependent feed-forward loop may be an additional, thus far unrecognized, mechanism contributing to hepatic insulin resistance and the development of NASH.
Article 22
(2011)
We deduce a new formula for the perihelion advance $\Theta$ of a test particle in the Schwarzschild black hole by applying a newly developed non-linear transformation within the Schwarzschild space-time. By this transformation we are able to apply the well-known formula valid in the weak-field approximation near infinity also to trajectories in the strong-field regime near the horizon of the black hole. The resulting formula has the structure $\Theta = c_1 - c_2 \ln(c_3^2 - e^2)$ with positive constants $c_{1,2,3}$ depending on the angular momentum of the test particle. It is especially useful for orbits with large eccentricities $e < c_3 < 1$, showing that $\Theta \to \infty$ as $e \to c_3$.
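The stated divergence can be checked numerically by evaluating the formula with hypothetical constants; c1, c2, and c3 depend on the test particle's angular momentum and are not given in the abstract, so the values below are purely illustrative.

```python
import math

# Perihelion advance formula from the abstract:
#   Theta(e) = c1 - c2 * ln(c3**2 - e**2)
# with positive constants c1, c2, c3. The values used here are
# illustrative placeholders, not derived from the paper.

def perihelion_advance(e, c1=1.0, c2=0.5, c3=0.9):
    if not 0.0 <= e < c3:
        raise ValueError("formula applies for eccentricities 0 <= e < c3")
    return c1 - c2 * math.log(c3**2 - e**2)

# Theta grows without bound as e approaches c3 from below:
for e in (0.5, 0.8, 0.89, 0.899):
    print(e, perihelion_advance(e))
```

Because the logarithm's argument $c_3^2 - e^2$ shrinks to zero as $e \to c_3$, the advance grows logarithmically without bound, which is the strong-field behavior the formula is designed to capture.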
This book assembles the contributions to the Eighth European Conference on Formal Description of Slavic Languages (FDSL VIII), which took place from 2 to 5 December 2009 at the University of Potsdam. The aim was to bring together excellent scholars, both experienced and young, who work in the field of the formal description of Slavic languages. In addition, two workshops, on the typology of Slavic languages and on the structure of DP/NP in Slavic, were organized.
For the Lagrangian $L = G \ln G$, where $G$ is the Gauss-Bonnet curvature scalar, we deduce the field equation and solve it in closed form for 3-flat Friedmann models using a statefinder parametrization. Further, we show that among all Lagrangians $F(G)$, this $L$ is the only one not having the form $G^r$ with a real constant $r$ yet possessing a scale-invariant field equation. This turns out to be one of its analogies to $f(R)$-theories in 2-dimensional space-time. In the appendix, we systematically list several formulas for the decomposition of the Riemann tensor in arbitrary dimensions $n$, which are applied in the main deduction for $n = 4$.