Generation of superoxide anion in chloroplasts of Arabidopsis thaliana during active photosynthesis
(2007)
The antioxidant defense system involves complex functional coordination of multiple components in different organelles within the plant cell. Here, we have studied the Arabidopsis thaliana early response to the generation of superoxide anion in chloroplasts during active photosynthesis. We exposed plants to methyl viologen (MV), a superoxide anion propagator in the light, and performed biochemical and expression profiling experiments using Affymetrix ATH1 GeneChip® microarrays under conditions in which photosynthesis and antioxidant enzymes were active. Data analysis identified superoxide-responsive genes that were compared with available microarray results. Examples include genes encoding proteins with unknown function, transcription factors and signal transduction components. A common GAAAAGTCAAAC motif, containing the W-box consensus sequence of WRKY transcription factors, was found in the promoters of genes highly up-regulated by superoxide. Band shift assays showed that oxidative treatments enhanced the specific binding of leaf protein extracts to this motif. In addition, a GUS reporter gene fused to the WRKY30 promoter, which contains this binding motif, was induced by MV and H2O2. Overall, our study suggests that genes involved in signalling pathways and with unknown functions are rapidly activated by superoxide anion generated in photosynthetically active chloroplasts, as part of the early antioxidant response of Arabidopsis leaves.
QuantPrime
(2008)
Background
Medium- to large-scale expression profiling using quantitative polymerase chain reaction (qPCR) assays is becoming increasingly important in genomics research. A major bottleneck in experiment preparation is the design of specific primer pairs, where researchers have to make several informed choices, often outside their area of expertise. Using currently available primer design tools, several interactive decisions have to be made, resulting in lengthy design processes with varying quality of the assays.
Results
Here we present QuantPrime, an intuitive and user-friendly, fully automated tool for primer pair design in small- to large-scale qPCR analyses. QuantPrime can be used online at http://www.quantprime.de/ or on a local computer after download; it offers design and specificity checking with highly customizable parameters and is ready to use with many publicly available transcriptomes of important higher eukaryotic model organisms and plant crops (currently 295 species in total), while benefiting from exon-intron border and alternative splice variant information in available genome annotations. Experimental results with the model plant Arabidopsis thaliana, the crop Hordeum vulgare and the model green alga Chlamydomonas reinhardtii show success rates of designed primer pairs exceeding 96%.
Conclusion
QuantPrime constitutes a flexible, fully automated web application for reliable primer design for use in larger qPCR experiments, as proven by experimental data. The flexible framework is also open for simple use in other quantification applications, such as hydrolyzation probe design for qPCR and oligonucleotide probe design for quantitative in situ hybridization. Future suggestions made by users can be easily implemented, thus allowing QuantPrime to be developed into a broad-range platform for the design of RNA expression assays.
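Although QuantPrime automates the whole design pipeline, the kind of per-primer filter that any such tool applies can be sketched in a few lines. The GC-content window and the Wallace melting-temperature rule below are generic textbook checks, not QuantPrime's actual scoring; the thresholds and function names are illustrative.

```python
# Sketch of basic primer sanity checks (GC content, simple Tm rule).
# This is NOT QuantPrime's algorithm, only an illustration of the kind
# of per-primer filters qPCR design tools apply before specificity checks.

def gc_content(seq):
    """Fraction of G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), reasonable for short oligos."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def passes_basic_filters(seq, gc_range=(0.4, 0.6), tm_range=(50, 65)):
    """Accept a primer only if GC content and Tm fall in typical windows."""
    return (gc_range[0] <= gc_content(seq) <= gc_range[1]
            and tm_range[0] <= wallace_tm(seq) <= tm_range[1])

primer = "ATGCGTACCTGAGGATCCAA"
print(gc_content(primer), wallace_tm(primer), passes_basic_filters(primer))
```

Real design tools add cross-hybridization and splice-variant checks on top of such per-primer filters, which is where most of QuantPrime's value lies.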
Background: Haplotype inference based on unphased SNP markers is an important task in population genetics. Although there are different approaches to the inference of haplotypes in diploid species, the existing software is not suitable for inferring haplotypes from unphased SNP data in polyploid species, such as the cultivated potato (Solanum tuberosum). Potato species are tetraploid and highly heterozygous.
Results: Here we present the software SATlotyper, which is able to handle polyploid and polyallelic data. SATlotyper formulates Haplotype Inference by Pure Parsimony as a Boolean satisfiability problem. The software excludes existing haplotype inferences, thus allowing for calculation of alternative inferences. As it is not known which of the multiple haplotype inferences are best supported by the given unphased data set, we use a bootstrapping procedure that allows for scoring of alternative inferences. Finally, by means of the bootstrapping scores, it is possible to optimise the phased genotypes belonging to a given haplotype inference. The program is evaluated with simulated and experimental SNP data generated for heterozygous tetraploid populations of potato. We show that, instead of taking the first haplotype inference reported by the program, we can significantly improve the quality of the final result by applying additional methods that include scoring of the alternative haplotype inferences and genotype optimisation. For a sub-population of nineteen individuals, the predicted results computed by SATlotyper were directly compared with results obtained by experimental haplotype inference via sequencing of cloned amplicons. Prediction and experiment gave similar results regarding the inferred haplotypes and phased genotypes.
Conclusion: Our results suggest that Haplotype Inference by Pure Parsimony can be solved efficiently by the SAT approach, even for data sets of unphased SNP from heterozygous polyploids. SATlotyper is freeware and is distributed as a Java JAR file. The software can be downloaded from the webpage of the GABI Primary Database at http://www.gabipd.org/projects/satlotyper/. The application of SATlotyper will provide haplotype information, which can be used in haplotype association mapping studies of polyploid plants.
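To make the underlying optimisation concrete: Haplotype Inference by Pure Parsimony asks for the smallest set of haplotypes whose pairwise combinations reproduce all observed genotypes. The toy below solves a diploid, biallelic instance by brute force; SATlotyper instead encodes the optimisation as Boolean satisfiability, which is what makes polyploid instances tractable. The genotype coding (0/1/2 allele counts) and the tiny example are illustrative.

```python
from itertools import combinations_with_replacement, product

# Brute-force Pure Parsimony for a diploid, biallelic toy instance.
# Genotypes are per-SNP allele counts (0, 1 = heterozygous, 2); haplotypes
# are 0/1 vectors, and a pair of haplotypes explains a genotype if the
# allele sums match at every SNP.

def explains(h1, h2, g):
    return all(a + b == x for a, b, x in zip(h1, h2, g))

def pure_parsimony(genotypes):
    """Smallest haplotype set whose pairs resolve every genotype."""
    n = len(genotypes[0])
    all_haps = list(product((0, 1), repeat=n))
    for k in range(1, len(all_haps) + 1):          # grow the candidate set size
        for subset in combinations_with_replacement(all_haps, k):
            if all(any(explains(h1, h2, g) for h1 in subset for h2 in subset)
                   for g in genotypes):
                return set(subset)

genos = [(1, 1), (2, 0)]        # two unphased genotypes over two SNPs
sol = pure_parsimony(genos)
print(sorted(sol))              # a minimal explaining haplotype set
```

The search space explodes combinatorially, which is why a SAT encoding with an efficient solver is the practical route for real (and especially tetraploid) data.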
The Warm-Hot Intergalactic Medium (WHIM) arises from shock-heated gas collapsing in large-scale filaments and probably harbours a substantial fraction of the baryons in the local Universe. Absorption-line measurements in the ultraviolet (UV) and in the X-ray band currently represent the best method to study the WHIM at low redshifts. Here we describe the physical properties of the WHIM and the concepts behind WHIM absorption line measurements of H I and high ions such as O VI, O VII, and O VIII in the far-ultraviolet and X-ray band. We review results of recent WHIM absorption line studies carried out with UV and X-ray satellites such as FUSE, HST, Chandra, and XMM-Newton and discuss their implications for our knowledge of the WHIM.
Inositol phosphates (IPs) and their turnover products have been implicated to play important roles in stress signaling in eukaryotic cells. In higher plants genes encoding inositol polyphosphate kinases have been identified previously, but their physiological functions have not been fully resolved. Here we expressed Arabidopsis inositol polyphosphate 6-/3-kinase (AtIpk2 beta) in two heterologous systems, i.e. the yeast Saccharomyces cerevisiae and in tobacco (Nicotiana tabacum), and tested the effect on abiotic stress tolerance. Expression of AtIpk2 beta rescued the salt-, osmotic- and temperature-sensitive growth defects of a yeast mutant strain (arg82 Delta) that lacks inositol polyphosphate multikinase activity encoded by the ARG82/IPK2 gene. Transgenic tobacco plants constitutively expressing AtIpk2 beta under the control of the Cauliflower Mosaic Virus 35S promoter were generated and found to exhibit improved tolerance to diverse abiotic stresses when compared to wild type plants. Expression patterns of various stress responsive genes were enhanced, and the activities of anti-oxidative enzymes were elevated in transgenic plants, suggesting a possible involvement of AtIpk2 beta in plant stress responses.
Background: For omics experiments, detailed characterisation of experimental material with respect to its genetic features, its cultivation history and its treatment history is a requirement for analyses by bioinformatics tools and for publication needs. Furthermore, the meta-analysis of several experiments in systems-biology-based approaches makes it necessary to store this information in a standardised manner, preferably in relational databases. In the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System combined with web-based user interfaces for data entry and retrieval to collect this information in an academic environment.
Results: The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows like genetic modification (transformation), vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate ACCESS database of the plant cultivation unit. For all cultures and thus every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner-based system. Thus, for each plant object, all site-related parameters, e.g. automatically logged climate data, are available. These life history data and genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses controlled vocabulary for organs and treatments. Unique names generated by the system and barcode labels facilitate identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web based search tools are the basis for joint use of the material by all researchers of the institute.
Conclusion: The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thus facilitates collaborative work and allows efficient queries in data analysis for systems biology research.
Bio-jETI
(2008)
Background: With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of an FAAH gene knockout.
Methods: Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution.
Conclusions: As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.
Background: An increasing number of studies demonstrate that genetic differentiation and speciation in the sea occur over much smaller spatial scales than previously appreciated given the wide distribution range of many morphologically defined coral reef invertebrate species and the presumed dispersal-enhancing qualities of ocean currents. However, knowledge about the processes that lead to population divergence and speciation is often lacking despite being essential for the understanding, conservation, and management of marine biodiversity. Sponges, a highly diverse, ecologically and economically important reef-invertebrate taxon, exhibit spatial trends in the Indo-West Pacific that are not universally reflected in other marine phyla. So far, however, processes generating those unexpected patterns are not understood.
Results: We unraveled the phylogeographic structure of the widespread Indo-Pacific coral reef sponge Leucetta chagosensis across its known geographic range using two nuclear markers: the rDNA internal transcribed spacers (ITS 1&2) and a fragment of the 28S gene, as well as the second intron of the ATP synthetase beta subunit gene (ATPSb-iII). This enabled the detection of several deeply divergent clades congruent over both loci, one containing specimens from the Indian Ocean (Red Sea and Maldives), another one from the Philippines, and two other large and substructured NW Pacific and SW Pacific clades with an area of overlap in the Great Barrier Reef/Coral Sea. Reciprocally monophyletic populations were observed from the Philippines, Red Sea, Maldives, Japan, Samoa, and Polynesia, demonstrating long-standing isolation. Populations along the South Equatorial Current in the south-western Pacific showed isolation-by-distance effects. Overall, the results pointed towards stepping-stone dispersal with some putative long-distance exchange, consistent with expectations from low dispersal capabilities.
Conclusion: We argue that both founder and vicariance events during the late Pliocene and Pleistocene were responsible to varying degrees for generating the deep phylogeographic structure. This structure was perpetuated largely as a result of the life history of L. chagosensis, resulting in high levels of regional isolation. Reciprocally monophyletic populations constitute putative sibling (cryptic) species, while population para- and polyphyly may indicate incipient speciation processes. The genetic diversity and biodiversity of tropical Indo-Pacific sponges appears to be substantially underestimated since the high level of genetic divergence is not necessarily manifested at the morphological level.
We describe an approach to modeling biological networks by action languages via answer set programming. To this end, we propose an action language for modeling biological networks, building on previous work by Baral et al. We introduce its syntax and semantics along with a translation into answer set programming, an efficient Boolean constraint programming paradigm. Finally, we describe one of its applications, namely, the sulfur starvation response pathway of the model plant Arabidopsis thaliana, and sketch the functionality of our system and its usage.
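The kind of regulatory model such an action-language encoding captures can be illustrated, in ordinary Python rather than answer set programming, as a small synchronous Boolean network. The gene names and rules below are invented for illustration and are not the sulfur-starvation pathway from the paper.

```python
# A toy synchronous Boolean network (invented genes, NOT the paper's
# sulfur-starvation pathway): each rule states when a node is active in
# the next time step, the kind of dynamics an action-language / answer
# set programming encoding describes declaratively.

def step(state):
    return {
        "sulfur_low":  state["sulfur_low"],                         # external input
        "regulatorA":  state["sulfur_low"],                         # induced by starvation
        "transporter": state["regulatorA"] and not state["repressorB"],
        "repressorB":  state["repressorB"],
    }

def fixpoint(state, limit=10):
    """Iterate the synchronous update until the state stops changing."""
    for _ in range(limit):
        nxt = step(state)
        if nxt == state:
            break
        state = nxt
    return state

init = {"sulfur_low": True, "regulatorA": False,
        "transporter": False, "repressorB": False}
print(fixpoint(init)["transporter"])   # starvation eventually induces the transporter
```

An ASP encoding states the same rules declaratively and lets the solver enumerate all reachable states and answer queries, rather than simulating one trajectory as this sketch does.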
Background
Serotonin induces fluid secretion from Calliphora salivary glands by the parallel activation of the InsP3/Ca2+ and cAMP signaling pathways. We investigated whether cAMP affects 5-HT-induced Ca2+ signaling and InsP3-induced Ca2+ release from the endoplasmic reticulum (ER).
Results
Increasing intracellular cAMP level by bath application of forskolin, IBMX or cAMP in the continuous presence of threshold 5-HT concentrations converted oscillatory [Ca2+]i changes into a sustained increase. Intraluminal Ca2+ measurements in the ER of β-escin-permeabilized glands with mag-fura-2 revealed that cAMP augmented InsP3-induced Ca2+ release in a concentration-dependent manner. This indicated that cAMP sensitized the InsP3 receptor Ca2+ channel for InsP3. By using cAMP analogs that activated either protein kinase A (PKA) or Epac and the application of PKA-inhibitors, we found that cAMP-induced augmentation of InsP3-induced Ca2+ release was mediated by PKA not by Epac. Recordings of the transepithelial potential of the glands suggested that cAMP sensitized the InsP3/Ca2+ signaling pathway for 5-HT, because IBMX potentiated Ca2+-dependent Cl- transport activated by a threshold 5-HT concentration.
Conclusion
This report shows, for the first time for an insect system, that cAMP can potentiate InsP3-induced Ca2+ release from the ER in a PKA-dependent manner, and that this crosstalk between cAMP and InsP3/Ca2+ signaling pathways enhances transepithelial electrolyte transport.
Thermal radiation processes
(2008)
We discuss the different physical processes that are important to understand the thermal X-ray emission and absorption spectra of the diffuse gas in clusters of galaxies and the warm-hot intergalactic medium. The ionisation balance, line and continuum emission and absorption properties are reviewed and several practical examples are given that illustrate the most important diagnostic features in the X-ray spectra.
We propose a network structure-based model for heterosis and investigate it using metabolite profiles from Arabidopsis. A simple feed-forward two-layer network model (the Steinbuch matrix) is used in our conceptual approach. It allows for directly relating structural network properties with biological function. Interpreting heterosis as increased adaptability, our model predicts that the biological networks involved show increasing connectivity of regulatory interactions. A detailed analysis of metabolite profile data reveals that the increasing-connectivity prediction holds for graphical Gaussian models in our data from early development. This mirrors properties of observed heterotic Arabidopsis phenotypes. Furthermore, the model predicts a limit for increasing hybrid vigor with increasing heterozygosity, a phenomenon known in the literature.
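In a graphical Gaussian model, the connectivity referred to above is read off from partial correlations, which follow from the inverse covariance (precision) matrix. The sketch below shows the standard computation on simulated data, not the authors' pipeline; the toy chain X → Y → Z illustrates how an indirect dependency drops out of the graph.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix (rows = observations, columns = variables),
    computed from the inverse covariance (precision) matrix. In a graphical
    Gaussian model, nonzero off-diagonal entries correspond to edges."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(0)
# Toy chain X -> Y -> Z: X and Z are correlated only through Y, so their
# partial correlation (the X-Z "edge") should vanish once Y is accounted for.
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
pcor = partial_correlations(np.column_stack([x, y, z]))
print(abs(pcor[0, 2]) < 0.1, pcor[0, 1] > 0.5)
```

Thresholding or testing the off-diagonal entries then yields the edge set whose density can be compared between hybrid and parental lines.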
Using ESTs for phylogenomics
(2008)
Background
While full genome sequences are still only available for a handful of taxa, large collections of partial gene sequences are available for many more. The alignment of partial gene sequences results in a multiple sequence alignment containing large gaps that are arranged in a staggered pattern. The consequences of this pattern of missing data for the accuracy of phylogenetic analysis are not well understood. We conducted a simulation study to determine the accuracy of phylogenetic trees obtained from gappy alignments using three commonly used phylogenetic reconstruction methods (Neighbor Joining, Maximum Parsimony, and Maximum Likelihood) and studied ways to improve the accuracy of trees obtained from such datasets.
Results
We found that the pattern of gappiness in multiple sequence alignments derived from partial gene sequences substantially compromised phylogenetic accuracy even in the absence of alignment error. The decline in accuracy was beyond what would be expected based on the amount of missing data. The decline was particularly dramatic for Neighbor Joining and Maximum Parsimony, where the majority of gappy alignments contained 25% to 40% incorrect quartets. To improve the accuracy of the trees obtained from a gappy multiple sequence alignment, we examined two approaches. In the first approach, alignment masking, potentially problematic columns and input sequences are excluded from the dataset. Even in the absence of alignment error, masking improved phylogenetic accuracy up to 100-fold. However, masking retained, on average, only 83% of the input sequences. In the second approach, alignment subdivision, the missing data is statistically modelled in order to retain as many sequences as possible in the phylogenetic analysis. Subdivision resulted in more modest improvements to alignment accuracy, but succeeded in including almost all of the input sequences.
Conclusion
These results demonstrate that partial gene sequences and gappy multiple sequence alignments can pose a major problem for phylogenetic analysis. The concern will be greatest for high-throughput phylogenomic analyses, in which Neighbor Joining is often the preferred method due to its computational efficiency. Both approaches can be used to increase the accuracy of phylogenetic inference from a gappy alignment. The choice between the two approaches will depend upon how robust the application is to the loss of sequences from the input set, with alignment masking generally giving a much greater improvement in accuracy but at the cost of discarding a larger number of the input sequences.
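The masking approach can be approximated by a simple gap-fraction filter over alignment columns. The study's masking criteria are more elaborate (they also exclude problematic input sequences), so the threshold and function below are only a sketch.

```python
# Crude stand-in for alignment masking: drop columns whose gap fraction
# exceeds a threshold. The study's actual criteria are more involved.

def mask_columns(alignment, max_gap_frac=0.5):
    """Keep only columns where at most max_gap_frac of sequences have a gap."""
    ncols = len(alignment[0])
    keep = [j for j in range(ncols)
            if sum(seq[j] == "-" for seq in alignment) / len(alignment)
               <= max_gap_frac]
    return ["".join(seq[j] for j in keep) for seq in alignment]

# Partial gene sequences produce staggered gap blocks like this toy alignment:
aln = ["ACGT----",
       "ACGTAC--",
       "--GTACGT"]
print(mask_columns(aln))
```

Columns dominated by gaps (here the last two) are removed before tree inference; the trade-off discussed above is that aggressive masking can also discard entire sequences.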
Background
Many animals live in environments where different types of predators pose a permanent threat and call for predator-specific strategies. When foraging, animals have to balance the competing needs of food and safety in order to survive. While animals sometimes can choose between microhabitats that differ in their risk of predation, many habitats are uniform in their risk distribution. So far, little is known about adaptive antipredator behavior under uniform risk. We simulated two predator types, avian and mammalian, each representing a spatially uniform risk in artificial resource landscapes. Voles served as experimental foragers.
Results
Animals were exposed to factorial combinations of weasel odour and ground cover to simulate avian and/or mammalian predation. We measured short and long term responses with video analysis and giving-up densities. The results show that previously experienced conditions cause delayed effects. After these effects ceased, the risks of both types of predation caused a reduction in food intake. Avian predation induced a concentration on a smaller number of feeding patches. While higher avian risk caused a delay in activity, the weasel odour shortened the latency until the voles started to be active.
Conclusion
We show that the voles discriminated between risk types and adjusted their feeding strategies accordingly. Responses to avian and mammalian risk differed in both strength and time scale. Uniformity of risk resulted in a concentration of foraging investment and lower foraging efficiency.
GeneFisher-P
(2008)
Background: PCR primer design is an everyday but nontrivial task that requires state-of-the-art software. We describe the popular tool GeneFisher and explain its recent restructuring using workflow techniques. We apply a service-oriented approach to model and implement GeneFisher-P, a process-based version of the GeneFisher web application, as a part of the Bio-jETI platform for service modeling and execution. We show how to introduce a flexible process layer to meet the growing demand for improved user-friendliness and flexibility.
Results: Within Bio-jETI, we model the process using the jABC framework, a mature model-driven, service-oriented process definition platform. We encapsulate remote legacy tools and integrate web services using jETI, an extension of the jABC for seamless integration of remote resources as basic services, ready to be used in the process. Some of the basic services used by GeneFisher are in fact already provided as individual web services at BiBiServ and can be directly accessed. Others are legacy programs, and are made available to Bio-jETI via the jETI technology.
The full power of service-based process orientation is required when more bioinformatics tools, available as web services or via jETI, lead to easy extensions or variations of the basic process. This concerns for instance variations of data retrieval or alignment tools as provided by the European Bioinformatics Institute (EBI).
Conclusions: The resulting service-and process-oriented GeneFisher-P demonstrates how basic services from heterogeneous sources can be easily orchestrated in the Bio-jETI platform and lead to a flexible family of specialized processes tailored to specific tasks.
An efficient electrocatalytic biosensor for sulfite detection was developed by co-immobilizing sulfite oxidase and cytochrome c with polyaniline sulfonic acid in a layer-by-layer assembly. QCM, UV-Vis spectroscopy and cyclic voltammetry revealed increasing loading of electrochemically active protein with the formation of multilayers. The sensor operates reagent-free at a low working potential. A catalytic oxidation current was detected in the presence of sulfite at the modified gold electrode, polarized at +0.1 V (vs. Ag/AgCl, 1 M KCl). The stability of the biosensor performance was characterized and optimized. A 17-bilayer electrode has a linear range between 1 and 60 µM sulfite with a sensitivity of 2.19 mA M⁻¹ sulfite and a response time of 2 min. The electrode retained a stable response for 3 days with a serial reproducibility of 3.8% and lost 20% of its sensitivity after 5 days of operation. It is possible to store the sensor in a dry state for more than 2 months. The multilayer electrode was used for determination of sulfite in unspiked and spiked samples of red and white wine. The recovery and the specificity of the signals were evaluated for each sample.
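The reported calibration translates directly into a read-out rule: divide the measured catalytic current by the sensitivity and check the result against the linear range. The helper below is a hypothetical illustration using the figures quoted above (2.19 mA per M sulfite, linear between 1 and 60 µM); the function name is invented.

```python
# Hypothetical read-out helper based on the calibration reported above.
SENSITIVITY = 2.19e-3          # A per (mol/L) of sulfite
LINEAR_RANGE = (1e-6, 60e-6)   # mol/L, the reported linear range

def sulfite_concentration(current_amps):
    """Convert a measured catalytic current into a sulfite concentration,
    rejecting values that fall outside the sensor's linear range."""
    c = current_amps / SENSITIVITY
    if not LINEAR_RANGE[0] <= c <= LINEAR_RANGE[1]:
        raise ValueError("current outside the calibrated linear range")
    return c

# A 65.7 nA catalytic current corresponds to a 30 uM sulfite sample:
print(sulfite_concentration(65.7e-9) * 1e6)
```

For wine samples, matrix effects are handled by the spiked-recovery procedure the abstract describes, not by the calibration line alone.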
We present varve chronologies for sediments from two maar lakes in the Valle de Santiago region (Central Mexico): Hoya La Alberca (AD 1852-1973) and Hoya Rincón de Parangueo (AD 1839-1943). These are the first varve chronologies for Mexican lakes. The varved sections were anchored with tephras from Colima (1913) and Paricutin (1943/1944) and ²¹⁰Pb ages. We compare the sequences using the thickness of seasonal laminae and element counts (Al, Si, S, Cl, K, Ti, Mn, Fe, and Sr) determined by micro X-ray fluorescence spectrometry. The formation of the varve sublaminae is attributed to the strongly seasonal climate regime. Limited rainfall and high evaporation rates in winter and spring induce precipitation of carbonates (high Ca, Sr) enriched in ¹³C and ¹⁸O, whereas rainfall in summer increases organic and clastic input (plagioclase, quartz) with high counts of lithogenic elements (K, Al, Ti, and Si). Eolian input of Ti occurs also in the dry season. Moving correlations (5-yr windows) of the Ca and Ti counts show similar development in both sequences until the 1930s. Positive correlations indicate mixing of allochthonous Ti and autochthonous Ca, while negative correlations indicate their separation in sublaminae. Negative excursions in the correlations correspond with historic and reconstructed droughts, El Niño events, and positive SST anomalies. Based on our data, droughts (3-7 year duration) were severe and centred around the following years: the early 1850s, 1865, 1880, 1895, 1905, 1915 and the late 1920s with continuation into the 1930s. The latter dry period brought both lake systems into a critical state, making them susceptible to further drying. Groundwater overexploitation due to the expansion of irrigation agriculture in the region after 1940 induced the transition from calcite to aragonite precipitation in Alberca and halite infiltration in Rincón.
The proxy data indicate a faster response to increased evaporation for Rincón, the lake with the larger maar dimensions, solar radiation receipt and higher conductivity, whereas the smaller, steeper Alberca maar responded rapidly to increased precipitation.
Special p-forms are forms which have components f_{μ1…μp} equal to +1, −1 or 0 in some orthonormal basis. A p-form φ ∈ Λ^p(ℝ^d) is called democratic if the set of nonzero components {φ_{μ1…μp}} is symmetric under the transitive action of a subgroup of O(d,Z) on the indices {1, …, d}. Knowledge of these symmetry groups allows us to define mappings of special democratic p-forms in d dimensions to special democratic P-forms in D dimensions for successively higher P ≥ p and D ≥ d. In particular, we display a remarkable nested structure of special forms including a U(3)-invariant 2-form in six dimensions, a G2-invariant 3-form in seven dimensions, a Spin(7)-invariant 4-form in eight dimensions and a special democratic 6-form O in ten dimensions. The latter has the remarkable property that its contraction with one of five distinct bivectors yields, in the orthogonal eight dimensions, the Spin(7)-invariant 4-form. We discuss various properties of this ten-dimensional form.
The satellite era brings new challenges in the development and implementation of potential field models. A major task is therefore the long-term exploitation of existing space- and ground-based gravity and magnetic data. Moreover, continuous and near real-time global monitoring of the Earth system calls for a consistent integration and assimilation of these data into complex models of the Earth's gravity and magnetic fields, which have to accommodate the constantly increasing amount of available data. In this paper we propose how to speed up the computation of the normal equations in potential field modeling by using local multi-polar approximations of the modeling functions. The basic idea is to take advantage of the rather smooth behavior of the internal fields at satellite altitude and to replace the full available gravity or magnetic data by a collection of local moments. We also investigate the optimal values for the free parameters of our method. Results from numerical experiments with spherical harmonic models based on both scalar gravity potential and magnetic vector data are presented and discussed. The newly developed method clearly shows that very large datasets can be used in potential field modeling in a fast and more economic manner.
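The local-moment idea can be illustrated in one dimension: instead of evaluating a smooth function at every data point, each local cluster of points is collapsed to a zeroth moment (a count) and a first moment (the summed offset from the cell centre), and the function is evaluated once per cell. This toy is only an analogy for the paper's scheme for speeding up normal-equation assembly; the bin width and test function are arbitrary choices.

```python
import numpy as np

# 1-D analogy of local multi-polar approximation (hypothetical scheme):
# replace per-point evaluations of a smooth function by a first-order
# Taylor expansion about each cell centre, using two stored moments.

def exact_sum(f, pts):
    return sum(f(p) for p in pts)

def moment_sum(f, df, pts, width=0.05):
    cells = {}
    for p in pts:
        k = int(p // width)
        m0, m1 = cells.get(k, (0, 0.0))                 # count, summed offset
        cells[k] = (m0 + 1, m1 + (p - (k + 0.5) * width))
    # One evaluation of f and its derivative df per cell, not per point:
    return sum(m0 * f((k + 0.5) * width) + m1 * df((k + 0.5) * width)
               for k, (m0, m1) in cells.items())

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, 10000)
rel = (abs(exact_sum(np.sin, pts) - moment_sum(np.sin, np.cos, pts))
       / exact_sum(np.sin, pts))
print(rel)   # relative error of the moment approximation
```

Because the fields are smooth at satellite altitude, a small number of moments per cell preserves accuracy while the cost drops from one evaluation per datum to one per cell.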
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
(2010)
Background
In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space.
Results
The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability.
Conclusions
While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
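For a scalar ODE dx/dt = f(x) the construction is explicit: the continuity equation implies d(log ρ)/dt = -f'(x) along a characteristic, so one extra state dimension carries the density. The sketch below integrates the extended system for the linear ODE dx/dt = -ax, where the result can be checked analytically; the decay rate and initial density are arbitrary choices for illustration.

```python
import numpy as np

# For dx/dt = f(x), the density along a characteristic obeys
# d(log rho)/dt = -f'(x): one extra state dimension carries the density.
a = 0.7                                  # toy ODE dx/dt = -a*x, so -f'(x) = a

def rhs(s):
    x, logrho = s
    return np.array([-a * x, a])

def rk4(s, dt, steps):
    """Classical fourth-order Runge-Kutta for the extended system."""
    for _ in range(steps):
        k1 = rhs(s); k2 = rhs(s + dt / 2 * k1)
        k3 = rhs(s + dt / 2 * k2); k4 = rhs(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

x0, T = 1.5, 2.0
rho0 = np.exp(-x0 ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal initial density
xT, logrhoT = rk4(np.array([x0, np.log(rho0)]), 1e-3, 2000)

# Analytic check: x(T) = x0 * e^(-aT); the density grows by e^(aT)
# because probability mass contracts toward the origin.
print(np.isclose(xT, x0 * np.exp(-a * T)),
      np.isclose(np.exp(logrhoT), rho0 * np.exp(a * T)))
```

One such integration yields the density at the endpoint of a single characteristic; unlike Monte Carlo, characteristics can be launched deliberately into low-probability regions.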
Background: Cysteine is a component of organic compounds, including glutathione, that have been implicated in the adaptation of plants to stresses. O-acetylserine (thiol) lyase (OAS-TL) catalyses the final step of cysteine biosynthesis. OAS-TL enzyme isoforms are localised in the cytoplasm, the plastids and the mitochondria, but the contribution of individual OAS-TL isoforms to plant sulphur metabolism has not yet been fully clarified.
Results: The seedling lethal phenotype of the Arabidopsis onset of leaf death3-1 (old3-1) mutant is due to a point mutation in the OAS-A1 gene, encoding the cytosolic OAS-TL. The mutation causes a single amino acid substitution from Gly(162) to Glu(162), abolishing old3-1 OAS-TL activity in vitro. The old3-1 mutation segregates as a monogenic semi-dominant trait when backcrossed to its wild type accession Landsberg erecta (Ler-0) and the Di-2 accession. Consistent with its semi-dominant behaviour, wild type Ler-0 plants transformed with the mutated old3-1 gene displayed the early leaf death phenotype. However, the old3-1 mutation segregates in an 11:4:1 (wild type : semi-dominant : mutant) ratio when backcrossed to the Columbia-0 and Wassilewskija accessions. Thus, the early leaf death phenotype depends on two semi-dominant loci. The second locus that determines the old3-1 early leaf death phenotype is referred to as odd-ler (for old3 determinant in the Ler accession) and is located on chromosome 3. The early leaf death phenotype is temperature dependent and is associated with increased expression of defence-response and oxidative-stress marker genes. Independent of the presence of the odd-ler gene, OAS-A1 is involved in maintaining sulphur and thiol levels and is required for resistance against cadmium stress.
Conclusions: The cytosolic OAS-TL is involved in maintaining organic sulphur levels. The old3-1 mutation causes genome-dependent and independent phenotypes and uncovers a novel function for the mutated OAS-TL in cell death regulation.
Und der Zukunft abgewandt
(2010)
Since the end of the GDR, which initiated the collapse of the Eastern Bloc and with it the end of the Cold War, there have been increasing attempts to define the nature of this state and thereby to understand and classify its consequences at the economic, social, psychological and educational-policy levels. In this volume, Alexandra Budke analyses the school subject of geography, which, alongside civics and history, was a central subject in which the "civic, ideological, or worldview-based education" defined in the curricula was to take place on the basis of Marxism-Leninism. She examines the extent to which geography lessons in the GDR were used to communicate and disseminate the geopolitical interests of the state. Through this detailed analysis of subject teaching, the book also addresses the questions of whether pupils were politically manipulated in class, and what scope for action the central actors of the classroom, the teachers and the pupils, perceived within the curricular framework set by education policy.
Live cell flattening
(2010)
Eukaryotic cell flattening is valuable for improving microscopic observations, ranging from bright field (BF) to total internal reflection fluorescence (TIRF) microscopy. Fundamental processes, such as mitosis and in vivo actin polymerization, have been investigated using these techniques. Here, we review the well known agar overlayer protocol and the oil overlay method. In addition, we present more elaborate microfluidics-based techniques that provide us with a greater level of control. We demonstrate these techniques on the social amoebae Dictyostelium discoideum, comparing the advantages and disadvantages of each method.
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by relative proportions of cell types involved. Conclusions have to rely on estimation of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence activated cell sorting, or in-silico deconfounding. We studied feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm regarding detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues.
Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available.
Conclusions: The deconfounding algorithm without decorrelation using quantile normalization on non-log data is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine, in realistically noisy conditions and with moderate sample sizes.
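The central deconfounding step described above can be illustrated with a minimal sketch. The study estimates both factors of a non-negative matrix decomposition; here, as a simplification, we assume the reference expression signatures of the pure (sorted) cell types are already known, so the proportions in a single mixed sample follow from a non-negative least-squares fit. All names and values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(signatures, mixed_sample):
    """Estimate cell-type proportions of one heterogeneous sample.

    signatures   : (n_genes, n_celltypes) expression of sorted/pure cell types
    mixed_sample : (n_genes,) expression measured on the mixed tissue
    Returns proportions normalised to sum to one.
    """
    coeffs, _residual = nnls(signatures, mixed_sample)
    return coeffs / coeffs.sum()

# toy example: five genes, two cell types mixed at 70:30
rng = np.random.default_rng(0)
S = rng.uniform(1.0, 10.0, size=(5, 2))
p_true = np.array([0.7, 0.3])
mixed = S @ p_true

p_est = estimate_proportions(S, mixed)
print(np.round(p_est, 3))
```

With noise-free synthetic data the true proportions are recovered exactly; on real expression data, the normalization choices discussed in the abstract (quantile normalization, non-log scale) become decisive.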
Multi-color fluorescence imaging experiments of wave forming Dictyostelium cells have revealed that actin waves separate two domains of the cell cortex that differ in their actin structure and phosphoinositide composition. We propose a bistable model of actin dynamics to account for these experimental observations. The model is based on the simplifying assumption that the actin cytoskeleton is composed of two distinct network types, a dendritic and a bundled network. The two structurally different states that were observed in experiments correspond to the stable fixed points in the bistable regime of this model. Each fixed point is dominated by one of the two network types. The experimentally observed actin waves can be considered as trigger waves that propagate transitions between the two stable fixed points.
The Takab complex is composed of a variety of metamorphic rocks including amphibolites, metapelites, mafic granulites, migmatites and meta-ultramafics, which are intruded by the granitoid. The granitoid magmatic activity occurred in relation to the subduction of the Neo-Tethys oceanic crust beneath the Iranian crust during Tertiary times. The granitoids are mainly granodiorite, quartz monzodiorite, monzonite and quartz diorite. Chemically, the magmatic rocks are characterized by ASI < 1.04, AI < 0.87 and high contents of CaO (up to ∼14.5 wt%), which are consistent with the I-type magmatic series. Low FeOt/(FeOt+MgO) values (<0.75) as well as low Nb, Y and K2O contents of the investigated rocks resemble the calc-alkaline series. Low SiO2, K2O/Na2O and Al2O3 accompanied by high CaO and FeO contents indicate melting of metabasites as an appropriate source for the intrusions. Negative Ti and Nb anomalies verify a metaluminous crustal origin for the protoliths of the investigated igneous rocks. These are comparable with compositions of the associated mafic migmatites in the Takab metamorphic complex, which originated from the partial melting of amphibolites. Therefore, crustal melting and a collision-related origin for the Takab calc-alkaline intrusions are proposed here on the basis of mineralogy and geochemical characteristics. The P–T evolution during magmatic crystallization and subsolidus cooling stages is determined by the study of mineral chemistry of the granodiorite and the quartz diorite. Magmatic crystallization pressure and temperature for the quartz diorite and the granodiorite are estimated to be P ∼ 7.8 ± 2.5 kbar, T ∼ 760 ± 75 °C and P ∼ 5 ± 1 kbar, T ∼ 700 °C, respectively. Subsolidus conditions are consistent with temperatures of ∼620 °C and ∼600 °C, and pressures of ∼5 kbar and ∼3.5 kbar for the quartz diorite and the granodiorite, respectively.
We present an approach that provides automatic or semi-automatic support for evolution and change management in heterogeneous legacy landscapes where (1) legacy heterogeneous, possibly distributed platforms are integrated in a service oriented fashion, (2) the coordination of functionality is provided at the service level, through orchestration, (3) compliance and correctness are provided through policies and business rules, (4) evolution and correctness-by-design are supported by the eXtreme Model Driven Development paradigm (XMDD) offered by the jABC (Margaria and Steffen in Annu. Rev. Commun. 57, 2004)—the model-driven service oriented development platform we use here for integration, design, evolution, and governance. The artifacts are here semantically enriched, so that automatic synthesis plugins can field the vision of Enterprise Physics: knowledge driven business process development for the end user.
We demonstrate this vision along a concrete case study that became over the past three years a benchmark for Semantic Web Service discovery and mediation. We enhance the Mediation Scenario of the Semantic Web Service Challenge along the 2 central evolution paradigms that occur in practice: (a) Platform migration: platform substitution of a legacy system by an ERP system and (b) Backend extension: extension of the legacy Customer Relationship Management (CRM) and Order Management System (OMS) backends via an additional ERP layer.
Background: Protein phosphorylation is an important post-translational modification influencing many aspects of dynamic cellular behavior. Site-specific phosphorylation of amino acid residues serine, threonine, and tyrosine can have profound effects on protein structure, activity, stability, and interaction with other biomolecules. Phosphorylation sites can be affected in diverse ways in members of any species, one such way is through single nucleotide polymorphisms (SNPs). The availability of large numbers of experimentally identified phosphorylation sites, and of natural variation datasets in Arabidopsis thaliana prompted us to analyze the effect of non-synonymous SNPs (nsSNPs) onto phosphorylation sites.
Results: From the analyses of 7,178 experimentally identified phosphorylation sites we found that: (i) Proteins with multiple phosphorylation sites occur more often than expected by chance. (ii) Phosphorylation hotspots show a preference to be located outside conserved domains. (iii) nsSNPs affected experimental phosphorylation sites as much as the corresponding non-phosphorylated amino acid residues. (iv) Losses of experimental phosphorylation sites by nsSNPs were identified in 86 A. thaliana proteins, among them receptor proteins were overrepresented.
These results were confirmed by similar analyses of predicted phosphorylation sites in A. thaliana. In addition, predicted threonine phosphorylation sites showed a significant enrichment of nsSNPs towards asparagines and a significant depletion of the synonymous substitution. Proteins in which predicted phosphorylation sites were affected by nsSNPs (loss and gain), were determined to be mainly receptor proteins, stress response proteins and proteins involved in nucleotide and protein binding. Proteins involved in metabolism, catalytic activity and biosynthesis were less affected.
Conclusions: We analyzed more than 7,100 experimentally identified phosphorylation sites in almost 4,300 protein-coding loci in silico, thus constituting the largest phosphoproteomics dataset for A. thaliana available to date. Our findings suggest a relatively high variability in the presence or absence of phosphorylation sites between different natural accessions in receptor and other proteins involved in signal transduction. Elucidating the effect of phosphorylation sites affected by nsSNPs on adaptive responses represents an exciting research goal for the future.
It has been suggested that coronal mass ejections (CMEs) remove the magnetic helicity of their coronal source region from the Sun. Such removal is often regarded to be necessary due to the hemispheric sign preference of the helicity, which inhibits a simple annihilation by reconnection between volumes of opposite chirality. Here we monitor the relative magnetic helicity contained in the coronal volume of a simulated flux rope CME, as well as the upward flux of relative helicity through horizontal planes in the simulation box. The unstable and erupting flux rope carries away only a minor part of the initial relative helicity; the major part remains in the volume. This is a consequence of the requirement that the current through an expanding loop must decrease if the magnetic energy of the configuration is to decrease as the loop rises, to provide the kinetic energy of the CME.
Two recent magnetic field models, GRIMM and xCHAOS, describe core field accelerations with similar behavior up to Spherical Harmonic (SH) degree 5, but which differ significantly for higher degrees. These discrepancies, due to different approaches in smoothing rapid time variations of the core field, have strong implications for the interpretation of the secular variation. Furthermore, the amount of smoothing applied to the highest SH degrees is essentially the modeler’s choice. We therefore investigate new ways of regularizing core magnetic field models. Here we propose to constrain field models to be consistent with the frozen flux induction equation by co-estimating a core magnetic field model and a flow model at the top of the outer core. The flow model is required to have smooth spatial and temporal behavior. The implementation of such constraints and their effects on a magnetic field model built from one year of CHAMP satellite and observatory data, are presented. In particular, it is shown that the chosen constraints are efficient and can be used to build reliable core magnetic field secular variation and acceleration model components.
Background
Micrometer resolution placement and immobilization of probe molecules is an important step in the preparation of biochips and a wide range of lab-on-chip systems. Most known methods for such a deposition of several different substances are costly and only suitable for a limited number of probes. In this article we present a flexible procedure for simultaneous spatially controlled immobilization of functional biomolecules by molecular ink lithography.
Results
For the bottom-up fabrication of surface bound nanostructures a universal method is presented that allows the immobilization of different types of biomolecules with micrometer resolution. A supporting surface is biotinylated and streptavidin molecules are deposited with an AFM (atomic force microscope) tip at distinct positions. Subsequent incubation with a biotinylated molecule species leads to binding only at these positions. After washing streptavidin is deposited a second time with the same AFM tip and then a second biotinylated molecule species is coupled by incubation. This procedure can be repeated several times. Here we show how to immobilize different types of biomolecules in an arbitrary arrangement whereas most common methods can deposit only one type of molecules. The presented method works on transparent as well as on opaque substrates. The spatial resolution is better than 400 nm and is limited only by the AFM's positional accuracy after repeated z-cycles since all steps are performed in situ without moving the supporting surface. The principle is demonstrated by hybridization to different immobilized DNA oligomers and was validated by fluorescence microscopy.
Conclusions
The immobilization of different types of biomolecules in high-density microarrays is a challenging task for biotechnology. The method presented here not only allows for the deposition of DNA at submicrometer resolution but also for proteins and other molecules of biological relevance that can be coupled to biotin.
Fusarium spp. infection of cereal grain is a common problem, which leads to a dramatic loss of grain quality. The aim of the present study was to investigate the effect of Fusarium infection on the wheat storage protein gluten and its fractions, the gliadins and glutenins, in an in vitro model system. Gluten proteins were digested by F. graminearum proteases for 2, 4, 8 and 24 h, separated by Osborne fractionation and characterised by chromatographic (RP-HPLC) and electrophoretic analysis (SDS-PAGE). Gluten digestion by F. graminearum proteases showed a preference for the glutenins over the gliadins, with the HMW subfraction most affected. In comparison with an untreated control, the HMW subfraction was degraded by about 97% after 4 h of incubation with Fusarium proteases. Separate digestion of gliadin and glutenin underlined the preference for HMW-GS. Analogous to the observed change in the gluten composition, the yield of the extracted proteins changed. A higher amount of glutenin fragments was found in the gliadin extraction solution after digestion, which could mask a simultaneous gliadin degradation. This observation can help to explain the frequently reported reduction in glutenin amount in parallel with an increase in gliadin quantity after Fusarium infection in grains.
Many organisms have developed defences to avoid predation by species at higher trophic levels. The capability of primary producers to defend themselves against herbivores affects their own survival, can modulate the strength of trophic cascades and changes rates of competitive exclusion in aquatic communities. Algal species are highly flexible in their morphology, growth form, biochemical composition and production of toxic and deterrent compounds. Several of these variable traits in phytoplankton have been interpreted as defence mechanisms against grazing. Zooplankton feed with differing success on various phytoplankton species, depending primarily on size, shape, cell wall structure and the production of toxins and deterrents. Chemical cues associated with (i) mechanical damage, (ii) herbivore presence and (iii) grazing are the main factors triggering induced defences in both marine and freshwater phytoplankton, but most studies have failed to disentangle the exact mechanism(s) governing defence induction in any particular species. Induced defences in phytoplankton include changes in morphology (e.g. the formation of spines, colonies and thicker cell walls), biochemistry (such as production of toxins, repellents) and in life history characteristics (formation of cysts, reduced recruitment rate). Our categorization of inducible defences in terms of the responsible induction mechanism provides guidance for future work, as hardly any of the available studies on marine or freshwater plankton have performed all the treatments that are required to pinpoint the actual cue(s) for induction. We discuss the ecology of inducible defences in marine and freshwater phytoplankton with a special focus on the mechanisms of induction, the types of defences, their costs and benefits, and their consequences at the community level.
Due to their optical and electro-conductive attributes, carbazole derivatives are interesting materials for a large range of biosensor applications. In this study, we present the synthesis routes and fluorescence evaluation of newly designed carbazole fluorosensors that, by modification with uracil, have a special affinity for antiretroviral drugs via either Watson–Crick or Hoogsteen base pairing. To an N-octylcarbazole-uracil compound, four different groups were attached, namely thiophene, furan, ethylenedioxythiophene, and another uracil, yielding four different derivatives. Photophysical properties of these newly obtained derivatives are described, as are their interactions with reverse transcriptase inhibitors such as abacavir, zidovudine, lamivudine and didanosine. The influence of each analyte on biosensor fluorescence was assessed on the basis of the Stern–Volmer equation and represented by Stern–Volmer constants. Consequently, we have demonstrated that these carbazole-based structures with a uracil group may be successfully incorporated into alternative carbazole derivatives to form biosensors for the molecular recognition of antiretroviral drugs.
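The quantification mentioned above rests on the Stern–Volmer relation F0/F = 1 + K_SV·[Q], where F0 and F are the fluorescence intensities without and with the quenching analyte at concentration [Q]. As an illustrative sketch (synthetic data and a hypothetical constant, not values from the study), K_SV can be recovered from a quenching series by a least-squares fit of (F0/F − 1) against [Q] through the origin:

```python
import numpy as np

def stern_volmer_constant(concentrations, f0, f):
    """Fit K_SV from the linear Stern-Volmer plot F0/F = 1 + K_SV * [Q]."""
    q = np.asarray(concentrations, dtype=float)
    ratios = f0 / np.asarray(f, dtype=float)
    # least-squares slope of (F0/F - 1) = K_SV * [Q], forced through the origin
    return np.sum(q * (ratios - 1.0)) / np.sum(q * q)

# synthetic quenching data generated with K_SV = 250 L/mol
q = np.array([0.0, 1e-3, 2e-3, 4e-3, 8e-3])   # quencher (drug) concentration, mol/L
f0 = 100.0
f = f0 / (1.0 + 250.0 * q)                    # ideal Stern-Volmer behaviour

print(round(stern_volmer_constant(q, f0, f), 1))  # 250.0
```

On real data the plot may curve upward when both dynamic and static quenching contribute; the linear fit above then applies only to the low-concentration regime.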
Background
More than in other domains, the heterogeneous services world in bioinformatics demands a methodology to classify and relate resources in a manner accessible to both humans and machines. The Semantic Web, which is meant to address exactly this challenge, is currently one of the most ambitious projects in computer science. Collective efforts within the community have already led to a basis of standards for semantic service descriptions and meta-information. In combination with process synthesis and planning methods, such knowledge about types and services can facilitate the automatic composition of workflows for particular research questions.
Results
In this study we apply the synthesis methodology that is available in the Bio-jETI workflow management framework for the semantics-based composition of EMBOSS services. EMBOSS (European Molecular Biology Open Software Suite) is a collection of 350 tools (March 2010) for various sequence analysis tasks, and thus a rich source of services and types that imply comprehensive domain models for planning and synthesis approaches. We use and compare two different setups of our EMBOSS synthesis domain: 1) a manually defined domain setup where an intuitive, high-level, semantically meaningful nomenclature is applied to describe the input/output behavior of the single EMBOSS tools and their classifications, and 2) a domain setup where this information has been automatically derived from the EMBOSS Ajax Command Definition (ACD) files and the EMBRACE Data and Methods ontology (EDAM). Our experiments demonstrate that these domain models in combination with our synthesis methodology greatly simplify working with the large, heterogeneous, and hence manually intractable EMBOSS collection. However, they also show that with the information that can be derived from the (current) ACD files and EDAM ontology alone, some essential connections between services cannot be recognized.
Conclusions
Our results show that adequate domain modeling requires incorporating as much domain knowledge as possible, far beyond the mere technical aspects of the different types and services. Finding or defining semantically appropriate service and type descriptions is a difficult task, but the bioinformatics community appears to be on the right track towards a Life Science Semantic Web, which will eventually allow automatic service composition methods to unfold their full potential.
Background
Riociguat is the first of a new class of drugs, the soluble guanylate cyclase (sGC) stimulators. Riociguat has a dual mode of action: it sensitizes sGC to the body’s own NO and can also increase sGC activity in the absence of NO. The NO-sGC pathway is impaired in many cardiovascular diseases such as heart failure, pulmonary hypertension and diabetic nephropathy (DN). DN leads to high cardiovascular morbidity and mortality, and there is still a high unmet medical need. The urinary albumin excretion rate is a predictive biomarker for these clinical events. Therefore, we investigated the effect of riociguat, alone and in combination with the angiotensin II receptor antagonist (ARB) telmisartan, on the progression of DN in diabetic eNOS knockout mice, a new model closely resembling human pathology.
Methods
Seventy-six male eNOS knockout C57BL/6J mice were divided into 4 groups after receiving intraperitoneal high-dose streptozotocin: telmisartan (1 mg/kg), riociguat (3 mg/kg), riociguat+telmisartan (3 and 1 mg/kg), and vehicle. Fourteen mice were used as non-diabetic controls. After 12 weeks, urine and blood were obtained and blood pressure measured. Glucose concentrations were highly increased and similar in all diabetic groups.
Results
Riociguat, alone (105.2 ± 2.5 mmHg; mean±SEM; n = 14) and in combination with telmisartan (105.0 ± 3.2 mmHg; n = 12), significantly reduced blood pressure versus diabetic controls (117.1 ± 2.2 mmHg; n = 14; p = 0.002 and p = 0.004, respectively), whereas telmisartan alone (111.2 ± 2.6 mmHg) showed a modest blood pressure lowering trend (p = 0.071; n = 14). Single treatment with either riociguat (97.1 ± 15.7 µg/d; n = 13) or telmisartan (97.8 ± 26.4 µg/d; n = 14) did not significantly lower albumin excretion on its own (p = 0.067 and p = 0.101, respectively). However, the combined treatment led to significantly lower urinary albumin excretion (47.3 ± 9.6 µg/d; n = 12) compared to diabetic controls (170.8 ± 34.2 µg/d; n = 13; p = 0.004), and reached levels similar to non-diabetic controls (31.4 ± 10.1 µg/d, n = 12).
Conclusion
Riociguat significantly reduced urinary albumin excretion in diabetic eNOS knockout mice that were refractory to treatment with ARBs alone. Patients with diabetic nephropathy refractory to treatment with ARBs have the worst prognosis among all patients with diabetic nephropathy. Our data indicate that additional stimulation of sGC on top of standard treatment with ARBs may offer a new therapeutic approach for patients with diabetic nephropathy resistant to ARB treatment.
Preference handling and optimization are indispensable means for addressing nontrivial applications in Answer Set Programming (ASP). However, their implementation becomes difficult whenever they bring about a significant increase in computational complexity. As a consequence, existing ASP systems do not offer complex optimization capacities, supporting, for instance, inclusion-based minimization or Pareto efficiency. Rather, such complex criteria are typically addressed by resorting to dedicated modeling techniques, like saturation. Unlike the ease of common ASP modeling, however, these techniques are rather involved and hardly usable by ASP laymen. We address this problem by developing a general implementation technique by means of meta-programming, thus reusing existing ASP systems to capture various forms of qualitative preferences among answer sets. In this way, complex preferences and optimization capacities become readily available for ASP applications.
Restoration of semi-natural grassland communities involves a combination of (1) sward disturbance to create a temporal window for establishment, and (2) target species introduction, the latter usually by seed sowing. With great regularity, particular species establish only poorly. More reliable establishment could improve the outcome of restoration projects and increase cost-effectiveness. We investigated the abiotic germination niche of ten poorly establishing calcareous grassland species by simultaneously exploring the effects of moisture and light availability and temperature fluctuation on percentage germination and speed of germination. We also investigated the effects of three different pre-treatments used to enhance seed germination (cold-stratification, osmotic priming and priming in combination with gibberellic acid (GA3)) and how these affected abiotic germination niches. Species varied markedly in the width of their abiotic germination niche, ranging from Carex flacca with very strict abiotic requirements, to several species reliably germinating across the whole range of abiotic conditions. Our results suggest pronounced differences between species in gap requirements for establishment. Germination was improved in most species by at least one pre-treatment. Evidence for positive effects of adding GA3 to seed priming solutions was limited. In several species, pre-treated seeds germinated under a wider range of abiotic conditions than untreated seeds. Improved knowledge of species-specific germination niches and the effects of seed pre-treatments may help to improve species establishment by sowing, and to identify species for which sowing at a later stage of restoration or introduction as small plants may represent a more viable strategy.
Increased antioxidant capacity in the plasma of dogs after a single oral dosage of tocotrienols
(2011)
The intestinal absorption of tocotrienols (TCT) in dogs is, to our knowledge, so far unknown. Adult Beagle dogs (n = 8) were administered a single oral dosage of a TCT-rich fraction (TRF; 40 mg/kg body weight) containing 32 % α-TCT, 2 % β-TCT, 27 % γ-TCT, 14 % δ-TCT and 25 % α-tocopherol (α-TCP). Blood was sampled at baseline (fasted), 1, 2, 3, 4, 5, 6, 8 and 12 h after supplementation. Plasma and chylomicron concentrations of TCT and α-TCP were measured at each time point. Plasma TAG were measured enzymatically, and plasma antioxidant capacity was assessed by the Trolox equivalent antioxidant capacity assay. In fasted dogs, levels of TCT were 0·07 (SD 0·03) μmol/l. Following the administration of the TRF, total plasma TCT peaked at 2 h (7·16 (SD 3·88) μmol/l; P < 0·01) and remained above baseline levels (0·67 (SD 0·44) μmol/l; P < 0·01) at 12 h. The TCT response in chylomicrons paralleled the increase in TCT in plasma with a maximum peak (3·49 (SD 2·06) μmol/l; P < 0·01) at 2 h post-dosage. α-TCP was the major vitamin E form detected in plasma and was unaffected by TRF supplementation. The Trolox equivalent values increased from 2 h (776 (SD 51·2) μmol/l) to a maximum at 12 h (1130 (SD 7·72) μmol/l; P < 0·01). The results show that TCT are detected in the postprandial plasma of dogs. The increase in antioxidant capacity suggests a potential beneficial role of TCT supplementation in the prevention or treatment of several diseases in dogs.
We introduce an approach to detecting inconsistencies in large biological networks by using answer set programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on answer set programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.
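One common formalisation of the consistency notion referred to above is the sign-consistency condition on an influence graph: every observed increase or decrease of a species that has regulators must be explainable by at least one regulator whose observed change, multiplied by the sign of the regulation, matches. The paper encodes this check in ASP; the following Python sketch is only an illustrative re-statement of the condition, with hypothetical data:

```python
def sign_consistent(edges, labels):
    """Find observations that violate the sign-consistency condition.

    edges : list of (regulator, target, sign) with sign in {+1, -1}
    labels: dict node -> +1 or -1 (observed variation in the profile)
    Returns the list of nodes whose observed change no regulator explains.
    """
    preds = {}
    for src, dst, sign in edges:
        preds.setdefault(dst, []).append((src, sign))
    violations = []
    for node, change in labels.items():
        incoming = preds.get(node)
        if not incoming:
            continue  # unregulated nodes (inputs) are unconstrained
        if not any(labels.get(src, 0) * sign == change for src, sign in incoming):
            violations.append(node)
    return violations

# toy influence graph: a activates b, c represses b
edges = [("a", "b", +1), ("c", "b", -1)]
# b decreases although a is up (predicts increase) and c is down (also increase)
violations = sign_consistent(edges, {"a": +1, "b": -1, "c": -1})
print(violations)  # ['b']
```

Minimal sets of such violating observations correspond to the minimal representations of conflicts that the ASP encoding extracts to pinpoint unreliable data or missing reactions.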
Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist from this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield a single model only, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in the view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appeal to Answer Set Programming (ASP), overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by some of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor, providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic algorithms. First, it is declarative and thus transparent for biological experts. Second, it is elaboration tolerant and thus allows for an easy exploration and incorporation of biological constraints. Third, it allows for exploring the entire space of possible models. Finally, our approach offers an excellent performance, matching existing, special-purpose systems.
Although horses and donkeys belong to the same genus, their genetic characteristics probably result in specific proteomes and post-translational modifications (PTM) of proteins. Since PTM can alter protein properties, specific PTM may contribute to species-specific characteristics. Therefore, the aim of the present study was to analyse differences in serum protein profiles of horses and donkeys as well as mules, which combine the genetic backgrounds of both species. Additionally, changes in PTM of the protein transthyretin (TTR) were analysed. Serum protein profiles of each species (five animals per species) were determined using strong anion exchanger ProteinChips® (Bio-Rad, Munich, Germany) in combination with surface-enhanced laser desorption ionisation-time of flight MS. The PTM of TTR were analysed subsequently by immunoprecipitation in combination with matrix-assisted laser desorption ionisation-time of flight MS. Protein profiling revealed species-specific differences in the proteome, with some protein peaks present in all three species as well as protein peaks that were unique for donkeys and mules, horses and mules or for horses alone. The molecular weight of TTR of horses and donkeys differed by 30 Da, and both species revealed several modified forms of TTR besides the native form. The mass spectra of mules represented a merging of the TTR spectra of horses and donkeys. In summary, the present study indicated that there are substantial differences in the proteome of horses and donkeys. Additionally, the results probably indicate that the proteome of mules reveals a higher similarity to donkeys than to horses.
Using the notion of an elementary loop, Gebser and Schaub (2005. Proceedings of the Eighth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’05 ), 53–65) refined the theorem on loop formulas attributable to Lin and Zhao (2004) by considering loop formulas of elementary loops only. In this paper, we reformulate the definition of an elementary loop, extend it to disjunctive programs, and study several properties of elementary loops, including how maximal elementary loops are related to minimal unfounded sets. The results provide useful insights into the stable model semantics in terms of elementary loops. For a nondisjunctive program, using a graph-theoretic characterization of an elementary loop, we show that the problem of recognizing an elementary loop is tractable. On the other hand, we also show that the corresponding problem is coNP-complete for a disjunctive program. Based on the notion of an elementary loop, we present the class of Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class of Head-Cycle-Free (HCF) programs attributable to Ben-Eliyahu and Dechter (1994. Annals of Mathematics and Artificial Intelligence 12, 53–87). Like an HCF program, an HEF program can be turned into an equivalent nondisjunctive program in polynomial time by shifting head atoms into the body.
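The graph-theoretic view mentioned above can be illustrated with a small sketch: the maximal loops of a nondisjunctive program are exactly the strongly connected components of its positive dependency graph. The toy program and the helper below are illustrative assumptions, not the paper's elementary-loop recognition algorithm.

```python
# Sketch: maximal loops of a nondisjunctive logic program correspond to the
# strongly connected components (SCCs) of its positive dependency graph.
# Kosaraju's two-pass algorithm; fine for small graphs (recursion in pass 1).

def sccs(graph):
    """graph maps each atom to the set of atoms it positively depends on."""
    order, visited = [], set()

    def dfs(u):  # first pass: record finishing order on the original graph
        visited.add(u)
        for v in graph.get(u, ()):
            if v not in visited:
                dfs(v)
        order.append(u)

    for u in graph:
        if u not in visited:
            dfs(u)

    rev = {}  # reversed graph for the second pass
    for u in graph:
        for v in graph[u]:
            rev.setdefault(v, set()).add(u)

    seen, components = set(), []
    for u in reversed(order):  # second pass in decreasing finishing order
        if u in seen:
            continue
        stack, comp = [u], set()
        seen.add(u)
        while stack:
            x = stack.pop()
            comp.add(x)
            for y in rev.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        components.append(comp)
    return components

# Toy program:  a :- b.   b :- a.   c :- a.
deps = {"a": {"b"}, "b": {"a"}, "c": {"a"}}
```

For this program the nontrivial SCC {a, b} is the single maximal loop; {c} is a trivial component with no loop through it.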
The space missions Voyager and Cassini together with earthbound observations revealed a wealth of structures in Saturn’s rings. There are, for example, waves being excited at ring positions which are in orbital resonance with Saturn’s moons. Other structures can be assigned to embedded moons, like empty gaps, moon-induced wakes or S-shaped propeller features. Furthermore, irregular radial structures are observed on scales ranging from about 10 meters up to kilometers. Here some of these structures will be discussed in the framework of hydrodynamical modeling of Saturn’s dense rings. For this purpose we characterize the physical properties of the ring particle ensemble by mean field quantities and point to the special behavior of the transport coefficients. We show that unperturbed rings can become unstable and how diffusion acts in the rings. Additionally, the alternative streamline formalism is introduced to describe perturbed regions of dense rings, with applications to wake damping and the dispersion relation of density waves.
We show that the residue density of the logarithm of a generalized Laplacian on a closed manifold defines an invariant polynomial-valued differential form. We express it in terms of a finite sum of residues of classical pseudodifferential symbols. In the case of the square of a Dirac operator, these formulas provide a pedestrian proof of the Atiyah–Singer formula for a pure Dirac operator in four dimensions and for a twisted Dirac operator on a flat space of any dimension. These correspond to special cases of a more general formula by Scott and Zagier. In our approach, which is of perturbative nature, we use either a Campbell–Hausdorff formula derived by Okikiolu or a noncommutative Taylor-type formula.
We study pattern-forming instabilities in reaction-advection-diffusion systems. We develop an approach based on Lyapunov-Bloch exponents to assess the impact of a spatially periodic mixing flow on the stability of a spatially homogeneous state. We consider flows that are periodic in space but may have arbitrary time dependence. We propose a time-discrete model, where reaction, advection, and diffusion act as successive operators, and show that a mixing advection can lead to a pattern-forming instability in a two-component system where only one of the species is advected. Physically, this can be explained as crossing the threshold of Turing instability due to an effective increase of one of the diffusion constants.
X-ray observations of young Planetary Nebulæ (PNe) have revealed diffuse emission in extended regions around both H-rich and H-deficient central stars. In order to also reproduce physical properties of H-deficient objects, we have first extended our time-dependent radiation-hydrodynamic models with heat conduction to such conditions. Here we present some of the important physical concepts which determine how and when a hot wind-blown bubble forms. In this study we have had to consider the largely unknown evolution of the CSPN, the slow (AGB) wind, the fast hot CSPN wind, and the chemical composition. The main conclusion of our work is that heat conduction is needed to explain the X-ray properties of wind-blown bubbles also in H-deficient objects.
To understand the evolution and morphology of planetary nebulae, a detailed knowledge of their central stars is required. Central stars that exhibit emission lines in their spectra, indicating stellar mass loss, allow us to study the evolution of planetary nebulae in action. Emission-line central stars constitute about 10 % of all central stars. Half of them are practically hydrogen-free Wolf-Rayet type central stars of the carbon sequence, [WC], that show strong emission lines of carbon and oxygen in their spectra. In this contribution we address the weak emission-line central stars (wels). These stars have been poorly analyzed and their hydrogen content is mostly unknown. We obtained optical spectra, including the important Balmer lines of hydrogen, for four weak emission-line central stars. We present the results of our analysis, provide a spectral classification and discuss possible explanations for their formation and evolution.
We present XMM-Newton and Chandra observations of the born-again planetary nebula A 30. These X-ray observations reveal a bright unresolved source at the position of the central star whose X-ray luminosity exceeds by far the model expectations for photospheric emission and for shocks within the stellar wind. We suggest that a “born-again hot bubble” may be responsible for this X-ray emission. Diffuse X-ray emission associated with the petal-like features and one of the H-poor knots seen in the optical is also found. The weakened emission of carbon lines in the spectrum of the diffuse emission can be interpreted as the dilution of stellar wind by mass-loading or as the detection of material ejected during a very late thermal pulse.
The safe upper limit for inclusion of vitamin A in complete diets for growing dogs is uncertain, with the result that current recommendations range from 5.24 to 104.80 mu mol retinol (5000 to 100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy (ME). The aim of the present study was to determine the effect of feeding four concentrations of vitamin A to puppies from weaning until 1 year of age. A total of forty-nine puppies, of two breeds, Labrador Retriever and Miniature Schnauzer, were randomly assigned to one of four treatment groups. Following weaning at 8 weeks of age, puppies were fed a complete food supplemented with retinyl acetate diluted in vegetable oil and fed at 1ml oil/100 g diet to achieve an intake of 5.24, 13.10, 78.60 and 104.80 mu mol retinol (5000, 12 500, 75 000 and 100 000 IU vitamin A)/4184 kJ (1000 kcal) ME. Fasted blood and urine samples were collected at 8, 10, 12, 14, 16, 20, 26, 36 and 52 weeks of age and analysed for markers of vitamin A metabolism and markers of safety including haematological and biochemical variables, bone-specific alkaline phosphatase, cross-linked carboxyterminal telopeptides of type I collagen and dual-energy X-ray absorptiometry. Clinical examinations were conducted every 4 weeks. Data were analysed by means of a mixed model analysis with Bonferroni corrections for multiple endpoints. There was no effect of vitamin A concentration on any of the parameters, with the exception of total serum retinyl esters, and no effect of dose on the number, type and duration of adverse events. We therefore propose that 104.80 mu mol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal) is a suitable safe upper limit for use in the formulation of diets designed for puppy growth.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA) together with its extension, dynamic FBA, have proven instrumental for analyzing the robustness and dynamics of metabolic networks by employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
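The steady-state optimization at the core of FBA can be sketched as a small linear program. The two-metabolite, three-reaction chain network, its bounds, and the "biomass" objective below are hypothetical illustrations, not the models used in the paper.

```python
# Sketch of the flux-balance core of (dynamic) FBA as a linear program:
# maximize an objective flux subject to steady state S @ v = 0 and bounds.
# Toy chain network:  -> A (v1),  A -> B (v2),  B -> biomass (v3).
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, 0],    # metabolite A: produced by v1, consumed by v2
              [0, 1, -1]])   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, 10), (0, 10)]  # flux bounds for v1, v2, v3
c = [0, 0, -1]               # linprog minimizes, so negate the objective flux v3
res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
```

Since the steady-state constraint forces v1 = v2 = v3 in this chain, the optimum pushes the whole pathway to the upper bound (v = 10 for all three reactions).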
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies in the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
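The saturation of the F(ab)-mediated effect with increasing affinity can be made concrete with a simple 1:1 equilibrium binding relation. This is a hedged illustration, not the paper's multi-level model; the concentration and KD values are hypothetical.

```python
# Fractional receptor occupancy at equilibrium for 1:1 binding:
# theta = C / (C + KD). Illustrates why tightening affinity (lowering KD)
# yields diminishing returns once occupancy approaches 1.
def occupancy(conc_nm, kd_nm):
    """Fraction of receptors bound at free drug concentration conc_nm."""
    return conc_nm / (conc_nm + kd_nm)

conc = 10.0  # nM, assumed free antibody concentration
thetas = [occupancy(conc, kd) for kd in (10.0, 1.0, 0.1, 0.01)]
# occupancy rises roughly 0.50 -> 0.91 -> 0.99 -> 0.999: a further
# 1000-fold affinity gain past KD << C buys almost no extra blockade
```

In this picture, once KD falls well below the circulating drug concentration, further affinity improvements barely change receptor occupancy, consistent with the saturation the multi-level model suggests.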
This study provides a detailed analysis of the mid-Holocene to present-day precipitation change in the Asian monsoon region. We compare for the first time results of high resolution climate model simulations with a standardised set of mid-Holocene moisture reconstructions. Changes in the simulated summer monsoon characteristics (onset, withdrawal, length and associated rainfall) and the mechanisms causing the Holocene precipitation changes are investigated. According to the model, most parts of the Indian subcontinent received more precipitation (up to 5 mm/day) at mid-Holocene than at present-day. This is related to a stronger Indian summer monsoon accompanied by an intensified vertically integrated moisture flux convergence. The East Asian monsoon region exhibits local inhomogeneities in the simulated annual precipitation signal. The sign of this signal depends on the balance of decreased pre-monsoon and increased monsoon precipitation at mid-Holocene compared to present-day. Hence, rainfall changes in the East Asian monsoon domain are not solely associated with modifications in the summer monsoon circulation but also depend on changes in the mid-latitudinal westerly wind system that dominates the circulation during the pre-monsoon season. The proxy-based climate reconstructions confirm the regional dissimilarities in the annual precipitation signal and agree well with the model results. Our results highlight the importance of including the pre-monsoon season in climate studies of the Asian monsoon system and point out the complex response of this system to the Holocene insolation forcing. The comparison with a coarse climate model simulation reveals that this complex response can only be resolved in high resolution simulations.
The distinctness of, and overlap between, pea genotypes held in several Pisum germplasm collections has been used to determine their relatedness and to test previous ideas about the genetic diversity of Pisum. Our characterisation of genetic diversity among 4,538 Pisum accessions held in 7 European Genebanks has identified sources of novel genetic variation, and both reinforces and refines previous interpretations of the overall structure of genetic diversity in Pisum. Molecular marker analysis was based upon the presence/absence of polymorphism of retrotransposon insertions scored by a high-throughput microarray and SSAP approaches. We conclude that the diversity of Pisum constitutes a broad continuum, with graded differentiation into sub-populations which display various degrees of distinctness. The most distinct genetic groups correspond to the named taxa while the cultivars and landraces of Pisum sativum can be divided into two broad types, one of which is strongly enriched for modern cultivars. The addition of germplasm sets from six European Genebanks, chosen to represent high diversity, to a single collection previously studied with these markers resulted in modest additions to the overall diversity observed, suggesting that the great majority of the total genetic diversity collected for the Pisum genus has now been described. Two interesting sources of novel genetic variation have been identified. Finally, we have proposed reference sets of core accessions with a range of sample sizes to represent Pisum diversity for the future study and exploitation by researchers and breeders.
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that multivariate statistical approaches are needed to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analysis and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve the model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, not a single metabolite but a complex pattern of metabolites drives the prediction, reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. We therefore highly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
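The modeling workflow described above (Random Forest regression evaluated by tenfold cross-validation) can be sketched as follows. scikit-learn is assumed to be available, and the data are simulated stand-ins for the metabolite matrix, not the MESY-BEPO cohort; the signal structure in `y` is invented for illustration.

```python
# Sketch of the prediction workflow: Random Forest regression with
# tenfold cross-validation on synthetic data standing in for 172 subjects
# and a 9-metabolite pattern (hypothetical, not the study data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(172, 9))                                   # "metabolites"
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=172)   # outcome proxy

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y, cv=10)    # tenfold cross-validation
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()  # CV skill
```

Cross-validated predictions, rather than in-sample fits, are what make the reported accuracy an honest estimate of out-of-sample performance.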
The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high performance Boolean solving capacities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between ASP and CP solver through elaborated learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
Mineral chemistry and thermobarometry of the staurolite-chloritoid schists from Poshtuk, NW Iran
(2012)
The Poshtuk metapelitic rocks in northwestern Iran underwent two main phases of regional and contact metamorphism. Microstructures, textural features and field relations indicate that these rocks underwent a polymetamorphic history. The dominant metamorphic assemblage of the metapelites is garnet, staurolite, chloritoid, chlorite, muscovite and quartz, which grew mainly syntectonically during the later contact metamorphic event. Peak metamorphic conditions of this event took place at 580 °C and ∼3–4 kbar, indicating that this event occurred under high-temperature and low-pressure conditions (HT/LP metamorphism), which reflects the high heat flow in this part of the crust. This event was mainly controlled by advective heat input through magmatic intrusions into all levels of the crust. These extensive Eocene metamorphic and magmatic activities can be associated with the early Alpine Orogeny, which resulted in this area from the convergence between the Arabian and Eurasian plates, and the Cenozoic closure of the Tethys oceanic tract(s).
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows detecting dependencies between reaction fluxes of metabolic networks at steady-state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with the experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present a new method F2C2 for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
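The core question FCA answers — does blocking one reaction force another to carry zero steady-state flux? — can be sketched with two small linear programs. The toy chain network below is hypothetical and this is a naive pairwise test, not F2C2's actual (much faster) algorithm.

```python
# Sketch of a directional flux-coupling test: reaction j is coupled to
# reaction i if blocking i forces v_j = 0 at steady state.
# Toy chain network:  -> A (v1),  A -> B (v2),  B -> (v3).
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, 0],   # metabolite A
              [0, 1, -1]])  # metabolite B

def max_flux(j, blocked=None):
    """Maximize flux v_j at steady state, optionally blocking a reaction."""
    bounds = [[0, 10], [0, 10], [0, 10]]
    if blocked is not None:
        bounds[blocked] = [0, 0]      # knock out the blocked reaction
    c = [0.0, 0.0, 0.0]
    c[j] = -1.0                       # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
    return -res.fun
```

Here `max_flux(2)` is strictly positive, but `max_flux(2, blocked=0)` is zero: in this chain, v3 is directionally coupled to v1, exactly the kind of dependency FCA detects network-wide.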
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Background: Detection of immunogenic proteins remains an important task for the life sciences, as it nourishes the understanding of pathogenicity, illuminates new potential vaccine candidates and broadens the spectrum of biomarkers applicable in diagnostic tools. Traditionally, immunoscreenings of expression libraries via polyclonal sera on nitrocellulose membranes or screenings of whole proteome lysates by 2-D gel electrophoresis are performed. However, these methods suffer from some rather inconvenient disadvantages. Screening of expression libraries to expose novel antigens from bacteria often leads to an abundance of false positive signals owing to the high cross-reactivity of polyclonal antibodies towards the proteins of the expression host. A method is presented that overcomes many disadvantages of the old procedures.
Results: Four proteins previously described as immunogenic were successfully confirmed as immunogenic with our method. One protein with no previously known immunogenic behaviour showed potential immunogenicity. We incorporated a fusion tag prior to our genes of interest and attached the expressed fusion proteins covalently on microarrays. This enhances the specific binding of the proteins compared to nitrocellulose and thus helps to reduce the number of false positives significantly. It enables us to screen for immunogenic proteins in a shorter time, with more samples and greater statistical reliability. We validated our method by employing several known genes from Campylobacter jejuni NCTC 11168.
Conclusions: The method presented offers a new approach for screening of bacterial expression libraries to illuminate novel proteins with immunogenic features. It could provide a powerful and attractive alternative to existing methods and help to detect and identify vaccine candidates, biomarkers and potential virulence-associated factors with immunogenic behaviour furthering the knowledge of virulence and pathogenicity of studied bacteria.
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
Much of our knowledge about the solar dynamo is based on sunspot observations. It is thus desirable to extend the set of positional and morphological data of sunspots into the past. Gustav Spörer observed in Germany from Anklam (1861–1873) and Potsdam (1874–1894). He left detailed prints of sunspot groups, which we digitized and processed to mitigate artifacts left in the print by the passage of time. After careful geometrical correction, the sunspot data are now available as synoptic charts for almost 450 solar rotation periods. Individual sunspot positions can thus be precisely determined and spot areas can be accurately measured using morphological image processing techniques. These methods also allow us to determine tilt angles of active regions (Joy’s law) and to assess the complexity of an active region.
The size of plant organs, such as leaves and flowers, is determined by an interaction of genotype and environmental influences. Organ growth occurs through the two successive processes of cell proliferation followed by cell expansion. A number of genes influencing either or both of these processes and thus contributing to the control of final organ size have been identified in the last decade. Although the overall picture of the genetic regulation of organ size remains fragmentary, two transcription factor/microRNA-based genetic pathways are emerging in the control of cell proliferation. However, despite this progress, fundamental questions remain unanswered, such as the problem of how the size of a growing organ could be monitored to determine the appropriate time for terminating growth. While genetic analysis will undoubtedly continue to advance our knowledge about size control in plants, a deeper understanding of this and other basic questions will require including advanced live-imaging and mathematical modeling, as impressively demonstrated by some recent examples. This should ultimately allow the comparison of the mechanisms underlying size control in plants and in animals to extract common principles and lineage-specific solutions.
All's well that ends well
(2012)
The transition from cell proliferation to cell expansion is critical for determining leaf size. Andriankaja et al. (2012) demonstrate that in leaves of dicotyledonous plants, a basal proliferation zone is maintained for several days before abruptly disappearing, and that chloroplast differentiation is required to trigger the onset of cell expansion.
Recent PIC simulations of relativistic electron-positron (electron-ion) jets injected into a stationary medium show that particle acceleration occurs in the shocked regions. Simulations show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields and for particle acceleration. These magnetic fields contribute to the electrons’ transverse deflection behind the shock. The “jitter” radiation from deflected electrons in turbulent magnetic fields has properties different from synchrotron radiation calculated in a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and/or spectral structure of gamma-ray bursts, relativistic jets in general, and supernova remnants. In order to calculate radiation from first principles and go beyond the standard synchrotron model, we have used PIC simulations. We present synthetic spectra to compare with the spectra obtained from Fermi observations.
The clumping of massive star winds is an established paradigm, which is confirmed by multiple lines of evidence and is supported by stellar wind theory. We use the results from time-dependent hydrodynamical models of the instability in the line-driven wind of a massive supergiant star to derive the time-dependent accretion rate on to a compact object in the Bondi-Hoyle-Lyttleton approximation. The strong density and velocity fluctuations in the wind result in strong variability of the synthetic X-ray light curves. Photoionization of inhomogeneous winds is different from the photoionization of smooth winds. The degree of ionization is affected by the wind clumping. The wind clumping must also be taken into account when comparing the observed and model spectra of the photoionized stellar wind.
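The scaling behind this variability is the Bondi-Hoyle-Lyttleton rate, Mdot ≈ 4πG²M²ρ/v³, which amplifies wind fluctuations: density enters linearly and velocity to the inverse third power. The sketch below uses hypothetical clump statistics (lognormal density, ±30% velocity fluctuations), not the paper's hydrodynamical models.

```python
# Bondi-Hoyle-Lyttleton scaling Mdot = 4*pi*G^2*M^2*rho / v^3 (sound speed
# neglected): clumped-wind fluctuations translate into strongly variable
# accretion rates. Clump statistics here are assumptions for illustration.
import math
import random

random.seed(1)
G, MSUN = 6.674e-8, 1.989e33          # cgs units
M_X = 1.4 * MSUN                      # assumed compact-object mass

def mdot_bhl(rho, v):
    """BHL accretion rate in g/s for wind density rho and relative speed v."""
    return 4 * math.pi * G**2 * M_X**2 * rho / v**3

rho0, v0 = 1e-14, 1e8                 # smooth-wind density (g/cm^3), speed (cm/s)
base = mdot_bhl(rho0, v0)

# lognormal density clumping plus modest (+/-30%) velocity fluctuations
rates = [mdot_bhl(rho0 * random.lognormvariate(0.0, 1.0),
                  v0 * random.uniform(0.7, 1.3)) for _ in range(10000)]
```

Even these modest fluctuation amplitudes spread the instantaneous accretion rate, and hence any synthetic X-ray light curve built from it, over orders of magnitude around the smooth-wind value.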
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
In the late Palaeozoic fore-arc system of north-central Chile at latitudes 31–32°S, three lithotectonic units (from west to east) are telescoped within a short distance by a Mesozoic strike-slip event (derived peak P-T conditions in brackets): (1) the basally accreted Choapa Metamorphic Complex (CMC; 350–430 °C, 6–9 kbar), (2) the frontally accreted Arrayan Formation (AF; 280–320 °C, 4–6 kbar) and (3) the retrowedge basin of the Huentelauquen Formation (HF; 280–320 °C, 3–4 kbar). In the CMC, Ar-Ar spot ages locally date white-mica formation at peak P-T conditions and during early exhumation at 279–242 Ma. In a local garnet mica-schist intercalation (570–585 °C, 11–13 kbar), Ar-Ar spot ages refer to the ascent from the subduction channel at 307–274 Ma. Portions of the CMC were isobarically heated to 510–580 °C at 6.6–8.5 kbar. The age of peak P-T conditions in the AF can only vaguely be approximated at ≥310 Ma by relict fission-track ages, consistent with the observation that frontal accretion occurred prior to basal accretion. Zircon fission-track dating indicates cooling below ~280 °C at ~248 Ma in the CMC and the AF, when a regional unconformity also formed. Ar-Ar white-mica spot ages in parts of the CMC and within the entire AF and HF point to heterogeneous resetting during Mesozoic extensional and shortening events at ~245–240 Ma, ~210–200 Ma, ~174–159 Ma and ~142–127 Ma. The zircon fission-track ages are locally reset at 109–96 Ma. All resetting of Ar-Ar white-mica ages is proposed to have occurred by in situ dissolution/precipitation at low temperature in the presence of locally penetrating hydrous fluids. Hence syn- and postaccretionary events in the fore-arc system can still be distinguished and dated in spite of its complex heterogeneous postaccretional overprint.
SXP 1062 is an exceptional case of a young neutron star in a wind-fed high-mass X-ray binary associated with a supernova remnant. A unique combination of measured spin period, its derivative, luminosity and young age makes this source a key probe for the physics of accretion and neutron star evolution. Theoretical models proposed to explain the properties of SXP 1062 shall be tested with new data.
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is however well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: In the software package intrinsic Noise Analyzer (iNA), we implement a system size expansion based method that calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant, whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulation, which requires extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
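As a toy illustration of the quantities discussed in this abstract (not the iNA implementation itself, which uses the system size expansion), the sketch below estimates the Fano factor of a linear birth-death gene expression model from an exact stochastic simulation. For this purely first-order network the LNA prediction Fano = 1 is exact, so the simulated value should recover it; all rate values here are arbitrary assumptions.

```python
import numpy as np

def gillespie_birth_death(k=10.0, g=1.0, t_end=2000.0, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a linear birth-death
    process: production at constant rate k, degradation at rate g*n."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        birth, death = k, g * n
        total = birth + death
        t += rng.exponential(1.0 / total)          # waiting time to next event
        n += 1 if rng.random() * total < birth else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

def time_averaged_fano(times, counts, burn_in=100.0):
    """Time-weighted stationary mean and variance, returning var/mean."""
    dt = np.diff(times)
    keep = times[:-1] >= burn_in                   # discard the transient
    w = dt[keep] / dt[keep].sum()
    x = counts[:-1][keep].astype(float)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return var / mean

times, counts = gillespie_birth_death()
fano = time_averaged_fano(times, counts)
# For this first-order network the LNA is exact and predicts Fano = 1;
# adding a bimolecular (e.g. dimerization-driven) degradation step is the
# kind of nonlinearity for which the abstract reports LNA deviations.
```

Replacing the linear degradation with a nonlinear propensity is the point at which this estimator and the LNA prediction would start to disagree, which is the regime iNA is designed to handle.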
In this paper, we determine necessary and sufficient conditions for Bruck-Reilly and generalized Bruck-Reilly ∗-extensions of arbitrary monoids to be regular, coregular and strongly π-inverse. These semigroup classes have applications in various fields of mathematics, such as matrix theory, discrete mathematics and p-adic analysis (especially in operator theory). In addition, while regularity and coregularity have many applications in the context of boundaries (again in operator theory), inverse monoids and Bruck-Reilly extensions contain a mixture of fixed-point results from algebra, topology and geometry within the scope of this journal.
Communicating location-specific information to pedestrians is a challenging task which can be aided by user-friendly digital technologies. In this paper, landmark visibility analysis, as a means for developing more usable pedestrian navigation systems, is discussed. Using an algorithmic framework for image-based 3D analysis, this method integrates a 3D city model with identified landmarks and produces raster visibility layers for each one. This output enables an Android phone prototype application to indicate the visibility of landmarks from the user's actual position. Tested in the field, the method achieves sufficient accuracy for the context of use and improves navigation efficiency and effectiveness.
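A minimal sketch of the kind of raster line-of-sight test that underlies such visibility layers (an illustrative grid algorithm, not the paper's image-based 3D analysis framework): a landmark cell is taken as visible from the observer if no intermediate DEM cell rises above the straight sight line between them. The DEM, observer height and building are synthetic assumptions.

```python
import numpy as np

def visible(height, obs, target, obs_height=1.7):
    """Line-of-sight on a raster DEM: the target cell is visible from the
    observer cell if no intermediate cell pokes above the sight line."""
    (r0, c0), (r1, c1) = obs, target
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    z0 = height[r0, c0] + obs_height           # eye level of the pedestrian
    z1 = height[r1, c1]                        # top of the landmark cell
    for i in range(1, steps):
        f = i / steps
        r = round(r0 + f * (r1 - r0))          # sample cells along the ray
        c = round(c0 + f * (c1 - c0))
        if height[r, c] > z0 + f * (z1 - z0):  # terrain above the sight line
            return False
    return True

dem = np.zeros((10, 10))
dem[5, 5] = 30.0   # a tall building blocking lines of sight through the centre

blocked = visible(dem, (0, 5), (9, 5))   # ray passes through the building
clear = visible(dem, (0, 0), (0, 9))     # ray along flat ground
```

Evaluating such a test from every landmark over all grid cells yields exactly the per-landmark raster visibility layers that a mobile client can then query by position.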
The genetic code is degenerate; thus, protein evolution does not uniquely determine the coding sequence. One of the puzzles in evolutionary genetics is therefore to uncover evolutionary driving forces that result in specific codon choice. In many bacteria, the first 5-10 codons of protein-coding genes are often codons that are less frequently used in the rest of the genome, an effect that has been argued to arise from selection for slowed early elongation to reduce ribosome traffic jams. However, genome analysis across many species has demonstrated that the region shows reduced mRNA folding consistent with pressure for efficient translation initiation. This raises the possibility that unusual codon usage is a side effect of selection for reduced mRNA structure. Here we discriminate between these two competing hypotheses, and show that in bacteria selection favours codons that reduce mRNA folding around the translation start, regardless of whether these codons are frequent or rare. Experiments confirm that primarily mRNA structure, and not codon usage, at the beginning of genes determines the translation rate.
TRAPID
(2013)
Transcriptome analysis through next-generation sequencing technologies allows the generation of detailed gene catalogs for non-model species, at the cost of new challenges with regard to computational requirements and bioinformatics expertise. Here, we present TRAPID, an online tool for the fast and efficient processing of assembled RNA-Seq transcriptome data, developed to mitigate these challenges. TRAPID offers high-throughput open reading frame detection, frameshift correction and includes a functional, comparative and phylogenetic toolbox, making use of 175 reference proteomes. Benchmarking and comparison against state-of-the-art transcript analysis tools reveal the efficiency and unique features of the TRAPID system. TRAPID is freely available at http://bioinformatics.psb.ugent.be/webtools/trapid/.
We study origin, parameter optimization, and thermodynamic efficiency of isothermal rocking ratchets based on fractional subdiffusion within a generalized non-Markovian Langevin equation approach. A corresponding multi-dimensional Markovian embedding dynamics is realized using a set of auxiliary Brownian particles elastically coupled to the central Brownian particle (see video on the journal web site). We show that anomalous subdiffusive transport emerges due to an interplay of nonlinear response and viscoelastic effects for fractional Brownian motion in periodic potentials with broken space-inversion symmetry and driven by a time-periodic field. The anomalous transport becomes optimal for a subthreshold driving when the driving period matches a characteristic time scale of interwell transitions. It can also be optimized by varying temperature, amplitude of periodic potential and driving strength. The useful work done against a load shows a parabolic dependence on the load strength. It grows sublinearly with time and the corresponding thermodynamic efficiency decays algebraically in time because the energy supplied by the driving field scales with time linearly. However, it compares well with the efficiency of normal diffusion rocking ratchets on an appreciably long time scale.
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine because it allows for enabling or disabling certain biological outcomes. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches allow for addressing large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach aiming at the computation of minimal interventions in logical signaling networks relying on Kleene's three-valued logic and fixpoint semantics. We address this problem within answer set programming and show that it greatly outperforms previous work using dedicated algorithms.
Understanding the magnetic configuration of the source regions of coronal mass ejections (CMEs) is vital in order to determine the trigger and driver of these events. Observations of four CME-productive active regions are presented here, which indicate that the pre-eruption magnetic configuration is that of a magnetic flux rope. The flux ropes are formed in the solar atmosphere by the process known as flux cancellation and are stable for several hours before the eruption. The observations also indicate that the magnetic structure that erupts is not the entire flux rope as initially formed, raising the question of whether the flux rope is able to undergo a partial eruption or whether it undergoes a transition in specific flux rope configuration shortly before the CME.
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to extract maximal quantitative information related to the short-term dynamics of geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that, by using magnetogram regularity to reflect magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
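The scaling-exponent estimation described above can be sketched as a log-log line fit to the periodogram. The toy below (a bare-bones stand-in for the study's spectral and wavelet pipeline, with a synthetic series rather than observatory data) recovers β close to 2 for ordinary Brownian motion, the classic self-affine test case.

```python
import numpy as np

def spectral_exponent(x):
    """Estimate beta in P(f) ~ f**(-beta) by least-squares fitting a line
    to the periodogram of x in log-log coordinates."""
    x = np.asarray(x, float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2      # one-sided periodogram
    freq = np.fft.rfftfreq(len(x))
    keep = freq > 0                          # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freq[keep]), np.log(power[keep]), 1)
    return -slope

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(4096))  # Brownian motion: beta near 2
beta = spectral_exponent(walk)
```

Applying the same fit inside sliding windows, or replacing the periodogram with wavelet scalograms as in the study, yields the local exponent whose excursions between 1 and 3 track storm phases.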
In various biological systems and small scale technological applications particles transiently bind to a cylindrical surface. Upon unbinding the particles diffuse in the vicinal bulk before rebinding to the surface. Such bulk-mediated excursions give rise to an effective surface translation, for which we here derive and discuss the dynamic equations, including additional surface diffusion. We discuss the time evolution of the number of surface-bound particles, the effective surface mean squared displacement, and the surface propagator. In particular, we observe sub- and superdiffusive regimes. A plateau of the surface mean-squared displacement reflects a stalling of the surface diffusion at longer times. Finally, the corresponding first passage problem for the cylindrical geometry is analysed.
Background: With increasing age neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented into clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program that was developed by an interdisciplinary expert panel on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy old people (n = 54) between the ages of 65 and 80 years will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Fall Efficacy Scale - International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min (actual training time). One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response-relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated feasible, safe, and easy-to-administer exercise program for fall prevention.
During an unusually massive filament eruption on 7 June 2011, SDO/AIA imaged for the first time significant EUV emission around a magnetic reconnection region in the solar corona. The reconnection occurred between magnetic fields of the laterally expanding CME and a neighbouring active region. A pre-existing quasi-separatrix layer was activated in the process. This scenario is supported by data-constrained numerical simulations of the eruption. Observations show that dense cool filament plasma was re-directed and heated in situ, producing coronal-temperature emission around the reconnection region. These results provide the first direct observational evidence, supported by MHD simulations and magnetic modelling, that a large-scale re-configuration of the coronal magnetic field takes place during solar eruptions via the process of magnetic reconnection.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head has the form penalty(S,V,C), where a violation V and its penalty cost C are detected and calculated in the body. We carried out experiments on four different benchmark sets with five different formulations. We succeeded either in improving the bounds or in producing the same bounds for many combinations of problem instances and formulations, compared with the previous best known bounds.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor for the vegetation degradation within the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of 13C(org) and 15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337-355) and transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space into complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as an illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and to the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
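The two CROM steps can be sketched numerically: cluster the snapshots into centroids, then estimate a row-stochastic Markov transition matrix from the time-ordered cluster labels. The toy below uses a plain Lloyd/k-means iteration with farthest-point seeding on a synthetic two-state trajectory, not the centroidal Voronoi machinery or flow data of the paper.

```python
import numpy as np

def crom(snapshots, k=2, iters=50):
    """Toy CROM: (1) k-means clustering of snapshots (Lloyd's algorithm,
    farthest-point seeding), (2) Markov transition matrix estimated from
    the time-ordered sequence of cluster labels."""
    X = np.asarray(snapshots, float)
    centroids = [X[0]]
    for _ in range(k - 1):                     # deterministic seeding
        d = np.min(((X[:, None] - np.array(centroids)[None]) ** 2).sum(-1), axis=1)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):                     # Lloyd iterations
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    P = np.zeros((k, k))                       # P[i, j]: Prob(j follows i)
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
    return centroids, labels, P

# Toy trajectory cycling between two well-separated quasi-steady states
t = np.arange(400)
traj = np.column_stack([np.sign(np.sin(0.05 * t)), np.cos(0.05 * t)])
centroids, labels, P = crom(traj, k=2)
```

For a trajectory that dwells in coherent states, the transition matrix comes out strongly diagonal; in the full method the off-diagonal structure of P is what exposes quasi-attractors and precursor states.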
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in the traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variabilities of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ~7 MPa that was independently determined from force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
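The spirit of a Gaussian-prior inversion can be illustrated by its simplest special case, where the MAP slip estimate reduces to a ridge-regularized least squares. The Green's function matrix, noise level and prior below are synthetic stand-ins (not the Maule fault model, and without the stress-drop coupling of the actual study).

```python
import numpy as np

def map_slip(G, d, prior_mean, sigma_d, sigma_m):
    """MAP estimate for d = G m + noise with a Gaussian prior on m:
    solve (G'G/sigma_d^2 + I/sigma_m^2) m = G'd/sigma_d^2 + m0/sigma_m^2."""
    n = G.shape[1]
    A = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2
    b = G.T @ d / sigma_d**2 + prior_mean / sigma_m**2
    return np.linalg.solve(A, b)

rng = np.random.default_rng(2)
G = rng.standard_normal((50, 5))          # synthetic Green's functions
m_true = np.array([1.0, 0.5, 0.0, -0.5, 1.5])
d = G @ m_true + 0.01 * rng.standard_normal(50)   # noisy geodetic data
m_hat = map_slip(G, d, prior_mean=np.zeros(5), sigma_d=0.01, sigma_m=10.0)
```

Tightening sigma_m pulls the solution toward the prior mean, which is the closed-form analogue of how the Gaussian stress-drop prior regularizes the slip distribution in place of ad hoc smoothing.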
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method for solving inverse problems arising in atmospheric remote sensing, particularly for the retrieval of spheroidal particle distribution. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
This study investigates the spatial and temporal distributions of 14 key arboreal taxa and their driving forces during the last 22,000 calendar years before AD 1950 (kyr BP) using a taxonomically harmonized and temporally standardized fossil pollen dataset with a 500-year resolution from the eastern part of continental Asia. Logistic regression was used to estimate pollen abundance thresholds for vegetation occurrence (presence or dominance), based on modern pollen data and present ranges of the 14 taxa in China. Our investigation reveals marked changes in the spatial and temporal distributions of the major arboreal taxa. The thermophilous (Castanea, Castanopsis, Cyclobalanopsis, Fagus, Pterocarya) and eurythermal (Juglans, Quercus, Tilia, Ulmus) broadleaved tree taxa were restricted to the current tropical or subtropical areas of China during the Last Glacial Maximum (LGM) and spread northward from c. 14.5 kyr BP. Betula and the conifer taxa (Abies, Picea, Pinus), in contrast, retained a wider distribution during the LGM and showed no distinct expansion direction during the Late Glacial. Since the late mid-Holocene, the abundance but not the spatial extent of most trees decreased. The changes in spatial and temporal distributions of the 14 taxa reflect climate changes, in particular monsoonal moisture, and, in the late Holocene, human impact. The post-LGM expansion patterns in eastern continental China seem to differ from those reported for Europe and North America, for example in the westward spread of the eurythermal broadleaved taxa.
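Threshold estimation via logistic regression, as used above for presence/dominance, can be sketched as follows: fit p(presence) as a logistic function of pollen abundance and read off the abundance at which the fitted probability crosses 0.5. The data are synthetic and the plain gradient-ascent fit is an assumption, standing in for whatever statistical software the study used.

```python
import numpy as np

def logistic_threshold(x, y, lr=1.0, steps=20000):
    """Fit p = 1/(1 + exp(-(a + b*x))) by gradient ascent on the mean
    log-likelihood, then return the x where p = 0.5, i.e. -a/b."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a += lr * np.mean(y - p)          # gradient w.r.t. intercept
        b += lr * np.mean((y - p) * x)    # gradient w.r.t. slope
    return -a / b

rng = np.random.default_rng(3)
abundance = rng.uniform(0.0, 1.0, 500)    # synthetic pollen proportions
# presence flips around a "true" abundance threshold of 0.3, with noise
presence = (abundance > 0.3 + 0.05 * rng.standard_normal(500)).astype(float)
thr = logistic_threshold(abundance, presence)
```

Repeating such a fit per taxon against modern pollen samples inside and outside the taxon's present range yields the per-taxon abundance thresholds applied downcore.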
We report on the development of an on-chip RPA (recombinase polymerase amplification) with simultaneous multiplex isothermal amplification and detection on a solid surface. The isothermal RPA was applied to amplify specific target sequences from the pathogens Neisseria gonorrhoeae, Salmonella enterica and methicillin-resistant Staphylococcus aureus (MRSA) using genomic DNA. Additionally, a positive plasmid control was established as an internal control. The four targets were amplified simultaneously in a quadruplex reaction. The amplicon is labeled during on-chip RPA by reverse oligonucleotide primers coupled to a fluorophore. Both amplification and spatially resolved signal generation take place on immobilized forward primers bound to epoxy-silanized glass surfaces in a pump-driven hybridization chamber. The combination of microarray technology and sensitive isothermal nucleic acid amplification at 38 °C allows for a multiparameter analysis on a rather small area. The on-chip RPA was characterized in terms of reaction time, sensitivity and inhibitory conditions. A successful enzymatic reaction is completed in <20 min and results in detection limits of 10 colony-forming units for methicillin-resistant Staphylococcus aureus and Salmonella enterica and 100 colony-forming units for Neisseria gonorrhoeae. The results show this method to be useful with respect to point-of-care testing and to enable simplified and miniaturized nucleic acid-based diagnostics.
Background
Nucleic acid amplification is the most sensitive and specific method to detect Plasmodium falciparum. However, the polymerase chain reaction remains laboratory-based and has to be conducted by trained personnel. Furthermore, the power required for the thermocycling process and the costly equipment necessary for the read-out are difficult to cover in resource-limited settings. This study aims to develop and evaluate a combination of isothermal nucleic acid amplification and simple lateral flow dipstick detection of the malaria parasite for point-of-care testing.
Methods
A specific fragment of the 18S rRNA gene of P. falciparum was amplified in 10 min at a constant 38°C using the isothermal recombinase polymerase amplification (RPA) method. With a unique probe system added to the reaction solution, the amplification product can be visualized on a simple lateral flow strip without further labelling. The combination of these methods was tested for sensitivity and specificity with various Plasmodium and other protozoa/bacterial strains, as well as with human DNA. Additional investigations were conducted to analyse the temperature optimum, reaction speed and robustness of this assay.
Results
The lateral flow RPA (LF-RPA) assay exhibited high sensitivity and specificity. Experiments confirmed a detection limit as low as 100 fg of genomic P. falciparum DNA, corresponding to a sensitivity of approximately four parasites per reaction. All investigated P. falciparum strains (n = 77) tested positive, while all 11 non-Plasmodium samples tested negative. The enzymatic reaction can be conducted under a broad range of conditions from 30-45°C and at high concentrations of known PCR inhibitors. A time to result of 15 min from the start of the reaction to read-out was determined.
Conclusions
Combining isothermal RPA with lateral flow detection is an approach to improve molecular diagnostics for P. falciparum in resource-limited settings. The system requires little or no instrumentation for the nucleic acid amplification reaction, and the read-out is possible with the naked eye. Showing the same sensitivity and specificity as comparable diagnostic methods while simultaneously increasing reaction speed and dramatically reducing assay requirements, the method has the potential to become a true point-of-care test for the malaria parasite.
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases with families was possible, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation were found among genes transposed from the same ancestor. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Research in rodents has shown that dietary vitamin A reduces body fat by enhancing fat mobilisation and energy utilisation; however, its effects in growing dogs remain unclear. In the present study, we evaluated the development of body weight and body composition and compared observed energy intake with predicted energy intake in forty-nine puppies from two breeds (twenty-four Labrador Retriever (LAB) and twenty-five Miniature Schnauzer (MS)). A total of four different diets with increasing vitamin A content between 5.24 and 104.80 µmol retinol (5000-100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy were fed from the age of 8 weeks up to 52 (MS) and 78 weeks (LAB). The daily energy intake was recorded throughout the experimental period. The body condition score was evaluated weekly using a seven-category system, and food allowances were adjusted to maintain optimal body condition. Body composition was assessed at the age of 26 and 52 weeks for both breeds and at the age of 78 weeks for the LAB breed only, using dual-energy X-ray absorptiometry. The growth curves of the dogs followed a breed-specific pattern. However, data on energy intake showed considerable variability between the two breeds as well as when compared with predicted energy intake. In conclusion, the data show that energy intakes of puppies, particularly during early growth, are highly variable; however, the growth pattern and body composition of the LAB and MS breeds are not affected by the intake of vitamin A at levels up to 104.80 µmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal).