Organic geochemical proxy data from surface sediment samples and a sediment core from Lake Donggi Cona were used to infer environmental changes on the northeastern Tibetan Plateau spanning the last 18.4 kyr. Long-chain n-alkanes dominate the aliphatic hydrocarbon fraction of the sediment extract from most surface sediment samples and the sediment core. Unsaturated mid-chain n-alkanes (nC(23:1) and nC(25:1)) have high abundances in some samples, especially in core samples from the late glacial and early Holocene. TOC contents, organic biomarker and non-pollen-palynomorph concentrations and results from organic petrologic analysis on selected samples suggest three major episodes in the history of Lake Donggi Cona. Before ca. 12.6 cal ka BP samples contain low amounts of organic matter due to cold and arid conditions during the late glacial. After 12.6 cal ka BP, relatively high contents of TOC and concentrations of Botryococcus fossils, as well as enhanced concentrations of mid-chain n-alkanes and n-alkenes, suggest a higher primary and macrophyte productivity than at present. This is supported by high contents of palynomorphs derived from higher plants and algae and was possibly triggered by a decrease of salinity and amelioration of climate during the early Holocene. Since 6.8 cal ka BP Lake Donggi Cona has been an oligotrophic freshwater lake. Proxy data suggest that variations in insolation drive ecological changes in the lake, with increased aquatic productivity during the early Holocene summer insolation maximum. Short-term drops in TOC contents or biomarker concentrations (at 9.9 cal ka BP, after 8.0 and between 3.5 and 1.7 cal ka BP) can possibly be related to relatively cool and dry episodes reported from other sites on the northeastern Tibetan Plateau, which are hypothesized to occur in phase with Northern Hemisphere cooling events.
The Central Asian Pamir Mountains (Pamirs) are a high-altitude region sensitive to climatic change, with only a few paleoclimatic records available. To examine the glacial-interglacial hydrological changes in the region, we analyzed the geochemical parameters of a 31-kyr record from Lake Karakul and performed a set of experiments with climate models to interpret the results. delta D values of terrestrial biomarkers showed insolation-driven trends reflecting major shifts of water vapor sources. For aquatic biomarkers, positive delta D shifts driven by changes in precipitation seasonality were observed at ca. 31-30, 28-26, and 17-14 kyr BP. Multiproxy paleoecological data and modelling results suggest that increased water availability, induced by decreased summer evaporation, triggered higher lake levels during those episodes, possibly synchronous with northern hemispheric rapid climate events. We conclude that seasonal changes in the precipitation-evaporation balance significantly influenced the hydrological state of a large waterbody such as Lake Karakul, while annual precipitation amount and inflows remained fairly constant.
Autonomy is an emerging paradigm for the design and implementation of managed services and systems. Self-managed aspects frequently concern the communication of systems with their environment. Self-management subsystems are critical; they should thus be designed and implemented as high-assurance components. Here, we propose to use GEAR, a game-based model checker for the full modal mu-calculus and derived, more user-oriented logics, as a user-friendly tool that can offer automatic proofs of critical properties of such systems. Designers and engineers can interactively investigate automatically generated winning strategies resulting from the games, thereby exploring the connection between the property, the system, and the proof. The benefits of the approach are illustrated on a case study that concerns the ExoMars Rover.
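The kind of critical property such a checker proves can be illustrated with a modal mu-calculus safety formula (an illustrative formula, not one taken from the ExoMars case study):

```latex
% Greatest-fixpoint safety property: "unsafe" never holds, now or after
% any sequence of transitions. Illustrative only; not from the case study.
\[
  \nu X.\; \neg\,\mathit{unsafe} \;\wedge\; [\cdot]\,X
\]
```

The greatest fixpoint $\nu X$ unrolls the requirement along every path: the current state is not unsafe, and after every transition ($[\cdot]$) the property holds again. A game-based checker such as GEAR reduces such a formula to a game whose winning strategy constitutes the proof that users can then explore interactively.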
Flux-P
(2012)
Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly but have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Using the FiatFlux software as an example, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.
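The workflow idea described above — wrapping each analysis step as an autonomous service and chaining the services into a high-throughput pipeline — can be sketched as follows. All function names, step contents, and file names here are hypothetical placeholders, not the actual Flux-P or FiatFlux API:

```python
# Minimal sketch of a workflow that chains 13C-MFA analysis steps as
# autonomous services, in the spirit of Flux-P's Bio-jETI workflows.
# Every name below is an illustrative stand-in, not real Flux-P code.

def load_ms_data(path):
    # Stand-in for reading labeling measurements; returns toy "spectra".
    return {"sample": path, "spectra": [0.42, 0.31, 0.27]}

def correct_natural_abundance(data):
    # Placeholder correction step: here, simply normalize the spectra.
    total = sum(data["spectra"])
    data["spectra"] = [x / total for x in data["spectra"]]
    return data

def estimate_fluxes(data):
    # Placeholder flux estimation: derive a toy flux ratio.
    return {"sample": data["sample"], "flux_ratio": data["spectra"][0]}

def run_workflow(paths):
    # Assemble the steps into an automated pipeline over many experiments.
    return [
        estimate_fluxes(correct_natural_abundance(load_ms_data(p)))
        for p in paths
    ]

results = run_workflow(["exp1.mzML", "exp2.mzML"])
print(results)
```

The point of the composition is that each step has a uniform call interface, so steps can be swapped or rearranged at the workflow level without touching the underlying tools — which is what enables the "flexible changes on the user level" the abstract describes.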
Automatic code generation is an essential cornerstone of today's model-driven approaches to software engineering. Thus a key requirement for the success of this technique is the reliability and correctness of code generators. This article describes how we employ standard model checking-based verification to check that code generator models developed within our code generation framework Genesys conform to (temporal) properties. Genesys is a graphical framework for the high-level construction of code generators on the basis of an extensible library of well-defined building blocks along the lines of the Extreme Model-Driven Development paradigm. We will illustrate our verification approach by examining complex constraints for code generators, which even span entire model hierarchies. We also show how this leads to a knowledge base of rules for code generators, which we constantly extend, e.g. by combining constraints into larger constraints, or by deriving common patterns from structurally similar constraints. In our experience, the development of code generators with Genesys boils down to re-instantiating patterns or slightly modifying the graphical process model, activities which are strongly supported by the verification facilities presented in this article.
GeneFisher-P
(2007)
Background: The development of bioinformatics databases, algorithms, and tools throughout the last years has led to a highly distributed world of bioinformatics services. Without adequate management and development support, in silico researchers are hardly able to exploit the potential of building complex, specialized analysis processes from these services. The Semantic Web aims at thoroughly equipping individual data and services with machine-processable meta-information, while workflow systems support the construction of service compositions. However, even in this combination, in silico researchers currently would have to deal manually with the service interfaces, the adequacy of the semantic annotations, type incompatibilities, and the consistency of service compositions. Results: In this paper, we demonstrate by means of two examples how Semantic Web technology together with an adequate domain modelling frees in silico researchers from dealing with interfaces, types, and inconsistencies. In Bio-jETI, bioinformatics services can be graphically combined to complex services without worrying about details of their interfaces or about type mismatches of the composition. These issues are taken care of at the semantic level by Bio-jETI's model checking and synthesis features. Whenever possible, they automatically resolve type mismatches in the considered service setting. Otherwise, they graphically indicate impossible/incorrect service combinations. In the latter case, the workflow developer may either modify his service composition using semantically similar services, or ask for help in developing the missing mediator that correctly bridges the detected type gap. Newly developed mediators should then be adequately annotated semantically, and added to the service library for later reuse in similar situations. Conclusion: We show the power of semantic annotations in an adequately modelled and semantically enabled domain setting.
Using model checking and synthesis methods, users may orchestrate complex processes from a wealth of heterogeneous services without worrying about interfaces and (type) consistency. The success of this method strongly depends on a careful semantic annotation of the provided services and on its consequent exploitation for analysis, validation, and synthesis. We are convinced that these annotations will become standard, as they are a precondition for the success and widespread use of (preferred) services in the Semantic Web.
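The type-checking and mediator-insertion mechanism described above can be sketched in a few lines: each service declares its input and output types, a composition is validated by comparing adjacent types, and where a known conversion service bridges a mismatch it is inserted automatically. The service names and types below are toy illustrations, not Bio-jETI's actual service library:

```python
# Toy sketch of semantic type checking for service composition.
# Each service maps an input type to an output type; a "mediator" is a
# service whose types bridge a detected mismatch. All names hypothetical.

SERVICES = {
    "fetch_sequence": ("accession", "fasta"),
    "align":          ("fasta_set", "alignment"),
    "fasta_to_set":   ("fasta", "fasta_set"),  # a mediator service
}

def validate(pipeline):
    # Walk the pipeline; wherever an output type does not match the next
    # input type, try to insert a mediator, otherwise flag the mismatch.
    fixed = [pipeline[0]]
    for nxt in pipeline[1:]:
        out_t = SERVICES[fixed[-1]][1]
        in_t = SERVICES[nxt][0]
        if out_t != in_t:
            mediators = [m for m, (i, o) in SERVICES.items()
                         if i == out_t and o == in_t]
            if not mediators:
                raise TypeError(f"no mediator from {out_t} to {in_t}")
            fixed.append(mediators[0])
        fixed.append(nxt)
    return fixed

print(validate(["fetch_sequence", "align"]))
# the mediator fasta_to_set is inserted between the two services
```

A real synthesis feature would of course search chains of mediators rather than single conversions, but the principle is the same: mismatches are resolved at the semantic level, and only unbridgeable gaps are reported back to the workflow developer.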
We summarize here the main characteristics and features of the jABC framework, used in the case studies as a graphical tool for modeling scientific processes and workflows. As a comprehensive environment for service-oriented modeling and design according to the XMDD (eXtreme Model-Driven Design) paradigm, the jABC offers much more than the pure modeling capability. Associated technologies and plugins in fact provide a rich variety of supporting functionality, such as remote service integration, taxonomical service classification, model execution, model verification, model synthesis, and model compilation. We briefly describe both the essential jABC features and the service integration philosophy followed in the environment. In our work over the last years we have seen that this kind of service definition and provisioning platform has the potential to become a core technology in interdisciplinary service orchestration and technology transfer: domain experts, like scientists not specially trained in computer science, directly define complex service orchestrations as process models and use efficient and complex domain-specific tools in a simple and intuitive way.
GeneFisher-P
(2008)
Background: PCR primer design is an everyday but nontrivial task that requires state-of-the-art software. We describe the popular tool GeneFisher and explain its recent restructuring using workflow techniques. We apply a service-oriented approach to model and implement GeneFisher-P, a process-based version of the GeneFisher web application, as a part of the Bio-jETI platform for service modeling and execution. We show how to introduce a flexible process layer to meet the growing demand for improved user-friendliness and flexibility.
Results: Within Bio-jETI, we model the process using the jABC framework, a mature model-driven, service-oriented process definition platform. We encapsulate remote legacy tools and integrate web services using jETI, an extension of the jABC for seamless integration of remote resources as basic services, ready to be used in the process. Some of the basic services used by GeneFisher are in fact already provided as individual web services at BiBiServ and can be directly accessed. Others are legacy programs, and are made available to Bio-jETI via the jETI technology.
The full power of service-based process orientation comes into play when further bioinformatics tools, available as web services or via jETI, enable easy extensions or variations of the basic process. This concerns, for instance, variations of data retrieval or alignment tools as provided by the European Bioinformatics Institute (EBI).
Conclusions: The resulting service- and process-oriented GeneFisher-P demonstrates how basic services from heterogeneous sources can be easily orchestrated in the Bio-jETI platform, leading to a flexible family of specialized processes tailored to specific tasks.
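The encapsulation of remote legacy tools as basic services, as described in the Results above, can be sketched as follows. This is not the jETI API; it only illustrates the pattern of wrapping an external command-line program behind a uniform service interface, using the shell utility `sort` as a stand-in for a legacy bioinformatics tool:

```python
# Sketch of wrapping a legacy command-line tool as a basic service, in
# the spirit of jETI's integration of remote resources. The wrapped tool
# here is the POSIX `sort` utility, standing in for a legacy program.
import subprocess

def legacy_tool_service(lines):
    # Feed input on stdin, capture stdout, and return structured output,
    # so the external tool can be composed like any in-process service.
    proc = subprocess.run(
        ["sort"],
        input="\n".join(lines),
        capture_output=True,
        text=True,
        check=True,  # raise if the legacy tool fails
    )
    return proc.stdout.splitlines()

print(legacy_tool_service(["gene_b", "gene_a"]))
```

Once every tool — local script, remote web service, or legacy binary — presents the same call shape, assembling them into a process model becomes a pure composition problem, which is what allows GeneFisher-P's process layer to stay flexible.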