Institut für Informatik und Computational Science
Automatic code generation is an essential cornerstone of today's model-driven approaches to software engineering. Thus, a key requirement for the success of this technique is the reliability and correctness of code generators. This article describes how we employ standard model checking-based verification to check that code generator models developed within our code generation framework Genesys conform to (temporal) properties. Genesys is a graphical framework for the high-level construction of code generators on the basis of an extensible library of well-defined building blocks along the lines of the Extreme Model-Driven Development paradigm. We illustrate our verification approach by examining complex constraints for code generators, which even span entire model hierarchies. We also show how this leads to a knowledge base of rules for code generators, which we constantly extend, e.g. by combining constraints into larger ones or by deriving common patterns from structurally similar constraints. In our experience, the development of code generators with Genesys boils down to re-instantiating patterns or slightly modifying the graphical process model, activities which are strongly supported by the verification facilities presented in this article.
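As an illustration of the kind of temporal constraint involved (not taken from the article or from Genesys itself), the following minimal Python sketch checks a response property of the form AG(open -> AF close) over a hypothetical building-block graph of a code generator model; the block names, labels, and graph are assumptions made purely for illustration.

```python
from collections import deque

# Hypothetical building-block graph of a small code generator model:
# each block maps to its successor blocks (illustrative only).
successors = {
    "start":       ["openBuffer"],
    "openBuffer":  ["emitHeader"],
    "emitHeader":  ["emitBody"],
    "emitBody":    ["closeBuffer"],
    "closeBuffer": [],
}

# Atomic propositions attached to blocks (illustrative labels only).
labels = {
    "openBuffer":  {"open"},
    "closeBuffer": {"close"},
}

def satisfies_AF(node, prop, visited=frozenset()):
    """AF prop: every path from `node` eventually reaches a block labelled `prop`."""
    if prop in labels.get(node, set()):
        return True
    if node in visited or not successors.get(node):
        return False                     # cycle or dead end that never reaches `prop`
    return all(satisfies_AF(s, prop, visited | {node}) for s in successors[node])

def check_response(start, trigger, response):
    """AG(trigger -> AF response): wherever `trigger` holds, `response` must follow."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if trigger in labels.get(node, set()) and not satisfies_AF(node, response):
            return False, node           # counterexample block
        for s in successors.get(node, []):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return True, None

ok, witness = check_response("start", "open", "close")
print("property holds" if ok else f"violated at block {witness}")
```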
Background: The development of bioinformatics databases, algorithms, and tools throughout the last years has led to a highly distributed world of bioinformatics services. Without adequate management and development support, in silico researchers are hardly able to exploit the potential of building complex, specialized analysis processes from these services. The Semantic Web aims at thoroughly equipping individual data and services with machine-processable meta-information, while workflow systems support the construction of service compositions. However, even in this combination, in silico researchers currently would have to deal manually with the service interfaces, the adequacy of the semantic annotations, type incompatibilities, and the consistency of service compositions. Results: In this paper, we demonstrate by means of two examples how Semantic Web technology together with adequate domain modelling frees in silico researchers from dealing with interfaces, types, and inconsistencies. In Bio-jETI, bioinformatics services can be graphically combined into complex services without worrying about details of their interfaces or about type mismatches of the composition. These issues are taken care of at the semantic level by Bio-jETI’s model checking and synthesis features. Whenever possible, they automatically resolve type mismatches in the considered service setting. Otherwise, they graphically indicate impossible/incorrect service combinations. In the latter case, the workflow developer may either modify his service composition using semantically similar services, or ask for help in developing the missing mediator that correctly bridges the detected type gap. Newly developed mediators should then be adequately annotated semantically, and added to the service library for later reuse in similar situations. Conclusion: We show the power of semantic annotations in an adequately modelled and semantically enabled domain setting. Using model checking and synthesis methods, users may orchestrate complex processes from a wealth of heterogeneous services without worrying about interfaces and (type) consistency. The success of this method strongly depends on a careful semantic annotation of the provided services and on its consequent exploitation for analysis, validation, and synthesis. We are convinced that these annotations will become standard, as they will become preconditions for the success and widespread use of (preferred) services in the Semantic Web.
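To illustrate the idea behind such semantic compatibility checks, the following minimal Python sketch (not Bio-jETI code) uses a toy subtype ontology to detect a type mismatch between two hypothetical services and shows how a mediator service bridges the gap; all type names, service names, and the ontology itself are assumptions made purely for illustration.

```python
# A tiny "ontology": each type lists its direct super-types (illustrative only).
supertypes = {
    "FastaSequence":  {"Sequence"},
    "GenBankRecord":  {"SequenceRecord"},
    "SequenceRecord": set(),
    "Sequence":       set(),
    "Alignment":      set(),
}

def is_compatible(provided, required):
    """`provided` fits `required` if it is the same type or a (transitive) subtype."""
    if provided == required:
        return True
    return any(is_compatible(s, required) for s in supertypes.get(provided, set()))

# Hypothetical services with semantically annotated input/output types.
services = {
    "fetch_entry": {"out": "GenBankRecord"},
    "align":       {"in": "FastaSequence", "out": "Alignment"},
    "to_fasta":    {"in": "GenBankRecord", "out": "FastaSequence"},  # a mediator
}

def check_edge(producer, consumer):
    """Is the output of `producer` acceptable as input for `consumer`?"""
    return is_compatible(services[producer]["out"], services[consumer]["in"])

print(check_edge("fetch_entry", "align"))   # False: type mismatch detected
print(check_edge("fetch_entry", "to_fasta"),
      check_edge("to_fasta", "align"))      # True True: the mediator bridges the gap
```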
Thinking in services is the key to a shared view of existing and emerging applications for IT experts, business experts, and managers, and thus to a closer collaboration that can revolutionize today's workflows: at the service level it becomes possible, for the first time, to involve domain experts continuously throughout the entire life cycle of an application. Service-oriented thinking has its origins in telecommunications, where it formed the basis of the most modern applications for mobile and fixed platforms, in particular the so-called value-added services such as televoting, freephone (the 0800 numbers), and virtual private networks. Only the consistent virtualization of infrastructures and the loose coupling of functionalities characteristic of service orientation made it possible to master the highly heterogeneous landscape of telephony. The same principles, however, are applicable far more generally, for example to services in less technical business domains such as e-commerce, logistics, healthcare, or public administration. Their consistent adoption as a new paradigm for the conception, design, and management of complex applications has the potential to provide society with a new generation of personalized, secure, highly available, and efficient (Internet) services. Many business domains will thereby be revolutionized, much as email has already revolutionized classical postal communication in many areas.
Autonomy is an emerging paradigm for the design and implementation of managed services and systems. Self-managed aspects frequently concern the communication of systems with their environment. Self-management subsystems are critical; they should therefore be designed and implemented as high-assurance components. Here, we propose to use GEAR, a game-based model checker for the full modal mu-calculus and derived, more user-oriented logics, as a user-friendly tool that can offer automatic proofs of critical properties of such systems. Designers and engineers can interactively investigate the automatically generated winning strategies resulting from the games, thereby exploring the connection between the property, the system, and the proof. The benefits of the approach are illustrated by a case study concerning the ExoMars Rover.
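To give a flavour of the game-based approach (this is not GEAR itself), the following minimal Python sketch solves a simple reachability game by attractor computation, yielding both the winning region and a concrete winning strategy of the kind an engineer could inspect; the states, moves, owners, and goal set are hypothetical.

```python
# Game graph: states owned by the "verifier" (player 0) or the "refuter" (player 1).
owner = {"s0": 0, "s1": 1, "s2": 0, "s3": 1, "goal": 0}
moves = {
    "s0": ["s1", "s2"],
    "s1": ["s0", "s3"],
    "s2": ["goal"],
    "s3": ["s3"],
    "goal": ["goal"],
}
goal = {"goal"}

def solve_reachability(owner, moves, goal):
    """Return (winning region for player 0, winning strategy) via attractor iteration."""
    win = set(goal)
    strategy = {}
    changed = True
    while changed:
        changed = False
        for state, succs in moves.items():
            if state in win:
                continue
            if owner[state] == 0:
                # The verifier wins if *some* move leads into the winning region.
                target = next((t for t in succs if t in win), None)
                if target is not None:
                    win.add(state)
                    strategy[state] = target
                    changed = True
            else:
                # The refuter loses if *all* moves lead into the winning region.
                if succs and all(t in win for t in succs):
                    win.add(state)
                    changed = True
    return win, strategy

win, strategy = solve_reachability(owner, moves, goal)
print("verifier wins from:", sorted(win))   # s0 and s2 can force reaching the goal
print("winning strategy:", strategy)        # e.g. s0 -> s2, s2 -> goal
```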
Lessons Learned
(2014)
This chapter summarizes the experience and the lessons we learned concerning the application of the jABC as a framework for design and execution of scientific workflows. It reports experiences from the domain modeling (especially service integration) and workflow design phases and evaluates the resulting models statistically with respect to the SIB library and hierarchy levels.
We summarize here the main characteristics and features of the jABC framework, used in the case studies as a graphical tool for modeling scientific processes and workflows. As a comprehensive environment for service-oriented modeling and design according to the XMDD (eXtreme Model-Driven Design) paradigm, the jABC offers much more than pure modeling capability. Associated technologies and plugins provide a rich variety of supporting functionality, such as remote service integration, taxonomical service classification, model execution, model verification, model synthesis, and model compilation. We briefly describe both the essential jABC features and the service integration philosophy followed in the environment. In our work over the last years we have seen that this kind of service definition and provisioning platform has the potential to become a core technology in interdisciplinary service orchestration and technology transfer: domain experts, like scientists not specially trained in computer science, directly define complex service orchestrations as process models and use efficient and complex domain-specific tools in a simple and intuitive way.
A major part of the scientific experiments that are carried out today requires thorough computational support. While database and algorithm providers face the problem of bundling resources to create and sustain powerful computation nodes, users have to deal with combining sets of (remote) services into specific data analysis and transformation processes. Today’s attention to “big data” amplifies the issues of size, heterogeneity, and process-level diversity/integration. In the last decade, workflow-based approaches to dealing with these processes in particular have enjoyed great popularity. This book concerns a particularly agile and model-driven approach to managing scientific workflows that is based on the XMDD paradigm. In this chapter we explain the scope and purpose of the book, briefly describe the concepts and technologies of the XMDD paradigm, explain the principal differences to related approaches, and outline the structure of the book.