Institut für Informatik und Computational Science
Year of publication
- 2011 (54)
Document Type
- Article (27)
- Doctoral Thesis (18)
- Monograph/Edited Volume (3)
- Other (2)
- Preprint (2)
- Habilitation Thesis (1)
- Moving Images (1)
Is part of the Bibliography
- yes (54)
Keywords
- Answer Set Programming (3)
- answer set programming (3)
- Antwortmengenprogrammierung (2)
- Tracking (2)
- Abstraktion (1)
- Accepting Grammars (1)
- Akzeptierende Grammatiken (1)
- Algorithmen (1)
- Algorithms (1)
- Answer set programming (1)
- Antwortmengen Programmierung (1)
- Asynchrone Schaltung (1)
- Automata systems (1)
- Beweistheorie (1)
- Bildverarbeitung (1)
- CP-Logic (1)
- CSC (1)
- CityGML (1)
- Code generation (1)
- Complex optimization (1)
- Computergrafik (1)
- Controlled Derivations (1)
- D-galactosamine (1)
- Decidability (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Extreme Model-Driven Development (1)
- FMC-QE (1)
- FPGA (1)
- Gesteuerte Ableitungen (1)
- Grammar Systems (1)
- Grammatiksysteme (1)
- Graphfärbung (1)
- Hardware-Software-Co-Design (1)
- Hierarchically configurable mask register (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Incremental answer set programming (1)
- Inkonsistenz (1)
- Knowledge Representation and Reasoning (1)
- Knowledge representation (1)
- Komplexität (1)
- Komplexitätsbewältigung (1)
- L systems (1)
- Leftmost Derivations (1)
- Leistungsvorhersage (1)
- Linksableitungen (1)
- Localization (1)
- Logiksynthese (1)
- Markov processes (1)
- Masking of X-values (1)
- Meta-Programming (1)
- Model Driven Architecture (1)
- Model checking (1)
- Modell (1)
- Modell-driven Security (1)
- Modell-getriebene Sicherheit (1)
- Modellgetriebene Architektur (1)
- Nutzungsinteresse (1)
- Ontologie (1)
- Ontology (1)
- Operation problem (1)
- Parsing (1)
- Performance Prediction (1)
- Pre-RS Traceability (1)
- Preference Handling (1)
- Problemlösen (1)
- Process model analysis (1)
- Proof Theory (1)
- Prozess (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Queuing Theory (1)
- Reparatur (1)
- SOA Security Pattern (1)
- STG decomposition (1)
- STG-Dekomposition (1)
- Security Modelling (1)
- Semantic Web (1)
- Service-Orientierte Architekturen (1)
- Service-oriented Architectures (1)
- Sicherheitsmodellierung (1)
- Statistical relational learning (1)
- Stochastic relational process (1)
- System Biologie (1)
- Texturen (1)
- Theoretische Informatik (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Transformation (1)
- Unary languages (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- Verification (1)
- Warteschlangentheorie (1)
- Web Sites (1)
- Webseite (1)
- Wireless Sensor Networks (1)
- Wissensrepräsentation und -verarbeitung (1)
- Zeitbehaftete Petri Netze (1)
- abstraction (1)
- acute liver failure (1)
- asynchronous circuit (1)
- behavioral abstraction (1)
- bioinformatics (1)
- block representation (1)
- cellular automata (1)
- complexity (1)
- computer graphics (1)
- consistency (1)
- consistency checking (1)
- consistency measures (1)
- cooperating systems (1)
- decidability questions (1)
- declarative problem solving (1)
- design space exploration (1)
- diagnosis (1)
- endothelin (1)
- endothelin-converting enzyme (1)
- finite model computation (1)
- formal languages (1)
- hardware-software-codesign (1)
- image processing (1)
- incompleteness (1)
- inconsistency (1)
- kidney cancer (1)
- logic synthesis (1)
- loop formulas (1)
- metabolism (1)
- metabolomics (1)
- metastasis (1)
- model (1)
- neighborhood (1)
- neutral endopeptidase (1)
- nichtlineare Projektionen (1)
- nonlinear projections (1)
- on-chip (1)
- process (1)
- process model alignment (1)
- quantum (1)
- repair (1)
- semantisches Netz (1)
- speed independence (1)
- stable model semantics (1)
- systems biology (1)
- textures (1)
- transformation (1)
- unfounded sets (1)
- virtual 3D city models (1)
- virtuelle 3D-Stadtmodelle (1)
We introduce an approach to detecting inconsistencies in large biological networks by using answer set programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on answer set programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.
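The abstract above rests on a notion of consistency between a signed reaction network and observed up/down variations. As a toy illustration of that idea (plain Python, not the authors' ASP encoding; the network, signs, and function names are invented for this sketch), a profile is consistent when every observed variation of a node is explained by at least one predecessor whose influence sign agrees with it:

```python
def consistent(edges, obs):
    """Toy sign-consistency check (illustrative only; the paper encodes
    this in answer set programming and extracts minimal conflicts).

    edges: dict (src, dst) -> +1/-1 influence sign.
    obs:   dict node -> +1 (up) or -1 (down)."""
    for node, change in obs.items():
        preds = [(s, sign) for (s, d), sign in edges.items() if d == node]
        if not preds:
            continue  # input nodes need no explanation
        if not any(obs.get(s, 0) * sign == change for s, sign in preds):
            return False, node  # node acts as a witness of the conflict
    return True, None

edges = {("a", "b"): +1, ("c", "b"): -1}
print(consistent(edges, {"a": +1, "c": +1, "b": +1}))  # (True, None)
print(consistent(edges, {"a": -1, "c": +1, "b": +1}))  # (False, 'b')
```

The second call returns the inconsistent node, mirroring the abstract's point that minimal conflict representations can pinpoint unreliable data or missing reactions.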
Engineering of process-driven business applications can be supported by process modeling efforts in order to bridge the gap between business requirements and system specifications. However, diverging purposes of business process modeling initiatives have led to significant problems in aligning related models at different abstraction levels and from different perspectives. Checking the consistency of such corresponding models is a major challenge for process modeling theory and practice. In this paper, we take the inappropriateness of existing strict notions of behavioral equivalence as a starting point. Our contribution is a concept called behavioral profile that captures the essential behavioral constraints of a process model. We show that these profiles can be computed efficiently, i.e., in cubic time for sound free-choice Petri nets w.r.t. their number of places and transitions. We use behavioral profiles for the definition of a formal notion of consistency which is less sensitive to model projections than common criteria of behavioral equivalence and allows for quantifying deviation in a metric way. The derivation of behavioral profiles and the calculation of a degree of consistency have been implemented to demonstrate the applicability of our approach. We also report the findings from checking consistency between partially overlapping models of the SAP reference model.
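A behavioral profile, as described above, classifies each pair of activities by one of a few relations (strict order, exclusiveness, interleaving). As a rough sketch of the concept (the paper derives profiles from free-choice Petri nets in cubic time; here, purely for illustration, the relations are read off a small set of example traces, and all names are invented):

```python
from itertools import product

def profile(traces, tasks):
    """Toy behavioral profile derived from example traces (illustrative;
    the paper computes profiles structurally from sound free-choice
    Petri nets, not from trace sets)."""
    weak = set()  # (x, y): x occurs before y in some trace
    for t in traces:
        for i, x in enumerate(t):
            for y in t[i + 1:]:
                weak.add((x, y))
    rel = {}
    for x, y in product(tasks, repeat=2):
        fwd, bwd = (x, y) in weak, (y, x) in weak
        if fwd and not bwd:
            rel[(x, y)] = "strict order"
        elif bwd and not fwd:
            rel[(x, y)] = "reverse order"
        elif fwd and bwd:
            rel[(x, y)] = "interleaving"
        else:
            rel[(x, y)] = "exclusive"
    return rel

p = profile([("a", "b", "c"), ("a", "c")], ["a", "b", "c"])
print(p[("a", "b")])  # strict order
```

A degree of consistency between two overlapping models can then be defined as the fraction of shared activity pairs on which their profiles agree, which is what makes the notion quantifiable in a metric way.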
Secondary activation of the endothelin system is thought to be involved in toxic liver injury. This study tested the hypothesis that dual endothelin-converting enzyme / neutral endopeptidase blockade might be able to attenuate acute toxic liver injury.
Male Sprague-Dawley rats were implanted with subcutaneous minipumps to deliver the novel compound SLV338 (10 mg/kg*d) or vehicle. Four days later they received two intraperitoneal injections of D-galactosamine (1.3 g/kg each) or vehicle at an interval of 12 hours. The animals were sacrificed 48 hours after the first injection.
Injection of D-galactosamine resulted in very severe liver injury, reflected by strongly elevated plasma liver enzymes, hepatic necrosis and inflammation, and a mortality rate of 42.9%. SLV338 treatment did not show any significant effect on the extent of acute liver injury as judged from plasma parameters, hepatic histology and mortality. Plasma measurements of SLV338 confirmed adequate drug delivery. Plasma concentrations of big endothelin-1 and endothelin-1 were significantly elevated in animals with liver injury (5-fold and 62-fold, respectively). Plasma endothelin-1 was significantly correlated with several markers of liver injury. SLV338 completely prevented the rise of plasma big endothelin-1 (p < 0.05) and markedly attenuated the rise of endothelin-1 (p = 0.055).
In conclusion, dual endothelin-converting enzyme / neutral endopeptidase blockade by SLV338 did not significantly attenuate D-galactosamine-induced acute liver injury, although it largely prevented the activation of the endothelin system. An evaluation of SLV338 in a less severe model of liver injury would be of interest, since very severe intoxication might not be relevantly amenable to pharmacological interventions.
Preference handling and optimization are indispensable means for addressing nontrivial applications in Answer Set Programming (ASP). However, their implementation becomes difficult whenever they bring about a significant increase in computational complexity. As a consequence, existing ASP systems do not offer complex optimization capacities, supporting, for instance, inclusion-based minimization or Pareto efficiency. Rather, such complex criteria are typically addressed by resorting to dedicated modeling techniques, like saturation. Unlike the ease of common ASP modeling, however, these techniques are rather involved and hardly usable by ASP laymen. We address this problem by developing a general implementation technique by means of meta-programming, thus reusing existing ASP systems to capture various forms of qualitative preferences among answer sets. In this way, complex preferences and optimization capacities become readily available for ASP applications.
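Pareto efficiency, one of the complex criteria the abstract mentions, can be illustrated in a few lines. This is only a stand-in: in the meta-programming approach, domination between answer sets is checked by a second ASP program over reified candidates, whereas here plain Python does the comparison, and the models and criteria are invented for the sketch:

```python
def pareto_optimal(models, criteria):
    """Return the models not dominated on all criteria (minimization).

    Illustrative stand-in for the paper's meta-programming encoding;
    `models` and `criteria` are made-up example data."""
    def dominates(m, n):
        vm = [c(m) for c in criteria]
        vn = [c(n) for c in criteria]
        # m dominates n: no worse everywhere, strictly better somewhere
        return all(a <= b for a, b in zip(vm, vn)) and vm != vn
    return [m for m in models if not any(dominates(n, m) for n in models)]

models = [{"cost": 3, "risk": 1}, {"cost": 2, "risk": 2}, {"cost": 3, "risk": 2}]
criteria = [lambda m: m["cost"], lambda m: m["risk"]]
optimal = pareto_optimal(models, criteria)  # the third model is dominated
```

The point of the paper is that such domination tests exceed the expressiveness of plain optimization statements, which is why they are delegated to a meta-program rather than coded by hand per application.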
Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist of this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield a single model only, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appeal to Answer Set Programming (ASP), overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by some of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor, providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic algorithms. First, it is declarative and thus transparent for biological experts. Second, it is elaboration tolerant and thus allows for an easy exploration and incorporation of biological constraints. Third, it allows for exploring the entire space of possible models. Finally, our approach offers excellent performance, matching existing, special-purpose systems.
Behavioral models capture operational principles of real-world or designed systems. Formally, each behavioral model defines the state space of a system, i.e., its states and the principles of state transitions. Such a model is the basis for analysis of the system's properties. In practice, state spaces of systems are immense, which results in huge computational complexity for their analysis. Behavioral models are typically described as executable graphs, whose execution semantics encodes a state space. The structure theory of behavioral models studies the relations between the structure of a model and the properties of its state space. In this article, we use the connectivity property of graphs to achieve an efficient and extensive discovery of the compositional structure of behavioral models; behavioral models get stepwise decomposed into components with clear structural characteristics and inter-component relations. At each decomposition step, the discovered compositional structure of a model is used for reasoning on properties of the whole state space of the system. The approach is exemplified by means of a concrete behavioral model and verification criterion. That is, we analyze workflow nets, a well-established tool for modeling behavior of distributed systems, with respect to the soundness property, a basic correctness property of workflow nets. Stepwise verification allows the detection of violations of the soundness property by inspecting small portions of a model, thereby considerably reducing the amount of work to be done to perform soundness checks. Besides formal results, we also report on findings from applying our approach to an industry model collection.
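The decomposition described above hinges on graph connectivity: cut vertices separate a model into components that can be inspected in isolation. As a minimal sketch of that ingredient (a textbook articulation-point search on a toy undirected graph; the article itself works on workflow nets and a richer compositional structure, so graph, names, and scope here are illustrative):

```python
def articulation_points(adj):
    """Cut vertices of an undirected graph via DFS low-link values.
    Recursive, so intended for small example graphs only."""
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                # u separates v's subtree
        if parent is None and children > 1:
            cuts.add(u)                        # root with >1 DFS subtree

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts

# b joins three branches, so removing it disconnects the graph
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print(articulation_points(adj))  # {'b'}
```

In the spirit of the article, each component delimited by such vertices can be checked for a local property (there, soundness of workflow-net fragments), so a violation is found by inspecting a small portion of the model rather than its entire state space.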
Indoor position estimation constitutes a central task in home-based assisted living environments. Such environments often rely on a heterogeneous collection of low-cost sensors whose diversity and lack of precision has to be compensated by advanced techniques for localization and tracking. Although there are well established quantitative methods in robotics and neighboring fields for addressing these problems, they lack advanced knowledge representation and reasoning capacities. Such capabilities are not only useful in dealing with heterogeneous and incomplete information but moreover they allow for a better inclusion of semantic information and more general homecare and patient-related knowledge. We address this problem and investigate how state-of-the-art localization and tracking methods can be combined with Answer Set Programming, as a popular knowledge representation and reasoning formalism. We report upon a case-study and provide a first experimental evaluation of knowledge-based position estimation both in a simulated as well as in a real setting.
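The quantitative side of the combination described above is a standard recursive state estimator. As a minimal sketch (a one-step discrete Bayes filter over grid cells; the numbers, grid, and function are invented here, and the paper's contribution is coupling such filters with ASP-based reasoning, which this sketch does not show):

```python
def bayes_filter(belief, motion, likelihood):
    """One predict/update cycle of a discrete Bayes filter.

    belief:     prior P(cell) over grid cells.
    motion:     motion model, motion[i][j] = P(move to j | at i).
    likelihood: sensor model P(observation | cell)."""
    n = len(belief)
    # predict: push the belief through the motion model
    pred = [sum(belief[i] * motion[i][j] for i in range(n)) for j in range(n)]
    # update: weight by the sensor likelihood, then normalize
    post = [p * l for p, l in zip(pred, likelihood)]
    z = sum(post)
    return [p / z for p in post]

belief = [0.5, 0.5, 0.0]                      # uncertain between cells 0 and 1
motion = [[0.2, 0.8, 0.0],                    # tendency to drift rightwards
          [0.0, 0.2, 0.8],
          [0.0, 0.0, 1.0]]
likelihood = [0.1, 0.1, 0.9]                  # sensor strongly favors cell 2
posterior = bayes_filter(belief, motion, likelihood)
```

Knowledge-based constraints of the kind the abstract argues for (e.g., "the patient cannot be in two rooms at once", or semantic homecare knowledge) would then prune or reweight such a posterior, which is where the ASP layer comes in.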
Automatic code generation is an essential cornerstone of today's model-driven approaches to software engineering. Thus a key requirement for the success of this technique is the reliability and correctness of code generators. This article describes how we employ standard model checking-based verification to check that code generator models developed within our code generation framework Genesys conform to (temporal) properties. Genesys is a graphical framework for the high-level construction of code generators on the basis of an extensible library of well-defined building blocks, along the lines of the Extreme Model-Driven Development paradigm. We will illustrate our verification approach by examining complex constraints for code generators, which even span entire model hierarchies. We also show how this leads to a knowledge base of rules for code generators, which we constantly extend, e.g., by combining constraints into bigger ones or by deriving common patterns from structurally similar constraints. In our experience, the development of code generators with Genesys boils down to re-instantiating patterns or slightly modifying the graphical process model, activities which are strongly supported by the verification facilities presented in this article.
Untitled (2011)