Refine
Year of publication
- 2015 (29)
Document Type
- Article (12)
- Monograph/Edited Volume (11)
- Doctoral Thesis (3)
- Conference Proceeding (2)
- Preprint (1)
Is part of the Bibliography
- yes (29)
Keywords
- BPMN (2)
- Cloud Computing (2)
- Forschungskolleg (2)
- Geschäftsprozessmanagement (2)
- Hasso Plattner Institute (2)
- Hasso-Plattner-Institut (2)
- JavaScript (2)
- Klausurtagung (2)
- Lively Kernel (2)
- Ph.D. retreat (2)
- Service-oriented Systems Engineering (2)
- service-oriented systems engineering (2)
- "Big Data"-Dienste (1)
- 3D information visualization (1)
- 3D semiotic model (1)
- Abstraktion (1)
- Algorithmen (1)
- Analyse (1)
- BPM (1)
- Batchprozesse (1)
- Behavioral querying (1)
- Biomedicine (1)
- Blockheizkraftwerke (1)
- Business process management (1)
- Business processes (1)
- CEP (1)
- Change Management (1)
- CoExist (1)
- Cyber-Physical Systems (1)
- Data (1)
- Data exchange (1)
- Data integration (1)
- Data modeling (1)
- Daten (1)
- Datenextraktion (1)
- Datenkorrektheit (1)
- Datenobjekte (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Design Thinking (1)
- Duration prediction (1)
- Dynamic pricing and advertising (1)
- Effizienz (1)
- Energiesparen (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Event normalization (1)
- Event processing (1)
- Events (1)
- Evolution in MDE (1)
- Experimentation (1)
- Fallstudie (1)
- Feedback Loops (1)
- Feedback heuristics (1)
- Finite horizon (1)
- Forschungsprojekte (1)
- Future SOC Lab (1)
- Geschäftsprozesse (1)
- Graphdatenbanken (1)
- Graphtransformationen (1)
- Image filtering (1)
- In-Memory Technologie (1)
- Innovation (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Interaction (1)
- Inventory holding costs (1)
- Inventory systems (1)
- JIT compilers (1)
- Kollaborationen (1)
- Languages (1)
- Leadership (1)
- Level of abstraction (1)
- MDE Ansatz (1)
- MDE settings (1)
- Management (1)
- Measurement (1)
- Mehrfamilienhäuser (1)
- Modellgetrieben (1)
- Modellierung (1)
- Modelltransformation (1)
- Multi-Instanzen (1)
- Multicore Architekturen (1)
- Natural language processing (1)
- Network graph (1)
- Network monitoring (1)
- Network topology (1)
- Object Versioning (1)
- Objekt-Constraint Programmierung (1)
- Optimal stochastic and deterministic (1)
- Optimierungen (1)
- Organisationsveränderung (1)
- Performance (1)
- Process Mining (1)
- Process Monitoring (1)
- Process choreographies (1)
- Process model repositories (1)
- Process model search (1)
- Process modeling (1)
- Prognosen (1)
- Programming Environments (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessautomatisierung (1)
- Prozessverfeinerung (1)
- Question answering (1)
- Racket (1)
- Research School (1)
- Risk control (1)
- SQL (1)
- Service detection (1)
- Skript-Entwicklungsumgebungen (1)
- Smalltalk (1)
- Sprachspezifikation (1)
- Squeak (1)
- Stochastic Petri nets (1)
- Studie (1)
- System of Systems (1)
- Texturing (1)
- Trace inclusion (1)
- Verifikation (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Virtual 3D scenes (1)
- Visualisierung (1)
- Visualization (1)
- Wartung von Graphdatenbanksichten (1)
- Web browsers (1)
- Werkzeuge (1)
- abstraction (1)
- adaptive Systeme (1)
- adaptive systems (1)
- adoption (1)
- algorithms (1)
- analysis (1)
- ausführbare Semantiken (1)
- batch processing (1)
- big data services (1)
- bpm (1)
- business process architecture (1)
- business process management (1)
- business processes (1)
- cartographic design (1)
- case study (1)
- change management (1)
- cloud computing (1)
- cogeneration units (1)
- collaboration (1)
- conformance analysis (1)
- contracts (1)
- control (1)
- cyber-physical systems (1)
- data (1)
- data center management (1)
- data correctness checking (1)
- data extraction (1)
- data objects (1)
- data states (1)
- deadline propagation (1)
- delay propagation (1)
- design thinking (1)
- diffusion (1)
- efficiency (1)
- energy savings (1)
- event abstraction (1)
- events (1)
- evolution in MDE (1)
- executable semantics (1)
- feedback loops (1)
- forecasts (1)
- formal framework (1)
- formales Framework (1)
- formalism (1)
- functional languages (1)
- graph databases (1)
- graph transformation (1)
- incremental graph pattern matching (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inkrementelles Graph Pattern Matching (1)
- innovation (1)
- innovation capabilities (1)
- innovation management (1)
- künstliche Intelligenz (1)
- language specification (1)
- leadership (1)
- lively kernel (1)
- location-based (1)
- management (1)
- maschinelles Lernen (1)
- model transformation (1)
- model-driven (1)
- model-driven engineering (1)
- modeling (1)
- modellgetriebene Entwicklung (1)
- monitoring (1)
- multi-instances (1)
- multi-family residential buildings (1)
- object-constraint programming (1)
- optimizations (1)
- organizational change (1)
- orts-basiert (1)
- partial application conditions (1)
- partielle Anwendungsbedingungen (1)
- process and data integration (1)
- process automation (1)
- process mining (1)
- process refinement (1)
- real-time rendering (1)
- research school (1)
- scripting environments (1)
- study (1)
- system of systems (1)
- tools (1)
- tracing (1)
- user interaction (1)
- verification (1)
- view maintenance (1)
- visualization (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (29)
Network Topology Discovery and Inventory Listing are two of the primary features of modern network monitoring systems (NMS). Current NMSs rely heavily on active scanning techniques for discovering and mapping network information. Although this approach works, it has major drawbacks, such as the performance impact it can have, especially in larger network environments. As a consequence, scans are often run less frequently, which can result in stale information being presented and used by the network monitoring system. Alternatively, some NMSs rely on agents deployed on the hosts they monitor. In this article, we present a new approach to Network Topology Discovery and Network Inventory Listing using only passive monitoring and scanning techniques. The proposed techniques rely solely on the event logs produced by the hosts and network devices present within a network. Finally, we discuss some of the advantages and disadvantages of our approach.
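The passive approach can be illustrated with a minimal sketch: derive host adjacencies and a service inventory purely from log entries, with no active probing. The `(source, destination, service)` log format and all addresses are invented for illustration; the article's actual event-log processing is not shown here.

```python
from collections import defaultdict

def build_topology(log_entries):
    """Aggregate hypothetical (src, dst, service) log entries into an
    adjacency map (topology) and a per-host service inventory."""
    topology = defaultdict(set)
    inventory = defaultdict(set)
    for src, dst, service in log_entries:
        topology[src].add(dst)       # observed communication edge
        inventory[dst].add(service)  # service seen answering on dst
    return topology, inventory

logs = [
    ("10.0.0.5", "10.0.0.1", "dns"),
    ("10.0.0.5", "10.0.0.2", "http"),
    ("10.0.0.7", "10.0.0.2", "http"),
]
topology, inventory = build_topology(logs)
```

Because the map is rebuilt from whatever logs arrive, it stays as fresh as the log stream itself, which is the advantage over infrequent active scans.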
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at better process understanding and improvement, this event data can be analyzed using process mining techniques: process models can be automatically discovered, and executions can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of differing levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to the activities of a given process model is essential for conformance checking, annotation, and the understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the domain knowledge required to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery, yet they fail to enable techniques like conformance checking or model annotation, and they potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry, using a large industrial process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness to non-conforming execution logs.
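As a rough illustration of the matching problem the thesis addresses, the sketch below maps event labels to model activities using string similarity alone. The thesis combines behavioral, linguistic, and documentation-based evidence, so this stands in only for the linguistic part; the labels and the threshold are invented examples.

```python
import difflib

def match_events(event_labels, activity_labels, threshold=0.6):
    """Map each event label to the most similar activity label,
    or to None when no activity is similar enough."""
    mapping = {}
    for event in event_labels:
        def similarity(activity):
            return difflib.SequenceMatcher(
                None, event.lower(), activity.lower()).ratio()
        best = max(activity_labels, key=similarity)
        mapping[event] = best if similarity(best) >= threshold else None
    return mapping

events = ["create order", "ship the order", "scan barcode"]
activities = ["Create Order", "Ship Order", "Send Invoice"]
mapping = match_events(events, activities)
```

Events falling below the threshold (here, "scan barcode") remain unmapped, mirroring the situation where an event has no counterpart activity in the model.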
We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites.
The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop; we discuss two strategies for identifying loops and measure their impact.
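One family of loop-detection strategies can be sketched, under assumptions, as profiling self-recursive re-entries: a function that re-enters itself on the call stack is treated as a loop header, and once it is entered often enough, the JIT would start recording a trace there. This toy profiler is not Pycket's actual mechanism; the class, threshold, and example function are invented for illustration.

```python
HOT_THRESHOLD = 3  # invented hotness bound for this toy

class Profiler:
    """Counts self-recursive re-entries and marks hot loop headers."""
    def __init__(self):
        self.counts = {}
        self.stack = []
        self.hot = set()

    def enter(self, fn_name):
        if fn_name in self.stack:  # re-entry while still active: loop candidate
            self.counts[fn_name] = self.counts.get(fn_name, 0) + 1
            if self.counts[fn_name] >= HOT_THRESHOLD:
                self.hot.add(fn_name)  # a real JIT would start tracing here
        self.stack.append(fn_name)

    def leave(self):
        self.stack.pop()

profiler = Profiler()

def count_down(n):
    profiler.enter("count_down")
    if n > 0:
        count_down(n - 1)  # recursion plays the role of an explicit loop
    profiler.leave()

count_down(10)
```

In a functional language the "loop" never appears as a jump instruction, which is why such call-based heuristics are needed at all.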
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as with relational databases, running complex queries can be very time-consuming and ruin the interactivity of the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. To ensure that database views yield query results consistent with the data from which they are derived, these views must be updated before queries use them. Such view maintenance must be performed efficiently; otherwise, the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index the nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that can model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from the graph data stored by graph databases. The discrimination network allows generic maintenance rules, expressed as graph transformations, to be derived automatically for maintaining graph views when the underlying graph data changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
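The incremental idea behind view maintenance can be shown with a deliberately tiny "view": materializing all two-edge paths and updating only the affected matches when an edge is inserted, instead of recomputing the view from scratch. A generalized discrimination network handles far richer patterns (nesting, negation, recursion); this sketch, with invented class and node names, captures only the incremental principle.

```python
from collections import defaultdict

class PathView:
    """Materialized view of all two-edge paths (a -> b -> c),
    maintained incrementally on edge insertion."""
    def __init__(self):
        self.succ = defaultdict(set)  # node -> successors
        self.pred = defaultdict(set)  # node -> predecessors
        self.matches = set()          # materialized (a, b, c) paths

    def insert_edge(self, u, v):
        # the new edge (u, v) can only extend paths through u or v,
        # so only those matches need to be added
        for a in self.pred[u]:
            self.matches.add((a, u, v))
        for c in self.succ[v]:
            self.matches.add((u, v, c))
        self.succ[u].add(v)
        self.pred[v].add(u)

view = PathView()
view.insert_edge("a", "b")
view.insert_edge("b", "c")  # completes the path (a, b, c)
```

The payoff is that each insertion touches only the neighborhood of the new edge, which is what makes materialized views competitive with direct query evaluation.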
Graph transformation systems are a powerful formal model for capturing model transformations or systems with infinite state spaces, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report, we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions, and we provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool in three case studies.
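The property being verified can be illustrated by a naive bounded check: enumerate every directed graph on three nodes and confirm that a rule whose application condition forbids creating self-loops preserves a no-self-loop invariant. The report's approach establishes this symbolically for all graphs; the rule, invariant, and bound here are invented for illustration only.

```python
from itertools import product

NODES = range(3)  # bounded universe; the real technique needs no bound

def invariant(edges):
    """Invariant: the graph contains no self-loop."""
    return all(u != v for u, v in edges)

def apply_rule(edges):
    """Yield every graph reachable by one application of the 'shortcut'
    rule: if (a, b) and (b, c) exist, add (a, c). The condition a != c
    plays the role of a negative application condition."""
    for (a, b) in edges:
        for (b2, c) in edges:
            if b == b2 and a != c:
                yield edges | {(a, c)}

def check_inductive():
    """True iff the rule preserves the invariant on every 3-node graph."""
    all_edges = [(u, v) for u in NODES for v in NODES]
    for bits in product([0, 1], repeat=len(all_edges)):
        edges = frozenset(e for e, keep in zip(all_edges, bits) if keep)
        if not invariant(edges):
            continue  # only invariant-satisfying start graphs matter
        for successor in apply_rule(edges):
            if not invariant(successor):
                return False  # counterexample found
    return True
```

Enumerating graphs clearly does not scale, which is exactly why the symbolic treatment of (partial) negative application conditions in the report matters.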
Business Process Management has become an integral part of modern organizations in the private and public sector for improving their operations. In the course of Business Process Management efforts, companies and organizations assemble large process model repositories with many hundreds or thousands of business process models carrying a large amount of information. With the advent of large business process model collections, new challenges arise, such as structuring and managing a large number of process models, maintaining them, and assuring their quality.
This is addressed by business process architectures, which have been introduced for organizing and structuring business process model collections. A variety of business process architecture approaches have been proposed that align business processes along aspects of interest, e.g., goals, functions, or objects. They provide a high-level categorization of single processes, ignoring their interdependencies and thus hiding valuable information. The production of goods or the delivery of services is often realized by a complex system of interdependent business processes. Hence, taking a holistic view of business process interdependencies becomes a major necessity in order to organize, analyze, and assess the impact of their (re-)design. Visualizing business process interdependencies reveals hidden and implicit information in a process model collection.
In this thesis, we present a novel Business Process Architecture approach for representing and analyzing business process interdependencies on an abstract level. We propose a formal definition of our Business Process Architecture approach, design correctness criteria, and develop analysis techniques for assessing their quality. We describe a methodology for applying our Business Process Architecture approach both top-down and bottom-up. This includes techniques for extracting a Business Process Architecture from process models and for decomposing it into process models, while considering consistency between the business process architecture and the process model level. Using our extraction algorithm, we present a novel technique to identify and visualize data interdependencies in Business Process Data Architectures. Our Business Process Architecture approach provides business process experts, managers, and other users of a process model collection with an overview that allows them to reason about a large set of process models and to understand and analyze their interdependencies more easily. In this regard, we evaluated our Business Process Architecture approach in an experiment and provide implementations of selected techniques.
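The data-interdependency extraction can be sketched, under assumptions, as linking processes through shared data objects: one process writes an object that another reads. The process names, data objects, and the triple format are invented examples, not the thesis's formal definitions.

```python
from collections import defaultdict

def data_dependencies(processes):
    """processes: name -> {'reads': set, 'writes': set}.
    Returns the set of (writer, data_object, reader) interdependencies."""
    writers = defaultdict(set)
    for name, io in processes.items():
        for obj in io["writes"]:
            writers[obj].add(name)
    deps = set()
    for name, io in processes.items():
        for obj in io["reads"]:
            for writer in writers[obj]:
                if writer != name:
                    deps.add((writer, obj, name))
    return deps

processes = {
    "Order Handling": {"reads": {"customer"}, "writes": {"order"}},
    "Shipping":       {"reads": {"order"},    "writes": {"shipment"}},
    "Billing":        {"reads": {"order", "shipment"}, "writes": {"invoice"}},
}
deps = data_dependencies(processes)
```

Rendering these triples as a graph gives exactly the kind of overview of implicit interdependencies that a flat categorization of single processes hides.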
Babelsberg/RML
(2015)
New programming language designs are often evaluated on concrete implementations. However, in order to draw conclusions about the language design from the evaluation of concrete programming languages, these implementations need to be verified against the formalism of the design. To that end, we also have to ensure that the design actually meets its stated goals. A useful tool for the latter has been to create an executable semantics from the formalism, which can execute a test suite of examples. So far, however, this mechanism has not allowed verifying an implementation against the design.
Babelsberg is a new design for a family of object-constraint languages. Recently, we have developed a formal semantics to clarify some issues in the design of those languages. Supplementing this work, we report here on how this formalism is turned into an executable operational semantics using the RML system. Furthermore, we show how we extended the executable semantics to create a framework that can generate test suites for the concrete Babelsberg implementations that provide traceability from the design to the language. Finally, we discuss how these test suites helped us find and correct mistakes in the Babelsberg implementation for JavaScript.
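The test-generation idea can be sketched as running a small reference (executable) semantics over example programs and emitting (program, expected result) pairs that any concrete implementation must reproduce, thereby tracing the design through to the implementations. The expression language below is an invented stand-in, far simpler than Babelsberg's object-constraint semantics.

```python
def evaluate(expr):
    """Reference big-step semantics for a tiny expression language
    written as nested tuples, e.g. ("add", ("lit", 2), ("lit", 3))."""
    op, *args = expr
    if op == "lit":
        return args[0]
    if op == "add":
        return evaluate(args[0]) + evaluate(args[1])
    if op == "mul":
        return evaluate(args[0]) * evaluate(args[1])
    raise ValueError(f"unknown operator: {op}")

def generate_test_suite(programs):
    """Pair each program with the result the reference semantics assigns,
    producing a suite a concrete implementation can be checked against."""
    return [(prog, evaluate(prog)) for prog in programs]

suite = generate_test_suite([
    ("lit", 7),
    ("add", ("lit", 2), ("lit", 3)),
    ("mul", ("add", ("lit", 1), ("lit", 2)), ("lit", 4)),
])
```

An implementation that disagrees with any expected result in such a suite deviates from the formal design, which is how mistakes like those found in the JavaScript implementation surface.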
We report our experience in implementing SqueakJS, a bit-compatible implementation of Squeak/Smalltalk written in pure JavaScript. SqueakJS runs entirely in the Web browser with a virtual file system that can be directed to server- or client-side storage. Our implementation is notable for the simplicity and performance gained through adaptation to the host object memory, and for the deployment leverage gained through the Lively Web development environment. We present several novel techniques as well as performance measurements for the resulting virtual machine. Much of this experience is potentially relevant to preserving other dynamic language systems and making them available in a browser-based environment.
ecoControl
(2015)
A decentralized energy supply is a first step toward the energy transition. In this context, multi-family residential buildings increasingly employ a variety of electricity and heat generators.
In Germany in particular, cogeneration units are used more and more often because they convert gas into electricity and heat very efficiently. In combination with other energy systems, such as photovoltaic installations, they also enable a continuous and decentralized energy supply.
When operating different energy systems, it is desirable that the systems work in a coordinated fashion. So far, however, it has been difficult to operate heterogeneous energy systems together efficiently, leaving savings potential untapped.
A central controller can therefore improve the efficiency of the overall system.
With ecoControl, we present an extensible prototype that optimizes the cooperation of energy systems and takes environmental factors into account.
To this end, the software provides a unified user interface for configuring all systems. It also offers the ability to develop, test, and run optimization algorithms through a programming interface.
Within such algorithms, forecasts provided by ecoControl can be used. These forecasts are based on the individual behavior of each energy system, on weather forecasts, and on predictions of energy consumption. Using a simulation, technicians can try out different configurations and optimizations immediately, without having to test them on real devices over a long period of time.
ecoControl also helps property managers and landlords manage and analyze energy costs.
Using case studies, we have shown that optimization algorithms which improve the use of heat storage can considerably increase the efficiency of the overall system.
Finally, we conclude that, as a next step, ecoControl must be tested under real conditions as soon as a suitable hardware component is available. Through this interface, measured values are sent to ecoControl and control signals are forwarded to the devices.
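The kind of optimization algorithm ecoControl's programming interface enables can be sketched as a greedy heat-storage schedule: run the cogeneration (CHP) unit only in intervals where the storage alone cannot cover the forecast demand. All parameters (capacities, outputs, demand forecast) are invented for illustration and do not reflect ecoControl's actual API or models.

```python
def schedule_chp(heat_demand_forecast, storage_capacity=10.0,
                 storage_level=5.0, chp_heat_output=4.0):
    """Return per-interval on/off decisions for a CHP unit, given a
    heat demand forecast (all values in arbitrary energy units)."""
    schedule = []
    for demand in heat_demand_forecast:
        # run the CHP only if the storage cannot cover this interval
        run = storage_level < demand
        if run:
            storage_level = min(storage_capacity,
                                storage_level + chp_heat_output)
        storage_level -= demand
        schedule.append(run)
    return schedule

plan = schedule_chp([3.0, 3.0, 3.0, 1.0])
```

A real algorithm would additionally weigh electricity prices and weather forecasts, but even this greedy sketch shows how better storage usage reduces unnecessary CHP runs.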