004 Datenverarbeitung; Informatik
"Wir gehen multimedial. Kommt ihr mit?" war Aufruf und Leitmotiv der MultimeDies 2007. Es kamen sehr viele mit, vor allem Lehrende und Lernende der Universität Potsdam, aber auch Firmen. Diese Veranstaltung setzt eine Tradition fort, die im Bemühen steht über zukunftsweisende Technologien und Projekte, über Angebote und praktikable Lösungen an der Universität zu informieren. Die Vorträge wurden in kurzen Beiträgen zusammengestellt. Sie gliedern sich in zwei Gruppen, zum einen der Bereitstellung, zum anderen der Nutzung von Multimedia.
The World Wide Web is becoming increasingly important as an application platform. However, developing Web applications is often more complex than developing for the desktop. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a Wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the field of Web development and tool support for programmers and implemented their results in the Webwerkstatt environment.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions, represented as an object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking, which currently considers only control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models, although real-world process repositories often contain underspecified models. To cope with these issues, we introduce the concept of synchronized object life cycles and define a mapping of data constraints of a process model to Petri nets, extending an existing mapping. Further, we apply the notion of weak conformance to process models, which tells whether, each time an activity needs to access a data object in a particular state, the data object is guaranteed to be in or able to reach the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
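To illustrate the reachability idea behind weak conformance, the following minimal sketch checks whether a state required by an activity is still reachable in an object life cycle. The life cycle, state names, and activity requirement are hypothetical examples, not the paper's formal model.

```python
# Minimal sketch of the reachability idea behind weak conformance.
# The life cycle and state names are invented for illustration.
from collections import deque

# Object life cycle of an "Order" as a state-transition relation.
ORDER_LIFE_CYCLE = {
    "created":   {"confirmed", "cancelled"},
    "confirmed": {"shipped", "cancelled"},
    "shipped":   {"delivered"},
    "cancelled": set(),
    "delivered": set(),
}

def can_reach(life_cycle, current, expected):
    """True if `expected` equals `current` or is reachable from it."""
    seen, queue = {current}, deque([current])
    while queue:
        state = queue.popleft()
        if state == expected:
            return True
        for nxt in life_cycle.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# An activity that needs the order in state "shipped" is still satisfiable
# as long as "shipped" is reachable from the object's current state.
print(can_reach(ORDER_LIFE_CYCLE, "confirmed", "shipped"))  # True
print(can_reach(ORDER_LIFE_CYCLE, "cancelled", "shipped"))  # False
```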
Dynamics in urban environments encompasses complex processes and phenomena such as those related to movement (e.g., traffic, people) and development (e.g., construction, settlement). This paper presents novel methods for creating human-centric illustrative maps for visualizing movement dynamics in virtual 3D environments. The methods allow a viewer to gain rapid insight into traffic density and flow. The illustrative maps represent vehicle behavior as light threads. Light threads are a familiar visual metaphor caused by moving light sources producing streaks in a long-exposure photograph. A vehicle's front and rear lights produce light threads that convey its direction of motion as well as its velocity and acceleration. The accumulation of light threads allows a viewer to quickly perceive traffic flow and density. The light-thread technique is a key element of effective visualization systems for analytic reasoning, exploration, and monitoring of geospatial processes.
For interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To understand the position and orientation of aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques to assist comprehension. We present a novel real-time rendering technique for visualizing the design and spatial assembly of CSG models. As enabling technology, we combine an image-space CSG rendering algorithm with blueprint rendering. Blueprint rendering applies depth peeling to extract layers of ordered depth from polygonal models and then composes them in sorted order, facilitating clear insight into the models. We develop a solution for implementing depth peeling for CSG models considering their depth complexity. Capturing the surface colors of each layer and later combining the results allows for generating order-independent transparency as one major rendering technique for CSG models. We further define visually important edges for CSG models and integrate an image-space edge-enhancement technique for detecting them in each layer. In this way, we extract visually important edges that are directly and indirectly visible to outline a model's layout. Combining edges with transparency rendering finally generates edge-enhanced depictions of image-based CSG models and allows us to convey their complex, spatial assembly.
Virtualization and cloud computing are currently among the most important buzzwords for operators of IT infrastructures. There is a multitude of different technologies, products, and business models for completely different application scenarios. This study first gives a detailed overview of current developments in the concepts and technologies of virtualization, from classic server virtualization through infrastructures for virtual desktops to application virtualization, and attempts a classification of the virtualization variants. In examining the cloud computing concept, its fundamentals as well as different architectural variants and use cases are introduced. The detailed analysis of the advantages of cloud computing and of possible concerns that must be taken into account when using cloud resources in an enterprise shows that cloud computing offers great opportunities but is not suitable for every application or for every legal and economic setting. The subsequent market overview of virtualization technology shows that the major vendors (Citrix, Microsoft, and VMware) each offer products for almost all virtualization variants, and it highlights decisive differences and the respective strengths of these vendors. For example, Citrix's solution for virtual desktop infrastructures is very mature, whereas Microsoft can only fall back on standard technology here. VMware, as market leader, has achieved the widest adoption in data centers and is the only vendor offering true fault tolerance. Microsoft, in turn, scores with the seamless integration of its virtualization products into existing Windows infrastructures. In the area of cloud computing systems, several open-source software projects have emerged that are indeed suitable for the productive operation of so-called private clouds.
On December 1 and 2, 2011, the 4th German IPv6 Summit 2011 took place at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH in Potsdam; this technical report serves as its documentation. As with the preceding national IPv6 summits, the German IPv6 Council pursued with this 4th summit, held under the motto "Online on the Road: the new standard IPv6 as a driver of mobile communication", the goal of providing insights into current developments around the deployment of IPv6, this time with a focus on automotive networking. At the same time, the summit aimed to advance an efficient and comprehensive migration to IPv6, to exchange experiences with the transition to and deployment of IPv6, to encourage and motivate industry and public administration to adopt IPv6-based solutions, and to raise public awareness of the need to move to IPv6. This year's guest of honor was the EU Commissioner for the Digital Agenda, Neelie Kroes, whose talk was complemented by further contributions from high-ranking representatives of politics, academia, and industry.
The complexity of today's business processes and the amount of data to be managed place high demands on the development and maintenance of business applications. Their size arises, among other things, from the large number of model entities and the associated user interfaces for editing and analyzing the data. This report presents novel concepts, and their implementation, for simplifying the development of such large business applications. First, we propose to unify the database and the runtime environment of a dynamic object-oriented programming language. To this end, we organize the storage layout of objects in the manner of a column-oriented in-memory database and, building on that, integrate transactions and a declarative query language seamlessly into the same runtime environment. Transactional and analytical queries can thus be implemented in the same object-oriented high-level language and still be executed close to the data. Second, we describe programming language constructs that allow user interfaces and user interactions to be described generically and independently of concrete model entities. To make use of this abstract description, the domain models are enriched with previously implicit information. New models only need to be extended with some of this information in order to reuse already existing user interfaces and interactions. Adaptations that should apply only to one model can be defined incrementally, independently of the default behavior. Third, with a further programming language construct, we enable the coherent description of application workflows such as ordering processes. Our programming concept encapsulates user interactions in synchronous function calls and thus makes processes representable as a coherent sequence of computations and interactions. Fourth, we demonstrate a concept that lets end users formulate complex analytical queries more intuitively. It is based on the idea that end users see queries as the configuration of a diagram: a user describes a query by describing what the diagram should display. Diagrams described according to this concept contain enough information to generate a query from them. With respect to execution time, the generated queries are equivalent to queries formulated in conventional query languages. We implement the query model in a prototype that builds on the previously introduced concepts.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of several unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we expand to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
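As a rough illustration of the expansion step only (not the actual recommender), the following sketch replaces each abbreviation token with a full word taken from a dictionary that could be mined from attribute pairs mapped in existing workflows. The dictionary entries and the underscore-based splitting rule are invented assumptions.

```python
# Illustrative sketch of abbreviation-based schema decryption.
ABBREVIATIONS = {
    # hypothetically mined from attribute pairs mapped in existing ETL workflows
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "CUST": "CUSTOMER",
    "NO": "NUMBER",
}

def decrypt_label(label, abbreviations):
    """Expand each underscore-separated token if an expansion is known."""
    tokens = label.split("_")
    expanded = [abbreviations.get(tok, tok) for tok in tokens]
    return "_".join(expanded)

print(decrypt_label("UNP_PEN_INT", ABBREVIATIONS))  # UNPAID_PENALTY_INTEREST
print(decrypt_label("CUST_NO", ABBREVIATIONS))      # CUSTOMER_NUMBER
```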
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly including the ability to live with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework in which programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging, or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution to this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we explore the design, implementation, and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We present Transmorphic, a functional take on the Morphic GUI framework, in which the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensiveness and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always within reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
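The following toy sketch illustrates the bidirectional idea in the abstract: a declarative description is rendered into morph objects, and a direct manipulation of a morph is written back into the description. Here the description is plain data and all names are invented; Transmorphic itself maps manipulations back into source code.

```python
# Toy sketch of a bidirectional mapping between a declarative description
# and directly manipulable "morphs"; names and structure are illustrative.
description = {"type": "rectangle", "position": (10, 10), "extent": (100, 50)}

class Morph:
    def __init__(self, spec):
        self.spec = dict(spec)

    def move_to(self, position):            # direct manipulation at runtime
        self.spec["position"] = position

def render(spec):
    """Forward direction: description -> graphical object."""
    return Morph(spec)

def write_back(morph, spec):
    """Backward direction: reflect the manipulation in the description."""
    spec.update(morph.spec)

morph = render(description)
morph.move_to((40, 25))
write_back(morph, description)
print(description["position"])              # (40, 25)
```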
When realizing a programming language as a VM, implementing behavior as part of the VM, as primitives, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually has to be emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance on the one hand and ease of implementation, reuse, and changeability on the other might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM, a Smalltalk VM built on the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that, due to the tracing just-in-time compiler, the performance gap has narrowed by about one order of magnitude, to within one order of magnitude.
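For readers unfamiliar with the primitive in question, this toy sketch shows what a BitBlt (bit block transfer) fundamentally does: it copies a rectangular block of pixels from a source form to a destination form. It is only an illustrative Python version, not the RSqueak/VM implementation.

```python
# A naive, illustrative BitBlt: copy a width x height block of pixels.
def bitblt(src, dst, src_x, src_y, dst_x, dst_y, width, height):
    """Copy a rectangular block from src to dst (lists of row lists)."""
    for dy in range(height):
        src_row = src[src_y + dy]
        dst_row = dst[dst_y + dy]
        for dx in range(width):
            dst_row[dst_x + dx] = src_row[src_x + dx]

src = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
dst = [[0] * 4 for _ in range(3)]

bitblt(src, dst, src_x=1, src_y=0, dst_x=0, dst_y=1, width=2, height=2)
print(dst)  # [[0, 0, 0, 0], [2, 3, 0, 0], [6, 7, 0, 0]]
```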
Version control is a widely used practice among software developers. It reduces the risk of changing their software and allows them to manage different configurations and to collaborate with others more efficiently. This is amplified by code sharing platforms such as GitHub or Bitbucket. Most version control systems track files (as Git, Mercurial, and Subversion do), but some programming environments do not operate on files but on objects instead (as many Smalltalk implementations do). Users of such environments want to use version control for their objects anyway. Specialized version control systems, such as the ones available for Smalltalk systems (e.g., ENVY/Developer and Monticello), focus on a small subset of objects that can be versioned. Most of these systems concentrate on tracking methods, classes, and configurations of these. Other user-defined and user-built objects are either not eligible for version control at all, can only be tracked through complicated workarounds, or are stored in a fixed, domain-unspecific serialization format that does not suit all kinds of objects equally well. Moreover, version control systems that are specific to a programming environment require their own code sharing platforms; popular, well-established platforms for file-based version control systems cannot be used, or adapter solutions need to be implemented and maintained.
To improve the situation for version control of arbitrary objects, a framework for tracking, converting, and storing objects is presented in this report. It allows editions of objects to be stored in an exchangeable, existing backend version control system, so that the platforms of the backend version control system can be reused. Users and objects have control over how objects are captured for the purpose of version control, so domain-specific requirements can be implemented. The storage format (i.e., the file format when file-based backend version control systems are used) can also vary from one object to another. Different editions of objects can be compared, and sets of changes can be applied to graphs of objects. A generic way of capturing and restoring that supports most kinds of objects is described: it models each object as a collection of slots. Thus, users can begin to track their objects without first having to implement version control supplements for their own kinds of objects. The proposed architecture is evaluated using a prototype implementation that can be used to track objects in Squeak/Smalltalk with Git. The prototype improves the suboptimal standing of user objects with respect to version control described above and also simplifies some version control tasks for classes and methods. It also raises new problems, which are likewise discussed in this report.
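A minimal sketch of the "object as a collection of slots" idea mentioned above: an edition is captured as a mapping from slot names to values, and two editions can be diffed. The class, slot names, and helper functions are invented for illustration and are not the prototype's actual API.

```python
# Sketch of slot-based capture and diffing of object editions.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def capture(obj):
    """Capture an object edition as a mapping from slot names to values."""
    return {"__class__": type(obj).__name__, **vars(obj)}

def diff(old_edition, new_edition):
    """Return the slots whose values changed between two editions."""
    keys = set(old_edition) | set(new_edition)
    return {k: (old_edition.get(k), new_edition.get(k))
            for k in keys if old_edition.get(k) != new_edition.get(k)}

p = Point(1, 2)
before = capture(p)
p.y = 5
after = capture(p)
print(diff(before, after))  # {'y': (2, 5)}
```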
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define the required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision. This can result in misunderstandings. Often, the resulting scrollytelling will not match the designer's vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions for authoring interactive content that we examined are not well suited to creating highly customized scrollytellings while still allowing all their elements to be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor for authoring scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings: content designers and developers can now both work on the same scrollytelling. Because the editor is embedded in the lively.next environment, both can work with a set of tools familiar to them. We thus mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. In this way, developers and content designers benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants from which the time-synchronous F0 trajectories for all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for understanding the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and the harmonic interval distributions, which are the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values for the pitch and interval distributions, respectively. As part of our study we demonstrate that these pitch values, which we refer to as scale pitches and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales that form the skeleton of Artem Erkomaishvili's intonation. The observation of consistent pitch bending of notes in melodic phrases, which appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrate that Artem Erkomaishvili intentionally deviates from the scale pitch skeleton quite freely. As a central result of our study, we show that this melodic freedom is always constrained by the attracting influence of the scale pitches: deviations of the F0 values of individual note events from the scale pitches at one instance of time are compensated for in the subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili's melody generation, which clearly honors the scales but still allows for a large degree of melodic flexibility. This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
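To make the scale-pitch idea concrete, the following sketch fits a Gaussian mixture to synthetic F0 values given in cents and reads the mixture means as scale pitches. The scale degrees, sample sizes, and parameters are invented assumptions; the study's actual pipeline differs.

```python
# Illustrative sketch: estimate "scale pitches" as the means of a Gaussian
# mixture fitted to F0 values (in cents), using synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scale = np.array([0.0, 200.0, 350.0, 500.0, 700.0])   # hypothetical scale degrees in cents
f0_cents = np.concatenate(
    [rng.normal(loc=deg, scale=25.0, size=300) for deg in scale]
)

gmm = GaussianMixture(n_components=len(scale), random_state=0)
gmm.fit(f0_cents.reshape(-1, 1))

scale_pitches = np.sort(gmm.means_.ravel())
print(np.round(scale_pitches, 1))  # approximately the five scale degrees above
```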
INTRICATE/SEC 2012 Workshop held in conjunction with the 11th Information Security South Africa Conference (ISSA 2012).
Contents:
Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske: The Triconnected Abstraction of Process Models
1 Introduction
2 Business Process Model Abstraction
3 Preliminaries
4 Triconnected Decomposition
4.1 Basic Approach for Process Component Discovery
4.2 SPQR-Tree Decomposition
4.3 SPQR-Tree Fragments in the Context of Process Models
5 Triconnected Abstraction
5.1 Abstraction Rules
5.2 Abstraction Algorithm
6 Related Work and Conclusions
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, these redesigned implementations are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
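The core COP idea, behavioral variations that are activated and deactivated dynamically depending on context, can be sketched in a few lines. The example below is a deliberately simplified, language-agnostic illustration in Python; JCop provides dedicated Java language constructs and a declarative aspect language for the same purpose.

```python
# Minimal sketch of context-dependent behavioral variations ("layers").
import contextlib

active_layers = []

@contextlib.contextmanager
def with_layer(layer):
    """Dynamically activate a layer for the duration of a block."""
    active_layers.append(layer)
    try:
        yield
    finally:
        active_layers.pop()

def status_message():
    base = "online"
    if "low_bandwidth" in active_layers:      # context-dependent variation
        return base + " (reduced data usage)"
    return base

print(status_message())                        # "online"
with with_layer("low_bandwidth"):
    print(status_message())                    # "online (reduced data usage)"
```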
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. The required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts, and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment, and presents an approach to generating code based on a user's live edits.
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process information but are not involved in modeling. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts. We hypothesize that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses. They are assessed in a controlled experiment using questionnaires, process model feedback tests, and video analysis. The experiment compares our approach to structured interviews in a repeated measurement design. We executed the experiment with 17 student clerks from a trade school, who represent potential users of the tool. Six out of fourteen hypotheses showed a significant difference due to the method applied. Subjects reported more fun and more insights into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation. Moreover, people took more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights, and instant feedback on preliminary results.
The Apache Modeling Project
(2004)
This document presents an introduction to the Apache HTTP Server, covering both an overview and implementation details. It presents results of the Apache Modelling Project carried out by research assistants and students of the Hasso Plattner Institute in 2001, 2002, and 2003. The Apache HTTP Server was used to introduce students to the application of the modeling technique FMC, a method that supports transporting knowledge about complex systems in the domain of information processing (software as well as hardware). After an introduction to HTTP servers in general, we focus on protocols and web technology. Then we discuss Apache, its operational environment, and its extension capabilities: the module API. Finally, we guide the reader through parts of the Apache source code and explain the most important pieces.
Technical report
(2019)
Design and implementation of service-oriented architectures impose a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches to the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member with the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Graphs are ubiquitous in Computer Science. For this reason, in many areas, it is very important to have the means to express and reason about graph properties. In particular, we want to be able to check automatically if a given graph property is satisfiable. Actually, in most application scenarios it is desirable to be able to explore graphs satisfying the graph property if they exist or even to get a complete and compact overview of the graphs satisfying the graph property.
We show that the tableau-based reasoning method for graph properties as introduced by Lambers and Orejas paves the way for a symbolic model generation algorithm for graph properties. Graph properties are formulated in a dedicated logic making use of graphs and graph morphisms, which is equivalent to first-order logic on graphs as introduced by Courcelle. Our parallelizable algorithm gradually generates a finite set of so-called symbolic models, where each symbolic model describes a set of finite graphs (i.e., finite models) satisfying the graph property. The set of symbolic models jointly describes all finite models for the graph property (complete) and does not describe any finite graph violating the graph property (sound). Moreover, no symbolic model is already covered by another one (compact). Finally, the algorithm is able to generate a minimal finite model from each symbolic model immediately and allows for an exploration of further finite models. The algorithm is implemented in the new tool AutoGraph.
It is predicted that Service-oriented Architectures (SOA) will have a high impact on future electronic business and markets. Services will provide a self-contained and standardised interface towards business and are considered the future platform for business-to-business and business-to-consumer trades. Given the complexity of real-world business scenarios, there is a huge need for easy, flexible, and automated creation and enactment of service compositions. This survey explores the relationship of service composition with workflow management, a technology and concept already in use in many business environments. The similarities between the two and the key differences between them are elaborated. Furthermore, methods for the composition of services ranging from manual through semi- to fully automated composition are sketched. This survey concludes that current tools for service composition are in an immature state and that there is still much research to do before service composition can be used easily and conveniently in real-world scenarios. However, since automated service composition is a key enabler for the full potential of Service-oriented Architectures, further research in this field is imperative. This survey closes with a formal sample scenario, presented in appendix A, to give the reader an impression of how fully automated service composition works.
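As a toy picture of what fully automated composition means, the sketch below describes services by required inputs and produced outputs and chains them forward from the initially available data to a goal. The service names, data types, and the greedy strategy are invented assumptions and much simpler than realistic composition approaches.

```python
# Toy sketch of automated service composition via forward chaining.
SERVICES = {
    # name: (required inputs, produced outputs)
    "geocode":     ({"address"}, {"coordinates"}),
    "find_hotels": ({"coordinates", "date"}, {"hotel_list"}),
    "book_hotel":  ({"hotel_list", "credit_card"}, {"booking"}),
}

def compose(available, goal, services):
    available, plan = set(available), []
    progress = True
    while goal not in available and progress:
        progress = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= available:
                plan.append(name)          # service is executable: add it to the plan
                available |= outputs
                progress = True
    return plan if goal in available else None

print(compose({"address", "date", "credit_card"}, "booking", SERVICES))
# ['geocode', 'find_hotels', 'book_hotel']
```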
IT systems for healthcare are a complex and exciting field. On the one hand, there is a vast number of improvements and work alleviations that computers can bring to everyday healthcare; some forms of treatment, diagnosis, and organisational tasks were even made possible by computers in the first place. On the other hand, there are many factors that encumber computer usage and make the development of IT systems for healthcare a challenging, sometimes even frustrating task. These factors are not solely technology-related, but just as much social or economic conditions. This report describes some of the idiosyncrasies of IT systems in the healthcare domain, with a special focus on legal regulations, standards, and security.
STG decomposition is a promising approach to tackle the complexity problems arising in logic synthesis of speed-independent circuits, a robust asynchronous (i.e., clockless) circuit type. Unfortunately, STG decomposition can result in components that in isolation have irreducible CSC conflicts. Generalising earlier work, it is shown how to resolve such conflicts by introducing internal communication between the components via structural techniques only.
Business process models are abstractions of concrete operational procedures that occur in the daily business of organizations. To cope with the complexity of these models, business process model abstraction has been introduced recently. Its goal is to derive from a detailed process model several abstract models that provide a high-level understanding of the process. While techniques for constructing abstract models are reported in the literature, little is known about the relationships between process instances and abstract models. In this paper we show how the state of an abstract activity can be calculated from the states of related, detailed process activities as they happen. The approach uses activity state propagation. With state uniqueness and state transition correctness we introduce formal properties that improve the understanding of state propagation, and we devise algorithms to check these properties. Finally, we use behavioral profiles to identify and classify behavioral inconsistencies in abstract process models that might occur once activity state propagation is used.
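The following sketch illustrates state propagation in its simplest form: the state of an abstract activity is derived from the states of the detailed activities it aggregates. The propagation rule and state names are a simplified example, not the formal definition used in the paper.

```python
# Simplified sketch of activity state propagation.
def abstract_state(detailed_states):
    """Derive the state of an abstract activity from its detailed activities."""
    states = set(detailed_states)
    if states <= {"init"}:
        return "init"          # nothing has started yet
    if "running" in states or ("terminated" in states and "init" in states):
        return "running"       # some work started, but not everything finished
    return "terminated"        # all detailed activities are done

print(abstract_state(["init", "init"]))               # init
print(abstract_state(["terminated", "running"]))      # running
print(abstract_state(["terminated", "terminated"]))   # terminated
```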
Squimera
(2017)
Software development tools that work and behave consistently across different programming languages are helpful for developers, because they do not have to familiarize themselves with new tooling whenever they decide to use a new language. Also, being able to combine multiple programming languages in a program increases reusability, as developers do not have to recreate software frameworks and libraries in the language they develop in and can reuse existing software instead.
However, developers often have a broad choice with regard to tools, some of which are designed for only one specific programming language. Various integrated development environments have support for multiple languages, but are usually unable to provide a consistent programming experience due to differing features of the language runtimes. Furthermore, common mechanisms that allow reuse of software written in other languages usually use the operating system or a network connection as the abstraction layer. Tools, however, often cannot support such indirections well and are therefore less useful, for example, in debugging scenarios.
In this report, we present a novel approach that aims to improve the programming experience with regard to working with multiple high-level programming languages. As part of this approach, we reuse the tools of a Smalltalk programming environment for other languages and build a multi-language virtual execution environment which is able to provide the same runtime capabilities for all languages.
The prototype system Squimera is an implementation of our approach and demonstrates that it is possible to reuse development tools, so that they behave in the same way across all supported programming languages. In addition, it provides convenient means to reuse and even mix software libraries and frameworks written in different languages without breaking the debugging experience.
Duplicate detection is the task of identifying different representations of the same real-world object in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality and effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computations can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
Software-Fehlerinjektion
(2016)
Fault injection is an essential tool for experimentally evaluating the fault tolerance of complex software systems.
We report on the seminar on software fault injection that took place at the Operating Systems and Middleware group of the Hasso Plattner Institute at the University of Potsdam in the summer term of 2015.
The seminar was concerned with applying various fault injection approaches and tools and assessing their applicability to different scenarios.
This report presents and compares the approaches that were studied.
"Forschung meets Business" - diese Kombination hat in den vergangenen Jahren immer wieder zu zahlreichen interessanten und fruchtbaren Diskussionen geführt. Mit dem Symposium "Sicherheit in Service-orientierten Architekturen" führt das Hasso-Plattner-Institut diese Tradition fort und lud alle Interessenten zu einem zweitägigen Symposium nach Potsdam ein, um gemeinsam mit Fachvertretern aus der Forschung und Industrie über die aktuellen Entwicklungen im Bereich Sicherheit von SOA zu diskutieren. Die im Rahmen dieses Symposiums vorgestellten Beiträge fokussieren sich auf die Sicherheitsthemen "Sichere Digitale Identitäten und Identitätsmanagement", "Trust Management", "Modell-getriebene SOA-Sicherheit", "Datenschutz und Privatsphäre", "Sichere Enterprise SOA", und "Sichere IT-Infrastrukturen".
Preface
1. Introduction
2. Static vs. dynamic analysis
3. Criteria for the success of static source code analysis methods
3.1. Theoretical preliminaries
3.2. Criterion 1: Availability of the source code
3.3. Criterion 2: Support for the programming language
3.4. Criterion 3: Admission of "real" programs from the problem domain
3.5. Criterion 4: Coping with the arising complexity
3.6. Criterion 5: Protection against malicious memory manipulation
3.7. Criterion 6: Guarantees for the environment of the running process
3.8. Conclusion
3.9. Related work
4. Assessment of static methods for typical C/C++ programs
4.1. Background
4.2. Premises
4.3. Problem area 1: Program size and interference
4.4. Problem area 2: Semantics
4.5. Problem area 3: Control flow
4.6. Problem area 4: Pointer arithmetic
4.7. Dynamic concepts for fulfilling the fifth criterion at the source code level
4.8. Conclusion
4.9. Related work
5. Criteria for the success of dynamic approaches
5.1. Background
5.2. Availability of the source code
5.3. Support for the programming language
5.4. Admission of "real" programs from the problem domain
5.5. Coping with the arising complexity
5.6. Protection against malicious memory manipulation
5.7. Guarantees for the environment of the running process
5.8. Conclusion
6. Classification and evaluation of dynamic approaches
6.1. Background
6.2. Source code substitution
6.3. Binary code modification/binary rewriting
6.4. Machine code interpreters
6.5. Intrusion detection systems
6.6. Virtual machines/safe languages
6.7. Mechanisms for "hardening" existing code
6.8. Sandboxing/system call interposition
6.9. Conventional operating system facilities
6.10. Access control lists/domain type enforcement
6.11. Conclusion
7. Secure execution of untrusted programs in the context of RealTimeBattle
7.1. Introduction to RealTimeBattle
7.2. Characterization of the problem
7.3. Alternative solution variants/recapitulation
7.4. Applying the results of static analysis methods to RealTimeBattle
7.5. Applying the results of dynamic analysis methods to RealTimeBattle
7.5.1. The RSBAC-based solution
7.5.2. The Systrace-based solution
7.6. Conclusion
7.7. Related work
8. Secure execution of untrusted programs in the context of Asparagus
8.1. Introduction to Asparagus
8.2. Characterization of the problem
8.3. Solution of the problem
8.4. Conclusion
8.5. Related work
9. Secure execution of untrusted programs in the context of the DCL
9.1. Introduction to the DCL
9.2. Characterization of the problem
9.3. Experiments in the DCL and their respective solutions
9.3.1. Foucault pendulum
9.3.2. Lego Mindstorm robot
9.3.3. High striker ("Hau den Lukas")
9.4. Conclusion
9.5. Related work
10. Secure execution of untrusted programs in the context of semi-automatic grading of operating system architecture exercises
10.1. Overview of the exercises accompanying the lecture "Betriebssystemarchitektur"
10.2. Characterization of the problem
10.3. Proposed solutions
10.3.1. Proposed solutions for the authentication problem
10.3.2. Proposed solutions for the transport problem
10.3.3. Proposed solutions for the build problem
10.3.4. Proposed solutions for the execution problem
10.3.5. Proposed solutions for the resource problem
10.3.6. Proposed solutions for the portability problem
10.4. Conclusion
10.5. Related work
11. Concluding remarks
Bibliography
Appendix:
- create_guardedrobot.sh: the RealTimeBattle security infrastructure
- vuln.c: a program exploitable via buffer overflow
- exploit.c: an example exploit for vuln.c
- aufg43.c: solution for an exercise from the operating system architecture course
- Handout: secure execution of untrusted programs
Scalable compatibility for embedded real-time components via language progressive timed automata
(2013)
The correct composition of individually developed components of embedded real-time systems is a challenge because, in addition to functional properties, non-functional properties must be taken into account. One example is the compatibility of real-time properties, which play a decisive role in embedded systems. Today, the compatibility of such properties is checked in expensive integration and configuration tests at the end of the development process, and in the worst case these tests fail. For this reason, a number of formal methods have been developed that allow an early analysis of the real-time properties of components, so that incompatibilities of real-time properties can be ruled out in later phases. Existing methods, however, either demand a set of preconditions that real systems can hardly satisfy, or the analysis techniques used do not scale to larger systems. This work presents an approach based on the formal model of timed automata that does not impose conditions a real system can hardly fulfil. The approach includes a framework that permits a modular analysis in which only pairs of communicating components have to be checked against each other. This enables a scalable analysis of real-time properties without demanding preconditions that real systems can only partially satisfy.
SandBlocks
(2020)
Visual programming languages are hardly used today in favor of textual programming languages, even though visual programming languages offer a number of advantages. These range from the avoidance of syntax errors, through the use of concrete domain-specific notation, to better readability and maintainability of the program. Nevertheless, professional software developers rely almost exclusively on textual programming languages.
So that developers can benefit from these advantages of visual programming languages without having to give up the textual programming languages familiar to them, the idea is to make textual and visual program elements usable together within one programming language. It is then up to the developer when and how to use visual elements in the program code.
This work presents the SandBlocks framework, which enables this combined use of visual and textual program elements. Besides an evaluation of visual programming languages, it describes the technical integration of visual program elements into the Squeak/Smalltalk system, gives insights into their implementation and use in live programming systems, and discusses their application in different domains.
1 Introduction
1.1 Motivation
1.2 Problem statement
1.3 Structure of the thesis
2 Technical background
2.1 Grid computing
2.2 Idle time computing
3 Resource partitioning
3.1 Resource partitioning and scheduling
3.2 Resource partitioning in idle time computing
3.2.1 Administrative control of the resources
3.2.2 Minimum guarantees to ensure operability
3.3 Existing solutions and related work
3.3.3 Resource management in the Globus Toolkit
3.3.4 Resource management in Condor
3.3.5 The GARA framework
3.3.6 Distributed Resource Management Application API
3.3.7 Grid Resource Allocation Agreement Protocol
3.3.8 SNAP
3.3.9 OGSI-Agreement
3.3.10 PBS/Maui and other batch systems
3.3.11 Wide Area Distributed Computing
3.3.12 Further related work
3.3.13 Considerations on resource requirements
4 Resource control in desktop operating systems
4.1 Resources
4.2 Resource partitioning under Linux
4.2.14 Disk capacity
4.2.15 Main memory
4.2.16 Network bandwidth
4.2.17 CPU capacity
4.3 Resource partitioning under Microsoft Windows XP
4.3.18 Disk capacity
4.3.19 Main memory
4.3.20 Network bandwidth
4.3.21 CPU capacity
4.4 Conclusion
5 Design of the framework
5.1 Design foundation: component architecture
5.2 Architecture
5.2.22 Broker server
5.2.23 Broker software on the clients
5.2.24 Interfaces
5.3 Component type model
5.4 Resource identification and resource assignment
5.5 Connection to the grid
5.6 Database design
5.7 XML-RPC interface
6 Implementation
6.1 Broker server
6.1.25 Database
6.1.26 Components
6.1.27 Web server scripts
6.1.28 Database crawler
6.2 Components
6.2.29 Network
6.2.30 DSCP
6.2.31 Quota
6.2.32 FSF
6.3 Linux client
6.3.33 Broker client
6.3.34 Components
6.4 Windows client
6.5 Dependencies
7 Evaluation
7.1 Performed test and application cases
7.1.35 Testing the client software
7.1.36 Testing the server software
7.1.37 Feasible application cases
7.2 Evaluation of the framework implementation
7.2.38 Performance of the server implementation
7.2.39 Reliability of the partitionings
7.3 Evaluation of traffic shaping with iproute2
7.3.40 Scenario 1
7.3.41 Scenario 2
7.3.42 Scenario 3
7.3.43 Conclusion
8 Summary and outlook
8.1 Conclusion
8.2 Further development
8.2.44 Further developments at the design level
8.2.45 Further developments at the implementation level
Appendix A: Details of the database design
Appendix B: Screenshots of the web interface
Appendix C: Source code of the Linux broker client
Appendix D: Contents of the enclosed data medium
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised due to manual logging. For example, hospital staff may need to manually enter information regarding a patient's treatment. As a result, events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs. This way, we facilitate the analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
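A heavily simplified sketch of the repair idea: given the activity order prescribed by the process model, missing events are inserted and their timestamps are interpolated between neighbouring observed events. The model sequence, activities, and interpolation rule below are invented for illustration; the paper's method relies on stochastic Petri nets, alignments, and Bayesian networks instead.

```python
# Simplified sketch of repairing missing events in a log trace.
MODEL_SEQUENCE = ["register", "examine", "treat", "discharge"]

def repair(trace):
    """trace: list of (activity, timestamp) in model order, possibly with gaps."""
    observed = dict(trace)
    repaired, last_seen_ts = [], None
    for i, activity in enumerate(MODEL_SEQUENCE):
        if activity in observed:
            last_seen_ts = observed[activity]
            repaired.append((activity, last_seen_ts, "observed"))
        else:
            # interpolate against the next observed timestamp, if any
            future = [observed[a] for a in MODEL_SEQUENCE[i + 1:] if a in observed]
            if last_seen_ts is not None and future:
                guess = (last_seen_ts + future[0]) / 2
            else:
                guess = future[0] if future else last_seen_ts
            repaired.append((activity, guess, "repaired"))
    return repaired

print(repair([("register", 0.0), ("treat", 10.0), ("discharge", 12.0)]))
# [('register', 0.0, 'observed'), ('examine', 5.0, 'repaired'),
#  ('treat', 10.0, 'observed'), ('discharge', 12.0, 'observed')]
```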
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder that is based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board the advanced TrainLab (ICE-TD 605 017) of Deutsche Bahn.
For the project, a consortium consisting of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity has been formed. These partners not only concentrated competencies in railway operation, computer science, regulation, and approval, but also combined experiences from industry, research from academia, and enthusiasm from startups.
Distributed ledger technologies (DLTs) define distributed databases and express a digital protocol for transactions between business partners without the need for a trusted intermediary. Implementing a blockchain with real-time requirements for the local network of a railway system (e.g., interlocking or train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to make modifications to standard blockchain protocols.
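The tamper evidence such a recorder builds on can be illustrated with a minimal hash-chained log, sketched below. Distributed consensus between nodes, real-time guarantees, and all railway specifics are deliberately out of scope, and the payloads and field names are invented.

```python
# Minimal sketch of tamper-evident logging via hash chaining.
import hashlib
import json
import time

def append_entry(chain, payload):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"timestamp": time.time(), "payload": payload, "prev": previous_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True

log = []
append_entry(log, {"signal": "A12", "state": "green"})
append_entry(log, {"signal": "A12", "state": "red"})
print(verify(log))                          # True
log[0]["payload"]["state"] = "tampered"
print(verify(log))                          # False: the manipulation is detected
```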
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture, RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved, safety-relevant CCS of railways that have so far been considered neither for road-side nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component of a single manufacturer. All centralized approaches are in question.
The research project RailChain is funded by the mFUND program and gives practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the road-side and on-board architecture given by OCORA and EULYNX.
Logged data can now be analysed sooner, and its trustworthiness is increased. This enables, for example, auditable predictive maintenance, because the data is guaranteed to be authentic and unmodified at any point in time.
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The different relevant types of uncertainties can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice of the used approach, we devise a requirements catalogue for the modeling and the analysis of probabilistic real-time systems with uncertainties and provide evidence that the uncertainties of type (1) and (2) in the targeted systems have a major impact on the used models and require distinguished analysis approaches. The formal model we use in this report is Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides both enough expressiveness for a realistic and modular specification of the targeted class of systems, and suitable formal methods for analyzing properties, such as safety and reliability properties, in a quantitative manner. As technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and the computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA. Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and the computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
This book presents an agile and model-driven approach to managing scientific workflows. The approach is based on the Extreme Model Driven Design (XMDD) paradigm and aims at simplifying and automating the complex data analysis processes carried out by scientists in their day-to-day work. Besides documenting the impact that workflow modeling might have on the work of natural scientists, this book serves three major purposes: 1. It acts as a primer for practitioners who are interested in learning how to think in terms of services and workflows when facing domain-specific scientific processes. 2. It provides interesting material for readers already familiar with these kinds of tools, because it systematically introduces both the technologies used in each case study and the basic concepts behind them. 3. As the addressed thematic field becomes increasingly relevant for lectures in both computer science and the experimental sciences, it also provides helpful material for teachers who plan similar courses.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic “Operating the Cloud”. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. HPI's Future SOC Lab is therefore a fitting environment to host this event, which is also supported by BITKOM.
On the occasion of this workshop we called for submissions of research papers and practitioners' reports. “Operating the Cloud” aims to be a platform for productive discussions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
In these workshop proceedings, the results of the third HPI cloud symposium “Operating the Cloud” 2015 are published. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2016.
Traditionally, business process management systems only execute and monitor business process instances based on events that originate from the process engine itself or from connected client applications. However, environmental events may also influence business process execution. Recent research shows how the technological improvements in both areas, business process management and complex event processing, can be combined and harmonized. The series of technical reports included in this collection provides insights into that combination with respect to technical feasibility and improvements based on real-world use cases originating from the EU-funded GET Service project – a project targeting transport optimization and greenhouse-gas reduction in the logistics domain. Each report is complemented by a working prototype.
This collection introduces six use cases from the logistics domain. Multiple transports – each being a single process instance – may be affected by the same events at the same point in time because they (partly) use the same transportation route, transportation vehicle, or transportation mode (e.g., containers from multiple process instances on the same ship), so that these instances can be (partly) treated as a batch. Thus, the first use case shows the influence of events on process instances processed in a batch. Sharing the entire route may, for instance, result from instances originating from the same business process (e.g., transporting three containers on three trucks, where each container is treated as a single process instance), leading to multi-instance process executions. The second use case shows how to handle monitoring and progress calculation in this context. Crucial to transportation processes are frequent changes of deadlines. The third use case shows how to deal with such frequent process changes by propagating the changes along and beyond the process scope to identify probable deadline violations. While monitoring transport processes, disruptions that introduce delays may be detected. Use case four shows how to propagate such delays in a non-linear fashion along the process instance to predict the end time of the instance. Non-linearity is crucial in logistics because of buffer times and missed connections on intermodal transports (a one-hour delay may result in a missed ship that does not depart every hour). Finally, use cases five and six show the utilization of location-based process monitoring. Use case five enriches transport processes with real-time route and traffic event information to improve monitoring and planning capabilities. Use case six shows the inclusion of spatio-temporal events using the example of unexpected weather events.
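As a hedged illustration of the non-linear delay propagation in use case four (a toy timetable, not the GET Service prototype), the following Python sketch shows how buffer times and fixed departures can turn a small initial delay into a much larger arrival delay; all legs and values are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Leg:
    name: str
    buffer_h: float = 0.0              # slack that can absorb delay before the leg starts
    departures_every_h: float = None   # None: departs on arrival; else fixed schedule

def propagate(legs, start_delay_h):
    """Propagate a delay non-linearly along the legs of one transport instance."""
    delay = start_delay_h
    for leg in legs:
        delay = max(0.0, delay - leg.buffer_h)       # buffers absorb part of the delay
        if leg.departures_every_h is not None and delay > 0:
            # Missed the planned departure: wait for the next scheduled one.
            delay = math.ceil(delay / leg.departures_every_h) * leg.departures_every_h
    return delay

route = [
    Leg("truck to port", buffer_h=0.5),
    Leg("ship crossing", departures_every_h=24),     # one departure per day
    Leg("truck to warehouse", buffer_h=1.0),
]
print(propagate(route, start_delay_h=1.0))           # 1h initial delay -> 23h arrival delay
```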
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic "Operating the Cloud". Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI's Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry-related discussions. "Operating the Cloud" aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
On the occasion of this symposium we called for submissions of research papers and practitioners' reports. A compilation of the research papers presented during the fourth HPI cloud symposium "Operating the Cloud" 2016 is published in these proceedings. We thank the authors for exciting presentations and insights into their current work and research.
Moreover, we look forward to more interesting submissions for the upcoming symposium later in the year.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic Operating the Cloud. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI’s Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry related discussions. Operating the Cloud aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
In these proceedings, the results of the fifth HPI cloud symposium Operating the Cloud 2017 are published. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2018.
In cooperation with partners from industry, the Hasso Plattner Institute (HPI) is establishing an "HPI Future SOC Lab", which provides a complete infrastructure of highly complex on-demand systems on the latest massively parallel (multi-/many-core) hardware with enormous main-memory capacities, not yet available on the market, together with software designed for it. The HPI Future SOC Lab features prototypical 4- and 8-way Intel 64-bit server systems from Fujitsu and Hewlett-Packard with 32 and 64 cores, respectively, and 1-2 TB of main memory. In addition, high-performance storage systems from EMC² as well as virtualization solutions from VMware are used. SAP provides its latest Business ByDesign (ByD) software, and complex real enterprise data are also available and can be accessed for research purposes. Interested scientists from university and non-university research institutions can use the HPI Future SOC Lab to investigate future highly complex IT systems, develop new ideas, data structures, and algorithms, and pursue them all the way to practical trials. This technical report presents first results of the research projects started at the opening of the Future SOC Lab in June 2010. Selected projects presented their results on 27 October 2010 at the Future SOC Lab Day event.
1. Design and Composition of 3D Geoinformation Services (Benjamin Hagedorn)
2. Operating System Abstractions for Service-Based Systems (Michael Schöbel)
3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications (Matthias Uflacker)
4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction (Flavius Copaciu)
5. Asynchronicity and Loose Coupling in Service-Oriented Architectures (Nikola Milanovic)
Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize. Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns. Building on the ACP4IS meetings at AOSD 2002-2009, this workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Service-oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. Service-oriented Systems Engineering denotes a current research topic in the field of IT-Systems Engineering with high potential in academic research and industrial application.
The annual Ph.D. Retreat of the Research School provides all members the opportunity to present the current state of their research and to give an outline of prospective Ph.D. projects. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, as well as Service-Oriented Geoinformatics.
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, as well as Service-Oriented Geoinformatics.
In continuation of a successful series of events, the 4th Many-core Applications Research Community (MARC) symposium took place at the HPI in Potsdam on December 8th and 9th 2011. Over 60 researchers from different fields presented their work on many-core hardware architectures, their programming models, and the resulting research questions for the upcoming generation of heterogeneous parallel systems.
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, as well as Service-Oriented Geoinformatics.
Proceedings of the 2nd International Workshop on e-learning and Virtual and Remote Laboratories
(2008)
Content
Session 1: Architecture of Virtual & Remote Laboratory Infrastructures (I)
- An Internet-Based Laboratory Course in Chemical Reaction Engineering and Unit Operations
- Internet Based Laboratory for Experimentation with Multilevel Medium-Power Converters
Session 2: Architecture of Virtual & Remote Laboratory Infrastructures (II)
- Content management and architectural issues of a remote learning laboratory
- Distributed Software Architecture and Applications for Remote Laboratories
- Tele-Lab IT-Security: an architecture for an online virtual IT security lab
Session 3: New e-learning Techniques for Virtual & Remote Laboratories
- NeOS: Neuchâtel Online System
- A Flexible Instructional Electronics Laboratory with Local and Remote LabWorkbenches in a Grid
- Simulation of an Intelligent Network - Basic Call State Model Remote Laboratory
Session 4: Service-Orientation in Virtual & Remote Laboratories
- SOA Meets Robots - A Service-Based Software Infrastructure For Remote Laboratories
- Service Orientation in Education - Intelligent Networks for eLearning / mLearning
Contents
1. Styling for Service-Based 3D Geovisualization (Benjamin Hagedorn)
2. The Windows Monitoring Kernel (Michael Schöbel)
3. A Resource-Oriented Information Network Platform for Global Design Processes (Matthias Uflacker)
4. Federation in SOA – Secure Service Invocation across Trust Domains (Michael Menzel)
5. KStruct: A Language for Kernel Runtime Inspection (Alexander Schmidt)
6. Deconstructing Resources (Hagen Overdick)
7. FMC-QE – Case Studies (Stephan Kluth)
8. A Matter of Trust (Rehab Al-Nemr)
9. From Semi-automated Service Composition to Semantic Conformance (Harald Meyer)
Design and implementation of service-oriented architectures impose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Today, software has become an intrinsic part of complex distributed embedded real-time systems. The next generation of embedded real-time systems will interconnect today's unconnected systems via complex software parts and the service-oriented paradigm. Therefore, besides timed and probabilistic behavior, structure dynamics is also required, where the architecture can be subject to changes at run-time, e.g., when dynamic binding of service end-points is employed or complex collaborations are established dynamically. However, a modeling and analysis approach that combines all these necessary aspects does not exist so far.
To fill the identified gap, we propose Probabilistic Timed Graph Transformation Systems (PTGTSs) as a high-level description language that supports all the necessary aspects of structure dynamics, timed behavior, and probabilistic behavior. We introduce the formal model of PTGTSs in this paper and present a mapping of models with finite state spaces to probabilistic timed automata (PTA) that allows us to use the PRISM model checker to analyze PTGTS models with respect to PTCTL properties.
Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. The analysis of whether such systems with probabilistic timed behavior adhere to a given specification is essential. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can be used to suitably capture structure dynamics as well as probabilistic and timed behavior of the system. Model checking support for PTGTSs w.r.t. properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time.
In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and the respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author's master's thesis.
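The report integrates these facilities into Newspeak itself; purely as an analogy for the concept (concise destructuring plus dispatch on more than one aspect of a value), the hedged Python sketch below uses Python's structural pattern matching and hypothetical shape classes, not the authors' language extension.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Circle:
    center: Point
    radius: float

@dataclass
class Rect:
    lower_left: Point
    upper_right: Point

def area(shape):
    # Dispatch on the structure of the value and destructure it in a single step.
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Rect(lower_left=Point(x=x1, y=y1), upper_right=Point(x=x2, y=y2)):
            return abs(x2 - x1) * abs(y2 - y1)
        case _:
            raise ValueError(f"unknown shape: {shape!r}")

print(area(Circle(Point(0, 0), 2)))          # 12.56636
print(area(Rect(Point(0, 0), Point(3, 4))))  # 12
```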
Parts without a whole?
(2015)
This explorative study gives a descriptive overview of what organizations do and experience when they say they practice design thinking. It looks at how the concept has been appropriated in organizations and also describes patterns of design thinking adoption. The authors use a mixed-method research design fed by two sources: questionnaire data and semi-structured personal expert interviews. The study proceeds in six parts: (1) design thinking's entry points into organizations; (2) understandings of the descriptor; (3) its fields of application and organizational localization; (4) its perceived impact; (5) reasons for its discontinuation or failure; and (6) attempts to measure its success. In conclusion the report challenges managers to be more conscious of their current design thinking practice. The authors suggest a co-evolution of the concept's introduction with innovation capability building and the respective changes in leadership approaches. It is argued that this might help in unfolding design thinking's hidden potentials as well as preventing unintended side-effects such as discontented teams or the dwindling authority of managers.
This technical report presents the results of student projects which were prepared during the lecture “Operating Systems II” offered by the “Operating Systems and Middleware” group at HPI in the Summer term of 2020. The lecture covered advanced aspects of operating system implementation and architecture on topics such as Virtualization, File Systems and Input/Output Systems. In addition to attending the lecture, the participating students were encouraged to gather practical experience by completing a project on a closely related topic over the course of the semester. The results of 10 selected exceptional projects are covered in this report.
The students have completed hands-on projects on the topics of Operating System Design Concepts and Implementation, Hardware/Software Co-Design, Reverse Engineering, Quantum Computing, Static Source-Code Analysis, Operating Systems History, Application Binary Formats and more. It should be recognized that over the course of the semester all of these projects have achieved outstanding results which went far beyond the scope and the expectations of the lecture, and we would like to thank all participating students for their commitment and their effort in completing their respective projects, as well as their work on compiling this report.
The new interactive online educational platform openHPI (https://openHPI.de) from the Hasso Plattner Institute (HPI) offers freely accessible courses at no charge for all who are interested in subjects in the field of information technology and computer science. Since 2011, “Massive Open Online Courses,” called MOOCs for short, have been offered, first at Stanford University and then later at other U.S. elite universities. Following suit, openHPI provides instructional videos on the Internet and further reading material, combined with learning-supportive self-tests, homework and a social discussion forum. Education is further stimulated by the support of a virtual learning community. In contrast to “traditional” lecture platforms, such as the tele-TASK portal (http://www.tele-task.de) where multimedia recorded lectures are available on demand, openHPI offers didactic online courses. The courses have a fixed start date and offer a balanced schedule of six consecutive weeks presented in multimedia and, whenever possible, interactive learning material. Each week, one chapter of the course subject is treated. In addition, a series of learning videos, texts, self-tests and homework exercises are provided to course participants at the beginning of the week. The course offering is combined with a social discussion platform where participants have the opportunity to enter into an exchange with course instructors and fellow participants. Here, for example, they can get answers to questions and discuss the topics in depth. The participants naturally decide themselves about the type and range of their learning activities. They can make personal contributions to the course, for example, in blog posts or tweets, which they can refer to in the forum. In turn, other participants have the chance to comment on, discuss or expand on what has been said. In this way, the learners, the teachers, and the offered learning content are linked together in a virtual community, a social learning network.
openHPI
(2022)
On the occasion of the 10th openHPI anniversary, this technical report provides information about the HPI MOOC platform, including its core features, technology, and architecture.
In an introduction, the platform family with all partner platforms is presented; including openHPI, these now amount to nine platforms. This section also shows how openHPI acts as an advisor and research partner in various projects.
In the second chapter, the functionalities and common course formats of the platform are presented. The functionalities are divided into learner and admin features. The learner features section provides detailed information about performance records, courses, and the learning materials of which a course is composed: videos, texts, and quizzes. In addition, the learning materials can be enriched by adding external exercise tools that communicate with the HPI MOOC platform via the Learning Tools Interoperability (LTI) standard. Furthermore, the concept of peer assessments rounds out the possible learning materials.
The section then proceeds with further information on the discussion forum, a fundamental concept of MOOCs compared to traditional e-learning offerings. The section concludes with a description of the quiz recap, learning objectives, mobile applications, gameful learning, and the help desk.
The next part of this chapter deals with the admin features. The description is restricted to news and announcements, dashboards and statistics, reporting capabilities, research options with A/B testing, the course feed, and the TransPipe tool, which supports the process of creating automated or manual subtitles. The platform supports a large variety of additional features, but a detailed description of these features goes beyond the scope of this report.
The chapter then elaborates on common course formats and openHPI teaching activities at the HPI. The chapter concludes with some best practices for course design and delivery.
The third chapter provides insights into the technology and architecture behind openHPI. A special characteristic of the openHPI project is the conscious decision to operate the complete application from bare metal to platform development. Hence, the chapter starts with a section about the openHPI Cloud, including detailed information about the data center and devices, the used cloud software OpenStack and Ceph, as well as the openHPI Cloud Service provided for the HPI.
Afterward, a section on the application technology stack and development tooling describes the application infrastructure components, the used automation, the deployment pipeline, and the tools used for monitoring and alerting. The chapter is concluded with detailed information about the technology stack and concrete platform implementation details. The section describes the service-oriented Ruby on Rails application, inter-service communication, and public APIs. It also provides more information on the design system and components used in the application. The section concludes with a discussion of the original microservice architecture, where we share our insights and reasoning for migrating back to a monolithic application.
The last chapter provides a summary and an outlook on the future of digital education.
openHPI
(2022)
On the occasion of the 10th anniversary of openHPI, this technical report provides information about the HPI MOOC platform, including its core features, technology, and architecture.
In an introduction, the platform family with all partner platforms is presented; including openHPI, these currently amount to nine platforms. This section also shows how openHPI acts as an advisor and research partner in various projects.
In the second chapter, the functionalities and common course formats of the platform are presented. The functionalities are divided into learner and admin features. The learner features section provides detailed information about performance records, courses, and the learning materials of which a course is composed: videos, texts, and quizzes. In addition, the learning materials can be enriched with external exercise tools that communicate with the HPI MOOC platform via the Learning Tools Interoperability (LTI) standard. The concept of peer assessments rounds out the possible learning materials.
The section then goes into more detail on the discussion forum, which constitutes a fundamental difference between MOOCs and traditional e-learning offerings. The section concludes with a description of the quiz recap, learning objectives, mobile applications, gameful learning, and the help desk.
The next part of this chapter deals with the admin features. The description of the functionality is restricted to news and announcements, dashboards and statistics, reporting capabilities, research options with A/B testing, the course feed, and the TransPipe tool for supporting the creation of automated or manual subtitles. The platform also supports a large variety of additional features, but a detailed description of these features would go beyond the scope of this report.
The chapter then discusses common course formats and openHPI teaching activities at the HPI before closing with some best practices for the design and delivery of courses.
To conclude the technical report, the last chapter provides a summary and an outlook on the future of digital education.
A special characteristic of the openHPI project is the conscious decision to operate the complete application independently, from the physical network components up to platform development. This German version is an abridged translation of Technical Report 148 and does not cover the technologies and architecture of openHPI. Interested readers can find detailed information in Technical Report 148 (the complete English version) about the data center and devices, the cloud software, and the openHPI Cloud Service, as well as about infrastructure application components such as development tools, automation, the deployment pipeline, and monitoring. They will also find further information there on the technology stack and concrete implementation details of the platform, including the service-oriented Ruby on Rails application, inter-service communication, public APIs, as well as the design system and components. That section closes with a discussion of the original microservice architecture and the migration to a monolithic application.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, or model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no such formal underpinning yet that allows us to find suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, and thus we can employ the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
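To make the idea of decomposing a graph query into ordered subqueries concrete, here is a hedged Python sketch: a hypothetical host graph, edge-pattern subqueries, and a simple left-to-right ordering through which partial matches are propagated (not an actual generalized discrimination network or the report's formalism).

```python
# Host graph as a set of labelled edges (source, label, target).
graph = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "knows", "carol"),
    ("carol", "worksAt", "hpi"),
}

def match_edge(graph, pattern, partial):
    """Extend a partial variable binding by one edge pattern (src_var, label, dst_var)."""
    src_var, label, dst_var = pattern
    for s, l, t in graph:
        if l != label:
            continue
        binding = dict(partial)
        if binding.setdefault(src_var, s) != s:
            continue
        if binding.setdefault(dst_var, t) != t:
            continue
        yield binding

def evaluate(graph, subqueries):
    """Evaluate a decomposed query: feed partial matches through the ordered subqueries."""
    partials = [{}]
    for pattern in subqueries:            # one network node per subquery
        partials = [b for p in partials for b in match_edge(graph, pattern, p)]
    return partials

# Query: persons X, Y such that X knows Y and Y works at hpi.
query = [("X", "knows", "Y"), ("Y", "worksAt", "Z")]
for binding in evaluate(graph, query):
    print(binding)   # {'X': 'bob', 'Y': 'carol', 'Z': 'hpi'} and {'X': 'alice', ...}
```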
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
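The report defines the eGDN operations and incremental algorithms in detail; the following Python sketch only illustrates the underlying idea of dependency-driven, incremental re-execution in a network of model operations, with hypothetical models and operations rather than the prototype's implementation.

```python
# A tiny network of model operations: each operation reads input models and writes an output.
# When an input model changes, only the operations downstream of it are re-executed.
operations = {
    "extract_classes": {"inputs": ["uml"], "output": "classes",
                        "run": lambda m: sorted(set(m["uml"]))},
    "count_classes":   {"inputs": ["classes"], "output": "metrics",
                        "run": lambda m: {"n": len(m["classes"])}},
}

models = {"uml": ["Order", "Customer", "Order"]}

def execute(changed):
    """Incrementally re-run operations whose (transitive) inputs changed."""
    dirty = set(changed)
    progress = True
    while progress:
        progress = False
        for name, op in operations.items():
            if dirty & set(op["inputs"]) and op["output"] not in dirty:
                models[op["output"]] = op["run"](models)
                dirty.add(op["output"])
                progress = True
    return models

execute(["uml"])
print(models["metrics"])   # {'n': 2}
```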
An increasing demand for functionality and flexibility leads to the integration of previously isolated system solutions into a so-called System of Systems (SoS). Furthermore, the overall SoS should be adaptive in order to react to changing requirements and environmental conditions. Because an SoS is composed of different independent systems that may join or leave the overall SoS at arbitrary points in time, the SoS structure varies during the system's lifetime, and the overall SoS behavior emerges from the capabilities of the contained subsystems. In such complex system ensembles, new demands arise for understanding the interaction among subsystems, the coupling of shared system knowledge, and the influence of local adaptation strategies on the overall resulting system behavior. In this report, we formulate research questions focusing on the modeling of interactions between system parts inside an SoS. Furthermore, we define our notion of important system types and terms by reviewing the current state of the art in the literature. Having established a common understanding of SoS, we discuss a set of typical SoS characteristics and derive general requirements for a collaboration modeling language. Additionally, we retrieve a broad spectrum of real scenarios and frameworks from the literature and discuss how these scenarios cope with different characteristics of SoS. Finally, we discuss the state of the art of existing modeling languages that cope with collaborations for different system types such as SoS.
The service-oriented architecture supports the dynamic assembly and runtime reconfiguration of complex open IT landscapes by means of runtime binding of service contracts, launching of new components, and termination of outdated ones. Furthermore, the evolution of these IT landscapes is not restricted to exchanging components with other ones using the same service contracts, as new service contracts can be added as well. However, current approaches for modeling and verification of service-oriented architectures do not support these important capabilities to their full extent. In this report we present an extension of the current OMG proposal for service modeling with UML - SoaML - which overcomes these limitations. It permits modeling services and their service contracts at different levels of abstraction, provides a formal semantics for all modeling concepts, and enables verifying critical properties. Our compositional and incremental verification approach allows for complex properties including communication parameters and time, and covers, besides the dynamic binding of service contracts and the replacement of components, also the evolution of the systems by means of new service contracts. The modeling as well as the verification capabilities of the presented approach are demonstrated by means of a supply chain example, and the verification results of a first prototype are shown.
Modeling and Formal Analysis of Meta-Ecosystems with Dynamic Structure using Graph Transformation
(2022)
The dynamics of ecosystems is of crucial importance. Various model-based approaches exist to understand and analyze their internal effects. In this paper, we model the space structure dynamics and ecological dynamics of meta-ecosystems using the formal technique of Graph Transformation (short GT). We build GT models to describe how a meta-ecosystem (modeled as a graph) can evolve over time (modeled by GT rules) and to analyze these GT models with respect to qualitative properties such as the existence of structural stabilities. As a case study, we build three GT models describing the space structure dynamics and ecological dynamics of three different savanna meta-ecosystems. The first GT model considers a savanna meta-ecosystem that is limited in space to two ecosystem patches, whereas the other two GT models consider two savanna meta-ecosystems that are unlimited in the number of ecosystem patches and only differ in one GT rule describing how the space structure of the meta-ecosystem grows. In the first two GT models, the space structure dynamics and ecological dynamics of the meta-ecosystem shows two main structural stabilities: the first one based on grassland-savanna-woodland transitions and the second one based on grassland-desert transitions. The transition between these two structural stabilities is driven by high-intensity fires affecting the tree components. In the third GT model, the GT rule for savanna regeneration induces desertification and therefore a collapse of the meta-ecosystem. We believe that GT models provide a complementary avenue to that of existing approaches to rigorously study ecological phenomena.
Enacting business processes in process engines requires the coverage of control flow, resource assignments, and process data. While the first two aspects are well supported in current process engines, data dependencies need to be added and maintained manually by a process engineer. Thus, this task is error-prone and time-consuming. In this report, we address the problem of modeling processes with complex data dependencies, e.g., m:n relationships, and their automatic enactment from process models. First, we extend BPMN data objects with a few annotations to allow data dependency handling as well as data instance differentiation. Second, we introduce a pattern-based approach to derive SQL queries from process models utilizing the above-mentioned extensions. Thereby, we allow automatic enactment of data-aware BPMN process models. We implemented our approach for the Activiti process engine to show applicability.
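As a hedged toy illustration of the general idea of deriving an SQL query from a state-annotated data object (the annotation names and table schema below are hypothetical and not the extensions defined in the report), consider the following Python sketch.

```python
def data_object_query(data_object):
    """
    Derive a simple SQL query for reading a state-annotated BPMN data object.
    Assumes one table per data class with columns `id` and `state`, plus an
    optional foreign-key condition for 1:n / m:n dependencies (all names hypothetical).
    """
    sql = f"SELECT id FROM {data_object['class']} WHERE state = '{data_object['state']}'"
    if "fk" in data_object:
        column, value = data_object["fk"]
        sql += f" AND {column} = {value}"
    return sql

# Activity "Ship order" reads Order objects in state 'confirmed' belonging to customer 42.
print(data_object_query({"class": "Order", "state": "confirmed", "fk": ("customer_id", 42)}))
# SELECT id FROM Order WHERE state = 'confirmed' AND customer_id = 42
```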
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
Various kinds of typed attributed graphs are used to represent states of systems from a broad range of domains. For dynamic systems, established formalisms such as graph transformations provide a formal model for defining state sequences. We consider the extended case where time elapses between states and introduce a logic to reason about these sequences. With this logic we express properties on the structure and attributes of states as well as on the temporal occurrence of states that are related by their inner structure, which no formal logic over graphs has accomplished concisely so far. Firstly, we introduce graphs with history by equipping every graph element with the timestamp of its creation and, if applicable, its deletion. Secondly, we define a logic on graphs by integrating the temporal operator until into the well-established logic of nested graph conditions. Thirdly, we prove that our logic is equally expressive to nested graph conditions by providing a suitable reduction. Finally, the implementation of this reduction allows for the tool-based analysis of metric temporal properties for state sequences.
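A minimal data-structure sketch of such graphs with history (a hypothetical Python API, not the authors' formalization): every element carries its creation timestamp and, optionally, its deletion timestamp, and the graph state at time t contains exactly the elements alive at t.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TimedElement:
    ident: str
    created: float
    deleted: Optional[float] = None   # None: still present

    def alive_at(self, t: float) -> bool:
        return self.created <= t and (self.deleted is None or t < self.deleted)

@dataclass
class HistoryGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (element, source_id, target_id)

    def snapshot(self, t: float):
        """The plain graph state at time t: all elements alive at t."""
        live_nodes = {n.ident for n in self.nodes if n.alive_at(t)}
        live_edges = [(e.ident, s, d) for e, s, d in self.edges
                      if e.alive_at(t) and s in live_nodes and d in live_nodes]
        return live_nodes, live_edges

g = HistoryGraph()
g.nodes += [TimedElement("shuttle", 0.0), TimedElement("track", 0.0, deleted=5.0)]
g.edges += [(TimedElement("on", 1.0, deleted=5.0), "shuttle", "track")]
print(g.snapshot(2.0))   # nodes {'shuttle', 'track'} and edge ('on', 'shuttle', 'track')
print(g.snapshot(6.0))   # only {'shuttle'} remains, no edges
```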
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases—managing the storage and manipulation of graph data—updates may cause that a given database does not satisfy some integrity constraints, requiring also graph repair.
We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: state-based repair restores consistency independent of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction trees (STs) for encoding if and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
MDE techniques are more and more used in practice. However, there is currently a lack of detailed reports about how different MDE techniques are integrated into the development process and combined with each other. To learn more about such MDE settings, we performed a descriptive and exploratory field study with SAP, a worldwide operating company with around 50,000 employees that builds enterprise software applications. This technical report describes the insights we gained during this study. For example, we identified that MDE settings are subject to evolution. Finally, this report outlines directions for future research to provide practical advice for the application of MDE settings.
Um den zunehmenden Diebstahl digitaler Identitäten zu bekämpfen, gibt es bereits mehr als ein Dutzend Technologien. Sie sind, vor allem bei der Authentifizierung per Passwort, mit spezifischen Nachteilen behaftet, haben andererseits aber auch jeweils besondere Vorteile. Wie solche Kommunikationsstandards und -Protokolle wirkungsvoll miteinander kombiniert werden können, um dadurch mehr Sicherheit zu erreichen, haben die Autoren dieser Studie analysiert. Sie sprechen sich für neuartige Identitätsmanagement-Systeme aus, die sich flexibel auf verschiedene Rollen eines einzelnen Nutzers einstellen können und bequemer zu nutzen sind als bisherige Verfahren. Als ersten Schritt auf dem Weg hin zu einer solchen Identitätsmanagement-Plattform beschreiben sie die Möglichkeiten einer Analyse, die sich auf das individuelle Verhalten eines Nutzers oder einer Sache stützt.
The sensor data evaluated for this purpose comes from mobile devices that users frequently carry with them and use extensively, such as internet-enabled mobile phones, fitness trackers, and smartwatches. The researchers describe how such small computers can continuously compute a "trust level" solely from, for example, the analysis of movement patterns, position data, and network connection data. With this trust level, each device can indicate at any time the probability that its current user is in fact the owner whose typical behavioral patterns it "knows" in detail.
If the current value of the trust level (but not the individual biometric data) is transmitted to an external party such as an identity provider, the latter can make the trust level available to all services the user uses and wishes to inform about it. Each service can decide for itself from which trust level onward it considers a user authenticated. If the identity provider learns that the trust level has dropped below this limit, it can deny access to that service and to other services.
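As a minimal illustration of the threshold decision described above, the following sketch applies a service-specific limit to the transmitted trust level (names, values, and the trust scale are illustrative assumptions, not the study's implementation):

    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str
        required_trust: float  # minimum trust level this service accepts (0.0 to 1.0)

    def access_granted(current_trust: float, service: Service) -> bool:
        # Only the aggregated trust level is shared; the raw sensor data stays on the device.
        return current_trust >= service.required_trust

    banking = Service("online-banking", required_trust=0.9)
    webmail = Service("webmail", required_trust=0.6)

    trust_level = 0.75  # continuously recomputed on the user's device
    print(access_granted(trust_level, banking))  # False -> access denied
    print(access_granted(trust_level, webmail))  # True  -> access granted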
The particular advantages of this identity management approach are that it does not require specific and expensive hardware to evaluate the relevant data, only smartphones and so-called wearables. Even things such as machines that transmit data about their own behavior to the internet via sensor chips can be included. The data is collected continuously in the background without anyone having to attend to it. It is only used to compute a probability score and never leaves the device. When an internet user signs in to a service, they no longer have to recall a previously agreed secret, such as a password, but only need to approve the transmission of their current trust value with an "OK".
If the usage behavior changes, for example through different movement patterns or logging in to the internet from unusual places, this is detected quickly. Unauthorized persons can then immediately be denied access to the smartphone or to internet services. In the future, the evaluation of behavioral factors can be extended further, for example by capturing routines on workdays, on weekends, or on vacation. Comparison with the live data then indicates whether the behavior fits the usual pattern, i.e., whether the user is very likely also the registered owner of the device.
This study provides a comprehensive overview of the techniques for managing digital identities and the challenges involved. It first describes the types of attacks through which digital identities can be stolen. It then presents the different methods for proving identity, followed by a summarizing overview of the 15 most important protocols and technical standards for the communication between the three actors involved: service provider, identity provider, and user. Finally, current research on identity management at the Hasso Plattner Institute is presented.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases, such that test cases of high failure probability run first, and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
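For reference, APFD (Average Percentage of Faults Detected) is commonly defined as follows: for a schedule of n test cases that detects m faults, where TF_i denotes the position of the first test case revealing fault i,

    \mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \dots + TF_m}{n \cdot m} + \frac{1}{2n}

so schedules that detect faults earlier score closer to 100%.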
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
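A minimal sketch of the "repeat tests with recent failures first" heuristic mentioned above (the data layout and function names are illustrative assumptions, not the report's implementation):

    def prioritize_recent_failures(test_history, window=5):
        """Order tests so that those with failures in recent sessions run first.

        test_history maps a test name to its list of past verdicts
        (True = passed, False = failed), oldest first."""
        def recent_failures(verdicts):
            return sum(1 for verdict in verdicts[-window:] if not verdict)
        return sorted(test_history, key=lambda t: recent_failures(test_history[t]), reverse=True)

    history = {
        "test_login": [True, True, False, False],
        "test_search": [True, True, True, True],
        "test_export": [True, False, True, True],
    }
    print(prioritize_recent_failures(history))
    # -> ['test_login', 'test_export', 'test_search']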
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
Crochet is a popular handcraft all over the world. While other techniques such as knitting or weaving have received technical support over the years through machines, crochet is still a purely manual craft. Not only the act of crocheting itself is manual, but also the process of creating instructions for new crochet patterns, which is barely supported by domain-specific digital solutions. This leads to unstructured and often also ambiguous and erroneous pattern instructions. In this report, we propose a concept to digitally represent crochet patterns. This format incorporates crochet techniques, which allows domain-specific support for crochet pattern designers during the pattern creation and instruction writing process. As contributions, we present a thorough domain analysis, the concept of a graph structure used as a domain-specific language to specify crochet patterns, and a prototype of a projectional editor that uses the graph as the representation format of patterns and a diagramming system to visualize them in 2D and 3D. By analyzing the domain, we learned about crochet techniques and the pain points of designers in their pattern creation workflow. These insights are the basis on which we defined the pattern representation. To evaluate our concept, we built a prototype that demonstrates the concept's feasibility and tested the software with professional crochet designers, who approved of the concept.
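A minimal sketch of how such a graph-based pattern representation could look (the stitch types and fields are illustrative assumptions, not the report's actual data model):

    from dataclasses import dataclass, field

    @dataclass
    class Stitch:
        id: int
        kind: str  # e.g. "chain", "single", "double"
        inserted_into: list = field(default_factory=list)  # ids of stitches worked into

    @dataclass
    class CrochetPattern:
        stitches: dict = field(default_factory=dict)

        def add(self, stitch: Stitch) -> None:
            self.stitches[stitch.id] = stitch

    # A tiny pattern: a chain of three stitches, then one single crochet
    # worked back into the second chain stitch.
    pattern = CrochetPattern()
    pattern.add(Stitch(0, "chain"))
    pattern.add(Stitch(1, "chain", inserted_into=[0]))
    pattern.add(Stitch(2, "chain", inserted_into=[1]))
    pattern.add(Stitch(3, "single", inserted_into=[1]))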
1. Fundamentals of software visualization (Johannes Bohnet, Jürgen Döllner)
2. Visualization and exploration of software systems with the tool SHriMP/Creole (Alexander Gierak)
3. Annex: SHriMP/Creole in practice (Nebojsa Lazic)
4. Metrics-based software visualization with the reverse engineering tool CodeCrawler (Daniel Brinkmann)
5. Annex: CodeCrawler in practice (Benjamin Hagedorn)
6. Source-code-line-based software visualization (Nebojsa Lazic)
7. Landscape and city metaphors for software visualization (Benjamin Hagedorn)
8. Visualization of software evolution (Michael Schöbel)
9. Results and outlook (Johannes Bohnet)
Bibliography. List of authors.
While offering significant expressive power, graph transformation systems often come with rather limited capabilities for automated analysis, particularly if systems with many possible initial graphs and large or infinite state spaces are concerned. One approach that tries to overcome these limitations is inductive invariant checking. However, the verification of inductive invariants often requires extensive knowledge about the system in question and faces the approach-inherent challenges of locality and lack of context.
To address that, this report discusses k-inductive invariant checking for graph transformation systems as a generalization of inductive invariants. The additional context acquired by taking multiple (k) steps into account is the key difference to inductive invariant checking and is often enough to establish the desired invariants without requiring the iterative development of additional properties.
To analyze possibly infinite systems in a finite fashion, we introduce a symbolic encoding for transformation traces using a restricted form of nested application conditions. As its central contribution, this report then presents a formal approach and algorithm to verify graph constraints as k-inductive invariants. We prove the approach's correctness and demonstrate its applicability by means of several examples evaluated with a prototypical implementation of our algorithm.
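The report's algorithm works symbolically on graph conditions; purely as an illustration of the underlying proof principle, the following sketch shows plain k-induction on a small, explicitly given transition system (the representation and function names are assumptions, not the report's approach):

    def k_induction(initial_states, successors, holds, k, all_states):
        """Returns True if `holds` is proven invariant, False if it is violated
        within k steps, and None if the k-inductive step fails (inconclusive)."""
        # Base case: the property holds on every state reachable from an
        # initial state within at most k steps.
        frontier, seen = set(initial_states), set(initial_states)
        for _ in range(k + 1):
            if any(not holds(state) for state in frontier):
                return False
            frontier = {succ for state in frontier for succ in successors(state)} - seen
            seen |= frontier

        # Inductive step: on every path with k transitions whose first k states
        # satisfy the property, the final state satisfies it as well.
        paths = [(state,) for state in all_states]
        for _ in range(k):
            paths = [path + (succ,) for path in paths for succ in successors(path[-1])]
        for path in paths:
            if all(holds(state) for state in path[:-1]) and not holds(path[-1]):
                return None
        return True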
This document is an analysis of the Java Language Conversion Assistant (JLCA). It also covers a language analysis of the Java programming language as well as a survey of related work concerning Java and C# interoperability on the one hand and language conversion in general on the other. Part I deals with the language analysis. Part II covers the JLCA tool and the tests used to analyze it, and additionally gives an overview of the above-mentioned related work. Part III presents a complete project that has been translated using the JLCA.
The analysis of behavioral models such as Graph Transformation Systems (GTSs) is of central importance in model-driven engineering. However, GTSs often result in intractably large or even infinite state spaces and may be equipped with multiple or even infinitely many start graphs. To mitigate these problems, static analysis techniques based on finite symbolic representations of sets of states or paths thereof have been devised. We focus on the technique of k-induction for establishing invariants specified using graph conditions. To this end, k-induction generates symbolic paths backwards from a symbolic state representing a violation of a candidate invariant to gather information on how that violation could have been reached, possibly obtaining contradictions to assumed invariants. However, GTSs in which multiple agents regularly perform actions independently from each other cannot currently be analyzed using this technique, as the independence among backward steps may prevent the gathering of relevant knowledge altogether.
In this paper, we extend k-induction to GTSs with multiple agents, thereby supporting a wide range of additional GTSs. As a running example, we consider an unbounded number of shuttles driving on a large-scale track topology, which adjust their velocity to speed limits to avoid derailing. As a central contribution, we develop pruning techniques based on causality and independence among backward steps and verify that k-induction remains sound under this adaptation and terminates in cases where it did not terminate before.
Formal modeling and analysis are of crucial importance for software development processes following the model-based approach. We present the formalism of Interval Probabilistic Timed Graph Transformation Systems (IPTGTSs) as a high-level modeling language. This language supports structure dynamics (based on graph transformation), timed behavior (based on clocks, guards, resets, and invariants as in Timed Automata (TA)), and interval probabilistic behavior (based on Discrete Interval Probability Distributions). That is, for the probabilistic behavior, the modeler using IPTGTSs does not need to provide precise probabilities, which are often impossible to obtain, but rather a probability range from which a precise probability is chosen nondeterministically. In fact, this way of capturing probabilistic behavior distinguishes IPTGTSs from the Probabilistic Timed Graph Transformation Systems (PTGTSs) presented earlier.
Following earlier work on Interval Probabilistic Timed Automata (IPTA) and PTGTSs, we also provide an analysis tool chain for IPTGTSs based on inter-formalism transformations. In particular, we provide in our tool AutoGraph a translation of IPTGTSs to IPTA and rely on a mapping of IPTA to Probabilistic Timed Automata (PTA) to allow for the usage of the Prism model checker. The tool Prism can then be used to analyze the resulting PTA w.r.t. probabilistic real-time queries asking for worst-case and best-case probabilities to reach a certain set of target states in a given amount of time.
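For illustration, such queries correspond to Prism reachability properties of the form Pmax=? [ F<=T "target" ] and Pmin=? [ F<=T "target" ], asking for the maximum and minimum probability of reaching states labeled "target" within T time units; the label and the time bound here are placeholders, not taken from the report.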
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and the basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing a specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, consistency today has to be reestablished in a cumbersome manual step. In this report, an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.], we present how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest. We present the model synchronization algorithm based on triple graph grammars in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.
Graph transformation systems are a powerful formal model to capture model transformations or systems with infinite state space, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report, we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions and provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool by considering three case studies.
In-Memory Data Management
(2012)
After 50 years of successful development, business IT has reached a new turning point. Here the authors show for the first time how in-memory computing will change enterprise processes in the future. Until now, enterprise data has been spread across separate databases for performance reasons: analytical data resides in data warehouses and is regularly synchronized with the help of transactional systems. This split makes flexible real-time reporting on current data impossible. Thanks to powerful multi-core CPUs, large main memories, cloud computing, and ever better mobile devices, however, companies are increasingly leaving this restrictive model behind. The authors present techniques that allow analytical and transactional processing in real time and thereby open up new paths for business.
Developing large software projects is a complicated task and can be demanding for developers. Continuous integration is common practice for reducing complexity. By integrating and testing changes often, changesets are kept small and therefore easily comprehensible. Travis CI is a service that offers continuous integration and continuous deployment in the cloud. Software projects are built, tested, and deployed using the Travis CI infrastructure without interrupting the development process. This report describes how Travis CI works, presents how time-driven, periodic building and CI data visualization can be implemented, and proposes a way of dealing with dependency problems.
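A minimal sketch of time-driven building by triggering builds from an external scheduler (this assumes the Travis CI API v3 "trigger build" endpoint; the repository slug and token are placeholders, and the report's actual implementation may differ):

    import os
    import requests

    def trigger_build(repo_slug: str, branch: str = "master") -> None:
        # Ask Travis CI to start a new build for the given repository and branch.
        url = "https://api.travis-ci.org/repo/{}/requests".format(repo_slug.replace("/", "%2F"))
        headers = {
            "Travis-API-Version": "3",
            "Authorization": "token " + os.environ["TRAVIS_TOKEN"],
            "Content-Type": "application/json",
        }
        response = requests.post(url, json={"request": {"branch": branch}}, headers=headers)
        response.raise_for_status()

    # Invoked periodically, e.g. from a cron job, to build on a fixed schedule.
    trigger_build("example-org/example-repo")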
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
No existing software solution covers the resulting requirements in their entirety without sacrificing either the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
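A minimal sketch of how data access through the GraphQL interface could look (Strapi serves GraphQL under /graphql by default; the "pictures" content type and its fields are hypothetical, and the exact response shape depends on the Strapi version):

    import requests

    PICTURES_QUERY = """
    query {
      pictures {
        title
        description
        year
      }
    }
    """

    def fetch_pictures(base_url: str) -> dict:
        # Send the GraphQL query to the Strapi backend and return the raw data.
        response = requests.post(base_url + "/graphql", json={"query": PICTURES_QUERY})
        response.raise_for_status()
        return response.json()["data"]

    print(fetch_pictures("http://localhost:1337"))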
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
Decubitus is one of the most relevant diseases in nursing and the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, methods of Deep Learning are employed. Two approaches are examined: a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates and a U-Net for the derivation of probability distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
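The error metric reported above can be made concrete with a small sketch (this is a common definition of the mean per-joint position error; array shapes and units are assumptions, not the project's code):

    import numpy as np

    def mean_per_joint_position_error(pred: np.ndarray, truth: np.ndarray) -> float:
        # Mean Euclidean distance between predicted and ground-truth joint positions.
        # pred, truth: arrays of shape (num_samples, num_joints, 2), in centimetres.
        return float(np.linalg.norm(pred - truth, axis=-1).mean())

    # Toy example with one sample and two joints:
    pred = np.array([[[10.0, 20.0], [30.0, 42.0]]])
    truth = np.array([[[13.0, 24.0], [30.0, 40.0]]])
    print(mean_per_joint_position_error(pred, truth))  # (5.0 + 2.0) / 2 = 3.5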
Additionally, we give a brief overview of the medical data annotation tool annoto, which we developed in the bachelor project, and conclude that the Scrum framework and agile practices enhanced our development workflow.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2018. Selected projects presented their results at the Future SOC Lab Day events on April 17th and November 14th, 2017.