This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such work, the literature recommends avoiding unexpected recovery demands altogether by following a structured and disciplined approach, which consists of applying various best practices: working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices only selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy instead and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools was accompanied by regular performance and usability tests. In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided by CoExist has a positive effect on programming performance in explorative programming tasks.
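To make the recovery model concrete, here is a minimal Python sketch of the underlying idea of implicit, fine-grained versioning: every change produces a recoverable snapshot, so no explicit commit is ever needed. All names are hypothetical; CoExist itself operates on Squeak/Smalltalk code and records test results alongside each version.

```python
# Minimal sketch of implicit, fine-grained versioning in the spirit of
# CoExist. Hypothetical names; the real tool versions Squeak/Smalltalk
# artifacts rather than plain strings.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Snapshot:
    version: int
    sources: Dict[str, str]          # artifact name -> source code
    test_results: Dict[str, bool]    # test name -> passed?

@dataclass
class ChangeHistory:
    snapshots: List[Snapshot] = field(default_factory=list)

    def record(self, sources, test_results=None):
        """Record a snapshot after *every* change -- no explicit commit."""
        snap = Snapshot(len(self.snapshots), dict(sources), dict(test_results or {}))
        self.snapshots.append(snap)
        return snap.version

    def recover(self, version):
        """Return a full earlier development state."""
        return self.snapshots[version].sources

    def recover_part(self, version, name):
        """Recover only one artifact (e.g., a single method) from the past."""
        return self.snapshots[version].sources[name]

history = ChangeHistory()
v0 = history.record({"Account.deposit": "def deposit(a): ..."})
v1 = history.record({"Account.deposit": "def deposit(a, amt): ..."})
assert "def deposit(a):" in history.recover_part(v0, "Account.deposit")
```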
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a respective model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge within an ever-changing business environment: once a process is changed, all its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction is an operation that preserves essential process properties and leaves out insignificant details in order to retain information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and on model transformations supporting these use cases. This thesis systematically approaches the problem of business process model abstraction, shaping the outcome into a framework. We investigate the current industry demand for abstraction and summarize it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case in which the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on process model structure analysis; a toy example of such a structural transformation follows below. Further, abstraction methods that consider the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge. The thesis validates the designed abstraction methods against sets of industrial process models and discusses the implementation aspects of the methods. Second, we develop a novel model transformation which, combined with the related-activity discovery, allows flexible non-hierarchical abstraction. In this way, the thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
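As one illustration of a structure-based transformation, the following Python sketch aggregates maximal activity sequences of a process graph into coarse-grained activities, one of the simplest abstractions that preserves the overall ordering constraints. The graph encoding and the function are assumptions for illustration, not the thesis's actual transformations.

```python
# Toy structural abstraction: aggregate maximal sequences of activities
# into single coarse-grained activities, preserving the overall ordering.
# Process model as adjacency dict: activity -> list of successors.
def abstract_sequences(succ):
    pred = {}
    for a, bs in succ.items():
        for b in bs:
            pred.setdefault(b, []).append(a)
    chains, seen = [], set()
    for node in succ:
        if node in seen:
            continue
        # Skip nodes in the middle of a chain; the chain start handles them.
        if len(pred.get(node, [])) == 1 and len(succ.get(pred[node][0], [])) == 1:
            continue
        chain = [node]
        while len(succ.get(chain[-1], [])) == 1:
            nxt = succ[chain[-1]][0]
            if len(pred.get(nxt, [])) != 1:
                break
            chain.append(nxt)
        seen.update(chain)
        if len(chain) > 1:
            chains.append(chain)
    return ["+".join(c) for c in chains]

model = {"receive": ["check"], "check": ["approve"], "approve": []}
print(abstract_sequences(model))  # ['receive+check+approve']
```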
In today's world, many applications produce large amounts of data at an enormous rate. Analyzing such datasets for metadata is indispensable for effectively understanding, storing, querying, manipulating, and mining them. Metadata summarizes the technical properties of a dataset, which range from basic statistics to complex structures describing data dependencies. One type of dependency is the inclusion dependency (IND), which expresses subset relationships between attributes of datasets. Inclusion dependencies are therefore important for many data management applications, such as data integration, query optimization, schema redesign, and integrity checking. Thus, the discovery of inclusion dependencies in unknown or legacy datasets is at the core of any data profiling effort.
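For readers unfamiliar with INDs, here is a minimal Python sketch of the semantics being discovered: the unary IND A ⊆ B holds iff every value of attribute A also occurs in attribute B. Real discovery algorithms avoid this naive pairwise testing; the function below only illustrates what they compute.

```python
# Naive illustration of unary IND semantics: A ⊆ B holds iff every value
# of column A also occurs in column B.
from itertools import permutations

def unary_inds(columns):
    """columns: dict mapping attribute name -> list of values."""
    sets = {name: set(vals) for name, vals in columns.items()}
    return [(a, b) for a, b in permutations(sets, 2) if sets[a] <= sets[b]]

data = {
    "orders.customer_id": [1, 2, 2, 3],
    "customers.id":       [1, 2, 3, 4],
}
print(unary_inds(data))  # [('orders.customer_id', 'customers.id')]
```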
For exhaustively detecting all INDs in large datasets, we developed S-indd++, a new algorithm that eliminates the shortcomings of existing IND-detection algorithms and significantly outperforms them. S-indd++ is based on a novel attribute-clustering concept for efficiently deriving INDs. Inferring INDs from our attribute clustering eliminates all the redundant operations incurred by other algorithms. S-indd++ is also based on a novel partitioning strategy that enables discarding a large number of candidates in early phases of the discovery process. Moreover, S-indd++ does not require a partition to fit into main memory, a highly desirable property in the face of ever-growing datasets. S-indd++ reduces the runtime of the state-of-the-art approach by up to 50%.
None of the existing approaches for discovering INDs is appropriate for application to dynamic datasets: they cannot update the INDs after an update of the dataset without reprocessing it entirely. To this end, we developed the first approach for incrementally updating INDs in frequently changing datasets. We achieved this by reducing the problem of incrementally updating INDs to incrementally updating the attribute clustering from which all INDs are efficiently derivable. We realized the update of the clusters by designing new operations that are applied to the clusters after every data update. The incremental update of INDs reduces the time of a complete rediscovery by up to 99.999%.
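A simplified Python sketch of the incremental principle, assuming INDs are maintained directly rather than via the attribute clustering of the actual approach: distinct-value sets are kept per attribute, and after each insert only the INDs that can possibly change are re-examined.

```python
# Simplified incremental maintenance of unary INDs under inserts.
def apply_insert(sets, inds, attr, value):
    """sets: attr -> set of distinct values; inds: set of (a, b) with a ⊆ b."""
    if value in sets[attr]:
        return inds                       # duplicate value: nothing can change
    sets[attr].add(value)
    # Inserting into attr can invalidate INDs of the form attr ⊆ other ...
    survivors = {(a, b) for (a, b) in inds
                 if a != attr or value in sets[b]}
    # ... and can newly validate INDs of the form other ⊆ attr.
    for other in sets:
        if other != attr and (other, attr) not in survivors and sets[other] <= sets[attr]:
            survivors.add((other, attr))
    return survivors

sets = {"A": {1, 2}, "B": {1, 2, 3}}
inds = {("A", "B")}
print(apply_insert(sets, inds, "A", 4))   # set(): 4 not in B, so A ⊆ B is dropped
```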
All existing algorithms for discovering n-ary INDs are based on the principle of candidate generation: they generate candidates and test their validity against the given data instance. The major disadvantage of this technique is the exponentially growing number of database accesses, in the form of SQL queries, required for validation. We devised Mind2, the first approach for discovering n-ary INDs without candidate generation. Mind2 is based on a new mathematical framework, developed in this thesis, for computing the maximum INDs from which all other n-ary INDs are derivable. The experiments showed that Mind2 is significantly more scalable and effective than hypergraph-based algorithms.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality with respect to effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and the selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles (photorealistic and non-photorealistic) and provides their seamless transition according to user tasks, camera view, and image resolution.
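To give a flavor of one artistic-rendering principle usable in such pipelines, here is an illustrative CPU-side Python sketch of extended difference-of-Gaussians (xDoG) edge stylization using scipy; the thesis's techniques run as GPU shaders, and the parameter values here are arbitrary.

```python
# Illustrative xDoG edge stylization on the CPU (the thesis uses GPU
# shader pipelines). Produces the typical black "ink line" look.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(gray, sigma=1.0, k=1.6, tau=0.98, epsilon=0.1, phi=10.0):
    """gray: 2D float array in [0, 1]; returns a stylized edge image."""
    g1 = gaussian_filter(gray, sigma)
    g2 = gaussian_filter(gray, sigma * k)
    d = g1 - tau * g2                     # extended difference of Gaussians
    # Soft thresholding: bright where d exceeds epsilon, smooth falloff below.
    return np.where(d > epsilon, 1.0, 1.0 + np.tanh(phi * (d - epsilon)))

print(dog_edges(np.random.rand(64, 64)).shape)  # (64, 64)
```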
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization, including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have shown their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g., model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are essentially caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is considered a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, the approach constitutes comprehensive model management: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions: data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail.
In this thesis, context composition provides the ability to use a collection of dependencies as the context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations being composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
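A minimal Python sketch of the data-flow composition idea, assuming model operations are coupled only through the names of the DSMs they consume and produce; a simple interpreter then applies operations once their inputs are available. All names and the encoding are hypothetical, not the megamodel formalism itself.

```python
# Data-flow composition of model operations: operations share nothing but
# the models they consume and produce; an interpreter schedules them.
from typing import Callable, Dict, List, Tuple

# A model operation: (input model names, output model names, function).
Operation = Tuple[List[str], List[str], Callable]

def execute(ops: List[Operation], models: Dict[str, object]) -> Dict[str, object]:
    """Apply operations in data-flow order until all outputs are produced."""
    pending = list(ops)
    while pending:
        runnable = [op for op in pending if all(m in models for m in op[0])]
        if not runnable:
            raise ValueError("cyclic or unsatisfiable composition")
        for op in runnable:
            inputs, outputs, fn = op
            results = fn(*[models[m] for m in inputs])
            models.update(zip(outputs, results))
            pending.remove(op)
    return models

# Two toy operations coupled only via the shared "java" model.
uml2java = (["uml"], ["java"], lambda uml: (f"java<{uml}>",))
java2doc = (["java"], ["doc"], lambda java: (f"doc<{java}>",))
print(execute([java2doc, uml2java], {"uml": "ClassDiagram"}))
# {'uml': 'ClassDiagram', 'java': 'java<ClassDiagram>', 'doc': 'doc<java<ClassDiagram>>'}
```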
Organizations try to gain competitive advantages and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of a required freedom of action. This situation is often encountered in hospitals, where most business processes are enacted manually. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation are prone to errors; e.g., the documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur but are not observed by the monitoring environment. These unobserved process events can serve as a basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to ensure the timely completion of a case, anticipating that the process end event will otherwise occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "When will the case be finished?" or "When did we perform the activity that we forgot to document?"). As our main contribution, we introduce an advanced probabilistic model for business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to using the model effectively along the business process lifecycle. To this end, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality through missing-data management. We propose mechanisms to optimize the configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data from a hospital. Additionally, we show its more general applicability in other domains by applying the approach to process data from logistics and finance.
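To illustrate the prediction part on a drastically simplified model, the following Python sketch performs a Monte Carlo simulation over the activities still to be performed in a case, with exponentially distributed durations as in a stochastic Petri net; the rates and quantiles are hypothetical, and the thesis's model is considerably richer.

```python
# Monte Carlo prediction of case completion time for a linear remainder
# of a process, with exponential activity durations (as in a GSPN).
import random

def predict_completion(remaining_rates, now, runs=10_000):
    """remaining_rates: firing rates of the transitions still to fire;
    now: elapsed case time. Returns (median, 95th percentile) finish time."""
    samples = sorted(
        now + sum(random.expovariate(r) for r in remaining_rates)
        for _ in range(runs)
    )
    return samples[len(samples) // 2], samples[int(len(samples) * 0.95)]

# Two remaining activities with mean durations 2h and 0.5h; case now at t=10h.
median, p95 = predict_completion([1 / 2.0, 1 / 0.5], now=10.0)
print(f"median finish ~{median:.1f}h, 95th percentile ~{p95:.1f}h")
```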
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as the Business Process Model and Notation (BPMN) and Event-driven Process Chains (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation, as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones, and it is less probable to introduce new errors when modifying a well-structured process model. (iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., the Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
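The SESE condition can be checked with standard graph machinery. Below is an illustrative Python sketch using networkx: a split s and a join j bound a SESE region if s dominates j and j postdominates s, so a model counts as well-structured here if every split has such a matching join. This is a simplified check for illustration, not the thesis's transformation algorithm.

```python
# Simplified well-structuredness check on a workflow graph: every split
# must have a matching join such that (split, join) bounds a SESE region,
# i.e., the split dominates the join and the join postdominates the split.
import networkx as nx

def is_well_structured(g, source, sink):
    dom = nx.immediate_dominators(g, source)
    pdom = nx.immediate_dominators(g.reverse(copy=True), sink)

    def dominates(a, b):          # does a dominate b?
        while b != a and b != dom[b]:
            b = dom[b]
        return b == a

    def postdominates(a, b):      # does a postdominate b?
        while b != a and b != pdom[b]:
            b = pdom[b]
        return b == a

    splits = [n for n in g if g.out_degree(n) > 1]
    joins = [n for n in g if g.in_degree(n) > 1]
    return all(any(dominates(s, j) and postdominates(j, s) for j in joins)
               for s in splits)

g = nx.DiGraph([("start", "split"), ("split", "a"), ("split", "b"),
                ("a", "join"), ("b", "join"), ("join", "end")])
print(is_well_structured(g, "start", "end"))  # True
```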
The correction of software failures tends to be very cost-intensive because their debugging is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: starting with a test case that reproduces the observable failure, they have to follow the failure causes along the infection chain back to the root cause (the defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition, and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure-cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computational cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs, depending on developers' needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands the failure causes best, what happened before the failure, and which state properties are infected.
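To illustrate the kind of anomaly scoring that structure navigation builds on, here is a Python sketch using the classic Tarantula suspiciousness metric as a stand-in: system parts covered mostly by failing tests rank highest. The metric and data layout are illustrative assumptions, not the dissertation's exact analyses.

```python
# Spectrum-based suspiciousness (Tarantula metric) from test coverage:
# methods executed mainly by failing tests are ranked as most suspicious.
def suspiciousness(coverage, outcomes):
    """coverage: test -> set of covered methods; outcomes: test -> passed?"""
    failed = [t for t, ok in outcomes.items() if not ok]
    passed = [t for t, ok in outcomes.items() if ok]
    scores = {}
    for m in set().union(*coverage.values()):
        ef = sum(m in coverage[t] for t in failed) / max(len(failed), 1)
        ep = sum(m in coverage[t] for t in passed) / max(len(passed), 1)
        scores[m] = ef / (ef + ep) if ef + ep else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

cov = {"t1": {"deposit", "log"}, "t2": {"withdraw", "log"}}
out = {"t1": False, "t2": True}
print(suspiciousness(cov, out))  # 'deposit' ranked most suspicious
```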
Personal fabrication tools, such as 3D printers, are on the way to enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which tends to take overnight due to the slowness of 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing, as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users, as they lack the experience to reason about the consequences of their design decisions.
In this dissertation, we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: computing started with machines that executed a program in one go before returning the result to the user. By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program, receiving feedback about every action in real time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
We start with fabricating an object in one go and investigate how to tighten the feedback cycle on the object level: we contribute a method called low-fidelity fabrication, which saves up to 90% of fabrication time by creating objects as fast low-fidelity previews that are sufficient to evaluate key design aspects. Depending on what is currently being tested, we propose different conversions that enable users to focus on different parts: faBrickator allows for a modular design in the early stages of prototyping; when users move on, WirePrint allows quickly testing an object's shape, while Platener allows testing an object's technical function. We present an interactive editor for each technique and explain the underlying conversion algorithms.
By interacting on smaller units, such as a single element of an object, we explore what it means to transition from systems that fabricate objects in one go to turn-taking systems. We start with a 2D system called constructable: users draw with a laser pointer onto the workpiece inside a laser cutter. The drawing is captured with an overhead camera. As soon as the user finishes drawing an element, such as a line, the constructable system beautifies the path and cuts it, resulting in physical output after every editing step. We extend constructable towards 3D editing by developing a novel laser-cutting technique for 3D objects called LaserOrigami, which works by heating up the workpiece with the defocused laser until the material becomes compliant and bends down under gravity. While constructable and LaserOrigami allow for fast physical feedback, the interaction is still best described as turn-taking, since it consists of two discrete steps: users first create an input, and afterwards the system provides physical output.
By decreasing the interaction unit even further to a single feature, we can achieve real-time physical feedback: input by the user and output by the fabrication device are so tightly coupled that no visible lag exists. This allows us to explore what it means to transition from turn-taking interfaces, which only allow exploring one option at a time, to direct-manipulation interfaces with real-time physical feedback, which allow users to explore the entire space of options continuously with a single interaction. We present a system called FormFab, which allows for such direct control. FormFab is based on the same principle as LaserOrigami: it uses a workpiece that, when warmed up, becomes compliant and can be reshaped. However, FormFab achieves the reshaping not through gravity but through a pneumatic system that users can control interactively. As users interact, they see the shape change in real time.
We conclude this dissertation by extrapolating the current evolution into a future in which large numbers of people use the new technology to create objects. We see two additional challenges on the horizon: sustainability and intellectual property. We investigate sustainability by demonstrating how to print less and instead patch physical objects. We explore questions around intellectual property with a system called Scotty that transfers objects without creating duplicates, thereby preserving the designer's copyright.