TY - THES
A1 - Abedjan, Ziawasch
T1 - Improving RDF data with data mining
T1 - Verbessern von RDF Daten durch Data-Mining
N2 - Linked Open Data (LOD) comprises many, often very large, public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage by providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies. Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
N2 - Linked Open Data (LOD) umfasst viele, oft sehr große öffentliche Datensätze und Wissensbanken, die hauptsächlich in der RDF-Triplestruktur bestehend aus Subjekt, Prädikat und Objekt vorliegen. Dabei repräsentiert jedes Triple einen Fakt. Unglücklicherweise erfordert die Heterogenität der verfügbaren öffentlichen Daten signifikante Integrationsschritte, bevor die Daten in Anwendungen genutzt werden können. Meta-Daten wie ontologische Strukturen und Bereichsdefinitionen von Prädikaten sind zwar wünschenswert und idealerweise durch eine Wissensbank verfügbar, jedoch sind Wissensbanken im Kontext von LOD oft unvollständig oder einfach nicht verfügbar. Deshalb ist es nützlich, Meta-Informationen wie ontologische Abhängigkeiten, Bereichs- und Domänendefinitionen und thematische Assoziationen von Ressourcen automatisch generieren zu können. Eine neue und vielversprechende Technik, um solche Daten zu untersuchen, basiert auf dem Entdecken von Assoziationsregeln, die ursprünglich für Verkaufsanalysen in transaktionalen Datenbanken angewendet wurden. Wir haben eine Adaption dieser Technik für RDF-Daten entworfen und stellen das Konzept der Mining-Konfigurationen vor, das uns befähigt, in RDF-Daten auf unterschiedliche Weise Muster zu erkennen. Verschiedene Konfigurationen erlauben uns, Schema- und Wertbeziehungen zu erkennen, die für interessante Anwendungen genutzt werden können. In diesem Sinne stellen wir assoziationsbasierte Verfahren für Prädikatvorschläge, Datenvervollständigung, Ontologieverbesserung und Anfrageerleichterung vor. Das Vorschlagen von Prädikaten behandelt das Problem der inkonsistenten Verwendung von Ontologien, indem einem Benutzer, der einen neuen Fakt einem RDF-Datensatz hinzufügen will, eine sortierte Liste von passenden Prädikaten vorgeschlagen wird. Eine Kombination verschiedener Konfigurationen erweitert dieses Verfahren, sodass automatisch komplett neue Fakten für eine Wissensbank generiert werden. Hierbei stellen wir zwei Verfahren vor: ein nutzergesteuertes Verfahren, bei dem ein Nutzer die Entität aussucht, die erweitert werden soll, und einen datengesteuerten Ansatz, bei dem ein Algorithmus selbst die Entitäten aussucht, die mit fehlenden Fakten erweitert werden. Da Wissensbanken stetig wachsen und sich verändern, ist ein anderer Ansatz, um die Verwendung von RDF-Daten zu erleichtern, die Verbesserung bestehender Ontologien. Hierbei präsentieren wir ein assoziationsregelbasiertes Verfahren, das Daten und zugrundeliegende Ontologien zusammenführt. Durch die Verflechtung von unterschiedlichen Konfigurationen leiten wir einen neuen Algorithmus her, der gleichbedeutende Prädikate entdeckt. Diese Prädikate können benutzt werden, um Ergebnisse einer Anfrage zu erweitern oder einen Nutzer während einer Anfrage zu unterstützen. Für jede der vorgestellten Anwendungen präsentieren wir eine große Auswahl an Experimenten auf Realweltdatensätzen. Die Experimente und Evaluierungen zeigen den Mehrwert der Assoziationsregel-Generierung für die Integration und Nutzbarkeit von RDF-Daten und bestätigen die Angemessenheit unserer konfigurationsbasierten Methodologie, um solche Regeln herzuleiten.
KW - Assoziationsregeln
KW - RDF
KW - LOD
KW - Mustererkennung
KW - Synonyme
KW - association rule mining
KW - RDF
KW - LOD
KW - knowledge discovery
KW - synonym discovery
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-71334
ER -
TY - BOOK
A1 - Abedjan, Ziawasch
A1 - Golab, Lukasz
A1 - Naumann, Felix
A1 - Papenbrock, Thorsten
T1 - Data Profiling
T3 - Synthesis lectures on data management, 52
Y1 - 2019
SN - 978-1-68173-446-0
PB - Morgan & Claypool Publishers
CY - San Rafael
ER -
TY - BOOK
A1 - Abedjan, Ziawasch
A1 - Naumann, Felix
T1 - Advancing the discovery of unique column combinations
N2 - Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
N2 - Unique-Spaltenkombinationen sind Spaltenkombinationen einer Datenbanktabelle, die nur einzigartige Werte beinhalten. Das Finden von Unique-Spaltenkombinationen spielt sowohl eine wichtige Rolle im Bereich der Grundlagenforschung von Informationssystemen als auch in Anwendungsgebieten wie dem Datenmanagement und der Erkenntnisgewinnung aus Datenbeständen. Vorhandene Algorithmen, die dieses Problem angehen, sind entweder Brute-Force oder benötigen zu viel Hauptspeicher, weshalb sie nur auf kleine Datenmengen angewendet werden können. In dieser Arbeit werden der bekannte GORDIAN-Algorithmus und Apriori-basierte Algorithmen zum Zwecke weiterer Optimierung analysiert. Wir verbessern die Apriori-Algorithmen durch eine effiziente Kandidatengenerierung und heuristikbasierte Kandidatenfilter. Eine hybride Lösung, HCA-GORDIAN, kombiniert die Vorteile von GORDIAN und unserem neuen Algorithmus HCA und übertrifft die bisherigen Algorithmen hinsichtlich der Effizienz in vielen Situationen.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 51
KW - Apriori
KW - eindeutig
KW - funktionale Abhängigkeit
KW - Schlüsselentdeckung
KW - Data Profiling
KW - apriori
KW - unique
KW - functional dependency
KW - key discovery
KW - data profiling
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-53564
SN - 978-3-86956-148-6
SN - 1613-5652
SN - 2191-1665
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Adnan, Hassan Sami
A1 - Matthews, Sam
A1 - Hackl, M.
A1 - Das, P. P.
A1 - Manaswini, Manisha
A1 - Gadamsetti, S.
A1 - Filali, Maroua
A1 - Owoyele, Babajide
A1 - Santuber, Joaquín
A1 - Edelman, Jonathan
T1 - Human centered AI design for clinical monitoring and data management
JF - European journal of public health : official journal of the European Public Health Association
N2 - In clinical settings, significant resources are spent on data collection and monitoring patients' health parameters to improve decision-making and provide better care. With increased digitization, the healthcare sector is shifting towards implementing digital technologies for data management and administration. New technologies offer better treatment opportunities and streamline clinical workflows, but their complexity can cause ineffectiveness, frustration, and errors. To address this, we believe digital solutions alone are not sufficient. Therefore, we take a human-centred design approach to AI development and apply systems engineering methods to identify system leverage points. We demonstrate how automation enables monitoring clinical parameters using existing non-intrusive sensor technology, resulting in more resources for patient care. Furthermore, we provide a framework for digitizing clinical data for integration with data management.
Y1 - 2020
U6 - https://doi.org/10.1093/eurpub/ckaa165.225
SN - 1101-1262
SN - 1464-360X
VL - 30
IS - 5
SP - V86
EP - V86
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Adnan, Hassan Sami
A1 - Srsic, Amanda
A1 - Venticich, Pete Milos
A1 - Townend, David M.R.
T1 - Using AI for mental health analysis and prediction in school surveys
JF - European journal of public health
N2 - Background: Childhood and adolescence are critical stages of life for mental health and well-being. Schools are a key setting for mental health promotion and illness prevention. One in five children and adolescents has a mental disorder, and about half of all mental disorders begin before the age of 14. Beneficial and explainable artificial intelligence can replace current paper-based and online approaches to school mental health surveys. This can enhance data acquisition, interoperability, data-driven analysis, trust, and compliance. This paper presents a model for using chatbots for non-obtrusive data collection and supervised machine learning models for data analysis, and discusses ethical considerations pertaining to the use of these models.
Methods: For data acquisition, the proposed model uses chatbots which interact with students. The conversation log acts as the source of raw data for machine learning. Pre-processing of the data is automated by filtering for keywords and phrases. Existing survey results, obtained through current paper-based data collection methods, are evaluated by domain experts (health professionals). These can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviour and mental health patterns.
Results: We present a model that can be used to improve upon current paper-based data collection and manual data analysis methods. An open-source GitHub repository contains the necessary tools and components of this model. Privacy is respected through rigorous observance of confidentiality and data protection requirements. Critical reflection on these ethical and legal aspects is included in the project.
Conclusions: This model strengthens mental health surveillance in schools. The same tools and components could be applied to other public health data. Future extensions of this model could also incorporate unsupervised learning to find clusters and patterns of unknown effects.
KW - ethics
KW - artificial intelligence
KW - adolescent
KW - child
KW - confidentiality
KW - health personnel
KW - mental disorders
KW - mental health
KW - personal satisfaction
KW - privacy
KW - school (environment)
KW - statutes and laws
KW - public health medicine
KW - surveillance
KW - medical
KW - prevention
KW - datasets
KW - machine learning
KW - supervised machine learning
KW - data analysis
Y1 - 2020
U6 - https://doi.org/10.1093/eurpub/ckaa165.336
SN - 1101-1262
SN - 1464-360X
VL - 30
SP - V125
EP - V125
PB - Oxford Univ. Press
CY - Oxford [u.a.]
ER -
TY - BOOK
A1 - Albrecht, Alexander
A1 - Naumann, Felix
T1 - Understanding cryptic schemata in large extract-transform-load systems
N2 - Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and to time wasted developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
N2 - Extract-Transform-Load (ETL) Tools werden häufig beim Erstellen, der Wartung und der Weiterentwicklung von Data Warehouses, Data Marts und operationalen Datenbanken verwendet. ETL Workflows befüllen diese Systeme mit Daten aus vielen unterschiedlichen Quellsystemen. Ein ETL Workflow besteht aus mehreren Transformationsschritten, die einen DAG-strukturierten Graphen bilden. Mit der Zeit entstehen hunderte individueller ETL Workflows, da neue Datenquellen integriert oder neue Anforderungen umgesetzt werden müssen. Die Wartung und Weiterentwicklung von großen ETL Systemen benötigt viel Zeit und manuelle Arbeit. Ein zentrales Problem ist dabei das Verständnis unbekannter Attributnamen in Quell- und Zieldatenbanken und ETL Transformationen. Schwer verständliche Attributnamen führen zu Frustration und hohen Zeitaufwänden bei der Entwicklung und dem Verständnis von ETL Workflows. Wir präsentieren eine Schema-Decryption-Technik, die ETL Entwicklern das Verständnis kryptischer Schemata in Quell- und Zieldatenbanken und ETL Transformationen erleichtert. Unser Ansatz berücksichtigt für ein gegebenes ETL System die Vielzahl verknüpfter Attributnamen in den existierenden ETL Workflows. So werden gute und aussagekräftige "Decryptions" gefunden und wir sind in der Lage, Attributnamen, die aus unbekannten Abkürzungen bestehen, zu "decrypten". So wird z.B. für den Attributnamen UNP_PEN_INT als Decryption UNPAID_PENALTY_INTEREST vorgeschlagen. Unser Schema-Decryption-Ansatz wurde für drei ETL-Repositories evaluiert und es zeigte sich, dass unser Ansatz qualitativ hochwertige Decryptions für kryptische Attributnamen vorschlägt.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 60
KW - Extract-Transform-Load (ETL)
KW - Data Warehouse
KW - Datenintegration
KW - Extract-Transform-Load (ETL)
KW - Data Warehouse
KW - Data Integration
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-61257
SN - 978-3-86956-201-8
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - GEN
A1 - Alibabaie, Najmeh
A1 - Ghasemzadeh, Mohammad
A1 - Meinel, Christoph
T1 - A variant of genetic algorithm for non-homogeneous population
T2 - International Conference Applied Mathematics, Computational Science and Systems Engineering 2016
N2 - Selection of initial points, the number of clusters, and finding proper cluster centers are still the main challenges in clustering processes. In this paper, we suggest a genetic-algorithm-based method which searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with similar structure. Elements in a group have the same size, while elements in different groups are of different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes. These genes hold corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance on several random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
Y1 - 2017
U6 - https://doi.org/10.1051/itmconf/20170902001
SN - 2271-2097
VL - 9
PB - EDP Sciences
CY - Les Ulis
ER -
TY - BOOK
A1 - Alnemr, Rehab
A1 - Polyvyanyy, Artem
A1 - AbuJarour, Mohammed
A1 - Appeltauer, Malte
A1 - Hildebrandt, Dieter
A1 - Thomas, Ivonne
A1 - Overdick, Hagen
A1 - Schöbel, Michael
A1 - Uflacker, Matthias
A1 - Kluth, Stephan
A1 - Menzel, Michael
A1 - Schmidt, Alexander
A1 - Hagedorn, Benjamin
A1 - Pascalau, Emilian
A1 - Perscheid, Michael
A1 - Vogel, Thomas
A1 - Hentschel, Uwe
A1 - Feinbube, Frank
A1 - Kowark, Thomas
A1 - Trümper, Jonas
A1 - Vogel, Tobias
A1 - Becker, Basil
ED - Meinel, Christoph
ED - Plattner, Hasso
ED - Döllner, Jürgen Roland Friedrich
ED - Weske, Mathias
ED - Polze, Andreas
ED - Hirschfeld, Robert
ED - Naumann, Felix
ED - Giese, Holger
T1 - Proceedings of the 4th Ph.D. Retreat of the HPI Research School on Service-oriented Systems Engineering
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 31
KW - Hasso-Plattner-Institut
KW - Forschungskolleg
KW - Klausurtagung
KW - Service-oriented Systems Engineering
KW - Hasso Plattner Institute
KW - Research School
KW - Ph.D. Retreat
KW - Service-oriented Systems Engineering
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-40838
SN - 978-3-86956-036-6
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - AlSa'deh, Ahmad
A1 - Meinel, Christoph
T1 - Secure neighbor discovery: review, challenges, perspectives, and recommendations
JF - IEEE security & privacy : building confidence in a networked world
N2 - Secure Neighbor Discovery is designed as a countermeasure to Neighbor Discovery Protocol threats. The authors discuss Secure Neighbor Discovery implementation and deployment challenges and review proposals to optimize it.
Y1 - 2012
SN - 1540-7993
VL - 10
IS - 4
SP - 26
EP - 34
PB - Inst. of Electr. and Electronics Engineers
CY - Los Alamitos
ER -
TY - JOUR
A1 - Andree, Kerstin
A1 - Ihde, Sven
A1 - Weske, Mathias
A1 - Pufahl, Luise
T1 - An exception handling framework for case management
JF - Software and Systems Modeling
N2 - In order to achieve their business goals, organizations heavily rely on the operational excellence of their business processes. In traditional scenarios, business processes are usually well-structured, clearly specifying when and how certain tasks have to be executed. Flexible and knowledge-intensive processes, in which a knowledge worker drives the execution of a process case and determines the exact process path at runtime, are gathering momentum. In the case of an exception, the knowledge worker decides on an appropriate handling. While there is initial work on exception handling in well-structured business processes, exceptions in case management have not been sufficiently researched. This paper proposes an exception handling framework for stage-oriented case management languages, namely the Guard-Stage-Milestone model, the Case Management Model and Notation, and Fragment-based Case Management. The effectiveness of the framework is evaluated with two real-world use cases, showing that it covers all relevant exceptions and proposes handling strategies.
KW - Exception handling
KW - Knowledge-intensive processes
KW - Flexible processes
KW - Case management
Y1 - 2022
U6 - https://doi.org/10.1007/s10270-022-00993-3
SN - 1619-1366
SN - 1619-1374
VL - 21
IS - 3
SP - 939
EP - 962
PB - Springer
CY - Heidelberg
ER -
TY - BOOK
A1 - Appeltauer, Malte
A1 - Hirschfeld, Robert
T1 - The JCop language specification : Version 1.0, April 2012
N2 - Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations redesigned with JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
N2 - Das Verhalten von modernen Software-Anwendungen benötigt häufig Informationen über den Kontext ihrer Ausführung, z.B. die geografische Position, die Tageszeit oder die aktuelle Netzwerkbandbreite. Dennoch bieten heutige Programmiersprachen nur wenig Unterstützung für die Repräsentation kontextspezifischen Verhaltens. Kontextorientiertes Programmieren ist ein Ansatz, der die explizite Modularisierung und Laufzeitaktivierung von kontextspezifischem Verhalten auf der Ebene von Programmiersprachkonstrukten ermöglicht. Die bisherigen Umsetzungen von kontextorientiertem Programmieren schränken jedoch die Kontrolle der Laufzeitaktivierungen solchen kontextspezifischen Verhaltens ein. Daraus folgt eine Einschränkung der Anwendungsbereiche für kontextorientiertes Programmieren, unter anderem für solche Domänen, in denen Programme sehr häufig kontextabhängiges Verhalten bereitstellen, z.B. ereignisbasierte, mobile und dienstorientierte Systeme. Die Programmiersprache JCop erweitert Java um Sprachkonstrukte für kontextorientiertes Programmieren und bietet zusätzlich eine domänenspezifische Aspektsprache an, mit deren Hilfe Laufzeitadaptionen deklarativ spezifiziert werden können. Die Kernkonzepte von JCop wurden bereits in mehreren Publikationen vorgestellt; dieser Bericht enthält nun eine umfassende Sprachspezifikation von JCop.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 59
KW - Programming Languages
KW - Context-oriented Programming
KW - Aspect-oriented Programming
KW - Java
KW - JCop
KW - runtime adaptations
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-60208
SN - 978-3-86956-193-6
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - THES
A1 - Awad, Ahmed Mahmoud Hany Aly
T1 - A compliance management framework for business process models
T1 - Ein Compliance-Management-Framework für Geschäftsprozessmodelle
N2 - Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are issued or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support process design-time compliance checking. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. Also, the thesis contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user, in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing automated remedy of the violation.
N2 - Firmen entwickeln Prozessmodelle, um ihre Geschäftstätigkeit explizit zu beschreiben. Geschäftsprozesse müssen verschiedene Arten von Compliance-Anforderungen einhalten. Solche Compliance-Anforderungen entstammen einer Vielzahl von Quellen, z.B. Verordnungen wie dem Sarbanes-Oxley Act von 2002, internen Richtlinien und Best Practices. Die Nichteinhaltung von Compliance-Anforderungen kann zu gesetzlichen Strafen oder dem Verlust von Wettbewerbsvorteilen und somit dem Verlust von Marktanteilen führen. Im Gegensatz zum klassischen, domänenunabhängigen Begriff der Korrektheit von Geschäftsprozessen sind Compliance-Anforderungen domänenspezifisch und ändern sich im Laufe der Zeit. Neue Anforderungen resultieren aus neuen Gesetzen und der Einführung neuer Unternehmensrichtlinien. Aufgrund der Vielzahl der Quellen für Compliance-Anforderungen können sie unterschiedliche Ziele verfolgen und somit widersprüchliche Aussagen treffen. Schließlich betreffen Compliance-Anforderungen verschiedene Aspekte von Geschäftsprozessen, wie Kontrollfluss- und Datenabhängigkeiten. Aufgrund dessen können Compliance-Prüfungen nicht direkt hartkodiert werden. Vielmehr ist ein Prozess der wiederholten Modellierung von Compliance-Regeln und ihrer anschließenden automatischen Prüfung gegen die Geschäftsprozesse nötig. Diese Dissertation stellt einen formalen Ansatz zur Überprüfung der Einhaltung von Compliance-Regeln während der Spezifikation von Geschäftsprozessen vor. Mit visuellen Mustern ist es möglich, Compliance-Regeln hinsichtlich Kontrollfluss- und Datenabhängigkeiten sowie bedingte Regeln zu spezifizieren. Jedes Muster wird in eine Formel der temporalen Logik abgebildet. Die Dissertation behandelt das Problem der Konsistenzprüfung zwischen verschiedenen Compliance-Anforderungen, wie sie sich aus unterschiedlichen Quellen ergeben können. Ebenfalls zeigt diese Dissertation, wie Compliance-Regeln gegen die Geschäftsprozesse automatisch mittels Model Checking geprüft werden. Es wird aufgezeigt, dass zusätzliche Domänen-Kenntnisse notwendig sind, um richtige Entscheidungen zu treffen. Der vorgestellte Ansatz ermöglicht nützliches Feedback für Modellierer im Fall eines Compliance-Verstoßes. Das Feedback wird in Form von Teilen des Prozessmodells gegeben, deren Ausführung die Verletzung verursacht. In einigen Fällen ist der vorgestellte Ansatz in der Lage, den Compliance-Verstoß automatisch zu beheben.
KW - Geschäftsprozessmodelle
KW - Compliance
KW - Temporallogik
KW - Verletzung Erklärung
KW - Verletzung Auflösung
KW - Business Process Models
KW - Compliance
KW - Temporal Logic
KW - Violation Explanation
KW - Violation Resolution
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-49222
ER -
TY - JOUR
A1 - Awad, Ahmed Mahmoud Hany Aly
A1 - Weidlich, Matthias
A1 - Weske, Mathias
T1 - Visually specifying compliance rules and explaining their violations for business processes
JF - Journal of visual languages and computing
N2 - A business process is a set of steps designed to be executed in a certain order to achieve a business value. Such processes are often driven by and documented using process models. Nowadays, process models are also applied to drive process execution. Thus, correctness of business process models is a must. Much work has been devoted to checking general, domain-independent correctness criteria, such as soundness. However, business processes must also adhere to and show compliance with various regulations and constraints, the so-called compliance requirements. These are domain-dependent requirements.
In many situations, verifying compliance on a model level is of great value, since violations can be resolved at an early stage prior to execution. However, this calls for using formal verification techniques, e.g., model checking, that are too complex for business experts to apply. In this paper, we utilize a visual language, BPMN-Q, to express compliance requirements visually in a way similar to that used by business experts to build process models. Still, using a pattern-based approach, each BPMN-Q graph has a formal temporal logic expression in computation tree logic (CTL). Moreover, the user is able to express constraints, i.e., compliance rules, regarding control flow and data flow aspects. In order to provide valuable feedback to a user in case of violations, we depend on temporal logic querying approaches as well as BPMN-Q to visually highlight paths in a process model whose execution causes violations.
KW - Business process modeling
KW - Compliance checking
KW - Visual modeling
KW - Anti-patterns
Y1 - 2011
U6 - https://doi.org/10.1016/j.jvlc.2010.11.002
SN - 1045-926X
VL - 22
IS - 1
SP - 30
EP - 55
PB - Elsevier
CY - London
ER -
TY - JOUR
A1 - Azodi, Amir
A1 - Cheng, Feng
A1 - Meinel, Christoph
T1 - Event Driven Network Topology Discovery and Inventory Listing Using REAMS
JF - Wireless personal communications : an international journal
N2 - Network Topology Discovery and Inventory Listing are two of the primary features of modern network monitoring systems (NMS). Current NMSs rely heavily on active scanning techniques for discovering and mapping network information. Although this approach works, it introduces some major drawbacks, such as the performance impact it can exact, especially in larger network environments. As a consequence, scans are often run less frequently, which can result in stale information being presented and used by the network monitoring system. Alternatively, some NMSs rely on their agents being deployed on the hosts they monitor. In this article, we present a new approach to Network Topology Discovery and Network Inventory Listing using only passive monitoring and scanning techniques. The proposed techniques rely solely on the event logs produced by the hosts and network devices present within a network. Finally, we discuss some of the advantages and disadvantages of our approach.
KW - Network topology
KW - Inventory systems
KW - Network monitoring
KW - Network graph
KW - Service detection
KW - Event processing
KW - Event normalization
Y1 - 2015
U6 - https://doi.org/10.1007/s11277-015-3061-3
SN - 0929-6212
SN - 1572-834X
VL - 94
SP - 415
EP - 430
PB - Springer
CY - New York
ER -
TY - THES
A1 - Baier, Thomas
T1 - Matching events and activities
T1 - Zuordnung von Ereignissen zu Aktivitäten
BT - preprocessing event logs for process analysis
BT - Vorverarbeitung von Ereignislogs für die Prozessanalyse
N2 - Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs as well as domain knowledge captured in existing process documentation are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
N2 - Heutzutage werden Geschäftsprozesse verstärkt durch IT Services unterstützt, welche große Mengen an Ereignisdaten während der Prozessausführung generieren. Mit dem Ziel eines besseren Prozessverständnisses und einer möglichen Verbesserung können diese Daten mit Hilfe von Process-Mining-Techniken analysiert werden. Prozessmodelle können dabei automatisiert erstellt werden und die Prozessausführung kann auf ihre Übereinstimmung hin geprüft werden. Weiterhin können existierende Modelle durch wertvolle Informationen erweitert und verbessert werden, beispielsweise für eine Performanceanalyse. Während der Reifegrad der Algorithmen immer weiter ansteigt, stehen Process-Mining-Projekte immer noch vor dem Problem unterschiedlicher Abstraktionsebenen von Ereignisdaten und Prozessmodellaktivitäten. Das Mapping der aufgezeichneten Ereignisse zu den Aktivitäten eines gegebenen Prozessmodells ist ein essentieller Schritt für die Übereinstimmungsanalyse, Prozessmodellerweiterungen sowie auch für das Verständnis der Modelle aus einer automatisierten Prozesserkennung. Bereits existierende Ansätze abstrahieren Ereignisse auf automatisierte Art und Weise, welche die notwendigen Domänenkenntnisse für ein Mapping zu bestehenden Geschäftsprozessaktivitäten nicht berücksichtigt. Diese Techniken können hilfreich sein, um die Komplexität eines automatisiert erstellten Prozessmodells schnell zu verringern, sie eignen sich jedoch nicht für Übereinstimmungsprüfungen oder Modellerweiterungen. Zudem können solch automatisierte Verfahren zu irreführenden Ergebnissen führen, da sie nicht die bekannte Geschäftsterminologie verwenden.
In dieser Dissertation entwickeln wir Ansätze, die ein Ereignislog auf die benötigte Abstraktionsebene bringen, welche typischerweise durch ein Prozessmodell gegeben ist. Daher ist das Ziel dieser Dissertation, die Ereignisse eines Ereignislogs den Aktivitäten eines Prozessmodells zuzuordnen. Um dieses Ziel zu erreichen, werden Verhaltens- und Sprachaspekte von Ereignislogs und Prozessmodellen sowie weitergehendes Domänenwissen einbezogen, um teilautomatisierte Zuordnungsansätze zu entwickeln. Die entwickelten Ansätze ermöglichen eine Vorverarbeitung von Ereignislogs, wodurch der notwendige manuelle Aufwand für den Einsatz von Process-Mining-Techniken verringert wird.
Die vorgestellten Ansätze wurden mit Hilfe von Industrie-Case-Studies und simulierten Ereignislogs aus einer großen Prozessmodellkollektion evaluiert. Die Ergebnisse demonstrieren die Effektivität der Ansätze und ihre Robustheit gegenüber nicht-konformem Prozessverhalten.
KW - process mining
KW - conformance analysis
KW - event abstraction
KW - Process Mining
KW - Übereinstimmungsanalyse
KW - Ereignisabstraktion
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-84548
ER -
TY - JOUR
A1 - Bano, Dorina
A1 - Michael, Judith
A1 - Rumpe, Bernhard
A1 - Varga, Simon
A1 - Weske, Mathias
T1 - Process-aware digital twin cockpit synthesis from event logs
JF - Journal of computer languages
N2 - The engineering of digital twins and their user interaction parts with explicated processes, namely process-aware digital twin cockpits (PADTCs), is challenging due to the complexity of the systems and the need for information from different disciplines within the engineering process. Therefore, it is interesting to investigate how to facilitate their engineering by using already existing data, namely event logs, and reducing the number of manual steps for their engineering. Current research lacks systematic, automated approaches to derive process-aware digital twin cockpits, even though some helpful techniques already exist in the areas of process mining and software engineering. Within this paper, we present a low-code development approach that reduces the amount of hand-written code needed and uses process mining techniques to derive PADTCs. We describe what models could be derived from event log data, which generative steps are needed for the engineering of PADTCs, and how process mining could be incorporated into the resulting application. This process is evaluated using the MIMIC III dataset for the creation of a PADTC prototype for an automated hospital transportation system. This approach can be used for early prototyping of PADTCs, as it needs no hand-written code in the first place, but it still allows for the iterative evolution of the application. This empowers domain experts to create their PADTC prototypes.
KW - process-aware digital twin cockpit
KW - low-code development approaches
KW - sensor data
KW - event log
KW - process mining
KW - process-awareness
Y1 - 2022
U6 - https://doi.org/10.1016/j.cola.2022.101121
SN - 2590-1184
SN - 2665-9182
VL - 70
PB - Elsevier
CY - Amsterdam [u.a.]
ER -
TY - JOUR
A1 - Barkowsky, Matthias
A1 - Giese, Holger
T1 - Hybrid search plan generation for generalized graph pattern matching
JF - Journal of logical and algebraic methods in programming
N2 - In recent years, the increased interest in application areas such as social networks has resulted in a rising popularity of graph-based approaches for storing and processing large amounts of interconnected data. To extract useful information from the growing network structures, efficient querying techniques are required.
In this paper, we propose an approach for graph pattern matching that allows a uniform handling of arbitrary constraints over the query vertices. Our technique builds on a previously introduced matching algorithm, which takes concrete host graph information into account to dynamically adapt the employed search plan during query execution. The dynamic algorithm is combined with an existing static approach for search plan generation, resulting in a hybrid technique which we further extend by a more sophisticated handling of filtering effects caused by constraint checks. We evaluate the presented concepts empirically based on an implementation for our graph pattern matching tool, the Story Diagram Interpreter, with queries and data provided by the LDBC Social Network Benchmark. Our results suggest that the hybrid technique may improve search efficiency in several cases, and rarely reduces efficiency.
KW - graph pattern matching
KW - search plan generation
Y1 - 2020
U6 - https://doi.org/10.1016/j.jlamp.2020.100563
SN - 2352-2208
VL - 114
PB - Elsevier
CY - New York
ER -
TY - GEN
A1 - Bartz, Christian
A1 - Yang, Haojin
A1 - Bethge, Joseph
A1 - Meinel, Christoph
T1 - LoANs
BT - Weakly Supervised Object Detection with Localizer Assessor Networks
T2 - Computer Vision – ACCV 2018 Workshops
N2 - Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. The reason for this success is mainly grounded in the availability of large-scale, fully annotated datasets, but the creation of such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object; the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we are using no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset, based on a few videos (e.g. downloaded from YouTube) and artificially generated data.
Y1 - 2019
SN - 978-3-030-21074-8
SN - 978-3-030-21073-1
U6 - https://doi.org/10.1007/978-3-030-21074-8_29
SN - 0302-9743
SN - 1611-3349
VL - 11367
SP - 341
EP - 356
PB - Springer
CY - Cham
ER -
TY - THES
A1 - Bauckmann, Jana
T1 - Dependency discovery for data integration
T1 - Erkennen von Datenabhängigkeiten zur Datenintegration
N2 - Data integration aims to combine data from different sources and to provide users with a unified view on these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery to provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all values of each attribute pair. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs that allow a certain number of values of A not to be included in B, and (iii) prefix and suffix INDs that represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge of this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
N2 - Datenintegration hat das Ziel, Daten aus unterschiedlichen Quellen zu kombinieren und Nutzern eine einheitliche Sicht auf diese Daten zur Verfügung zu stellen. Diese Aufgabe ist gleichermaßen anspruchsvoll wie wertvoll. In dieser Dissertation werden Algorithmen zum Erkennen von Datenabhängigkeiten vorgestellt, die notwendige Informationen zur Datenintegration liefern. Der Schwerpunkt dieser Arbeit liegt auf Inklusionsabhängigkeiten (inclusion dependency, IND) im Allgemeinen und auf der speziellen Form der Bedingten Inklusionsabhängigkeiten (conditional inclusion dependency, CIND): (i) INDs ermöglichen das Finden von Strukturen in einem gegebenen Schema. (ii) INDs und CINDs unterstützen das Finden von Referenzen zwischen Datenquellen. Eine IND „A in B“ besagt, dass alle Werte des Attributs A in der Menge der Werte des Attributs B enthalten sind. Diese Arbeit liefert einen Algorithmus, der alle INDs in einer relationalen Datenquelle erkennt. Die Herausforderung dieser Aufgabe liegt in der Komplexität, alle Attributpaare zu testen und dabei alle Werte dieser Attributpaare zu vergleichen. Die Komplexität bestehender Ansätze ist abhängig von der Anzahl der Attributpaare, während der hier vorgestellte Ansatz lediglich von der Anzahl der Attribute abhängt. Damit ermöglicht der vorgestellte Algorithmus, unbekannte Datenquellen mit großen Schemata zu untersuchen. Darüber hinaus wird der Algorithmus erweitert, um drei spezielle Formen von INDs zu finden, und ein Ansatz vorgestellt, der Fremdschlüssel aus den erkannten INDs filtert. Bedingte Inklusionsabhängigkeiten (CINDs) sind Inklusionsabhängigkeiten, deren Geltungsbereich durch Bedingungen über bestimmten Attributen beschränkt ist. Nur der zutreffende Teil der Instanz muss der Inklusionsabhängigkeit genügen. Die Definition für CINDs wird in der vorliegenden Arbeit durch die Unterscheidung von überdeckenden und vollständigen Bedingungen generalisiert. Ferner werden Qualitätsmaße für Bedingungen definiert. Es werden effiziente Algorithmen vorgestellt, die überdeckende und vollständige Bedingungen mit gegebenen Qualitätsmaßen auffinden. Dabei erfolgt die Auswahl der verwendeten Attribute und Attributkombinationen sowie der Attributwerte automatisch. Bestehende Ansätze beruhen auf einer Vorauswahl von Attributen für die Bedingungen oder erkennen nur Bedingungen mit Schwellwerten von 100% für die Qualitätsmaße. Die Ansätze der vorliegenden Arbeit wurden durch zwei Anwendungsbereiche motiviert: Datenintegration in den Life Sciences und das Erkennen von Links in Linked Open Data. Die Effizienz und der Nutzen der vorgestellten Ansätze werden anhand von Anwendungsfällen in diesen Bereichen aufgezeigt.
KW - Datenabhängigkeiten-Entdeckung
KW - Datenintegration
KW - Schema-Entdeckung
KW - Link-Entdeckung
KW - Inklusionsabhängigkeit
KW - dependency discovery
KW - data integration
KW - schema discovery
KW - link discovery
KW - inclusion dependency
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-66645
ER -
TY - BOOK
A1 - Bauckmann, Jana
A1 - Abedjan, Ziawasch
A1 - Leser, Ulf
A1 - Müller, Heiko
A1 - Naumann, Felix
T1 - Covering or complete? : Discovering conditional inclusion dependencies
N2 - Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
N2 - Datenabhängigkeiten (wie zum Beispiel Integritätsbedingungen) werden verwendet, um die Qualität eines Datenbankschemas zu erhöhen, um Anfragen zu optimieren und um Konsistenz in einer Datenbank sicherzustellen. In den letzten Jahren wurden bedingte Abhängigkeiten (conditional dependencies) vorgestellt, die die Qualität von Daten analysieren und verbessern sollen. Eine bedingte Abhängigkeit ist eine Abhängigkeit mit begrenztem Gültigkeitsbereich, der über Bedingungen auf einem oder mehreren Attributen definiert wird. In diesem Bericht betrachten wir bedingte Inklusionsabhängigkeiten (conditional inclusion dependencies; CINDs). Wir generalisieren die Definition von CINDs anhand der Unterscheidung von überdeckenden (covering) und vollständigen (completeness) Bedingungen. Wir stellen einen Anwendungsfall für solche CINDs vor, der den Nutzen von CINDs bei der Lösung komplexer Datenqualitätsprobleme aufzeigt. Darüber hinaus definieren wir Qualitätsmaße für Bedingungen basierend auf Sensitivität und Genauigkeit. Wir stellen effiziente Algorithmen vor, die überdeckende und vollständige Bedingungen innerhalb vorgegebener Schwellwerte finden. Unsere Algorithmen wählen nicht nur die Werte der Bedingungen, sondern finden auch die Bedingungsattribute automatisch. Abschließend zeigen wir, dass unser Ansatz effizient sinnvolle und hilfreiche Ergebnisse für den vorgestellten Anwendungsfall liefert.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 62
KW - Datenabhängigkeiten
KW - Bedingte Inklusionsabhängigkeiten
KW - Erkennen von Meta-Daten
KW - Linked Open Data
KW - Link-Entdeckung
KW - Assoziationsregeln
KW - Data Dependency
KW - Conditional Inclusion Dependency
KW - Metadata Discovery
KW - Linked Open Data
KW - Link Discovery
KW - Association Rule Mining
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-62089
SN - 978-3-86956-212-4
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Bauckmann, Jana
A1 - Leser, Ulf
A1 - Naumann, Felix
T1 - Efficient and exact computation of inclusion dependencies for data integration
N2 - Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of a DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain - our driving motivation.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 34
KW - Metadatenentdeckung
KW - Metadatenqualität
KW - Schemaentdeckung
KW - Datenanalyse
KW - Datenintegration
KW - metadata discovery
KW - metadata quality
KW - schema discovery
KW - data profiling
KW - data integration
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-41396
SN - 978-3-86956-048-9
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Baudisch, Patrick Markus
A1 - Silber, Arthur
A1 - Kommana, Yannis
A1 - Gruner, Milan
A1 - Wall, Ludwig
A1 - Reuss, Kevin
A1 - Heilman, Lukas
A1 - Kovacs, Robert
A1 - Rechlitz, Daniel
A1 - Roumen, Thijs
T1 - Kyub
BT - A 3D Editor for Modeling Sturdy Laser-Cut Objects
JF - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
N2 - We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4 mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint "boxel", a structure we found to be capable of withstanding over 500 kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub's various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. In our user study, non-engineers rated the learnability of kyub 6.1/7.
KW - Personal fabrication
KW - laser cutting
KW - interactive editing
Y1 - 2019
SN - 978-1-4503-5970-2
U6 - https://doi.org/10.1145/3290605.3300796
SP - 1
EP - 12
PB - Association for Computing Machinery
CY - New York
ER -
TY - CHAP
A1 - Bauer, Matthias
A1 - Malchow, Martin
A1 - Meinel, Christoph
T1 - Full Lecture Recording Watching Behavior, or Why Students Watch 90-Min Lectures in 5 Min
T2 - IMCL 2018: Mobile Technologies and Applications for the Internet of Things
N2 - Many universities record the lectures being held in their facilities to preserve knowledge and to make it available to their students and, at least for some universities and classes, to the broad public. The approach requiring the least effort is to record the whole lecture, which in our case is usually 90 minutes long. This saves the labor and time of cutting and rearranging lecture scenes to provide short learning videos as known from Massive Open Online Courses (MOOCs), etc. Many lecturers fear that recording their lectures and providing them via an online platform might lead to less participation in the actual lecture. Also, many teachers fear that the lecture recordings are not used with the same focus and dedication as lectures in a lecture hall. In this work, we show that, in our experience, full lectures have an average watching duration of just a few minutes, explain the reasons for this, and argue why, in most cases, teachers do not have to worry about it.
KW - e-Learning
KW - Lecture video recording
KW - e-lecture
KW - Attention span
KW - Learning behavior
Y1 - 2019
SN - 978-3-030-11434-3
SN - 978-3-030-11433-6
U6 - https://doi.org/10.1007/978-3-030-11434-3_38
SN - 2194-5357
SN - 2194-5365
VL - 909
SP - 347
EP - 358
PB - Springer
CY - Cham
ER -
TY - JOUR
A1 - Bauman, Spenser
A1 - Bolz, Carl Friedrich
A1 - Hirschfeld, Robert
A1 - Kirilichev, Vasily
A1 - Pape, Tobias
A1 - Siek, Jeremy G.
A1 - Tobin-Hochstadt, Sam
T1 - Pycket: A Tracing JIT for a Functional Language
JF - ACM SIGPLAN notices
N2 - We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites.
The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop; we discuss two strategies for identifying loops and measure their impact.
KW - Experimentation
KW - Languages
KW - Measurement
KW - Performance
KW - JIT compilers
KW - contracts
KW - tracing
KW - functional languages
KW - Racket
Y1 - 2015
U6 - https://doi.org/10.1145/2784731.2784740
SN - 0362-1340
SN - 1558-1160
VL - 50
IS - 9
SP - 22
EP - 34
PB - Association for Computing Machinery
CY - New York
ER -
TY - THES
A1 - Becker, Basil
T1 - Architectural modelling and verification of open service-oriented systems of systems
T1 - Architekturmodellierung und Verifikation von offenen und service-orientierten Systems of Systems
N2 - Systems of Systems (SoS) have received a lot of attention recently. In this thesis we will focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we will understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as at every point in time systems can join and leave the ensemble. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we will consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and basically relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We will also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level --- i.e., the adding and removing of components and services --- is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
N2 - Systems of Systems (SoS) sind ein seit längerem bekanntes Konzept, das jedoch in letzter Zeit vermehrt Aufmerksamkeit erhielt. Das Hauptaugenmerk dieser Arbeit wird auf SoS liegen, die mit Hilfe von Techniken aus Service-Orientierten Architekturen erstellt werden. Somit vereinen die hier betrachteten SoS die Vorteile und Herausforderungen beider Paradigmen. SoS können definiert werden als Zusammenschlüsse einzelner, autonomer Systeme, die zu einem größeren System integriert werden. In diesem Zusammenhang interessant ist, dass die ehemals isolierten Systeme nach wie vor isoliert voneinander weiterentwickelt und gewartet werden. Des Weiteren kommt der Strukturdynamik innerhalb des SoS eine beachtliche Bedeutung zu, da jederzeit Systeme dem SoS beitreten und es verlassen können. Zusammen mit der Tatsache, dass die Kooperationen zwischen den konstituierenden Systemen nicht immer beobachtbar sind, führt dies dazu, dass wir diese Systeme als offene Systeme bezeichnen. Dabei besitzt das System natürlich jederzeit eine klar definierte Grenze, diese ist aber nur durch ein Anhalten des Systems zu bestimmen, was aus praktischer Perspektive unmöglich ist. Häufig stellen SoS eine Kombination aus Softwaresystemen und physikalischen Systemen dar, mit der Folge, dass ein Fehler in der Software schnell eine immense physikalische Wirkung entwickeln kann. Von daher fallen SoS leicht in die Klasse der sicherheitskritischen Systeme. In dieser Arbeit werden wir einen Modellierungsansatz vorstellen, der die Sprache SoaML der OMG erweitert. Die grundlegenden Konzepte dieses Ansatzes sind die Modellierung mit Kollaborationen und Rollen als Abstraktionsebene über Komponenten. Der vorgestellte Ansatz erlaubt es uns, SoS auf einer architekturellen Ebene zu betrachten. Die formale Semantik unseres Modellierungsansatzes ist durch hybride Graphtransformationssysteme gegeben. Abgestimmt auf die Modellierung werden wir ebenfalls ein Verfahren zur Verifikation von SoS vorstellen, welches trotz der inhärenten Komplexität von SoS in der Lage ist, diese zu verifizieren. Die Modellierung und Verifikation von Evolution wird von unserem Ansatz direkt unterstützt.
KW - Modellierung
KW - Verifikation
KW - Evolution
KW - Systems of Systems
KW - Service-orientierte Systeme
KW - modelling
KW - verification
KW - evolution
KW - systems of systems
KW - service-oriented systems
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-70158
ER -
TY - BOOK
A1 - Becker, Basil
A1 - Giese, Holger
T1 - Cyber-physical systems with dynamic structure : towards modeling and verification of inductive invariants
N2 - Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques for inductive invariants, known for timed graph transformation systems, can be extended to also cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
N2 - Cyber-physical Systeme erzielen ihr ausgefeiltes Systemverhalten durch die enge Verschränkung von physikalischer Kopplung, wie sie in Systemen der klassischen Ingenieurs-Disziplinen vorkommt, und der Kopplung durch Informationstechnologie. Eine besondere Herausforderung stellen in diesem Zusammenhang Systeme dar, die durch die spontane Vernetzung einzelner Cyber-Physical-Systeme entsprechend der lokalen, topologischen Gegebenheiten, verfügbarer Netzwerkfähigkeiten und der Anforderungen und Beschränkungen der Teilsysteme, die durch den informationsverarbeitenden Teil vorgegeben sind, entstehen. In diesem Bericht stellen wir einen Formalismus vor, der die Modellierung der eingangs skizzierten Systeme erlaubt. Ein auf UML aufbauender Graph-Transformations-Ansatz wird genutzt, um die spontane Bildung eng kooperierender Teilsysteme beliebiger Größe zu spezifizieren. Differentialgleichungen beschreiben das kombinierte Verhalten auf physikalischer Ebene. In Kombination ergeben diese beiden Formalismen hybride Graph-Transformations-Systeme, in denen die Graph-Transformationen diskrete Schritte und die Differentialgleichungen das kontinuierliche, physikalische Verhalten des Systems beschreiben. Zusätzlich präsentieren wir die Erweiterung einer automatischen Analysetechnik zur Verifikation induktiver Invarianten, die bereits für zeitbehaftete Systeme bekannt ist, auf den ausdrucksstärkeren Fall der hybriden Modelle.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 64
KW - Cyber-Physical-Systeme
KW - Verifikation
KW - Modellierung
KW - hybride Graph-Transformations-Systeme
KW - Cyber-physical-systems
KW - verification
KW - modeling
KW - hybrid graph-transformation-systems
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-62437
SN - 978-3-86956-217-9
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Becker, Basil
A1 - Giese, Holger
A1 - Neumann, Stefan
T1 - Correct dynamic service-oriented architectures : modeling and compositional verification with dynamic collaborations
N2 - Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, the roles may join and leave the collaboration at runtime, which can result in complex structural dynamics that make it very hard to ensure correct and safe operation. In this paper we present our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results for the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme, which enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
N2 - Bei der Modellierung Service-orientierter Systeme werden Kollaborationen verwendet, um die Koordination mehrerer Rollen durch Service-Verträge zu beschreiben. Dynamische Kollaborationen erlauben ein Hinzufügen und Entfernen von Rollen zur Kollaboration zur Laufzeit, wodurch eine komplexe strukturelle Dynamik entstehen kann. Die automatische Analyse service-orientierter Systeme wird durch diese erheblich erschwert. In dieser Arbeit stellen wir einen Ansatz zur Modellierung und Verifikation solcher dynamischer Kollaborationen vor. Eine spezielle Untermenge der UML ermöglicht die Modellierung, wobei Klassendiagramme, Verhaltensregeln für die strukturelle Dynamik und UML Zustandsdiagramme für das Verhalten der Rollen verwendet werden. Um die Verifikation der so modellierten service-orientierten Systeme zu ermöglichen, erweiterten wir unsere früheren Ergebnisse zur Verifikation von Systemen mit struktureller Dynamik [7,8] und entwickelten einen kompositionalen Verifikationsansatz. Der entwickelte Verifikationsansatz erlaubt es Ergebnisse wiederzuverwenden. Die entwickelten Techniken werden anhand autonomer Fahrzeuge, die dynamische Kollaborationen über ad-hoc Netzwerke zur Koordination und Optimierung ihres gemeinsamen Verhaltens nutzen, exemplarisch vorgestellt.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 29
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-30473
SN - 978-3-940793-91-1
ER -
TY - THES
A1 - Berg, Gregor
T1 - Virtual prototypes for the model-based elicitation and validation of collaborative scenarios
T1 - Virtuelle Prototypen für die Modellbasierte Erhebung und Validierung kollaborativer Szenarien
N2 - Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
N2 - Anforderungsingenieure erheben, dokumentieren und validieren wie Bedarfsträger in einzelnen und gemeinsamen Aktivitäten die Ziele ihrer kollaborativen Szenarios erreichen. Auf Grundlage von Angaben darüber, wer warum mit wem zusammen was erledigt, kann anschließend ein Softwaresystem spezifiziert und umgesetzt werden, welches die Bedarfsträger bei der Durchführung ihrer Abläufe unterstützt. Um Anforderungen verschiedener (Gruppen von) Bedarfsträger zu erfassen und zu strukturieren, werden szenariobasierte Ansätze genutzt und erforscht. Die Erhebung und Validierung von Anforderungen, die kollaborative Szenarios abdecken, ist dennoch kompliziert, da derartige Informationen hochgradig verknüpft, fragmentiert und über mehrere Bedarfsträger verteilt sind, wodurch sie nur in Gruppensitzungen effizient erhoben und validiert werden können. In Zeiten global verteilter Firmen ist die Planung und Durchführung solcher Workshops mit Gruppen von Bedarfsträgern nur selten praktikabel. Mit einzelnen Bedarfsträgern zu sprechen ist hingegen oft realisierbar, führt aber zu fragmentierten, unvollständigen Szenariobeschreibungen. Durch eine Vielzahl von Einzelgesprächen mit wechselnden Bedarfsträgern kann diese Fragmentierung aufgelöst werden – dies ist aber eine fehleranfällige und zeitaufwändige Aufgabe. Zwar bieten formale Modellierungsmethoden z.B. automatische Konsistenzchecks für Szenarios, doch führen derartige Methoden zu Mehraufwand in allen Gesprächen mit Bedarfsträgern, da diesen die verwendeten formalen Notationen jedes Mal erläutert werden müssen. Handfeste Prototypen, wie sie in anderen Disziplinen eingesetzt werden, ermöglichen es Designern, ihre Konzepte und erhobenen Anforderungen ohne viel Aufwand mit Bedarfsträgern zu validieren und zu iterieren. In dieser Dissertation wird ein modellbasierter Generierungsansatz vorgeschlagen, der kollaborative Szenarios prototypisch auf Grundlage von formalen Verhaltensmodellen für die beteiligten Bedarfsträger darstellt. Durch die Simulation dieses Verhaltens und dessen Animation innerhalb einer webbasierten, domänenspezifischen Visualisierung, können Bedarfsträger diese Modelle erleben und die bisher erfassten Szenarios validieren. Eine derartige interaktive Szenariosimulation wird als modellbasierter virtueller Prototyp bezeichnet. Basierend auf den Interaktionen zwischen Bedarfsträgern und einem virtuellen Prototypen ihrer Szenarios können zudem formale Verhaltensspezifikationen automatisch abgeleitet werden, die wiederum die fragmentierten kollaborativen Szenarios vervollständigen. Dies ermöglicht es den Anforderungsingenieuren, die kollaborativen Szenarios in individuellen Sitzungen mit einzelnen Bedarfsträgern zu erheben und zu validieren – entkoppelt voneinander, da Bedarfsträger webbasiert teilnehmen können und dabei nicht darauf angewiesen sind, dass andere Bedarfsträger ebenfalls in der gleichen Sitzung teilnehmen. Diese Dissertation diskutiert und evaluiert die Machbarkeit, Verständlichkeit sowie die Änderbarkeit der modellbasierten virtuellen Prototypen. Auf die gleiche Art wie physikalische Prototypen wahrgenommen werden, erlaubt es der vorgestellte Ansatz, Verhaltensmodelle für Bedarfsträger erlebbar zu machen und so die Vorteile von Gruppensitzungen mit denen entkoppelter Sitzungen zu verbinden.
KW - requirements engineering
KW - behavioral specification
KW - interactive simulation
KW - model-based prototyping
KW - rapid prototyping
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-69729
ER -
TY - THES
A1 - Beyhl, Thomas
T1 - A framework for incremental view graph maintenance
T1 - Ein Framework für die inkrementelle Wartung von Sichten auf Graphen
N2 - Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, this graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, graph pattern matching based on subgraph isomorphism is NP-complete. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived whenever that data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. However, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new matches. Other kinds of discrimination networks exist that can perform better in time and space than Rete networks, but they are currently not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests, enabling users to steer the trade-off between memory consumption and execution time of the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks, and a performance evaluation, which shows that a) the incremental maintenance algorithm scales as the graph data becomes large and b) generalized discrimination network structures can outperform Rete network structures in both time and space for incremental graph pattern matching.
N2 - Heutzutage werden Graphdatenmodelle verwendet, um Beziehungen zwischen Entitäten zu speichern und diese Beziehungen später abzufragen. Jede Entität im Graphdatenmodell speichert lokal die Beziehungen zu anderen verknüpften Entitäten. Benutzer stellen Suchanfragen, um diese Entitäten und Beziehungen abzufragen und zu modifizieren. Dafür machen Suchanfragen Gebrauch von Graphmustern, um alle Teilgraphen in den Graphdaten zu finden, die über bestimmte Graphstrukturen verfügen. Diese Teilgraphen werden Graphmusterübereinstimmungen (Matches) genannt. Allerdings ist diese Suche nach Matches für Teilgraphisomorphie NP-vollständig. Daher können Suchanfragen einer langen Antwortzeit unterliegen, wenn die Anzahl von Entitäten und Beziehungen in den Graphdaten oder -mustern ansteigt.
Eine Möglichkeit, die Antwortzeiten von Suchanfragen zu verbessern, ist, Matches für komplexe Suchanfragen in sogenannten Sichten über die Graphdaten für spätere Suchanfragen bereitzuhalten. Allerdings müssen diese Sichten mittels einer inkrementellen Suche nach Matches gewartet werden, um sie konsistent zu den sich ändernden Graphdaten zu halten. Diese Wartung ergänzt Teilgraphen, die Graphmuster erfüllen, in den Sichten und entfernt Teilgraphen, die Graphmuster nicht mehr erfüllen, aus den Sichten.
Aktuelle Ansätze für die inkrementelle Suche nach Matches verwenden Rete Netzwerke. Rete Netzwerke sind sogenannte Discrimination Networks, die alle Matches für bestimmte Suchanfragen aufzählen und warten, indem sie ein Netzwerk aus einzelnen Teilgraphmustern anwenden, die zusammen eine vollständige Suchanfrage ergeben. Das Netzwerk speichert für jedes Teilgraphmuster, welche Teilgraphen das Teilgraphmuster erfüllen. Daher haben Rete Netzwerke einen hohen Speicherverbrauch, da sie alle Zwischenergebnisse speichern müssen. Jedoch sind es gerade diese gespeicherten Zwischenergebnisse, die es dem Rete Netzwerk ermöglichen, die gespeicherten Matches effizient zu warten, da die Zwischenergebnisse für das Auffinden neuer Matches ausgenutzt werden. Allerdings existieren andere Arten von Discrimination Networks, die hinsichtlich Speicher- und Zeitverbrauch effizienter sind, aber derzeit nicht für die inkrementelle Suche nach Matches verwendet werden.
Diese Doktorarbeit wendet eine verallgemeinerte Art von Discrimination Networks an. Diese verallgemeinerte Art ermöglicht es, die Balance zwischen Speicher- und Zeitverbrauch für die inkrementelle Suche nach Matches zu steuern. Dafür stellt diese Arbeit eine Modellierungssprache vor, die es ermöglicht, verallgemeinerte Arten von Discrimination Networks effektiv zu spezifizieren. Darauf aufbauend stellt diese Arbeit einen inkrementellen Algorithmus vor, der die gespeicherten Matches effizient und skalierbar wartet. Abschließend stellt diese Arbeit eine Evaluierung vor, die aufzeigt, dass die Modellierungssprache eine effektive Spezifikation von verallgemeinerten Discrimination Networks erlaubt. Außerdem zeigt die Evaluierung, dass a) der inkrementelle Wartungsalgorithmus für große Graphdaten skaliert und b) die Netzwerkstrukturen von verallgemeinerten Discrimination Networks im Vergleich zu den Netzwerkstrukturen von Rete Netzwerken im Speicher- und Zeitverbrauch für die inkrementelle Suche nach Matches effizienter sein können.
KW - incremental graph pattern matching
KW - discrimination networks
KW - Rete networks
KW - Gator networks
KW - model-driven software engineering
KW - inkrementelle Graphmustersuche
KW - Discrimination Networks
KW - Rete Netzwerk
KW - Gator Netzwerk
KW - modellgetriebene Softwareentwicklung
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-405929
ER -
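As a concrete illustration of the partial-match bookkeeping described in the abstract above, the following minimal Python sketch incrementally maintains all matches of a two-edge path pattern under edge insertions. It is a toy, insertion-only stand-in for the Rete-style memories the thesis discusses, not the thesis's generalized discrimination networks; all names are hypothetical.

    # Incrementally maintain matches of the pattern x -> y -> z.
    # The stored edge indexes play the role of partial-match memories:
    # each new edge is joined against them to extend partial matches.
    class TwoHopView:
        def __init__(self):
            self.successors = {}    # u -> set of v with edge (u, v)
            self.predecessors = {}  # v -> set of u with edge (u, v)
            self.matches = set()    # complete matches (x, y, z)

        def insert_edge(self, u, v):
            for w in self.successors.get(v, ()):    # u -> v -> w
                self.matches.add((u, v, w))
            for x in self.predecessors.get(u, ()):  # x -> u -> v
                self.matches.add((x, u, v))
            self.successors.setdefault(u, set()).add(v)
            self.predecessors.setdefault(v, set()).add(u)

    view = TwoHopView()
    for edge in [("a", "b"), ("b", "c")]:
        view.insert_edge(*edge)
    print(view.matches)  # {('a', 'b', 'c')}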
TY - BOOK
A1 - Beyhl, Thomas
A1 - Blouin, Dominique
A1 - Giese, Holger
A1 - Lambers, Leen
T1 - On the operationalization of graph queries with generalized discrimination networks
N2 - Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, or model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, and thus we can employ the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
N2 - Graph-basierte Suchanfragen erfahren in jüngster Zeit ein zunehmendes Interesse durch Anwendungsdomänen wie zum Beispiel Soziale Netzwerke, biologische Netzwerke und Softwaremodelle. Für relationale Datenbanken wurden die relationale Algebra und sogenannte generalisierte Discrimination Networks bereits studiert um Suchanfragen angemessen in kleinere Suchanfragen für die inkrementelle Auswertung zu zerlegen und zu ordnen.
Allerdings gibt es für Graphdatenbanken derzeit keine formale Grundlage, die es erlaubt solche Zerlegungen zu finden. Daher schlagen wir ein Konzept für die Zerlegung und Ordnung von komplexen Suchanfragen vor. Das Konzept basiert auf generalisierten Discrimination Networks, die aus relationalen Datenbanken bekannt sind. Der Ansatz verwendet Graphtransformationsregeln für Knoten in diesen Netzwerken, sodass die Theorie von Graphen und Graphtransformationen angewendet werden kann. Darüber hinaus zeigen wir auf, dass diese Discrimination Networks die gleiche Ausdrucksstärke besitzen wie Nested Graph Conditions.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 106
KW - graph queries
KW - discrimination networks
KW - incremental graph pattern matching
KW - nested graph conditions
KW - Graph-basierte Suche
KW - Discrimination Networks
KW - Inkrementelle Graphmustersuche
KW - Nested Graph Conditions
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-96279
SN - 978-3-86956-372-5
SN - 1613-5652
SN - 2191-1665
IS - 106
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Beyhl, Thomas
A1 - Giese, Holger
T1 - Efficient and scalable graph view maintenance for deductive graph databases based on generalized discrimination networks
N2 - Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases make it possible to refer directly to the graph structure of such graph data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as with relational databases, running complex queries can be very time-consuming and can ruin the interactivity with the database. One possible approach to deal with this performance issue is to employ database views that consist of pre-computed answers to common and frequently stated queries. But to ensure that database views yield consistent query results in comparison with the data from which they are derived, these views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise, the effort to create and maintain views may not pay off in comparison to processing the queries directly on the data from which the views are derived.
At the time of writing, graph databases do not support database views and are limited to graph indexes that index nodes and edges of the graph data for fast query evaluation, but do not make it possible to maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for the efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored by graph databases. The discrimination network enables the automatic derivation of generic maintenance rules, using graph transformations, for maintaining graph views in case the graph data from which the graph views are derived changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
N2 - Graphdatenbanken bieten natürliche Möglichkeiten Graphdaten zu speichern und abzufragen. Im Gegensatz zu relationalen Datenbanken ermöglichen Graphdatenbanken Anfragen, die direkt auf der Graphstruktur der Daten arbeiten. Zum Beispiel können Graphmuster zur Formulierung von Anfragen an die Graphdatenbanken verwendet werden.
Allerdings können, wie bei relationalen Datenbanken, komplexe Anfragen sehr zeitaufwendig sein und interaktive Anfrageszenarien mit der Datenbank verhindern. Ein möglicher Ansatz, mit diesem Geschwindigkeitsproblem umzugehen, ist das Vorberechnen von Antworten für komplexe und häufig gestellte Suchanfragen in Form von sogenannten Datenbanksichten. Dabei muss sichergestellt sein, dass Anfragen, die mit Hilfe von Datenbanksichten beantwortet werden, zu jeder Zeit die gleichen Suchergebnisse zurückliefern, als wenn sie ohne Datenbanksichten beantwortet würden, sodass Datenbanksichten gewartet werden müssen, bevor Suchanfragen mit Hilfe dieser Datenbanksichten beantwortet werden. Eine solche Wartung von Datenbanksichten muss effizient erfolgen, anderenfalls kann sich der Aufwand für die Erzeugung und Wartung der Datenbanksichten nicht auszahlen.
Zum Zeitpunkt der Anfertigung dieses technischen Berichts ist keine Graphdatenbank bekannt, die solche Datenbanksichten unterstützt. Lediglich Indizes werden durch Graphdatenbanken unterstützt, die es ermöglichen, Knoten und Kanten eines Graphen für die schnelle Anfragenbeantwortung zu indizieren, aber es nicht ermöglichen, vorberechnete Antworten auf Suchanfragen zu warten. Die Unterstützung von Datenbanksichten durch Graphdatenbanken wird zusätzlich erschwert, wenn Negation und Rekursion unterstützt werden sollen, wie bei relationalen deduktiven Datenbanken.
In diesem technischen Bericht beschreiben wir einen effizienten und skalierenden Ansatz zur inkrementellen Wartung von Datenbanksichten für deduktive Graphdatenbanken. Das Hauptkonzept des Ansatzes ist ein sogenanntes verallgemeinertes Discrimination Network, das es ermöglicht, geschachtelte Graph Conditions inklusive Negation und Rekursion zu modellieren, welche den Inhalt von Datenbanksichten für Graphdatenbanken in Form von Graphmustern spezifizieren. Das Discrimination Network erlaubt die Ableitung von Regeln für die Wartung der Datenbanksichten, wenn die Graphdaten, von denen die Datenbanksichten abgeleitet wurden, modifiziert werden. Wir evaluieren den Ansatz in Form einer Fallstudie mit mehreren Graphdatensätzen, die aus Open-Source-Projekten abgeleitet wurden.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 99
KW - incremental graph pattern matching
KW - graph databases
KW - view maintenance
KW - inkrementelles Graph Pattern Matching
KW - Graphdatenbanken
KW - Wartung von Graphdatenbanksichten
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-79535
SN - 978-3-86956-339-8
SN - 1613-5652
SN - 2191-1665
IS - 99
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Bilò, Davide
A1 - Lenzner, Pascal
T1 - On the tree conjecture for the network creation game
JF - Theory of computing systems
N2 - Selfish Network Creation focuses on modeling real world networks from a game-theoretic point of view. One of the classic models by Fabrikant et al. (2003) is the network creation game, where agents correspond to nodes in a network which buy incident edges for the price of alpha per edge to minimize their total distance to all other nodes. The model is well-studied but still has intriguing open problems. The most famous conjectures state that the price of anarchy is constant for all alpha and that for alpha >= n all equilibrium networks are trees. We introduce a novel technique for analyzing stable networks for high edge-price alpha and employ it to improve on the best known bound for the latter conjecture. In particular we show that for alpha > 4n - 13 all equilibrium networks must be trees, which implies a constant price of anarchy for this range of alpha. Moreover, we also improve the constant upper bound on the price of anarchy for equilibrium trees.
KW - network creation games
KW - price of anarchy
KW - tree conjecture
KW - algorithmic
KW - game theory
Y1 - 2019
U6 - https://doi.org/10.1007/s00224-019-09945-9
SN - 1432-4350
SN - 1433-0490
VL - 64
IS - 3
SP - 422
EP - 443
PB - Springer
CY - New York
ER -
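For readers unfamiliar with the model referenced in the record above, the cost an agent minimizes in the network creation game of Fabrikant et al. can be stated as follows (a standard formulation matching the abstract's description; the notation is ours):

    \[
      \mathrm{cost}(u) \;=\; \alpha \cdot |E_u| \;+\; \sum_{v \in V} d_G(u, v),
    \]

where $E_u$ is the set of edges bought by agent $u$ at price $\alpha$ each, and $d_G(u, v)$ is the hop distance from $u$ to $v$ in the resulting network $G$. An equilibrium network is one in which no agent can lower this cost by changing its bought edges; the tree conjecture states that for $\alpha \geq n$ all such networks are trees, which the paper above proves for $\alpha > 4n - 13$.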
TY - JOUR
A1 - Birnick, Johann
A1 - Bläsius, Thomas
A1 - Friedrich, Tobias
A1 - Naumann, Felix
A1 - Papenbrock, Thorsten
A1 - Schirneck, Friedrich Martin
T1 - Hitting set enumeration with partial information for unique column combination discovery
JF - Proceedings of the VLDB Endowment
N2 - Unique column combinations (UCCs) are a fundamental concept in relational databases. They identify entities in the data and support various data management activities. Still, UCCs are usually not explicitly defined and need to be discovered. State-of-the-art data profiling algorithms are able to efficiently discover UCCs in moderately sized datasets, but they tend to fail on large and, in particular, on wide datasets due to run time and memory limitations.
In this paper, we introduce HPIValid, a novel UCC discovery algorithm that implements a faster and more resource-saving search strategy. HPIValid models the metadata discovery as a hitting set enumeration problem in hypergraphs. In this way, it combines efficient discovery techniques from data profiling research with the most recent theoretical insights into enumeration algorithms. Our evaluation shows that HPIValid is not only orders of magnitude faster than related work, it also has a much smaller memory footprint.
Y1 - 2020
U6 - https://doi.org/10.14778/3407790.3407824
SN - 2150-8097
VL - 13
IS - 11
SP - 2270
EP - 2283
PB - Association for Computing Machinery
CY - [New York, NY]
ER -
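To make the UCC notion in the abstract above concrete, here is a brute-force Python baseline that checks uniqueness of a column combination and enumerates minimal UCCs by increasing size. It is exponential in the number of columns and is exactly the kind of naive search that HPIValid's hitting-set formulation avoids; the function names are hypothetical.

    from itertools import combinations

    def is_ucc(rows, cols):
        # cols is a UCC if no two rows agree on all columns in cols.
        seen = set()
        for row in rows:
            key = tuple(row[c] for c in cols)
            if key in seen:
                return False
            seen.add(key)
        return True

    def minimal_uccs(rows, all_cols):
        # Enumerate inclusion-wise minimal UCCs, smallest first.
        found = []
        for k in range(1, len(all_cols) + 1):
            for cols in combinations(all_cols, k):
                if any(set(m) <= set(cols) for m in found):
                    continue  # a subset is already unique
                if is_ucc(rows, cols):
                    found.append(cols)
        return found

    rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "a"}]
    print(minimal_uccs(rows, ["id", "name"]))  # [('id',)]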
TY - GEN
A1 - Björk, Jennie
A1 - Hölzle, Katharina
T1 - Editorial
T2 - Creativity and innovation management
Y1 - 2019
U6 - https://doi.org/10.1111/caim.12336
SN - 0963-1690
SN - 1467-8691
VL - 28
IS - 3
SP - 289
EP - 290
PB - Wiley
CY - Hoboken
ER -
TY - JOUR
A1 - Björk, Jennie
A1 - Hölzle, Katharina
A1 - Boer, Harry
T1 - ‘What will we learn from the current crisis?’
JF - Creativity and innovation management
Y1 - 2021
U6 - https://doi.org/10.1111/caim.12442
SN - 0963-1690
SN - 1467-8691
VL - 30
IS - 2
SP - 231
EP - 232
PB - Wiley-Blackwell
CY - Oxford [u.a.]
ER -
TY - JOUR
A1 - Blaesius, Thomas
A1 - Friedrich, Tobias
A1 - Schirneck, Friedrich Martin
T1 - The complexity of dependency detection and discovery in relational databases
JF - Theoretical computer science
N2 - Multi-column dependencies in relational databases come associated with two different computational tasks. The detection problem is to decide whether a dependency of a certain type and size holds in a given database, the discovery problem asks to enumerate all valid dependencies of that type. We settle the complexity of both of these problems for unique column combinations (UCCs), functional dependencies (FDs), and inclusion dependencies (INDs). We show that the detection of UCCs and FDs is W[2]-complete when parameterized by the solution size. The discovery of inclusion-wise minimal UCCs is proven to be equivalent under parsimonious reductions to the transversal hypergraph problem of enumerating the minimal hitting sets of a hypergraph. The discovery of FDs is equivalent to the simultaneous enumeration of the hitting sets of multiple input hypergraphs. We further identify the detection of INDs as one of the first natural W[3]-complete problems. The discovery of maximal INDs is shown to be equivalent to enumerating the maximal satisfying assignments of antimonotone, 3-normalized Boolean formulas.
KW - data profiling
KW - enumeration complexity
KW - functional dependency
KW - inclusion dependency
KW - parameterized complexity
KW - parsimonious reduction
KW - transversal hypergraph
KW - Unique column combination
KW - W[3]-completeness
Y1 - 2021
U6 - https://doi.org/10.1016/j.tcs.2021.11.020
SN - 0304-3975
SN - 1879-2294
VL - 900
SP - 79
EP - 96
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Bläsius, Thomas
A1 - Freiberger, Cedric
A1 - Friedrich, Tobias
A1 - Katzmann, Maximilian
A1 - Montenegro-Retana, Felix
A1 - Thieffry, Marianne
T1 - Efficient Shortest Paths in Scale-Free Networks with Underlying Hyperbolic Geometry
JF - ACM Transactions on Algorithms
N2 - A standard approach to accelerating shortest path algorithms on networks is the bidirectional search, which explores the graph from the start and the destination simultaneously. In practice this strategy performs particularly well on scale-free real-world networks. Such networks typically have a heterogeneous degree distribution (e.g., a power-law distribution) and high clustering (i.e., vertices with a common neighbor are likely to be connected themselves). These two properties can be obtained by assuming an underlying hyperbolic geometry.
To explain the observed behavior of the bidirectional search, we analyze its running time on hyperbolic random graphs and prove that it is O~(n^(2 - 1/alpha) + n^(1/(2 alpha)) + delta_max) with high probability, where alpha in (1/2, 1) controls the power-law exponent of the degree distribution and delta_max is the maximum degree. This bound is sublinear, improving on the obvious worst-case linear bound. Although our analysis depends on the underlying geometry, the algorithm itself is oblivious to it.
KW - Random graphs
KW - hyperbolic geometry
KW - scale-free networks
KW - bidirectional shortest path
Y1 - 2022
U6 - https://doi.org/10.1145/3516483
SN - 1549-6325
SN - 1549-6333
VL - 18
IS - 2
SP - 1
EP - 32
PB - Association for Computing Machinery
CY - New York
ER -
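Since the abstract above analyzes bidirectional search rather than defining it, the following minimal Python sketch shows the strategy on an unweighted graph: two breadth-first searches grow from both endpoints, and the smaller frontier is expanded first. The paper's contribution is the running-time analysis on hyperbolic random graphs, not this code; the adjacency-dict interface is an assumption for illustration.

    def bidirectional_bfs(adj, s, t):
        # adj: dict mapping each node to an iterable of its neighbors.
        # Returns the hop distance between s and t, or None if disconnected.
        if s == t:
            return 0
        dist_s, dist_t = {s: 0}, {t: 0}
        frontier_s, frontier_t = [s], [t]
        while frontier_s and frontier_t:
            # Expand the smaller frontier by one full BFS level.
            if len(frontier_s) <= len(frontier_t):
                frontier, dist = frontier_s, dist_s
            else:
                frontier, dist = frontier_t, dist_t
            next_frontier = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        next_frontier.append(v)
            if dist is dist_s:
                frontier_s = next_frontier
            else:
                frontier_t = next_frontier
            # After a full level, any node reached from both sides closes
            # a path; the minimum over such meeting nodes is exact.
            meet = dist_s.keys() & dist_t.keys()
            if meet:
                return min(dist_s[x] + dist_t[x] for x in meet)
        return None

    graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(bidirectional_bfs(graph, 1, 4))  # 3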
TY - JOUR
A1 - Bog, Anja
A1 - Plattner, Hasso
A1 - Zeier, Alexander
T1 - A mixed transaction processing and operational reporting benchmark
JF - Information systems frontiers
N2 - The importance of reporting is ever increasing in today's fast-paced market environments and the availability of up-to-date information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing systems (OLTP) with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide the flexibility, detail, and timeliness needed for today's operational reporting. As technology advances, this separation has to be re-evaluated and means to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing means to evaluate the performance of enterprise data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the reason for separating the two workloads on different systems. In this article, we introduce the concepts, logical data schema, transactions and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing, globally operating enterprises.
KW - Benchmarking
KW - Mixed workload
KW - OLTP
KW - Operational reporting
Y1 - 2011
U6 - https://doi.org/10.1007/s10796-010-9283-8
SN - 1387-3326
VL - 13
IS - 3
SP - 321
EP - 335
PB - Springer
CY - Dordrecht
ER -
TY - JOUR
A1 - Bohn, Nicolai
A1 - Kundisch, Dennis
T1 - What are we talking about when we talk about technology pivots?
BT - a Delphi study
JF - Information & management
N2 - Technology pivots were designed to help digital startups make adjustments to the technology underpinning their products and services. While academia and the media make liberal use of the term "technology pivot," they rarely align themselves with Ries' foundational conceptualization. Recent research suggests that a more granular conceptualization of technology pivots is required. To scientifically derive a comprehensive conceptualization, we conduct a Delphi study with a panel of 38 experts drawn from academia and practice to explore their understanding of "technology pivots." Our study thus makes an important contribution to advancing the seminal work by Ries on technology pivots.
KW - digital startup
KW - lean startup approach
KW - technology pivot
KW - conceptualization
KW - Delphi study
Y1 - 2020
U6 - https://doi.org/10.1016/j.im.2020.103319
SN - 0378-7206
SN - 1872-7530
VL - 57
IS - 6
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Boissier, Martin
T1 - Robust and budget-constrained encoding configurations for in-memory database systems
JF - Proceedings of the VLDB Endowment
N2 - Data encoding has been applied to database systems for decades, as it mitigates bandwidth bottlenecks and reduces storage requirements. But even in the presence of these advantages, most in-memory database systems use data encoding only conservatively, as the negative impact on runtime performance can be severe. Real-world systems with large parts being infrequently accessed and with cost-efficiency constraints in cloud environments require solutions that automatically and efficiently select encoding techniques, including heavy-weight compression. In this paper, we introduce workload-driven approaches to automatically determine memory-budget-constrained encoding configurations using greedy heuristics and linear programming. We show for TPC-H, TPC-DS, and the Join Order Benchmark that optimized encoding configurations can reduce the main memory footprint significantly without a loss in runtime performance over state-of-the-art dictionary encoding. To yield robust selections, we extend the linear programming-based approach to incorporate query runtime constraints and mitigate unexpected performance regressions.
Y1 - 2021
U6 - https://doi.org/10.14778/3503585.3503588
SN - 2150-8097
VL - 15
IS - 4
SP - 780
EP - 793
PB - Association for Computing Machinery (ACM)
CY - [New York]
ER -
TY - JOUR
A1 - Bonifati, Angela
A1 - Mior, Michael J.
A1 - Naumann, Felix
A1 - Noack, Nele Sina
T1 - How inclusive are we?
BT - an analysis of gender diversity in database venues
JF - SIGMOD record / Association for Computing Machinery, Special Interest Group on Management of Data
N2 - ACM SIGMOD, VLDB and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide gender, ethnicity, and other properties of the authors.
We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and also compare women's authorship percentages in both single-blind and double-blind venues over the years. We also cross-compare the results obtained in data management with those of other relevant areas in Computer Science.
Y1 - 2022
U6 - https://doi.org/10.1145/3516431.3516438
SN - 0163-5808
SN - 1943-5835
VL - 50
IS - 4
SP - 30
EP - 35
PB - Association for Computing Machinery
CY - New York
ER -
TY - JOUR
A1 - Borchert, Florian
A1 - Mock, Andreas
A1 - Tomczak, Aurelie
A1 - Hügel, Jonas
A1 - Alkarkoukly, Samer
A1 - Knurr, Alexander
A1 - Volckmar, Anna-Lena
A1 - Stenzinger, Albrecht
A1 - Schirmacher, Peter
A1 - Debus, Jürgen
A1 - Jäger, Dirk
A1 - Longerich, Thomas
A1 - Fröhling, Stefan
A1 - Eils, Roland
A1 - Bougatf, Nina
A1 - Sax, Ulrich
A1 - Schapranow, Matthieu-Patrick
T1 - Knowledge bases and software support for variant interpretation in precision oncology
JF - Briefings in bioinformatics
N2 - Precision oncology is a rapidly evolving interdisciplinary medical specialty. Comprehensive cancer panels are becoming increasingly available at pathology departments worldwide, creating the urgent need for scalable cancer variant annotation and molecularly informed treatment recommendations. A wealth of mainly academia-driven knowledge bases calls for software tools supporting the multi-step diagnostic process. We derive a comprehensive list of knowledge bases relevant for variant interpretation by a review of existing literature followed by a survey among medical experts from university hospitals in Germany. In addition, we review cancer variant interpretation tools, which integrate multiple knowledge bases. We categorize the knowledge bases along the diagnostic process in precision oncology and analyze programmatic access options as well as the integration of knowledge bases into software tools. The most commonly used knowledge bases provide good programmatic access options and have been integrated into a range of software tools. For the wider set of knowledge bases, access options vary across different parts of the diagnostic process. Programmatic access is limited for information regarding clinical classifications of variants and for therapy recommendations. The main issue for databases used for biological classification of pathogenic variants and pathway context information is the lack of standardized interfaces. There is no single cancer variant interpretation tool that integrates all identified knowledge bases. Specialized tools are available and need to be further developed for different steps in the diagnostic process.
KW - HiGHmed
KW - personalized medicine
KW - molecular tumor board
KW - data integration
KW - cancer therapy
Y1 - 2021
U6 - https://doi.org/10.1093/bib/bbab134
SN - 1467-5463
SN - 1477-4054
VL - 22
IS - 6
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Borchert, Florian
A1 - Mock, Andreas
A1 - Tomczak, Aurelie
A1 - Hügel, Jonas
A1 - Alkarkoukly, Samer
A1 - Knurr, Alexander
A1 - Volckmar, Anna-Lena
A1 - Stenzinger, Albrecht
A1 - Schirmacher, Peter
A1 - Debus, Jürgen
A1 - Jäger, Dirk
A1 - Longerich, Thomas
A1 - Fröhling, Stefan
A1 - Eils, Roland
A1 - Bougatf, Nina
A1 - Sax, Ulrich
A1 - Schapranow, Matthieu-Patrick
T1 - Correction to: Knowledge bases and software support for variant interpretation in precision oncology
JF - Briefings in bioinformatics
Y1 - 2021
U6 - https://doi.org/10.1093/bib/bbab246
SN - 1467-5463
SN - 1477-4054
VL - 22
IS - 6
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - BOOK
A1 - Breest, Martin
A1 - Bouché, Paul
A1 - Grund, Martin
A1 - Haubrock, Sören
A1 - Hüttenrauch, Stefan
A1 - Kylau, Uwe
A1 - Ploskonos, Anna
A1 - Queck, Tobias
A1 - Schreiter, Torben
T1 - Fundamentals of Service-Oriented Engineering
N2 - Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background, as well as an overview of service-oriented architectures, is needed and is provided in this paper.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 16
Y1 - 2006
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-33801
SN - 978-3-939469-35-3
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Buchwald, Sebastian
A1 - Wagelaar, Dennis
A1 - Dan, Li
A1 - Hegedues, Abel
A1 - Herrmannsdoerfer, Markus
A1 - Horn, Tassilo
A1 - Kalnina, Elina
A1 - Krause, Christian
A1 - Lano, Kevin
A1 - Lepper, Markus
A1 - Rensink, Arend
A1 - Rose, Louis
A1 - Waetzoldt, Sebastian
A1 - Mazanek, Steffen
T1 - A survey and comparison of transformation tools based on the transformation tool contest
JF - Science of computer programming
N2 - Model transformation is one of the key tasks in model-driven engineering and relies on the efficient matching and modification of graph-based data structures; its sibling, graph rewriting, has been used to successfully model problems in a variety of domains. Over the last years, a wide range of graph and model transformation tools have been developed, each with its own particular strengths and typical application domains. In this paper, we give a survey and a comparison of the model and graph transformation tools that participated in the Transformation Tool Contest 2011. The reader gains an overview of the field and its tools, based on the illustrative solutions submitted to a Hello World task, and a comparison based on a detailed taxonomy. The article is of interest to researchers in the field of model and graph transformation, as well as to software engineers with a transformation task at hand who have to choose a tool fitting their needs. All solutions referenced in this article provide a SHARE demo. It supported the peer-review process for the contest and now allows the reader to test the tools online.
KW - Graph rewriting
KW - Model transformation
KW - Tool survey
KW - Transformation tool contest
Y1 - 2014
U6 - https://doi.org/10.1016/j.scico.2013.10.009
SN - 0167-6423
SN - 1872-7964
VL - 85
SP - 41
EP - 99
PB - Elsevier
CY - Amsterdam
ER -
TY - CHAP
A1 - Bynens, Maarten
A1 - Van Landuyt, Dimitri
A1 - Truyen, Eddy
A1 - Joosen, Wouter
T1 - Towards reusable aspects: the callback mismatch problem
N2 - Because software development is increasingly expensive and time-consuming, software reuse gains importance. Aspect-oriented software development modularizes crosscutting concerns, which enables their systematic reuse. The literature provides a number of AOP patterns and best practices for developing reusable aspects, based on compelling examples for concerns like tracing, transactions, and persistence. However, such best practices are lacking for systematically reusing invasive aspects. In this paper, we present the ‘callback mismatch problem’. This problem arises in the context of abstraction mismatch, in which the aspect is required to issue a callback to the base application. As a consequence, the composition of invasive aspects is cumbersome to implement, difficult to maintain, and impossible to reuse. We motivate this problem with a real-world example, show that it persists in the current state of the art, and outline the need for advanced aspectual composition mechanisms to deal with it.
KW - reusable aspects
KW - invasive aspects
KW - aspect adapter
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-41347
ER -
TY - THES
A1 - Böhm, Christoph
T1 - Enriching the Web of Data with topics and links
T1 - Anreicherung des Web of Data mit Themen und Verknüpfungen
N2 - This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles that allow easy access to and reuse of data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings – some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For the topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approach are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion. Also, we illustrate that we can tackle the entity linking problem on a very large scale, with data comprising more than 100 million entity representations from very many sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. Also, this concept alignment method must execute very fast to serve as a preprocessing step for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input based on a comparison of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw conclusions about relations, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
N2 - This thesis presents new ideas and research findings for the Web of Data, a global network of so-called Linked Open Data (LOD) sources. These data sources adhere to certain principles so that users can easily access and reuse them over the Web. LOD is already widespread, and a multitude of datasets has been published according to the LOD principles. Nevertheless, LOD is not yet a fixed building block of the 21st-century Web. This thesis surveys the current state of research and technology for Linked Open Data and identifies its weaknesses; some of these shortcomings we address in the subsequent main part. To begin with, we introduce a novel kind of metadata for data sources: the topics of a dataset. Such topics could be published along with descriptions of data sources and support a range of use cases, such as finding and exploring relevant data. We discuss our approach for extracting this meta-information, Annotated Pattern Percolation (APP), and compare experimental results against topics from Wikipedia portals. Furthermore, we extend the state of research on finding different representations of a real-world object (entity linking). For this task, we do not merely make local decisions; rather, we exploit the entirety of the object relationships. We discuss our optimization model, prove its hardness, and present three approaches for computing a solution, all of which are implemented in the LINked Data Alignment (LINDA) system. The first method runs on a single machine but can exploit multi-core processors; the other approaches were developed for compute clusters without shared memory. We evaluate our results on more than 100 million entities and discuss the advantages and disadvantages of the respective approaches. In the remaining part of the thesis we address the linking of concepts, a sub-problem of entity linking. Our approach, Holistic Concept Matching (HCM), again considers the data in its entirety. We group the input in order to achieve a low runtime when processing several hundred thousand concepts. Within the groups we compute complex similarities and detect semantic conclusions and contradictions. We likewise evaluate the quality of the result on real datasets. In summary, this thesis contributes to the current state of research on the Web of Data. All the techniques discussed were tested with real, heterogeneous, and large datasets.
KW - Web of Data
KW - graph clustering
KW - topics
KW - entity alignment
KW - map/reduce
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-68624
ER -
TY - JOUR
A1 - Cabalar, Pedro
A1 - Kaminski, Roland
A1 - Schaub, Torsten H.
A1 - Schuhmann, Anna
T1 - Temporal answer set programming on finite traces
JF - Theory and practice of logic programming
N2 - In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to balance the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). We also argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP's full-fledged input language with temporal operators. This includes future as well as past operators, whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of the ASP system clingo.
Y1 - 2018
U6 - https://doi.org/10.1017/S1471068418000297
SN - 1471-0684
SN - 1475-3081
VL - 18
IS - 3-4
SP - 406
EP - 420
PB - Cambridge Univ. Press
CY - New York
ER -
TY - BOOK
A1 - Calmez, Conrad
A1 - Hesse, Hubert
A1 - Siegmund, Benjamin
A1 - Stamm, Sebastian
A1 - Thomschke, Astrid
A1 - Hirschfeld, Robert
A1 - Ingalls, Dan
A1 - Lincke, Jens
T1 - Explorative authoring of Active Web content in a mobile environment
N2 - Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, software development is a collaborative process, which requires the development environment to offer collaboration facilities. This report describes extensions of the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. The extensions comprise collaboration mechanisms and user interface adaptations, as well as event processing and performance measurement on mobile devices.
N2 - Developing rich web applications can be a complex task, especially when it comes to supporting mobile devices. Web-based environments such as Lively Kernel can help developers build web applications by making the development process more direct and interactive. Moreover, the development of software is collaborative, which means development environments must be designed to support collaborative work. This report describes the extensions of the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. These extensions range from collaboration mechanisms and user interfaces to event handling and performance measurements on mobile devices.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 72
KW - Web applications
KW - Mobile Application Development
KW - CSCW
KW - Lively Kernel
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-64054
SN - 978-3-86956-232-2
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Caruccio, Loredana
A1 - Deufemia, Vincenzo
A1 - Naumann, Felix
A1 - Polese, Giuseppe
T1 - Discovering relaxed functional dependencies based on multi-attribute dominance
JF - IEEE transactions on knowledge and data engineering
N2 - With the advent of big data and data lakes, data are often integrated from multiple sources. Such integrated data are often of poor quality due to inconsistencies, errors, and so forth. One way to check the quality of data is to infer functional dependencies (FDs). However, in many modern applications it might be necessary to extract properties and relationships that are not captured through FDs, owing to the need to admit exceptions or to consider similarity rather than equality of data values. Relaxed FDs (RFDs) have been introduced to meet these needs, but their discovery from data adds further complexity to an already complex problem, in part because similarity and validity thresholds must be specified. We propose Domino, a new discovery algorithm for RFDs that exploits the concept of dominance to derive similarity thresholds of attribute values while inferring RFDs. An experimental evaluation on real datasets demonstrates the discovery performance and the effectiveness of the proposed algorithm.
KW - Complexity theory
KW - Approximation algorithms
KW - Big Data
KW - Distributed databases
KW - Semantics
KW - Lakes
KW - Functional dependencies
KW - data profiling
KW - data cleansing
Y1 - 2020
U6 - https://doi.org/10.1109/TKDE.2020.2967722
SN - 1041-4347
SN - 1558-2191
VL - 33
IS - 9
SP - 3212
EP - 3228
PB - Institute of Electrical and Electronics Engineers
CY - New York, NY
ER -