TY - JOUR
A1 - Birnick, Johann
A1 - Bläsius, Thomas
A1 - Friedrich, Tobias
A1 - Naumann, Felix
A1 - Papenbrock, Thorsten
A1 - Schirneck, Friedrich Martin
T1 - Hitting set enumeration with partial information for unique column combination discovery
JF - Proceedings of the VLDB Endowment
N2 - Unique column combinations (UCCs) are a fundamental concept in relational databases. They identify entities in the data and support various data management activities. Still, UCCs are usually not explicitly defined and need to be discovered. State-of-the-art data profiling algorithms are able to efficiently discover UCCs in moderately sized datasets, but they tend to fail on large and, in particular, on wide datasets due to run time and memory limitations.
In this paper, we introduce HPIValid, a novel UCC discovery algorithm that implements a faster and more resource-saving search strategy. HPIValid models the metadata discovery as a hitting set enumeration problem in hypergraphs. In this way, it combines efficient discovery techniques from data profiling research with the most recent theoretical insights into enumeration algorithms. Our evaluation shows that HPIValid is not only orders of magnitude faster than related work, it also has a much smaller memory footprint.
Y1 - 2020
U6 - https://doi.org/10.14778/3407790.3407824
SN - 2150-8097
VL - 13
IS - 11
SP - 2270
EP - 2283
PB - Association for Computing Machinery
CY - New York, NY
ER -
TY - JOUR
A1 - Caruccio, Loredana
A1 - Deufemia, Vincenzo
A1 - Naumann, Felix
A1 - Polese, Giuseppe
T1 - Discovering relaxed functional dependencies based on multi-attribute dominance
JF - IEEE Transactions on Knowledge and Data Engineering
N2 - With the advent of big data and data lakes, data are often integrated from multiple sources. Such integrated data are often of poor quality, due to inconsistencies, errors, and so forth. One way to check the quality of data is to infer functional dependencies (fds). However, in many modern applications it might be necessary to extract properties and relationships that are not captured through fds, due to the necessity to admit exceptions, or to consider similarity rather than equality of data values. Relaxed fds (rfds) have been introduced to meet these needs, but their discovery from data adds further complexity to an already complex problem, also due to the necessity of specifying similarity and validity thresholds. We propose Domino, a new discovery algorithm for rfds that exploits the concept of dominance in order to derive similarity thresholds of attribute values while inferring rfds. An experimental evaluation on real datasets demonstrates the discovery performance and the effectiveness of the proposed algorithm.
KW - Complexity theory
KW - Approximation algorithms
KW - Big Data
KW - Distributed databases
KW - Semantics
KW - Lakes
KW - Functional dependencies
KW - data profiling
KW - data cleansing
Y1 - 2020
U6 - https://doi.org/10.1109/TKDE.2020.2967722
SN - 1041-4347
SN - 1558-2191
VL - 33
IS - 9
SP - 3212
EP - 3228
PB - Institute of Electrical and Electronics Engineers
CY - New York, NY
ER -
TY - JOUR
A1 - Koßmann, Jan
A1 - Papenbrock, Thorsten
A1 - Naumann, Felix
T1 - Data dependencies for query optimization
BT - a survey
JF - The VLDB Journal
N2 - Effective query optimization is a core feature of any database management system. While most query optimization techniques make use of simple metadata, such as cardinalities and other basic statistics, other optimization techniques are based on more advanced metadata including data dependencies, such as functional, uniqueness, order, or inclusion dependencies. This survey provides an overview, intuitive descriptions, and classifications of query optimization and execution strategies that are enabled by data dependencies. We consider the most popular types of data dependencies and focus on optimization strategies that target the optimization of relational database queries. The survey supports database vendors to identify optimization opportunities as well as DBMS researchers to find related work and open research questions.
KW - Query optimization
KW - Query execution
KW - Data dependencies
KW - Data profiling
KW - Unique column combinations
KW - Functional dependencies
KW - Order dependencies
KW - Inclusion dependencies
KW - Relational data
KW - SQL
Y1 - 2021
U6 - https://doi.org/10.1007/s00778-021-00676-3
SN - 1066-8888
SN - 0949-877X
VL - 31
IS - 1
SP - 1
EP - 22
PB - Springer
CY - Berlin; Heidelberg; New York
ER -
TY - JOUR
A1 - Momtazi, Saeedeh
A1 - Naumann, Felix
T1 - Topic modeling for expert finding using latent Dirichlet allocation
JF - Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
N2 - The task of expert finding is to rank the experts in the search space given a field of expertise as an input query. In this paper, we propose a topic modeling approach for this task. The proposed model uses latent Dirichlet allocation (LDA) to induce probabilistic topics. In the first step of our algorithm, the main topics of a document collection are extracted using LDA. The extracted topics present the connection between expert candidates and user queries. In the second step, the topics are used as a bridge to find the probability of selecting each candidate for a given query. The candidates are then ranked based on these probabilities. The experimental results on the Text REtrieval Conference (TREC) Enterprise track for 2005 and 2006 show that the proposed topic-based approach outperforms the state-of-the-art profile- and document-based models, which use information retrieval methods to rank experts. Moreover, we present the superiority of the proposed topic-based approach to the improved document-based expert finding systems, which consider additional information such as local context, candidate prior, and query expansion.
Y1 - 2013
U6 - https://doi.org/10.1002/widm.1102
SN - 1942-4787
VL - 3
IS - 5
SP - 346
EP - 353
PB - Wiley
CY - San Francisco
ER -
TY - BOOK
A1 - Albrecht, Alexander
A1 - Naumann, Felix
T1 - Understanding cryptic schemata in large extract-transform-load systems
N2 - Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 60
KW - Extract-Transform-Load (ETL)
KW - Data Warehouse
KW - Data Integration
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-61257
SN - 978-3-86956-201-8
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Bauckmann, Jana
A1 - Abedjan, Ziawasch
A1 - Leser, Ulf
A1 - Müller, Heiko
A1 - Naumann, Felix
T1 - Covering or complete? Discovering conditional inclusion dependencies
N2 - Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In the last years conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 62
KW - Data Dependency
KW - Conditional Inclusion Dependency
KW - Metadata Discovery
KW - Linked Open Data
KW - Link Discovery
KW - Association Rule Mining
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-62089
SN - 978-3-86956-212-4
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Herschel, Melanie
A1 - Naumann, Felix
T1 - Space and time scalability of duplicate detection in graph data
N2 - Duplicate detection consists in determining different representations of real-world objects in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality/effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computation can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 25
Y1 - 2008
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-32851
SN - 978-3-940793-46-1
ER -
TY - BOOK
A1 - Abedjan, Ziawasch
A1 - Golab, Lukasz
A1 - Naumann, Felix
A1 - Papenbrock, Thorsten
T1 - Data Profiling
T3 - Synthesis Lectures on Data Management, 52
Y1 - 2019
SN - 978-1-68173-446-0
PB - Morgan & Claypool Publishers
CY - San Rafael
ER -
TY - JOUR
A1 - Bonifati, Angela
A1 - Mior, Michael J.
A1 - Naumann, Felix
A1 - Noack, Nele Sina
T1 - How inclusive are we?
BT - an analysis of gender diversity in database venues
JF - SIGMOD Record
N2 - ACM SIGMOD, VLDB and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide gender, ethnicity, and other properties of the authors.
We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and also compare women's authorship percentages in both single-blind and double-blind venues along the years. We also obtained a cross comparison of the obtained results in data management with other relevant areas in Computer Science.
Y1 - 2022
U6 - https://doi.org/10.1145/3516431.3516438
SN - 0163-5808
SN - 1943-5835
VL - 50
IS - 4
SP - 30
EP - 35
PB - Association for Computing Machinery
CY - New York
ER -
TY - JOUR
A1 - Vitagliano, Gerardo
A1 - Jiang, Lan
A1 - Naumann, Felix
T1 - Detecting layout templates in complex multiregion files
JF - Proceedings of the VLDB Endowment
N2 - Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performances in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
Y1 - 2022
U6 - https://doi.org/10.14778/3494124.3494145
SN - 2150-8097
VL - 15
IS - 3
SP - 646
EP - 658
PB - Association for Computing Machinery
CY - New York
ER -