TY - GEN
A1 - Shaabani, Nuhad
A1 - Meinel, Christoph
T1 - Improving the efficiency of inclusion dependency detection
T2 - Proceedings of the 27th ACM International Conference on Information and Knowledge Management
N2 - The detection of all inclusion dependencies (INDs) in an unknown dataset is at the core of any data profiling effort. Apart from the discovery of foreign key relationships, INDs can help perform data integration, integrity checking, schema (re-)design, and query optimization. With the advent of Big Data, the demand for efficient IND discovery algorithms that can scale with the input data size is growing. To this end, we propose S-INDD++ as a scalable system for detecting unary INDs in large datasets. S-INDD++ applies a new stepwise partitioning technique that helps discard a large number of attributes in the early phases of the detection by processing the first, smaller partitions. S-INDD++ also extends the concept of attribute clustering to decide which attributes to discard based on the clustering result of each partition. Moreover, in contrast to the state of the art, S-INDD++ does not require a partition to fit into main memory, which is a highly desirable property in the face of ever-growing datasets. We conducted an exhaustive evaluation of S-INDD++ by applying it to large datasets with thousands of attributes and more than 266 million tuples. The results show the clear superiority of S-INDD++ over the state of the art: it reduced the runtime by up to 50% in comparison with BINDER and by up to 98% in comparison with S-INDD.
KW - Algorithms
KW - Data partitioning
KW - Data profiling
KW - Data mining
Y1 - 2018
SN - 978-1-4503-6014-2
U6 - https://doi.org/10.1145/3269206.3271724
SP - 207
EP - 216
PB - Association for Computing Machinery
CY - New York
ER -

TY - THES
A1 - Harmouch, Hazar
T1 - Single-column data profiling
N2 - The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are its data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions. Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, as it helps assess which data might be useful and helps reveal and understand a new dataset's characteristics. In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important pieces of metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing their theoretical guarantees. Our results highlight their trade-offs and point out the possibility of creating parallel or distributed versions of these algorithms to cope with the growing size of modern datasets. Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative of the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, the top-5 for example, increases this percentage to 72%. Finally, we formally and experimentally demonstrate the phenomenon of ghost and fake FDs caused by FD discovery over datasets with missing values. We propose two efficient scores, a probabilistic and a likelihood-based one, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
N2 - Das Forschungsgebiet Data Profiling besteht aus einer Vielzahl von Methoden und Prozessen, die es erlauben, Datensätze zu untersuchen und Metadaten über diese zu ermitteln. Typischerweise erzeugen verschiedene Data-Profiling-Techniken unterschiedliche Arten von Metadaten, die entweder verschiedene Statistiken einzelner Spalten (Single-Column Analysis) oder Beziehungen zwischen diesen (Dependency Discovery) umfassen. Zu den grundlegenden Statistiken einer Spalte gehören unter anderem ihr Datentyp, ihr Name, die Anzahl eindeutiger Werte (Kardinalität der Spalte), Maximal- und Minimalwerte, die Anzahl an Null-Werten sowie ihre Werteverteilung. Im Falle von Abhängigkeiten kann es sich beispielsweise um funktionale Abhängigkeiten (FDs), Inklusionsabhängigkeiten (INDs) sowie deren approximative Varianten handeln. Data Profiling besitzt vielfältige Anwendungsmöglichkeiten, darunter fallen die Datenexploration, -bereinigung und -integration. Darüber hinaus sind die erzeugten Metadaten sowohl für den Einsatz in Datenbankmanagementsystemen als auch für das Reverse Engineering von Datenbankschemata hilfreich. Weiterhin finden Methoden des Data Profilings immer häufiger Verwendung in neuartigen Anwendungsfällen, wie z.B. der Analyse von Big Data. Dabei beschreiben die generierten Metadaten die Struktur der vorliegenden Daten, wie diese zu importieren sind, wovon sie handeln und welchen Umfang sie haben. Somit kann das Profiling von Datenbeständen als eine wichtige, vorbereitende Aufgabe für viele Datenanalyse- und Data-Mining-Szenarien angesehen werden. Es ermöglicht die Beurteilung, welche Daten nützlich sein könnten, und erlaubt es zudem, die Eigenschaften eines neuen Datensatzes aufzudecken und zu verstehen. Den Schwerpunkt dieser Arbeit bildet das Single-Column Profiling. Dabei werden sowohl die Auswirkungen als auch die Extraktion von drei der wichtigsten Metadaten einer Spalte untersucht, nämlich ihrer Kardinalität, ihres Namens und ihrer Anzahl an Null-Werten. Die vorliegende Arbeit beginnt mit einer detaillierten experimentellen Studie von zwölf Algorithmen zur Kardinalitätsschätzung. Diese Studie klassifiziert die Algorithmen anhand verschiedener Kriterien und analysiert ihre Effizienz.
Dabei sind die Experimente im Vergleich zu den Originalpublikationen weitaus umfassender und testen die theoretischen Garantien der untersuchten Algorithmen. Unsere Ergebnisse geben Aufschluss über Abwägungen zwischen den Algorithmen und weisen zudem auf die Möglichkeit einer parallelen bzw. verteilten Algorithmenversion hin, wodurch die stetig anwachsende Datenmenge moderner Datensätze bewältigt werden könnte. Anschließend wird ein vollautomatisches, mehrstufiges System vorgestellt, mit dem sich im Falle fehlender, bedeutungsloser oder nicht repräsentativer Kopfzeilen einer Zieltabelle menschenverständliche, repräsentative und konsistente Kopfzeilen ermitteln lassen. Unsere Auswertung auf Wikipedia-Tabellen zeigt, dass 60% der automatisch entdeckten Schemata exakt und vollständig sind. Werden darüber hinaus mehr Schemakandidaten in Betracht gezogen, z.B. die Top-5, erhöht sich dieser Prozentsatz auf 72%. Schließlich wird das Phänomen der Geist- und Schein-FDs formell und experimentell untersucht, welches bei der Entdeckung von FDs auf Datensätzen mit fehlenden Werten auftreten kann. Um die Echtheit einer entdeckten FD effizient abzuschätzen, schlagen wir sowohl eine probabilistische als auch eine Likelihood-basierte Bewertungsmethode vor. Die Wirksamkeit und Effizienz beider Bewertungsmethoden zeigt sich in unseren umfangreichen Experimenten mit realen und halbsynthetischen Datensätzen.
KW - Data profiling
KW - Functional dependencies
KW - Data quality
KW - Schema discovery
KW - Cardinality estimation
KW - Metanome
KW - Missing values
KW - Kardinalitätsschätzung
KW - Datenqualität
KW - Funktionale Abhängigkeiten
KW - Fehlende Werte
KW - Schema-Entdeckung
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-474554
ER -

TY - JOUR
A1 - Koßmann, Jan
A1 - Papenbrock, Thorsten
A1 - Naumann, Felix
T1 - Data dependencies for query optimization
BT - a survey
JF - The VLDB journal
N2 - Effective query optimization is a core feature of any database management system. While most query optimization techniques make use of simple metadata, such as cardinalities and other basic statistics, other optimization techniques are based on more advanced metadata, including data dependencies such as functional, uniqueness, order, or inclusion dependencies. This survey provides an overview, intuitive descriptions, and classifications of query optimization and execution strategies that are enabled by data dependencies. We consider the most popular types of data dependencies and focus on strategies that target the optimization of relational database queries. The survey helps database vendors identify optimization opportunities and helps DBMS researchers find related work and open research questions.
KW - Query optimization
KW - Query execution
KW - Data dependencies
KW - Data profiling
KW - Unique column combinations
KW - Functional dependencies
KW - Order dependencies
KW - Inclusion dependencies
KW - Relational data
KW - SQL
Y1 - 2021
U6 - https://doi.org/10.1007/s00778-021-00676-3
SN - 1066-8888
SN - 0949-877X
VL - 31
IS - 1
SP - 1
EP - 22
PB - Springer
CY - Berlin ; Heidelberg ; New York
ER -

TY - JOUR
A1 - Schmidl, Sebastian
A1 - Papenbrock, Thorsten
T1 - Efficient distributed discovery of bidirectional order dependencies
JF - The VLDB journal
N2 - Bidirectional order dependencies (bODs) capture order relationships between lists of attributes in a relational table. They can express that, for example, sorting books by publication date in ascending order also sorts them by age in descending order. The knowledge about order relationships is useful for many data management tasks, such as query optimization, data cleaning, or consistency checking. Because the bODs of a specific dataset are usually not explicitly given, they need to be discovered. The discovery of all minimal bODs (in set-based canonical form) is, however, a task whose complexity is exponential in the number of attributes, which is why existing bOD discovery algorithms cannot process datasets of practically relevant size in reasonable time. In this paper, we propose the distributed bOD discovery algorithm DISTOD, whose execution time scales with the available hardware. DISTOD is a scalable, robust, and elastic bOD discovery approach that combines efficient pruning techniques for bOD candidates in set-based canonical form with a novel, reactive, and distributed search strategy. Our evaluation on various datasets shows that DISTOD outperforms both single-threaded and distributed state-of-the-art bOD discovery algorithms by up to orders of magnitude; in particular, it can process much larger datasets.
KW - Bidirectional order dependencies
KW - Distributed computing
KW - Actor programming
KW - Parallelization
KW - Data profiling
KW - Dependency discovery
Y1 - 2021
U6 - https://doi.org/10.1007/s00778-021-00683-4
SN - 1066-8888
SN - 0949-877X
VL - 31
IS - 1
SP - 49
EP - 74
PB - Springer
CY - Berlin ; Heidelberg ; New York
ER -

TY - JOUR
A1 - Bläsius, Thomas
A1 - Friedrich, Tobias
A1 - Lischeid, Julius
A1 - Meeks, Kitty
A1 - Schirneck, Friedrich Martin
T1 - Efficiently enumerating hitting sets of hypergraphs arising in data profiling
JF - Journal of computer and system sciences : JCSS
N2 - The transversal hypergraph problem asks to enumerate the minimal hitting sets of a hypergraph. If the solutions have bounded size, Eiter and Gottlob [SICOMP'95] gave an algorithm running in output-polynomial time, but whose space requirement also scales with the output. We improve this to polynomial delay and space. Central to our approach is the extension problem: deciding for a set X of vertices whether it is contained in any minimal hitting set. We show that this is one of the first natural problems to be W[3]-complete. We give an algorithm for the extension problem running in time O(m^{|X|+1} n) and prove a SETH lower bound showing that this is close to optimal. We apply our enumeration method to the problem of discovering minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
KW - Data profiling
KW - Enumeration algorithm
KW - Minimal hitting set
KW - Transversal hypergraph
KW - Unique column combination
KW - W[3]-Completeness
Y1 - 2022
U6 - https://doi.org/10.1016/j.jcss.2021.10.002
SN - 0022-0000
SN - 1090-2724
VL - 124
SP - 192
EP - 213
PB - Elsevier
CY - San Diego
ER -