TY - JOUR
A1 - Momtazi, Saeedeh
A1 - Naumann, Felix
T1 - Topic modeling for expert finding using latent Dirichlet allocation
JF - Wiley interdisciplinary reviews : Data mining and knowledge discovery
N2 - The task of expert finding is to rank the experts in the search space given a field of expertise as an input query. In this paper, we propose a topic modeling approach for this task. The proposed model uses latent Dirichlet allocation (LDA) to induce probabilistic topics. In the first step of our algorithm, the main topics of a document collection are extracted using LDA. The extracted topics represent the connection between expert candidates and user queries. In the second step, the topics are used as a bridge to find the probability of selecting each candidate for a given query. The candidates are then ranked based on these probabilities. The experimental results on the Text REtrieval Conference (TREC) Enterprise track for 2005 and 2006 show that the proposed topic-based approach outperforms the state-of-the-art profile- and document-based models, which use information retrieval methods to rank experts. Moreover, we demonstrate the superiority of the proposed topic-based approach over improved document-based expert finding systems, which consider additional information such as local context, candidate prior, and query expansion.
Y1 - 2013
U6 - https://doi.org/10.1002/widm.1102
SN - 1942-4787
VL - 3
IS - 5
SP - 346
EP - 353
PB - Wiley
CY - San Francisco
ER -
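The second step described in the abstract above, using topics as a bridge between queries and candidates, can be illustrated with a minimal sketch: a candidate's selection probability for a query is approximated as the sum over topics of p(candidate | topic) * p(topic | query). The Python sketch below uses made-up toy distributions and hypothetical candidate names; it illustrates only the bridging idea, not the paper's estimation procedure, which induces these distributions with LDA over the document collection.

# Illustrative sketch only: rank expert candidates by bridging query and
# candidates through topics, i.e. p(candidate | query) ~= sum over topics t
# of p(candidate | t) * p(t | query). The toy probabilities below are made up;
# in the paper these distributions are induced with LDA over the collection.

def rank_candidates(p_topic_given_query, p_candidate_given_topic):
    """Return candidates sorted by their bridged selection probability."""
    scores = {}
    for candidate, topic_dist in p_candidate_given_topic.items():
        scores[candidate] = sum(
            topic_dist.get(topic, 0.0) * p_q
            for topic, p_q in p_topic_given_query.items()
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: two topics, three candidates.
p_topic_given_query = {"t0": 0.7, "t1": 0.3}
p_candidate_given_topic = {
    "alice": {"t0": 0.6, "t1": 0.1},
    "bob":   {"t0": 0.2, "t1": 0.8},
    "carol": {"t0": 0.2, "t1": 0.1},
}
print(rank_candidates(p_topic_given_query, p_candidate_given_topic))
# [('alice', 0.45), ('bob', 0.38), ('carol', 0.17)]

In this sketch the ranking reduces to sorting bridged scores; the paper's contribution lies in how the topic distributions are estimated and evaluated on the TREC Enterprise track.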
TY - BOOK
A1 - Abedjan, Ziawasch
A1 - Golab, Lukasz
A1 - Naumann, Felix
A1 - Papenbrock, Thorsten
T1 - Data Profiling
T3 - Synthesis lectures on data management, 52
Y1 - 2019
SN - 978-1-68173-446-0
PB - Morgan & Claypool Publishers
CY - San Rafael
ER -

TY - JOUR
A1 - Bonifati, Angela
A1 - Mior, Michael J.
A1 - Naumann, Felix
A1 - Noack, Nele Sina
T1 - How inclusive are we?
BT - an analysis of gender diversity in database venues
JF - SIGMOD record / Association for Computing Machinery, Special Interest Group on Management of Data
N2 - ACM SIGMOD, VLDB, and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide the gender, ethnicity, and other properties of the authors. We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and compare women's authorship percentages in single-blind and double-blind venues over the years. We also cross-compare the results obtained in data management with those of other relevant areas in Computer Science.
Y1 - 2022
U6 - https://doi.org/10.1145/3516431.3516438
SN - 0163-5808
SN - 1943-5835
VL - 50
IS - 4
SP - 30
EP - 35
PB - Association for Computing Machinery
CY - New York
ER -

TY - JOUR
A1 - Vitagliano, Gerardo
A1 - Jiang, Lan
A1 - Naumann, Felix
T1 - Detecting layout templates in complex multiregion files
JF - Proceedings of the VLDB Endowment
N2 - Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
Y1 - 2022
U6 - https://doi.org/10.14778/3494124.3494145
SN - 2150-8097
VL - 15
IS - 3
SP - 646
EP - 658
PB - Association for Computing Machinery
CY - New York
ER -
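The three-phase Mondrian pipeline outlined in the abstract above (detect elements, cluster them into regions, compare layout graphs) can be approximated, for its first two phases, by a much simpler heuristic: treat every maximal group of adjacent non-empty cells as one region. The Python sketch below implements only that simplification with a hypothetical detect_regions helper; the actual approach renders each file as an image and applies a clustering algorithm, and the graph-based template matching of the third phase is omitted here entirely.

# Simplified, illustrative sketch of multiregion detection in a spreadsheet-like
# grid: non-empty cells that touch each other (8-connectivity) are grouped into
# regions, and each region is summarized by its bounding box. This is only a
# stand-in for Mondrian's first two phases, not the paper's implementation.
from collections import deque

def detect_regions(grid):
    """grid: list of rows; empty cells are None or ''. Returns bounding boxes."""
    rows, cols = len(grid), len(grid[0]) if grid else 0
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] in (None, "") or (r, c) in seen:
                continue
            # BFS over connected non-empty cells to collect one region.
            queue, cells = deque([(r, c)]), []
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                cells.append((cr, cc))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and (nr, nc) not in seen
                                and grid[nr][nc] not in (None, "")):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
            top = min(x for x, _ in cells)
            left = min(y for _, y in cells)
            bottom = max(x for x, _ in cells)
            right = max(y for _, y in cells)
            regions.append((top, left, bottom, right))
    return regions

# Two regions separated by an empty row.
grid = [["id", "name"], [1, "a"], [None, None], ["total", 2]]
print(detect_regions(grid))  # [(0, 0, 1, 1), (3, 0, 3, 1)]

In the toy grid, the empty row separates the header/data block from the "total" line, yielding two bounding boxes that a later step could compare across files.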
TY - JOUR
A1 - Loster, Michael
A1 - Koumarelas, Ioannis
A1 - Naumann, Felix
T1 - Knowledge transfer for entity resolution with siamese neural networks
JF - ACM journal of data and information quality
N2 - The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is both time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network that is capable of learning a similarity measure tailored to the characteristics of a particular dataset. By relying on deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluate our method on multiple datasets and compare our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
KW - Entity resolution
KW - duplicate detection
KW - transfer learning
KW - neural networks
KW - metric learning
KW - similarity learning
KW - data quality
Y1 - 2021
U6 - https://doi.org/10.1145/3410157
SN - 1936-1955
SN - 1936-1963
VL - 13
IS - 1
PB - Association for Computing Machinery
CY - New York
ER -

TY - JOUR
A1 - Simonini, Giovanni
A1 - Zecchini, Luca
A1 - Bergamaschi, Sonia
A1 - Naumann, Felix
T1 - Entity resolution on-demand
JF - Proceedings of the VLDB Endowment
N2 - Entity Resolution (ER) aims to identify and merge records that refer to the same real-world entity. ER is typically employed as an expensive cleaning step on the entire data before consuming it. Yet, determining which entities are useful once cleaned depends solely on the user's application, which may need only a fraction of them. For instance, when dealing with Web data, we would like to be able to filter the entities of interest gathered from multiple sources without cleaning the entire, continuously growing data. Similarly, when querying data lakes, we want to transform data on demand and return the results in a timely manner, a fundamental requirement of ELT (Extract-Load-Transform) pipelines. We propose BrewER, a framework to evaluate SQL SP (select-project) queries on dirty data while progressively returning results as if they were issued on cleaned data. BrewER tries to focus the cleaning effort on one entity at a time, following an ORDER BY predicate. Thus, it inherently supports top-k and stop-and-resume execution. For a wide range of applications, a significant amount of resources can be saved. We exhaustively evaluate and show the efficacy of BrewER on four real-world datasets.
Y1 - 2022
U6 - https://doi.org/10.14778/3523210.3523226
SN - 2150-8097
VL - 15
IS - 7
SP - 1506
EP - 1518
PB - Association for Computing Machinery
CY - New York
ER -
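To make the on-demand idea in the abstract above concrete, here is a minimal sketch in the spirit of BrewER, not the authors' implementation: dirty records sit in a priority queue keyed by the ORDER BY attribute, and a group of candidate duplicates is merged only when one of its records reaches the head of the queue, so cleaned results stream out progressively and a top-k consumer can stop early. The record layout, blocking function, and merge function below are hypothetical.

# Illustrative sketch of on-demand, ORDER BY-driven entity resolution in the
# spirit of BrewER (not the authors' implementation): records are kept in a
# priority queue keyed by the ORDER BY attribute, and a candidate group of
# duplicates is merged only when one of its records reaches the head of the
# queue. Results are yielded progressively, so a top-k consumer can stop early.
import heapq

def resolve_on_demand(records, key, block_of, merge):
    """records: list of dicts; block_of: record -> block id (candidate duplicates);
    merge: list of records -> cleaned entity. Yields entities ordered by `key`."""
    blocks = {}
    for rec in records:
        blocks.setdefault(block_of(rec), []).append(rec)
    heap = [(rec[key], i, rec) for i, rec in enumerate(records)]
    heapq.heapify(heap)
    emitted = set()
    while heap:
        _, _, rec = heapq.heappop(heap)
        bid = block_of(rec)
        if bid in emitted:
            continue  # this duplicate's entity was already cleaned and returned
        emitted.add(bid)
        yield merge(blocks[bid])  # clean exactly one entity, on demand

# Hypothetical usage: ascending ORDER BY price, stop after the first result (top-1).
records = [
    {"id": 1, "name": "iPhone 13", "price": 700},
    {"id": 2, "name": "iphone13",  "price": 650},
    {"id": 3, "name": "Pixel 6",   "price": 600},
]
cheapest = next(resolve_on_demand(
    records, key="price",
    block_of=lambda r: r["name"].lower().replace(" ", ""),
    merge=lambda recs: {"name": recs[0]["name"],
                        "price": min(r["price"] for r in recs)},
))
print(cheapest)  # {'name': 'Pixel 6', 'price': 600}

The emission order stays correct in this toy example because the merged entity takes the minimum price of its duplicates, which matches the ascending ORDER BY; the actual system handles the general case, as well as stop-and-resume, far more carefully.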