Refine
Year of publication
Document Type
- Article (41)
- Monograph/Edited Volume (10)
- Other (3)
- Postprint (1)
- Preprint (1)
Language
- English (56)
Is part of the Bibliography
- yes (56)
Keywords
- radiation mechanisms: non-thermal (8)
- gamma rays: galaxies (6)
- galaxies: active (5)
- gamma rays: general (5)
- ISM: supernova remnants (4)
- data profiling (4)
- Datenintegration (3)
- duplicate detection (3)
- similarity measures (3)
- Data Integration (2)
- Forschungskolleg (2)
- Functional dependencies (2)
- Hasso Plattner Institute (2)
- Hasso-Plattner-Institut (2)
- ISM: individual objects: G338.3-0.0 (2)
- Klausurtagung (2)
- Query optimization (2)
- Service-oriented Systems Engineering (2)
- acceleration of particles (2)
- data matching (2)
- data quality (2)
- data wrangling (2)
- entity resolution (2)
- galaxies: jets (2)
- record linkage (2)
- Address matching (1)
- Air showers (1)
- Approximation algorithms (1)
- Apriori (1)
- Association Rule Mining (1)
- Assoziationsregeln (1)
- BL Lacertae objects: general (1)
- BL Lacertae objects: individual: 1ES 1312-423 (1)
- BL Lacertae objects: individual: AP Librae (1)
- BL Lacertae objects: individual: PKS 0301-243 (1)
- BL Lacertae objects: individual: PKS 2155-304 (1)
- BL Lacertae objects: individual: SHBL J001355.9-185406 (1)
- BL Lacertae objects: individual: 1ES 0229+200 (1)
- BL Lacertae objects: individual: 1ES 1101-232 (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Big Data (1)
- Cherenkov Telescopes (1)
- Complexity theory (1)
- Conditional Inclusion Dependency (1)
- Cross-platform (1)
- Data Dependency (1)
- Data Profiling (1)
- Data Quality (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data processing (1)
- Data profiling (1)
- Data profiling application (1)
- Database (1)
- Datenabhängigkeiten (1)
- Datenanalyse (1)
- Datenqualität (1)
- Design concepts (1)
- Distributed (1)
- Duplicate Detection (1)
- Duplikaterkennung (1)
- Entity resolution (1)
- Erkennen von Meta-Daten (1)
- Extract-Transform-Load (ETL) (1)
- Foreign key (1)
- Ground based gamma ray astronomy (1)
- ISM: clouds (1)
- ISM: individual objects: Crab nebula (1)
- ISM: individual objects: HESS J1832-093 (1)
- ISM: individual objects: SNR G1.9+0.3 (1)
- ISM: individual objects: SNR G22.7-0.2 (1)
- ISM: individual objects: SNR G330.2+1.0 (1)
- ISM: magnetic fields (1)
- Inclusion dependencies (1)
- Information Extraction (1)
- Information Systems (1)
- Informationsextraktion (1)
- Informationssysteme (1)
- Lakes (1)
- Link Discovery (1)
- Link-Entdeckung (1)
- Linked Data (1)
- Linked Open Data (1)
- Metadata Discovery (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Next generation Cherenkov telescopes (1)
- Order dependencies (1)
- Ph.D. Retreat (1)
- Ph.D. retreat (1)
- Polystore (1)
- Primary key (1)
- Query execution (1)
- Record linkage (1)
- Relational data (1)
- Research School (1)
- SQL (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Semantics (1)
- TeV gamma-ray astronomy (1)
- Unique column combinations (1)
- Wikipedia (1)
- X-rays: binaries (1)
- X-rays: general (1)
- X-rays: individuals: G15.4+0.1 (1)
- X-rays: stars (1)
- address normalization (1)
- address parsing (1)
- apriori (1)
- astroparticle physics (1)
- binaries: general (1)
- clustering (1)
- conditional functional dependencies (1)
- contract (1)
- corporate takeovers (1)
- cosmic rays (1)
- cross-platform (1)
- data cleaning (1)
- data cleansing (1)
- data integration (1)
- data preparation (1)
- data processing (1)
- databases (1)
- deduplication (1)
- dependency discovery (1)
- eindeutig (1)
- errata, addenda (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- functional dependencies (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- galaxies: individual (M 87) (1)
- galaxies: magnetic fields (1)
- galaxies: nuclei (1)
- gamma rays: ISM (1)
- gamma rays: general (HESS J0632+057, VER J0633+057) (1)
- gamma rays: stars (1)
- gamma-ray burst: individual: GRB 100621A (1)
- gamma-rays: ISM (1)
- gamma-rays: galaxies (1)
- gamma-rays: general (1)
- geocoding (1)
- geographic information systems (1)
- globular clusters: general (1)
- infrared: diffuse background (1)
- intergalactic medium (1)
- interpretable machine learning (1)
- key discovery (1)
- law (1)
- management (1)
- matching dependencies (1)
- medical malpractice (1)
- metadata discovery (1)
- metadata quality (1)
- methods: observational (1)
- metric learning (1)
- networks (1)
- neural (1)
- polystore (1)
- pulsars: general (1)
- pulsars: individual: PSR B1259-63 (1)
- quasars: individual: PKS 1510-089 (1)
- query optimization (1)
- random forest (1)
- relativistic processes (1)
- research school (1)
- schema discovery (1)
- service-oriented systems engineering (1)
- similarity learning (1)
- stars: individual: LS 2883 (1)
- supernovae: individual: HESS J1818-154 (1)
- tort law (1)
- transfer learning (1)
- unique (1)
Data analytics is moving beyond the limits of a single data processing platform. A cross-platform query optimizer is necessary to enable applications to run their tasks over multiple platforms efficiently and in a platform-agnostic manner. For the optimizer to be effective, it must consider data movement costs across different data processing platforms. In this paper, we present the graph-based data movement strategy used by RHEEM, our open-source cross-platform system. In particular, we (i) model the data movement problem as a new graph problem, which we prove to be NP-hard, and (ii) propose a novel graph exploration algorithm that allows RHEEM to discover multiple hidden opportunities for cross-platform data processing.
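For a single data consumer, finding the cheapest chain of channel conversions reduces to a shortest-path search, which the following sketch illustrates; the general multi-consumer variant that the paper proves NP-hard does not. All formats and costs here are hypothetical, and this is not RHEEM's actual algorithm:

```python
import heapq

def cheapest_conversion(edges, src, dst):
    """Cheapest chain of data-format conversions from src to dst.

    edges: dict mapping a format to a list of (neighbour_format, cost)
    pairs, e.g. spilling a Spark RDD to an HDFS file. Plain Dijkstra;
    illustrative only.
    """
    dist, queue = {src: 0.0}, [(0.0, src)]
    while queue:
        d, fmt = heapq.heappop(queue)
        if fmt == dst:
            return d
        if d > dist.get(fmt, float("inf")):
            continue  # stale queue entry
        for nxt, cost in edges.get(fmt, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")  # dst unreachable

# Hypothetical conversion costs between execution channels.
channels = {
    "spark_rdd":   [("hdfs_file", 5.0), ("java_stream", 2.0)],
    "java_stream": [("hdfs_file", 4.0)],
    "hdfs_file":   [("postgres_table", 7.0)],
}
print(cheapest_conversion(channels, "spark_rdd", "postgres_table"))  # 12.0
```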
Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread use makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One common problem practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
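As a toy analogue of the region-detection phase, one can group the non-empty cells of a grid into connected components and report their bounding boxes. Mondrian itself works on a rendered image with a clustering algorithm; the grid, the 4-neighbour connectivity, and the cell values below are illustrative assumptions:

```python
def find_regions(grid):
    """Group non-empty cells into regions via connected components
    (4-neighbour flood fill) and return each region's bounding box
    as (top_row, left_col, bottom_row, right_col). Toy sketch only."""
    rows, cols = len(grid), len(grid[0]) if grid else 0
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None or (r, c) in seen:
                continue
            stack, cells = [(r, c)], []
            seen.add((r, c))
            while stack:  # flood fill one component
                y, x = stack.pop()
                cells.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] is not None and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            ys, xs = [y for y, _ in cells], [x for _, x in cells]
            regions.append((min(ys), min(xs), max(ys), max(xs)))
    return regions

sheet = [  # a hypothetical "multiregion" sheet: three regions, empty gaps
    ["id",    "name", None, "qty"],
    [1,       "a",    None, 3],
    [None,    None,   None, None],
    ["total", 2,      None, None],
]
print(find_regions(sheet))  # [(0, 0, 1, 1), (0, 3, 1, 3), (3, 0, 3, 1)]
```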
RHEEMix in the data jungle
(2020)
Data analytics is moving beyond the limits of a single platform. In this paper, we present the cost-based optimizer of Rheem, an open-source cross-platform system that copes with these new requirements. The optimizer allocates the subtasks of data analytic tasks to the most suitable platforms. Our main contributions are: (i) a mechanism based on graph transformations to explore alternative execution strategies; (ii) a novel graph-based approach to determine efficient data movement plans among subtasks and platforms; and (iii) an efficient plan enumeration algorithm, based on a novel enumeration algebra. We extensively evaluate our optimizer under diverse real tasks. We show that our optimizer can perform tasks more than one order of magnitude faster when using multiple platforms than when using a single platform.
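The chain special case of the platform-assignment problem can be sketched as a small dynamic program. This is only an illustrative special case: Rheem's enumeration algebra covers general plan shapes, and the operators, platforms, and cost estimates below are hypothetical:

```python
def best_plan(op_costs, move_cost):
    """Pick a platform for each operator in a linear pipeline so that
    execution costs plus inter-platform data-movement costs are minimal.

    op_costs:  list of {platform: cost} dicts, one per operator.
    move_cost: cost charged whenever consecutive operators run on
               different platforms (assumed uniform here).
    """
    # best[p] = cheapest cost of the prefix ending with the current op on p
    best = dict(op_costs[0])
    for costs in op_costs[1:]:
        best = {
            p: c + min(best[q] + (0 if q == p else move_cost) for q in best)
            for p, c in costs.items()
        }
    return min(best.values())

pipeline = [  # hypothetical cost estimates per operator and platform
    {"spark": 10, "java": 3},   # small input: single-node Java wins
    {"spark": 4,  "java": 9},
    {"spark": 2,  "java": 8},
]
print(best_plan(pipeline, move_cost=2))  # 11: start on java, switch to spark
```

With `move_cost=2` the optimizer tolerates one platform switch because the cheaper per-operator costs outweigh the movement penalty; raising the movement cost pushes the plan toward a single platform.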
Design and implementation of service-oriented architectures raises a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This is manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering that proposes to design complex software solutions as a collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. retreat of the Research School gives each member the opportunity to present the current state of his or her research and to outline a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is both time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. With the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluated our method on multiple datasets and compared our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
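The shared-encoder ("Siamese") structure, in which the same embedding is applied to both records and followed by a symmetric distance, can be sketched with a fixed character-trigram embedding standing in for the learned deep network. The records and the trigram featurization below are illustrative assumptions, not the paper's model:

```python
from collections import Counter
from math import sqrt

def trigrams(s):
    """Embed a record string as a bag of padded character trigrams.
    In the paper this shared encoder is a learned deep network."""
    s = f"  {s.lower()} "
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def similarity(a, b):
    """Cosine similarity of the shared embeddings of two records.
    Symmetric by construction, like a Siamese network's output."""
    va, vb = trigrams(a), trigrams(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

pair_same = similarity("Jon Smith, 12 Main St", "John Smith, 12 Main Street")
pair_diff = similarity("Jon Smith, 12 Main St", "Maria Gonzalez, 4 Oak Ave")
print(pair_same > pair_diff)  # True: likely duplicates score higher
```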
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and Apriori-based algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-Gordian, combines the advantages of GORDIAN and our new algorithm HCA, and significantly outperforms all previous work in many situations.
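An Apriori-style, bottom-up discovery of minimal unique column combinations can be sketched as follows. This uses brute-force verification with superset pruning only; the candidate-generation and statistics-based optimizations of HCA, and GORDIAN's prefix-tree traversal, are not shown:

```python
def discover_uccs(rows, num_cols):
    """Bottom-up discovery of minimal unique column combinations.
    A column set is unique iff its value projection has no duplicates;
    supersets of a unique set are pruned because their uniqueness is
    implied. Illustrative sketch, not HCA or GORDIAN."""
    minimal, level = [], [(c,) for c in range(num_cols)]
    while level:
        non_unique = []
        for combo in level:
            projection = [tuple(row[c] for c in combo) for row in rows]
            if len(set(projection)) == len(projection):
                minimal.append(combo)       # unique and minimal
            else:
                non_unique.append(combo)    # keep expanding
        # Candidate generation: join non-unique sets one level up.
        level = sorted({tuple(sorted(set(a) | set(b)))
                        for a in non_unique for b in non_unique
                        if len(set(a) | set(b)) == len(a) + 1})
        # Apriori pruning: drop supersets of already-found uniques.
        level = [c for c in level
                 if not any(set(u) <= set(c) for u in minimal)]
    return minimal

table = [  # columns: first, last, dept (hypothetical data)
    ("ann", "lee", "hr"),
    ("bob", "lee", "it"),
    ("ann", "kim", "it"),
]
print(discover_uccs(table, 3))  # [(0, 1), (0, 2), (1, 2)]
```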
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
Discovery of high and very high-energy emission from the BL Lacertae object SHBL J001355.9-185406
(2013)
The detection of the high-frequency peaked BL Lac object (HBL) SHBL J001355.9-185406 (z = 0.095) at high energies (HE; 100 MeV < E < 300 GeV) and very high energies (VHE; E > 100 GeV) with the Fermi Large Area Telescope (LAT) and the High Energy Stereoscopic System (H.E.S.S.) is reported. Dedicated observations were performed with the H.E.S.S. telescopes, leading to a detection at the 5.5 sigma significance level. The measured flux above 310 GeV is (8.3 ± 1.7 (stat) ± 1.7 (sys)) × 10^-13 photons cm^-2 s^-1 (about 0.6% of that of the Crab Nebula), and the power-law spectrum has a photon index of Γ = 3.4 ± 0.5 (stat) ± 0.2 (sys). Using 3.5 years of publicly available Fermi-LAT data, a faint counterpart has been detected in the LAT data at the 5.5 sigma significance level, with an integrated flux above 300 MeV of (9.3 ± 3.4 (stat) ± 0.8 (sys)) × 10^-10 photons cm^-2 s^-1 and a photon index of Γ = 1.96 ± 0.20 (stat) ± 0.08 (sys). X-ray observations with Swift-XRT allow the synchrotron peak energy in the νF_ν representation to be located at ~1.0 keV. The broadband spectral energy distribution is modelled with a one-zone synchrotron self-Compton (SSC) model, and the optical data with a black-body component describing the thermal emission of the host galaxy. The derived parameters are typical of HBLs detected at VHE, with a particle-dominated jet.
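As a worked example of how the quoted quantities relate, the differential normalisation of a power-law photon spectrum can be recovered from an integral flux and a photon index. This is a standard textbook relation applied to the quoted central values, not a calculation from the paper, and errors are ignored:

```python
def powerlaw_norm(integral_flux, gamma, e_thresh):
    """Differential normalisation N0 at E = e_thresh of a power-law
    photon spectrum dN/dE = N0 * (E / e_thresh)**(-gamma), given the
    integral flux above e_thresh:
        F(>e_thresh) = N0 * e_thresh / (gamma - 1)   for gamma > 1.
    """
    if gamma <= 1:
        raise ValueError("integral flux diverges for gamma <= 1")
    return integral_flux * (gamma - 1) / e_thresh

# Central values quoted above: F(>310 GeV) = 8.3e-13 cm^-2 s^-1, Gamma = 3.4
n0 = powerlaw_norm(8.3e-13, 3.4, 310.0)  # photons cm^-2 s^-1 GeV^-1
print(f"{n0:.2e}")  # 6.43e-15
```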
Context. About 40% of the observation time of the High Energy Stereoscopic System (H.E.S.S.) is dedicated to studying active galactic nuclei (AGN), with the aim of increasing the sample of known extragalactic very-high-energy (VHE, E > 100 GeV) sources and constraining the physical processes at play in potential emitters.
Aims. H.E.S.S. observations of AGN, spanning a period from April 2004 to December 2011, are investigated to constrain their gamma-ray fluxes. Only the 47 sources without significant excess detected at the position of the targets are presented.
Methods. Upper limits on VHE fluxes of the targets were computed and a search for variability was performed on the nightly time scale.
Results. For 41 objects, the flux upper limits we derived are the most constraining reported to date. These constraints at VHE are compared with the flux level expected from extrapolations of Fermi-LAT measurements in the two-year catalog of AGN. The H.E.S.S. upper limits are at least a factor of two lower than the extrapolated Fermi-LAT fluxes for 11 objects. Taking into account the attenuation by the extragalactic background light reduces the tension for all but two of them, suggesting intrinsic curvature in the high-energy spectra of these two AGN.
Conclusions. Compilation efforts led by current VHE instruments are of critical importance for target-selection strategies before the advent of the Cherenkov Telescope Array (CTA).
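The comparison described in the results can be sketched as a power-law extrapolation of an HE integral flux to a VHE threshold, optionally attenuated by an EBL optical depth. All numbers and the tau value below are hypothetical, not taken from the paper:

```python
from math import exp

def extrapolate_flux(f_above_e0, gamma, e0, e1, tau=0.0):
    """Extrapolate an integral power-law flux F(>e0), measured at HE,
    to a VHE threshold e1 using F(>E) proportional to E**(1 - gamma)
    (valid for an unbroken power law with gamma > 1), optionally
    attenuated by EBL absorption exp(-tau). tau is model-dependent."""
    return f_above_e0 * (e1 / e0) ** (1.0 - gamma) * exp(-tau)

# Hypothetical HE measurement: F(>1 GeV) = 1e-9 cm^-2 s^-1, Gamma = 2.0
f_vhe_intrinsic = extrapolate_flux(1e-9, 2.0, 1.0, 200.0)           # no EBL
f_vhe_absorbed = extrapolate_flux(1e-9, 2.0, 1.0, 200.0, tau=1.0)   # with EBL
print(f_vhe_intrinsic, f_vhe_absorbed)
```

An upper limit well below the unabsorbed extrapolation, but compatible with the absorbed one, is the kind of tension the abstract attributes to EBL attenuation rather than intrinsic spectral curvature.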