Document Type
- Article (41)
- Monograph/Edited Volume (11)
- Other (3)
- Conference Proceeding (1)
- Postprint (1)
- Preprint (1)
Is part of the Bibliography
- yes (58)
Keywords
- radiation mechanisms: non-thermal (8)
- gamma rays: galaxies (6)
- galaxies: active (5)
- gamma rays: general (5)
- ISM: supernova remnants (4)
- data profiling (4)
- Datenintegration (data integration) (3)
- duplicate detection (3)
- similarity measures (3)
- Data Integration (2)
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult because, in principle, each pair of data values must be compared for each pair of attributes in the database. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain, which is our driving motivation.
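A minimal sketch of this merge-style comparison, assuming each column's distinct values arrive pre-sorted (e.g., from the DBMS) and are mutually comparable; this illustrates the pruning idea, not the full Spider implementation:

```python
import heapq

def spider(columns):
    """Sketch of Spider's merge phase.
    columns: dict mapping attribute name -> sorted list of distinct values.
    Returns unary INDs as pairs (a, b) with values(a) subset of values(b)."""
    cand = {a: set(columns) - {a} for a in columns}  # possible supersets of a
    iters = {a: iter(vals) for a, vals in columns.items()}
    heap = [(v, a) for a, it in iters.items() if (v := next(it, None)) is not None]
    heapq.heapify(heap)
    while heap:
        value = heap[0][0]
        group = set()
        while heap and heap[0][0] == value:  # all attributes holding this value
            group.add(heapq.heappop(heap)[1])
        for a in group:                      # a's supersets must also hold the value
            cand[a] &= group
        for a in group:
            nxt = next(iters[a], None)
            if nxt is not None:
                heapq.heappush(heap, (nxt, a))
    return [(a, b) for a, bs in cand.items() for b in bs]

# e.g. spider({"A": [1, 2], "B": [1, 2, 3], "C": [2, 4]}) yields [("A", "B")]
```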
Unique column combinations (UCCs) are a fundamental concept in relational databases. They identify entities in the data and support various data management activities. Still, UCCs are usually not explicitly defined and need to be discovered. State-of-the-art data profiling algorithms are able to efficiently discover UCCs in moderately sized datasets, but they tend to fail on large and, in particular, on wide datasets due to run time and memory limitations.

In this paper, we introduce HPIValid, a novel UCC discovery algorithm that implements a faster and more resource-saving search strategy. HPIValid models metadata discovery as a hitting set enumeration problem in hypergraphs. In this way, it combines efficient discovery techniques from data profiling research with the most recent theoretical insights into enumeration algorithms. Our evaluation shows that HPIValid is not only orders of magnitude faster than related work but also has a much smaller memory footprint.
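The hypergraph view can be illustrated with a naive sketch (ours, not HPIValid's optimized enumeration): for every pair of rows, record the set of attributes on which the rows differ; a column combination is a UCC exactly when it intersects ("hits") every such difference set.

```python
from itertools import combinations

def difference_sets(rows):
    """Hyperedges of the hypergraph: attributes on which a row pair disagrees."""
    return {frozenset(i for i, (x, y) in enumerate(zip(r, s)) if x != y)
            for r, s in combinations(rows, 2)}

def is_ucc(columns, edges):
    """A column set is unique iff it hits (intersects) every difference set."""
    return all(columns & e for e in edges)

rows = [("alice", 1, "x"), ("alice", 2, "x"), ("bob", 2, "y")]
edges = difference_sets(rows)
print(is_ucc({0, 1}, edges))  # True: (name, number) pairs are unique
print(is_ucc({1}, edges))     # False: the number 2 repeats
```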
With the advent of big data and data lakes, data are often integrated from multiple sources. Such integrated data are often of poor quality due to inconsistencies, errors, and so forth. One way to check the quality of data is to infer functional dependencies (FDs). However, in many modern applications it might be necessary to extract properties and relationships that are not captured through FDs, due to the necessity of admitting exceptions or of considering similarity rather than equality of data values. Relaxed FDs (RFDs) have been introduced to meet these needs, but their discovery from data adds further complexity to an already complex problem, also due to the necessity of specifying similarity and validity thresholds. We propose Domino, a new discovery algorithm for RFDs that exploits the concept of dominance in order to derive similarity thresholds of attribute values while inferring RFDs. An experimental evaluation on real datasets demonstrates the discovery performance and the effectiveness of the proposed algorithm.
Effective query optimization is a core feature of any database management system. While most query optimization techniques make use of simple metadata, such as cardinalities and other basic statistics, other optimization techniques are based on more advanced metadata including data dependencies, such as functional, uniqueness, order, or inclusion dependencies. This survey provides an overview, intuitive descriptions, and classifications of query optimization and execution strategies that are enabled by data dependencies. We consider the most popular types of data dependencies and focus on optimization strategies that target the optimization of relational database queries. The survey supports database vendors to identify optimization opportunities as well as DBMS researchers to find related work and open research questions.
"In commercial as well as scientific databases, low-quality data are ubiquitous. This can lead to considerable economic problems," explains the 35-year-old computer science professor, pointing, for example, to duplicates. These can arise when a company merges several customer databases but the integration leaves behind multiple records for the same customer. "Finding such duplicate entries is difficult for two reasons: first, the volume of data is often very large; second, entries about the same person can differ slightly," says Prof. Naumann, describing commonly occurring problems. In his inaugural lecture, he will present two approaches to a solution: first, the definition of suitable similarity measures, and second, the use of algorithms that avoid comparing every record with every other one. The lecture will also address fundamental aspects of the comprehensibility, objectivity, completeness, and erroneousness of data.
The task of expert finding is to rank the experts in the search space given a field of expertise as an input query. In this paper, we propose a topic modeling approach for this task. The proposed model uses latent Dirichlet allocation (LDA) to induce probabilistic topics. In the first step of our algorithm, the main topics of a document collection are extracted using LDA. The extracted topics represent the connection between expert candidates and user queries. In the second step, the topics are used as a bridge to find the probability of selecting each candidate for a given query. The candidates are then ranked based on these probabilities. The experimental results on the Text REtrieval Conference (TREC) Enterprise track for 2005 and 2006 show that the proposed topic-based approach outperforms the state-of-the-art profile- and document-based models, which use information retrieval methods to rank experts. Moreover, we demonstrate the superiority of the proposed topic-based approach over improved document-based expert finding systems, which consider additional information such as local context, candidate priors, and query expansion.
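Under a standard conditional-independence assumption, the two-step ranking amounts to marginalizing over the LDA topics t (notation ours, for illustration):

P(ca | q) = Σ_t P(ca | t) · P(t | q)

where ca is an expert candidate and q the query; candidates are ranked by this probability, with the topics acting as the bridge described above.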
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is understanding the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels lead to frustration and to time wasted developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way, we are able to decrypt attribute labels consisting of several unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that it is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
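A minimal sketch of the token-wise expansion step (the vocabulary below is a hypothetical example; the actual approach also mines and ranks competing expansions from the mapped labels):

```python
def decrypt(label, vocab):
    """Expand each few-letter token of a cryptic label using a vocabulary
    mined from attribute-label mappings in existing ETL workflows."""
    return "_".join(vocab.get(token, token) for token in label.split("_"))

# assumed mapping for illustration, using the paper's own example label
vocab = {"UNP": "UNPAID", "PEN": "PENALTY", "INT": "INTEREST"}
print(decrypt("UNP_PEN_INT", vocab))  # UNPAID_PENALTY_INTEREST
```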
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope, defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Furthermore, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
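One illustrative reading of "precision- and recall-inspired" (our simplified notation, not necessarily the paper's exact definitions): let σ_C(R) be the tuples of R selected by condition C, and T the tuples of R whose values are actually included in the referenced relation. Then

covering(C) = |σ_C(R) ∩ T| / |σ_C(R)|   (precision-like)
completeness(C) = |σ_C(R) ∩ T| / |T|    (recall-like)

and the algorithms search for conditions whose scores meet the given quality thresholds.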
The HDI 2014 conference in Freiburg on university-level informatics education (Hochschuldidaktik der Informatik, HDI) was once again organized by the special interest group Informatik und Ausbildung / Didaktik der Informatik (IAD) of the Gesellschaft für Informatik e. V. (GI). It serves informatics lecturers in higher-education degree programs as a forum for information and exchange on new didactic approaches and on education-policy topics in higher education, from the disciplinary perspective of informatics.
HDI 2014 is already the sixth edition of the HDI. For this edition, the special motto "Shaping and Mastering Transitions" ("Gestalten und Meistern von Übergängen") was chosen, placing particular emphasis on the transitions from school to university, from Bachelor's to Master's studies, from studies to doctorate, and from studies to working life.
Duplicate detection is the task of identifying different representations of the same real-world objects in a database. Recent research has considered using relationships among object representations to improve duplicate detection. In the general case, where relationships form a graph, research has mainly focused on duplicate detection quality/effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (the amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computations can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
How inclusive are we?
(2022)
ACM SIGMOD, VLDB, and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide the gender, ethnicity, and other properties of the authors.

We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues and compare women's authorship percentages in single-blind and double-blind venues over the years. We also cross-compare the results obtained in data management with those of other relevant areas in Computer Science.
Data analytics are moving beyond the limits of a single data processing platform. A cross-platform query optimizer is necessary to enable applications to run their tasks over multiple platforms efficiently and in a platform-agnostic manner. For the optimizer to be effective, it must consider data movement costs across different data processing platforms. In this paper, we present the graph-based data movement strategy used by RHEEM, our open-source cross-platform system. In particular, we (i) model the data movement problem as a new graph problem, which we prove to be NP-hard, and (ii) propose a novel graph exploration algorithm, which allows RHEEM to discover multiple hidden opportunities for cross-platform data processing.
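For intuition only, a single-source, single-target simplification of the data movement problem reduces to a cheapest path over conversion edges (the actual problem, with multiple consumers, is the one proven NP-hard above, so this Dijkstra sketch is not the paper's algorithm; formats and costs below are assumptions):

```python
import heapq

def cheapest_conversion(edges, src, dst):
    """edges: dict format -> list of (next_format, cost) conversion steps,
    e.g. {"RDD": [("CsvFile", 4.0)]}. Returns the minimal total cost."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, fmt = heapq.heappop(heap)
        if fmt == dst:
            return d
        if d > dist.get(fmt, float("inf")):
            continue  # stale heap entry
        for nxt, cost in edges.get(fmt, []):
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(heap, (d + cost, nxt))
    return float("inf")
```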
Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
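A toy stand-in for the region-finding phase (Mondrian itself renders the file as an image and clusters the detected elements; this sketch works directly on the cell grid): flood-fill 4-connected non-empty cells into candidate regions.

```python
def find_regions(grid):
    """grid: rectangular 2D list, '' marks an empty cell.
    Groups 4-connected non-empty cells into candidate regions."""
    seen, regions = set(), []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "" and (r, c) not in seen:
                stack, cells = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] != "" and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(cells)
    return regions
```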
RHEEMix in the data jungle
(2020)
Data analytics are moving beyond the limits of a single platform. In this paper, we present the cost-based optimizer of Rheem, an open-source cross-platform system that copes with these new requirements. The optimizer allocates the subtasks of data analytic tasks to the most suitable platforms. Our main contributions are: (i) a mechanism based on graph transformations to explore alternative execution strategies; (ii) a novel graph-based approach to determine efficient data movement plans among subtasks and platforms; and (iii) an efficient plan enumeration algorithm, based on a novel enumeration algebra. We extensively evaluate our optimizer under diverse real tasks. We show that our optimizer can perform tasks more than one order of magnitude faster when using multiple platforms than when using a single platform.
Design and implementation of service-oriented architectures poses a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de-facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is both time-consuming and requires extensive domain expertise.

We propose a deep Siamese neural network capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. With the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluated our method on multiple datasets and compared our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
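A sketch of the general model shape in PyTorch (architecture, feature encoding, and loss are our assumptions for illustration, not the paper's exact network): two weight-sharing encoders embed the two records of a candidate pair, and a contrastive loss pulls duplicate pairs together and pushes non-duplicates apart.

```python
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Weight-sharing encoder applied to both records of a candidate pair."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, a, b):
        return self.net(a), self.net(b)

def contrastive_loss(ea, eb, label, margin=1.0):
    """label = 1 for duplicate pairs, 0 for non-duplicates."""
    d = torch.norm(ea - eb, dim=1)
    return (label * d.pow(2)
            + (1 - label) * torch.clamp(margin - d, min=0).pow(2)).mean()

# toy training step: records pre-encoded as fixed-size numeric feature vectors
model = SiameseEncoder(in_dim=16)
a, b = torch.randn(8, 16), torch.randn(8, 16)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(a, b), labels)
loss.backward()
```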
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
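The Apriori-style level-wise traversal can be sketched as follows (simplified; HCA adds statistics-based pruning on top): a (k+1)-column candidate is generated from two non-unique k-sets sharing a (k-1)-prefix and is kept only if all of its k-subsets are themselves non-unique.

```python
def next_level(non_uniques):
    """non_uniques: set of sorted tuples of column indices known NOT to be unique.
    Returns the Apriori candidates one level up (illustrative sketch)."""
    level = sorted(non_uniques)
    level_set = set(level)
    candidates = set()
    for i, a in enumerate(level):
        for b in level[i + 1:]:
            if a[:-1] != b[:-1]:          # join only sets sharing a (k-1)-prefix
                break
            cand = a + (b[-1],)
            # minimality pruning: every k-subset must itself be non-unique
            if all(cand[:j] + cand[j + 1:] in level_set for j in range(len(cand))):
                candidates.add(cand)
    return candidates

# e.g. next_level({(0,), (1,), (2,)}) -> {(0, 1), (0, 2), (1, 2)}
```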
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
Discovery of high and very high-energy emission from the BL Lacertae object SHBL J001355.9-185406
(2013)
The detection of the high-frequency peaked BL Lac object (HBL) SHBL J001355.9-185406 (z = 0.095) at high (HE; 100 MeV < E < 300 GeV) and very high energies (VHE; E > 100 GeV) with the Fermi Large Area Telescope (LAT) and the High Energy Stereoscopic System (H.E.S.S.) is reported. Dedicated observations were performed with the H.E.S.S. telescopes, leading to a detection at the 5.5σ significance level. The measured flux above 310 GeV is (8.3 ± 1.7(stat) ± 1.7(sys)) × 10^-13 photons cm^-2 s^-1 (about 0.6% of that of the Crab Nebula), and the power-law spectrum has a photon index of Γ = 3.4 ± 0.5(stat) ± 0.2(sys). Using 3.5 years of publicly available Fermi-LAT data, a faint counterpart has been detected in the LAT data at the 5.5σ significance level, with an integrated flux above 300 MeV of (9.3 ± 3.4(stat) ± 0.8(sys)) × 10^-10 photons cm^-2 s^-1 and a photon index of Γ = 1.96 ± 0.20(stat) ± 0.08(sys). X-ray observations with Swift-XRT allow the synchrotron peak energy in νF_ν representation to be located at ~1.0 keV. The broadband spectral energy distribution is modelled with a one-zone synchrotron self-Compton (SSC) model and the optical data by a black-body emission describing the thermal emission of the host galaxy. The derived parameters are typical of HBLs detected at VHE, with a particle-dominated jet.
Context. About 40% of the observation time of the High Energy Stereoscopic System (H.E.S.S.) is dedicated to studying active galactic nuclei (AGN), with the aim of increasing the sample of known extragalactic very-high-energy (VHE, E > 100 GeV) sources and constraining the physical processes at play in potential emitters.
Aims. H.E.S.S. observations of AGN, spanning a period from April 2004 to December 2011, are investigated to constrain their gamma-ray fluxes. Only the 47 sources without significant excess detected at the position of the targets are presented.
Methods. Upper limits on VHE fluxes of the targets were computed and a search for variability was performed on the nightly time scale.
Results. For 41 objects, the flux upper limits we derived are the most constraining reported to date. These constraints at VHE are compared with the flux level expected from extrapolations of Fermi-LAT measurements in the two-year catalog of AGN. The H.E.S.S. upper limits are at least a factor of two lower than the extrapolated Fermi-LAT fluxes for 11 objects. Taking into account the attenuation by the extragalactic background light reduces the tension for all but two of them, suggesting intrinsic curvature in the high-energy spectra of these two AGN.
Conclusions. Compilation efforts led by current VHE instruments are of critical importance for target-selection strategies before the advent of the Cherenkov Telescope Array (CTA).
Context. On March 4, 2013, the Fermi-LAT and AGILE reported a flare from the direction of the Crab nebula in which the high-energy (HE; E > 100 MeV) flux was six times above its quiescent level. Simultaneous observations in other energy bands give us hints about the emission processes during the flare episode and the physics of pulsar wind nebulae in general.
Aims. We search for variability in the emission of the Crab nebula at very high energies (VHE; E > 100 GeV), using contemporaneous data taken with the H.E.S.S. array of Cherenkov telescopes.
Methods. Observational data taken with the H.E.S.S. instrument on five consecutive days during the flare were analysed for the flux and spectral shape of the emission from the Crab nebula. Night-wise light curves are presented with energy thresholds of 1 TeV and 5 TeV.
Results. The observations conducted with H.E.S.S. on March 6 to March 10, 2013 show no significant changes in the flux. They limit the variation in the integral flux above 1 TeV to less than 63% and the integral flux above 5 TeV to less than 78% at a 95% confidence level.
The results of follow-up observations of the TeV gamma-ray source HESS J1640-465 from 2004 to 2011 with the High Energy Stereoscopic System (H.E.S.S.) are reported in this work. The spectrum is well described by an exponential cut-off power law with photon index Γ = 2.11 ± 0.09(stat) ± 0.10(sys) and a cut-off energy of E_c = 6.0 (+2.0/-1.2) TeV. The TeV emission is significantly extended and overlaps with the northwestern part of the shell of the SNR G338.3-0.0. The new H.E.S.S. results, a re-analysis of archival XMM-Newton data, and multiwavelength observations suggest that a significant part of the gamma-ray emission from HESS J1640-465 originates in the supernova remnant shell. In a hadronic scenario, as suggested by the smooth connection of the GeV and TeV spectra, the product of total proton energy and mean target density could be as high as W_p · n_H ~ 4 × 10^52 (d/10 kpc)^2 erg cm^-3.
Aims. Previous observations with the High Energy Stereoscopic System (H.E.S.S.) have revealed an extended very-high-energy (VHE; E > 100 GeV) gamma-ray source, HESS J1834-087, coincident with the supernova remnant (SNR) W41. The origin of the gamma-ray emission was investigated in more detail with the H.E.S.S. array and the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope.
Methods. The gamma-ray data provided by 61 h of observations with H.E.S.S. and four years with the Fermi LAT were analyzed, covering over five decades in energy, from 1.8 GeV up to 30 TeV. The morphology and spectrum of the TeV and GeV sources were studied and multiwavelength data were used to investigate the origin of the gamma-ray emission toward W41.
Results. The TeV source can be modeled as the sum of two components: one point-like and one significantly extended (σ_TeV = 0.17° ± 0.01°), both centered on SNR W41 and exhibiting spectra described by a power law with index Γ_TeV ≃ 2.6. The GeV source detected with Fermi LAT is extended (σ_GeV = 0.15° ± 0.03°) and morphologically matches the VHE emission. Its spectrum can be described by a power-law model with an index Γ_GeV = 2.15 ± 0.12 and smoothly joins the spectrum of the whole TeV source. A break appears in the gamma-ray spectra around 100 GeV. No pulsations were found in the GeV range.
Conclusions. Two main scenarios are proposed to explain the observed emission: a pulsar wind nebula (PWN) or the interaction of SNR W41 with an associated molecular cloud. X-ray observations suggest the presence of a point-like source (a pulsar candidate) near the center of the remnant and nonthermal X-ray diffuse emission that could arise from the possibly associated PWN. The PWN scenario is supported by the compatible positions of the TeV and GeV sources with the putative pulsar. However, the spectral energy distribution from radio to gamma-rays is reproduced by a one-zone leptonic model only if an excess of low-energy electrons is injected following a Maxwellian distribution by a pulsar with a high spin-down power (>10^37 erg s^-1). This additional low-energy component is not needed if we consider that the point-like TeV source is unrelated to the extended GeV and TeV sources. The interacting SNR scenario is supported by the spatial coincidence between the gamma-ray sources, the detection of OH (1720 MHz) maser lines, and the hadronic modeling.
Discovery of very high energy gamma-ray emission from the BL Lacertae object PKS 0301-243 with H.E.S.S.
(2013)
The active galactic nucleus PKS 0301-243 (z = 0.266) is a high-synchrotron-peaked BL Lac object that is detected at high energies (HE, 100 MeV < E < 100 GeV) by Fermi/LAT. This paper reports on the discovery of PKS 0301-243 at very high energies (E > 100 GeV) by the High Energy Stereoscopic System (H.E.S.S.) from observations between September 2009 and December 2011 for a total live time of 34.9 h. Gamma rays above 200 GeV are detected at a significance of 9.4σ. A hint of variability at the 2.5σ level is found. An integral flux I(E > 200 GeV) = (3.3 ± 1.1(stat) ± 0.7(syst)) × 10^-12 ph cm^-2 s^-1 and a photon index Γ = 4.6 ± 0.7(stat) ± 0.2(syst) are measured. Multi-wavelength light curves in HE, X-ray, and optical bands show strong variability, and a minimal variability timescale of eight days is estimated from the optical light curve. A single-zone leptonic synchrotron self-Compton scenario satisfactorily reproduces the multi-wavelength data. In this model, the emitting region is out of equipartition and the jet is particle dominated. Because of its high redshift compared to other sources observed at TeV energies, the very high energy emission from PKS 0301-243 is attenuated by the extragalactic background light (EBL) and the measured spectrum is used to derive an upper limit on the opacity of the EBL.
Composite supernova remnants (SNRs) constitute a small subclass of the remnants of massive stellar explosions where non-thermal radiation is observed from both the expanding shell-like shock front and from a pulsar wind nebula (PWN) located inside of the SNR. These systems represent a unique evolutionary phase of SNRs where observations in the radio, X-ray, and gamma-ray regimes allow the study of the co-evolution of both these energetic phenomena. In this article, we report results from observations of the shell-type SNR G15.4+0.1 performed with the High Energy Stereoscopic System (H.E.S.S.) and XMM-Newton. A compact TeV gamma-ray source, HESS J1818-154, located in the center and contained within the shell of G15.4+0.1 is detected by H.E.S.S. and features a spectrum best represented by a power-law model with a spectral index of -2.3 ± 0.3(stat) ± 0.2(sys) and an integral flux of F(>0.42 TeV) = (0.9 ± 0.3(stat) ± 0.2(sys)) × 10^-12 cm^-2 s^-1. Furthermore, a recent observation with XMM-Newton reveals extended X-ray emission strongly peaked in the center of G15.4+0.1. The X-ray source shows indications of an energy-dependent morphology, featuring a compact core at energies above 4 keV and more extended emission that fills the entire region within the SNR at lower energies. Together, the X-ray and VHE gamma-ray emission provide strong evidence of a PWN located inside the shell of G15.4+0.1, and this SNR can therefore be classified as a composite based on these observations. The radio, X-ray, and gamma-ray emission from the PWN is compatible with a one-zone leptonic model that requires a low average magnetic field inside the emission region. An unambiguous counterpart to the putative pulsar, which is thought to power the PWN, has been detected neither in radio nor in X-ray observations of G15.4+0.1.
Context. Very-high-energy (VHE; E > 100 GeV) gamma-ray emission from blazars inevitably gives rise to electron-positron pair production through the interaction of these gamma-rays with the extragalactic background light (EBL). Depending on the magnetic fields in the proximity of the source, the cascade initiated from pair production can result in either an isotropic halo around an initially beamed source or a magnetically broadened cascade flux.
Aims. Both extended pair-halo (PH) and magnetically broadened cascade (MBC) emission from regions surrounding the blazars 1ES 1101-232, 1ES 0229+200, and PKS 2155-304 were searched for, using VHE gamma-ray data taken with the High Energy Stereoscopic System (H.E.S.S.) and high-energy (HE; 100 MeV < E < 100 GeV) gamma-ray data with the Fermi Large Area Telescope (LAT).
Methods. By comparing the angular distributions of the reconstructed gamma-ray events to the angular profiles calculated from detailed theoretical models, the presence of PH and MBC was investigated.
Results. Upper limits on the extended emission around 1ES 1101-232, 1ES 0229+200, and PKS 2155-304 are found to be at the level of a few per cent of the Crab nebula flux above 1 TeV, depending on the assumed photon index of the cascade emission. Assuming strong extragalactic magnetic field (EGMF) values, >10^-12 G, this limits the production of pair haloes developing from electromagnetic cascades. For weaker magnetic fields, in which electromagnetic cascades would result in MBCs, EGMF strengths in the range (0.3-3) × 10^-15 G were excluded for PKS 2155-304 at the 99% confidence level, under the assumption of a 1 Mpc coherence length.
A deep observation campaign carried out by the High Energy Stereoscopic System (H.E.S.S.) on Centaurus A enabled the discovery of gamma-rays from the blazar 1ES 1312-423, 2 degrees away from the radio galaxy. With a differential flux at 1 TeV of φ(1 TeV) = (1.9 ± 0.6(stat) ± 0.4(sys)) × 10^-13 cm^-2 s^-1 TeV^-1, corresponding to 0.5 per cent of the Crab nebula differential flux, and a spectral index Γ = 2.9 ± 0.5(stat) ± 0.2(sys), 1ES 1312-423 is one of the faintest sources ever detected in the very high energy (E > 100 GeV) extragalactic sky. A careful analysis using three and a half years of Fermi Large Area Telescope (Fermi-LAT) data allows the discovery at high energies (E > 100 MeV) of a hard-spectrum (Γ = 1.4 ± 0.4(stat) ± 0.2(sys)) source coincident with 1ES 1312-423. Radio, optical, UV, and X-ray observations complete the spectral energy distribution of this blazar, now covering 16 decades in energy. The emission is successfully fitted with a synchrotron self-Compton model for the non-thermal component, combined with a blackbody spectrum for the optical emission from the host galaxy.
Search for TeV Gamma-ray emission from GRB 100621A, an extremely bright GRB in X-rays, with HESS
(2014)
The long gamma-ray burst (GRB) 100621A, at the time the brightest X-ray transient ever detected by Swift-XRT in the 0.3-10 keV range, has been observed with the H.E.S.S. imaging air Cherenkov telescope array, sensitive to gamma radiation in the very-high-energy (VHE, >100 GeV) regime. Due to its relatively small redshift of z ~ 0.5, its favourable position in the southern sky, and the relatively short follow-up time (<700 s after the satellite trigger) of the H.E.S.S. observations, this GRB could be within the sensitivity reach of the H.E.S.S. instrument. The analysis of the H.E.S.S. data shows no indication of emission and yields an integral flux upper limit above ~380 GeV of 4.2 × 10^-12 cm^-2 s^-1 (95% confidence level), assuming a simple Band function extension model. A comparison to a spectral-temporal model, normalised to the prompt flux at sub-MeV energies, constrains the existence of a temporally extended and strong additional hard power law, as has been observed in the other bright X-ray GRB 130427A. A comparison between the H.E.S.S. upper limit and the contemporaneous energy output in X-rays constrains the ratio between the X-ray and VHE gamma-ray fluxes to be greater than 0.4. This value is an important quantity for modelling the afterglow and can constrain leptonic emission scenarios, where leptons are responsible for the X-ray emission and might produce VHE gamma rays.
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
Design and implementation of service-oriented architectures poses a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
TeV gamma-ray observations of the young synchrotron-dominated SNRs G1.9+0.3 and G330.2+1.0 with HESS
(2014)
The non-thermal nature of the X-ray emission from the shell-type supernova remnants (SNRs) G1.9+0.3 and G330.2+1.0 is an indication of intense particle acceleration in the shock fronts of both objects. This suggests that the SNRs are prime candidates for very-high-energy (VHE; E > 0.1 TeV) gamma-ray observations. G1.9+0.3, recently established as the youngest known SNR in the Galaxy, also offers a unique opportunity to study the earliest stages of SNR evolution in the VHE domain. The purpose of this work is to probe the level of VHE gamma-ray emission from both SNRs and use this to constrain their physical properties. Observations were conducted with the H.E.S.S. (High Energy Stereoscopic System) Cherenkov telescope array over a more than six-year period spanning 2004-2010. The obtained data have effective livetimes of 67 h for G1.9+0.3 and 16 h for G330.2+1.0. The data are analysed in the context of the multiwavelength observations currently available and in the framework of both leptonic and hadronic particle acceleration scenarios. No significant gamma-ray signal from G1.9+0.3 or G330.2+1.0 was detected. Upper limits (99 per cent confidence level) to the TeV flux from G1.9+0.3 and G330.2+1.0 for the assumed spectral index Γ = 2.5 were set at 5.6 × 10^-13 cm^-2 s^-1 above 0.26 TeV and 3.2 × 10^-12 cm^-2 s^-1 above 0.38 TeV, respectively. In a one-zone leptonic scenario, these upper limits imply lower limits on the interior magnetic field of B_G1.9 ≳ 12 μG for G1.9+0.3 and B_G330 ≳ 8 μG for G330.2+1.0. In a hadronic scenario, the low ambient densities and the large distances to the SNRs result in very low predicted fluxes, for which the H.E.S.S. upper limits are not constraining.
The gamma-ray spectrum of the low-frequency-peaked BL Lac (LBL) object AP Librae is studied, following the discovery of very-high-energy (VHE; E > 100 GeV) gamma-ray emission up to the TeV range by the H.E.S.S. experiment. This makes AP Librae one of the few VHE emitters of the LBL type. The measured spectrum yields a flux of (8.8 ± 1.5(stat) ± 1.8(sys)) × 10^-12 cm^-2 s^-1 above 130 GeV and a spectral index of Γ = 2.65 ± 0.19(stat) ± 0.20(sys). This study also makes use of Fermi-LAT observations in the high-energy (HE, E > 100 MeV) range, providing the longest continuous light curve (5 years) ever published on this source. The source underwent a flaring event between MJD 56306 and 56376 in the HE range, with a flux increase of a factor of 3.5 in the 14-day-bin light curve and no significant variation in spectral shape with respect to the low-flux state. While the H.E.S.S. and (low state) Fermi-LAT fluxes are in good agreement where they overlap, a spectral curvature between the steep VHE spectrum and the Fermi-LAT spectrum is observed. The maximum of the gamma-ray emission in the spectral energy distribution is located below the GeV energy range.
Axionlike particles (ALPs) are hypothetical light (sub-eV) bosons predicted in some extensions of the Standard Model of particle physics. In astrophysical environments comprising high-energy gamma rays and turbulent magnetic fields, the existence of ALPs can modify the energy spectrum of the gamma rays for a sufficiently large coupling between ALPs and photons. This modification would take the form of an irregular behavior of the energy spectrum in a limited energy range. Data from the H.E.S.S. observations of the distant BL Lac object PKS 2155-304 (z = 0.116) are used to derive upper limits at the 95% C.L. on the strength of the ALP coupling to photons, g_γa < 2.1 × 10^-11 GeV^-1, for an ALP mass between 15 and 60 neV. The results depend on assumptions on the magnetic field around the source, which are chosen conservatively. The derived constraints apply to both light pseudoscalar and scalar bosons that couple to the electromagnetic field.
HESS J1640-465 - an exceptionally luminous TeV gamma-ray supernova remnant (vol 439, pg 2828, 2014)
(2014)
Any system at play in a data-driven project has a fundamental requirement: the ability to load data. The de-facto standard format to distribute and consume raw data is CSV. Yet, the plain-text and flexible nature of this format makes such files often difficult to parse and their content difficult to load correctly, requiring cumbersome data preparation steps. We propose a benchmark to assess the robustness of systems in loading data from non-standard CSV formats and with structural inconsistencies. First, we formalize a model to describe the issues that affect real-world files and use it to derive a systematic "pollution" process to generate dialects for any given grammar. Our benchmark leverages the pollution framework for the CSV format. To guide pollution, we have surveyed thousands of real-world, publicly available CSV files, recording the problems we encountered. We demonstrate the applicability of our benchmark by testing and scoring 16 different systems: popular CSV parsing frameworks, relational database tools, spreadsheet systems, and a data visualization tool.
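As a flavor of what a single pollution might look like (a hypothetical, much-simplified example; the benchmark derives pollutions systematically from a grammar rather than at random):

```python
import random

def pollute_delimiter(lines, p=0.2, alt=";", seed=0):
    """Swap the delimiter on a random subset of rows, simulating one of the
    structural inconsistencies observed in real-world CSV files."""
    rng = random.Random(seed)
    return [line.replace(",", alt) if rng.random() < p else line
            for line in lines]

clean = ["id,name,city", "1,Ann,Berlin", "2,Bob,Potsdam"]
print(pollute_delimiter(clean, p=0.5))
```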