The notion of coherence relations is widely accepted in general, but concrete proposals differ considerably on how the relations should be motivated, which relations are to be assumed, and how they should be defined. This paper takes a "bottom-up" perspective by assessing the contribution made by linguistic signals (connectives), using insights from the relevant literature as well as verification by practical text annotation. We work primarily with the German language here and focus on the realm of contrast. Thus, we suggest a new inventory of contrastive connective functions and discuss their relationship to contrastive coherence relations that have been proposed in earlier work.
Handling manufacturing and aging faults with software-based techniques in tiny embedded systems
(2017)
Non-volatile memory occupies a large portion of the chip area in an embedded system. Such memories are prone to manufacturing faults, retention faults, and aging faults. This paper presents a single software-based technique that handles all of these fault types in tiny embedded systems without the need for hardware support, which is beneficial for low-cost embedded systems with simple memory architectures. A software infrastructure and a flow are presented that demonstrate how the technique is applied to fault handling both right after manufacturing and in the field. Moreover, a full implementation for an MSP430 microcontroller is presented, along with a discussion of the performance, overhead, and reliability impacts.
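The paper targets firmware on an MSP430; purely as an illustration of the general idea (checking non-volatile memory segments against stored reference values and remapping segments that fail), the following Python sketch models such a flow. The class, the CRC-based check, and the spare-segment remapping are hypothetical assumptions for this sketch, not the paper's actual implementation.

```python
import zlib

class SegmentStore:
    """Toy model of non-volatile memory split into fixed-size segments, with a
    few spare segments used to replace faulty ones (hypothetical scheme)."""

    def __init__(self, segments, spares):
        self.segments = list(segments)                    # payload segments (bytes)
        self.spares = list(spares)                        # spare segments
        self.checks = [zlib.crc32(s) for s in segments]   # reference checksums stored after manufacturing
        self.remap = {}                                    # logical index -> spare index

    def read(self, idx):
        return self.spares[self.remap[idx]] if idx in self.remap else self.segments[idx]

    def scrub(self):
        """In-the-field pass: verify every segment and remap those whose checksum
        no longer matches (a real system would restore data from a redundant copy)."""
        faulty = []
        for idx, ref in enumerate(self.checks):
            if idx not in self.remap and zlib.crc32(self.segments[idx]) != ref:
                if len(self.remap) < len(self.spares):
                    self.remap[idx] = len(self.remap)
                faulty.append(idx)
        return faulty

# usage: inject a bit error into segment 1 and let the scrub pass detect and remap it
store = SegmentStore([b"boot", b"conf", b"data"], [b"\x00" * 4, b"\x00" * 4])
store.segments[1] = b"cOnf"
print(store.scrub())   # -> [1]
```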
This paper describes architectural extensions for a dynamically scheduled processor so that it can be used in three different operation modes, ranging from high performance to high reliability. With minor hardware extensions of the control path, the resources of the superscalar data path can be used either for high-performance execution, fail-safe operation, or fault-tolerant operation. This makes the processor architecture a very good candidate for applications with dynamically changing reliability requirements, e.g. automotive applications. The paper reports the hardware overhead of the extensions and investigates the performance penalties introduced by the fail-safe and fault-tolerant modes. Furthermore, a comprehensive fault simulation was carried out in order to investigate the fault coverage of the proposed approach.
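The modes themselves are hardware features of the proposed processor; purely as a conceptual illustration of the difference between fail-safe operation (duplicate and compare, stop on mismatch) and fault-tolerant operation (triplicate and vote), the following Python sketch models redundant execution in software. Everything here is an invented model for explanation, not the proposed microarchitecture.

```python
from collections import Counter

def run_fail_safe(op, *args):
    """Fail-safe (conceptual): execute twice and stop on mismatch."""
    a, b = op(*args), op(*args)
    if a != b:
        raise RuntimeError("mismatch detected - fail-safe stop")
    return a

def run_fault_tolerant(op, *args):
    """Fault-tolerant (conceptual): execute three times and majority-vote."""
    results = [op(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority - uncorrectable fault")
    return value

# high-performance mode would simply execute op(*args) once, using all
# superscalar resources for throughput instead of redundancy
print(run_fault_tolerant(lambda x, y: x + y, 2, 3))  # -> 5
```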
Answer Set Programming (ASP) is a successful rule-based formalism for modeling and solving knowledge-intense combinatorial (optimization) problems. Despite its success in both academia and industry, open challenges remain, such as automatic source code optimization and software engineering support, because a problem encoded in ASP may not have the desired solving performance compared to an equivalent representation. Motivated by these two challenges, this paper makes three main contributions. First, we propose a development process towards a methodology for implementing ASP programs, staying faithful to existing methods. Second, we present ASP encodings that serve as the basis of this development process. Third, we demonstrate the use of ASP to reverse the standard solving process: knowing the answer sets in advance, together with desired strong-equivalence properties, we exhaustively reconstruct ASP programs if they exist. This paper was originally motivated by the search for propositional formulas (if they exist) that represent the semantics of a new aggregate operator, in particular a parity aggregate. This aggregate comes as an improvement over the already existing parity (XOR) constraints from xorro, which lack expressiveness, even though these constraints fit perfectly for reasoning modes like sampling or model counting. To this end, this extended version covers the fundamentals of parity constraints as well as the xorro system. Hence, we delve a little deeper into the examples and the proposed methodology for parity constraints. Finally, we discuss our results by showing the only available representation that satisfies the various properties of the classical logical XOR operator and that is also consistent with the semantics of parity constraints from xorro.
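As a rough illustration of "reversing" the solving process (searching over candidate programs for one whose answer sets match a given target), the sketch below enumerates small normal programs over two atoms using the clingo Python API, assuming clingo is installed. The candidate rule pool and the target answer sets are made up for this example and are not taken from the paper.

```python
from itertools import combinations
import clingo  # assumes the clingo Python package is installed

CANDIDATE_RULES = [
    "a.", "b.", "a :- not b.", "b :- not a.", "a :- b.", "b :- a.",
]

def answer_sets(program):
    """Return the answer sets of a program as a set of frozensets of atom names."""
    ctl = clingo.Control(["0"])          # "0" = enumerate all models
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    models = set()
    ctl.solve(on_model=lambda m: models.add(
        frozenset(str(s) for s in m.symbols(atoms=True))))
    return models

# target semantics (invented for the example): exactly the answer sets {a} and {b}
target = {frozenset({"a"}), frozenset({"b"})}

# exhaustively reconstruct candidate programs whose answer sets match the target
for size in range(1, len(CANDIDATE_RULES) + 1):
    for rules in combinations(CANDIDATE_RULES, size):
        if answer_sets("\n".join(rules)) == target:
            print("matching program:", rules)
```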
Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. However, the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods such as Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.
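The abstract does not spell out how state shell interweaving works; as a deliberately simplified illustration of the general idea of turning a stateful rule set into a stateless one, the sketch below expands every rule that admits new flows into an explicit forward rule plus a reverse rule for the return traffic. The rule fields and the expansion scheme are invented for this example.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class Rule:
    src: str
    dst: str
    dport: str
    action: str = "allow"
    state: Optional[str] = None   # e.g. "new" or "established"

def make_stateless(rules):
    """Simplified illustration only: for every rule admitting new flows, emit the
    forward rule plus an explicit reverse rule for its return traffic, so the
    expanded rule set no longer needs connection tracking."""
    out = []
    for r in rules:
        if r.state in (None, "new"):
            out.append(replace(r, state=None))
            # return traffic of this allowed flow, made explicit
            out.append(Rule(src=r.dst, dst=r.src, dport="any", action=r.action))
        # generic "established" rules are subsumed by the generated reverse rules
    return out

stateful = [
    Rule("10.0.0.0/8", "0.0.0.0/0", "443", state="new"),
    Rule("0.0.0.0/0", "10.0.0.0/8", "any", state="established"),
]
for rule in make_stateless(stateful):
    print(rule)
```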
The Internet can be considered the most important infrastructure for modern society and businesses. A loss of Internet connectivity has strong negative financial impacts for businesses and economies. Therefore, assessing Internet connectivity, in particular beyond one's own premises and area of direct control, is of growing importance in the face of potential failures, accidents, and malicious attacks. This paper presents CORIA, a software framework for the easy analysis of connectivity risks based on large network graphs. It provides researchers, risk analysts, network managers, and security consultants with a tool to assess an organization's connectivity and path options through the Internet backbone, including a user-friendly and insightful visual representation of results. CORIA is flexibly extensible in terms of novel data sets, graph metrics, and risk scores that enable further use cases. The performance of CORIA is evaluated in several experiments on the Internet graph and on further randomly generated networks.
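CORIA itself is a larger framework; as a minimal sketch of the kind of graph-metric analysis it performs, the snippet below computes standard connectivity metrics with networkx on a toy AS-level graph. The graph, the choice of metrics, and the combined risk score are illustrative assumptions, not CORIA's actual metrics or scoring.

```python
import networkx as nx

# toy AS-level topology (illustrative only)
g = nx.Graph([("AS1", "AS2"), ("AS2", "AS3"), ("AS3", "AS4"),
              ("AS2", "AS4"), ("AS4", "AS5")])

# standard per-node connectivity metrics often used in such analyses
degree = nx.degree_centrality(g)
betweenness = nx.betweenness_centrality(g)

# hypothetical risk score: nodes that many shortest paths depend on score higher
risk = {n: 0.5 * degree[n] + 0.5 * betweenness[n] for n in g}
for node, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{node}: risk {score:.2f}")
```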
This doctoral project aims to improve the reliability of data storage and to increase the storage density of newly developed memories (emerging memories) with multi-level memory cells. To this end, codes for the detection of unidirectional errors are analyzed, modified, and newly developed so that they can be applied within these new memories. The focus lies on so-called Berger codes and m-out-of-n codes. Since multi-level memory cells no longer operate in binary but with multiple levels, previously used codes can no longer be applied as-is and must be adapted accordingly. Based on Berger codes and m-out-of-n codes, this work derives new codes that are able to protect data in multi-valued systems as well.
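For context on the classical binary case that the thesis generalizes: a Berger code appends, as its check symbol, the number of zero bits in the data word, which makes all unidirectional errors detectable. Below is a minimal Python sketch of binary Berger encoding and checking; the multi-level generalization is the subject of the thesis and is not shown here.

```python
def berger_encode(bits):
    """Append the count of zero bits as the check symbol (binary Berger code)."""
    zeros = bits.count(0)
    width = len(bits).bit_length()               # bits needed to represent the count
    check = [int(b) for b in format(zeros, f"0{width}b")]
    return bits + check

def berger_check(codeword, data_len):
    data, check = codeword[:data_len], codeword[data_len:]
    return data.count(0) == int("".join(map(str, check)), 2)

word = [1, 0, 1, 1, 0, 0, 1, 0]
cw = berger_encode(word)
print(berger_check(cw, len(word)))      # True

# a unidirectional 1->0 error lowers the data word but raises the zero count: detected
cw_err = cw.copy()
cw_err[0] = 0
print(berger_check(cw_err, len(word)))  # False
```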
Supervised machine learning to assess methane emissions of a dairy building with natural ventilation
(2020)
A reliable quantification of greenhouse gas emissions is a basis for the development of adequate mitigation measures. Protocols for emission measurements and data analysis approaches to extrapolate to accurate annual emission values are a substantial prerequisite in this context. We systematically analyzed the benefit of supervised machine learning methods to project methane emissions from a naturally ventilated cattle building with a concrete solid floor and manure scraper located in Northern Germany. We took into account approximately 40 weeks of hourly emission measurements and compared model predictions using eight regression approaches, 27 different sampling scenarios, and four measures of model accuracy. Data normalization was applied based on median and quartile range. A correlation analysis was performed to evaluate the influence of individual features. This indicated only a very weak linear relation between the methane emission and the features that are typically used to predict methane emission values of naturally ventilated barns. It further highlighted the added value of including day-time and squared ambient temperature as features. The error of the predicted emission values was in general below 10%. The results from Gaussian processes, ordinary multilinear regression, and neural networks were least robust. More robust results were obtained with multilinear regression with regularization, support vector machines, and particularly the ensemble methods gradient boosting and random forest. The latter had the added value of being rather insensitive to the normalization procedure. In the case of multilinear regression, the removal of variables without a significant linear relation (i.e., keeping only the day-time component) also led to robust modeling results. We concluded that measurement protocols with 7 days and six measurement periods can be considered sufficient to model methane emissions from the dairy barn with solid floor and manure scraper, particularly when the periods are distributed over the year with a preference for transition periods. Features should be normalized according to median and quartile range and must be carefully selected depending on the modeling approach.
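As a minimal sketch of the modeling setup described above (normalization by median and quartile range followed by a random forest regressor), assuming a recent scikit-learn installation and using synthetic stand-in data rather than the paper's measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

rng = np.random.default_rng(0)

# synthetic stand-in features: ambient temperature, squared temperature, hour of day, wind speed
n = 1000
temp = rng.uniform(-5, 30, n)
hour = rng.integers(0, 24, n)
wind = rng.uniform(0, 10, n)
X = np.column_stack([temp, temp ** 2, hour, wind])
y = 50 + 0.8 * temp + 0.02 * temp ** 2 + 2 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RobustScaler centers on the median and scales by the interquartile range
model = make_pipeline(RobustScaler(), RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_train, y_train)

print(f"MAPE: {mean_absolute_percentage_error(y_test, model.predict(X_test)):.1%}")
```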
Answer Set Programming (ASP) is a well-known paradigm of declarative programming with roots in logic programming and non-monotonic reasoning. Similar to other closely related problem-solving technologies, such as SAT/SMT, QBF, planning, and scheduling, advancements in ASP solving are assessed in competition events. In this paper, we report on the design and results of the Sixth ASP Competition, which was jointly organized by the University of Calabria (Italy), Aalto University (Finland), and the University of Genoa (Italy), in affiliation with the 13th International Conference on Logic Programming and Non-Monotonic Reasoning. This edition maintained some of the design decisions introduced in 2014, e.g., the conception of sub-tracks, the scoring scheme, and the adherence to a fixed modeling language in order to push the adoption of the ASP-Core-2 standard. On the other hand, it also featured some novelties, such as a benchmark selection stage classifying instances according to their empirical hardness, and a “Marathon” track where the top-performing systems are given more time for solving hard benchmarks.