Institut für Informatik und Computational Science
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability can help to manage such heterogeneous devices. Interoperability is the ability of different types of systems to work together smoothly. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is further subdivided into syntactic and semantic interoperability. Semantic interoperability concerns the meaning of data and a common understanding of vocabulary, e.g., with the help of dictionaries, taxonomies, and ontologies. To achieve interoperability as a whole, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach instead, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing. Its advantage is that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency and to be robust against a lossy or failed Internet connection.
We view the management of IoT devices from the perspective of network configuration management. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. The name of the MYNO framework is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These device descriptions capture the device capabilities; they are based on the oneM2M Base Ontology and formalized using Semantic Web standards.
The novel aspect of our approach is using an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable, and reusable.
The concept of a Virtual Device was introduced and implemented based on the semantic device descriptions. A Virtual Device aggregates the capabilities of all devices at the edge network and thereby contributes to scalability. Thus, it is possible to control all devices via a single RPC call.
The model-driven NETCONF Web-Client is generated automatically from the YANG model, which in turn is generated by the bridge from the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls, and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios as well as event configuration and triggering.
The semantic approach results in increased memory overhead. Therefore, we evaluated CBOR and RDF HDT for optimization of ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and RDF HDT is a promising candidate but is still a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
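As an illustration only (not the thesis's optimized format), the following sketch parses a hypothetical JSON-LD device description with rdflib, which is also the library mentioned for the bridge; it assumes rdflib ≥ 6 with built-in JSON-LD support, and the IRIs and property names are invented stand-ins for oneM2M Base Ontology terms.

```python
# A minimal sketch: load a hypothetical JSON-LD device description into an
# RDF graph and list its triples. Vocabulary below is illustrative only.
import json
from rdflib import Graph

device_description = {
    "@context": {"onem2m": "http://www.onem2m.org/ontology/Base_Ontology/"},
    "@id": "urn:dev:temp-node-1",
    "@type": "onem2m:Device",
    "onem2m:hasService": [
        {"@id": "urn:dev:temp-node-1#readTemperature", "@type": "onem2m:Service"}
    ],
}

g = Graph()
g.parse(data=json.dumps(device_description), format="json-ld")

for subj, pred, obj in g:
    print(subj, pred, obj)
```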
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on the constrained CC2538dk device over 6LoWPAN. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to bring firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose adding a slicing feature to better support constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
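To make the slicing idea concrete, here is a small publisher-side sketch using paho-mqtt (brokers do not offer this feature today, so the publisher slices); the topic scheme, the 64-byte slice size, and the broker address are illustrative assumptions, not part of MUP.

```python
# A minimal sketch of slice-by-slice firmware transfer over MQTT.
import paho.mqtt.client as mqtt

MAX_SLICE = 64  # maximum packet payload reported by the constrained device

def publish_firmware(client: mqtt.Client, topic: str, firmware: bytes) -> None:
    """Publish a firmware image slice by slice, tagging each slice by index."""
    total = (len(firmware) + MAX_SLICE - 1) // MAX_SLICE
    for i in range(total):
        chunk = firmware[i * MAX_SLICE:(i + 1) * MAX_SLICE]
        # Encode slice index and total count in the topic; a real protocol
        # could instead use MQTT 5 user properties or a payload header.
        client.publish(f"{topic}/{i}/{total}", chunk, qos=1)

client = mqtt.Client()
client.connect("localhost", 1883)
firmware_image = open("firmware.bin", "rb").read()
publish_firmware(client, "myno/update/device-1", firmware_image)
```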
For the performance and scalability evaluation of the MYNO framework, we set up the High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that processing ontology-based descriptions on a Raspberry Pi 3B with RDFLib is challenging in terms of computational power. Nevertheless, it is feasible because it has to be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
In summary, with the MYNO framework we could show that the semantic approach on constrained devices is feasible in the IoT.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.
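The matching idea can be illustrated with a small, self-contained sketch: a Declare-style "response" relation is derived from the log and compared against the model's constraints to prune candidate event-to-activity mappings. The relation extraction below is deliberately simplified (first occurrences only) and is not the authors' algorithm; all names are invented.

```python
# Sketch: prune event-to-activity mappings using a "response" constraint.
from itertools import permutations

# response(a, b): whenever a occurs, b occurs later in the same trace
model_response = {("Receive order", "Ship goods")}
log_traces = [["e1", "e2"], ["e1", "e2"]]

def log_response(traces):
    """Event pairs (x, y) such that the first x in a trace is followed by y."""
    events = {e for t in traces for e in t}
    rel = set()
    for x, y in permutations(events, 2):
        if all(y in t[t.index(x) + 1:] for t in traces if x in t):
            rel.add((x, y))
    return rel

log_rel = log_response(log_traces)
events = ["e1", "e2"]
activities = ["Receive order", "Ship goods"]

candidates = []
for p in permutations(activities):
    mapping = dict(zip(events, p))
    # keep a mapping only if every log constraint has a model counterpart
    if all((mapping[x], mapping[y]) in model_response for (x, y) in log_rel):
        candidates.append(mapping)

print(candidates)  # [{'e1': 'Receive order', 'e2': 'Ship goods'}]
```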
Novel two-dimensional tactile displays enable blind users to not only get access to the textual but also to the graphical content of a graphical user interface. Due to the higher amount of information that can be presented in parallel, orientation and exploration can be more complex. In this paper we present the HyperBraille system, which consists of a pin-matrix device as well as a graphical screen reader providing the user with appropriate presentation and interaction possibilities. To allow for a detailed analysis of bimanual interaction strategies on a pin-matrix device, we conducted two user studies with a total of 12 blind people. The task was to fill in .pdf forms on the pin-matrix device by using different input methods, namely gestures, built-in hardware buttons as well as a conventional PC keyboard. The forms were presented in a semigraphic view type that not only contains Braille but also tactile widgets in a spatial arrangement. While completion time and error rate partly depended on the chosen input method, the usage of special reading strategies seemed to be independent of it. A direct comparison of the system and a conventional assistive technology (screen reader with single-line Braille device) showed that interaction on the pin-matrix device can be very efficient if the user is trained. The two-dimensional output can improve access to .pdf forms with insufficient accessibility as the mapping of input controls and the corresponding labels can be supported by a spatial presentation.
Automated storage and retrieval systems are principal components of modern production and warehouse facilities. In particular, automated guided vehicles nowadays substitute human-operated pallet trucks in transporting production materials between storage locations and assembly stations. While low-level control systems take care of navigating such driverless vehicles along programmed routes and avoid collisions even under unforeseen circumstances, in the common case of multiple vehicles sharing the same operation area, the problem remains how to set up routes such that a collection of transport tasks is accomplished most effectively. We address this prevalent problem in the context of car assembly at Mercedes-Benz Ludwigsfelde GmbH, a large-scale producer of commercial vehicles, where routes for automated guided vehicles used in the production process have traditionally been hand-coded by human engineers. Such adhoc methods may suffice as long as a running production process remains in place, while any change in the factory layout or production targets necessitates tedious manual reconfiguration, not to mention the missing portability between different production plants. Unlike this, we propose a declarative approach based on Answer Set Programming to optimize the routes taken by automated guided vehicles for accomplishing transport tasks. The advantages include a transparent and executable problem formalization, provable optimality of routes relative to objective criteria, as well as elaboration tolerance towards particular factory layouts and production targets. Moreover, we demonstrate that our approach is efficient enough to deal with the transport tasks evolving in realistic production processes at the car factory of Mercedes-Benz Ludwigsfelde GmbH.
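The flavor of such a declarative formulation can be sketched with the clingo Python API (assuming the clingo module is installed); the toy assignment encoding and cost facts below are illustrative and not the production encoding used at the plant.

```python
# A toy ASP sketch: assign transport tasks to vehicles at minimum total cost.
import clingo

program = """
vehicle(v1;v2).  task(t1;t2;t3).
% illustrative travel costs for serving a task with a vehicle
cost(t1,v1,3). cost(t1,v2,1).
cost(t2,v1,2). cost(t2,v2,4).
cost(t3,v1,1). cost(t3,v2,2).
% every task is assigned to exactly one vehicle
1 { assign(T,V) : vehicle(V) } 1 :- task(T).
#minimize { C,T : assign(T,V), cost(T,V,C) }.
#show assign/2.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
# prints each improving model; the last one is an optimal assignment
ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))
```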
Research publications and data nowadays should be publicly available on the internet and, theoretically, usable for everyone to develop further research, products, or services. The long-term accessibility of research data is, therefore, fundamental in the economy of the research production process. However, the availability of data is not sufficient by itself; their quality must also be verifiable. Measures to ensure reuse and reproducibility need to include the entire research life cycle, from the experimental design to the generation of data, quality control, statistical analysis, interpretation, and validation of the results. Hence, high-quality records, particularly for providing a string of documents for the verifiable origin of data, are essential elements that can act as a certificate for potential users (customers). These records also improve the traceability and transparency of data and processes, therefore improving the reliability of results. Standards for data acquisition, analysis, and documentation have been fostered in the last decade, driven by grassroots initiatives of researchers and organizations such as the Research Data Alliance (RDA). Nevertheless, what is still largely missing in academic life science research are agreed procedures for complex routine research workflows. Here, well-crafted documentation such as standard operating procedures (SOPs) offers clear directions and instructions specifically designed to avoid deviations, which is an absolute necessity for reproducibility. Therefore, this paper provides a standardized workflow that explains step by step how to write an SOP to be used as a starting point for appropriate research documentation.
Institutional education confronts autistic learners with manifold and specific obstacles. This holds especially in the context of inclusion, whose relevance is underlined not least by the United Nations Convention on the Rights of Persons with Disabilities.
This thesis discusses numerous learning-related particularities in the context of autism and reveals discrepancies with institutional teaching concepts that are not always sufficiently appropriate. A central thesis is that the unusually intense attention autistic people devote to their special interests can be used to facilitate learning with externally prescribed content. Building on this, solution approaches are discussed, resulting in a novel concept for a digital, multi-device learning game.
A key challenge in designing game-based learning lies in adequately embedding learning content in a captivating narrative context. Using the example of exercises on the emotional interpretation of facial expressions, which are employed for learning socio-emotional skills particularly within therapy concepts for autism, an appropriate narrative is presented that allows these very specific learning contents to be integrated with little disruption.
The effects of the individual design elements are examined using a prototypically developed learning game. Building on this, a quantitative study shows the game's good acceptance and usability and, above all, confirms the comprehensibility of the narrative and the game elements. A further focus lies on the minimally invasive investigation of possible disruptions of the game experience caused by switching between different devices, for which an innovative measurement procedure was developed.
In conclusion, this thesis illuminates the significance and the limits of game-based approaches for autistic learners. A large part of the presented concepts can be transferred to other kinds of learning scenarios. The technical framework developed for realizing narrative learning paths is likewise prepared to be used for further learning scenarios, particularly in institutional contexts.
The soft error rate (SER) due to heavy-ion irradiation of a clock tree is investigated in this paper. A method for clock tree SER prediction is developed, which employs a dedicated soft error analysis tool to characterize the single-event transient (SET) sensitivities of clock inverters and other commercial tools to calculate the SER through fault-injection simulations. A test circuit including a flip-flop chain and clock tree in a 65 nm CMOS technology is developed through the automatic ASIC design flow. This circuit is analyzed with the developed method to calculate its clock tree SER. In addition, this circuit is implemented in a 65 nm test chip and irradiated by heavy ions to measure its SER resulting from the SETs in the clock tree. The experimental and calculation results of this case study present good correlation, which verifies the effectiveness of the developed method.
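The expected-value core of such an SER prediction can be written down in a few lines; the sketch below uses hypothetical per-inverter SET cross-sections and capture probabilities (in a real flow these would come from the characterization and fault-injection steps described above) and an illustrative ion flux.

```python
# Schematic SER estimate: sum over clock inverters of
# (SET cross-section) x (probability the transient is latched) x (ion flux).
inverters = {
    # node: (SET cross-section in cm^2, probability the SET is captured)
    "clk_buf_0": (1.2e-9, 0.30),
    "clk_buf_1": (0.8e-9, 0.22),
    "clk_buf_2": (0.5e-9, 0.10),
}
ion_flux = 1.0e5  # particles / (cm^2 * s), illustrative

ser = sum(sigma * p_capture for sigma, p_capture in inverters.values()) * ion_flux
print(f"predicted clock-tree SER: {ser:.3e} errors/s")
```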
The aim of our project, design space exploration with answer set programming, is to develop a general framework based on Answer Set Programming (ASP) that finds valid solutions to the system design problem and simultaneously performs Design Space Exploration (DSE) to find the most favorable alternatives. We leverage recent developments in ASP solving that allow for a tight integration of background theories to create a holistic framework for effective DSE.
The Potsdam answer set solving collection, or Potassco for short, bundles various tools implementing and/or applying answer set programming. The article at hand succeeds an earlier description of the Potassco project published in Gebser et al. (AI Commun 24(2):107-124, 2011). Hence, we concentrate in what follows on the major features of the most recent, fifth generation of the ASP system clingo and highlight some recent resulting application systems.
Answer Set Programming enjoys increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al. Commun ACM 54(12):92–103, (2011)] is seeing a rapid proliferation in academia and industry due to its easy and flexible way to model and solve knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. They are represented as answer sets, each being a set of atoms representing a solution. The declarative approach of ASP allows a user to concentrate on a problem's specification rather than the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques since complex problems can be expressed in a succinct and elaboration tolerant way. This is eased by the tuning of ASP's modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al. AI Mag 37(3):53–68, 2016; Falkner et al. Industrial applications of answer set programming. Künstliche Intelligenz (2018)].
This installment of Research for Practice provides curated reading guides to technology for underserved communities and to new developments in personal fabrication. First, Tawanna Dillahunt describes design considerations and technology for underserved and impoverished communities. Designing for the more than 1.6 billion impoverished individuals worldwide requires special consideration of community needs, constraints, and context. Her selections span protocols for poor-quality communication networks, community-driven content generation, and resource and public service discovery. Second, Stefanie Mueller and Patrick Baudisch provide an overview of recent advances in personal fabrication (for example, 3D printers).
In many degree programs, the often heterogeneous prior knowledge in the introductory phase leads to a lack of motivation because students are over- or under-challenged. This problem also arises in basic music theory education at universities. Motivation can be increased by employing elements familiar from entertainment contexts; the use of such elements is referred to as gamification.
The aim of this thesis is to analyze, using basic music theory education as a case study, whether learning opportunities can be supported by a gamified interactive prototype of a learning environment. To this end, the following research question is posed: To what extent does gamification affect learners' motivation to engage with the topic of (musical) functional analysis?
To answer the research question, a systematic, theory-driven procedural model for the gamification of learning environments was first developed and applied. The resulting prototype was then stripped of all game design elements and compared with the gamified variant in an experimental study with two independent groups.
The study showed that gamifying a learning application according to the developed procedural model does in principle have the potential to positively influence some aspects of the user experience (UX). In particular, gamification had positive effects on joy of use and immersion. However, the magnitude of the observed effects fell clearly short of the expectations derived from various theories of motivation.
Gamification therefore appears particularly promising in non-university contexts in which the focus lies on increasing joy of use or immersion. Beyond that, the study yields new insights into the emotional effect of gamification and into a systematic procedure for gamifying learning applications.
Future research could build on these insights by examining the emotional effect of gamification and its influence on motivation in more detail. In addition, it should consider gamification from a decision-theoretic perspective and develop analysis methods for deciding whether using gamification to increase motivation is expedient in a specific application. Using the developed procedural model, it may also be worthwhile to investigate which factors are decisive overall for the success of a gamification measure in educational contexts. The findings of such an investigation could contribute decisively to improving and validating the procedural model.
Over the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons these solvers are so fast are structural properties of instances that are exploited by the solvers' internals. This thesis deals with the well-studied structural property treewidth, which measures how close an instance is to being a tree. In fact, many problems are solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG) and which allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
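For readers unfamiliar with the parameter, the following sketch shows how a treewidth upper bound of a CNF formula's primal graph can be computed with networkx's min-degree heuristic (an approximation, not the exact treewidth, and not the solver described above); the small formula is made up.

```python
# Build the primal graph of a CNF formula and bound its treewidth.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# clauses over variables 1..5, given as lists of literals
cnf = [[1, -2, 3], [-3, 4], [4, 5], [-1, 5]]

primal = nx.Graph()
for clause in cnf:
    vars_in_clause = [abs(lit) for lit in clause]
    primal.add_nodes_from(vars_in_clause)
    # variables are adjacent iff they occur together in some clause
    for i, u in enumerate(vars_in_clause):
        for v in vars_in_clause[i + 1:]:
            primal.add_edge(u, v)

width, decomposition = treewidth_min_degree(primal)
print("treewidth upper bound:", width)
```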
Discriminative Models for Biometric Identification using Micro- and Macro-Movements of the Eyes
(2021)
Human visual perception is an active process. Eye movements either alternate between fixations and saccades or follow a smooth pursuit movement in case of moving targets. Besides these macroscopic gaze patterns, the eyes perform involuntary micro-movements during fixations which are commonly categorized into micro-saccades, drift and tremor. Eye movements are frequently studied in cognitive psychology, because they reflect a complex interplay of perception, attention and oculomotor control.
A common insight of psychological research is that macro-movements are highly individual. Inspired by this finding, there has been a considerable amount of prior research on oculomotoric biometric identification. However, the accuracy of known approaches is too low and the time needed for identification is too long for any practical application. This thesis explores discriminative models for the task of biometric identification.
Discriminative models optimize a quality measure of the predictions and are usually superior to generative approaches in discriminative tasks. However, using discriminative models requires selecting a suitable form of data representation for sequential eye gaze data, e.g., by engineering features or constructing a sequence kernel, and the performance of the classification model strongly depends on this representation. We study two fundamentally different ways of representing eye gaze within a discriminative framework. In the first part of this thesis, we explore the integration of data and psychological background knowledge in the form of generative models to construct representations. To this end, we first develop generative statistical models of gaze behavior during reading and scene viewing that account for viewer-specific distributional properties of gaze patterns. In a second step, we develop a discriminative identification model by deriving Fisher kernel functions from these and several baseline models. We find that an SVM with Fisher kernel is able to reliably identify users based on their eye gaze during reading and scene viewing. However, since the generative models are constrained to use low-frequency macro-movements, they discard a significant amount of information contained in the raw eye tracking signal at a high cost: identification requires about one minute of input recording, which makes it inapplicable for real-world biometric systems. In the second part of this thesis, we study a purely data-driven modeling approach. Here, we aim at automatically discovering the individual pattern hidden in the raw eye tracking signal. To this end, we develop a deep convolutional neural network, DeepEyedentification, that processes yaw and pitch gaze velocities and learns a representation end-to-end. Compared to prior work, this model increases the identification accuracy by one order of magnitude, and the time to identification decreases to only seconds. The DeepEyedentificationLive model further improves upon the identification performance by processing binocular input, and it also detects presentation attacks.
We find that by learning a representation, the performance of oculomotoric identification and presentation-attack detection can be driven close to practical relevance for biometric applications. Eye tracking devices with high sampling frequency and precision are expensive, and the applicability of eye movements as a biometric feature heavily depends on the cost of recording devices.
In the last part of this thesis, we therefore study the requirements on data quality by evaluating the performance of the DeepEyedentificationLive network under reduced spatial and temporal resolution. We find that the method still attains a high identification accuracy at a temporal resolution of only 250 Hz and a precision of 0.03 degrees. Reducing both does not have an additive deteriorating effect.
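A convolutional network over velocity sequences can be sketched in a few lines of PyTorch; the layer sizes and two-channel (yaw, pitch) input below are placeholders and do not reproduce the published DeepEyedentification architecture.

```python
# Minimal illustrative 1D CNN over gaze velocity sequences.
import torch
import torch.nn as nn

class GazeEmbeddingNet(nn.Module):
    def __init__(self, n_users: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=9, padding=4),  # channels: yaw, pitch velocity
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_users)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, time) angular velocities sampled at the tracker rate
        return self.classifier(self.features(x).squeeze(-1))

net = GazeEmbeddingNet(n_users=10)
dummy = torch.randn(4, 2, 1000)  # 4 sequences of 1000 samples
print(net(dummy).shape)          # torch.Size([4, 10])
```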
In this paper, we consider the computational power of a new variant of networks of splicing processors in which each processor as well as the data navigating throughout the network are now considered to be polarized. While the polarization of every processor is predefined (negative, neutral, positive), the polarization of data is dynamically computed by means of a valuation mapping. Consequently, the protocol of communication is naturally defined by means of this polarization. We show that networks of polarized splicing processors (NPSP) of size 2 are computationally complete, which immediately settles the question of designing computationally complete NPSPs of minimal size. With two more nodes we can simulate every nondeterministic Turing machine without increasing the time complexity. Particularly, we prove that NPSP of size 4 can accept all languages in NP in polynomial time. Furthermore, another computational model that is universal, namely the 2-tag system, can be simulated by NPSP of size 3 preserving the time complexity. All these results can be obtained with NPSPs with valuations in the set as well. We finally show that Turing machines can simulate a variant of NPSPs and discuss the time complexity of this simulation.
The business problem of having inefficient processes, imprecise process analyses, and simulations as well as non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the modeling language designed and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS), and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Metabolic networks play a crucial role in biology since they capture all chemical reactions in an organism. While there are networks of high quality for many model organisms, networks for less studied organisms are often of poor quality and suffer from incompleteness. To this end, we introduced in previous work an answer set programming (ASP)-based approach to metabolic network completion. Although this qualitative approach allows for restoring moderately degraded networks, it fails to restore highly degraded ones. This is because it ignores quantitative constraints capturing reaction rates. To address this problem, we propose a hybrid approach to metabolic network completion that integrates our qualitative ASP approach with quantitative means for capturing reaction rates. We begin by formally reconciling existing stoichiometric and topological approaches to network completion in a unified formalism. With it, we develop a hybrid ASP encoding and rely upon the theory reasoning capacities of the ASP system dingo for solving the resulting logic program with linear constraints over reals. We empirically evaluate our approach by means of the metabolic network of Escherichia coli. Our analysis shows that our novel approach yields results greatly superior to those obtainable from purely qualitative or quantitative approaches.
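The quantitative side alone (checking whether a target reaction can carry flux under stoichiometric and bound constraints) can be illustrated with a tiny linear program; the stoichiometric matrix and bounds below are made up, and this is not the hybrid ASP encoding described in the abstract.

```python
# Toy flux-balance feasibility check with scipy's linear programming solver.
import numpy as np
from scipy.optimize import linprog

# rows = metabolites A, B; columns = reactions r1: ->A, r2: A->B, r3: B->
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0.0, 10.0)] * 3            # admissible flux range per reaction
objective = np.array([0.0, 0.0, -1.0])  # maximize flux through r3

res = linprog(c=objective, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("r3 can carry flux:", -res.fun > 1e-9)  # True: the target is producible
```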
In recent years, named entity linking (NEL) tools were primarily developed in terms of a general approach, whereas today numerous tools are focusing on specific domains such as e.g. the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets necessary for the evaluation of NEL tools do not reflect this focalizing trend. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL [in: Proceedings of the 24th International Conference on World Wide Web (WWW’15), International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 2015, pp. 1133–1143, Semantic Web 9(5) (2018), 605–625] and all its benchmark datasets. Based on these insights we have extended the GERBIL framework to enable a more fine grained evaluation and in depth analysis of the available benchmark datasets with respect to different emphases. This paper presents the implementation of an adaptive filter for arbitrary entities and customized benchmark creation as well as the automated determination of typical NEL benchmark dataset properties, such as the extent of content-related ambiguity and diversity. These properties are integrated on different levels, which also enables to tailor customized new datasets out of the existing ones by remixing documents based on desired emphases. Besides a new system library to enrich provided NIF [in: International Semantic Web Conference (ISWC’13), Lecture Notes in Computer Science, Vol. 8219, Springer, Berlin, Heidelberg, 2013, pp. 98–113] datasets with statistical information, best practices for dataset remixing are presented, and an in depth analysis of the performance of entity linking systems on special focus datasets is presented.
Detect me if you can
(2019)
Spam bots have become a threat to online social networks with their malicious behavior, posting misinformation messages and influencing online platforms to fulfill their motives. As spam bots have become more advanced over time, creating algorithms to identify bots remains an open challenge. Learning low-dimensional embeddings for nodes in graph-structured data has proven to be useful in various domains. In this paper, we propose a model based on graph convolutional neural networks (GCNN) for spam bot detection. Our hypothesis is that to better detect spam bots, in addition to defining a feature set, the social graph must also be taken into consideration. GCNNs are able to leverage both the features of a node and aggregate the features of a node's neighborhood. We compare our approach with two methods that work solely on a feature set or on the structure of the graph. To our knowledge, this work is the first attempt at using graph convolutional neural networks in spam bot detection.
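The propagation step that lets a node aggregate its neighborhood's features can be written out in a few lines; the sketch below shows the standard Kipf-and-Welling-style graph convolution on a random toy graph and is not the authors' detection model or feature set.

```python
# One graph convolution step: normalized adjacency x features x weights.
import numpy as np

def gcn_layer(A: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Mix each node's features with its neighbors', then apply ReLU."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3 accounts
X = np.random.rand(3, 4)   # 4 per-account features (e.g., posting statistics)
W = np.random.rand(4, 2)   # weight matrix (random here, learned in practice)
print(gcn_layer(A, X, W))  # each row now reflects the node and its neighbors
```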
In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, while multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
A distinguishing feature of Answer Set Programming is that all atoms belonging to a stable model must be founded. That is, an atom must not only be true but provably true. This can be made precise by means of the constructive logic of Here-and-There, whose equilibrium models correspond to stable models. One way of looking at foundedness is to regard Boolean truth values as ordered by letting true be greater than false. Then, each Boolean variable takes the smallest truth value that can be proven for it. This idea was generalized by Aziz to ordered domains and applied to constraint satisfaction problems. As before, the idea is that a, say integer, variable gets only assigned to the smallest integer that can be justified. In this paper, we present a logical reconstruction of Aziz’ idea in the setting of the logic of Here-and-There. More precisely, we start by defining the logic of Here-and-There with lower bound founded variables along with its equilibrium models and elaborate upon its formal properties. Finally, we compare our approach with related ones and sketch future work.
Digitalization enables us to interact with partners (e.g., companies, institutions) in an IT-supported environment and to carry out activities that previously had to be performed manually. One goal of digitalization is to combine services from different professional domains into processes and to make them accessible to many user groups according to their needs. To this end, providers offer technical services that can be integrated into different applications.
Digitalization confronts application development with new challenges. One aspect is connecting users to services according to their needs. For human users to interact with the services, user interfaces are required that are tailored to their needs. This calls for variants for specific user groups (domain variants) and for varying environments (technical variants). Increasingly, these must be combinable with services from other providers in order to link processes across domains into applications with added value for the end user (e.g., a flight booking with an optional travel insurance).
The diversity of these variants makes the creation of user interfaces appear complex and the results highly individual. In practice, the variants are therefore created predominantly by hand. This leads to the parallel development of a multitude of very similar applications with little potential for reuse, and consequently to high development and maintenance effort. As a result, support for small user groups with special requirements (e.g., people with physical impairments) is frequently dropped, so that these groups remain excluded from digitalization.
This thesis presents a consistent solution to these new challenges using the means of model-driven development. It introduces an approach for modeling user interfaces, variants, and compositions and for generating them automatically for digital services in a distributed environment. The thesis provides a solution for reusing and sharing user interfaces across provider boundaries, leading to an infrastructure in which a multitude of providers can contribute their expertise to collaborative applications.
The individual contributions are concepts and metamodels for modeling user interfaces, variants, and compositions, as well as a procedure for their fully automated transformation into functional user interfaces. To enable shared use, these are complemented by a universal representation of the models, a methodology for connecting different service providers, and an architecture for the distributed use of the artifacts and procedures in a service-oriented environment.
The approach offers the chance to let a wide range of people participate in digitalization according to their needs. The thesis thereby provides impulses for future methods of application development in an increasingly diverse environment.
Zum Einfluss von Adaptivität auf die Wahrnehmung von Komplexität in der Mensch-Technik-Interaktion
(2021)
We live in a society shaped by a constant desire for innovation and progress. Consequences of this desire are the ever-advancing digitalization and computational networking of all areas of life, which lead to increasingly complex socio-technical systems. The goals of these systems include supporting people, improving their living conditions or quality of life, and extending human capabilities. Yet new complex technical systems do not only have positive social and societal effects. There are often undesired side effects that only become visible in use, and both designers and users of complex networked technologies often feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. Since complex applications, and with them ever more complex human-technology interactions, are becoming increasingly relevant, it is all the more important to regain orientation. To do so, we must first identify the elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to contribute to enabling structured reflection on the complexity of networked socio-technical systems throughout the entire design process. To this end, a definition of complexity and complex systems is first developed that goes beyond the computer science understanding of complexity (i.e., the intricacy of problems, algorithms, or data). The focus is instead on the socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is then developed that makes it possible to render the complexity in the interaction with socio-technical systems visible and describable.
One area in which networked socio-technical systems are increasingly taking hold is digital educational technology. Adaptive educational technologies in particular have been credited with great potential over the past decades. Two adaptive teaching and training systems are therefore examined as examples with the analysis tool developed in this thesis, paying special attention to the influence of adaptivity on the complexity of human-technology interaction situations. In empirical studies, the experiences of designers and users of these adaptive systems are investigated in order to determine the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered on the one hand; on the other hand, interaction situations perceived as complex are identified. These situations show where, owing to the system's complexity, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users to deal better with the inherent complexity of modern educational technologies in the future.
We introduce a type and effect system, for an imperative object calculus, which infers sharing possibly introduced by the evaluation of an expression, represented as an equivalence relation among its free variables. This direct representation of sharing effects at the syntactic level allows us to express in a natural way, and to generalize, widely used notions in the literature, notably uniqueness and borrowing. Moreover, the calculus is pure in the sense that reduction is defined on language terms only, since they directly encode the store. The advantage of this non-standard execution model with respect to a behaviorally equivalent standard model using a global auxiliary structure is that reachability relations among references are partly encoded by scoping.
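The effect representation itself, an equivalence relation over free variables, can be pictured with a simple union-find sketch; the class name and operations below are invented for illustration and do not reproduce the calculus or its type rules.

```python
# Sketch: sharing as an equivalence relation over free variables.
class Sharing:
    def __init__(self, variables):
        self.parent = {v: v for v in variables}

    def find(self, v):
        while self.parent[v] != v:
            v = self.parent[v]
        return v

    def connect(self, x, y):
        """Record that evaluating an expression may make x and y share."""
        self.parent[self.find(x)] = self.find(y)

    def may_share(self, x, y):
        return self.find(x) == self.find(y)

s = Sharing(["x", "y", "z"])
s.connect("x", "y")  # e.g. the effect of evaluating a field assignment x.f = y
print(s.may_share("x", "y"), s.may_share("x", "z"))  # True False
```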
teaspoon
(2018)
Answer Set Programming (ASP) is an approach to declarative problem solving, combining a rich yet simple modeling language with high-performance solving capacities. We here develop an ASP-based approach to curriculum-based course timetabling (CB-CTT), one of the most widely studied course timetabling problems. The resulting teaspoon system reads a CB-CTT instance in a standard input format and converts it into a set of ASP facts. In turn, these facts are combined with a first-order encoding for CB-CTT solving, which can subsequently be solved by any off-the-shelf ASP system. We establish the competitiveness of our approach by empirically contrasting it to the best known bounds obtained so far via dedicated implementations. Furthermore, we extend the teaspoon system to multi-objective course timetabling and consider minimal perturbation problems.
Research-based learning and the digital transformation are two of the most important influences on the development of higher education didactics in the German-speaking world. While research-based learning, as a normative theory, describes what ought to be done, digital tools, old and new alike, determine what can be done in many areas.
This thesis establishes a process model that attempts to systematize research-based learning with respect to interactive, group-based processes. Based on the developed model, a software prototype was implemented that can accompany the entire research process, supporting group formation, feedback and reflection processes, and peer assessment with educational technologies. The developments were deployed in a qualitative experiment to gain system knowledge about the possibilities and limits of digital support for research-based learning.
plasp 3
(2019)
We describe the new version of the Planning Domain Definition Language (PDDL)-to-Answer Set Programming (ASP) translator plasp. First, it widens the range of accepted PDDL features. Second, it contains novel planning encodings, some inspired by Satisfiability Testing (SAT) planning and others exploiting ASP features such as well-foundedness. All of them are designed for handling multivalued fluents in order to capture both PDDL as well as SAS planning formats. Third, enabled by multishot ASP solving, it offers advanced planning algorithms also borrowed from SAT planning. As a result, plasp provides us with an ASP-based framework for studying a variety of planning techniques in a uniform setting. Finally, we demonstrate in an empirical analysis that these techniques have a significant impact on the performance of ASP planning.
E-Assessment etablieren
(2020)
Electronic assessments of learning progress, so-called e-assessments, offer many advantages for teachers and students, e.g., with regard to fast feedback or competence-oriented question formats, and make it possible to take examinations independently of place and time. This contribution presents the introduction of summative electronic examinations, so-called e-exams, using the University of Potsdam as an example, the establishment of a cross-state initiative for e-assessment, and technical options for decentralized electronic examinations. It outlines the current status, the goals, and the stepwise implementation strategy chosen by the University of Potsdam. Building on this, it describes the procedure, the cooperation opportunities for exchanging knowledge and experience, and the challenges of the e-assessment initiative. Finally, different forms of e-exams and technical options for implementing complex examination environments are classified and discussed with their characteristic advantages and disadvantages, and an integrated solution is proposed.
The largest of the interdisciplinary projects within Potsdam's Qualitätspakt Lehre project aimed at establishing digital media across the board as an integral part of teaching and studying. In the subproject E-Learning in Studienbereichen (eLiS), measures in the fields of organizational, technical, and content development were brought together for this purpose. Based on the initial situation and the objectives, this contribution presents the results around the digitalization of teaching and studying at the University of Potsdam. Five services, most of which have since moved into regular university operation, are presented in more detail as examples: the video platform Media.UP, the mobile app Reflect.UP, the personal learning environment Campus.UP, the self-service portal Cook.UP, and the display system Freiraum.UP. For each, a technical look "under the hood" is combined with an explanation of the usage options, contrasted with a current assessment by the university's teachers and students. The contribution closes by embedding the presented developments in a larger context and giving an outlook on the tasks that still lie ahead.
Setting strategic goals and assigning and implementing the corresponding measures are an essential element for maintaining the innovative capacity of organizations. In recent years, strategy formation has also been pushed forward considerably at universities, affecting various fields of action and following various approaches. Using the University of Potsdam as an example, this contribution singles out three strategy areas addressed in recent years: IT, e-learning, and research data. The associated processes were characterized by participation to varying degrees. The experiences gathered are reflected upon, and recommendations for strategy development processes are derived.
The Surface Water and Ocean Topography (SWOT) mission is a next generation satellite mission expected to provide a 2 km-resolution observation of the sea surface height (SSH) on a two-dimensional swath. Processing SWOT data will be challenging because of the large amount of data, the mismatch between a high spatial resolution and a low temporal resolution, and the observation errors. The present paper focuses on the reduction of the spatially structured errors of SWOT SSH data. It investigates a new error reduction method and assesses its performance in an observing system simulation experiment. The proposed error-reduction method first projects the SWOT SSH onto a subspace spanned by the SWOT spatially structured errors. This projection is removed from the SWOT SSH to obtain a detrended SSH. The detrended SSH is then processed within an ensemble data assimilation analysis to retrieve a full SSH field. In the latter step, the detrending is applied to both the SWOT data and an ensemble of model-simulated SSH fields. Numerical experiments are performed with synthetic SWOT observations and an ensemble from a North Atlantic, 1/60 degrees simulation of the ocean circulation (NATL60). The data assimilation analysis is carried out with an ensemble Kalman filter. The results are assessed with root mean square errors, power spectrum density, and spatial coherence. They show that a significant part of the large scale SWOT errors is reduced. The filter analysis also reduces the small scale errors and allows for an accurate recovery of the energy of the signal down to 25 km scales. In addition, using the SWOT nadir data to adjust the SSH detrending further reduces the errors.
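The two-step idea (remove the projection onto a structured-error subspace, then apply an ensemble analysis) can be sketched compactly with numpy; the grid size, ensemble size, noise levels, and random error basis below are illustrative and not the paper's configuration.

```python
# Sketch: detrend SSH against an error subspace, then an ensemble Kalman analysis.
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 80                      # grid points per swath, ensemble members
E = rng.standard_normal((n, 3))    # basis of the structured (e.g. roll) errors

def detrend(field: np.ndarray) -> np.ndarray:
    """Remove the least-squares projection onto the error subspace E."""
    coeff, *_ = np.linalg.lstsq(E, field, rcond=None)
    return field - E @ coeff

truth = np.sin(np.linspace(0, 6 * np.pi, n))
truth_d = detrend(truth)           # part of the signal observable after detrending
obs = detrend(truth + E @ rng.standard_normal(3) + 0.02 * rng.standard_normal(n))
ensemble = detrend(truth[:, None] + 0.5 * rng.standard_normal((n, m)))

# Kalman analysis with ensemble-estimated background covariance
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
P = anomalies @ anomalies.T / (m - 1)
R = 0.02 ** 2 * np.eye(n)
K = P @ np.linalg.inv(P + R)
analysis = ensemble + K @ (obs[:, None] - ensemble)

print("RMSE background / analysis:",
      np.sqrt(np.mean((ensemble.mean(1) - truth_d) ** 2)),
      np.sqrt(np.mean((analysis.mean(1) - truth_d) ** 2)))
```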
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare to the already known combination of Equilibrium Logic with Nelson's strong negation.
The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, often pupils also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems or drink mixing machines. This offers the opportunity to reflect on teaching physical computing in computer science education, where similarly often the creation of the product is so strongly focused upon that the reflection of the learning process is pushed into the background.
MUP
(2020)
Message Queuing Telemetry Transport (MQTT) is one of the dominating protocols for edge- and cloud-based Internet of Things (IoT) solutions. When a security vulnerability of an IoT device is known, it has to be fixed as soon as possible. This requires a firmware update procedure. In this paper, we propose a secure update protocol for MQTT-connected devices that ensures the freshness of the firmware, authenticates the new firmware, and takes constrained devices into account. We show that the update protocol is easy to integrate into an MQTT-based IoT network using a semantic approach. The feasibility of our approach is demonstrated by a detailed performance analysis of our prototype implementation on an IoT device with 32 kB RAM. Thereby, we identify design issues in MQTT 5 that, once addressed, can improve the support of constrained devices.
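A schematic device-side check in the spirit of the protocol's goals (freshness via a strictly increasing version number, authenticity via a message authentication code) is shown below; it is only an illustration with a pre-shared key and HMAC-SHA256, not the cryptographic construction actually specified by the protocol.

```python
# Sketch: accept a firmware image only if it is fresh and its MAC verifies.
import hmac
import hashlib

PRE_SHARED_KEY = b"demo-key-32-bytes-demo-key-32-by"
installed_version = 7

def accept_update(version: int, image: bytes, tag: bytes) -> bool:
    """Reject replayed or stale images, then verify the authentication tag."""
    if version <= installed_version:
        return False
    msg = version.to_bytes(4, "big") + image
    expected = hmac.new(PRE_SHARED_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"\x00" * 128  # placeholder firmware image
tag = hmac.new(PRE_SHARED_KEY, (8).to_bytes(4, "big") + image, hashlib.sha256).digest()
print(accept_update(8, image, tag))  # True
print(accept_update(7, image, tag))  # False: not fresh
```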
In a recent line of research, two familiar concepts from logic programming semantics (unfounded sets and splitting) were extrapolated to the case of epistemic logic programs. The property of epistemic splitting provides a natural and modular way to understand programs without epistemic cycles but, surprisingly, was only fulfilled by Gelfond's original semantics (G91), among the many proposals in the literature. On the other hand, G91 may suffer from a kind of self-supported, unfounded derivations when epistemic cycles come into play. Recently, the absence of these derivations was also formalised as a property of epistemic semantics called foundedness. Moreover, a first semantics proved to satisfy foundedness was also proposed, the so-called Founded Autoepistemic Equilibrium Logic (FAEEL). In this paper, we prove that FAEEL also satisfies the epistemic splitting property, something that, together with foundedness, was not fulfilled by any other approach to date. To prove this result, we provide an alternative characterisation of FAEEL as a combination of G91 with a simpler logic we called Founded Epistemic Equilibrium Logic (FEEL), which is somehow an extrapolation of the stable model semantics to the modal logic S5.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
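To make the MST decoding step concrete, the following toy sketch builds a tree over argumentative segments from local attachment scores using networkx's maximum spanning arborescence. The scores and segment ids are invented, and this is not the decoder evaluated in the paper, only an illustration of the general mechanism.

```python
# Hypothetical sketch of MST decoding for argumentation structures:
# a local model scores every possible parent-child attachment, and
# decoding keeps the highest-scoring tree (maximum spanning arborescence).
import math
import networkx as nx

# local_scores[(head, dependent)] = probability that `dependent` attaches to `head`
local_scores = {
    (0, 1): 0.7, (0, 2): 0.2, (1, 2): 0.6,
    (0, 3): 0.1, (2, 3): 0.8, (1, 3): 0.3,
}

G = nx.DiGraph()
for (head, dep), p in local_scores.items():
    # log-probabilities: the arborescence then maximises the product of edge probabilities
    G.add_edge(head, dep, weight=math.log(p))

tree = nx.maximum_spanning_arborescence(G, attr="weight")
print(sorted(tree.edges()))   # e.g. [(0, 1), (1, 2), (2, 3)]
```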
Flux-P
(2012)
Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be measured directly and have to be estimated, for instance via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P, an approach to automate and standardize 13C-based metabolic flux analysis using the Bio-jETI workflow framework. Based on the FiatFlux software as an example, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significantly accelerating and standardizing the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows at the user level, making it easy to perform custom analyses.
Answer Set Programming (ASP) has become a popular and widespread paradigm for practical Knowledge Representation thanks to its expressiveness and the available enhancements of its input language. One such enhancement is the use of aggregates, for which different semantic proposals have been made. In this paper, we show that any ASP aggregate interpreted under Gelfond and Zhang's (GZ) semantics can be replaced (under strong equivalence) by a propositional formula. Restricted to the original GZ syntax, the resulting formula is reducible to a disjunction of conjunctions of literals, but the formulation is still applicable even when the syntax is extended to allow for arbitrary formulas (including nested aggregates) in the condition. Once GZ-aggregates are represented as formulas, we establish a formal comparison (in terms of the logic of Here-and-There) to Ferraris' (F) aggregates, which are defined by a different formula translation involving nested implications. In particular, we prove that if we replace an F-aggregate by a GZ-aggregate in a rule head, we do not lose answer sets (although more can be gained). This extends the previously known result that the opposite happens in rule bodies, i.e., replacing a GZ-aggregate by an F-aggregate in the body may yield more answer sets. Finally, we characterize a class of aggregates for which the GZ- and F-semantics coincide.
Detection of malware-infected computers and detection of malicious web domains based on their encrypted HTTPS traffic are challenging problems, because only addresses, timestamps, and data volumes are observable. The detection problems are coupled, because infected clients tend to interact with malicious domains. Traffic data can be collected at a large scale, and antivirus tools can be used to identify infected clients in retrospect. Domains, by contrast, have to be labeled individually after forensic analysis. We explore transfer learning based on sluice networks; this allows the detection models to bootstrap each other. In a large-scale experimental study, we find that the model outperforms known reference models and detects previously unknown malware, previously unknown malware families, and previously unknown malicious domains.
TrainTrap
(2020)
Error correction in coding theory deals with detecting and correcting errors that occur during the transmission and also the storage of messages.
For this purpose, the message is encoded into a codeword by adding redundant information.
Such coding schemes are subject to different requirements, for example the maximum number of errors to be corrected and the speed of the correction.
A common coding scheme is the BCH code, which is used industrially for codes correcting up to four errors. A drawback of these codes is the processing time needed to compute the error positions, which grows with increasing code length.
The dissertation presents a new coding scheme in which a long code is constructed by a special arrangement of shorter BCH codes. This arrangement is governed by a further special code, an LDPC code, which is designed for fast error detection.
To this end, a new construction method is introduced that yields a code of arbitrary length with an arbitrary, prescribable number of correctable errors. In addition to the fast error detection procedure, the construction also provides a simple and fast derivation of an encoding procedure that maps the message to a codeword; for LDPC codes, this has so far been unique in the literature.
Based on the construction of an LDPC code, a method is presented for combining it with a BCH code, which arranges the BCH code in blocks. Besides the general description of this code, a concrete code for 2-bit error correction is described. It consists of two parts, which are presented and compared in several variants. For certain lengths of the BCH code, a correction problem is identified that follows an algebraic rule.
The BCH code is described in very general terms, but under certain conditions there is a BCH code in the narrow sense, which constitutes the standard. This BCH code in the narrow sense is modified in this dissertation so that the algebraic problem of 2-bit error correction in combination with the LDPC code no longer occurs. It is shown that after the modification the new code is still a BCH code in the general sense, capable of correcting 2-bit errors and detecting 3-bit errors. For the technical implementation of the error correction it is further shown that the processing times of the modified code are shorter than those of the BCH code and that there is further potential for improvement.
The final chapter shows that this modified code of arbitrary length is suitable for combination with the LDPC code, so that the scheme is not only usable for a wider range of lengths but, thanks to the faster decoding, also offers further advantages over a BCH code in the narrow sense.
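To make the parity-check mechanism that both BCH and LDPC codes rely on concrete, here is a small, generic syndrome-decoding example for the classic Hamming(7,4) code. It is not the construction developed in the dissertation; it only illustrates the common idea that a non-zero syndrome points to the erroneous bit position.

```python
# Generic syndrome decoding for the Hamming(7,4) code, as a small illustration of
# parity-check based error correction; NOT the BCH/LDPC construction of the thesis.
import numpy as np

# parity-check matrix H of the (7,4) Hamming code; column i is the binary form of i+1
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct_single_error(received):
    """Compute the syndrome and flip the indicated bit (corrects any single-bit error)."""
    r = np.array(received) % 2
    syndrome = H @ r % 2
    position = int("".join(map(str, syndrome[::-1])), 2)  # syndrome encodes the bit position
    if position:                                          # non-zero syndrome: flip erroneous bit
        r[position - 1] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 0, 1, 0])      # a valid codeword (syndrome is zero)
corrupted = codeword.copy()
corrupted[4] ^= 1                               # single-bit error at position 5
assert np.array_equal(correct_single_error(corrupted), codeword)
```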
In this work we tackle the problem of checking strong equivalence of logic programs that may contain local auxiliary atoms, which are to be removed from the stable models and forbidden in any external context. We call this property projective strong equivalence (PSE). It has recently been proved that not every logic program containing auxiliary atoms can be reformulated, under PSE, as another logic program or formula without them; this is known as strongly persistent forgetting. In this paper, we introduce a conservative extension of Equilibrium Logic and its monotonic basis, the logic of Here-and-There, in which we deal with a new connective '|' that we call fork. We provide a semantic characterisation of PSE for forks and use it to show that, in this extension, it is always possible to forget auxiliary atoms under strong persistence. We further define when the obtained fork is representable as a regular formula.
Multi-sided platforms (MSP) strongly affect markets and play a crucial part within the digital and networked economy. Although empirical evidence indicates their occurrence in many industries, research has not investigated the game-changing impact of MSP on traditional markets to a sufficient extent. More specifically, we have little knowledge of how MSP affect value creation and customer interaction in entire markets, exploiting the potential of digital technologies to offer new value propositions. Our paper addresses this research gap and provides an initial systematic approach to analyze the impact of MSP on the insurance industry. For this purpose, we analyze the state of the art in research and practice in order to develop a reference model of the value network for the insurance industry. On this basis, we conduct a case-study analysis to discover and analyze roles which are occupied or even newly created by MSP. As a final step, we categorize MSP with regard to their relation to traditional insurance companies, resulting in a classification scheme with four MSP standard types: Competition, Coordination, Cooperation, Collaboration.
Background: The biological interpretation of large-scale gene expression data is one of the paramount challenges in current bioinformatics. In particular, placing the results in the context of other available functional genomics data, such as existing bio-ontologies, has already provided substantial improvement for detecting and categorizing genes of interest. One common approach is to look for functional annotations that are significantly enriched within a group or cluster of genes, as compared to a reference group. Results: In this work, we suggest the information-theoretic concept of mutual information to investigate the relationship between groups of genes, as given by data-driven clustering, and their respective functional categories. Drawing upon related approaches (Gibbons and Roth, Genome Research 12: 1574-1581, 2002), we seek to quantify to what extent individual attributes are sufficient to characterize a given group or cluster of genes. Conclusion: We show that mutual information provides a systematic framework to assess the relationship between groups or clusters of genes and their functional annotations in a quantitative way. Within this framework, mutual information allows us to address and incorporate several important issues, such as the interdependence of functional annotations and combinations of attributes. It thus supplements and extends the conventional search for overrepresented attributes within a group or cluster of genes. In particular, by taking combinations of attributes into account, mutual information opens the way to uncover specific functional descriptions of a group of genes or clustering result. All datasets and functional annotations used in this study are publicly available. All scripts used in the analysis are provided as additional files.
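As a minimal sketch of the underlying idea, the snippet below estimates the mutual information between a gene clustering and a binary functional annotation from their co-occurrence counts. The labels are invented, and the paper's specific estimators and corrections are not reproduced.

```python
# Sketch: mutual information I(C; A) between a gene clustering C and a binary
# functional annotation A, estimated from co-occurrence counts.
# Labels below are invented; the paper's estimators are not reproduced.
from sklearn.metrics import mutual_info_score

# cluster assignment and annotation (1 = gene carries the hypothetical GO term) for ten genes
clusters   = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
annotation = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

# I(C; A) = sum_{c,a} p(c,a) * log( p(c,a) / (p(c) p(a)) ), reported in nats
print(mutual_info_score(clusters, annotation))
```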
Incremental Support Vector Machines (SVM) are instrumental in practical applications of online learning. This work focuses on the design and analysis of efficient incremental SVM learning, with the aim of providing a fast, numerically stable and robust implementation. A detailed analysis of the convergence and of the algorithmic complexity of incremental SVM learning is carried out. Based on this analysis, a new design of storage and numerical operations is proposed, which speeds up the training of an incremental SVM by a factor of 5 to 20. The performance of the new algorithm is demonstrated in two scenarios: learning with limited resources and active learning. Various applications of the algorithm, such as in drug discovery, online monitoring of industrial devices and surveillance of network traffic, can be foreseen.
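The thesis' own incremental SVM algorithm is not reproduced here; as a rough stand-in, the sketch below only shows the general pattern of online updates on a hinge-loss linear model with scikit-learn's partial_fit, which approximates an SVM objective but is not the algorithm designed and analyzed in the work. The data stream is synthetic.

```python
# Generic online-learning pattern with a hinge-loss linear model (SVM-like objective).
# A stand-in to illustrate incremental updates; NOT the incremental SVM of the thesis.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge")      # hinge loss corresponds to a linear SVM objective

for step in range(100):                # data arrives in small batches
    X_batch = rng.normal(size=(16, 5))
    y_batch = (X_batch[:, 0] + 0.1 * rng.normal(size=16) > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])   # incremental update

X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] > 0).astype(int)
print(clf.score(X_test, y_test))
```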
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin. Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. 
This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
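A minimal numpy sketch of the idea behind the quantile layer follows: instead of mean- or max-pooling a sequence of local deep features, each feature dimension is summarized by a fixed set of quantiles. The real layer in the thesis is differentiable and trained end to end, which this fixed summary does not reproduce; the feature matrix and quantile set are invented.

```python
# Sketch of quantile-based aggregation over a sequence of local features.
# The thesis implements this as a differentiable layer trained end to end;
# here it is only a fixed numpy summary for illustration.
import numpy as np

def quantile_pool(features, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """features: (T, D) array of T local feature vectors.
    Returns a (len(quantiles) * D,) descriptor approximating each dimension's distribution."""
    return np.quantile(features, quantiles, axis=0).ravel()

sequence = np.random.randn(500, 32)        # e.g. 500 local patterns, 32 channels
descriptor = quantile_pool(sequence)
print(descriptor.shape)                    # (160,) = 5 quantiles x 32 channels

# mean- and max-pooling appear as degenerate cases of a very coarse quantile set
mean_pool = sequence.mean(axis=0)
max_pool = quantile_pool(sequence, quantiles=(1.0,))
```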
Combined optimization of spatial and temporal filters for improving brain-computer interfacing
(2006)
Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. They could therefore provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter, enhancing discriminability rates of multichannel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the significant superiority of the proposed algorithm over its classical counterpart: the median classification error rate was decreased by 11%. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms.
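For orientation, the sketch below shows the classical spatial-filter baseline (Common Spatial Patterns) as a generalized eigenvalue problem on class-wise covariance matrices. The paper's contribution, the joint optimization of a spectral filter alongside the spatial filter, is not reproduced; trial counts, channel numbers and data are invented.

```python
# Classical Common Spatial Patterns (CSP) baseline via a generalized eigenvalue problem.
# The paper's joint spatio-spectral optimization extends this; it is not reproduced here.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """trials_*: lists of (channels, samples) EEG trials for the two classes.
    Returns spatial filters that maximize variance for class A relative to class B."""
    cov = lambda trials: sum(np.cov(t) for t in trials) / len(trials)
    Sa, Sb = cov(trials_a), cov(trials_b)
    # generalized eigenvalue problem  Sa w = lambda (Sa + Sb) w
    eigvals, eigvecs = eigh(Sa, Sa + Sb)
    order = np.argsort(eigvals)[::-1]            # largest variance ratio first
    return eigvecs[:, order[:n_filters]].T       # shape (n_filters, channels)

# toy data: 20 trials per class, 8 channels, 250 samples each
rng = np.random.default_rng(1)
class_a = [rng.normal(size=(8, 250)) for _ in range(20)]
class_b = [rng.normal(size=(8, 250)) for _ in range(20)]
W = csp_filters(class_a, class_b)
print(W.shape)   # (3, 8)
```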
We consider generating and accepting programmed grammars with bounded degree of non-regulation, that is, the maximum number of elements in the success or failure fields of the underlying grammar. In particular, it is shown that this measure can be restricted to two without loss of descriptional capacity, regardless of whether arbitrary derivations or leftmost derivations are considered. Moreover, in some cases, precise characterizations of the linear bounded automaton problem in terms of programmed grammars are obtained. Thus, the results presented in this paper shed new light on a longstanding open problem in the theory of computational complexity.
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm allows us to calculate an objective difficulty measure and to collect subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories, as compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence to suggest that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in the correlation of the measures with arousal ratings could be found between groups. Results were replicated in a high autistic traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-size-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation relating two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women's emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way that women and men perceive some facial expressions, which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. In order to facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
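As a minimal illustration of relating two high-dimensional blocks (for example, emotion ratings and face-tracking features) with partial least squares, the sketch below uses scikit-learn. The dimensions and data are invented, and the R package developed in the project is not reproduced.

```python
# Sketch: relating two high-dimensional data blocks (e.g. emotion ratings vs.
# face-tracking features) with partial least squares. Data and dimensions are invented.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(0)
ratings  = rng.normal(size=(306, 40))    # e.g. one row per video, 40 emotion scales
tracking = rng.normal(size=(306, 120))   # e.g. 120 face-tracking features per video

pls = PLSCanonical(n_components=2)
pls.fit(ratings, tracking)

# latent scores of both blocks on the shared components
rating_scores, tracking_scores = pls.transform(ratings, tracking)
print(np.corrcoef(rating_scores[:, 0], tracking_scores[:, 0])[0, 1])
```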
Iterated finite state sequential transducers are considered as language generating devices. The hierarchy induced by the size of the state alphabet is proved to collapse to the fourth level. The corresponding language families are related to the families of languages generated by Lindenmayer systems and Chomsky grammars. Finally, some results on deterministic and extended iterated finite state transducers are established.
Within an interdisciplinary student project, a framework for mobile pervasive learning games was developed. Based on this framework, a learning game for school students was implemented using the historical learning site Park Sanssouci as an example. The planned evaluation is intended to measure the learning effectiveness of geo-based mobile learning games. To this end, the intensity of the flow experience is compared with a location-bound alternative implementation.
German universities are expanding their e-learning offerings as a service for students and teachers; these offerings vary considerably between faculties. This article shows how a consistent and personalizable teaching and learning environment (Personal Learning Environment, PLE) can be created through technical extension of the infrastructure, an adaptation of the organizational structure and targeted content development, thereby increasing the acceptance of e-learning among teachers and students. From the preceding systematic requirements analysis, key figures for the quality assurance of e-learning offerings can be derived.
PLATON
(2019)
Lesson planning is both an important and demanding task, especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems against these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of the lesson designs. In this article, a new approach with a graphical, time-based representation and (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher- and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor's and master's preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and for self-reflection.
The usage of mobile devices is rapidly growing, with Android being the most prevalent mobile operating system. Thanks to the vast variety of mobile applications, users prefer smartphones over desktops for day-to-day tasks like Internet surfing. Consequently, smartphones store a plenitude of sensitive data. This data, together with the high value of smartphones, makes them an attractive target for device/data theft (thieves/malicious applications).
Unfortunately, state-of-the-art anti-theft solutions do not work if they have no active network connection, e.g., if the SIM card was removed from the device. In the majority of these cases, device owners permanently lose their smartphone and, even worse, their personal data.
Apart from that, malicious applications perform harmful activities to steal sensitive information from smartphones. Recent research has considered static program analysis to detect dangerous data leaks. These analyses work well for data leaks due to inter-component communication, but suffer from shortcomings for inter-app communication with respect to precision, soundness, and scalability.
This thesis focuses on enhancing users' privacy on Android against physical device loss/theft and (un)intentional data leaks. It presents three novel frameworks: (1) ThiefTrap, an anti-theft framework for Android, (2) IIFA, a modular inter-app intent information flow analysis of Android applications, and (3) PIAnalyzer, a precise approach for PendingIntent vulnerability analysis.
ThiefTrap is based on a novel concept of an anti-theft honeypot account that protects the owner's data while preventing a thief from resetting the device.
We implemented the proposed scheme and evaluated it through an empirical user study with 35 participants. In this study, the owner's data could be protected and recovered, and the anti-theft functionality could be performed unnoticed by the thief in all cases.
IIFA proposes a novel approach for Android's inter-component/inter-app communication (ICC/IAC) analysis. Our main contribution is the first fully automatic, sound, and precise ICC/IAC information flow analysis that is scalable for realistic apps due to modularity, avoiding combinatorial explosion: Our approach determines communicating apps using short summaries rather than inlining intent calls between components and apps, which requires simultaneously analyzing all apps installed on a device.
We evaluate IIFA in terms of precision and recall, and demonstrate its scalability on a large corpus of real-world apps. IIFA reports 62 problematic ICC-/IAC-related information flows via two or more apps/components.
PIAnalyzer proposes a novel approach to analyze PendingIntent related vulnerabilities. PendingIntents are a powerful and universal feature of Android for inter-component communication. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications and find 1358 insecure usages of PendingIntents, including 70 severe vulnerabilities.
In this paper, two new methods for the design of fault-tolerant pipelined sequential and combinational circuits, called Error Detection and Partial Error Correction (EDPEC) and Full Error Detection and Correction (FEDC), are described. The proposed methods are based on an Error Detection Logic (EDC) in the combinational circuit part, combined with fault-tolerant memory elements implemented using fault-tolerant master-slave flip-flops. If a transient error due to a transient fault in the combinational circuit part is detected by the EDC, the error signal controls the latching stage of the flip-flops such that the previous correct state of the register stage is retained until the transient error disappears. The system can continue to work in its previous correct state and no additional recovery procedure (with typically reduced clock frequency) is necessary. The target applications are dataflow processing blocks, for which software-based recovery methods cannot be easily applied. The presented architectures address both single-event faults and timing faults of arbitrarily long duration. An example of this architecture, based on a carry look-ahead adder, is developed and described. The timing conditions are carefully investigated and simulated up to the layout level. The enhancement over the baseline architecture is demonstrated with respect to the achieved fault tolerance for single-event and timing faults. It is observed that the number of uncorrected single events is reduced by the EDPEC architecture by a factor of 2.36 compared with a previous solution. The FEDC architecture further reduces the number of uncorrected events to zero and outperforms Triple Modular Redundancy (TMR) with respect to the correction of timing faults. The power overhead of both new architectures is about 26-28% lower than that of TMR.
The planning and execution as well as the static and dynamic analysis of business processes in the field of administration and government at the municipal, state and federal level with the help of information and communication technologies have long occupied politicians and IT strategists as well as the public.
The term e-government that emerged from this has subsequently been examined from a wide range of technical, political and semantic perspectives.
The present work concentrates on two main topics:
• The first main topic deals with the design of a hierarchical architecture model for which seven hierarchical layers can be identified. These appear necessary, but also sufficient, to describe the general case.
The background for this is provided by many years of process and administration experience as head of the IT department of the city administration of Landshut, an independent city with about 69,000 inhabitants to the northeast of Munich. It is representative of many administrative procedures in the Federal Republic of Germany and yet, as an object of analysis, manageable in its overall complexity and number of processes.
Thus, static and dynamic structures can be extracted from the analysis of all core processes and modeled abstractly.
The focus lies on the representation of the existing service workflows in a municipality. The transformation of service requests in a hierarchical system, the representation of the control and operation states in all layers, as well as the strategy for error detection and error handling, create a transparent basis for comprehensive restructuring and optimization.
FMC-eCS was used for the modeling, a methodology developed at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH (HPI) in the Communication Systems group for modeling discrete-state systems while taking possible inconsistencies into account (supervisor: Prof. Dr.-Ing. Werner Zorn [ZW07a, ZW07b]).
• The second main topic is devoted to the quantitative modeling and optimization of e-government service systems, carried out using the example of the citizens' office of the city of Landshut in the period from 2008 to 2015. This is based on continuous collection of operational data with elaborate preprocessing to extract mathematically describable probability distributions.
The duty roster developed from this was verified with respect to the achievable optimizations in permanent real-world operation.
[ZW07a] Zorn, Werner: "FMC-QE, A New Approach in Quantitative Modeling", talk at MSV'07, The 2007 International Conference on Modeling, Simulation and Visualization Methods, WorldComp2007, Las Vegas, June 28, 2007.
[ZW07b] Zorn, Werner: "FMC-QE, A New Approach in Quantitative Modeling", publication, Hasso-Plattner-Institut für Softwaresystemtechnik at the University of Potsdam, June 28, 2007.
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view that only distinguishes between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal, and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first of them we analyse the concept of the degree of formality (including a discussion about the respective benefits for each degree) while in the second we demonstrate its usefulness in three case studies. In the first case study we will repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students to learn how to prove. Third, we will show how the advantages of proofs of different degrees of formality can be combined by the development of so called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
Training socio-emotional competencies is particularly useful for people with autism. Such training can be designed effectively with the help of a game-based application. Two mini-games, Mimikry and Emo-Mahjong, were implemented and evaluated with regard to user experience. The respective concepts and the evaluation results are presented here.
E-Learning Symposium 2018
(2018)
Many e-learning innovations have emerged in recent years. Some of them were presented at previous E-Learning Symposia of the University of Potsdam: the first E-Learning Symposium in 2012 concentrated on different possibilities for activating students and designing teaching. The 2014 symposium put the students themselves at the center of attention. In 2016, the merger of the symposium with the DeLFI conference led to a focus on technical innovations. But what has become of yesterday's lighthouse projects, and do we even need new lighthouses? Under the motto "Innovation und Nachhaltigkeit – (k)ein Gegensatz?" ("Innovation and sustainability – (not) a contradiction?"), this year's symposium addresses media-supported teaching and learning processes in the university context and reflects on current technical and didactic developments with a view to their medium- to long-term use in practice.
These proceedings of the E-Learning Symposium 2018 at the University of Potsdam contain a mixture of research and practice contributions from various disciplines and open up multi-layered perspectives on e-learning. Both the variety of didactic usage scenarios and the potential of tools and methods from computer science are examined in their interplay.
In his keynote, Reinhard Keil takes up the motto of the symposium and gets to the bottom of sustainability in e-learning projects. Drawing on more than 15 years of research practice, he analyzes the most important impact factors and formulates recommendations for the design of e-learning projects. In contrast to (higher-education) policy demands aimed purely at cost savings, he advocates the approach of hypothesis-driven technology design, in which sustainability can be understood as a guiding question or research strategy. The contribution by Iris Braun et al., who report on success factors in the use of audience response systems in university teaching, points in a similar direction.
Another current topic, both for educational technology and for the educational sciences in general, is competence orientation and modeling, which is about describing (problem-solving) skills in a targeted way and placing them at the center of teaching. In an invited talk, Johannes Konert presents two projects that trace the process from the definition of competencies, through their modeling in a semantic, machine-readable format, to the development of methods for competence measurement and electronic certification. He addresses technical possibilities as well as limits. On a more specific level, Sarah Stumpf deals with digital and media-didactic competencies in teacher education and presents a framework for fostering such competencies in prospective teachers.
The use of e-learning still poses several challenges, often concerning the combination of didactics and technology, the maintenance of attention, or the effort required to create interactive teaching and learning content. Three contributions in these proceedings deal with this topic in different contexts and present best practices and possible solutions: the contribution by Martina Wahl and Michael Hölscher addresses the particular context of blended-learning scenarios in part-time degree programs. The contribution by Ennio Marani and Isabel Jaisli deals with the publication of a globally and freely available online course outside the large MOOC platforms and the didactic challenges involved, also with regard to motivation. Finally, Gregor Damnik et al. propose the automatic generation of exercises to increase interactivity and adaptivity in digital learning resources in order to reduce the sometimes considerable authoring effort.
E-learning also includes mobile apps and games. Two contributions deal with the use of e-learning tools in the health context: Anna Tscherejkina and Anna Morgiel present mini-games for training socio-emotional competencies for people with autism, and Stephanie Herbstreit et al. report on the use of a mobile learning app to improve clinical practical teaching.
Physical computing covers the design and realization of interactive objects and installations and allows learners to develop concrete, tangible products of the real world that arise from their imagination. This can be used in computer science education to provide learners with interesting and motivating access to the different topic areas of the subject in constructionist and creative learning environments. However, physical computing has so far mostly been taught, if at all, in afternoon clubs or other extracurricular settings. Thus, for the majority of students there have so far been no opportunities to design and create their own interactive objects in regular school lessons.
Despite its increasing popularity also for schools, the topic has not yet been clearly and sufficiently characterized in the context of computer science education. The aim of this doctoral thesis therefore is to clarify physical computing from the perspective of computer science education and to adequately prepare the topic both content-wise and methodologically for secondary school teaching. For this purpose, teaching examples, activities, materials and guidelines for classroom use are developed, implemented and evaluated in schools.
In the theoretical part of the thesis, first the topic is examined from a technical point of view. A structured literature analysis shows that basic concepts used in physical computing can be derived from embedded systems, which are the core of a large field of different application areas and disciplines. Typical methods of physical computing in professional settings are analyzed and, from an educational perspective, elements suitable for computer science teaching in secondary schools are extracted, e. g. tinkering and prototyping. The investigation and classification of suitable tools for school teaching show that microcontrollers and mini computers, often with extensions that greatly facilitate the handling of additional components, are particularly attractive tools for secondary education. Considering the perspectives of science, teachers, students and society, in addition to general design principles, exemplary teaching approaches for school education and suitable learning materials are developed and the design, production and evaluation of a physical computing construction kit suitable for teaching is described.
In the practical part of this thesis, with “My Interactive Garden”, an exemplary approach to integrate physical computing in computer science teaching is tested and evaluated in different courses and refined based on the findings in a design-based research approach. In a series of workshops on physical computing, which is based on a concept for constructionist professional development that is developed specifically for this purpose, teachers are empowered and encouraged to develop and conduct physical computing lessons suitable for their particular classroom settings. Based on their in-class experiences, a process model of physical computing teaching is derived. Interviews with those teachers illustrate that benefits of physical computing, including the tangibility of crafted objects and creativity in the classroom, outweigh possible drawbacks like longer preparation times, technical difficulties or difficult assessment. Hurdles in the classroom are identified and possible solutions discussed.
Empirical investigations in the different settings reveal that “My Interactive Garden” and physical computing in general have a positive impact, among others, on learner motivation, fun and interest in class and perceived competencies.
Finally, the results from all evaluations are combined to evaluate the design principles for physical computing teaching and to provide a perspective on the development of decision-making aids for physical computing activities in school education.
Mobile devices and their applications have become an indispensable part of daily life and enable access to important information regardless of place and time. University-specific offerings, however, are still far from ubiquitous in the mobile sector and can usually be traced back only to individual activities of students and teachers. Yet mobile applications can make an essential contribution to improving students' self-organization as well as to designing and complementing concrete teaching and learning scenarios. This article presents a modular university app framework that offers both central campus-related services and decentralized learning applications under a single interface. Based on an analysis of strengths and weaknesses, different approaches are summarized and assessed with regard to the requirements, development, maintenance and operation of the university app. The underlying service-oriented architecture is discussed, which allows the application to be ported to other universities with reasonable effort. The article concludes with a presentation of the first results and further considerations and work.
Institutions face the challenge of integrating legacy systems with steadily growing new ones that use different technologies and interaction patterns. To offer the full potential of all systems, several non-matching systems, including their functions, have to be aggregated and offered in a usable way. This paper presents an adaptive, generalizable and self-organized Personal Learning Environment (PLE) framework with the potential to integrate several heterogeneous services using a service-oriented architecture. First, a general overview of the field is given, followed by a description of the core components of the PLE framework. A prototypical implementation is presented. Finally, it is shown how the PLE framework can be dynamically adapted to a changing system environment, reflecting experiences from first user studies.
Mobile devices and associated applications (apps) are an indispensable part of daily life and provide access to important information anytime and anywhere. However, the availability of university-wide services in the mobile sector is still poor; where they exist, they usually result from individual activities of students and teachers. Mobile applications can nevertheless have an essential impact on the improvement of students' self-organization as well as on the design and enhancement of specific learning scenarios. This article introduces a mobile campus app framework that integrates central campus services and decentralized learning applications. An analysis of the strengths and weaknesses of different approaches is presented to summarize and evaluate them in terms of requirements, development, maintenance and operation. The article discusses the underlying service-oriented architecture that allows transferring the campus app to other universities or institutions at reasonable cost. It concludes with a presentation of the results as well as ongoing discussions and future work.
Written examinations with hundreds of students, so-called mass examinations, still pose organizational challenges, especially for teachers, that have to be mastered anew every semester. This contribution outlines technical options for supporting the preparation and execution of mass examinations. To this end, various procedures for exam preparation and organization from different disciplines are first recorded and analyzed.
Subsequently, a generalizable process is designed that covers all steps from planning, creation, execution and checking through to archiving. To support the individual process steps, technical systems are presented that are aligned with the needs of those responsible for the examinations. The resulting systems are presented and the conducted target-group tests are reflected upon.
In universities, diverse tools and software systems exist that each facilitates a different teaching and learning scenario. A deviating approach is taken by Personal Learning Environments (PLE) that aim to provide a common platform. Considering e-portfolios as an integral part of PLEs, especially portfolio-based learning and assessment have to be supported. Therefore, the concept of a PLE is developed further by enabling the products of different software systems to be integrated in portfolio pages and finally submitted for feedback and assessment. It is further elaborated how the PLE approach is used to support the continuous formative assessment within portfolio-based learning scenarios.
While qualifications and competencies acquired informally are receiving more and more attention, both their presentation and their recognition usually pose an almost insurmountable obstacle for issuers and recipients. In the meantime, there is an increasing shift from classic paper-based to digital certificates of participation in order to simplify the proof of acquired competencies and qualifications. In this context, combining digital certificates of participation with Open Badges can offer added value for public presentation and verification.
Mobile applications are well suited as structural support offerings for students entering university. With the app Reflect.UP, students reflect on study organization, study contents and study goals. The conscious handling of students' competence acquisition as academic reflection competence is an inherent part of academic professionalization and is the focus of this contribution. It is shown how questions for student reflection are systematically derived from study regulations and module descriptions, thereby producing a competence grid. The data gained from the practical use of Reflect.UP are evaluated and discussed with respect to the conclusions that can be drawn about students' problem areas and learning processes as well as for the organization of degree programs. In addition, the strengths and weaknesses of a mobile application as a social- and information-science amalgam for structurally supporting the study entry phase are reflected upon.
The term Personal Learning Environment (PLE) is associated with the desire to put learners in control of their own learning process, so that they are able to set and accomplish the desired learning goals at the right time with the learning environment of their choice. Gradually, such a learning environment comes to include a variety of digital content, services and tools, which together are summarized as the Virtual Learning Environment (VLE). Even though the construction of an individual PLE is a complex task, several approaches to support this process already exist. They mostly appear under the umbrella term PLE or with slight accentuations such as iPLE, which in particular live within the context of institutions. This paper sums up the variety of attempts and technical approaches to establish a PLE and suggests a categorization for them.
The increasing digitalization of life is finding its way into many areas. In line with the demand for lifelong learning and with the aim of successfully meeting the manifold, changing requirements of daily life, each individual needs an individual and personal learning environment. This contribution first critically examines the notion of a personal learning environment. Building on this, the scope is narrowed to its use within the institution of the university. This gives rise to both challenges and divergences in the field of tension between institutional and personal learning environments, which have to be resolved in an open design process so that generally applicable design principles for institutional personal learning environments can be derived.
The ongoing digitalization leads to a need for continuous change of ICT (Information and Communication Technology) in all university domains and therefore affects all stakeholders in this arena. More and more ICT components, systems and tools emerge and have to be integrated into the existing processes and infrastructure of the institutions. These tasks include the transfer of resources and information across multiple ICT systems. By using so-called virtual environments for the domains of research, education, learning and work, the performance of daily tasks can be aided. Based on a user requirement analysis, different short- and long-term objectives were identified and are now being tackled in the context of a federal research project. In order to be prepared for the ongoing digitalization, new systems have to be provided. A service-oriented infrastructure and a related web-based virtual learning environment together constitute the platform Campus.UP and create the necessary basis to be ready for future challenges. The current focus lies on e-portfolio work, hence we present a related focus group evaluation. The results indicate a tremendous need to extend the possibilities of sharing resources across system boundaries, in order to enable comfortable participation of external cooperating parties and to clarify the focus of each connected system. The introduction of such an infrastructure implies far-reaching changes for traditional data centers. Therefore, the challenges and risks of faculty conducting innovation projects for the ICT organization are taken as a starting point to stimulate a discussion of how data centers can utilize projects to be ready for future needs. We are confident that Campus.UP will provide the basis for ensuring the persistent transfer of innovation to the ICT organization and will thus contribute to tackling the future challenges of digitalization.
Whether online courses, video-based teaching offerings, mobile applications, self-developed or commercial Web 2.0 applications, the abundance of digital support offerings is hard to keep track of. At the same time, mobile devices, web applications and apps offer opportunities to fundamentally redesign teaching, studying and research. The contribution presents a descriptive framework for the media-didactic design of teaching, learning and research arrangements that emphasizes the technical aspects. Subsequently, different usage scenarios involving digital media are sketched. These are taken as a starting point to present the concept of a system architecture that makes it possible, on the one hand, to provision arbitrary applications automatically and, on the other hand, to aggregate the resulting user data across platforms and use it for the design of virtual teaching and learning spaces.
Mobile Endgeräte und Applikationen (Apps) sind dank vielfältiger Kommunikations-, Informations- und Assistenzfunktionen zu einem unverzichtbaren Bestandteil unseres täglichen Lebens geworden. Inzwischen hat sich insbesondere im Hochschulumfeld eine bunte Vielfalt an mobilen Unterstützungsangeboten etabliert, beginnend bei zentral angebotenen Uni-Apps bis hin zu unterschiedlichen Apps zur Ausgestaltung einzelner Lehrveranstaltungen oder individueller Lehr- und Lernszenarien. Angesichts der großen Aufwände zur Entwicklung, Distribution und Pflege mobiler Anwendungen ist ein Einsatz für eine möglichst große Zielgruppe wünschenswert. Dies kann jedoch mit dem Charakter mobiler Endgeräte als persönliche, individualisierte Assistenten kollidieren.
Along this spectrum between (subject-)specific single-purpose solutions and broad all-rounders, this contribution presents various mobile support offerings from higher education, classifies them with respect to their context of use, and systematizes them. This covers mobile applications that focus on general organizational aspects of studying, on particular areas such as the introductory study phase, or on the concrete support of hybrid learning scenarios. This is followed by a selection of apps that address subject-specific aspects and in which content is prepared in the form of serious games, simulations, and content modules. Beyond teaching and studying, research is also brought into focus, where apps act both as research subjects and as data collection instruments. The abundance of these developments results in a diversity of apps that raises various challenges. The contribution compiles the specific challenges and offers recommendations. Organizational, content-related, and technical questions are addressed, and legal aspects regarding data protection and copyright are touched upon as well.
The 8th Conference on Computer Science Didactics in Higher Education (Fachtagung für Hochschuldidaktik der Informatik, HDI) took place in Frankfurt in September 2018, together with the German E-Learning Conference for Computer Science (Deutsche E-Learning Fachtagung Informatik, DeLFI), under the joint motto „Digitalisierungswahnsinn? - Wege der Bildungstransformationen" ("Digitalization madness? - Paths of educational transformation").
The HDI addresses all questions of computer science education in higher education. This year's focus areas included, among others:
- Analysis of the content and the competencies to be acquired in computer science courses
- Learning to program & getting started with software development
- Special topics: data science, theoretical computer science, and scientific working practices
The conference addresses selected questions from these topic areas, which are treated in depth in talks by distinguished experts and in submitted contributions.
Solving problems that combine task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracks between the symbolic and geometric levels make the search inefficient. We address this problem by guiding symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
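As a rough illustration of this interplay (not the method of the thesis; all function and plan names below are hypothetical stand-ins), the following Python sketch interleaves symbolic candidate plans with a geometric feasibility check and excludes detected culprit actions from later symbolic search:

```python
# Toy sketch: symbolic/geometric interleaving with culprit feedback.
# candidate_plans and geometrically_feasible are hypothetical stand-ins.

def candidate_plans(excluded_actions):
    """Enumerate symbolic plans that avoid the excluded (culprit) actions."""
    all_plans = [["pick(A)", "place(A, table)"],
                 ["pick(A)", "handover(A)", "place(A, shelf)"]]
    return [p for p in all_plans if not excluded_actions & set(p)]

def geometrically_feasible(action):
    """Stand-in geometric check; a real system would call a motion planner."""
    return action != "place(A, table)"  # pretend this placement is unreachable

def plan_with_culprits():
    excluded = set()              # culprit actions reported by the geometric level
    while True:
        plans = candidate_plans(excluded)
        if not plans:
            return None           # no symbolic plan survives the geometric feedback
        for plan in plans:
            culprit = next((a for a in plan if not geometrically_feasible(a)), None)
            if culprit is None:
                return plan       # symbolically and geometrically valid
            excluded.add(culprit) # feed the culprit back to the symbolic level
            break

print(plan_with_culprits())
```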
Answer Set Programming (ASP) is a declarative problem solving approach, combining a rich yet simple modeling language with high-performance solving capabilities. Although this has already resulted in various applications, certain aspects of such applications are more naturally modeled using variables over finite domains, for instance to account for resources, fine-grained timings, coordinates, or functions. Our goal is thus to extend ASP with constraints over integers while preserving its declarative nature. This allows for fast prototyping and elaboration-tolerant problem descriptions of resource-related applications. The resulting paradigm is called Constraint Answer Set Programming (CASP).
We present three different approaches for solving CASP problems. The first, a lazy, modular approach, combines an ASP solver with an external system for handling constraints. This approach has the advantage that two state-of-the-art technologies work hand in hand to solve the problem, each concentrating on its part of it. The drawback is that inter-constraint dependencies cannot be communicated back to the ASP solver, impeding its learning algorithm. The second approach translates all constraints to ASP. With appropriate encoding techniques, this results in a very fast, monolithic system. Unfortunately, due to the large, explicit representation of constraints and variables, translation techniques are restricted to small and mid-sized domains. The third approach merges the lazy and the translational approach, combining the strengths of both while removing their weaknesses. To this end, we enhance the dedicated learning techniques of an ASP solver with the inferences of the translating approach in a lazy way, that is, the important knowledge is only made explicit when needed.
By using state-of-the-art techniques from neighboring fields, we provide ways to tackle real-world, industrial-size problems. By extending CASP to reactive solving, we open up new application areas such as online planning with continuous domains and durations.
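As a small, hedged illustration of the translation idea (a toy encoding, not the systems developed in the thesis), the following Python sketch uses the clingo API, assuming the clingo module is installed, to encode an integer variable explicitly in plain ASP and to impose a simple constraint on it:

```python
# Minimal sketch: integer variables over a small finite domain are translated
# into plain ASP facts plus a choice rule, so an ordinary ASP solver handles them.
# Requires the clingo Python module; this is a toy encoding, not actual CASP syntax.
import clingo

program = """
var(x). var(y).
dom(1..5).
1 { val(V,N) : dom(N) } 1 :- var(V).     % every variable takes exactly one value
:- val(x,Nx), val(y,Ny), Nx + Ny != 7.   % toy constraint: x + y = 7
"""

ctl = clingo.Control(["0"])              # "0" = enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("answer set:", m))
```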
The topic of this thesis is semantic search in the context of today's information management systems. Such systems include intranets, Web 3.0 applications, and many web portals that contain information in heterogeneous formats and structures. These hold, on the one hand, data in structured form and, on the other hand, documents whose content is related to this data. As a rule, however, these documents are only partially structured or entirely unstructured. Travel portals, for example, describe the period, destination, and price of a trip in structured data and provide further information in unstructured form, such as descriptions of the hotel, the destination, and excursion options.
Today's semantic search engines focus on finding knowledge either in structured form, also called fact search, or in semi- or unstructured form, usually referred to as semantic document search. A few search engines attempt to close the gap between these two approaches. Although they search structured and unstructured data simultaneously, they either evaluate them largely independently of each other or severely restrict the search capabilities, for example by supporting only certain question patterns. As a result, the information available in the system is not fully exploited, and connections between individual contents of the respective information systems, as well as complementary information, do not reach the user.
To close this gap, this thesis develops and investigates a new hybrid semantic search approach that combines structured and semi- or unstructured content throughout the entire search process. With this approach, not only are both facts and documents found; the relationships that exist between the differently structured data are also exploited in every phase of the search and flow into the search results. If the answer to a query is not available entirely in structured form, as facts, or in unstructured form, as documents, the approach delivers a combination of the two. Taking differently structured content into account throughout the entire search process, however, places particular demands on the search engine. It must be able to search facts and documents in dependence on each other, to combine them, and to rank the differently structured results appropriately. Furthermore, the complexity of the data must not be passed on to the end users. Rather, the presentation of the content must be understandable and easy to interpret, both when formulating queries and when presenting results.
The central question of this thesis is whether, on a given data basis, a hybrid approach can answer search queries better than semantic document search and fact search on their own, or than a search that does not combine these approaches within the search process. The evaluations carried out from a system and a user perspective show that, by combining structured and unstructured content in the search process, the hybrid semantic search solution developed in this thesis delivers better answers than the methods mentioned above and thus offers advantages over previous approaches. A user survey makes clear that the hybrid semantic search is perceived as understandable and is preferred for heterogeneously structured data sets.
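To make the idea of combining differently structured results concrete, here is a small, purely illustrative Python sketch (hypothetical items, scores, and weighting, not the ranking model of the thesis) that merges fact-search and document-search scores into one hybrid ranking:

```python
# Toy hybrid ranking: combine scores from a fact search and a document search.
# Items, scores, and weights are made up; a real system would normalize scores
# and exploit links between facts and documents instead of a fixed weighted sum.

fact_scores = {"hotel:SeeblickResort": 0.9, "trip:Mallorca2024": 0.6}
doc_scores = {"doc:hotel_review_17": 0.8, "hotel:SeeblickResort": 0.5}

def hybrid_ranking(facts, docs, alpha=0.6):
    """Weighted combination; items found by both sources get boosted."""
    items = set(facts) | set(docs)
    combined = {i: alpha * facts.get(i, 0.0) + (1 - alpha) * docs.get(i, 0.0)
                for i in items}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

for item, score in hybrid_ranking(fact_scores, doc_scores):
    print(f"{score:.2f}  {item}")
```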
Contemporary multi-core processors are parallel systems that also provide shared memory for programs running on them. Both the increasing number of cores in so-called many-core systems and the still growing computational power of the cores demand memory systems that are able to deliver high bandwidths. Caches are essential components to satisfy this requirement. Nevertheless, hardware-based cache coherence in many-core chips faces practical limits in providing both coherence and high memory bandwidths. In addition, a shift away from global coherence can be observed. As a result, alternative architectures and suitable programming models need to be investigated.
This thesis focuses on fast communication for non-cache-coherent many-core architectures. Experiments are conducted on the Single-Chip Cloud Computer (SCC), a non-cache-coherent many-core processor with 48 mesh-connected cores. Although the SCC was originally designed for message passing, the results of this thesis show that shared memory can be used efficiently for one-sided communication on this kind of architecture. One-sided communication enables data exchanges between processes in which the receiver is not required to know the details of the performed communication. In the notion of the Message Passing Interface (MPI) standard, this type of communication allows access to the memory of remote processes. In order to support this communication scheme on non-cache-coherent architectures, both an efficient process synchronization and a communication scheme with software-managed cache coherence are designed and investigated.
The process synchronization realizes the concept of the general active target synchronization scheme from the MPI standard. An existing classification of implementation approaches is extended and used to identify an appropriate class for the non-cache-coherent shared memory platform. Based on this classification, existing implementations are surveyed in order to find beneficial concepts, which are then used to design a lightweight synchronization protocol for the SCC that uses shared memory and uncached memory accesses. The proposed scheme is not prone to process skew and also enables direct communication as soon as both communication partners are ready. Experimental results show very good scaling properties and up to five times lower synchronization latency compared to a tuned message-based MPI implementation for the SCC.
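For readers unfamiliar with this MPI scheme, the following mpi4py sketch shows general active target synchronization (Post/Start/Complete/Wait) around a one-sided put between two ranks; it is a generic illustration on a standard MPI implementation, not the SCC-specific shared-memory protocol developed in the thesis:

```python
# Generic one-sided Put with general active target synchronization (PSCW).
# Run with: mpiexec -n 2 python pscw_put.py
# Illustration only; the thesis targets a non-cache-coherent SCC, not standard MPI.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Rank 1 exposes a window over its buffer; rank 0 exposes nothing.
buf = np.zeros(4, dtype='i') if rank == 1 else None
win = MPI.Win.Create(buf, comm=comm)

if rank == 0:
    data = np.arange(4, dtype='i')
    target = comm.Get_group().Incl([1])
    win.Start(target)        # begin access epoch towards rank 1
    win.Put(data, 1)         # write into rank 1's window
    win.Complete()           # end access epoch
elif rank == 1:
    origin = comm.Get_group().Incl([0])
    win.Post(origin)         # open exposure epoch for rank 0
    win.Wait()               # wait until rank 0 has completed its access
    print("received:", buf)

win.Free()
```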
For the communication itself, SCOSCo, a shared memory approach with software-managed cache coherence, is presented. Corresponding requirements for coherence that satisfy MPI's separate memory model are formulated, and a lightweight implementation exploiting SCC hardware and software features is developed. Despite a discovered malfunction in the SCC's memory subsystem, the experimental evaluation of the design reveals up to five times higher bandwidths and nearly four times lower latencies in micro-benchmarks compared to the SCC-tuned but message-based MPI library. For application benchmarks, such as a parallel 3D fast Fourier transform, the runtime share of communication can be reduced by a factor of up to five. In addition, this thesis postulates beneficial hardware concepts that would support software-managed coherence for one-sided communication on future non-cache-coherent architectures where coherence might be available only in local subdomains but not at the global processor level.
Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes.
Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming, especially for users who are not trained in software development. Furthermore, the multi-objective nature of environmental modeling for the assessment of climate change impacts and the various types of geospatial data (e.g., formats, scales, and georeferencing systems) add to the complexity.
Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a way to overcome these challenges and are increasingly demanded, especially by end users who are not IT experts. In this light, the major contributions of this thesis are:
(i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm.
(ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies.
(iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm.
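As a loose illustration of semantics-based composition (hypothetical service names and types, and a simple type-matching chain rather than the constraint-driven synthesis used in the thesis), consider this Python sketch:

```python
# Toy automatic composition: chain services by matching output and input types.
# Services, types, and the search itself are hypothetical simplifications.
from collections import deque

SERVICES = {
    "ReadGauge":     ({"StationID"}, "TimeSeries"),
    "SeaLevelTrend": ({"TimeSeries"}, "TrendModel"),
    "LoadDEM":       ({"Region"}, "ElevationGrid"),
    "FloodRiskMap":  ({"TrendModel", "ElevationGrid"}, "RiskMap"),
}

def compose(available, goal):
    """Breadth-first search for a service sequence that produces the goal type."""
    queue = deque([(frozenset(available), [])])
    seen = {frozenset(available)}
    while queue:
        types, plan = queue.popleft()
        if goal in types:
            return plan
        for name, (inputs, output) in SERVICES.items():
            if inputs <= types and output not in types:
                nxt = frozenset(types | {output})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

print(compose({"StationID", "Region"}, "RiskMap"))
```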
Simulationskonzept zur Nutzenvalidierung cyber-physischer Systeme in komplexen Fabrikumgebungen (Simulation concept for validating the benefits of cyber-physical systems in complex factory environments)
(2017)
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances, and is known to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. Also, the fact that our approach can be customized easily enabled us to swiftly adapt it to generate parallel schedules for multi-processor machines.
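A small, hedged sketch of the underlying idea (toy data and a simplified encoding, not the encodings from the paper): given recorded runtimes, an ASP program can pick per-solver time slices within a total budget so as to maximize the number of solved instances. Assuming the clingo Python module is available:

```python
# Toy solver-schedule optimization in ASP: allocate time slices to solvers
# within a total budget so that as many instances as possible are solved.
# Data and encoding are illustrative simplifications of the described approach.
import clingo

program = """
solver(s1;s2). instance(i1;i2;i3).
% recorded runtimes: time(Solver, Instance, Seconds)
time(s1,i1,5).  time(s1,i2,40). time(s2,i2,10). time(s2,i3,25).
budget(30). slice(0;5;10;25).

1 { alloc(S,T) : slice(T) } 1 :- solver(S).        % one time slice per solver
:- budget(B), #sum { T,S : alloc(S,T) } > B.       % respect the total budget
solved(I) :- time(S,I,R), alloc(S,T), R <= T.      % slice long enough to solve I
#maximize { 1,I : solved(I) }.
#show alloc/2. #show solved/1.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("schedule:", m))
```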
A multiple interpretation scheme is an ordered sequence of morphisms. The ordered multiple interpretation of a word is obtained by concatenating the images of that word in the given order of morphisms. The arbitrary multiple interpretation of a word is the semigroup generated by the images of that word. These interpretations are naturally extended to languages. Four types of ambiguity of multiple interpretation schemata on a language are defined: o-ambiguity, internal ambiguity, weakly external ambiguity and strongly external ambiguity. We investigate the problem of deciding whether a multiple interpretation scheme is ambiguous on regular languages.
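For illustration (with made-up morphisms), the ordered multiple interpretation of a word simply concatenates its images under each morphism in the given order; a minimal Python sketch:

```python
# Ordered multiple interpretation: concatenate the images of a word under an
# ordered sequence of morphisms. Morphisms are given letter-wise; example data is made up.

def apply_morphism(h, word):
    """Extend a letter-to-string map h homomorphically to a word."""
    return "".join(h[c] for c in word)

def ordered_interpretation(morphisms, word):
    """Concatenate the images of `word` in the given order of morphisms."""
    return "".join(apply_morphism(h, word) for h in morphisms)

h1 = {"a": "01", "b": "1"}
h2 = {"a": "x", "b": "yz"}

print(ordered_interpretation([h1, h2], "ab"))   # -> "011xyz"
```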
Algorithm selection (AS) techniques - which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently - have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly parameterized algorithm framework. Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate that AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.
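As a rough, hedged sketch of the per-instance selection idea at the core of such systems (toy features and labels and a plain scikit-learn classifier, not claspfolio 2 or AutoFolio itself):

```python
# Toy per-instance algorithm selection: learn to predict the fastest solver
# from cheap instance features. Data and features are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# instance features, e.g. [#variables, #clauses, clause/variable ratio]
X = np.array([[100,  400, 4.0],
              [5000, 9000, 1.8],
              [120,  600, 5.0],
              [4000, 7600, 1.9]])
# label: the solver that was fastest on each training instance
y = np.array(["solverA", "solverB", "solverA", "solverB"])

selector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_instance = np.array([[150, 700, 4.7]])
print("selected solver:", selector.predict(new_instance)[0])
```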