TY - THES A1 - Al-Areqi, Samih Taha Mohammed T1 - Semantics-based automatic geospatial service composition T1 - Semantikbasierte automatische Komposition von GIS-Diensten N2 - Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes. Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming, especially for users who are not trained in software development. Furthermore, considering the multi-objective nature of environmental modeling for the assessment of climate change impacts and the various types of geospatial data (e.g., formats, scales, and georeferencing systems) increases the complexity challenges. Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a solution to overcome these challenges and have become a frequent demand, especially by end users who are not IT experts. In this light, the major contributions of this thesis are: (i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm. (ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies. (iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm. N2 - Obwohl es gängige Praxis geworden ist, Anwendungen basierend auf der Wiederverwendung von existierenden Komponenten oder Diensten zu bauen, stellen technische Komplexität und semantische Herausforderungen Hindernisse beim Sicherstellen einer erfolgreichen und breiten Wiederverwendung von Komponenten und Diensten dar. In der geowissenschaftlichen Anwendungsdomäne sind die Hindernisse durch heterogene geografische Daten, fehlende Interoperabilität und komplexe Analyseprozesse besonders offensichtlich. Workflows manuell zu konstruieren und passende Dienste und Daten zu finden, welche die Nutzerabsichten und -präferenzen abdecken, ist schwierig und zeitaufwändig, besonders für Nutzer, die nicht in der Softwareentwicklung ausgebildet sind. Zudem erhöhen die verschiedenen Zielrichtungen der Umweltmodellierung für die Bewertung der Auswirkungen von Klimaänderungen und die unterschiedlichen Typen geografischer Daten (z.B. Formate, Skalierungen und Georeferenzsysteme) die Komplexität. Automatische Dienstkompositionsansätze, die Semantik-basierte Unterstützung im Prozess des Workflowdesigns zur Verfügung stellen, haben bewiesen, eine Lösung zur Bewältigung dieser Herausforderungen zu sein, und sind besonders von Endnutzern, die keine IT-Experten sind, zu einer häufigen Forderung geworden. Unter diesem Gesichtspunkt sind die Hauptbeiträge dieser Doktorarbeit: I. 
Vereinfachung der Wiederverwendung von Diensten und des Workflowdesigns von Klimafolgenanalysen durch Anwendung des Paradigmas des eXtreme Model-Driven Development (XMDD) II. Design eines semantischen Domänenmodells für Anwendungen der Klimafolgenanalysen, welches speziell entwickelte Dienste, Ontologien (die domänenspezifisches Vokabular zur Verfügung stellen, um Typen und Dienste zu beschreiben) und Eingabe-/Ausgabe-Annotationen der Dienste (unter Verwendung von Begriffen, die in den Ontologien definiert sind) enthält. III. Anwendung einer Constraint-getriebenen Methode für die automatische Komposition von Workflows zum Analysieren der Auswirkungen des Meeresspiegelanstiegs. Das Anwendungsszenario demonstriert die Auswirkung von Domänenmodellierungsentscheidungen auf die Ergebnisse und die Laufzeit des Synthesealgorithmus. KW - geospatial services KW - service composition KW - scientific workflows KW - semantic domain modeling KW - ontologies KW - climate impact analysis KW - GIS-Dienstkomposition KW - Wissenschaftliche Workflows KW - semantische Domänenmodellierung KW - Ontologien KW - Klimafolgenanalyse Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-402616 ER - TY - JOUR A1 - Lindauer, Marius A1 - Hoos, Holger A1 - Leyton-Brown, Kevin A1 - Schaub, Torsten T1 - Automatic construction of parallel portfolios via algorithm configuration JF - Artificial intelligence N2 - Since 2004, increases in computational power described by Moore's law have substantially been realized in the form of additional cores rather than through faster clock speeds. To make effective use of modern hardware when solving hard computational problems, it is therefore necessary to employ parallel solution strategies. In this work, we demonstrate how effective parallel solvers for propositional satisfiability (SAT), one of the most widely studied NP-complete problems, can be produced automatically from any existing sequential, highly parametric SAT solver. Our Automatic Construction of Parallel Portfolios (ACPP) approach uses an automatic algorithm configuration procedure to identify a set of configurations that perform well when executed in parallel. Applied to two prominent SAT solvers, Lingeling and clasp, our ACPP procedure identified 8-core solvers that significantly outperformed their sequential counterparts on a diverse set of instances from the application and hard combinatorial category of the 2012 SAT Challenge. We further extended our ACPP approach to produce parallel portfolio solvers consisting of several different solvers by combining their configuration spaces. Applied to the component solvers of the 2012 SAT Challenge gold medal winning SAT solver pfolioUZK, our ACPP procedures produced a significantly better-performing parallel SAT solver. KW - Algorithm configuration KW - Parallel SAT solving KW - Algorithm portfolios KW - Programming by optimization KW - Automated parallelization Y1 - 2016 U6 - https://doi.org/10.1016/j.artint.2016.05.004 SN - 0004-3702 SN - 1872-7921 VL - 244 SP - 272 EP - 290 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Baier, Thomas A1 - Di Ciccio, Claudio A1 - Mendling, Jan A1 - Weske, Mathias T1 - Matching events and activities by integrating behavioral aspects and label analysis JF - Software and systems modeling N2 - Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. 
These event data can be used to analyze the process using process mining techniques to discover the real process, to measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs. KW - Process mining KW - Event mapping KW - Business process intelligence KW - Constraint satisfaction KW - Declare KW - Natural language processing Y1 - 2018 U6 - https://doi.org/10.1007/s10270-017-0603-z SN - 1619-1366 SN - 1619-1374 VL - 17 IS - 2 SP - 573 EP - 598 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Przybylla, Mareen A1 - Romeike, Ralf T1 - Empowering learners with tools in CS education BT - physical computing in secondary schools JF - it - Information Technology N2 - In computer science, computer systems are both objects of investigation and tools that enable creative learning and design. Tools for learning have a long tradition in computer science education. Already in the late 1960s, Papert developed a concept which had an immense impact on the development of informal education in the following years: his theory of constructionism understands learning as a creative process of knowledge construction that is most effective when learners create something purposeful that they can try out, show around, discuss, analyse and receive praise for. By now, there are numerous learning and programming environments that are based on the constructionist ideas. Modern tools offer opportunities for students to learn in motivating ways and gain impressive results in programming games, animations, implementing 3D models or developing interactive objects. This article gives an overview of computer science education research related to tools and media to be used in educational settings. We analyse different types of tools with a special focus on the categorization and development of tools for student-adequate physical computing activities in the classroom. Research around the development and evaluation of tools and learning resources in the domain of physical computing is illustrated with the example of "My Interactive Garden", a constructionist learning and programming environment. It is explained how the results from empirical studies are integrated into the continuous development of the learning material. 
KW - tools KW - media KW - resources KW - computer science education KW - physical computing Y1 - 2018 U6 - https://doi.org/10.1515/itit-2017-0032 SN - 1611-2776 SN - 2196-7032 VL - 60 IS - 2 SP - 91 EP - 101 PB - De Gruyter CY - Berlin ER - TY - GEN A1 - Frank, Mario A1 - Kreitz, Christoph T1 - A theorem prover for scientific and educational purposes T2 - Electronic proceedings in theoretical computer science N2 - We present a prototype of an integrated reasoning environment for educational purposes. The presented tool is a fragment of a proof assistant and automated theorem prover. We describe the existing and planned functionality of the theorem prover and especially the functionality of the educational fragment. This currently supports working with terms of the untyped lambda calculus and addresses both undergraduate students and researchers. We show how the tool can be used to support the students' understanding of functional programming and discuss general problems related to the process of building theorem proving software that aims at supporting both research and education. Y1 - 2018 U6 - https://doi.org/10.4204/EPTCS.267.4 SN - 2075-2180 IS - 267 SP - 59 EP - 69 PB - Open Publishing Association CY - Sydney ER - TY - CHAP A1 - Kiy, Alexander A1 - Knoth, Alexander Henning A1 - Müller, Ina ED - Harris-Huemmert, Susan ED - Pohlenz, Philipp ED - Mitterauer, Lukas T1 - ReflectUP-App: Situative und kontextbezogene Evaluation des Studieneinstiegs T2 - Digitalisierung der Hochschullehre. Neue Anforderungen an die Evaluation? Y1 - 2018 SN - 978-3-8309-3807-1 SP - 85 EP - 102 PB - Waxmann CY - Münster ER - TY - GEN A1 - Bordihn, Henning A1 - Nagy, Benedek A1 - Vaszil, György T1 - Preface: Non-classical models of automata and applications VIII T2 - RAIRO - Theoretical informatics and applications Y1 - 2018 U6 - https://doi.org/10.1051/ita/2018019 SN - 0988-3754 SN - 1290-385X VL - 52 IS - 2-4 SP - 87 EP - 88 PB - EDP Sciences CY - Les Ulis ER - TY - GEN A1 - Afantenos, Stergos A1 - Peldszus, Andreas A1 - Stede, Manfred T1 - Comparing decoding mechanisms for parsing argumentative structures T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. 
The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1062 KW - argumentation structure KW - argument mining KW - parsing Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-470527 SN - 1866-8372 IS - 1062 ER - TY - JOUR A1 - Prescher, Denise A1 - Bornschein, Jens A1 - Köhlmann, Wiebke A1 - Weber, Gerhard T1 - Touching graphical applications BT - bimanual tactile interaction on the HyperBraille pin-matrix display JF - Universal Access in the Information Society N2 - Novel two-dimensional tactile displays enable blind users to not only get access to the textual but also to the graphical content of a graphical user interface. Due to the higher amount of information that can be presented in parallel, orientation and exploration can be more complex. In this paper we present the HyperBraille system, which consists of a pin-matrix device as well as a graphical screen reader providing the user with appropriate presentation and interaction possibilities. To allow for a detailed analysis of bimanual interaction strategies on a pin-matrix device, we conducted two user studies with a total of 12 blind people. The task was to fill in .pdf forms on the pin-matrix device by using different input methods, namely gestures, built-in hardware buttons as well as a conventional PC keyboard. The forms were presented in a semigraphic view type that not only contains Braille but also tactile widgets in a spatial arrangement. While completion time and error rate partly depended on the chosen input method, the usage of special reading strategies seemed to be independent of it. A direct comparison of the system and a conventional assistive technology (screen reader with single-line Braille device) showed that interaction on the pin-matrix device can be very efficient if the user is trained. The two-dimensional output can improve access to .pdf forms with insufficient accessibility as the mapping of input controls and the corresponding labels can be supported by a spatial presentation. KW - Planar tactile display KW - Blind users KW - pdf forms KW - Screen reader KW - Gesture input KW - Key input Y1 - 2018 U6 - https://doi.org/10.1007/s10209-017-0538-8 SN - 1615-5289 SN - 1615-5297 VL - 17 IS - 2 SP - 391 EP - 409 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Afantenos, Stergos A1 - Peldszus, Andreas A1 - Stede, Manfred T1 - Comparing decoding mechanisms for parsing argumentative structures JF - Argument & Computation N2 - Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. 
Using the ‘argumentative microtext corpus’ [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801–815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance. KW - Argumentation structure KW - argument mining KW - parsing Y1 - 2018 U6 - https://doi.org/10.3233/AAC-180033 SN - 1946-2166 SN - 1946-2174 VL - 9 IS - 3 SP - 177 EP - 192 PB - IOS Press CY - Amsterdam ER - TY - GEN A1 - Sahlmann, Kristina A1 - Schwotzer, Thomas T1 - Ontology-based virtual IoT devices for edge computing T2 - Proceedings of the 8th International Conference on the Internet of Things N2 - An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing and network capacity. Manual network and service management of IoT devices is challenging. We propose the usage of an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e., the MQTT protocol and Semantic Web technologies. We herein introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices are enabled to aggregate capabilities of IoT devices, derive new services by inference, delegate requests/responses and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the network edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi. KW - Internet of Things KW - Edge Computing KW - oneM2M Ontology KW - M2M KW - Semantic Interoperability KW - MQTT Y1 - 2018 SN - 978-1-4503-6564-2 U6 - https://doi.org/10.1145/3277593.3277597 SP - 1 EP - 7 PB - Association for Computing Machinery CY - New York ER - TY - GEN A1 - Böhne, Sebastian A1 - Kreitz, Christoph T1 - Learning how to prove BT - from the Coq proof assistant to textbook style T2 - Electronic proceedings in theoretical computer science N2 - We have developed an alternative approach to teaching computer science students how to prove. First, students are taught how to prove theorems with the Coq proof assistant. In a second, more difficult, step students will transfer their acquired skills to the area of textbook proofs. In this article we present a realisation of the second step. Proofs in Coq have a high degree of formality while textbook proofs have only a medium one. Therefore our key idea is to reduce the degree of formality from the level of Coq to textbook proofs in several small steps. For that purpose we introduce three proof styles between Coq and textbook proofs, called line by line comments, weakened line by line comments, and structure faithful proofs. While this article is mostly conceptual, we also report on experiences with putting our approach into practice. 
Y1 - 2018 U6 - https://doi.org/10.4204/EPTCS.267.1 SN - 2075-2180 IS - 267 SP - 1 EP - 18 PB - Open Publishing Association CY - Sydney ER - TY - JOUR A1 - Giannini, Paola A1 - Richter, Tim A1 - Servetto, Marco A1 - Zucca, Elena T1 - Tracing sharing in an imperative pure calculus JF - Science of computer programming N2 - We introduce a type and effect system, for an imperative object calculus, which infers sharing possibly introduced by the evaluation of an expression, represented as an equivalence relation among its free variables. This direct representation of sharing effects at the syntactic level allows us to express in a natural way, and to generalize, widely-used notions in the literature, notably uniqueness and borrowing. Moreover, the calculus is pure in the sense that reduction is defined on language terms only, since they directly encode the store. The advantage of this non-standard execution model with respect to a behaviorally equivalent standard model using a global auxiliary structure is that reachability relations among references are partly encoded by scoping. (C) 2018 Elsevier B.V. All rights reserved. KW - Imperative calculi KW - Sharing KW - Type and effect systems Y1 - 2018 U6 - https://doi.org/10.1016/j.scico.2018.11.007 SN - 0167-6423 SN - 1872-7964 VL - 172 SP - 180 EP - 202 PB - Elsevier CY - Amsterdam ER - TY - CHAP A1 - Kiy, Alexander A1 - Lucke, Ulrike ED - de Witt, Claudia ED - Gloerfeld, Christina T1 - Mobile Unterstützung im Studienalltag zwischen Generalität und Individualität T2 - Handbuch Mobile Learning N2 - Mobile Endgeräte und Applikationen (Apps) sind dank vielfältiger Kommunikations-, Informations- und Assistenzfunktionen zu einem unverzichtbaren Bestandteil unseres täglichen Lebens geworden. Inzwischen hat sich insbesondere im Hochschulumfeld eine bunte Vielfalt an mobilen Unterstützungsangeboten etabliert, beginnend bei zentral angebotenen Uni-Apps bis hin zu unterschiedlichen Apps zur Ausgestaltung einzelner Lehrveranstaltungen oder individueller Lehr- und Lernszenarien. Angesichts der großen Aufwände zur Entwicklung, Distribution und Pflege mobiler Anwendungen ist ein Einsatz für eine möglichst große Zielgruppe wünschenswert. Dies kann jedoch mit dem Charakter mobiler Endgeräte als persönliche, individualisierte Assistenten kollidieren. In diesem Beitrag werden entlang dieses Spektrums zwischen (fach-)spezifischen Einzellösungen und breiten Allroundern verschiedene mobile Unterstützungsangebote aus dem Hochschulbereich vorgestellt, hinsichtlich ihres Einsatzes kontextuell eingeordnet und systematisiert. Dies umfasst mobile Anwendungen, die allgemeine organisatorische Aspekte des Studiums, bestimmte Felder wie die Studieneingangsphase oder die konkrete Begleitung hybrider Lernszenarien fokussieren. Es schließt sich eine App-Auswahl an, die fachspezifischen Aspekten Rechnung trägt und in der Inhalte in Form von Serious Games, Simulationen und Inhaltsmodulen aufbereitet sind. Neben Lehre und Studium wird auch die Forschung in den Fokus gerückt, wo Apps gleichermaßen als Forschungsgegenstand und Datenerhebungsinstrument wirken. Aus der Fülle dieser Entwicklungen resultiert eine App-Vielfalt, die verschiedene Herausforderungen aufwirft. Der Beitrag stellt die spezifischen Herausforderungen zusammen und spricht Empfehlungen aus. Dabei werden sowohl organisatorische, inhaltliche und technische Fragestellungen thematisiert als auch rechtliche Gesichtspunkte bezüglich Datenschutz und Copyright tangiert. 
KW - Mobile Learning KW - mobile Applikationen KW - Apps KW - Hochschul-Apps KW - TPACK KW - SAMR Y1 - 2018 SN - 978-3-658-19123-8 U6 - https://doi.org/10.1007/978-3-658-19123-8_37 VL - 2018 SP - 777 EP - 808 PB - Springer VS CY - Wiesbaden ER - TY - THES A1 - Feinbube, Frank T1 - Ansätze zur Integration von Beschleunigern ins Betriebssystem Y1 - 2018 ER - TY - JOUR A1 - Bordihn, Henning A1 - Mitrana, Victor A1 - Negru, Maria C. A1 - Paun, Andrei A1 - Paun, Mihaela T1 - Small networks of polarized splicing processors are universal JF - Natural computing : an innovative journal bridging biosciences and computer sciences ; an international journal N2 - In this paper, we consider the computational power of a new variant of networks of splicing processors in which each processor as well as the data navigating throughout the network are now considered to be polarized. While the polarization of every processor is predefined (negative, neutral, positive), the polarization of data is dynamically computed by means of a valuation mapping. Consequently, the protocol of communication is naturally defined by means of this polarization. We show that networks of polarized splicing processors (NPSP) of size 2 are computationally complete, which immediately settles the question of designing computationally complete NPSPs of minimal size. With two more nodes we can simulate every nondeterministic Turing machine without increasing the time complexity. Particularly, we prove that NPSP of size 4 can accept all languages in NP in polynomial time. Furthermore, another computational model that is universal, namely the 2-tag system, can be simulated by NPSP of size 3 preserving the time complexity. All these results can be obtained with NPSPs with valuations in the set {-1, 0, 1} as well. We finally show that Turing machines can simulate a variant of NPSPs and discuss the time complexity of this simulation. KW - Computing with DNA KW - Splicing KW - Splicing processor KW - Polarization KW - 2-tag system KW - Turing machine Y1 - 2018 U6 - https://doi.org/10.1007/s11047-018-9691-0 SN - 1567-7818 SN - 1572-9796 VL - 17 IS - 4 SP - 799 EP - 809 PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Tscherejkina, Anna A1 - Morgiel, Anna A1 - Moebert, Tobias T1 - Computergestütztes Training von sozio-emotionalen Kompetenzen durch Minispiele JF - E-Learning Symposium 2018 N2 - Das Training sozioemotionaler Kompetenzen ist gerade für Menschen mit Autismus nützlich. Ein solches Training kann mithilfe einer spielbasierten Anwendung effektiv gestaltet werden. Zwei Minispiele, Mimikry und Emo-Mahjong, wurden realisiert und hinsichtlich User Experience evaluiert. Die jeweiligen Konzepte und die Evaluationsergebnisse sollen hier vorgestellt werden. KW - Computergestütztes Training KW - User Experience KW - Digital Game Based Learning KW - Autismus Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-421937 SP - 41 EP - 52 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - THES A1 - Przybylla, Mareen T1 - From Embedded Systems to Physical Computing: Challenges of the “Digital World” in Secondary Computer Science Education T1 - Von Eingebetteten Systemen zu Physical Computing: Herausforderungen der “Digitalen Welt” in der informatischen Bildung im Sekundarbereich N2 - Physical computing covers the design and realization of interactive objects and installations and allows learners to develop concrete, tangible products of the real world, which arise from their imagination. 
This can be used in computer science education to provide learners with interesting and motivating access to the different topic areas of the subject in constructionist and creative learning environments. However, if at all, physical computing has so far mostly been taught in afternoon clubs or other extracurricular settings. Thus, for the majority of students so far there are no opportunities to design and create their own interactive objects in regular school lessons. Despite its increasing popularity also for schools, the topic has not yet been clearly and sufficiently characterized in the context of computer science education. The aim of this doctoral thesis therefore is to clarify physical computing from the perspective of computer science education and to adequately prepare the topic both content-wise and methodologically for secondary school teaching. For this purpose, teaching examples, activities, materials and guidelines for classroom use are developed, implemented and evaluated in schools. In the theoretical part of the thesis, first the topic is examined from a technical point of view. A structured literature analysis shows that basic concepts used in physical computing can be derived from embedded systems, which are the core of a large field of different application areas and disciplines. Typical methods of physical computing in professional settings are analyzed and, from an educational perspective, elements suitable for computer science teaching in secondary schools are extracted, e. g. tinkering and prototyping. The investigation and classification of suitable tools for school teaching show that microcontrollers and mini computers, often with extensions that greatly facilitate the handling of additional components, are particularly attractive tools for secondary education. Considering the perspectives of science, teachers, students and society, in addition to general design principles, exemplary teaching approaches for school education and suitable learning materials are developed and the design, production and evaluation of a physical computing construction kit suitable for teaching is described. In the practical part of this thesis, with “My Interactive Garden”, an exemplary approach to integrate physical computing in computer science teaching is tested and evaluated in different courses and refined based on the findings in a design-based research approach. In a series of workshops on physical computing, which is based on a concept for constructionist professional development that is developed specifically for this purpose, teachers are empowered and encouraged to develop and conduct physical computing lessons suitable for their particular classroom settings. Based on their in-class experiences, a process model of physical computing teaching is derived. Interviews with those teachers illustrate that benefits of physical computing, including the tangibility of crafted objects and creativity in the classroom, outweigh possible drawbacks like longer preparation times, technical difficulties or difficult assessment. Hurdles in the classroom are identified and possible solutions discussed. Empirical investigations in the different settings reveal that “My Interactive Garden” and physical computing in general have a positive impact, among others, on learner motivation, fun and interest in class and perceived competencies. 
Finally, the results from all evaluations are combined to evaluate the design principles for physical computing teaching and to provide a perspective on the development of decision-making aids for physical computing activities in school education. N2 - Physical Computing ist die Gestaltung interaktiver Objekte und Installationen und ermöglicht Lernenden, konkrete, greifbare Produkte der realen Welt zu schaffen, die ihrer eigenen Vorstellung entsprechen. Dies kann in der informatischen Bildung genutzt werden, um Lernenden einen interessanten und motivierenden Zugang zu den verschiedenen Themengebieten des Lerngegenstandes in konstruktionistischen und kreativen Lernumgebungen anzubieten. Bisher wurde Physical Computing allerdings, wenn überhaupt, vorrangig in Nachmittagsaktivitäten und anderen extracurricularen Kontexten unterrichtet. Daher hat ein Großteil aller Schülerinnen und Schüler bisher keine Gelegenheit, im Rahmen von Schulunterricht selbst gestalterisch tätig zu werden und interaktive Objekte herzustellen. Trotz zunehmender Popularität, auch in Schulen, wurde das Thema bisher im Kontext der informatischen Bildung nicht hinreichend klar charakterisiert. Ziel dieser Dissertation ist es daher, Physical Computing aus informatikdidaktischer Sicht zu klären und sowohl inhaltlich als auch methodisch adäquat für den Schulunterricht in den Sekundarstufen aufzubereiten. Dazu werden Unterrichtsbeispiele, -aktivitäten, -materialien und -empfehlungen entwickelt, in Schulen eingesetzt und evaluiert. Im theoretischen Teil der Arbeit wird das Thema zunächst aus fachlicher Perspektive untersucht. Eine strukturierte Literaturanalyse zeigt, dass grundlegende Konzepte des Physical Computings aus dem Fachgebiet Eingebettete Systeme abgeleitet werden können, welches den Kern diverser Anwendungsgebiete und Disziplinen bildet. Typische Methoden des Physical Computings werden analysiert und geeignete Elemente für den Informatikunterricht der Sekundarstufen werden aus didaktischer Perspektive herausgearbeitet, beispielsweise Tinkering und Prototyping. Bei der Untersuchung und Klassifikation geeigneter Werkzeuge für den Schulunterricht kristallisieren sich Mikrocontroller und Mini-Computer, oft mit Erweiterungen zur deutlichen Vereinfachung der Handhabung zusätzlicher Komponenten, als besonders attraktive Werkzeuge für die Sekundarstufen heraus. Unter Berücksichtigung der Perspektiven der Fachwissenschaft, Lehrer, Schüler und Gesellschaft werden zusätzlich zu allgemeinen Gestaltungsprinzipien auch beispielhafte Unterrichtsansätze für die schulische Bildung und geeignete Lernmaterialien entwickelt und der Entwurf, die Produktion und Evaluation eines für den Unterricht geeigneten Physical-Computing-Baukastens beschrieben. Im praktischen Teil der Arbeit wird in einem Design-Based-Research-Ansatz mit „My Interactive Garden“ eine beispielhafte Umsetzung von Physical Computing im Informatikunterricht in verschiedenen Kursen getestet, evaluiert und entsprechend der Erkenntnisse überarbeitet. In einer Workshopreihe zum Thema Physical Computing, welche auf einem eigens entwickelten konstruktionistischen Lehrerfortbildungskonzept basiert, werden Lehrer befähigt und ermutigt, für ihre konkreten Unterrichtssituationen geeigneten Physical-Computing-Unterricht zu planen und durchzuführen. Aus ihren Unterrichtserfahrungen wird ein Prozessmodell für Physical-Computing-Unterricht abgeleitet. Interviews mit diesen Lehrern illustrieren, dass Vorteile des Physical Computings, z. B. 
die Greifbarkeit gebastelter Objekte und Kreativität im Unterricht, mögliche Nachteile wie längere Vorbereitungszeiten, technische Schwierigkeiten oder schwierige Leistungsbewertung, überwiegen. Hürden im Unterricht werden identifiziert und mögliche Ansätze, diese zu umgehen, diskutiert. Empirische Untersuchungen in den verschiedenen Unterrichtsumsetzungen zeigen, dass sowohl „My Interactive Garden" als auch Physical Computing im Allgemeinen einen positiven Einfluss unter anderem auf Lernermotivation, Spaß und Interesse im Unterricht und wahrgenommene Kompetenzen haben. Abschließend werden die Ergebnisse aller Untersuchungen zusammengeführt, um die Gestaltungsprinzipien für Physical-Computing-Unterricht zu evaluieren und einen Ausblick auf die Entwicklung von Entscheidungshilfen für Physical-Computing-Aktivitäten in der schulischen Bildung zu geben. KW - secondary computer science education KW - embedded systems KW - physical computing KW - educational reconstruction KW - design principles KW - classroom material KW - tools for teaching KW - informatische Bildung im Sekundarbereich KW - eingebettete Systeme KW - Physical Computing KW - didaktische Rekonstruktion KW - Entwurfsprinzipien KW - Schulmaterial KW - Unterrichtswerkzeuge Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-418339 ER - TY - JOUR A1 - Kiy, Alexander T1 - Digitale Medien & Hochschul-Cloud: Eine vielversprechende Verbindung JF - eleed N2 - Ob Online-Kurse, videobasierte Lehrangebote, mobile Applikationen, eigenentwickelte oder kommerzielle Web 2.0-Anwendungen, die Fülle digitaler Unterstützungsangebote ist kaum zu überblicken. Dabei bieten mobile Endgeräte, Web-Anwendungen und Apps Chancen, Lehre, Studium und Forschung maßgeblich neu zu gestalten. Im Beitrag wird ein Beschreibungsrahmen für die mediendidaktische Ausgestaltung von Lehr-, Lern- und Forschungsarrangements vorgestellt, der die technischen Gesichtspunkte hervorhebt. Anschließend werden unterschiedliche Nutzungsszenarien unter Einbeziehung digitaler Medien skizziert. Diese werden als Ausgangspunkt genommen, um das Konzept einer Systemarchitektur vorzustellen, die es zum einen ermöglicht, beliebige Applikationen automatisiert bereitzustellen, und zum anderen die anfallenden Nutzendendaten plattformübergreifend zu aggregieren und für eine Ausgestaltung virtueller Lehr- und Lernräume zu nutzen. KW - e-learning KW - Digitale Medien KW - Berliner Modell KW - TPACK KW - Hochschullehre KW - Hochschul-Cloud KW - xAPI KW - SaaS Y1 - 2018 UR - https://eleed.campussource.de/archive/se2018/4659 IS - 12 ER - TY - GEN A1 - Marwecki, Sebastian A1 - Baudisch, Patrick T1 - Scenograph BT - Fitting Real-Walking VR Experiences into Various Tracking Volumes T2 - UIST '18: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology N2 - When developing a real-walking virtual reality experience, designers generally create virtual locations to fit a specific tracking volume. Unfortunately, this prevents the resulting experience from running on a smaller or differently shaped tracking volume. To address this, we present a software system called Scenograph. The core of Scenograph is a tracking volume-independent representation of real-walking experiences. Scenograph instantiates the experience to a tracking volume of given size and shape by splitting the locations into smaller ones while maintaining narrative structure. 
In our user study, participants' ratings of realism decreased significantly when existing techniques were used to map a 25 m² experience to a 9 m² and an L-shaped 8 m² tracking volume. In contrast, ratings did not differ when Scenograph was used to instantiate the experience. KW - Virtual reality KW - real-walking KW - locomotion Y1 - 2018 SN - 978-1-4503-5948-1 U6 - https://doi.org/10.1145/3242587.3242648 SP - 511 EP - 520 PB - Association for Computing Machinery CY - New York ER - TY - GEN A1 - Klieme, Eric A1 - Tietz, Christian A1 - Meinel, Christoph T1 - Beware of SMOMBIES BT - Verification of Users based on Activities while Walking T2 - The 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom 2018)/the 12th IEEE International Conference on Big Data Science and Engineering (IEEE BigDataSE 2018) N2 - Several studies have evaluated the user's style of walking for the verification of a claimed identity and showed high authentication accuracies in many settings. In this paper we present a system that successfully verifies a user's identity based on many real-world smartphone placements and on previously unconsidered interactions while walking. Our contribution is the distinction of all considered activities into three distinct subsets and a specific one-class Support Vector Machine per subset. Using sensor data of 30 participants collected in a semi-supervised study approach, we prove that unsupervised verification is possible with very low false-acceptance and false-rejection rates. We furthermore show that these subsets can be distinguished with a high accuracy and demonstrate that this system can be deployed on off-the-shelf smartphones. KW - gait KW - authentication KW - smartphone KW - activities KW - verification KW - behavioral KW - continuous Y1 - 2018 SN - 978-1-5386-4387-7 SN - 978-1-5386-4389-1 U6 - https://doi.org/10.1109/TrustCom/BigDataSE.2018.00096 SN - 2324-9013 SP - 651 EP - 660 PB - IEEE CY - New York ER - TY - CHAP A1 - Keil, Reinhard A1 - Konert, Johannes A1 - Damnik, Gregor A1 - Gierl, Mark J. A1 - Proske, Antje A1 - Körndle, Hermann A1 - Narciss, Susanne A1 - Wahl, Marina A1 - Hölscher, Michael A1 - Mariani, Ennio A1 - Jaisli, Isabel A1 - Tscherejkina, Anna A1 - Morgiel, Anna A1 - Moebert, Tobias A1 - Herbstreit, Stephanie A1 - Mäker, Daniela A1 - Szalai, Cynthia A1 - Braun, Iris A1 - Kapp, Felix A1 - Hara, Tenshi C. A1 - Kubica, Tommy A1 - Stumpf, Sarah ED - Lucke, Ulrike ED - Strickroth, Sven T1 - E-Learning Symposium 2018 BT - Innovation und Nachhaltigkeit – (k)ein Gegensatz?! N2 - In den vergangenen Jahren sind viele E-Learning-Innovationen entstanden. Einige davon wurden auf den vergangenen E-Learning Symposien der Universität Potsdam präsentiert: Das erste E-Learning Symposium im Jahr 2012 konzentrierte sich auf unterschiedliche Möglichkeiten der Studierendenaktivierung und Lehrgestaltung. Das Symposium 2014 rückte vor allem die Studierenden ins Zentrum der Aufmerksamkeit. Im Jahr 2016 kam es durch das Zusammengehen des Symposiums mit der DeLFI-Tagung zu einer Fokussierung auf technische Innovationen. Doch was ist aus den Leuchttürmen von gestern geworden, und brauchen wir überhaupt noch neue Leuchttürme? 
Das Symposium setzt sich in diesem Jahr unter dem Motto „Innovation und Nachhaltigkeit – (k)ein Gegensatz?" mit mediengestützten Lehr- und Lernprozessen im universitären Kontext auseinander und reflektiert aktuelle technische sowie didaktische Entwicklungen mit Blick auf deren mittel- bis langfristigen Einsatz in der Praxis. Dieser Tagungsband zum E-Learning Symposium 2018 an der Universität Potsdam beinhaltet eine Mischung von Forschungs- und Praxisbeiträgen aus verschiedenen Fachdisziplinen und eröffnet vielschichtige Perspektiven auf das Thema E-Learning. Dabei werden sowohl die Vielfalt der didaktischen Einsatzszenarien als auch die Potentiale von Werkzeugen und Methoden der Informatik in ihrem Zusammenspiel beleuchtet. In seiner Keynote widmet sich Reinhard Keil dem Motto des Symposiums und geht der Nachhaltigkeit bei E-Learning-Projekten auf den Grund. Dabei analysiert und beleuchtet er anhand seiner über 15-jährigen Forschungspraxis die wichtigsten Wirkfaktoren und formuliert Empfehlungen zur Konzeption von E-Learning-Projekten. Im Gegensatz zu rein auf Kostenersparnis ausgerichteten (hochschul-)politischen Forderungen proklamiert er den Ansatz der hypothesengeleiteten Technikgestaltung, in der Nachhaltigkeit als Leitfrage oder Forschungsstrategie verstanden werden kann. In eine ähnliche Richtung geht der Beitrag von Iris Braun et al., die über Erfolgsfaktoren beim Einsatz von Audience Response Systemen in der universitären Lehre berichten. Ein weiteres aktuelles Thema, sowohl für die Bildungstechnologie als auch in den Bildungswissenschaften allgemein, ist die Kompetenzorientierung und -modellierung. Hier geht es darum, (Problemlöse-)Fähigkeiten gezielt zu beschreiben und in den Mittelpunkt der Lehre zu stellen. Johannes Konert stellt in einem eingeladenen Vortrag zwei Projekte vor, die den Prozess beginnend bei der Definition von Kompetenzen, deren Modellierung in einem semantischen maschinenlesbaren Format bis hin zur Erarbeitung von Methoden zur Kompetenzmessung und der elektronischen Zertifizierung aufzeigen. Dabei geht er auf technische Möglichkeiten, aber auch Grenzen ein. Auf einer spezifischeren Ebene beschäftigt sich Sarah Stumpf mit digitalen bzw. mediendidaktischen Kompetenzen im Lehramtsstudium und stellt ein Framework für die Förderung ebensolcher Kompetenzen bei angehenden Lehrkräften vor. Der Einsatz von E-Learning birgt noch einige Herausforderungen. Dabei geht es oft um die Verbindung von Didaktik und Technik, den Erhalt von Aufmerksamkeit oder den Aufwand für das Erstellen von interaktiven Lehr- und Lerninhalten. Drei Beiträge in diesem Tagungsband beschäftigen sich mit dieser Thematik in unterschiedlichen Kontexten und zeigen Best-Practices und Lösungsansätze auf: Der Beitrag von Martina Wahl und Michael Hölscher behandelt den besonderen Kontext von Blended Learning-Szenarien in berufsbegleitenden Studiengängen. Um die Veröffentlichung eines global frei verfügbaren Onlinekurses abseits der großen MOOC-Plattformen und die didaktischen Herausforderungen auch hinsichtlich der Motivation geht es im Beitrag von Ennio Mariani und Isabel Jaisli. Schließlich schlagen Gregor Damnik et al. die automatische Erzeugung von Aufgaben zur Erhöhung von Interaktivität und Adaptivität in digitalen Lernressourcen vor, um den teilweise erheblichen Erstellungsaufwand zu reduzieren. Zum Thema E-Learning zählen auch immer mobile Apps bzw. Spiele. 
Gleich zwei Beiträge beschäftigen sich mit dem Einsatz von E-Learning-Tools im Gesundheitskontext: Anna Tscherejkina und Anna Morgiel stellen in ihrem Beitrag Minispiele zum Training von sozio-emotionalen Kompetenzen für Menschen mit Autismus vor, und Stephanie Herbstreit et al. berichten vom Einsatz einer mobilen Lern-App zur Verbesserung von klinisch-praktischem Unterricht. KW - E-Learning KW - Onlinelehre KW - Game-based learning KW - Mobiles Lernen Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-420711 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - CHAP A1 - Seegerer, Stefan A1 - Romeike, Ralf A1 - Tillmann, Alexander A1 - Krömker, Detlef A1 - Horn, Florian A1 - Gattinger, Thorsten A1 - Weicker, Karsten A1 - Schmitz, Dennis A1 - Moldt, Daniel A1 - Röpke, René A1 - Larisch, Kathrin A1 - Schroeder, Ulrik A1 - Keverpütz, Claudia A1 - Küppers, Bastian A1 - Striewe, Michael A1 - Kramer, Matthias A1 - Grillenberger, Andreas A1 - Frede, Christiane A1 - Knobelsdorf, Maria A1 - Greven, Christoph ED - Bergner, Nadine ED - Röpke, René ED - Schroeder, Ulrik ED - Krömker, Detlef T1 - Hochschuldidaktik der Informatik HDI 2018 BT - 8. Fachtagung des GI-Fachbereichs Informatik und Ausbildung/Didaktik der Informatik ; 12.-13. September 2018 an der Goethe-Universität Frankfurt am Main T2 - Commentarii informaticae didacticae (CID) N2 - Die 8. Fachtagung für Hochschuldidaktik der Informatik (HDI) fand im September 2018 zusammen mit der Deutschen E-Learning Fachtagung Informatik (DeLFI) unter dem gemeinsamen Motto „Digitalisierungswahnsinn? - Wege der Bildungstransformationen“ in Frankfurt statt. Dabei widmet sich die HDI allen Fragen der informatischen Bildung im Hochschulbereich. Schwerpunkte bildeten in diesem Jahr u. a.: - Analyse der Inhalte und anzustrebenden Kompetenzen in Informatikveranstaltungen - Programmieren lernen & Einstieg in Softwareentwicklung - Spezialthemen: Data Science, Theoretische Informatik und Wissenschaftliches Arbeiten Die Fachtagung widmet sich ausgewählten Fragestellungen dieser Themenkomplexe, die durch Vorträge ausgewiesener Experten und durch eingereichte Beiträge intensiv behandelt werden. T3 - Commentarii informaticae didacticae (CID) - 12 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-413542 SN - 978-3-86956-435-7 SN - 1868-0844 SN - 2191-1940 IS - 12 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Li, Yuanqing A1 - Chen, Li A1 - Nofal, Issam A1 - Chen, Mo A1 - Wang, Haibin A1 - Liu, Rui A1 - Chen, Qingyu A1 - Krstić, Miloš A1 - Shi, Shuting A1 - Guo, Gang A1 - Baeg, Sang H. A1 - Wen, Shi-Jie A1 - Wong, Richard T1 - Modeling and analysis of single-event transient sensitivity of a 65 nm clock tree JF - Microelectronics reliability N2 - The soft error rate (SER) due to heavy-ion irradiation of a clock tree is investigated in this paper. A method for clock tree SER prediction is developed, which employs a dedicated soft error analysis tool to characterize the single-event transient (SET) sensitivities of clock inverters and other commercial tools to calculate the SER through fault-injection simulations. A test circuit including a flip-flop chain and clock tree in a 65 nm CMOS technology is developed through the automatic ASIC design flow. This circuit is analyzed with the developed method to calculate its clock tree SER. In addition, this circuit is implemented in a 65 nm test chip and irradiated by heavy ions to measure its SER resulting from the SETs in the clock tree. 
The experimental and calculated results of this case study show good correlation, which verifies the effectiveness of the developed method. KW - Clock tree KW - Modeling KW - Single-event transient (SET) Y1 - 2018 U6 - https://doi.org/10.1016/j.microrel.2018.05.016 SN - 0026-2714 VL - 87 SP - 24 EP - 32 PB - Elsevier CY - Oxford ER - TY - GEN A1 - Krstić, Miloš A1 - Jentzsch, Anne-Kristin T1 - Reliability, safety and security of the electronics in automated driving vehicles - joint lab lecturing approach T2 - 2018 12TH European Workshop on Microelectronics Education (EWME) N2 - This paper proposes an education approach for master and bachelor students to enhance their skills in the area of reliability, safety and security of the electronic components in automated driving. The approach is based on the active synergetic work of research institutes, academia and industry in the frame of a joint lab. As an example, a jointly organized summer school with this focus is presented and elaborated. KW - reliability KW - safety KW - security KW - automated driving KW - joint lab Y1 - 2018 SN - 978-1-5386-1157-9 SP - 21 EP - 22 PB - IEEE CY - New York ER - TY - JOUR A1 - Brewka, Gerhard A1 - Ellmauthaler, Stefan A1 - Kern-Isberner, Gabriele A1 - Obermeier, Philipp A1 - Ostrowski, Max A1 - Romero, Javier A1 - Schaub, Torsten A1 - Schieweck, Steffen T1 - Advanced solving technology for dynamic and reactive applications JF - Künstliche Intelligenz Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0538-8 SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 199 EP - 200 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Schaub, Torsten A1 - Woltran, Stefan T1 - Answer set programming unleashed! JF - Künstliche Intelligenz N2 - Answer Set Programming enjoys increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al. Commun ACM 54(12):92–103, (2011)] is seeing a rapid proliferation in academia and industry due to its easy and flexible way to model and solve knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. They are represented as answer sets, each being a set of atoms representing a solution. The declarative approach of ASP allows a user to concentrate on a problem’s specification rather than the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques since complex problems can be expressed in a succinct and elaboration tolerant way. This is eased by the tuning of ASP’s modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al. AI Mag 37(3):53–68, 2016; Falkner et al. Industrial applications of answer set programming. 
Künstliche Intelligenz (2018)] Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0550-z SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 105 EP - 108 PB - Springer CY - Heidelberg ER - TY - GEN A1 - Brewka, Gerhard A1 - Schaub, Torsten A1 - Woltran, Stefan T1 - Interview with Gerhard Brewka T2 - Künstliche Intelligenz N2 - This interview with Gerhard Brewka was conducted by correspondence in May 2018. The question set was compiled by Torsten Schaub and Stefan Woltran. Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0549-5 SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 219 EP - 221 PB - Springer CY - Heidelberg ER - TY - GEN A1 - Schaub, Torsten A1 - Woltran, Stefan T1 - Special issue on answer set programming T2 - Künstliche Intelligenz Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0554-8 SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 101 EP - 103 PB - Springer CY - Heidelberg ER - TY - GEN A1 - Schäpers, Björn A1 - Niemueller, Tim A1 - Lakemeyer, Gerhard A1 - Gebser, Martin A1 - Schaub, Torsten T1 - ASP-Based Time-Bounded Planning for Logistics Robots T2 - Twenty-Eighth International Conference on Automated Planning and Scheduling (ICAPS 2018) N2 - Manufacturing industries are undergoing a major paradigm shift towards more autonomy. Automated planning and scheduling then becomes a necessity. The Planning and Execution Competition for Logistics Robots in Simulation held at ICAPS is based on this scenario and provides an interesting testbed. However, the posed problem is challenging as also demonstrated by the somewhat weak results in 2017. The domain requires temporal reasoning and dealing with uncertainty. We propose a novel planning system based on Answer Set Programming and the Clingo solver to tackle these problems and incentivize robot cooperation. Our results show a significant performance improvement, both in terms of lower computational requirements and better game metrics. Y1 - 2018 SN - 2334-0835 SN - 2334-0843 SP - 509 EP - 517 PB - Association for the Advancement of Artificial Intelligence CY - Palo Alto ER - TY - GEN A1 - Bosser, Anne-Gwenn A1 - Cabalar, Pedro A1 - Dieguez, Martin A1 - Schaub, Torsten T1 - Introducing temporal stable models for linear dynamic logic T2 - 16th International Conference on Principles of Knowledge Representation and Reasoning N2 - We propose a new temporal extension of the logic of Here-and-There (HT) and its equilibria obtained by combining it with dynamic logic over (linear) traces. Unlike previous temporal extensions of HT based on linear temporal logic, the dynamic logic features allow us to reason about the composition of actions. For instance, this can be used to exercise fine-grained control when planning in robotics, as exemplified by GOLOG. In this paper, we lay the foundations of our approach, and refer to it as Linear Dynamic Equilibrium Logic, or simply DEL. We start by developing the formal framework of DEL and provide relevant characteristic results. Among them, we elaborate upon the relationships to traditional linear dynamic logic and previous temporal extensions of HT. 
Y1 - 2018 UR - https://www.dc.fi.udc.es/~cabalar/del.pdf SP - 12 EP - 21 PB - Association for the Advancement of Artificial Intelligence CY - Palo Alto ER - TY - THES A1 - Ostrowski, Max T1 - Modern constraint answer set solving T1 - Moderne Constraint Antwortmengenprogrammierung N2 - Answer Set Programming (ASP) is a declarative problem solving approach, combining a rich yet simple modeling language with high-performance solving capabilities. Although this has already resulted in various applications, certain aspects of such applications are more naturally modeled using variables over finite domains, e.g., for accounting for resources, fine timings, coordinates, or functions. Our goal is thus to extend ASP with constraints over integers while preserving its declarative nature. This allows for fast prototyping and elaboration-tolerant problem descriptions of resource-related applications. The resulting paradigm is called Constraint Answer Set Programming (CASP). We present three different approaches for solving CASP problems. The first one, a lazy, modular approach, combines an ASP solver with an external system for handling constraints. This approach has the advantage that two state-of-the-art technologies work hand in hand to solve the problem, each concentrating on its part of the problem. The drawback is that inter-constraint dependencies cannot be communicated back to the ASP solver, impeding its learning algorithm. The second approach translates all constraints to ASP. Using the appropriate encoding techniques, this results in a very fast, monolithic system. Unfortunately, due to the large, explicit representation of constraints and variables, translation techniques are restricted to small and mid-sized domains. The third approach merges the lazy and the translational approach, combining the strength of both while removing their weaknesses. To this end, we enhance the dedicated learning techniques of an ASP solver with the inferences of the translating approach in a lazy way. That is, the important knowledge is only made explicit when needed. By using state-of-the-art techniques from neighboring fields, we provide ways to tackle real-world, industrial-size problems. By extending CASP to reactive solving, we open up new application areas such as online planning with continuous domains and durations. N2 - Die Antwortmengenprogrammierung (ASP) ist ein deklarativer Ansatz zur Problemlösung. Eine ausdrucksstarke Modellierungssprache erlaubt es, Probleme einfach und flexibel zu beschreiben. Durch sehr effiziente Problemlösungstechniken konnten bereits verschiedene Anwendungsgebiete erschlossen werden. Allerdings lassen sich Probleme mit Ressourcen besser mit Gleichungen über ganze oder reelle Zahlen lösen, anstatt mit reiner Boolescher Logik. In dieser Arbeit erweitern wir ASP mit Arithmetik über ganze Zahlen zu Constraint Answer Set Programming (CASP). Unser Hauptaugenmerk liegt dabei auf der Erweiterung der Modellierungssprache mit Arithmetik, ohne Performanz oder Flexibilität einzubüßen. In einem ersten, bedarfsgesteuerten, modularen Ansatz kombinieren wir einen ASP Solver mit einem externen System zur Lösung von ganzzahligen Gleichungen. Der Vorteil dieses Ansatzes besteht darin, dass zwei verschiedene Technologien Hand in Hand arbeiten, wobei jede nur ihren Teil des Problems betrachten muss. Ein Nachteil, der sich daraus ergibt, ist jedoch, dass Abhängigkeiten zwischen den Gleichungen nicht an den ASP Solver kommuniziert werden können. 
Das beeinträchtigt die Lernfähigkeit des zu Grunde liegenden Algorithmus. Der zweite von uns verfolgte Ansatz übersetzt die ganzzahligen Gleichungen direkt nach ASP. Durch entsprechende Kodierungstechniken erhält man ein sehr effizientes, monolithisches System. Diese Übersetzung erfordert eine explizite Darstellung aller Variablen und Gleichungen. Daher ist dieser Ansatz nur für kleine bis mittlere Wertebereiche geeignet. Die dritte Methode, die wir in dieser Arbeit vorstellen, vereinigt die Vorteile der beiden vorherigen Ansätze und überwindet ihre Kehrseiten. Wir entwickeln einen lernenden Algorithmus, der die Arithmetik implizit lässt. Dies befreit uns davon, eine möglicherweise riesige Menge an Variablen und Formeln zu speichern, und erlaubt es uns gleichzeitig, dieses Wissen zu nutzen. Das Ziel dieser Arbeit ist es, durch die Kombination hochmoderner Technologien industrielle Anwendungsgebiete für ASP zu erschließen. Die verwendeten Techniken erlauben eine Erweiterung von CASP mit reaktiven Elementen. Das heißt, dass das Lösen des Problems ein interaktiver Prozess wird. Das Problem kann dabei ständig verändert und erweitert werden, ohne dass Informationen verloren gehen oder neu berechnet werden müssen. Dies eröffnet uns neue Möglichkeiten, wie zum Beispiel reaktives Planen mit Ressourcen und Zeiten. KW - ASP (Answer Set Programming) KW - CASP (Constraint Answer Set Programming) KW - constraints KW - hybrid KW - SMT (SAT Modulo Theories) KW - Antwortmengenprogrammierung KW - hybrides Problemlösen Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-407799 ER - TY - JOUR A1 - Gebser, Martin A1 - Obermeier, Philipp A1 - Schaub, Torsten A1 - Ratsch-Heitmann, Michel A1 - Runge, Mario T1 - Routing driverless transport vehicles in car assembly with answer set programming JF - Theory and practice of logic programming N2 - Automated storage and retrieval systems are principal components of modern production and warehouse facilities. In particular, automated guided vehicles nowadays substitute human-operated pallet trucks in transporting production materials between storage locations and assembly stations. While low-level control systems take care of navigating such driverless vehicles along programmed routes and of avoiding collisions even under unforeseen circumstances, in the common case of multiple vehicles sharing the same operation area, the problem remains how to set up routes such that a collection of transport tasks is accomplished most effectively. We address this prevalent problem in the context of car assembly at Mercedes-Benz Ludwigsfelde GmbH, a large-scale producer of commercial vehicles, where routes for automated guided vehicles used in the production process have traditionally been hand-coded by human engineers. Such ad hoc methods may suffice as long as a running production process remains in place, while any change in the factory layout or production targets necessitates tedious manual reconfiguration, not to mention the missing portability between different production plants. In contrast, we propose a declarative approach based on Answer Set Programming to optimize the routes taken by automated guided vehicles for accomplishing transport tasks. The advantages include a transparent and executable problem formalization, provable optimality of routes relative to objective criteria, as well as elaboration tolerance towards particular factory layouts and production targets.
Moreover, we demonstrate that our approach is efficient enough to deal with the transport tasks arising in realistic production processes at the car factory of Mercedes-Benz Ludwigsfelde GmbH. KW - automated guided vehicle routing KW - car assembly operations KW - answer set programming Y1 - 2018 U6 - https://doi.org/10.1017/S1471068418000182 SN - 1471-0684 SN - 1475-3081 VL - 18 IS - 3-4 SP - 520 EP - 534 PB - Cambridge Univ. Press CY - New York ER - TY - GEN A1 - Lifschitz, Vladimir A1 - Schaub, Torsten A1 - Woltran, Stefan T1 - Interview with Vladimir Lifschitz T2 - Künstliche Intelligenz N2 - This interview with Vladimir Lifschitz was conducted by Torsten Schaub at the University of Texas at Austin in August 2017. The question set was compiled by Torsten Schaub and Stefan Woltran. Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0552-x SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 213 EP - 218 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Banbara, Mutsunori A1 - Inoue, Katsumi A1 - Kaufmann, Benjamin A1 - Okimoto, Tenda A1 - Schaub, Torsten A1 - Soh, Takehide A1 - Tamura, Naoyuki A1 - Wanko, Philipp T1 - teaspoon BT - solving the curriculum-based course timetabling problems with answer set programming JF - Annals of operations research N2 - Answer Set Programming (ASP) is an approach to declarative problem solving, combining a rich yet simple modeling language with high performance solving capacities. We here develop an ASP-based approach to curriculum-based course timetabling (CB-CTT), one of the most widely studied course timetabling problems. The resulting teaspoon system reads a CB-CTT instance of a standard input format and converts it into a set of ASP facts. In turn, these facts are combined with a first-order encoding for CB-CTT solving, which can subsequently be solved by any off-the-shelf ASP system. We establish the competitiveness of our approach by empirically contrasting it to the best known bounds obtained so far via dedicated implementations. Furthermore, we extend the teaspoon system to multi-objective course timetabling and consider minimal perturbation problems. KW - Educational timetabling KW - Course timetabling KW - Answer set programming KW - Multi-objective optimization KW - Minimal perturbation problems Y1 - 2018 U6 - https://doi.org/10.1007/s10479-018-2757-7 SN - 0254-5330 SN - 1572-9338 VL - 275 IS - 1 SP - 3 EP - 37 PB - Springer CY - Dordrecht ER - TY - GEN A1 - Neubauer, Kai A1 - Haubelt, Christian A1 - Wanko, Philipp A1 - Schaub, Torsten T1 - Utilizing quad-trees for efficient design space exploration with partial assignment evaluation T2 - 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC) N2 - Recently, it has been shown that constraint-based symbolic solving techniques offer an efficient way for deciding binding and routing options in order to obtain a feasible system level implementation. In combination with various background theories, a feasibility analysis of the resulting system may already be performed on partial solutions. That is, infeasible subsets of mapping and routing options can be pruned early in the decision process, which speeds up the solving accordingly. However, allowing a proper design space exploration including multi-objective optimization also requires an efficient structure for storing and managing non-dominated solutions. In this work, we propose and study the usage of the Quad-Tree data structure in the context of partial assignment evaluation during system synthesis.
Our experiments show that unnecessary dominance checks can be avoided, which indicates a preference for Quad-Trees over a commonly used list-based implementation for large combinatorial optimization problems. Y1 - 2018 SN - 978-1-5090-0602-1 U6 - https://doi.org/10.1109/ASPDAC.2018.8297362 SN - 2153-6961 SP - 434 EP - 439 PB - IEEE CY - New York ER - TY - JOUR A1 - Gebser, Martin A1 - Kaminski, Roland A1 - Kaufmann, Benjamin A1 - Lühne, Patrick A1 - Obermeier, Philipp A1 - Ostrowski, Max A1 - Romero Davila, Javier A1 - Schaub, Torsten A1 - Schellhorn, Sebastian A1 - Wanko, Philipp T1 - The Potsdam Answer Set Solving Collection 5.0 JF - Künstliche Intelligenz N2 - The Potsdam answer set solving collection, or Potassco for short, bundles various tools implementing and/or applying answer set programming. The article at hand succeeds an earlier description of the Potassco project published in Gebser et al. (AI Commun 24(2):107-124, 2011). Hence, we concentrate in what follows on the major features of the most recent, fifth generation of the ASP system clingo and highlight some recent resulting application systems. Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0528-x SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 181 EP - 182 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Haubelt, Christian A1 - Neubauer, Kai A1 - Schaub, Torsten A1 - Wanko, Philipp T1 - Design space exploration with answer set programming JF - Künstliche Intelligenz N2 - The aim of our project design space exploration with answer set programming is to develop a general framework based on Answer Set Programming (ASP) that finds valid solutions to the system design problem and simultaneously performs Design Space Exploration (DSE) to find the most favorable alternatives. We leverage recent developments in ASP solving that allow for tight integration of background theories to create a holistic framework for effective DSE. Y1 - 2018 U6 - https://doi.org/10.1007/s13218-018-0530-3 SN - 0933-1875 SN - 1610-1987 VL - 32 IS - 2-3 SP - 205 EP - 206 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Prasse, Paul A1 - Knaebel, Rene A1 - Machlica, Lukas A1 - Pevny, Tomas A1 - Scheffer, Tobias T1 - Joint detection of malicious domains and infected clients JF - Machine learning N2 - Detection of malware-infected computers and detection of malicious web domains based on their encrypted HTTPS traffic are challenging problems, because only addresses, timestamps, and data volumes are observable. The detection problems are coupled, because infected clients tend to interact with malicious domains. Traffic data can be collected at a large scale, and antivirus tools can be used to identify infected clients in retrospect. Domains, by contrast, have to be labeled individually after forensic analysis. We explore transfer learning based on sluice networks; this allows the detection models to bootstrap each other. In a large-scale experimental study, we find that the model outperforms known reference models and detects previously unknown malware, previously unknown malware families, and previously unknown malicious domains.
KW - Machine learning KW - Neural networks KW - Computer security KW - Traffic data KW - Https traffic Y1 - 2019 U6 - https://doi.org/10.1007/s10994-019-05789-z SN - 0885-6125 SN - 1573-0565 VL - 108 IS - 8-9 SP - 1353 EP - 1368 PB - Springer CY - Dordrecht ER - TY - THES A1 - Abdelwahab Hussein Abdelwahab Elsayed, Ahmed T1 - Probabilistic, deep, and metric learning for biometric identification from eye movements N2 - A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin. 
Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics. N2 - Die Art und Weise, wie wir unsere Augen bewegen, ist individuell charakteristisch. Augenbewegungen können daher zur biometrischen Identifikation verwendet werden. Die Dissertation stellt neuartige Methoden des maschinellen Lernens zur Identifizierung von Probanden anhand ihrer Blickbewegungen während des Betrachtens beliebiger visueller Inhalte vor. Die Arbeit konzentriert sich auf die probabilistische Modellierung des Problems, da dies die besten Ergebnisse in der aktuellsten Literatur liefert. Die Arbeit untersucht das Problem in drei Phasen. In der ersten Phase stützt sich die Arbeit bei der Entwicklung eines probabilistischen Modells auf Wissen über Blickbewegungen aus der psychologischen Literatur. Existierende Studien haben gezeigt, dass die individuelle Verteilung von Blickbewegungsmustern verwendet werden kann, um Individuen genau zu identifizieren. Existierende probabilistische Modelle verwenden feste Verteilungsfamilien in Form von parametrischen Modellen, um diese Verteilungen zu approximieren. Die Verwendung solcher einfacher Verteilungsfamilien hat den Vorteil, dass sie robuste Verteilungsschätzungen auch auf kleinen Mengen von Beobachtungen ermöglicht. Ihre Flexibilität, Unterschiede zwischen Personen zu erfassen, ist jedoch begrenzt. Die Arbeit schlägt daher eine semiparametrische Modellierung der Blickmuster vor, die flexibel und dennoch robust individuelle Verteilungen von Blickbewegungsmustern schätzen kann. Die modellierten Blickmuster können als Domänenwissen verstanden werden, das aus der psychologischen Literatur abgeleitet ist. Beispielsweise werden Verteilungen über Fixationsdauern und Sprungweiten (Sakkaden) bei bestimmten Vor- und Rücksprüngen innerhalb des Textes modelliert. Das semiparametrische Modell bleibt nahe des parametrischen Modells, wenn nur wenige Daten verfügbar sind, kann jedoch auch vom parametrischen Modell abweichen, wenn genügend Daten verfügbar sind, wodurch die Flexibilität erhöht wird. Die Methode wird auf einem großen Datenbestand evaluiert und zeigt eine signifikante Verbesserung gegenüber dem Stand der Technik der Forschung zur biometrischen Identifikation aus Blickbewegungen. Später ersetzt die Dissertation die zuvor untersuchten, aus der psychologischen Literatur abgeleiteten Blickmuster durch ein auf tiefen neuronalen Netzen basierendes Modell, das aus den Rohdaten der Augenbewegungen informativere komplexe Muster lernen kann. Tiefe neuronale Netze sind eine Technik des maschinellen Lernens, bei der in komplexen, mehrschichtigen Modellen schrittweise abstraktere Merkmale aus Rohdaten extrahiert werden.
Da frühere Arbeiten gezeigt haben, dass die Verteilung von Blickbewegungsmustern innerhalb einer Blickbewegungssequenz informativ ist, wird eine neue Aggregationsschicht für tiefe neuronale Netze eingeführt, die explizit die Verteilung der gelernten Muster schätzt. Die vorgeschlagene Aggregationsschicht für tiefe neuronale Netze ist nicht auf die Modellierung von Blickbewegungen beschränkt, sondern kann als Verallgemeinerung von existierenden einfacheren Aggregationsschichten in beliebigen Anwendungen eingesetzt werden. Das vorgeschlagene Modell wird in einer umfangreichen Studie unter Verwendung von Augenbewegungen von Probanden evaluiert, die Videomaterial unterschiedlichen Inhalts und unterschiedlicher Länge betrachten. Das Modell verbessert die Identifikationsgenauigkeit im Vergleich zu tiefen neuronalen Netzen mit Standardaggregationsschichten und existierenden probabilistischen Modellen zur Identifikation aus Blickbewegungen. Damit das Modell zum Anwendungszeitpunkt beliebige Probanden identifizieren kann, und nicht nur diejenigen Probanden, mit deren Daten es trainiert wurde, wird ein metrischer Lernansatz entwickelt. Beim metrischen Lernen lernt das Modell eine Funktion, mit der die Ähnlichkeit zwischen Blickbewegungssequenzen geschätzt werden kann. Das metrische Lernen bildet die Instanzen in einen neuen Raum ab, in dem Sequenzen desselben Individuums nahe beieinander liegen und Sequenzen verschiedener Individuen weiter voneinander entfernt sind. Die Dissertation stellt einen neuen metrischen Lernansatz auf Basis tiefer neuronaler Netze vor. Der Ansatz repräsentiert eine Sequenz in einem metrischen Raum durch eine Menge von Verteilungen. Das vorgeschlagene Verfahren ist nicht spezifisch für die Blickbewegungsmodellierung und wird in unterschiedlichen Anwendungsproblemen empirisch evaluiert. Das Verfahren führt zu genaueren Modellen im Vergleich zu existierenden metrischen Lernverfahren und existierenden Modellen zur Identifikation aus Blickbewegungen. KW - probabilistic deep metric learning KW - probabilistic deep learning KW - biometrics KW - eye movements KW - biometrische Identifikation KW - Augenbewegungen KW - probabilistische tiefe neuronale Netze KW - probabilistisches tiefes metrisches Lernen Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-467980 ER - TY - GEN A1 - Strickroth, Sven T1 - PLATON BT - Developing a Graphical Lesson Planning System for Prospective Teachers T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Lesson planning is both an important and demanding task—especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems regarding these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of the lesson designs. In this article, a new approach with a graphical, time-based representation with (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor’s and master’s preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability.
The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and self-reflection. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 804 KW - lesson planning KW - lesson preparation KW - support system KW - automatic feedback Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-441887 SN - 1866-8372 IS - 804 ER - TY - JOUR A1 - Metref, Sammy A1 - Cosme, Emmanuel A1 - Le Sommer, Julien A1 - Poel, Nora A1 - Brankart, Jean-Michel A1 - Verron, Jacques A1 - Gomez Navarro, Laura T1 - Reduction of spatially structured errors in Wide-Swath altimetric satellite data using data assimilation JF - Remote sensing N2 - The Surface Water and Ocean Topography (SWOT) mission is a next generation satellite mission expected to provide a 2 km-resolution observation of the sea surface height (SSH) on a two-dimensional swath. Processing SWOT data will be challenging because of the large amount of data, the mismatch between a high spatial resolution and a low temporal resolution, and the observation errors. The present paper focuses on the reduction of the spatially structured errors of SWOT SSH data. It investigates a new error reduction method and assesses its performance in an observing system simulation experiment. The proposed error-reduction method first projects the SWOT SSH onto a subspace spanned by the SWOT spatially structured errors. This projection is removed from the SWOT SSH to obtain a detrended SSH. The detrended SSH is then processed within an ensemble data assimilation analysis to retrieve a full SSH field. In the latter step, the detrending is applied to both the SWOT data and an ensemble of model-simulated SSH fields. Numerical experiments are performed with synthetic SWOT observations and an ensemble from a North Atlantic, 1/60 degrees simulation of the ocean circulation (NATL60). The data assimilation analysis is carried out with an ensemble Kalman filter. The results are assessed with root mean square errors, power spectrum density, and spatial coherence. They show that a significant part of the large scale SWOT errors is reduced. The filter analysis also reduces the small scale errors and allows for an accurate recovery of the energy of the signal down to 25 km scales. In addition, using the SWOT nadir data to adjust the SSH detrending further reduces the errors. KW - SWOT KW - correlated errors KW - OSSE KW - projection KW - detrending KW - ensemble kalman filter Y1 - 2019 U6 - https://doi.org/10.3390/rs11111336 SN - 2072-4292 VL - 11 IS - 11 PB - MDPI CY - Basel ER - TY - GEN A1 - Przybylla, Mareen T1 - Interactive objects in physical computing and their role in the learning process T2 - Constructivist foundations N2 - The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems or drink mixing machines.
This offers the opportunity to reflect on teaching physical computing in computer science education, where, similarly, the creation of the product is often focused on so strongly that reflection on the learning process is pushed into the background. Y1 - 2019 SN - 1782-348X VL - 14 IS - 3 SP - 264 EP - 266 PB - Vrije Univ. CY - Brussels ER - TY - JOUR A1 - Pousttchi, Key A1 - Gleiß, Alexander T1 - Surrounded by middlemen - how multi-sided platforms change the insurance industry JF - Electronic markets N2 - Multi-sided platforms (MSP) strongly affect markets and play a crucial part within the digital and networked economy. Although empirical evidence indicates their occurrence in many industries, research has not investigated the game-changing impact of MSP on traditional markets to a sufficient extent. More specifically, we have little knowledge of how MSP affect value creation and customer interaction in entire markets, exploiting the potential of digital technologies to offer new value propositions. Our paper addresses this research gap and provides an initial systematic approach to analyze the impact of MSP on the insurance industry. For this purpose, we analyze the state of the art in research and practice in order to develop a reference model of the value network for the insurance industry. On this basis, we conduct a case-study analysis to discover and analyze roles which are occupied or even newly created by MSP. As a final step, we categorize MSP with regard to their relation to traditional insurance companies, resulting in a classification scheme with four MSP standard types: Competition, Coordination, Cooperation, Collaboration. KW - Multi-sided platforms KW - Insurance industry KW - Value network KW - Digitalization KW - Customer ownership Y1 - 2019 U6 - https://doi.org/10.1007/s12525-019-00363-w SN - 1019-6781 SN - 1422-8890 VL - 29 IS - 4 SP - 609 EP - 629 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Waitelonis, Jörg A1 - Jürges, Henrik A1 - Sack, Harald T1 - Remixing entity linking evaluation datasets for focused benchmarking JF - Semantic Web N2 - In recent years, named entity linking (NEL) tools were primarily developed in terms of a general approach, whereas today numerous tools are focusing on specific domains such as e.g. the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets necessary for the evaluation of NEL tools do not reflect this focalizing trend. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL [in: Proceedings of the 24th International Conference on World Wide Web (WWW’15), International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 2015, pp. 1133–1143, Semantic Web 9(5) (2018), 605–625] and all its benchmark datasets. Based on these insights we have extended the GERBIL framework to enable a more fine-grained evaluation and in-depth analysis of the available benchmark datasets with respect to different emphases. This paper presents the implementation of an adaptive filter for arbitrary entities and customized benchmark creation as well as the automated determination of typical NEL benchmark dataset properties, such as the extent of content-related ambiguity and diversity. These properties are integrated on different levels, which also makes it possible to tailor customized new datasets out of the existing ones by remixing documents based on desired emphases.
Besides a new system library to enrich provided NIF [in: International Semantic Web Conference (ISWC’13), Lecture Notes in Computer Science, Vol. 8219, Springer, Berlin, Heidelberg, 2013, pp. 98–113] datasets with statistical information, best practices for dataset remixing are presented, along with an in-depth analysis of the performance of entity linking systems on special-focus datasets. KW - Entity Linking KW - GERBIL KW - evaluation KW - benchmark Y1 - 2019 U6 - https://doi.org/10.3233/SW-180334 SN - 1570-0844 SN - 2210-4968 VL - 10 IS - 2 SP - 385 EP - 412 PB - IOS Press CY - Amsterdam ER - TY - GEN A1 - Fichte, Johannes Klaus A1 - Hecher, Markus A1 - Meier, Arne T1 - Counting Complexity for Reasoning in Abstract Argumentation T2 - The Thirty-Third AAAI Conference on Artificial Intelligence, the Thirty-First Innovative Applications of Artificial Intelligence Conference, the Ninth AAAI Symposium on Educational Advances in Artificial Intelligence N2 - In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, while multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds of bounded treewidth algorithms for counting extensions and projected extensions. Y1 - 2019 SN - 978-1-57735-809-1 SP - 2827 EP - 2834 PB - AAAI Press CY - Palo Alto ER - TY - THES A1 - Schneider, Jan Niklas T1 - Computational approaches for emotion research T1 - Computergestützte Methoden für die Emotionsforschung N2 - Emotionen sind ein zentrales Element menschlichen Erlebens und spielen eine wichtige Rolle bei der Entscheidungsfindung. Diese Dissertation identifiziert drei methodische Probleme der aktuellen Emotionsforschung und zeigt auf, wie diese mittels computergestützter Methoden gelöst werden können. Dieser Ansatz wird in drei Forschungsprojekten demonstriert, die die Entwicklung solcher Methoden sowie deren Anwendung auf konkrete Forschungsfragen beschreiben. Das erste Projekt beschreibt ein Paradigma, welches es ermöglicht, die subjektive und objektive Schwierigkeit der Emotionswahrnehmung zu messen. Darüber hinaus ermöglicht es die Verwendung einer beliebigen Anzahl von Emotionskategorien im Vergleich zu den üblichen sechs Kategorien der Basisemotionen. Die Ergebnisse deuten auf eine Zunahme der Schwierigkeiten bei der Wahrnehmung von Emotionen mit zunehmendem Alter der Darsteller hin und liefern Hinweise darauf, dass junge Erwachsene, ältere Menschen und Männer ihre Schwierigkeit bei der Wahrnehmung von Emotionen unterschätzen. Weitere Analysen zeigten eine geringe Relevanz personenbezogener Variablen und deuteten darauf hin, dass die Schwierigkeit der Emotionswahrnehmung vornehmlich durch die Ausprägung der Wertigkeit des Ausdrucks bestimmt wird.
Das zweite Projekt zeigt am Beispiel von Arousal, einem etablierten, aber vagen Konstrukt der Emotionsforschung, wie Face-Tracking-Daten dazu genutzt werden können, solche Konstrukte zu schärfen. Es beschreibt, wie aus Face-Tracking-Daten Maße für die Entfernung, Geschwindigkeit und Beschleunigung von Gesichtsausdrücken berechnet werden können. Das Projekt untersuchte, wie diese Maße mit der Arousal-Wahrnehmung bei Menschen mit und ohne Autismus zusammenhängen. Der Abstand zum Neutralgesicht war prädiktiv für die Arousal-Bewertungen in beiden Gruppen. Die Ergebnisse deuten auf eine qualitativ ähnliche Wahrnehmung von Arousal für Menschen mit und ohne Autismus hin. Im dritten Projekt stellen wir die Partial-Least-Squares-Analyse als allgemeine Methode vor, um eine optimale Repräsentation zur Verknüpfung zweier hochdimensionaler Datensätze zu finden. Das Projekt demonstriert die Anwendbarkeit dieser Methode in der Emotionsforschung anhand der Frage nach Unterschieden in der Emotionswahrnehmung zwischen Männern und Frauen. Wir konnten zeigen, dass die emotionale Wahrnehmung von Frauen systematisch mehr Varianz der Gesichtsausdrücke erfasst und dass signifikante Unterschiede in der Art und Weise bestehen, wie Frauen und Männer einige Gesichtsausdrücke wahrnehmen. Diese konnten wir als dynamische Gesichtsausdrücke visualisieren. Um die Anwendung der entwickelten Methode für die Forschungsgemeinschaft zu erleichtern, wurde ein Software-Paket für die Statistikumgebung R geschrieben. Zudem wurde eine Website entwickelt (thisemotiondoesnotexist.com), die es Besuchern erlaubt, ein Partial-Least-Squares-Modell von Emotionsbewertungen und Face-Tracking-Daten interaktiv zu erkunden, um die entwickelte Methode zu verbreiten und ihren Nutzen für die Emotionsforschung zu illustrieren. N2 - Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions. The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm allows us to calculate an objective difficulty measure and to collect subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimuli categories as compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories.
Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence to suggest that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception. The second project targets the problem of incomplete constructs which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking data can be used to sharpen these constructs on the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in correlation of the measures and arousal ratings could be found between groups. Results were replicated in a high autistic traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found which emphasizes the specificity of our tested measures for the construct of arousal. The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation to relate two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women’s emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way that women and men perceive some facial expressions which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. In order to facilitate use of the developed method by the research community, a package for the statistical environment R was written. 
Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion. KW - facial expression KW - emotion KW - perception KW - face tracking KW - perception differences KW - emotion representation KW - Gesichtsausdruck KW - Emotionen KW - Wahrnehmung KW - Wahrnehmungsunterschiede KW - computational methods KW - emotion research KW - computergestützte Methoden KW - Emotionsforschung KW - arousal perception KW - objective difficulty KW - Wahrnehmung von Arousal KW - Objektive Schwierigkeit Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-459275 ER - TY - JOUR A1 - Strickroth, Sven T1 - PLATON BT - Developing a Graphical Lesson Planning System for Prospective Teachers JF - Education Sciences N2 - Lesson planning is both an important and demanding task—especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems regarding these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of the lesson designs. In this article, a new approach with a graphical, time-based representation with (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor’s and master’s preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and self-reflection. KW - lesson planning KW - lesson preparation KW - support system KW - automatic feedback Y1 - 2019 U6 - https://doi.org/10.3390/educsci9040254 SN - 2227-7102 VL - 9 IS - 4 PB - MDPI CY - Basel ER - TY - THES A1 - Tiwari, Abhishek T1 - Enhancing Users’ Privacy: Static Resolution of the Dynamic Properties of Android N2 - The usage of mobile devices is rapidly growing, with Android being the most prevalent mobile operating system. Thanks to the vast variety of mobile applications, users prefer smartphones over desktops for day-to-day tasks like Internet surfing. Consequently, smartphones store a plenitude of sensitive data. This data, together with the high value of smartphones, makes them an attractive target for device/data theft (thieves/malicious applications). Unfortunately, state-of-the-art anti-theft solutions do not work if they do not have an active network connection, e.g., if the SIM card was removed from the device. In the majority of these cases, device owners permanently lose their smartphone and, even worse, their personal data. Apart from that, malevolent applications perform malicious activities to steal sensitive information from smartphones. Recent research has considered static program analysis to detect dangerous data leaks. These analyses work well for data leaks due to inter-component communication, but suffer from shortcomings for inter-app communication with respect to precision, soundness, and scalability.
This thesis focuses on enhancing users' privacy on Android against physical device loss/theft and (un)intentional data leaks. It presents three novel frameworks: (1) ThiefTrap, an anti-theft framework for Android, (2) IIFA, a modular inter-app intent information flow analysis of Android applications, and (3) PIAnalyzer, a precise approach for PendingIntent vulnerability analysis. ThiefTrap is based on a novel concept of an anti-theft honeypot account that protects the owner's data while preventing a thief from resetting the device. We implemented the proposed scheme and evaluated it through an empirical user study with 35 participants. In this study, the owner's data could be protected and recovered, and the anti-theft functionality could be performed unnoticed by the thief in all cases. IIFA proposes a novel approach for Android's inter-component/inter-app communication (ICC/IAC) analysis. Our main contribution is the first fully automatic, sound, and precise ICC/IAC information flow analysis that is scalable for realistic apps due to modularity, avoiding combinatorial explosion: Our approach determines communicating apps using short summaries rather than inlining intent calls between components and apps, which would require simultaneously analyzing all apps installed on a device. We evaluate IIFA in terms of precision and recall, and demonstrate its scalability to a large corpus of real-world apps. IIFA reports 62 problematic ICC-/IAC-related information flows via two or more apps/components. PIAnalyzer proposes a novel approach to analyze PendingIntent-related vulnerabilities. PendingIntents are a powerful and universal feature of Android for inter-component communication. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications and find 1358 insecure usages of PendingIntents, including 70 severe vulnerabilities. N2 - Die Nutzung von mobilen Geräten nimmt rasant zu, wobei Android das häufigste mobile Betriebssystem ist. Dank der Vielzahl an mobilen Anwendungen bevorzugen Benutzer Smartphones gegenüber Desktops für alltägliche Aufgaben wie das Surfen im Internet. Folglich speichern Smartphones eine Vielzahl sensibler Daten. Diese Daten zusammen mit den hohen Werten von Smartphones machen sie zu einem attraktiven Ziel für Geräte-/Datendiebstahl (Diebe/bösartige Anwendungen). Leider funktionieren moderne Diebstahlsicherungslösungen nicht, wenn sie keine aktive Netzwerkverbindung haben, z. B. wenn die SIM-Karte aus dem Gerät entnommen wurde. In den meisten Fällen verlieren Gerätebesitzer ihr Smartphone dauerhaft zusammen mit ihren persönlichen Daten, was noch schlimmer ist. Abgesehen davon gibt es bösartige Anwendungen, die schädliche Aktivitäten ausführen, um vertrauliche Informationen von Smartphones zu stehlen. Kürzlich durchgeführte Untersuchungen berücksichtigten die statische Programmanalyse zur Erkennung gefährlicher Datenlecks. Diese Analysen eignen sich gut für Datenlecks aufgrund der Kommunikation zwischen Komponenten, weisen jedoch hinsichtlich der Präzision, Zuverlässigkeit und Skalierbarkeit Nachteile für die Kommunikation zwischen Apps auf. Diese Dissertation konzentriert sich auf die Verbesserung der Privatsphäre der Benutzer auf Android gegen Verlust/Diebstahl von physischen Geräten und (un)vorsätzlichen Datenverlust.
Es werden drei neuartige Frameworks vorgestellt: (1) ThiefTrap, ein Anti-Diebstahl-Framework für Android, (2) IIFA, eine modulare Inter-App-Analyse des Informationsflusses von Android-Anwendungen, und (3) PIAnalyzer, ein präziser Ansatz für die PendingIntent-Schwachstellenanalyse. ThiefTrap basiert auf einem neuartigen Konzept eines Diebstahlschutzkontos, das die Daten des Besitzers schützt und verhindert, dass ein Dieb das Gerät zurücksetzt. Wir haben das vorgeschlagene Schema implementiert und durch eine empirische Anwenderstudie mit 35 Teilnehmern ausgewertet. In dieser Studie konnten die Daten des Besitzers geschützt und wiederhergestellt werden, und die Diebstahlsicherungsfunktion konnte in jedem Fall unbemerkt vom Dieb ausgeführt werden. IIFA schlägt einen neuen Ansatz für die Analyse der Inter-Komponenten-/Inter-App-Kommunikation (ICC/IAC) von Android vor. Unser Hauptbeitrag ist die erste vollautomatische, solide und präzise ICC/IAC-Informationsflussanalyse, die aufgrund ihrer Modularität für realistische Apps skalierbar ist und eine kombinatorische Explosion vermeidet: Unser Ansatz bestimmt kommunizierende Apps anhand kurzer Zusammenfassungen, anstatt Intent-Aufrufe zwischen Komponenten und Apps zu inlinen, was eine gleichzeitige Analyse aller auf einem Gerät installierten Apps erfordern würde. Wir bewerten IIFA in Bezug auf Präzision und Recall und demonstrieren seine Skalierbarkeit für einen großen Korpus realer Apps. IIFA meldet 62 problematische ICC-/IAC-bezogene Informationsflüsse über zwei oder mehr Apps/Komponenten. PIAnalyzer schlägt einen neuen Ansatz vor, um Schwachstellen im Zusammenhang mit PendingIntents zu analysieren. PendingIntents sind eine leistungsstarke und universelle Funktion von Android für die Kommunikation zwischen Komponenten. Wir evaluieren PIAnalyzer empirisch an einem Satz von 1000 zufällig ausgewählten Anwendungen und finden 1358 unsichere Verwendungen von PendingIntents, einschließlich 70 schwerwiegender Schwachstellen. KW - Android Security KW - Static Analysis KW - Privacy Protection Y1 - 2019 ER - TY - THES A1 - Böhne, Sebastian T1 - Different degrees of formality T1 - Verschiedene Formalitätsgrade BT - an introduction to the concept and a demonstration of its usefulness BT - Vorstellung des Konzepts und Nachweis seiner Nützlichkeit N2 - In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view, which only distinguishes between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal, and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped. This thesis has two parts. In the first of them we analyse the concept of the degree of formality (including a discussion about the respective benefits for each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we will repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students to learn how to prove. Third, we will show how the advantages of proofs of different degrees of formality can be combined by the development of so-called tactics having a medium degree of formality.
Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential. N2 - In dieser Dissertation stellen wir das Konzept der Formalitätsgrade vor, welches sich gegen eine dualistische Sichtweise richtet, die nur zwischen formalen und informalen Beweisen unterscheidet. Letztere Sichtweise spiegelt nämlich die Unterschiede zwischen den als informal klassifizierten Argumentationen nicht wieder und ist außerdem unproduktiv, weil sie nicht in der Lage ist, das individuelle Potential der jeweiligen Argumentationsstile wertzuschätzen und auszuschöpfen. Die Dissertation hat zwei Teile. Im ersten analysieren wir das Konzept der Formalitätsgrade (eine Diskussion über die Vorteile der jeweiligen Grade eingeschlossen), während wir im zweiten Teil die Nützlichkeit der Formalitätsgrade anhand von drei Fallbeispielen nachweisen. Im ersten von diesen werden wir Haskell B. Currys Sichtweise zur Mathematik, die nebenbei bemerkt von größter Wichtigkeit für den ersten Teil der Dissertation ist, mithilfe der verschiedenen Formalitätsgrade reparieren. Im zweiten Fallbeispiel zeigen wir auf, wie die Beachtung der verschiedenen Formalitätsgrade den Studenten dabei helfen kann, das Beweisen zu erlernen. Im letzten Fallbeispiel werden wir dann zeigen, wie die Vorteile von Beweisen verschiedener Formalitätsgrade durch die Anwendung sogenannter Taktiken mittleren Formalitätsgrades kombiniert werden können. Zusammen zeigen die drei Fallbeispiele, dass die Formalitätsgrade eine überzeugende Lösung für das Problem des ungenutzten Potentials darstellen. KW - argumentation KW - Coq KW - Curry KW - degree of formality KW - formalism KW - logic KW - mathematics education KW - philosophy of mathematics KW - proof KW - proof assistant KW - proof environment KW - tactic KW - Argumentation KW - Beweis KW - Beweisassistent KW - Beweisumgebung KW - Coq KW - Curry KW - Formalismus KW - Formalitätsgrad KW - Logik KW - Mathematikdidaktik KW - Mathematikphilosophie KW - Taktik Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-423795 N1 - CCS -> Applied computing -> Education -> Interactive learning environments CCS -> Theory of computation -> Logic CCS -> Computing methodologies -> Symbolic and algebraic manipulation -> Symbolic and algebraic algorithms -> Theorem proving algorithms ER -