TY - CHAP A1 - Möring, Sebastian A1 - Leino, Olli Tapio T1 - Die neoliberale Bedingung von Computerspielen JF - Kontrollmaschinen - zur Dispositivtheorie des Computerspiels Y1 - 2022 SN - 978-3-643-14780-6 SP - 41 EP - 61 PB - LiteraturWissenschaft.de CY - Münster ER - TY - JOUR A1 - Prasse, Paul A1 - Iversen, Pascal A1 - Lienhard, Matthias A1 - Thedinga, Kristina A1 - Herwig, Ralf A1 - Scheffer, Tobias T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction JF - Cancers N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves the models’ accuracy and improves the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we can conclude that pre-trained models outperform models that have been trained on the target domains in the vast majority of cases. KW - deep neural networks KW - drug-sensitivity prediction KW - anti-cancer drugs Y1 - 2022 U6 - https://doi.org/10.3390/cancers14163950 SN - 2072-6694 VL - 14 SP - 1 EP - 14 PB - MDPI CY - Basel, Switzerland IS - 16 ER - TY - THES A1 - Klockmann, Alexander T1 - Modifizierte Unidirektionale Codes für Speicherfehler N2 - Das Promotionsvorhaben verfolgt das Ziel, die Zuverlässigkeit der Datenspeicherung und die Speicherdichte von neu entwickelten Speichern (Emerging Memories) mit Multi-Level-Speicherzellen zu verbessern bzw. zu erhöhen. Hierfür werden Codes zur Erkennung von unidirektionalen Fehlern analysiert, modifiziert und neu entwickelt, um sie innerhalb der neuen Speicher anwenden zu können. Der Fokus liegt dabei auf sog. Berger-Codes und m-aus-n-Codes. Da Multi-Level-Speicherzellen nicht mehr binär, sondern mit mehreren Leveln arbeiten, können bisher verwendete Codes nicht mehr verwendet werden, bzw. müssen entsprechend angepasst werden. Auf Basis der Berger-Codes und m-aus-n-Codes werden in dieser Arbeit neue Codes abgeleitet, welche in der Lage sind, Daten auch in mehrwertigen Systemen zu schützen. KW - Fehlererkennung KW - Codierungstheorie KW - Speicher KW - unidirektionale Fehler Y1 - 2022 ER - TY - JOUR A1 - Hecher, Markus T1 - Treewidth-aware reductions of normal ASP to SAT BT - is normal ASP harder than SAT after all? JF - Artificial intelligence N2 - Answer Set Programming (ASP) is a paradigm for modeling and solving problems for knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP.
So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While there are translations from (normal) ASP to SAT, no reduction that preserves treewidth or at least keeps track of the treewidth increase is known. In this paper we propose a novel reduction from normal ASP to SAT that is aware of the treewidth, and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, where we compare treewidth upper bounds that are obtained via known decomposition heuristics. Overall, our reduction works better with these heuristics than existing translations. (c) 2021 Elsevier B.V. All rights reserved. KW - Answer set programming KW - Treewidth KW - Parameterized complexity KW - Complexity analysis KW - Tree decomposition KW - Treewidth-aware reductions Y1 - 2022 U6 - https://doi.org/10.1016/j.artint.2021.103651 SN - 0004-3702 SN - 1872-7921 VL - 304 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Al Laban, Firas A1 - Reger, Martin A1 - Lucke, Ulrike T1 - Closing the Policy Gap in the Academic Bridge JF - Education sciences N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: which value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
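To illustrate the treewidth parameter central to the Hecher record above, here is a minimal sketch of applying a tree decomposition heuristic to the primal graph of a toy program, the kind of heuristic used for the treewidth upper bounds in that paper's empirical study; the example graph and the use of networkx are illustrative assumptions, not material from the paper.

```python
# A minimal sketch, assuming networkx is available: estimating a treewidth
# upper bound for the primal graph of a toy program via a decomposition
# heuristic. The example graph is hypothetical.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Primal graph: vertices are atoms, edges join atoms that occur together
# in some rule (or clause, on the SAT side).
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e")])

# The min-degree heuristic returns (width, decomposition); the true
# treewidth of G is at most this width.
width, decomposition = treewidth_min_degree(G)
print("treewidth upper bound:", width)
for bag in decomposition.nodes:
    print("bag:", set(bag))
```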
KW - policy evaluation KW - higher education KW - virtual mobility KW - teacher training Y1 - 2022 U6 - https://doi.org/10.3390/educsci12120930 SN - 2227-7102 VL - 12 IS - 12 PB - MDPI CY - Basel ER - TY - THES A1 - Böken, Björn T1 - Improving prediction accuracy using dynamic information N2 - Accurately solving classification problems nowadays is likely to be the most relevant machine learning task. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification that generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions. This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and, in particular, corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which have become popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions as well as its equivalence with Beta calibration up to a sigmoidal preprocessing are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets. Building on this, classifier calibration is applied as part of decomposition-based classification that aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the involved fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information.
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied to 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure. N2 - Klassifikationsprobleme akkurat zu lösen ist heutzutage wahrscheinlich die relevanteste Machine-Learning-Aufgabe. Binäre Klassifikation zur Unterscheidung von nur zwei Klassen ist algorithmisch einfacher, hat aber weniger potenzielle Anwendungen, da in der Praxis oft Mehrklassenprobleme auftreten. Demgegenüber vereinfacht die Unterscheidung nur innerhalb einer Untermenge von Klassen die Problemstellung. Obwohl viele existierende Machine-Learning-Algorithmen sehr flexibel mit Blick auf die Anzahl der Klassen sind, setzen sie voraus, dass die Zielmenge Y fest ist und nicht mehr eingeschränkt werden kann, sobald das Training abgeschlossen ist. Allerdings sind moderne Produktionsumgebungen mit dem Voranschreiten von Industrie 4.0 und entsprechenden Technologien zunehmend digital verbunden, sodass zusätzliche Informationen die entsprechenden Klassifikationsprobleme vereinfachen können. Vor diesem Hintergrund ist das Hauptziel dieser Arbeit, dynamische Klassifikation als Verallgemeinerung von Mehrklassen-Klassifikation einzuführen, bei der die Zielmenge jederzeit zwischen zwei aufeinanderfolgenden Vorhersagen zu einer beliebigen, nicht leeren Teilmenge eingeschränkt werden kann. Diese Aufgabe wird durch die Kombination von zwei algorithmischen Ansätzen gelöst. Zunächst wird Klassifikator-Kalibrierung eingesetzt, mittels der Vorhersagen in Schätzungen der A-Posteriori-Wahrscheinlichkeiten transformiert werden, die gut kalibriert sein sollen. Die durchgeführte Analyse zielt auf monotone Kalibrierung ab und korrigiert insbesondere Falschaussagen, die in Referenzarbeiten veröffentlicht wurden. Außerdem zeigt sie, dass Bin-basierte Fehlermaße, die in den letzten Jahren populär geworden sind, ungerechtfertigt sind und nicht verwendet werden sollten. Weiterhin wird die Validität von Platt Scaling, dem relevantesten parametrischen Kalibrierungsverfahren, genau analysiert.
Insbesondere wird seine Optimalität für Klassifikatorvorhersagen, die gemäß vier Familien von Verteilungsfunktionen verteilt sind, sowie die Äquivalenz zu Beta-Kalibrierung bis auf eine sigmoidale Vorverarbeitung gezeigt. Für nicht monotone Kalibrierung werden erweiterte Varianten der Kerndichteschätzung und die Ensemblemethode EKDE eingeführt. Schließlich werden die Kalibrierungsverfahren im Rahmen einer Simulationsstudie mit vollständiger Information sowie auf 46 Referenzdatensätzen ausgewertet. Hierauf aufbauend wird Klassifikator-Kalibrierung als Teil von reduktionsbasierter Klassifikation eingesetzt, die zum Ziel hat, Mehrklassenprobleme auf einfachere (üblicherweise binäre) Entscheidungsprobleme zu reduzieren. Für den zugehörigen, während der Vorhersage notwendigen Fusionsschritt wird ein neuer, auf Evidenztheorie basierender Ansatz eingeführt, der Klassifikator-Kalibrierung zur Modellierung von Massefunktionen nutzt. Dies ermöglicht es, reduktionsbasierte Klassifikation in einem formalen Kontext zu analysieren und geschlossene Ausdrücke für die entsprechenden Gesamtkombinationen zu beweisen. Zusätzlich führt derselbe Formalismus zu einer konsistenten Integration von dynamischen Klasseninformationen, sodass sich ein theoretisch fundiertes und effizient zu berechnendes, dynamisches Klassifikationsmodell ergibt. Die hierbei gewonnenen Einsichten werden mit Pairwise Coupling, einem der relevantesten Verfahren für reduktionsbasierte Klassifikation, verbunden, wobei alle individuellen Vorhersagen mit einer Gewichtung kombiniert werden. Dies verallgemeinert nicht nur existierende Ansätze für Pairwise Coupling, sondern führt darüber hinaus auch zu einer Integration von dynamischen Klasseninformationen. Abschließend wird eine umfangreiche empirische Studie durchgeführt, die alle neu eingeführten Verfahren mit denen aus dem Stand der Forschung vergleicht. Hierfür werden Bewertungsfunktionen für dynamische Klassifikation eingeführt, die auf Sampling-Strategien basieren. Anschließend werden diese im Rahmen einer dreiteiligen Studie angewendet. Zunächst werden Support Vector Machines und Random Forests auf 26 Referenzdatensätzen aus dem UCI Machine Learning Repository angewendet. Im zweiten Teil werden zwei moderne, tiefe neuronale Netze auf fünf Referenzdatensätzen aus einer relativ aktuellen Referenzarbeit ausgewertet. Hierbei sind insbesondere Strategien relevant, die die Anwendung der eingeführten Verfahren in Verbindung mit großen Modellen ermöglichen, da eine naive Vorgehensweise nicht durchführbar ist. Schließlich wird ein Referenzdatensatz aus einem Produktionsprozess gewonnen, der die Integration von dynamischen Klasseninformationen ermöglicht, und ausgewertet. Die Ergebnisse zeigen, dass Pairwise-Coupling-Verfahren in Verbindung mit Support Vector Machines und Random Forests die besten Ergebnisse liefern, während in Verbindung mit tiefen neuronalen Netzen die Unterschiede zwischen den Verfahren oft klein bis vernachlässigbar sind. Am wichtigsten ist, dass alle Ergebnisse zeigen, dass dynamische Klassifikation die entsprechenden Erkennungsgenauigkeiten verbessert. Daher ist es entscheidend, dynamische Klasseninformationen in den entsprechenden Anwendungen zur Verfügung zu stellen, was eine entsprechende digitale Infrastruktur erfordert.
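As a companion to the calibration analysis in the Böken thesis above, the following is a minimal sketch of Platt scaling, which fits a sigmoid sigma(A*f(x) + B) to held-out classifier scores; the synthetic data set, the choice of a linear SVM, and the scikit-learn usage are illustrative assumptions, not material from the thesis.

```python
# A minimal sketch of Platt scaling: a one-dimensional logistic regression
# fitted on held-out decision values f(x) of an uncalibrated classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=0)

svm = LinearSVC(max_iter=10000).fit(X_train, y_train)

# Fit sigma(A*f(x) + B) on the calibration split's decision values.
scores = svm.decision_function(X_cal).reshape(-1, 1)
platt = LogisticRegression().fit(scores, y_cal)

# Calibrated posterior estimates for a few held-out examples.
new_scores = svm.decision_function(X_cal[:5]).reshape(-1, 1)
print(platt.predict_proba(new_scores)[:, 1])
```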
KW - dynamic classification KW - multi-class classification KW - classifier calibration KW - evidence theory KW - Dempster–Shafer theory KW - Deep Learning KW - Dempster-Shafer-Theorie KW - Klassifikator-Kalibrierung KW - dynamische Klassifikation KW - Evidenztheorie KW - Mehrklassen-Klassifikation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-585125 ER - TY - GEN A1 - Prasse, Paul A1 - Iversen, Pascal A1 - Lienhard, Matthias A1 - Thedinga, Kristina A1 - Herwig, Ralf A1 - Scheffer, Tobias T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves the models’ accuracy and improves the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we can conclude that pre-trained models outperform models that have been trained on the target domains in the vast majority of cases. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1300 KW - deep neural networks KW - drug-sensitivity prediction KW - anti-cancer drugs Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-577341 SN - 1866-8372 SP - 1 EP - 14 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - GEN A1 - Al Laban, Firas A1 - Reger, Martin A1 - Lucke, Ulrike T1 - Closing the Policy Gap in the Academic Bridge T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels.
We discuss the role of digitalization in the academic bridge by asking the question: which value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1310 KW - policy evaluation KW - higher education KW - virtual mobility KW - teacher training Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-583572 SN - 1866-8372 IS - 1310 ER - TY - THES A1 - Cichalla, Anika Katleen T1 - Ein konstruktivistisches Modell für die Didaktik der Informatik im Bachelorstudium T1 - A constructivistic model for the didactics of computational science in bachelor studies N2 - Lehrende in der Lehrkräfteausbildung sind stets damit konfrontiert, dass sie den Studierenden innovative Methoden modernen Schulunterrichts traditionell rezipierend vorstellen. In Deutschland gibt es circa 40 Universitäten, die Informatik mit Lehramtsbezug ausbilden. Allerdings gibt es nur wenige Konzepte, die sich mit der Verbindung von Bildungswissenschaften und der Informatik mit ihrer Didaktik beschäftigen und keine Konzepte, die eine konstruktivistische Lehre in der Informatik verfolgen. Daher zielt diese Masterarbeit darauf ab, diese Lücke aufzugreifen und anhand des Moduls „Didaktik der Informatik I“ der Universität Potsdam ein Modell zur konstruktivistischen Hochschullehre zu entwickeln. Dabei soll ein bestehendes konstruktivistisches Lehrmodell auf die Informatikdidaktik übertragen und Elemente zur Verbindung von Bildungswissenschaften, Fachwissenschaften und Fachdidaktiken mit einbezogen werden. Dies kann eine Grundlage für die Planung von informatikdidaktischen Modulen bieten, aber auch als Inspiration zur Übertragung bestehender innovativer Lehrkonzepte auf andere Fachdidaktiken dienen. Um ein solches konstruktivistisches Lehr-Lern-Modell zu erstellen, wird zunächst der Zusammenhang von Bildungswissenschaften, Fachwissenschaften und Fachdidaktiken erläutert und anschließend die Notwendigkeit einer Vernetzung hervorgehoben. Hierauf folgt eine Darstellung zu relevanten Lerntheorien und bereits entwickelten innovativen Lernkonzepten. Anknüpfend wird darauf eingegangen, welche Anforderungen die Kultusministerkonferenz an die Ausbildung von Lehrkräften stellt und wie diese Ausbildung für die Informatik momentan an der Universität Potsdam erfolgt. Aus allen Erkenntnissen heraus werden Anforderungen an ein konstruktivistisches Lehrmodell festgelegt. Unter Berücksichtigung der Voraussetzungen der Studienordnung für das Lehramt Informatik wird anschließend ein Modell für konstruktivistische Informatikdidaktik vorgestellt. Weiterführende Forschung könnte sich damit auseinandersetzen, inwiefern sich die Motivation und Leistung im Vergleich zum ursprünglichen Modul ändern und ob die Kompetenzen zur Unterrichtsplanung und Unterrichtsgestaltung durch das neue Modulkonzept stärker ausgebaut werden können. N2 - Instructors in teacher training are constantly confronted with the fact that they present innovative methods of modern school teaching to students in a traditionally receptive way. In Germany, there are about 40 universities that offer teacher-training programs in computational science.
However, there are only a few concepts that deal with the connection of educational science and computer science with its didactics, and no concepts that pursue constructivist teaching in computational science. Therefore, this master thesis aims to address this gap and to develop a model for constructivist university teaching based on the "Didactics of Computational Science I" module at the University of Potsdam. An existing constructivist teaching model is to be transferred to computational science didactics, and elements for connecting the educational sciences, the subject sciences, and subject didactics are to be included. This can provide a basis for planning computational science didactic modules, but also serve as inspiration for transferring existing innovative teaching concepts to other subject didactics. In order to create such a constructivist teaching-learning model, the interrelationship of the educational sciences, the subject sciences, and subject didactics is first explained, and then the necessity of interlinking them is emphasized. This is followed by a presentation of relevant learning theories and innovative learning concepts already developed. Subsequently, the requirements of the Standing Conference of the Ministers of Education and Cultural Affairs (Kultusministerkonferenz) for the training of teachers and how this training for computer science is currently carried out at the University of Potsdam are discussed. From all findings, requirements for a constructivist teaching model are defined. Taking into account the requirements of the study regulations for the computer science teaching profession, a model for constructivist computer science didactics is then presented. Further research could address the extent to which motivation and performance change in comparison to the original module and whether the competencies for lesson planning and lesson design can be further developed on the basis of the new module concept. KW - education KW - university education KW - teacher training KW - Hochschulbildung KW - Lehrkräfteausbildung KW - Konstruktivismus KW - constructivism KW - Informatik KW - Informatikdidaktik KW - Computational Science KW - didactics Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-550710 ER - TY - JOUR A1 - Lorenz, Claas A1 - Clemens, Vera Elisabeth A1 - Schrötter, Max A1 - Schnor, Bettina T1 - Continuous verification of network security compliance JF - IEEE transactions on network and service management N2 - Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. However, the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security.
Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41. KW - Security KW - Tools KW - Network security KW - Engines KW - Benchmark testing KW - Analytical models KW - Scalability KW - Network security KW - compliance KW - formal verification Y1 - 2021 U6 - https://doi.org/10.1109/TNSM.2021.3130290 SN - 1932-4537 VL - 19 IS - 2 SP - 1729 EP - 1745 PB - Institute of Electrical and Electronics Engineers CY - New York ER - TY - JOUR A1 - Prasse, Paul A1 - Iversen, Pascal A1 - Lienhard, Matthias A1 - Thedinga, Kristina A1 - Bauer, Christopher A1 - Herwig, Ralf A1 - Scheffer, Tobias T1 - Matching anticancer compounds and tumor cell lines by neural networks with ranking loss JF - NAR: genomics and bioinformatics N2 - Computational drug sensitivity models have the potential to improve therapeutic outcomes by identifying targeted drug components that are likely to achieve the highest efficacy for a cancer cell line at hand at a therapeutic dose. State-of-the-art drug sensitivity models use regression techniques to predict the inhibitory concentration of a drug for a tumor cell line. This regression objective is not directly aligned with the principal goals of drug sensitivity models: we argue that drug sensitivity modeling should be seen as a ranking problem with an optimization criterion that quantifies a drug's inhibitory capacity for the cancer cell line at hand relative to its toxicity for healthy cells. We derive an extension to the well-established drug sensitivity regression model PaccMann that employs a ranking loss and focuses on the ratio of inhibitory concentration and therapeutic dosage range. We find that the ranking extension significantly enhances the model's capability to identify the most effective anticancer drugs for unseen tumor cell profiles, based on in vitro data. Y1 - 2022 U6 - https://doi.org/10.1093/nargab/lqab128 SN - 2631-9268 VL - 4 IS - 1 PB - Oxford Univ. Press CY - Oxford ER - TY - JOUR A1 - Steinert, Fritjof A1 - Stabernack, Benno T1 - Architecture of a low latency H.264/AVC video codec for robust ML based image classification BT - how region of interests can minimize the impact of coding artifacts JF - Journal of Signal Processing Systems for Signal, Image, and Video Technology N2 - The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose, which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since a corresponding training was also carried out using image data of similarly high quality. However, if image data contains image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts which occur due to image and video compression. Typical application scenarios for video compression are narrowband transmission channels for which video coding is required but a subsequent classification is to be carried out on the receiver side.
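As an illustration of the Prasse et al. record above ("Matching anticancer compounds and tumor cell lines by neural networks with ranking loss"), here is a minimal sketch of training with a pairwise ranking loss; the scoring network, feature dimension, and random data are hypothetical stand-ins, not the PaccMann architecture or loss from the paper.

```python
# A minimal sketch of pairwise ranking: the margin ranking loss pushes the
# score of the preferred item above the score of the dispreferred one.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MarginRankingLoss(margin=0.1)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

# x_hi: drug/cell-line pairs that should rank as more effective than their
# x_lo counterparts for the same cell line (toy random features here).
x_hi, x_lo = torch.randn(128, 64), torch.randn(128, 64)
target = torch.ones(128)  # +1 means score(x_hi) should exceed score(x_lo)

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(scorer(x_hi).squeeze(1), scorer(x_lo).squeeze(1), target)
    loss.backward()
    optimizer.step()
print(f"final ranking loss: {loss.item():.4f}")
```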
In this paper, we present a special H.264/Advanced Video Codec (AVC)-based video codec that allows certain regions of a picture to be coded with near-constant picture quality in order to allow a reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We have combined this feature with the ability to run with very low latency, which is usually also required in remote control application scenarios. The codec has been implemented as a fully hardwired, High-Definition-capable hardware architecture suitable for Field Programmable Gate Arrays. KW - H.264 KW - Advanced Video Codec (AVC) KW - Low Latency KW - Region of Interest KW - Machine Learning KW - Inference KW - FPGA KW - Hardware accelerator Y1 - 2022 U6 - https://doi.org/10.1007/s11265-021-01727-2 SN - 1939-8018 SN - 1939-8115 VL - 94 IS - 7 SP - 693 EP - 708 PB - Springer CY - New York ER - TY - JOUR A1 - Breitenreiter, Anselm A1 - Andjelković, Marko A1 - Schrape, Oliver A1 - Krstić, Miloš T1 - Fast error propagation probability estimates by answer set programming and approximate model counting JF - IEEE Access N2 - We present a method employing Answer Set Programming in combination with Approximate Model Counting for fast and accurate calculation of error propagation probabilities in digital circuits. By an efficient problem encoding, we achieve an input data format similar to a Verilog netlist so that extensive preprocessing is avoided. By a tight interconnection of our application with the underlying solver, we avoid iterating over fault sites and reduce calls to the solver. Several circuits were analyzed with varying numbers of considered cycles and different degrees of approximation. Our experiments show that approximation can reduce the runtime by a factor of 91, while the error compared to the exact result stays below 1%. KW - Circuit faults KW - Integrated circuit modeling KW - Programming KW - Analytical models KW - Search problems KW - Flip-flops KW - Encoding KW - Answer set programming KW - approximate model counting KW - error propagation KW - radhard design KW - reliability analysis KW - selective fault tolerance KW - single event upsets Y1 - 2022 U6 - https://doi.org/10.1109/ACCESS.2022.3174564 SN - 2169-3536 VL - 10 SP - 51814 EP - 51825 PB - Inst. of Electr. and Electronics Engineers CY - Piscataway ER - TY - JOUR A1 - Marco Figuera, Ramiro A1 - Riedel, Christian A1 - Rossi, Angelo Pio A1 - Unnithan, Vikram T1 - Depth to diameter analysis on small simple craters at the lunar south pole - possible implications for ice harboring JF - Remote sensing N2 - In this paper, we present a study comparing the depth to diameter (d/D) ratio of small simple craters (200-1000 m) in an area between -88.5 and -90 degrees latitude at the lunar south pole containing Permanently Shadowed Regions (PSRs) versus craters without PSRs. As PSRs can reach temperatures of 110 K and are capable of harboring volatiles, especially water ice, we analyzed the depth-to-diameter ratios and their possible implications for harboring water ice. Variations in the d/D ratios can also be caused by other processes such as degradation, isostatic adjustment, or differences in surface properties. The d/D ratio analysis suggests that craters containing PSRs differ from craters without PSRs. Thus, a direct relation between d/D ratio, PSRs, and water ice harboring might exist.
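As a pointer to the ASP machinery behind the Breitenreiter et al. record above, here is a minimal sketch of enumerating the answer sets of a tiny program with the clingo Python API; the paper's tightly interconnected approximate model counting is not reproduced, and the two-rule program is a toy example, not one of the analyzed circuits.

```python
# A minimal sketch, assuming the clingo Python package: enumerate all
# answer sets of a tiny program. Counting answer sets (exactly or
# approximately) underlies the error propagation estimates in the paper.
import clingo

ctl = clingo.Control(["0"])  # "0" asks the solver for all answer sets
ctl.add("base", [], "a :- not b. b :- not a.")
ctl.ground([("base", [])])

models = []
ctl.solve(on_model=lambda m: models.append(str(m)))
print(len(models), "answer sets:", models)
```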
Differences in the target's surface properties may explain the obtained results. The resulting d/D ratios of craters with PSRs can help to select target areas for future In-Situ Resource Utilization (ISRU) missions. KW - craters KW - lunar exploration KW - ice harboring Y1 - 2022 U6 - https://doi.org/10.3390/rs14030450 SN - 2072-4292 VL - 14 IS - 3 PB - MDPI CY - Basel ER - TY - JOUR A1 - Chen, Junchao A1 - Lange, Thomas A1 - Andjelkovic, Marko A1 - Simevski, Aleksandar A1 - Lu, Li A1 - Krstic, Milos T1 - Solar particle event and single event upset prediction from SRAM-based monitor and supervised machine learning JF - IEEE transactions on emerging topics in computing / IEEE Computer Society, Institute of Electrical and Electronics Engineers N2 - The intensity of cosmic radiation may vary by over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), thus increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. Therefore, it is vital to enable the early detection of SEU rate changes in order to ensure timely activation of dynamic radiation hardening measures. In this paper, an embedded approach for the prediction of SPEs and the SRAM SEU rate is presented. The proposed solution combines a real-time SRAM-based SEU monitor, an offline-trained machine learning model, and an online learning algorithm for the prediction. With respect to the state of the art, our solution brings the following benefits: (1) Use of the existing on-chip data storage SRAM as a particle detector, thus minimizing the hardware and power overhead, (2) Prediction of the SRAM SEU rate one hour in advance, with fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions, (3) Online optimization of the prediction model for enhancing the prediction accuracy during runtime, (4) Negligible cost of the hardware accelerator design for the implementation of the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable and self-adaptive multiprocessing system employed in space applications, allowing the radiation mitigation mechanisms to be triggered before the onset of high radiation levels. KW - Machine learning KW - Single event upsets KW - Random access memory KW - monitoring KW - machine learning algorithms KW - predictive models KW - space missions KW - solar particle event KW - single event upset KW - machine learning KW - online learning KW - hardware accelerator KW - reliability KW - self-adaptive multiprocessing system Y1 - 2022 U6 - https://doi.org/10.1109/TETC.2022.3147376 SN - 2168-6750 VL - 10 IS - 2 SP - 564 EP - 580 PB - Institute of Electrical and Electronics Engineers CY - [New York, NY] ER - TY - JOUR A1 - Abdelwahab, Ahmed A1 - Landwehr, Niels T1 - Deep Distributional Sequence Embeddings Based on a Wasserstein Loss JF - Neural processing letters N2 - Deep metric learning employs deep neural networks to embed instances into a metric space such that distances between instances of the same class are small and distances between instances from different classes are large. In most existing deep metric learning techniques, the embedding of an instance is given by a feature vector produced by a deep neural network, and Euclidean distance or cosine similarity defines the distance between these vectors.
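To make the d/D comparison in the Marco Figuera et al. record above concrete, here is a minimal sketch of the arithmetic; the crater measurements are invented illustrative values, not data from the paper.

```python
# A minimal sketch of a depth-to-diameter (d/D) comparison between craters
# with and without permanently shadowed regions (PSRs).
from statistics import mean

# (depth in m, diameter in m, crater contains a PSR)
craters = [
    (58.0, 310.0, True), (71.0, 420.0, True), (103.0, 610.0, True),
    (90.0, 505.0, False), (66.0, 350.0, False), (88.0, 480.0, False),
]

d_over_D_psr = [d / D for d, D, has_psr in craters if has_psr]
d_over_D_no_psr = [d / D for d, D, has_psr in craters if not has_psr]
print(f"mean d/D with PSRs:    {mean(d_over_D_psr):.3f}")
print(f"mean d/D without PSRs: {mean(d_over_D_no_psr):.3f}")
```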
This paper studies deep distributional embeddings of sequences, where the embedding of a sequence is given by the distribution of learned deep features across the sequence. The motivation for this is to better capture statistical information about the distribution of patterns within the sequence in the embedding. When embeddings are distributions rather than vectors, measuring distances between embeddings involves comparing their respective distributions. The paper therefore proposes a distance metric based on Wasserstein distances between the distributions and a corresponding loss function for metric learning, which leads to a novel end-to-end trainable embedding model. We empirically observe that distributional embeddings outperform standard vector embeddings and that training with the proposed Wasserstein metric outperforms training with other distance functions. KW - Metric learning KW - Sequence embeddings KW - Deep learning Y1 - 2022 U6 - https://doi.org/10.1007/s11063-022-10784-y SN - 1370-4621 SN - 1573-773X PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Tran, Son Cao A1 - Pontelli, Enrico A1 - Balduccini, Marcello A1 - Schaub, Torsten T1 - Answer set planning BT - a survey JF - Theory and practice of logic programming N2 - Answer Set Planning refers to the use of Answer Set Programming (ASP) to compute plans, that is, solutions to planning problems, that transform a given state of the world to another state. The development of efficient and scalable answer set solvers has provided a significant boost to the development of ASP-based planning systems. This paper surveys the progress made during the last two and a half decades in the area of answer set planning, from its foundations to its use in challenging planning domains. The survey explores the advantages and disadvantages of answer set planning. It also discusses typical applications of answer set planning and presents a set of challenges for future research. KW - planning KW - knowledge representation and reasoning KW - logic programming Y1 - 2022 U6 - https://doi.org/10.1017/S1471068422000072 SN - 1471-0684 SN - 1475-3081 PB - Cambridge University Press CY - New York ER - TY - JOUR A1 - Molkenthin, Christian A1 - Donner, Christian A1 - Reich, Sebastian A1 - Zöller, Gert A1 - Hainzl, Sebastian A1 - Holschneider, Matthias A1 - Opper, Manfred T1 - GP-ETAS: semiparametric Bayesian inference for the spatio-temporal epidemic type aftershock sequence model JF - Statistics and Computing N2 - The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows us to incorporate prior knowledge and to quantify the uncertainty of the resulting estimates. Therefore, we introduce a highly flexible, non-parametric representation for the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function.
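As an illustration of the distributional view in the Abdelwahab and Landwehr record above, here is a minimal sketch of comparing two sequences embedded as distributions of per-step features; aggregating coordinate-wise 1-D Wasserstein distances is a simplifying assumption for the sketch, not the paper's exact metric or loss.

```python
# A minimal sketch: two sequences are represented by the distributions of
# their per-step feature vectors, and compared via per-coordinate 1-D
# Wasserstein distances.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Two sequences, each with 50 per-step feature vectors of dimension 8,
# standing in for learned deep features.
seq_a = rng.normal(0.0, 1.0, (50, 8))
seq_b = rng.normal(0.5, 1.2, (50, 8))

dist = sum(wasserstein_distance(seq_a[:, j], seq_b[:, j]) for j in range(8))
print(f"distributional distance: {dist:.3f}")
```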
Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models and demonstrate its predictive power for observed earthquake catalogues, including uncertainty quantification for the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event on 6 April 2009, is presented. KW - Self-exciting point process KW - Hawkes process KW - Spatio-temporal ETAS model KW - Bayesian inference KW - Sampling KW - Earthquake modeling KW - Gaussian process KW - Data augmentation Y1 - 2022 U6 - https://doi.org/10.1007/s11222-022-10085-3 SN - 0960-3174 SN - 1573-1375 VL - 32 IS - 2 PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Andjelkovic, Marko A1 - Simevski, Aleksandar A1 - Chen, Junchao A1 - Schrape, Oliver A1 - Stamenkovic, Zoran A1 - Krstic, Miloš A1 - Ilic, Stefan A1 - Ristic, Goran A1 - Jaksic, Aleksandar A1 - Vasovic, Nikola A1 - Duane, Russell A1 - Palma, Alberto J. A1 - Lallena, Antonio M. A1 - Carvajal, Miguel A. T1 - A design concept for radiation hardened RADFET readout system for space applications JF - Microprocessors and microsystems N2 - Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and processing electronics for computing the absorbed dose and dose rate. Among a wide range of existing radiation sensors, Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement and a proven record of successful exploitation in space missions. It has been shown that RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as an absorbed dose and dose rate monitor. This reduces the implementation cost, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response, we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing the autonomous switching between three operating modes (high-performance, de-stress and fault-tolerant), according to the application requirements and radiation conditions. KW - RADFET KW - Radiation hardness KW - Absorbed dose KW - Dose rate KW - Self-adaptive MPSoC Y1 - 2022 U6 - https://doi.org/10.1016/j.micpro.2022.104486 SN - 0141-9331 SN - 1872-9436 VL - 90 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Bandyopadhyay, Soumyadip A1 - Sarkar, Dipankar A1 - Mandal, Chittaranjan A1 - Giese, Holger T1 - Translation validation of coloured Petri net models of programs on integers JF - Acta informatica N2 - Programs are often subjected to significant optimizing and parallelizing transformations based on extensive dependence analysis. Formal validation of such transformations needs modelling paradigms which can capture both control and data dependences in the program vividly.
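For reference alongside the Molkenthin et al. GP-ETAS record above, one common parameterization of the spatio-temporal ETAS conditional intensity is shown below; in GP-ETAS the background rate receives a Gaussian process prior. The exponential productivity k(m) and normalized Omori kernel g(tau) are standard textbook choices, stated here as assumptions rather than the paper's exact definitions.

```latex
% Conditional intensity of the spatio-temporal ETAS model given the
% history \mathcal{H}_t of past events (t_i, x_i, y_i, m_i); in GP-ETAS
% the background \mu(x, y) carries a Gaussian process prior.
\[
  \lambda(t, x, y \mid \mathcal{H}_t)
    = \mu(x, y)
    + \sum_{i \,:\, t_i < t} k(m_i) \, g(t - t_i) \, f(x - x_i, \, y - y_i \mid m_i)
\]
% Standard triggering-kernel choices (assumptions for this sketch):
\[
  k(m) = K \, e^{\alpha (m - m_0)},
  \qquad
  g(\tau) = \frac{(p - 1) \, c^{\,p - 1}}{(\tau + c)^{p}}, \quad \tau > 0
\]
```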
Being value-based with an inherent scope for capturing parallelism, the untimed coloured Petri net (CPN) models reported in the literature fit the bill well; accordingly, they are likely to be more convenient as the intermediate representations (IRs) of both the source and the transformed codes for translation validation than strictly sequential variable-based IRs like sequential control flow graphs (CFGs). In this work, an efficient path-based equivalence checking method for CPN models of programs on integers is presented. Extensive experimentation has been carried out on several sequential and parallel examples. Complexity and correctness issues have been treated rigorously for the method. Y1 - 2022 U6 - https://doi.org/10.1007/s00236-022-00419-z SN - 0001-5903 SN - 1432-0525 VL - 59 IS - 6 SP - 725 EP - 759 PB - Springer CY - New York ER - TY - JOUR A1 - Michallek, Florian A1 - Genske, Ulrich A1 - Niehues, Stefan Markus A1 - Hamm, Bernd A1 - Jahnke, Paul T1 - Deep learning reconstruction improves radiomics feature stability and discriminative power in abdominal CT imaging BT - a phantom study JF - European Radiology N2 - Objectives: To compare image quality of deep learning reconstruction (AiCE) for radiomics feature extraction with filtered back projection (FBP), hybrid iterative reconstruction (AIDR 3D), and model-based iterative reconstruction (FIRST). Methods: Effects of image reconstruction on radiomics features were investigated using a phantom that realistically mimicked a 65-year-old patient's abdomen with hepatic metastases. The phantom was scanned at 18 doses from 0.2 to 4 mGy, with 20 repeated scans per dose. Images were reconstructed with FBP, AIDR 3D, FIRST, and AiCE. Ninety-three radiomics features were extracted from 24 regions of interest, which were evenly distributed across three tissue classes: normal liver, metastatic core, and metastatic rim. Features were analyzed in terms of their consistent characterization of tissues within the same image (intraclass correlation coefficient >= 0.75), discriminative power (Kruskal-Wallis test p value < 0.05), and repeatability (overall concordance correlation coefficient >= 0.75). Results: The median fraction of consistent features across all doses was 6%, 8%, 6%, and 22% with FBP, AIDR 3D, FIRST, and AiCE, respectively. Adequate discriminative power was achieved by 48%, 82%, 84%, and 92% of features, and 52%, 20%, 17%, and 39% of features were repeatable, respectively. Only 5% of features combined consistency, discriminative power, and repeatability with FBP, AIDR 3D, and FIRST versus 13% with AiCE at doses above 1 mGy and 17% at doses >= 3 mGy. AiCE was the only reconstruction technique that enabled extraction of higher-order features. Conclusions: AiCE more than doubled the yield of radiomics features at doses typically used clinically. Inconsistent tissue characterization within CT images contributes significantly to the poor stability of radiomics features. KW - Tomography, X-ray computed KW - Phantoms, imaging KW - Liver neoplasms KW - Algorithms KW - Reproducibility of results Y1 - 2022 U6 - https://doi.org/10.1007/s00330-022-08592-y SN - 1432-1084 VL - 32 IS - 7 SP - 4587 EP - 4595 PB - Springer CY - New York ER -
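Closing the list, here is a minimal sketch of a classical binary Berger code, the starting point of the Klockmann thesis near the top of this list: the check symbol encodes the number of zeros in the information word, so any all-unidirectional error is detected. The multi-level generalizations developed in the thesis are not shown, and the helper names are illustrative.

```python
# A minimal sketch of a binary Berger code (check symbol = number of zeros
# in the information word, written in binary).
def berger_encode(info_bits: list[int]) -> list[int]:
    k = len(info_bits).bit_length()  # ceil(log2(n + 1)) check bits
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in range(k - 1, -1, -1)]
    return info_bits + check

def berger_check(word: list[int], n_info: int) -> bool:
    info, check = word[:n_info], word[n_info:]
    zeros = info.count(0)
    expected = [(zeros >> i) & 1 for i in range(len(check) - 1, -1, -1)]
    return check == expected

word = berger_encode([1, 0, 1, 1, 0, 0, 1, 0])
assert berger_check(word, 8)
word[1] = 1  # a 0 -> 1 unidirectional bit flip in the information part
assert not berger_check(word, 8)
print("unidirectional error detected")
```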