TY - JOUR
A1 - Prasse, Paul
A1 - Iversen, Pascal
A1 - Lienhard, Matthias
A1 - Thedinga, Kristina
A1 - Herwig, Ralf
A1 - Scheffer, Tobias
T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction
JF - Cancers
N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves both the models’ accuracy and the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we conclude that pre-trained models outperform models that have been trained only on the target domains in the vast majority of cases.
KW - deep neural networks
KW - drug-sensitivity prediction
KW - anti-cancer drugs
Y1 - 2022
U6 - https://doi.org/10.3390/cancers14163950
SN - 2072-6694
VL - 14
IS - 16
SP - 1
EP - 14
PB - MDPI
CY - Basel, Switzerland
ER -
TY - JOUR
A1 - Hecher, Markus
T1 - Treewidth-aware reductions of normal ASP to SAT
BT - is normal ASP harder than SAT after all?
JF - Artificial intelligence
N2 - Answer Set Programming (ASP) is a paradigm for modeling and solving problems in knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While there are translations from (normal) ASP to SAT, no reduction is known that preserves the treewidth or at least keeps track of the treewidth increase. In this paper we propose a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, where we compare treewidth upper bounds that are obtained via known decomposition heuristics.
Overall, our reduction works better with these heuristics than existing translations.
KW - Answer set programming
KW - Treewidth
KW - Parameterized complexity
KW - Complexity analysis
KW - Tree decomposition
KW - Treewidth-aware reductions
Y1 - 2022
U6 - https://doi.org/10.1016/j.artint.2021.103651
SN - 0004-3702
SN - 1872-7921
VL - 304
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Al Laban, Firas
A1 - Reger, Martin
A1 - Lucke, Ulrike
T1 - Closing the Policy Gap in the Academic Bridge
JF - Education sciences
N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
KW - policy evaluation
KW - higher education
KW - virtual mobility
KW - teacher training
Y1 - 2022
U6 - https://doi.org/10.3390/educsci12120930
SN - 2227-7102
VL - 12
IS - 12
PB - MDPI
CY - Basel
ER -
TY - THES
A1 - Böken, Björn
T1 - Improving prediction accuracy using dynamic information
N2 - Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished. On the other hand, state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, so that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions. This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated.
The analysis provided focuses on monotonic calibration and, in particular, corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions is proven, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets. Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusion step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations to be proven for the overall combinations. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information. Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied to 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that, in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks the differences between the approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in the respective applications, which requires an appropriate digital infrastructure.
N2 - Accurately solving classification problems is nowadays probably the most relevant machine learning task.
Binary classification, which distinguishes only two classes, is algorithmically simpler but has fewer potential applications, since multi-class problems often arise in practice. In contrast, distinguishing only within a subset of classes simplifies the problem. Although many existing machine learning algorithms are very flexible with respect to the number of classes, they assume that the target set Y is fixed and can no longer be restricted once training is finished. However, with the advance of Industry 4.0 and related technologies, modern production environments are increasingly digitally interconnected, so that additional information can simplify the respective classification problems. Against this background, the main goal of this thesis is to introduce dynamic classification as a generalization of multi-class classification, in which the target set can be restricted to an arbitrary, non-empty subset at any time between two consecutive predictions. This task is solved by combining two algorithmic approaches. First, classifier calibration is used, by means of which predictions are transformed into estimates of the posterior probabilities that are intended to be well calibrated. The analysis carried out focuses on monotonic calibration and, in particular, corrects false statements that have been published in reference works. It also shows that bin-based error measures, which have become popular in recent years, are unjustified and should not be used. Furthermore, the validity of Platt scaling, the most relevant parametric calibration method, is analyzed in detail. In particular, its optimality for classifier predictions distributed according to four families of distribution functions is shown, as well as its equivalence to Beta calibration up to a sigmoidal preprocessing. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration methods are evaluated in a simulation study with complete information as well as on 46 benchmark data sets. Building on this, classifier calibration is used as part of reduction-based classification, which aims to reduce multi-class problems to simpler (usually binary) decision problems. For the associated fusion step required at prediction time, a new approach based on evidence theory is introduced that uses classifier calibration to model mass functions. This makes it possible to analyze reduction-based classification in a formal context and to prove closed-form expressions for the corresponding overall combinations. In addition, the same formalism leads to a consistent integration of dynamic class information, resulting in a theoretically well-founded and efficiently computable dynamic classification model. The insights gained are combined with pairwise coupling, one of the most relevant methods for reduction-based classification, whereby all individual predictions are combined with a weighting. This not only generalizes existing approaches to pairwise coupling but also enables the integration of dynamic class information.
Finally, a comprehensive empirical study is carried out that compares all newly introduced methods with the state of the art. For this purpose, evaluation metrics for dynamic classification that are based on sampling strategies are introduced. These are then applied in a three-part study. First, support vector machines and random forests are applied to 26 benchmark data sets from the UCI Machine Learning Repository. In the second part, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, strategies that enable the introduced methods to be applied in combination with large models are particularly relevant, since a naive approach is not feasible. Finally, a reference data set that allows the integration of dynamic class information is collected from a production process and evaluated. The results show that pairwise coupling methods yield the best results in combination with support vector machines and random forests, while in combination with deep neural networks the differences between the methods are often small to negligible. Most importantly, all results show that dynamic classification improves the respective prediction accuracies. It is therefore crucial to make dynamic class information available in the respective applications, which requires an appropriate digital infrastructure.
KW - dynamic classification
KW - multi-class classification
KW - classifier calibration
KW - evidence theory
KW - Dempster–Shafer theory
KW - Deep Learning
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-585125
ER -
TY - GEN
A1 - Prasse, Paul
A1 - Iversen, Pascal
A1 - Lienhard, Matthias
A1 - Thedinga, Kristina
A1 - Herwig, Ralf
A1 - Scheffer, Tobias
T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction
T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe
N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves both the models’ accuracy and the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we conclude that pre-trained models outperform models that have been trained only on the target domains in the vast majority of cases.
T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1300
KW - deep neural networks
KW - drug-sensitivity prediction
KW - anti-cancer drugs
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-577341
SN - 1866-8372
SP - 1
EP - 14
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - GEN
A1 - Al Laban, Firas
A1 - Reger, Martin
A1 - Lucke, Ulrike
T1 - Closing the Policy Gap in the Academic Bridge
T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe
N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1310
KW - policy evaluation
KW - higher education
KW - virtual mobility
KW - teacher training
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-583572
SN - 1866-8372
IS - 1310
ER -
TY - JOUR
A1 - Lorenz, Claas
A1 - Clemens, Vera Elisabeth
A1 - Schrötter, Max
A1 - Schnor, Bettina
T1 - Continuous verification of network security compliance
JF - IEEE transactions on network and service management
N2 - Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. However, the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification tools, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security.
Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.
KW - Security
KW - Tools
KW - Network security
KW - Engines
KW - Benchmark testing
KW - Analytical models
KW - Scalability
KW - Network security compliance
KW - Formal verification
Y1 - 2021
U6 - https://doi.org/10.1109/TNSM.2021.3130290
SN - 1932-4537
VL - 19
IS - 2
SP - 1729
EP - 1745
PB - Institute of Electrical and Electronics Engineers
CY - New York
ER -
TY - JOUR
A1 - Prasse, Paul
A1 - Iversen, Pascal
A1 - Lienhard, Matthias
A1 - Thedinga, Kristina
A1 - Bauer, Christopher
A1 - Herwig, Ralf
A1 - Scheffer, Tobias
T1 - Matching anticancer compounds and tumor cell lines by neural networks with ranking loss
JF - NAR: genomics and bioinformatics
N2 - Computational drug sensitivity models have the potential to improve therapeutic outcomes by identifying targeted drug components that are likely to achieve the highest efficacy for a cancer cell line at hand at a therapeutic dose. State-of-the-art drug sensitivity models use regression techniques to predict the inhibitory concentration of a drug for a tumor cell line. This regression objective is not directly aligned with the principal goals of drug sensitivity models: we argue that drug sensitivity modeling should be seen as a ranking problem with an optimization criterion that quantifies a drug's inhibitory capacity for the cancer cell line at hand relative to its toxicity for healthy cells. We derive an extension to the well-established drug sensitivity regression model PaccMann that employs a ranking loss and focuses on the ratio of inhibitory concentration and therapeutic dosage range. We find that the ranking extension significantly enhances the model's capability to identify the most effective anticancer drugs for unseen tumor cell profiles based on in vitro data.
Y1 - 2022
U6 - https://doi.org/10.1093/nargab/lqab128
SN - 2631-9268
VL - 4
IS - 1
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Steinert, Fritjof
A1 - Stabernack, Benno
T1 - Architecture of a low latency H.264/AVC video codec for robust ML based image classification
BT - how region of interests can minimize the impact of coding artifacts
JF - Journal of Signal Processing Systems for Signal, Image, and Video Technology
N2 - The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose, which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since the corresponding training was also carried out using image data of similarly high quality. However, if the image data contain image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts that occur due to image and video compression. Typical application scenarios for video compression are narrowband transmission channels, for which video coding is required but a subsequent classification is to be carried out on the receiver side.
In this paper we present a special H.264/Advanced Video Codec (AVC) based video codec that allows certain regions of a picture to be coded with near-constant picture quality in order to allow reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We have combined this feature with the ability to operate at very low latency, which is usually also required in remote-control application scenarios. The codec has been implemented as a fully hardwired hardware architecture capable of High Definition video, which is suitable for Field Programmable Gate Arrays.
KW - H.264
KW - Advanced Video Codec (AVC)
KW - Low Latency
KW - Region of Interest
KW - Machine Learning
KW - Inference
KW - FPGA
KW - Hardware accelerator
Y1 - 2022
U6 - https://doi.org/10.1007/s11265-021-01727-2
SN - 1939-8018
SN - 1939-8115
VL - 94
IS - 7
SP - 693
EP - 708
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Breitenreiter, Anselm
A1 - Andjelković, Marko
A1 - Schrape, Oliver
A1 - Krstić, Miloš
T1 - Fast error propagation probability estimates by answer set programming and approximate model counting
JF - IEEE Access
N2 - We present a method employing Answer Set Programming in combination with Approximate Model Counting for fast and accurate calculation of error propagation probabilities in digital circuits. Through an efficient problem encoding, we achieve an input data format similar to a Verilog netlist, so that extensive preprocessing is avoided. By tightly interconnecting our application with the underlying solver, we avoid iterating over fault sites and reduce the number of solver calls. Several circuits were analyzed with varying numbers of considered cycles and different degrees of approximation. Our experiments show that approximation can reduce the runtime by a factor of 91, while the error compared to the exact result remains below 1%.
KW - Circuit faults
KW - Integrated circuit modeling
KW - Programming
KW - Analytical models
KW - Search problems
KW - Flip-flops
KW - Encoding
KW - Answer set programming
KW - approximate model counting
KW - error propagation
KW - radhard design
KW - reliability analysis
KW - selective fault tolerance
KW - single event upsets
Y1 - 2022
U6 - https://doi.org/10.1109/ACCESS.2022.3174564
SN - 2169-3536
VL - 10
SP - 51814
EP - 51825
PB - Inst. of Electr. and Electronics Engineers
CY - Piscataway
ER -