TY - THES
A1 - Frank, Mario
T1 - On synthesising Linux kernel module components from Coq formalisations
T1 - Über die Synthese von Linux-Kernel-Modul-Komponenten aus Coq-Formalisierungen
N2 - This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux kernel. In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely the extraction to functional languages using the Coq extraction plugin and the extraction to Clight code using the CertiCoq plugin. It is noted that the implementation of CertiCoq is verified, whereas this is not the case for the Coq extraction plugin. Consequently, there is a correctness guarantee for the generated Clight code which does not hold for code generated by the Coq extraction plugin. Furthermore, the differences between user space and kernel space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user space drivers both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis. In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type safety of user space drivers. Furthermore, it is shown that the code synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, resulting in the runtimes becoming compatible components of a Linux kernel module. Justifications for these transformations are provided, and possible further extensions to both plugins as well as solutions to failing garbage collection calls in kernel space are discussed. The third part presents a proof-of-concept device driver for the Linux kernel. To achieve this, the event handler of the original PC speaker driver is partially formalised in Coq, and some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the correctness guarantee to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC speaker functionality with the original PC speaker driver, pointing out weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions they raise. The last part lists all used sources, separated into scientific literature, documentation and reference manuals, and artifacts, i.e. source code.
N2 - Die vorliegende Dissertation präsentiert einen Ansatz zur Nutzung von Quellcode, der aus der Coq-Formalisierung eines Gerätetreibers generiert wurde, für bestehende (Mikrokernel-)Betriebssysteme, im Speziellen den Linux-Kernel.
Im ersten Teil erfolgt eine Beschreibung der relevanten technischen Aspekte sowie des aktuellen Forschungsstandes. Dabei liegt der Fokus auf der Synthese von funktionalem Code durch das Coq Extraction Plugin und von Clight Code durch das CertiCoq Plugin. Des Weiteren wird dargelegt, dass die Implementierung von CertiCoq im Gegensatz zu der des Coq Extraction Plugins verifiziert ist, wodurch sich eine Korrektheitsgarantie für den generierten Clight Code ableiten lässt. Darüber hinaus werden die Unterschiede zwischen User-Space- und Kernel-Space-Software in Bezug auf Linux-Treiber erörtert. Unter Berücksichtigung der technischen Einschränkungen wird dargelegt, dass der durch das Coq Extraction Plugin generierte Code ohne gravierende Anpassungen der Laufzeitumgebung nicht als Teil eines Kernel-Space-Treibers nutzbar ist. Die nachfolgenden Teile der Dissertation behandeln den Beitrag dieser Arbeit. Im zweiten Teil wird dargelegt, wie das Coq Extraction Plugin derart erweitert werden kann, dass typsichere Aufrufe zwischen den Sprachen OCaml und C generiert werden können. Dies verhindert spezifische Kompilationsfehler aufgrund von Typfehlern. Des Weiteren wird aufgezeigt, dass der durch CertiCoq generierte Code ebenfalls nicht im Kernel Space genutzt werden kann, da die Laufzeitumgebung technische Einschränkungen verletzt. Daher werden die notwendigen Anpassungen an der vergleichsweise kleinen Laufzeitumgebung sowie an VeriFFI vorgestellt und deren Korrektheit begründet. Anschließend werden mögliche Erweiterungen beider Plugins sowie die Möglichkeit der Behandlung von fehlschlagenden Aufrufen der Garbage Collection von CertiCoq im Kernel Space erörtert. Im dritten Teil wird als Machbarkeitsstudie im ersten Schritt der Event-Handler des Linux-PC-Speaker-Treibers beschrieben und eine naive Coq-Formalisierung sowie wichtige formale Eigenschaften dargelegt. Dann wird beschrieben, wie ein Kernel-Modul und dessen Kompilation definiert werden müssen, um einen lauffähigen Linux-Kernel-Treiber zu erhalten. Des Weiteren wird erläutert, wie die generierten Teile dieses Treibers mit dem verifizierten Compiler CompCert übersetzt werden können, wodurch die Korrektheitsgarantie auch für den resultierenden Assembler-Code gilt. Im Anschluss erfolgt eine Evaluierung der Performance des aus der naiven Coq-Formalisierung generierten Codes im Vergleich zum originalen PC-Speaker-Treiber. Dabei werden die Schwächen der Formalisierung sowie mögliche Verbesserungen diskutiert. Der Teil wird mit einer Zusammenfassung der Ergebnisse sowie der daraus resultierenden offenen Fragen abgeschlossen. Der letzte Teil gibt eine Übersicht über genutzte Quellen und Hilfsmittel, unterteilt in wissenschaftliche Literatur, Dokumentationen sowie Software-Artefakte.
KW - Linux device drivers
KW - Coq
KW - CertiCoq
KW - synthesis
KW - compilation
KW - Geräte-Treiber
KW - Linux
KW - Coq
KW - CertiCoq
KW - Synthese
KW - Kompilation
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-642558
ER -

TY - THES
A1 - Repp, Leo
T1 - Extending the automatic theorem prover nanoCoP with arithmetic procedures
T1 - Erweiterung des automatischen Theorembeweisers nanoCoP um Arithmetik und Gleichheit behandelnde Verfahren
N2 - In dieser Bachelorarbeit implementiere ich den automatischen Theorembeweiser nanoCoP-Ω. Es handelt sich bei diesem neuen System um das Ergebnis einer Portierung von Arithmetik behandelnden Prozeduren aus dem automatischen Theorembeweiser mit Arithmetik leanCoP-Ω in das System nanoCoP 2.0.
Dazu wird zuerst der mathematische Hintergrund zu automatischen Theorembeweisern und Arithmetik gegeben. Ich stelle die Vorgängerprojekte leanCoP, nanoCoP und leanCoP-Ω vor, auf deren Vorlage nanoCoP-Ω entwickelt wurde. Es folgt eine ausführliche Erklärung der Konzepte, um welche der nicht-klausale Konnektionskalkül erweitert werden muss, um eine Behandlung von arithmetischen Ausdrücken und Gleichheiten in den Kalkül zu integrieren, sowie eine Beschreibung der Implementierung dieser Konzepte in nanoCoP-Ω. Als Letztes folgt eine experimentelle Evaluation von nanoCoP-Ω. Es wurde ein ausführlicher Vergleich von Laufzeit und Anzahl gelöster Probleme mit dem ähnlich aufgebauten Theorembeweiser leanCoP-Ω auf Basis der TPTP-Benchmark durchgeführt. Ich komme zu dem Ergebnis, dass nanoCoP-Ω deutlich schneller als leanCoP-Ω ist, jedoch für größere Probleme weniger gut geeignet ist. Zudem konnte ich feststellen, dass nanoCoP-Ω falsche Beweise liefern kann. Ich bespreche, wie dieses Problem gelöst werden kann, sowie einige mögliche Optimierungen und Erweiterungen des Beweissystems.
N2 - In this bachelor’s thesis I implement the automatic theorem prover nanoCoP-Ω. This system is the result of porting the arithmetic and equality handling procedures first introduced in the automatic theorem prover with arithmetic leanCoP-Ω into the similar system nanoCoP 2.0. To understand these procedures, I first introduce the mathematical background to both automatic theorem proving and arithmetic expressions. I present the predecessor projects leanCoP, nanoCoP and leanCoP-Ω, out of which nanoCoP-Ω was developed. This is followed by an extensive description of the concepts by which the non-clausal connection calculus had to be extended to allow for proving arithmetic expressions and equalities, as well as of their implementation in nanoCoP-Ω. An extensive comparison of both the runtimes and the number of solved problems of the systems nanoCoP-Ω and leanCoP-Ω was made on the basis of the TPTP benchmark. I come to the conclusion that nanoCoP-Ω is considerably faster than leanCoP-Ω for small problems, though less well suited for larger problems. Additionally, I was able to construct a non-theorem that nanoCoP-Ω generates a false proof for. I discuss how this pressing issue could be resolved, as well as some possible optimizations and expansions of the system.
KW - automatic theorem prover
KW - leanCoP
KW - connection calculus
KW - TPTP
KW - arithmetic procedures
KW - equality
KW - omega
KW - arithmetische Prozeduren
KW - automatisierter Theorembeweiser
KW - Konnektionskalkül
KW - Gleichheit
KW - leanCoP
KW - Omega
KW - TPTP
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-576195
ER -

TY - THES
A1 - Kaminski, Roland
T1 - Complex reasoning with answer set programming
N2 - Answer Set Programming (ASP) allows us to address knowledge-intensive search and optimization problems in a declarative way due to its integrated modeling, grounding, and solving workflow. A problem is modeled using a rule-based language and then grounded and solved. Solving results in a set of stable models that correspond to solutions of the modeled problem. In this thesis, we present the design and implementation of the clingo system, perhaps the most widely used ASP system.
It features a rich modeling language originating from the field of knowledge representation and reasoning, efficient grounding algorithms based on database evaluation techniques, and high-performance solving algorithms based on Boolean satisfiability (SAT) solving technology. The contributions of this thesis lie in the design of the modeling language, the design and implementation of the grounding algorithms, and the design and implementation of an Application Programming Interface (API) that facilitates the use of ASP in real-world applications and the implementation of complex forms of reasoning beyond the traditional ASP workflow.
KW - Answer Set Programming
KW - Declarative Problem Solving
KW - Grounding Theory
KW - Preference Handling
KW - Answer Set Solving modulo Theories
KW - Temporal Answer Set Solving
Y1 - 2023
ER -

TY - THES
A1 - Middelanis, Robin
T1 - Global response to local extremes—a storyline approach on economic loss propagation from weather extremes
T1 - Globale Reaktion auf lokale Extreme — ein Storyline-Ansatz zu ökonomischer Schadensausbreitung aufgrund von Wetterextremen
N2 - Due to anthropogenic greenhouse gas emissions, Earth’s average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure, and disrupting economic and societal activity. Yet, these effects are not locally confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of indirect impacts and the underlying economic mechanisms. Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response. Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase in the most extreme snowfall, while mean snowfall decreases. Assessing repercussions beyond local impacts, I employ simulations to compute indirect economic effects from weather extremes with the numerical agent-based shock propagation model Acclimate. This model is used in conjunction with the recently emerged storyline framework, which involves analyzing the impacts of a particular reference extreme event and comparing them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation. In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate.
The storyline for Hurricane Sandy focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses for a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss. The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events. The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase. Advancing on the methods and tools used here, the final two articles in my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices about the goods they buy such that their utility is maximized under a constrained budget. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices. The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including the storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline. Overall, this thesis contributes towards a better understanding of the economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting mechanisms and impacts of loss propagation, and advancing on the methods and tools used.
N2 - Mit dem kontinuierlichen Anstieg der globalen Mitteltemperatur aufgrund anthropogener Treibhausgasemissionen kann die Intensität und Häufigkeit vieler Wetterextreme zunehmen. Diese haben das Potential, sowohl natürliche als auch menschliche Systeme stark zu beeinträchtigen. So hat die Zerstörung von Vermögenswerten und Infrastruktur sowie die Unterbrechung gesellschaftlicher und ökonomischer Abläufe oft negative wirtschaftliche Konsequenzen für direkt betroffene Regionen. Die Auswirkungen sind jedoch nicht lokal begrenzt, sondern können sich entlang von Lieferketten ausbreiten und somit auch indirekte Folgen in anderen Regionen haben – bis hin zu einer potenziell globalen wirtschaftlichen Reaktion. Daher sind Strategien zur Anpassung an veränderliche Klimabedingungen notwendig, um die Resilienz globaler Handelsketten zu stärken und dadurch negative sozioökonomische Folgen abzumildern. Hierfür ist ein besseres Verständnis lokaler Auswirkungen sowie ökonomischer Mechanismen zur Schadensausbreitung und deren Folgen erforderlich. Die vorliegende Dissertation umfasst insgesamt sechs Artikel, die zu diesem Verständnis beitragen. In diesen Studien werden zunächst lokale Auswirkungen von Wetterextremen unter gegenwärtigen und zukünftigen klimatischen Bedingungen untersucht. Weiterhin werden die globalen wirtschaftlichen Auswirkungen lokaler Wetterextreme sowie die darunterliegenden ökonomischen Effekte analysiert. In diesem Zusammenhang trägt diese Arbeit ferner zur Weiterentwicklung der verwendeten Methoden und Ansätze bei. Der erste Artikel widmet sich zunächst der Betrachtung von extremem Schneefall in der nördlichen Hemisphäre unter dem Einfluss des Klimawandels. Zu diesem Zweck wird ein Ensemble von Projektionen globaler Klimamodelle bis zum Ende des Jahrhunderts analysiert. Die Projektionen zeigen dabei eine Verstärkung von extremen Schneefallereignissen, während die mittlere Schneefallintensität abnimmt. Um indirekte Auswirkungen von Wetterextremen zu erforschen, wird weiterhin das numerische agentenbasierte Modell Acclimate verwendet, welches die Ausbreitung ökonomischer Verlustkaskaden im globalen Versorgungsnetzwerk simuliert. In mehreren sogenannten Storylines werden die Auswirkungen eines historischen Referenzereignisses analysiert und mit den potentiellen Auswirkungen dieses Ereignisses unter plausiblen alternativen klimatischen oder sozioökonomischen Bedingungen verglichen. In dieser Dissertation werden drei zentrale Storylines vorgestellt, die jeweils unterschiedliche Aspekte der Schadensausbreitung von Wetterextremen untersuchen. Im zweiten und dritten Artikel dieser Arbeit werden dazu Storylines für die historischen Hurrikane Sandy (2012) und Harvey (2017) in den USA untersucht. Hierfür werden zunächst die lokalen ökonomischen Verluste durch diese Hurrikane ermittelt, welche als direkte wirtschaftliche Schockereignisse in Acclimate zur Berechnung der globalen Reaktion verwendet werden. Hierbei untersucht die Studie zu Hurricane Sandy globale Konsumpreisanomalien und damit einhergehende Auswirkungen auf das Konsumverhalten. Der direkte Schock löst hier eine wellenartige Veränderung globaler Konsumpreise mit drei Phasen aus, welche aus gegenläufigen Effekten aufwärts und abwärts der Lieferketten resultiert – sogenannten Upstream- und Downstream-Effekten. Zunächst steigt der Konsum aufgrund sinkender Preise durch Upstream-Effekte, bevor die Preise aufgrund von Güterknappheit durch Downstream-Effekte wieder ansteigen und der Konsum abfällt.
In einer Normalisierungsphase klingen diese Anomalien wieder ab. Ein länger anhaltender direkter wirtschaftlicher Schock verstärkt die Downstream-Phase und führt so in vielen Regionen insgesamt zu einem Konsumverlust. Die entwickelte Methode zur Berechnung direkter Verluste wird im dritten Artikel erweitert, indem Verstärkungen unter dem Einfluss des fortschreitenden Klimawandels berücksichtigt werden. Unter Nutzung dieser verstärkten direkten Verluste wird die Veränderung globaler Produktionsanomalien in Reaktion auf Hurricane Harvey simuliert. Die Ergebnisse zeigen, dass die USA bei zunehmender Erwärmung nicht mehr in der Lage sein werden, direkte Produktionsverluste auf nationaler Ebene auszugleichen. Stattdessen muss ein zunehmender Anteil dieser Verluste durch andere, insbesondere exportstarke Länder ausgeglichen werden. Der Anteil kleinerer Regionen an dieser ausgleichenden Produktion nimmt jedoch mit zunehmenden direkten Verlusten zu. Diese Produktionsverschiebungen verdeutlichen die Möglichkeit der globalen Wirtschaft, lokale Katastrophenverluste weitgehend flexibel abzumildern. Gleichzeitig veranschaulichen sie den Wettbewerbsnachteil direkt betroffener Wirtschaftsregionen. Die Storyline im vierten Artikel befasst sich mit dem Einfluss einer globalen wirtschaftlichen Krise auf die Schadensausbreitung von tropischen Wirbelstürmen, Hitzestress und Flussüberschwemmungen weltweit. Hierfür werden die indirekten Auswirkungen dieser Extreme unter dem Einfluss der global reduzierten wirtschaftlichen Aktivität während der Covid-19-Pandemie sowie bei „normaler“ globaler Wirtschaftsleistung simuliert. Der Vergleich beider Szenarien zeigt bei global gestörter Wirtschaft eine deutliche Verstärkung negativer Konsumauswirkungen durch die simulierten Extreme. Konsumverluste steigen besonders stark in den USA und China an, wo sie sich verdoppeln bzw. verdreifachen. Diese Veränderungen resultieren aus der global verminderten wirtschaftlichen Kapazität, die für den Ausgleich der Produktionsverluste von Wetterextremen zur Verfügung steht. Dies verstärkt die extremwetterbedingte Güterknappheit, was zu Preisanstiegen und erhöhten Konsumverlusten führt. Abschließend werden in den letzten beiden Artikeln die in der Arbeit verwendeten Methoden und Ansätze erweitert. Hierfür wird das Modell Acclimate im fünften Artikel weiterentwickelt, indem Konsumenten als rational agierende Agenten modelliert werden. Mit dieser Erweiterung treffen lokale Verbraucher Entscheidungen über die konsumierten Güter so, dass diese den Nutzen eines begrenzten Budgets maximieren. Die entstehende Dynamik kann außerhalb eines wirtschaftlichen Gleichgewichts dazu führen, dass bestimmte Güter temporär trotz erhöhter Preise stärker nachgefragt werden. Der sechste Artikel formalisiert den Storyline-Ansatz und präsentiert einen Leitfaden für die Erstellung von Storylines. Dieser basiert auf den Ergebnissen mehrerer Studien, die diesen Ansatz verfolgen, einschließlich der Storylines aus der vorliegenden Arbeit. Es werden insgesamt acht Elemente definiert, anhand derer eine Storyline-Studie erstellt werden kann. Insgesamt trägt diese Arbeit zu einem umfassenderen Verständnis der ökonomischen Auswirkungen von Wetterextremen bei. Hierfür werden lokale Auswirkungen von Extremen unter gegenwärtigen und zukünftigen klimatischen Bedingungen untersucht sowie wichtige ökonomische Mechanismen und Auswirkungen der resultierenden Schadensausbreitung aufgedeckt.
Neben diesen Erkenntnissen werden überdies Weiterentwicklungen der Methoden und Ansätze präsentiert, die weiterführende Analysen ermöglichen.
KW - Klimawandel
KW - Wetterextreme
KW - indirekte ökonomische Effekte
KW - makroökonomische Modellierung
KW - climate change
KW - weather extremes
KW - indirect economic impacts
KW - macro-economic modelling
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-611127
ER -

TY - THES
A1 - Schrape, Oliver
T1 - Methodology for standard cell-based design and implementation of reliable and robust hardware systems
T1 - Methoden für Standardzellbasiertes Design und Implementierung zuverlässiger und robuster Hardware Systeme
N2 - Reliable and robust data processing is one of the most demanding requirements for systems in fields such as medicine, security, automotive, aviation, and space, to prevent critical system failures caused by changes in operating or environmental conditions. In particular, Signal Integrity (SI) effects such as crosstalk may distort the signal information in sensitive mixed-signal designs. Radiation effects are a further challenge for hardware systems used in space: Single Event Effects (SEEs) induced by high-energy particle hits may lead to faulty computation, corrupted configuration settings, undesired system behavior, or even total malfunction. Since these applications require extra effort in design and implementation, it is beneficial to master the standard cell design process and corresponding design flow methodologies optimized for such challenges. Especially for reliable, low-noise differential signaling logic such as Current Mode Logic (CML), a digital design flow is an orthogonal approach compared to traditional manual design. As a consequence, mandatory preliminary considerations need to be addressed in more detail. First of all, standard cell library concepts with suitable cell extensions for reliable systems and robust space applications have to be elaborated. The resulting design concepts at the cell level should enable logic synthesis for differential logic design or improve radiation hardness. In parallel, the main objectives of the proposed cell architectures are to reduce the occupied area, power, and delay overhead. Second, a special setup for standard cell characterization is additionally required for proper and accurate logic gate modeling. Last but not least, design methodologies for mandatory design flow stages such as logic synthesis and place and route need to be developed for the respective hardware systems to keep the reliability or the radiation hardness at an acceptable level. This thesis proposes and investigates standard cell-based design methodologies and techniques for reliable and robust hardware systems implemented in a conventional semiconductor technology. The focus of this work is on reliable differential logic design and robust radiation-hardening-by-design circuits. The synergistic connections of the digital design flow stages are systematically addressed for these two types of hardware systems. In more detail, a library for differential logic is extended with single-ended pseudo-gates for intermediate design steps to support logic synthesis and layout generation with commercial Computer-Aided Design (CAD) tools. Special cell layouts are proposed to relax signal routing. A library set for space applications is similarly extended by novel Radiation-Hardening-by-Design (RHBD) Triple Modular Redundancy (TMR) cells, enabling single-fault correction.
Therein, additional optimized architectures for glitch filter cells, robust scannable and self-correcting flip-flops, and clock gates are proposed. The circuit concepts and the physical layout representation views of the differential logic gates and the RHBD cells are discussed. However, the quality of results of designs depends implicitly on the accuracy of the standard cell characterization, which is therefore examined for both cell types. The entire design flow is elaborated from the hardware design description to the layout representations. A 2-phase routing approach together with an intermediate design conversion step is proposed after the initial place and route stage for reliable, purely differential designs, whereas a special constraining for RHBD applications in a standard technology is presented. The digital design flow for differential logic design is successfully demonstrated on a reliable differential bipolar CML application. A balanced routing result for its differential signal pairs is obtained by the proposed 2-phase routing approach. Moreover, the elaborated standard cell concepts and design methodology for RHBD circuits are applied to the digital part of a 7.5-15.5 MSPS 14-bit Analog-to-Digital Converter (ADC) and a complex microcontroller architecture. The ADC is implemented in an unhardened standard semiconductor technology and successfully verified by electrical measurements. The overhead of the proposed hardening approach is additionally evaluated by design exploration of the microcontroller application. Furthermore, the first related measurement results of the novel RHBD-∆TMR flip-flops show a radiation tolerance up to threshold Linear Energy Transfer (LET) values of 46.1, 52.0, and 62.5 MeV cm² mg⁻¹ and savings in silicon area of 25-50 % for selected TMR standard cell candidates. In conclusion, the presented design concepts at the cell and library levels, as well as the design flow modifications, are adaptable and transferable to other technology nodes. In particular, the design of hybrid solutions with integrated reliable differential logic modules together with robust radiation-tolerant circuit parts is enabled by the standard cell concepts and design methods proposed in this work.
N2 - Eine zuverlässige und robuste Datenverarbeitung ist eine der wichtigsten Voraussetzungen für Systeme in Bereichen wie Medizin, Sicherheit, Automobilbau, Luft- und Raumfahrt, um kritische Systemausfälle zu verhindern, welche durch Änderungen der Betriebsbedingungen oder Umweltgegebenheiten hervorgerufen werden können. Insbesondere Signalintegritätseffekte (Signal Integrity (SI)) wie das Übersprechen und Überlagern von Signalen (crosstalk) können den Informationsgehalt in empfindlichen Mixed-Signal-Designs verzerren. Eine zusätzliche Herausforderung für Hardwaresysteme für Weltraumanwendungen ist die Strahlung. Resultierende Effekte, die durch hochenergetische Teilchentreffer ausgelöst werden (Single Event Effects (SEEs)), können zu fehlerhaften Berechnungen, beschädigten Konfigurationseinstellungen, unerwünschtem Systemverhalten oder sogar zu völliger Fehlfunktion führen. Da diese Anwendungen einen zusätzlichen Aufwand beim Entwurf und der Implementierung erfordern, ist es von Vorteil, über Standardzellenentwurfskonzepte und entsprechende Entwurfsablaufmethoden zu verfügen, die für genau solche Herausforderungen optimiert sind.
Insbesondere für zuverlässige, rauscharme differenzielle Logik, wie der Current Mode Logic (CML), ist ein digitaler Entwurfsablauf ein orthogonaler Ansatz im Vergleich zum traditionellen manuellen Entwurfskonzept. Infolgedessen müssen obligatorische Vorüberlegungen detaillierter behandelt werden. Zunächst sind Konzepte für Standardzellbibliotheken mit geeigneten Zellerweiterungen für zuverlässige Systeme und robuste Raumfahrtanwendungen zu erarbeiten. Daraus resultierende Entwurfskonzepte auf Zellebene sollten die logische Synthese für den differenziellen Logikentwurf ermöglichen oder die Strahlungshärte eines Designs verbessern. Parallel dazu sind die Hauptziele der vorgeschlagenen Zellarchitekturen, die genutzte Siliziumfläche und die Verlustleistung zu verringern sowie den Verzögerungs-Overhead zu minimieren. Zweitens ist ein spezieller Aufbau für die Charakterisierung von Standardzellen erforderlich, um eine angemessene und genaue Modellierung der Logikgatter zu ermöglichen. Nicht zuletzt müssen für die jeweiligen Hardwaresysteme Methoden für die Entwurfsphasen wie Logik-Synthese und das Platzieren und Routen (Place and Route (PnR)) entwickelt werden, um die Zuverlässigkeit beziehungsweise die Strahlungshärte auf einem akzeptablen Niveau zu halten. In dieser Arbeit werden standardzellbasierte Entwurfsmethoden und -techniken für zuverlässige und robuste Hardwaresysteme vorgeschlagen und untersucht, welche in einer herkömmlichen Halbleitertechnologie implementiert werden. Dabei werden zuverlässige differenzielle Logikschaltungen und robuste strahlungsgehärtete Schaltungen betrachtet. Die synergetischen Verbindungen des digitalen Entwurfs werden systematisch für diese beiden Hardwaresysteme behandelt. Im Detail wird eine Bibliothek für differentielle Logik um Single-Ended-Pseudo-Gatter für Zwischenschritte erweitert, die die Logiksynthese und Layout-Generierung mit heutigen Entwicklungswerkzeugen unterstützen. Ein spezieller Rahmen für das Layout der Zellen wird vorgeschlagen, um das Routing der Signale zu vereinfachen. Die Bibliothek für Raumfahrtanwendungen wird in ähnlicher Weise um neuartige Radiation-Hardening-by-Design-Zellen (RHBD) mit dreifacher modularer Redundanz (Triple Modular Redundancy, TMR) erweitert, welche eine 1-Bit-Fehlerkorrektur erlauben. Zusätzlich werden optimierte Architekturen für Glitch-Filterzellen, robuste abtastbare (scannable) und selbstkorrigierende Flip-Flops und Taktgatter (clock gates) vorgeschlagen. Die Schaltungskonzepte sowie die physischen Layout-Repräsentationen der differentiellen Logikgatter und der vorgeschlagenen RHBD-Zellen werden diskutiert. Die Qualität der Ergebnisse der Entwürfe hängt jedoch implizit von der Genauigkeit der Standardzellencharakterisierung ab, die daher für beide Typen untersucht wird. Der gesamte Entwurfsablauf wird von der Entwurfsbeschreibung der Hardware bis hin zur generierten Layout-Darstellung ausgearbeitet. Infolgedessen wird für zuverlässige, differentielle Designs ein 2-Phasen-Routing-Ansatz zusammen mit einem zwischengeschalteten Design-Konvertierungsschritt nach der initialen PnR-Phase vorgeschlagen, während für RHBD-Anwendungen in einer Standardtechnologie ein spezielles Constraining vorgestellt wird. Der digitale Entwurfsablauf für Differenziallogik wird erfolgreich an einer zuverlässigen bipolaren Differenzial-CML-Anwendung demonstriert. Durch den 2-Phasen-Routing-Ansatz wird ein ausgewogenes Routing-Ergebnis differentieller Signalpaare erzielt.
Darüber hinaus werden die erarbeiteten Standardzellenkonzepte und die Entwurfsmethodik für RHBD-Schaltungen auf den digitalen Teil eines 7.5-15.5 MSPS 14-Bit-Analog-Digital-Wandlers (ADC) und einer komplexen Mikrocontroller-Architektur angewandt. Der ADC wurde in einer nicht gehärteten Standard-Halbleitertechnologie implementiert und erfolgreich durch elektrische Messungen verifiziert. Der Mehraufwand des Härtungsansatzes wird zusätzlich durch Design Exploration der Mikrocontroller-Anwendung bewertet. Ferner zeigen erste Messergebnisse der neuartigen RHBD-ΔTMR-Flip-Flops eine Strahlungstoleranz bis zu einem Schwellwert des linearen Energietransfers (Linear Energy Transfer, LET) von 46.1, 52.0 und 62.5 MeV cm² mg⁻¹ sowie eine Einsparung an Siliziumfläche von 25-50 % für ausgewählte TMR-Standardzellenkandidaten. Die vorgestellten Entwurfskonzepte auf Zell- und Bibliotheksebene sowie die Änderungen des Entwurfsablaufs sind anpassbar und auf andere Technologieknoten übertragbar. Insbesondere der Entwurf hybrider Lösungen mit integrierten zuverlässigen differenziellen Logikmodulen zusammen mit robusten strahlungstoleranten Schaltungsteilen wird durch die in dieser Arbeit vorgeschlagenen Konzepte und Entwurfsmethoden ermöglicht.
KW - hardware design
KW - ASIC
KW - radiation hardness
KW - digital design
KW - ASIC (Applikationsspezifische Integrierte Schaltkreise)
KW - Digital Design
KW - Hardware Design
KW - Strahlungshartes Design
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-589326
ER -

TY - THES
A1 - Chen, Junchao
T1 - A self-adaptive resilient method for implementing and managing the high-reliability processing system
T1 - Eine selbstadaptive belastbare Methode zum Implementieren und Verwalten von hochzuverlässigen Verarbeitungssystemen
N2 - As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits have become a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults, resulting in data corruption or even system failure. Typically, SEE mitigation methods are deployed statically in processing architectures based on the worst-case radiation conditions, which is unnecessary most of the time and results in a resource overhead. Moreover, the space radiation conditions are dynamically changing, especially during Solar Particle Events (SPEs). The intensity of space radiation can differ by over five orders of magnitude within a few hours or days, resulting in a fault probability variation of several orders of magnitude in ICs during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault-resilient multiprocessing system to overcome the static mitigation overhead issue. This work mainly addresses the following topics: (1) Design of an on-chip radiation particle monitor for real-time radiation environment detection, (2) Investigation of a space environment predictor as support for solar particle event forecasts, (3) Dynamic mode configuration in the resilient multiprocessing system. Therefore, according to detected and predicted in-flight space radiation conditions, the target system can be configured to use no mitigation or low-overhead mitigation during non-critical periods of time. The redundant resources can be used to improve system performance or save power.
On the other hand, during periods of increased radiation activity, such as SPEs, the mitigation methods can be dynamically configured appropriately depending on the real-time space radiation environment, resulting in higher system reliability. Thus, a dynamic trade-off between reliability, performance and power consumption can be achieved in the target system in real time. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during runtime. The proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process. The successful implementation of the proposed design in the quad-core multiprocessor shows its application perspective also for other designs.
N2 - Infolge der CMOS-Skalierung wurden strahleninduzierte Einzelereignis-Effekte (SEEs) in elektronischen Schaltungen zu einem kritischen Zuverlässigkeitsproblem für moderne integrierte Schaltungen (ICs), die unter rauen Strahlungsbedingungen arbeiten. SEEs können in der kombinatorischen oder sequentiellen Logik durch den Aufprall hochenergetischer Teilchen ausgelöst werden, was zu destruktiven oder nicht-destruktiven Fehlern und damit zu Datenverfälschungen oder sogar Systemausfällen führt. Normalerweise werden die Methoden zur Abschwächung von SEEs statisch in Verarbeitungsarchitekturen auf der Grundlage der ungünstigsten Strahlungsbedingungen eingesetzt, was in den meisten Fällen unnötig ist und zu einem Ressourcen-Overhead führt. Darüber hinaus ändern sich die Strahlungsbedingungen im Weltraum dynamisch, insbesondere während Solar Particle Events (SPEs). Die Intensität der Weltraumstrahlung kann sich innerhalb weniger Stunden oder Tage um mehr als fünf Größenordnungen ändern, was zu einer Variation der Fehlerwahrscheinlichkeit in ICs während SPEs um mehrere Größenordnungen führt. In dieser Arbeit wird ein umfassender Ansatz für den Entwurf eines selbstanpassenden, fehlerresistenten Multiprozessorsystems vorgestellt, um das Problem des statischen Mitigation-Overheads zu überwinden. Diese Arbeit befasst sich hauptsächlich mit den folgenden Themen: (1) Entwurf eines On-Chip-Strahlungsteilchen-Monitors zur Echtzeit-Erkennung der Strahlungsumgebung, (2) Untersuchung von Weltraumumgebungsprognosen zur Unterstützung der Vorhersage von solaren Teilchenereignissen, (3) Konfiguration des dynamischen Modus in einem belastbaren Multiprozessorsystem. Daher kann das Zielsystem je nach den erkannten und vorhergesagten Strahlungsbedingungen während des Fluges so konfiguriert werden, dass es während unkritischer Zeiträume keine oder nur eine geringe Strahlungsminderung vornimmt. Die redundanten Ressourcen können genutzt werden, um die Systemleistung zu verbessern oder Energie zu sparen. In Zeiten erhöhter Strahlungsaktivität, wie z. B. während SPEs, können die Abschwächungsmethoden dynamisch und in Abhängigkeit von der Echtzeit-Strahlungsumgebung im Weltraum konfiguriert werden, was zu einer höheren Systemzuverlässigkeit führt. Auf diese Weise kann im Zielsystem ein dynamischer Kompromiss zwischen Zuverlässigkeit, Leistung und Stromverbrauch in Echtzeit erreicht werden. Alle Ergebnisse dieser Arbeit wurden in einem hochzuverlässigen Quad-Core-Multiprozessorsystem evaluiert, das die selbstanpassende Einstellung optimaler Strahlungsschutzmechanismen während der Laufzeit ermöglicht.
Die vorgeschlagenen Methoden können als Grundlage für die Entwicklung eines umfassenden, selbstanpassenden und belastbaren Systementwurfsprozesses dienen. Die erfolgreiche Implementierung des vorgeschlagenen Entwurfs in einem Quad-Core-Multiprozessor zeigt, dass er auch für andere Entwürfe geeignet ist.
KW - single event upset
KW - solar particle event
KW - machine learning
KW - self-adaptive multiprocessing system
KW - maschinelles Lernen
KW - selbstanpassendes Multiprozessorsystem
KW - strahleninduzierte Einzelereignis-Effekte
KW - Sonnenteilchen-Ereignis
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-583139
ER -

TY - JOUR
A1 - Prasse, Paul
A1 - Iversen, Pascal
A1 - Lienhard, Matthias
A1 - Thedinga, Kristina
A1 - Herwig, Ralf
A1 - Scheffer, Tobias
T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction
JF - Cancers
N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves the models’ accuracy and the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we can conclude that pre-trained models outperform models that have been trained on the target domains in the vast majority of cases.
KW - deep neural networks
KW - drug-sensitivity prediction
KW - anti-cancer drugs
Y1 - 2022
U6 - https://doi.org/10.3390/cancers14163950
SN - 2072-6694
VL - 14
IS - 16
SP - 1
EP - 14
PB - MDPI
CY - Basel, Switzerland
ER -

TY - JOUR
A1 - Hecher, Markus
T1 - Treewidth-aware reductions of normal ASP to SAT
BT - is normal ASP harder than SAT after all?
JF - Artificial intelligence
N2 - Answer Set Programming (ASP) is a paradigm for modeling and solving problems for knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there was an increase in the number of treewidth-based solvers related to SAT. While there are translations from (normal) ASP to SAT, no reduction is known that preserves treewidth or at least keeps track of the treewidth increase.
In this paper, we propose a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, where we compare treewidth upper bounds that are obtained via known decomposition heuristics. Overall, our reduction works better with these heuristics than existing translations.
KW - Answer set programming
KW - Treewidth
KW - Parameterized complexity
KW - Complexity analysis
KW - Tree decomposition
KW - Treewidth-aware reductions
Y1 - 2022
U6 - https://doi.org/10.1016/j.artint.2021.103651
SN - 0004-3702
SN - 1872-7921
VL - 304
PB - Elsevier
CY - Amsterdam
ER -

TY - JOUR
A1 - Al Laban, Firas
A1 - Reger, Martin
A1 - Lucke, Ulrike
T1 - Closing the Policy Gap in the Academic Bridge
JF - Education sciences
N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
KW - policy evaluation
KW - higher education
KW - virtual mobility
KW - teacher training
Y1 - 2022
U6 - https://doi.org/10.3390/educsci12120930
SN - 2227-7102
VL - 12
IS - 12
PB - MDPI
CY - Basel
ER -

TY - THES
A1 - Böken, Björn
T1 - Improving prediction accuracy using dynamic information
N2 - Accurately solving classification problems is nowadays likely to be the most relevant machine learning task. Binary classification, separating only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished.
On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions. This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and in particular corrects wrong statements that appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets. Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the involved fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations to be proven for the overall combinations. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information. Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied to 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant, because a naive application is computationally intractable.
Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, the differences between the approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in the respective applications, which requires an appropriate digital infrastructure.
N2 - Klassifikationsprobleme akkurat zu lösen ist heutzutage wahrscheinlich die relevanteste Machine-Learning-Aufgabe. Binäre Klassifikation zur Unterscheidung von nur zwei Klassen ist algorithmisch einfacher, hat aber weniger potenzielle Anwendungen, da in der Praxis oft Mehrklassenprobleme auftreten. Demgegenüber vereinfacht die Unterscheidung nur innerhalb einer Untermenge von Klassen die Problemstellung. Obwohl viele existierende Machine-Learning-Algorithmen sehr flexibel mit Blick auf die Anzahl der Klassen sind, setzen sie voraus, dass die Zielmenge Y fest ist und nicht mehr eingeschränkt werden kann, sobald das Training abgeschlossen ist. Allerdings sind moderne Produktionsumgebungen mit dem Voranschreiten von Industrie 4.0 und entsprechenden Technologien zunehmend digital verbunden, sodass zusätzliche Informationen die entsprechenden Klassifikationsprobleme vereinfachen können. Vor diesem Hintergrund ist das Hauptziel dieser Arbeit, dynamische Klassifikation als Verallgemeinerung von Mehrklassen-Klassifikation einzuführen, bei der die Zielmenge jederzeit zwischen zwei aufeinanderfolgenden Vorhersagen zu einer beliebigen, nicht leeren Teilmenge eingeschränkt werden kann. Diese Aufgabe wird durch die Kombination von zwei algorithmischen Ansätzen gelöst. Zunächst wird Klassifikator-Kalibrierung eingesetzt, mittels der Vorhersagen in Schätzungen der A-Posteriori-Wahrscheinlichkeiten transformiert werden, die gut kalibriert sein sollen. Die durchgeführte Analyse zielt auf monotone Kalibrierung ab und korrigiert insbesondere Falschaussagen, die in Referenzarbeiten veröffentlicht wurden. Außerdem zeigt sie, dass Bin-basierte Fehlermaße, die in den letzten Jahren populär geworden sind, ungerechtfertigt sind und nicht verwendet werden sollten. Weiterhin wird die Validität von Platt Scaling, dem relevantesten parametrischen Kalibrierungsverfahren, genau analysiert. Insbesondere wird seine Optimalität für Klassifikatorvorhersagen, die gemäß vier Familien von Verteilungsfunktionen verteilt sind, sowie die Äquivalenz zu Beta-Kalibrierung bis auf eine sigmoidale Vorverarbeitung gezeigt. Für nicht monotone Kalibrierung werden erweiterte Varianten der Kerndichteschätzung und die Ensemblemethode EKDE eingeführt. Schließlich werden die Kalibrierungsverfahren im Rahmen einer Simulationsstudie mit vollständiger Information sowie auf 46 Referenzdatensätzen ausgewertet. Hierauf aufbauend wird Klassifikator-Kalibrierung als Teil von reduktionsbasierter Klassifikation eingesetzt, die zum Ziel hat, Mehrklassenprobleme auf einfachere (üblicherweise binäre) Entscheidungsprobleme zu reduzieren. Für den zugehörigen, während der Vorhersage notwendigen Fusionsschritt wird ein neuer, auf Evidenztheorie basierender Ansatz eingeführt, der Klassifikator-Kalibrierung zur Modellierung von Massefunktionen nutzt.
Dies ermöglicht es, reduktionsbasierte Klassifikation in einem formalen Kontext zu analysieren sowie geschlossene Ausdrücke für die entsprechenden Gesamtkombinationen zu beweisen. Zusätzlich führt derselbe Formalismus zu einer konsistenten Integration von dynamischen Klasseninformationen, sodass sich ein theoretisch fundiertes und effizient zu berechnendes, dynamisches Klassifikationsmodell ergibt. Die hierbei gewonnenen Einsichten werden mit Pairwise Coupling, einem der relevantesten Verfahren für reduktionsbasierte Klassifikation, verbunden, wobei alle individuellen Vorhersagen mit einer Gewichtung kombiniert werden. Dies verallgemeinert nicht nur existierende Ansätze für Pairwise Coupling, sondern führt darüber hinaus auch zu einer Integration von dynamischen Klasseninformationen. Abschließend wird eine umfangreiche empirische Studie durchgeführt, die alle neu eingeführten Verfahren mit denen aus dem Stand der Forschung vergleicht. Hierfür werden Bewertungsfunktionen für dynamische Klassifikation eingeführt, die auf Sampling-Strategien basieren. Anschließend werden diese im Rahmen einer dreiteiligen Studie angewendet. Zunächst werden Support Vector Machines und Random Forests auf 26 Referenzdatensätze aus dem UCI Machine Learning Repository angewendet. Im zweiten Teil werden zwei moderne, tiefe neuronale Netze auf fünf Referenzdatensätzen aus einer relativ aktuellen Referenzarbeit ausgewertet. Hierbei sind insbesondere Strategien relevant, die die Anwendung der eingeführten Verfahren in Verbindung mit großen Modellen ermöglichen, da eine naive Vorgehensweise nicht durchführbar ist. Schließlich wird ein Referenzdatensatz aus einem Produktionsprozess gewonnen, der die Integration von dynamischen Klasseninformationen ermöglicht, und ausgewertet. Die Ergebnisse zeigen, dass Pairwise-Coupling-Verfahren in Verbindung mit Support Vector Machines und Random Forests die besten Ergebnisse liefern, während in Verbindung mit tiefen neuronalen Netzen die Unterschiede zwischen den Verfahren oft klein bis vernachlässigbar sind. Am wichtigsten ist, dass alle Ergebnisse zeigen, dass dynamische Klassifikation die entsprechenden Erkennungsgenauigkeiten verbessert. Daher ist es entscheidend, dynamische Klasseninformationen in den entsprechenden Anwendungen zur Verfügung zu stellen, was eine entsprechende digitale Infrastruktur erfordert.
KW - dynamic classification
KW - multi-class classification
KW - classifier calibration
KW - evidence theory
KW - Dempster–Shafer theory
KW - Deep Learning
KW - Deep Learning
KW - Dempster-Shafer-Theorie
KW - Klassifikator-Kalibrierung
KW - dynamische Klassifikation
KW - Evidenztheorie
KW - Mehrklassen-Klassifikation
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-585125
ER -

TY - GEN
A1 - Prasse, Paul
A1 - Iversen, Pascal
A1 - Lienhard, Matthias
A1 - Thedinga, Kristina
A1 - Herwig, Ralf
A1 - Scheffer, Tobias
T1 - Pre-Training on In Vitro and Fine-Tuning on Patient-Derived Data Improves Deep Neural Networks for Anti-Cancer Drug-Sensitivity Prediction
T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe
N2 - Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have devolved from human cancer cells over years or even decades under selective pressure in culture conditions.
Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data that are based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities that are needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves the models’ accuracy and the biological plausibility of the features compared to training only on patient-derived data. From our experiments, we can conclude that pre-trained models outperform models that have been trained on the target domains in the vast majority of cases. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1300 KW - deep neural networks KW - drug-sensitivity prediction KW - anti-cancer drugs Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-577341 SN - 1866-8372 SP - 1 EP - 14 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - GEN A1 - Al Laban, Firas A1 - Reger, Martin A1 - Lucke, Ulrike T1 - Closing the Policy Gap in the Academic Bridge T2 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations. Second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the used instruments along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1310 KW - policy evaluation KW - higher education KW - virtual mobility KW - teacher training Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-583572 SN - 1866-8372 IS - 1310 ER - TY - JOUR A1 - Lorenz, Claas A1 - Clemens, Vera Elisabeth A1 - Schrötter, Max A1 - Schnor, Bettina T1 - Continuous verification of network security compliance JF - IEEE transactions on network and service management N2 - Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice.
However, the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification tools leveraging smart, very fast, and domain-specialized data structures and algorithms including Header Space Analysis (HSA). First, we introduce the formal language FPL that enables a high-level human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets where it outscales state-of-the-art tools by a factor of over 41. KW - Security KW - Tools KW - Network security KW - Engines KW - Benchmark testing KW - Analytical models KW - Scalability KW - Network KW - security KW - compliance KW - formal KW - verification Y1 - 2021 U6 - https://doi.org/10.1109/TNSM.2021.3130290 SN - 1932-4537 VL - 19 IS - 2 SP - 1729 EP - 1745 PB - Institute of Electrical and Electronics Engineers CY - New York ER - TY - JOUR A1 - Prasse, Paul A1 - Iversen, Pascal A1 - Lienhard, Matthias A1 - Thedinga, Kristina A1 - Bauer, Christopher A1 - Herwig, Ralf A1 - Scheffer, Tobias T1 - Matching anticancer compounds and tumor cell lines by neural networks with ranking loss JF - NAR: genomics and bioinformatics N2 - Computational drug sensitivity models have the potential to improve therapeutic outcomes by identifying targeted drug components that are likely to achieve the highest efficacy for a cancer cell line at hand at a therapeutic dose. State-of-the-art drug sensitivity models use regression techniques to predict the inhibitory concentration of a drug for a tumor cell line. This regression objective is not directly aligned with the principal goals of drug sensitivity models: We argue that drug sensitivity modeling should be seen as a ranking problem with an optimization criterion that quantifies a drug's inhibitory capacity for the cancer cell line at hand relative to its toxicity for healthy cells. We derive an extension to the well-established drug sensitivity regression model PaccMann that employs a ranking loss and focuses on the ratio of inhibitory concentration and therapeutic dosage range. We find that the ranking extension significantly enhances the model's capability to identify the most effective anticancer drugs for unseen tumor cell profiles based on in-vitro data. Y1 - 2022 U6 - https://doi.org/10.1093/nargab/lqab128 SN - 2631-9268 VL - 4 IS - 1 PB - Oxford Univ. Press
CY - Oxford ER - TY - JOUR A1 - Steinert, Fritjof A1 - Stabernack, Benno T1 - Architecture of a low latency H.264/AVC video codec for robust ML based image classification BT - how region of interests can minimize the impact of coding artifacts JF - Journal of Signal Processing Systems for Signal, Image, and Video Technology N2 - The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose, which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since a corresponding training was also carried out using image data of similar high quality. However, if image data contains image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts which occur due to image and video compression. Typical application scenarios for video compression are narrowband transmission channels for which video coding is required but a subsequent classification is to be carried out on the receiver side. In this paper we present a special H.264/Advanced Video Codec (AVC) based video codec that allows certain regions of a picture to be coded with near constant picture quality in order to allow a reliable classification using neural networks, whereas the remaining image will be coded using a constant bit rate. We have combined this feature with the ability to run with very low latency, which is usually also required in remote control application scenarios. The codec has been implemented as a fully hardwired High Definition video capable hardware architecture which is suitable for Field Programmable Gate Arrays. KW - H.264 KW - Advanced Video Codec (AVC) KW - Low Latency KW - Region of Interest KW - Machine Learning KW - Inference KW - FPGA KW - Hardware accelerator Y1 - 2022 U6 - https://doi.org/10.1007/s11265-021-01727-2 SN - 1939-8018 SN - 1939-8115 VL - 94 IS - 7 SP - 693 EP - 708 PB - Springer CY - New York ER - TY - JOUR A1 - Marco Figuera, Ramiro A1 - Riedel, Christian A1 - Rossi, Angelo Pio A1 - Unnithan, Vikram T1 - Depth to diameter analysis on small simple craters at the lunar south pole - possible implications for ice harboring JF - Remote sensing N2 - In this paper, we present a study comparing the depth to diameter (d/D) ratio of small simple craters (200-1000 m) of an area between -88.5 degrees and -90 degrees latitude at the lunar south pole containing Permanent Shadowed Regions (PSRs) versus craters without PSRs. As PSRs can reach temperatures of 110 K and are capable of harboring volatiles, especially water ice, we analyzed the relationship of depth versus diameter ratios and its possible implications for harboring water ice. Variations in the d/D ratios can also be caused by other processes such as degradation, isostatic adjustment, or differences in surface properties. The conducted d/D ratio analysis suggests that craters containing PSRs can be differentiated from craters without PSRs. Thus, a possible direct relation between d/D ratio, PSRs, and water ice harboring might exist. Differences in the target's surface properties may explain the observed differentiation. The resulting d/D ratios of craters with PSRs can help to select target areas for future In-Situ Resource Utilization (ISRU) missions.
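As an aside on the comparison just summarized: a minimal sketch of a d/D group comparison, with hypothetical crater measurements standing in for the study's DEM-derived data; the Mann-Whitney U test from scipy is one plausible choice here, not necessarily the authors' statistical procedure.

    # Hedged sketch: compare depth-to-diameter (d/D) ratios of craters with and
    # without PSRs; all numbers below are made-up placeholders.
    import numpy as np
    from scipy.stats import mannwhitneyu

    # Hypothetical (depth_m, diameter_m) pairs for small simple craters (200-1000 m).
    psr_craters = np.array([[45.0, 420.0], [60.0, 510.0], [38.0, 350.0], [72.0, 640.0]])
    non_psr_craters = np.array([[55.0, 400.0], [80.0, 520.0], [66.0, 380.0], [95.0, 650.0]])

    d_over_D_psr = psr_craters[:, 0] / psr_craters[:, 1]
    d_over_D_non = non_psr_craters[:, 0] / non_psr_craters[:, 1]

    # Nonparametric test for a difference between the two d/D populations.
    stat, p = mannwhitneyu(d_over_D_psr, d_over_D_non, alternative="two-sided")
    print(f"median d/D with PSRs:    {np.median(d_over_D_psr):.3f}")
    print(f"median d/D without PSRs: {np.median(d_over_D_non):.3f}")
    print(f"Mann-Whitney U p-value:  {p:.3f}")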
KW - craters KW - lunar exploration KW - ice harboring Y1 - 2022 U6 - https://doi.org/10.3390/rs14030450 SN - 2072-4292 VL - 14 IS - 3 PB - MDPI CY - Basel ER - TY - JOUR A1 - Abdelwahab, Ahmed A1 - Landwehr, Niels T1 - Deep Distributional Sequence Embeddings Based on a Wasserstein Loss JF - Neural processing letters N2 - Deep metric learning employs deep neural networks to embed instances into a metric space such that distances between instances of the same class are small and distances between instances from different classes are large. In most existing deep metric learning techniques, the embedding of an instance is given by a feature vector produced by a deep neural network, and Euclidean distance or cosine similarity defines distances between these vectors. This paper studies deep distributional embeddings of sequences, where the embedding of a sequence is given by the distribution of learned deep features across the sequence. The motivation for this is to better capture statistical information about the distribution of patterns within the sequence in the embedding. When embeddings are distributions rather than vectors, measuring distances between embeddings involves comparing their respective distributions. The paper therefore proposes a distance metric based on Wasserstein distances between the distributions and a corresponding loss function for metric learning, which leads to a novel end-to-end trainable embedding model. We empirically observe that distributional embeddings outperform standard vector embeddings and that training with the proposed Wasserstein metric outperforms training with other distance functions. KW - Metric learning KW - Sequence embeddings KW - Deep learning Y1 - 2022 U6 - https://doi.org/10.1007/s11063-022-10784-y SN - 1370-4621 SN - 1573-773X PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Tran, Son Cao A1 - Pontelli, Enrico A1 - Balduccini, Marcello A1 - Schaub, Torsten T1 - Answer set planning BT - a survey JF - Theory and practice of logic programming N2 - Answer Set Planning refers to the use of Answer Set Programming (ASP) to compute plans, that is, solutions to planning problems, that transform a given state of the world to another state. The development of efficient and scalable answer set solvers has provided a significant boost to the development of ASP-based planning systems. This paper surveys the progress made during the last two and a half decades in the area of answer set planning, from its foundations to its use in challenging planning domains. The survey explores the advantages and disadvantages of answer set planning. It also discusses typical applications of answer set planning and presents a set of challenges for future research. KW - planning KW - knowledge representation and reasoning KW - logic programming Y1 - 2022 U6 - https://doi.org/10.1017/S1471068422000072 SN - 1471-0684 SN - 1475-3081 PB - Cambridge University Press CY - New York ER - TY - JOUR A1 - Michallek, Florian A1 - Genske, Ulrich A1 - Niehues, Stefan Markus A1 - Hamm, Bernd A1 - Jahnke, Paul T1 - Deep learning reconstruction improves radiomics feature stability and discriminative power in abdominal CT imaging BT - a phantom study JF - European Radiology N2 - Objectives To compare image quality of deep learning reconstruction (AiCE) for radiomics feature extraction with filtered back projection (FBP), hybrid iterative reconstruction (AIDR 3D), and model-based iterative reconstruction (FIRST).
Methods Effects of image reconstruction on radiomics features were investigated using a phantom that realistically mimicked a 65-year-old patient's abdomen with hepatic metastases. The phantom was scanned at 18 doses from 0.2 to 4 mGy, with 20 repeated scans per dose. Images were reconstructed with FBP, AIDR 3D, FIRST, and AiCE. Ninety-three radiomics features were extracted from 24 regions of interest, which were evenly distributed across three tissue classes: normal liver, metastatic core, and metastatic rim. Features were analyzed in terms of their consistent characterization of tissues within the same image (intraclass correlation coefficient >= 0.75), discriminative power (Kruskal-Wallis test p value < 0.05), and repeatability (overall concordance correlation coefficient >= 0.75). Results The median fraction of consistent features across all doses was 6%, 8%, 6%, and 22% with FBP, AIDR 3D, FIRST, and AiCE, respectively. Adequate discriminative power was achieved by 48%, 82%, 84%, and 92% of features, and 52%, 20%, 17%, and 39% of features were repeatable, respectively. Only 5% of features combined consistency, discriminative power, and repeatability with FBP, AIDR 3D, and FIRST versus 13% with AiCE at doses above 1 mGy and 17% at doses >= 3 mGy. AiCE was the only reconstruction technique that enabled extraction of higher-order features. Conclusions AiCE more than doubled the yield of radiomics features at doses typically used clinically. Inconsistent tissue characterization within CT images contributes significantly to the poor stability of radiomics features. KW - Tomography KW - X-ray computed KW - Phantoms KW - imaging KW - Liver neoplasms KW - Algorithms KW - Reproducibility of results Y1 - 2022 U6 - https://doi.org/10.1007/s00330-022-08592-y SN - 1432-1084 VL - 32 IS - 7 SP - 4587 EP - 4595 PB - Springer CY - New York ER - TY - JOUR A1 - Bandyopadhyay, Soumyadip A1 - Sarkar, Dipankar A1 - Mandal, Chittaranjan A1 - Giese, Holger T1 - Translation validation of coloured Petri net models of programs on integers JF - Acta informatica N2 - Programs are often subjected to significant optimizing and parallelizing transformations based on extensive dependence analysis. Formal validation of such transformations needs modelling paradigms which can capture both control and data dependences in the program vividly. Being value-based with an inherent scope of capturing parallelism, the untimed coloured Petri net (CPN) models, reported in the literature, fit the bill well; accordingly, they are likely to be more convenient as the intermediate representations (IRs) of both the source and the transformed codes for translation validation than strictly sequential variable-based IRs like sequential control flow graphs (CFGs). In this work, an efficient path-based equivalence checking method for CPN models of programs on integers is presented. Extensive experimentation has been carried out on several sequential and parallel examples. Complexity and correctness issues have been treated rigorously for the method. 
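The proof obligation at the heart of such path-based equivalence checking, that corresponding paths of the source and the transformed program compute equal values, can be illustrated with an off-the-shelf SMT solver. A minimal sketch using the Z3 Python bindings follows; it is a stand-in for the underlying idea only, not the paper's CPN-based method.

    # Two integer-valued code paths are equivalent iff no input makes them differ.
    from z3 import Ints, Solver, unsat

    x, y = Ints("x y")

    path_original = (x + y) * (x + y)             # path in the source program
    path_transformed = x * x + 2 * x * y + y * y  # path in the transformed program

    s = Solver()
    s.add(path_original != path_transformed)  # search for a counterexample
    if s.check() == unsat:
        print("paths are equivalent for all integer inputs")
    else:
        print("counterexample:", s.model())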
Y1 - 2022 U6 - https://doi.org/10.1007/s00236-022-00419-z SN - 0001-5903 SN - 1432-0525 VL - 59 IS - 6 SP - 725 EP - 759 PB - Springer CY - New York ER - TY - JOUR A1 - Chen, Junchao A1 - Lange, Thomas A1 - Andjelkovic, Marko A1 - Simevski, Aleksandar A1 - Lu, Li A1 - Krstić, Miloš T1 - Solar particle event and single event upset prediction from SRAM-based monitor and supervised machine learning JF - IEEE transactions on emerging topics in computing / IEEE Computer Society, Institute of Electrical and Electronics Engineers N2 - The intensity of cosmic radiation may differ by over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), thus increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. Therefore, it is vital to enable the early detection of SEU rate changes in order to ensure timely activation of dynamic radiation hardening measures. In this paper, an embedded approach for the prediction of SPEs and the SRAM SEU rate is presented. The proposed solution combines the real-time SRAM-based SEU monitor, the offline-trained machine learning model and an online learning algorithm for the prediction. With respect to the state of the art, our solution brings the following benefits: (1) Use of existing on-chip data storage SRAM as a particle detector, thus minimizing the hardware and power overhead, (2) Prediction of the SRAM SEU rate one hour in advance, with the fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions, (3) Online optimization of the prediction model for enhancing the prediction accuracy during run-time, (4) Negligible cost of hardware accelerator design for the implementation of the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable and self-adaptive multiprocessing system employed in space applications, allowing the radiation mitigation mechanisms to be triggered before the onset of high radiation levels. KW - Machine learning KW - Single event upsets KW - Random access memory KW - monitoring KW - machine learning algorithms KW - predictive models KW - space missions KW - solar particle event KW - single event upset KW - machine learning KW - online learning KW - hardware accelerator KW - reliability KW - self-adaptive multiprocessing system Y1 - 2022 U6 - https://doi.org/10.1109/TETC.2022.3147376 SN - 2168-6750 VL - 10 IS - 2 SP - 564 EP - 580 PB - Institute of Electrical and Electronics Engineers CY - [New York, NY] ER - TY - JOUR A1 - Breitenreiter, Anselm A1 - Andjelković, Marko A1 - Schrape, Oliver A1 - Krstić, Miloš T1 - Fast error propagation probability estimates by answer set programming and approximate model counting JF - IEEE Access N2 - We present a method employing Answer Set Programming in combination with Approximate Model Counting for fast and accurate calculation of error propagation probabilities in digital circuits. By an efficient problem encoding, we achieve an input data format similar to a Verilog netlist so that extensive preprocessing is avoided. By a tight interconnection of our application with the underlying solver, we avoid iterating over fault sites and reduce calls to the solver. Several circuits were analyzed with varying numbers of considered cycles and different degrees of approximation. Our experiments show that approximation can reduce the runtime by a factor of 91, while the error compared to the exact result remains below 1%.
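To make the estimated quantity concrete: the error propagation probability of a fault site is the fraction of input assignments for which flipping that signal changes the primary output. A toy sketch with exact enumeration follows; the paper instead scales this computation via an ASP encoding with approximate model counting, and the three-gate circuit below is invented purely for illustration.

    from itertools import product

    def circuit(a, b, c, flip_g1=False):
        g1 = a & b              # internal gate, the considered fault site
        if flip_g1:
            g1 ^= 1             # inject a single bit-flip at g1
        g2 = g1 | c
        return g2 ^ a           # primary output

    inputs = list(product((0, 1), repeat=3))
    propagated = sum(circuit(*v) != circuit(*v, flip_g1=True) for v in inputs)
    print(f"error propagation probability of g1: {propagated / len(inputs):.3f}")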
KW - Circuit faults KW - Integrated circuit modeling KW - Programming KW - Analytical models KW - Search problems KW - Flip-flops KW - Encoding KW - Answer set programming KW - approximate model counting KW - error propagation KW - radhard design KW - reliability analysis KW - selective fault tolerance KW - single event upsets Y1 - 2022 U6 - https://doi.org/10.1109/ACCESS.2022.3174564 SN - 2169-3536 VL - 10 SP - 51814 EP - 51825 PB - Inst. of Electr. and Electronics Engineers CY - Piscataway ER - TY - JOUR A1 - Andjelkovic, Marko A1 - Simevski, Aleksandar A1 - Chen, Junchao A1 - Schrape, Oliver A1 - Stamenkovic, Zoran A1 - Krstić, Miloš A1 - Ilic, Stefan A1 - Ristic, Goran A1 - Jaksic, Aleksandar A1 - Vasovic, Nikola A1 - Duane, Russell A1 - Palma, Alberto J. A1 - Lallena, Antonio M. A1 - Carvajal, Miguel A. T1 - A design concept for radiation hardened RADFET readout system for space applications JF - Microprocessors and microsystems N2 - Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and processing electronics for computing the absorbed dose and dose rate. Among a wide range of existing radiation sensors, the Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement, and a proven record of successful exploitation in space missions. It has been shown that the RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as an absorbed dose and dose rate monitor. This reduces the cost of implementation, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing the autonomous switching between three operating modes (high-performance, de-stress and fault-tolerant), according to the application requirements and radiation conditions. KW - RADFET KW - Radiation hardness KW - Absorbed dose KW - Dose rate KW - Self-adaptive MPSoC Y1 - 2022 U6 - https://doi.org/10.1016/j.micpro.2022.104486 SN - 0141-9331 SN - 1872-9436 VL - 90 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Ristic, Goran S. A1 - Ilic, Stefan D. A1 - Andjelkovic, Marko S. A1 - Duane, Russell A1 - Palma, Alberto J. A1 - Lallena, Antonio M. A1 - Krstić, Miloš A1 - Jaksic, Aleksandar B. T1 - Sensitivity and fading of irradiated RADFETs with different gate voltages JF - Nuclear Instruments and Methods in Physics Research Section A N2 - The radiation-sensitive field-effect transistors (RADFETs) with an oxide thickness of 400 nm are irradiated with gate voltages of 2, 4 and 6 V, and without gate voltage. A detailed analysis of the mechanisms responsible for the creation of traps during irradiation is performed. The creation of the traps in the oxide, near and at the silicon/silicon-dioxide (Si/SiO2) interface during irradiation is modelled very well. This modelling can also be used for other MOS transistors containing SiO2. The behaviour of radiation traps during postirradiation annealing is analysed, and the corresponding functions for their modelling are obtained.
The switching traps (STs) do not have a significant influence on the threshold voltage shift, and two radiation-induced trap types fit the fixed traps (FTs) very well. The fading does not depend on the positive gate voltage applied during irradiation, but it is half as large when no gate voltage is applied. A new dosimetric parameter, called the Golden Ratio (GR), is proposed, which represents the ratio between the threshold voltage shift after irradiation and the fading after spontaneous annealing. This parameter can be useful for comparing MOS dosimeters. KW - pMOS radiation dosimeter KW - RADFETs KW - irradiation KW - sensitivity KW - annealing KW - fading Y1 - 2022 U6 - https://doi.org/10.1016/j.nima.2022.166473 SN - 0168-9002 SN - 1872-9576 VL - 1029 PB - Elsevier CY - Amsterdam ER - TY - THES A1 - Hecher, Markus T1 - Advanced tools and methods for treewidth-based problem solving N2 - In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One reason why these solvers are so fast lies in structural properties of instances that the solvers exploit internally. This thesis deals with the well-studied structural property treewidth, which measures the closeness of an instance to being a tree. In fact, many problems are solvable in polynomial time in the instance size when parameterized by treewidth. In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing from one problem to another problem. This new reduction type will be the basis for a long-open lower bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth. Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat. N2 - In den letzten Jahrzehnten konnte ein beachtlicher Fortschritt im Bereich der Aussagenlogik verzeichnet werden. Dieser äußerte sich dadurch, dass für das wichtigste Problem in diesem Bereich, genannt „Sat“, welches sich mit der Fragestellung befasst, ob eine gegebene aussagenlogische Formel erfüllbar ist oder nicht, überwältigend schnelle Computerprogramme („Solver“) entwickelt werden konnten. Interessanterweise liefern diese Solver eine beeindruckende Leistung, weil sie oft selbst Probleminstanzen mit mehreren Millionen von Variablen spielend leicht lösen können. Auf der anderen Seite jedoch glaubt man in der Wissenschaft weitgehend an die Exponentialzeithypothese (ETH), welche besagt, dass man im schlimmsten Fall für das Lösen einer Instanz in diesem Bereich exponentielle Laufzeit in der Anzahl der Variablen benötigt. Dieser vermeintliche Widerspruch ist noch immer nicht vollständig geklärt, denn wahrscheinlich gibt es viele ineinandergreifende Gründe für die Schnelligkeit aktueller Sat Solver.
Einer dieser Gründe befasst sich weitgehend mit strukturellen Eigenschaften von Probleminstanzen, die wohl indirekt und intern von diesen Solvern ausgenützt werden. Diese Dissertation beschäftigt sich mit solchen strukturellen Eigenschaften, nämlich mit der sogenannten Baumweite. Die Baumweite ist sehr gut erforscht und versucht zu messen, wie groß der Abstand von Probleminstanzen zu Bäumen ist (Baumnähe). Allerdings ist dieser Parameter sehr generisch und bei Weitem nicht auf Problemstellungen der Aussagenlogik beschränkt. Tatsächlich gibt es viele weitere Probleme, die parametrisiert mit Baumweite in polynomieller Zeit gelöst werden können. Interessanterweise gibt es auch viele Probleme in der Wissensrepräsentation (KR), von denen man davon ausgeht, dass sie härter sind als das Problem Sat, die bei beschränkter Baumweite in polynomieller Zeit gelöst werden können. Ein prominentes Beispiel solcher Probleme ist das Problem QSat, welches sich mit der Gültigkeit einer gegebenen quantifizierten, aussagenlogischen Formel (QBF), das sind aussagenlogische Formeln, wo gewisse Variablen existenziell bzw. universell quantifiziert werden können, befasst. Bemerkenswerterweise wird allerdings auch im Zusammenhang mit Baumweite, ähnlich zu Methoden der klassischen Komplexitätstheorie, die tatsächliche Komplexität (Härte) solcher Probleme quantifiziert, wo man die exakte Laufzeitabhängigkeit beim Problemlösen in der Baumweite (Stufe der Exponentialität) beschreibt. Diese Arbeit befasst sich mit fortgeschrittenen, Baumweite-basierenden Methoden und Werkzeugen für Probleme der Wissensrepräsentation und künstlichen Intelligenz (AI). Dabei präsentieren wir Methoden, um präzise Laufzeitresultate (obere Schranken) für prominente Fragmente der Antwortmengenprogrammierung (ASP), welche ein kanonisches Paradigma zum Lösen von Problemen der Wissensrepräsentation darstellt, zu erhalten. Unsere Resultate basieren auf dem Konzept der dynamischen Programmierung, die angeleitet durch eine sogenannte Baumzerlegung und ähnlich dem Prinzip „Teile-und-herrsche“ funktioniert. Solch eine Baumzerlegung ist eine konkrete, strukturelle Zerlegung einer Probleminstanz, die sich stark an der Baumweite orientiert. Des Weiteren präsentieren wir einen neuen Typ von Problemreduktion, den wir als „decomposition-guided (DG)“, also „zerlegungsangeleitet“, bezeichnen. Dieser Reduktionstyp erlaubt es, Baumweiteerhöhungen und -verringerungen während einer Problemreduktion von einem bestimmten Problem zu einem anderen Problem präzise zu untersuchen und zu kontrollieren. Zusätzlich ist dieser neue Reduktionstyp die Basis, um ein lange offen gebliebenes Resultat betreffend quantifizierter, aussagenlogischer Formeln zu zeigen. Tatsächlich sind wir damit in der Lage, präzise untere Schranken, unter der Annahme der Exponentialzeithypothese, für das Problem QSat bei beschränkter Baumweite zu zeigen. Genauer gesagt können wir mit diesem Konzept der DG Reduktionen zeigen, dass das Problem QSat, beschränkt auf Quantifizierungsrang ℓ und parametrisiert mit Baumweite k, im Allgemeinen nicht besser als in einer Laufzeit gelöst werden kann, die ℓ-fach exponentiell in der Baumweite und polynomiell in der Instanzgröße ist. Dieses Resultat verallgemeinert auf nicht-inkrementelle Weise ein bekanntes Ergebnis für Quantifizierungsrang 2 auf beliebige Quantifizierungsränge; darüber hinaus impliziert es auch sehr viele weitere Konsequenzen.
Das Resultat über die untere Schranke des Problems QSat erlaubt es, eine neue Methodologie zum Zeigen unterer Schranken einer Vielzahl von Problemen der Wissensrepräsentation und künstlichen Intelligenz zu etablieren. In weiterer Konsequenz können wir damit auch zeigen, dass die oberen Schranken sowie die DG Reduktionen dieser Arbeit unter der Hypothese ETH „eng“ sind, d.h., sie können wahrscheinlich nicht mehr signifikant verbessert werden. Die Ergebnisse betreffend der unteren Schranken für QSat und die dazugehörige Methodologie konstituieren in gewisser Weise eine Hierarchie von über Baumweite parametrisierten Laufzeitklassen. Diese Laufzeitklassen können verwendet werden, um die Härte von Problemen für das Ausnützen von Baumweite zu quantifizieren und diese entsprechend ihrer Laufzeitabhängigkeit bezüglich Baumweite zu kategorisieren. Schlussendlich und trotz der genannten Resultate betreffend unterer Schranken sind wir im Stande, eine effiziente Implementierung von Algorithmen basierend auf dynamischer Programmierung, die entlang einer Baumzerlegung angeleitet wird, zur Verfügung zu stellen. Dabei probiert unser Ansatz, passende Abstraktionen von Instanzen zu finden, die dann im Endeffekt sukzessive und auf rekursive Art und Weise verfeinert und verbessert werden. Inspiriert durch die enorme Effizienz und Effektivität der Sat Solver, ist unsere Implementierung ein hybrider Ansatz, weil sie den starken Gebrauch von Sat Solvern zum Lösen diverser Subprobleme, die während der dynamischen Programmierung auftreten, pflegt. Dabei stellt sich heraus, dass der resultierende Solver unserer Implementierung im Bezug auf Effizienz beim Lösen von zwei kanonischen, Sat-verwandten Zählproblemen mit bestehenden Solvern locker mithalten kann. Tatsächlich sind wir im Stande, Instanzen zu lösen, bei denen die oberen Schranken für die Baumweite 260 übersteigen. Diese überraschende Beobachtung zeigt daher, dass Baumweite ein wichtiger Parameter sein könnte, der wohl in modernen Designs von Solvern berücksichtigt werden sollte. KW - Treewidth KW - Dynamic Programming KW - Knowledge Representation and Reasoning KW - Artificial Intelligence KW - Computational Complexity KW - Parameterized Complexity KW - Answer Set Programming KW - Exponential Time Hypothesis KW - Lower Bounds KW - Algorithms KW - Algorithmen KW - Antwortmengenprogrammierung KW - Künstliche Intelligenz KW - Komplexitätstheorie KW - Dynamische Programmierung KW - Exponentialzeit Hypothese KW - Wissensrepräsentation und Schlussfolgerung KW - Untere Schranken KW - Parametrisierte Komplexität KW - Baumweite Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-512519 ER - TY - JOUR A1 - Bordihn, Henning A1 - Vaszil, György T1 - Reversible parallel communicating finite automata systems JF - Acta informatica N2 - We study the concept of reversibility in connection with parallel communicating systems of finite automata (PCFA in short). We define the notion of reversibility in the case of PCFA (also covering the non-deterministic case) and discuss the relationship between the reversibility of the system and the reversibility of its components. We show that a system can be reversible with non-reversible components, and, the other way around, that the reversibility of the components does not necessarily imply the reversibility of the system as a whole. We also investigate the computational power of deterministic centralized reversible PCFA.
We show that these very simple types of PCFA (returning or non-returning) can recognize regular languages which cannot be accepted by reversible (deterministic) finite automata, and that they can even accept languages that are not context-free. We also separate the deterministic and non-deterministic variants in the case of systems with non-returning communication. We show that there are languages accepted by non-deterministic centralized PCFA which cannot be recognized by any deterministic variant of the same type. KW - Finite automata KW - Reversibility KW - Systems of parallel communicating automata Y1 - 2021 U6 - https://doi.org/10.1007/s00236-021-00396-9 SN - 0001-5903 SN - 1432-0525 VL - 58 IS - 4 SP - 263 EP - 279 PB - Springer CY - Berlin ; Heidelberg ; New York, NY ER - TY - JOUR A1 - Bordihn, Henning A1 - Holzer, Markus T1 - On the number of active states in finite automata JF - Acta informatica N2 - We introduce a new measure of descriptional complexity on finite automata, called the number of active states. Roughly speaking, the number of active states of an automaton A on input w counts the number of different states visited during the most economic computation of the automaton A for the word w. This concept generalizes to finite automata and regular languages in a straightforward way. We show that the number of active states of both finite automata and regular languages is computable, even with respect to nondeterministic finite automata. We further compare the number of active states to related measures for regular languages. In particular, we show incomparability to the radius of regular languages and that the difference between the number of active states and the total number of states needed in finite automata for a regular language can be of exponential order. Y1 - 2021 U6 - https://doi.org/10.1007/s00236-021-00397-8 SN - 0001-5903 SN - 1432-0525 VL - 58 IS - 4 SP - 301 EP - 318 PB - Springer CY - Berlin ; Heidelberg [u.a.] ER - TY - JOUR A1 - Kreowsky, Philipp A1 - Stabernack, Christian Benno T1 - A full-featured FPGA-based pipelined architecture for SIFT extraction JF - IEEE access : practical research, open solutions / Institute of Electrical and Electronics Engineers N2 - Image feature detection is a key task in computer vision. Scale Invariant Feature Transform (SIFT) is a prevalent and well-known algorithm for robust feature detection. However, it is computationally demanding, and software implementations cannot achieve real-time performance. In this paper, a versatile and pipelined hardware implementation is proposed that is capable of computing keypoints and rotation invariant descriptors on-chip. All computations are performed in single precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. For full high definition images, 84 fps can be processed. Ultra high definition images can be processed at 21 fps.
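For readers who want a software reference point for the extraction step that the hardware pipeline implements: OpenCV ships a SIFT implementation, sketched below under the assumption that an image file frame.png exists; a CPU run like this typically falls far short of the 84 fps (full HD) reported for the hardware architecture.

    import cv2

    # Load a grayscale test image (assumed to exist next to the script).
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()  # available in OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img, None)

    print(f"{len(keypoints)} keypoints, descriptor array shape: {descriptors.shape}")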
KW - Field programmable gate arrays KW - Convolution KW - Signal processing algorithms KW - Kernel KW - Image resolution KW - Histograms KW - Feature extraction KW - Scale-invariant feature transform (SIFT) KW - field-programmable gate array (FPGA) KW - image processing KW - computer vision KW - parallel processing KW - architecture KW - real-time KW - hardware architecture Y1 - 2021 U6 - https://doi.org/10.1109/ACCESS.2021.3104387 SN - 2169-3536 VL - 9 SP - 128564 EP - 128573 PB - Inst. of Electr. and Electronics Engineers CY - New York, NY ER - TY - THES A1 - Sahlmann, Kristina T1 - Network management with semantic descriptions for interoperability on the Internet of Things T1 - Netzwerk Management mit semantischen Beschreibungen für Interoperabilität im Internet der Dinge N2 - The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and eventually can be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability can help to manage such heterogeneous devices. Interoperability is the ability of different types of systems to work together smoothly. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is subdivided into syntactic and semantic interoperability. Semantic data describes the meaning of data and establishes a common understanding of the vocabulary, e.g. with the help of dictionaries, taxonomies, or ontologies. To achieve interoperability, semantic interoperability is necessary. Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, the commercial solutions produce a vendor lock-in. They focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing. This approach has the advantage that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency and to be robust against a lossy or failed Internet connection. We see management of IoT devices from the network configuration management perspective. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices by using semantic descriptions for interoperability. The MYNO framework is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management. The MQTT protocol is the de-facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These device descriptions capture the device capabilities. They are based on the oneM2M Base ontology and formalized by the Semantic Web Standards. The novel approach uses an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended in order to query such descriptions. Using semantic annotation, we made the device capabilities self-descriptive, machine-readable and re-usable.
The concept of a Virtual Device was introduced and implemented, based on semantic device descriptions. A Virtual Device aggregates the capabilities of all devices at the edge network and therefore contributes to scalability. Thus, it is possible to control all devices via a single RPC call. The model-driven NETCONF Web-Client is generated automatically from the YANG model, which the bridge generates based on the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios, as well as event configuration and triggering. The semantic approach results in increased memory overhead. Therefore, we evaluated CBOR and RDF HDT for optimization of ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and that RDF HDT is a promising candidate but is still a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions. One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on constrained CC2538dk devices in a 6LoWPAN network. The MYNO update process is focused on freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to bring the firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose to add a slicing feature for better support of constrained devices. The MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice-by-slice. For the performance and scalability evaluation of the MYNO framework, we set up the High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected by WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that the processing of ontology-based descriptions on a Raspberry Pi 3B with the RDFLib is a challenging task regarding computational power. Nevertheless, it is feasible because it must be done only once per device during the discovery process. The MYNO framework was tested with heterogeneous devices such as the CC2538dk from Texas Instruments, the Arduino Yún Rev 3, and the ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN. Summarizing, with the MYNO framework we could show that the semantic approach on constrained devices is feasible in the IoT. N2 - Ein Netzwerk von physischen Objekten (Dingen), die von elektronischen Geräten entdeckt, überwacht und gesteuert werden können, die über verschiedene Netzwerkschnittstellen kommunizieren und schließlich mit dem Internet verbunden werden können, bezeichnet man als Internet of Things (Internet der Dinge, IoT) [Guinard und Trifa, 2016]. Die elektronischen Geräte sind mit Sensoren und Aktuatoren ausgestattet und verfügen oft nur über begrenzte Rechenressourcen wie Leistung, Speicher, Netzwerkbandbreite und Energie. Interoperabilität ist die Fähigkeit verschiedener Systemtypen reibungslos zusammenzuarbeiten und kann helfen, heterogene Geräte im IoT zu verwalten. Die Semantische Interoperabilität stellt sicher, dass die Bedeutung von Daten und das gemeinsame Verständnis des Vokabulars zwischen den Systemen vorhanden ist.
Viele Organisationen und Unternehmen arbeiten an Standards und Lösungen für die Interoperabilität im IoT, bieten aber nur Insellösungen an. Die kommerziellen Lösungen führen jedoch zu einer Lieferantenbindung. Sie konzentrieren sich auf zentralisierte Ansätze wie Cloud-basierte Lösungen. Wir verfolgen einen dezentralen Ansatz, nämlich Edge Computing, und sehen die Verwaltung von IoT-Geräten aus der Perspektive des Netzwerkkonfigurationsmanagements. In dieser Arbeit wird ein Framework für das Netzwerkkonfigurationsmanagement heterogener IoT-Geräte mit begrenzten Rechenressourcen unter Verwendung semantischer Beschreibungen für die Interoperabilität vorgestellt. Das MYNO-Framework steht für die verwendeten Technologien MQTT, YANG, NETCONF und Ontologie. Das NETCONF-Protokoll ist der IETF-Standard für das Netzwerkkonfigurationsmanagement und verwendet YANG als Datenmodellierungssprache. Das MQTT-Protokoll ist der De-facto-Standard im IoT. Die semantischen Beschreibungen enthalten eine detaillierte Liste der Gerätefunktionen. Sie basieren auf der oneM2M Base-Ontologie und verwenden Semantic Web Standards. Das Konzept eines Virtuellen Geräts wurde basierend auf den semantischen Gerätebeschreibungen eingeführt und implementiert. Der modellgesteuerte NETCONF Web-Client wird automatisch auf Basis von YANG generiert, das auf Basis der semantischen Gerätebeschreibung erstellt wird. Wir demonstrieren die Machbarkeit des MYNO Ansatzes in verschiedenen Anwendungsfällen: Sensor- und Aktuator-Szenarien sowie Ereigniskonfiguration und -auslösung. Eine der Sicherheitsaufgaben des Netzwerkmanagements ist die Verteilung von Firmware-Updates. Das MYNO Update Protocol (MUP) wurde auf CC2538dk-Geräten in einem 6LoWPAN-Netzwerk entwickelt und evaluiert. Für die Bewertung der Leistung und Skalierbarkeit des MYNO-Frameworks wurde ein Precision Agriculture Demonstrator mit 10 ESP-32 NodeMCU Geräten eingerichtet. Zusammenfassend konnten wir mit dem MYNO-Framework zeigen, dass der semantische Ansatz für Geräte mit limitierten Rechenressourcen im Internet of Things machbar ist. KW - Internet of Things KW - Network Management KW - MQTT KW - Ontology KW - Interoperability KW - Netzwerk Management KW - Interoperabilität KW - Sensornetzwerke KW - Ontologie KW - 6LoWPAN KW - Semantic Web KW - IoT KW - NETCONF KW - oneM2M Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-529846 ER - TY - THES A1 - Grum, Marcus T1 - Construction of a concept of neuronal modeling N2 - The business problem of having inefficient processes, imprecise process analyses, and simulations as well as non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this paper proposes the Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the designed modeling language, its mathematical formulation, and a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS), and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
N2 - Die vorliegende Arbeit adressiert das Geschäftsproblem von ineffizienten Prozessen, unpräzisen Prozessanalysen und -simulationen sowie untransparenten künstlichen neuronalen Netzwerken, indem ein Modellierungskonzept zum Neuronalen Modellieren konstruiert wird. Dieses neuartige Konzept des Neuronalen Modellierens (CoNM) fungiert als flexibler und effizienter Ansatz zum Modellieren, Simulieren und Optimieren von Prozessen mit Hilfe von neuronalen Netzwerken und wird mittels einer Modellierungssprache, deren mathematischer Formalisierung und technischer Substanziierung sowie einer Sammlung von neuartigen Subartefakten beschrieben. Durch die Verwendung ihrer Implementierung als CoNM-Werkzeuge können somit neue Arten einer Neuronalen-Prozess-Modellierung (NPM), Neuronalen-Prozess-Simulation (NPS) sowie Neuronalen-Prozess-Optimierung (NPO) realisiert werden. Die Wirksamkeit der erstellten Artefakte wurde anhand von sechs Experimenten demonstriert sowie in einem Simulator in realen Produktionsprozessen gezeigt. T2 - Konzept des Neuronalen Modellierens KW - Deep Learning KW - Artificial Neuronal Network KW - Explainability KW - Interpretability KW - Business Process KW - Simulation KW - Optimization KW - Knowledge Management KW - Process Management KW - Modeling KW - Process KW - Knowledge KW - Learning KW - Enterprise Architecture KW - Industry 4.0 KW - Künstliche Neuronale Netzwerke KW - Erklärbarkeit KW - Interpretierbarkeit KW - Geschäftsprozess KW - Simulation KW - Optimierung KW - Wissensmanagement KW - Prozessmanagement KW - Modellierung KW - Prozess KW - Wissen KW - Lernen KW - Enterprise Architecture KW - Industrie 4.0 Y1 - 2021 ER - TY - JOUR A1 - Bauer, Chris A1 - Herwig, Ralf A1 - Lienhard, Matthias A1 - Prasse, Paul A1 - Scheffer, Tobias A1 - Schuchhardt, Johannes T1 - Large-scale literature mining to assess the relation between anti-cancer drugs and cancer types JF - Journal of translational medicine N2 - Background: There is a huge body of scientific literature describing the relation between tumor types and anti-cancer drugs. The vast amount of scientific literature makes it impossible for researchers and physicians to extract all relevant information manually. Methods: In order to cope with the large amount of literature we applied an automated text mining approach to assess the relations between the 30 most frequent cancer types and 270 anti-cancer drugs. We applied two different approaches: a classical text mining approach based on named entity recognition and an AI-based approach employing word embeddings. The consistency of literature mining results was validated with 3 independent methods: first, using data from FDA approvals, second, using experimentally measured IC-50 cell line data, and third, using clinical patient survival data. Results: We demonstrated that the automated text mining was able to successfully assess the relation between cancer types and anti-cancer drugs. All validation methods showed a good correspondence between the results from literature mining and independent confirmatory approaches. The relations between the most frequent cancer types and the drugs employed for their treatment were visualized in a large heatmap. All results are accessible in an interactive web-based knowledge base using the following link: . Conclusions: Our approach is able to assess the relations between compounds and cancer types in an automated manner. Both cancer types and compounds could be grouped into different clusters.
Researchers can use the interactive knowledge base to inspect the presented results and follow their own research questions, for example the identification of novel indication areas for known drugs. KW - Literature mining KW - Anti-cancer drugs KW - Tumor types KW - Word embeddings KW - Database Y1 - 2021 U6 - https://doi.org/10.1186/s12967-021-02941-z SN - 1479-5876 VL - 19 IS - 1 PB - BioMed Central CY - London ER - TY - JOUR A1 - Huang, Yizhen A1 - Richter, Eric A1 - Kleickmann, Thilo A1 - Wiepke, Axel A1 - Richter, Dirk T1 - Classroom complexity affects student teachers’ behavior in a VR classroom JF - Computers & education : an international journal N2 - Student teachers often struggle to keep track of everything that is happening in the classroom, and particularly to notice and respond when students cause disruptions. The complexity of the classroom environment is a potential contributing factor that has not been empirically tested. In this experimental study, we utilized a virtual reality (VR) classroom to examine whether classroom complexity affects the likelihood of student teachers noticing disruptions and how they react after noticing. Classroom complexity was operationalized as the number of disruptions and the existence of overlapping disruptions (multidimensionality) as well as the existence of parallel teaching tasks (simultaneity). Results showed that student teachers (n = 50) were less likely to notice the scripted disruptions, and also less likely to respond to the disruptions in a comprehensive and effortful manner when facing greater complexity. These results may have implications for both teacher training and the design of VR for training or research purposes. This study contributes to the field in two ways: 1) it revealed how features of the classroom environment can affect student teachers' noticing of and reaction to disruptions; and 2) it extends the functionality of the VR environment, from a teacher training tool to a testbed of fundamental classroom processes that are difficult to manipulate in real life. KW - Augmented and virtual reality KW - Simulations KW - Improving classroom teaching KW - Media in education KW - Pedagogical issues Y1 - 2021 U6 - https://doi.org/10.1016/j.compedu.2020.104100 SN - 0360-1315 SN - 1873-782X VL - 163 PB - Elsevier CY - Oxford ER - TY - JOUR A1 - Hawro, Tomasz A1 - Przybylowicz, Katarzyna A1 - Spindler, Max A1 - Hawro, Marlena A1 - Steć, Michał A1 - Altrichter, Sabine A1 - Weller, Karsten A1 - Magerl, Markus A1 - Reidel, Ulrich A1 - Alarbeed, Ezzat A1 - Alraboni, Ola A1 - Maurer, Marcus A1 - Metz, Martin T1 - The characteristics and impact of pruritus in adult dermatology patients BT - a prospective, cross-sectional study JF - Journal of the American Academy of Dermatology N2 - Background: Pruritus often accompanies chronic skin diseases, exerting a considerable burden on many areas of patient functioning; this burden and the features of pruritus remain insufficiently characterized. Objective: To investigate the characteristics, including localization patterns, and burden of pruritus in patients with chronic dermatoses. Methods: We recruited 800 patients with active chronic skin diseases. We assessed pruritus intensity, localization, and further characteristics. We used validated questionnaires to assess quality of life, work productivity and activity impairment, anxiety, depression, and sleep quality. Results: Nine out of every 10 patients had experienced pruritus throughout their disease, and 73% in the last 7 days.
Pruritus often affected the entire body and was not restricted to skin lesions. Patients with moderate to severe pruritus reported significantly more impairment to their sleep quality and work productivity, and they were more depressed and anxious than control individuals and patients with mild or no pruritus. Suicidal ideations were highly prevalent in patients with chronic pruritus (18.5%) and atopic dermatitis (11.8%). Conclusions: Pruritus prevalence and intensity are very high across all dermatoses studied; intensity is linked to impairment in many areas of daily functioning. Effective treatment strategies are urgently required to treat pruritus and the underlying skin disease. (J Am Acad Dermatol 2021;84:691-700.) KW - activity KW - anxiety KW - depression KW - pruritus KW - quality of life KW - sleep quality KW - suicidal ideations KW - work productivity Y1 - 2021 U6 - https://doi.org/10.1016/J.JAAD.2020.08.035 SN - 0190-9622 SN - 1097-6787 VL - 84 IS - 3 SP - 691 EP - 700 PB - Elsevier CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Middelanis, Robin A1 - Willner, Sven N. A1 - Otto, Christian A1 - Kuhla, Kilian A1 - Quante, Lennart A1 - Levermann, Anders T1 - Wave-like global economic ripple response to Hurricane Sandy JF - Environmental research letters : ERL / Institute of Physics N2 - Tropical cyclones range among the costliest disasters on Earth. Their economic repercussions along the supply and trade network also affect remote economies that are not directly affected. We here simulate possible global repercussions on consumption for the example case of Hurricane Sandy in the US (2012) using the shock-propagation model Acclimate. The modeled shock yields a global three-phase ripple: an initial production demand reduction and associated consumption price decrease, followed by a supply shortage with increasing prices, and finally a recovery phase. Regions with strong trade relations to the US experience large ripple magnitudes. A dominating demand reduction or supply shortage leads to overall consumption gains or losses of a region, respectively. While finding these repercussions in historic data is challenging due to strong volatility of economic interactions, numerical models like ours can help to identify them by approaching the problem from an exploratory angle, isolating the effect of interest. For this, our model simulates the economic interactions of over 7000 regional economic sectors, interlinked through about 1.8 million trade relations. Under global warming, the wave-like structures of the economic response to major hurricanes like the one simulated here are likely to intensify and potentially overlap with other weather extremes. KW - supply chains KW - Hurricane Sandy KW - economic ripples KW - extreme weather KW - impacts KW - loss propagation KW - natural disasters Y1 - 2021 U6 - https://doi.org/10.1088/1748-9326/ac39c0 SN - 1748-9326 VL - 16 IS - 12 PB - IOP Publ. Ltd. CY - Bristol ER - TY - JOUR A1 - Quante, Lennart A1 - Willner, Sven N. A1 - Middelanis, Robin A1 - Levermann, Anders T1 - Regions of intensification of extreme snowfall under future warming JF - Scientific reports N2 - Due to climate change the frequency and character of precipitation are changing as the hydrological cycle intensifies. With regard to snowfall, global warming has two opposing influences: increasing humidity enables intense snowfall, whereas higher temperatures decrease the likelihood of snowfall.
Here we show an intensification of extreme snowfall across large areas of the Northern Hemisphere under future warming. This finding is robust across an ensemble of global climate models when they are bias-corrected with observational data. While mean daily snowfall decreases, both the 99th and the 99.9th percentiles of daily snowfall increase in many regions in the coming decades, especially in North America and Asia. Additionally, the average intensity of snowfall events exceeding these historically experienced percentiles increases in many regions. This is likely to pose a challenge to municipalities in mid to high latitudes. Overall, extreme snowfall events are likely to become an increasingly important impact of climate change in the coming decades, even though they will become rarer, but not necessarily less intense, in the second half of the century. Y1 - 2021 U6 - https://doi.org/10.1038/s41598-021-95979-4 SN - 2045-2322 VL - 11 IS - 1 PB - Macmillan Publishers Limited, part of Springer Nature CY - Berlin ER - TY - THES A1 - Makowski, Silvia T1 - Discriminative Models for Biometric Identification using Micro- and Macro-Movements of the Eyes N2 - Human visual perception is an active process. Eye movements either alternate between fixations and saccades or follow a smooth pursuit movement in the case of moving targets. Besides these macroscopic gaze patterns, the eyes perform involuntary micro-movements during fixations, which are commonly categorized into micro-saccades, drift and tremor. Eye movements are frequently studied in cognitive psychology because they reflect a complex interplay of perception, attention and oculomotor control. A common insight of psychological research is that macro-movements are highly individual. Inspired by this finding, there has been a considerable amount of prior research on oculomotoric biometric identification. However, the accuracy of known approaches is too low and the time needed for identification is too long for any practical application. This thesis explores discriminative models for the task of biometric identification. Discriminative models optimize a quality measure of the predictions and are usually superior to generative approaches in discriminative tasks. However, using discriminative models requires selecting a suitable representation of sequential eye gaze data, e.g. by engineering features or constructing a sequence kernel; the performance of the classification model strongly depends on this representation. We study two fundamentally different ways of representing eye gaze within a discriminative framework. In the first part of this thesis, we explore the integration of data and psychological background knowledge in the form of generative models to construct representations. To this end, we first develop generative statistical models of gaze behavior during reading and scene viewing that account for viewer-specific distributional properties of gaze patterns. In a second step, we develop a discriminative identification model by deriving Fisher kernel functions from these and several baseline models. We find that an SVM with a Fisher kernel is able to reliably identify users based on their eye gaze during reading and scene viewing.
However, since the generative models are constrained to use low-frequency macro-movements, they discard a significant amount of the information contained in the raw eye tracking signal, at a high cost: identification requires about one minute of input recording, which makes the approach inapplicable for real-world biometric systems. In the second part of this thesis, we study a purely data-driven modeling approach. Here, we aim at automatically discovering the individual pattern hidden in the raw eye tracking signal. To this end, we develop a deep convolutional neural network, DeepEyedentification, that processes yaw and pitch gaze velocities and learns a representation end-to-end. Compared to prior work, this model increases identification accuracy by an order of magnitude and reduces the time to identification to only seconds. The DeepEyedentificationLive model further improves identification performance by processing binocular input, and it also detects presentation attacks. We find that by learning a representation, the performance of oculomotoric identification and presentation-attack detection can be driven close to practical relevance for biometric applications. Eye tracking devices with high sampling frequency and precision are expensive, and the applicability of eye movement as a biometric feature heavily depends on the cost of recording devices. In the last part of this thesis, we therefore study the requirements on data quality by evaluating the performance of the DeepEyedentificationLive network under reduced spatial and temporal resolution. We find that the method still attains a high identification accuracy at a temporal resolution of only 250 Hz and a precision of 0.03 degrees. Reducing both does not have an additive deteriorating effect. KW - Machine Learning Y1 - 2021 ER - TY - JOUR A1 - Schirrmann, Michael A1 - Landwehr, Niels A1 - Giebel, Antje A1 - Garz, Andreas A1 - Dammer, Karl-Heinz T1 - Early detection of stripe rust in winter wheat using deep residual neural networks JF - Frontiers in plant science : FPLS N2 - Stripe rust (Pst) is a major disease of wheat crops that, if left untreated, leads to severe yield losses. The use of fungicides is often essential to control Pst when sudden outbreaks are imminent. Sensors capable of detecting Pst in wheat crops could optimize the use of fungicides and improve disease monitoring in high-throughput field phenotyping. Deep learning now provides new tools for image recognition and may pave the way for new camera-based sensors that can identify symptoms in the early stages of a disease outbreak within the field. The aim of this study was to train an image classifier to detect Pst symptoms in winter wheat canopies based on a deep residual neural network (ResNet). For this purpose, a large annotation database was created from images taken by a standard RGB camera that was mounted on a platform at a height of 2 m. Images were acquired while the platform was moved over a randomized field experiment with Pst-inoculated and Pst-free plots of winter wheat. The image classifier was trained with 224 x 224 px patches tiled from the original, unprocessed camera images and was then tested on different stages of the disease outbreak. At patch level, the classifier reached a total accuracy of 90%. To test it at image level, it was evaluated with a sliding window using a large stride of 224 px, allowing for fast test performance. At image level, the classifier reached a total accuracy of 77%.
Even at a stage of very low disease spread (0.5%) at the very beginning of the Pst outbreak, a detection accuracy of 57% was obtained. In the initial phase of the outbreak, with 2 to 4% disease spread, a detection accuracy of 76% was attained. With further optimizations, the image classifier could be implemented in embedded systems and deployed on drones, vehicles or scanning systems for fast mapping of Pst outbreaks. KW - yellow rust KW - monitoring KW - deep learning KW - wheat crops KW - image recognition KW - camera sensor KW - ResNet KW - smart farming Y1 - 2021 U6 - https://doi.org/10.3389/fpls.2021.469689 SN - 1664-462X VL - 12 PB - Frontiers Media CY - Lausanne ER - TY - JOUR A1 - Gautam, Khem Raj A1 - Zhang, Guoqiang A1 - Landwehr, Niels A1 - Adolphs, Julian T1 - Machine learning for improvement of thermal conditions inside a hybrid ventilated animal building JF - Computers and electronics in agriculture : COMPAG online ; an international journal N2 - In buildings with hybrid ventilation, natural ventilation opening positions (windows), mechanical ventilation rates, heating, and cooling are manipulated to maintain desired thermal conditions. The indoor temperature is regulated solely by ventilation (natural and mechanical) when the external conditions are favorable, in order to save heating and cooling energy. The ventilation parameters are determined by a rule-based control scheme, which is not optimal. This study proposes a methodology to enable real-time optimal control of ventilation parameters. We developed offline prediction models to estimate future thermal conditions from data collected from the building in operation. The developed offline model is then used to find the optimal controllable ventilation parameters in real time to minimize the setpoint deviation in the building. With the proposed methodology, the experimental building's setpoint deviation improved for 87% of the time, on average by 0.53 °C, compared to the deviations under the current control scheme. KW - Animal building KW - Natural ventilation KW - Automatically controlled windows KW - Machine learning KW - Optimization Y1 - 2021 U6 - https://doi.org/10.1016/j.compag.2021.106259 SN - 0168-1699 SN - 1872-7107 VL - 187 PB - Elsevier Science CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Camargo, Tibor de A1 - Schirrmann, Michael A1 - Landwehr, Niels A1 - Dammer, Karl-Heinz A1 - Pflanz, Michael T1 - Optimized deep learning model as a basis for fast UAV mapping of weed species in winter wheat crops JF - Remote sensing / Molecular Diversity Preservation International (MDPI) N2 - Weed maps should be available quickly, reliably, and with high detail to be useful for site-specific management in crop protection and to promote more sustainable agriculture by reducing pesticide use. Here, the optimization of a deep residual convolutional neural network (ResNet-18) for the classification of weed and crop plants in UAV imagery is proposed. The target was to reach sufficient performance on an embedded system while maintaining the same features of the ResNet-18 model as a basis for fast UAV mapping. This would enable online recognition and subsequent mapping of weeds during UAV flight operations. Optimization was achieved mainly by avoiding redundant computations that arise when a classification model is applied to overlapping tiles in a larger input image.
The model was trained and tested with imagery obtained from a UAV flight campaign at low altitude over a winter wheat field, and classification was performed at species level for the weed species Matricaria chamomilla L., Papaver rhoeas L., Veronica hederifolia L., and Viola arvensis ssp. arvensis observed in that field. The ResNet-18 model with the optimized image-level prediction pipeline reached a performance of 2.2 frames per second with an NVIDIA Jetson AGX Xavier on the full-resolution UAV image, which would amount to an area output of about 1.78 ha h⁻¹ for continuous field mapping. The overall accuracy for determining crop, soil, and weed species was 94%. There were some limitations in the detection of species unknown to the model. When shifting from 16-bit to 32-bit model precision, no improvement in classification accuracy was observed, but speed performance declined strongly, especially when a higher number of filters was used in the ResNet-18 model. Future work should be directed towards the integration of the mapping process on UAV platforms, guiding UAVs autonomously for mapping purposes, and ensuring the transferability of the models to other crop fields. KW - ResNet KW - deep residual networks KW - UAV imagery KW - embedded systems KW - crop monitoring KW - image classification KW - site-specific weed management KW - real-time mapping Y1 - 2021 U6 - https://doi.org/10.3390/rs13091704 SN - 2072-4292 VL - 13 IS - 9 PB - MDPI CY - Basel ER - TY - JOUR A1 - Brede, Nuria A1 - Botta, Nicola T1 - On the correctness of monadic backward induction JF - Journal of functional programming N2 - In control theory, solving a finite-horizon sequential decision problem (SDP) commonly means finding a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.
Y1 - 2021 U6 - https://doi.org/10.1017/S0956796821000228 SN - 1469-7653 SN - 0956-7968 VL - 31 PB - Cambridge University Press CY - Cambridge ER - TY - JOUR A1 - Andjelković, Marko A1 - Chen, Junchao A1 - Simevski, Aleksandar A1 - Schrape, Oliver A1 - Krstić, Miloš A1 - Kraemer, Rolf T1 - Monitoring of particle count rate and LET variations with pulse stretching inverters JF - IEEE transactions on nuclear science : a publication of the IEEE Nuclear and Plasma Sciences Society N2 - This study investigates the use of pulse stretching (skew-sized) inverters for monitoring the variation of the count rate and linear energy transfer (LET) of energetic particles. The basic particle detector is a cascade of two pulse stretching inverters, and the required sensing area is obtained by connecting up to 12 two-inverter cells in parallel and employing the required number of parallel arrays. The incident particles are detected as single-event transients (SETs), whereby the SET count rate denotes the particle count rate, while the SET pulsewidth distribution depicts the LET variations. The advantage of the proposed solution is the possibility of sensing the LET variations using fully digital processing logic. SPICE simulations conducted on IHP's 130-nm CMOS technology have shown that the SET pulsewidth varies by approximately 550 ps over the LET range from 1 to 100 MeV·cm²·mg⁻¹. The proposed detector is intended for triggering the fault-tolerant mechanisms within a self-adaptive multiprocessing system employed in space. It can be implemented as a standalone detector or integrated on the same chip as the target system. KW - Particle detector KW - pulse stretching inverters KW - single-event transient (SET) count rate KW - SET pulsewidth distribution Y1 - 2021 U6 - https://doi.org/10.1109/TNS.2021.3076400 SN - 0018-9499 SN - 1558-1578 VL - 68 IS - 8 SP - 1772 EP - 1781 PB - Institute of Electrical and Electronics Engineers CY - New York, NY ER - TY - THES A1 - Andjelkovic, Marko T1 - A methodology for characterization, modeling and mitigation of single event transient effects in CMOS standard combinational cells T1 - Eine Methode zur Charakterisierung, Modellierung und Minderung von SET Effekten in kombinierten CMOS-Standardzellen N2 - With the downscaling of CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should synergistically address three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures are still critical issues.
Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation. Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they explicitly express the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In this way, the amount of characterization data in the SET sensitivity database is reduced significantly. The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, a significant improvement of the overall SER can be achieved with minimal area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on the SET robustness improvement, as well as the introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in the subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design. As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels.
To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency and power consumption, and immunity to error accumulation. The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and of a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results with irradiation experiments. N2 - Mit der Verkleinerung der Strukturen moderner CMOS-Technologien sind strahlungsinduzierte Single Event Transient (SET)-Effekte in kombinatorischer Logik zu einem kritischen Zuverlässigkeitsproblem in integrierten Schaltkreisen (ICs) geworden, die für den Betrieb unter rauen Strahlungsbedingungen (z. B. im Weltraum) vorgesehen sind. Die in der Kombinationslogik erzeugten SET-Impulse können durch die Schaltungen propagieren und schließlich in Speicherelementen (z.B. Flip-Flops oder Latches) zwischengespeichert werden, was zu sogenannten Soft-Errors und folglich zu Datenbeschädigungen oder einem Systemausfall führt. Daher ist es in den frühen Phasen des strahlungsharten IC-Designs unerlässlich geworden, die SET-Effekte systematisch anzugehen. Im Allgemeinen sollten die Lösungen zur Minderung von Soft-Errors sowohl statische als auch dynamische Maßnahmen berücksichtigen, um die optimale Nutzung der verfügbaren Ressourcen sicherzustellen. Somit sollte ein effizientes Soft-Error-Aware-Design drei Hauptaspekte synergistisch berücksichtigen: (i) die Charakterisierung und Modellierung von Soft-Errors, (ii) eine mehrstufige Soft-Error-Minderung und (iii) eine Online-Soft-Error-Überwachung. Obwohl signifikante Ergebnisse erzielt wurden, sind die Wirksamkeit der SET-Charakterisierung, die Genauigkeit von Vorhersagemodellen und die Effizienz der Minderungsmaßnahmen immer noch die kritischen Punkte. Daher stellt diese Arbeit die folgenden Originalbeiträge vor: • Eine ganzheitliche Methodik zur SPICE-basierten Charakterisierung von SET-Effekten in kombinatorischen Standardzellen und entsprechenden Härtungskonfigurationen auf Gate-Ebene mit reduzierter Anzahl von Simulationen und reduzierter SET-Sensitivitätsdatenbank. • Analytische Modelle für SET-Empfindlichkeit (kritische Ladung, erzeugte SET-Pulsbreite und propagierte SET-Pulsbreite), basierend auf dem Superpositionsprinzip und Anpassung der Ergebnisse aus SPICE-Simulationen. • Ein Ansatz zur SET-Abschwächung auf Gate-Ebene, der auf dem Einfügen von zwei Entkopplungszellen am Ausgang eines Logikgatters basiert, was den Anstieg der kritischen Ladung und die signifikante Unterdrückung kurzer SETs bewirkt.
• Eine vergleichende Charakterisierung der vorgeschlagenen SET-Abschwächungstechnik mit Entkopplungszellen und sieben bestehenden Techniken durch eine quantitative Bewertung ihrer Auswirkungen auf die Verbesserung der SET-Robustheit einzelner Logikgatter. • Ein Partikeldetektor auf Basis von Impulsdehnungs-Invertern in Skew-Größe zur Online-Überwachung des Partikelflusses und der LET-Variationen mit rein digitaler Anzeige. Die in dieser Dissertation erzielten Ergebnisse können als Grundlage für den Aufbau einer umfassenden Soft-Error-aware-Datenbank für eine gegebene digitale Bibliothek und eines umfassenden mehrstufigen strahlungsharten Designflusses dienen, der mit den Standard-IC-Designtools implementiert werden kann. Im nächsten Schritt werden die mit den Bestrahlungsexperimenten erzielten Ergebnisse ausgewertet. KW - Single Event Transient KW - radiation hardness design KW - Single Event Transient KW - Strahlungshärte Entwurf Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-534843 ER - TY - JOUR A1 - Schrape, Oliver A1 - Andjelkovic, Marko A1 - Breitenreiter, Anselm A1 - Zeidler, Steffen A1 - Balashov, Alexey A1 - Krstić, Miloš T1 - Design and evaluation of radiation-hardened standard cell flip-flops JF - IEEE transactions on circuits and systems : a publication of the IEEE Circuits and Systems Society: 1, Regular papers N2 - The use of a standard non-rad-hard digital cell library in rad-hard design can be a cost-effective solution for space applications. In this paper, we demonstrate how a standard non-rad-hard flip-flop, as one of the most vulnerable digital cells, can be converted into a rad-hard flip-flop without modifying its internal structure. We present five variants of a Triple Modular Redundancy (TMR) flip-flop: a baseline TMR flip-flop, a latch-based TMR flip-flop, a True Single-Phase Clock (TSPC) TMR flip-flop, a scannable TMR flip-flop and a self-correcting TMR flip-flop. For all variants, multi-bit upsets have been addressed by applying special placement constraints, while Single Event Transient (SET) mitigation was achieved through the use of customized SET filters and the selection of optimal inverter sizes for the clock and reset trees. The proposed flip-flop variants feature differing performance, thus making it possible to choose the optimal solution for every sensitive node in the circuit, according to the predefined design constraints. Several flip-flop designs have been validated on IHP's 130-nm BiCMOS process by irradiation of custom-designed shift registers. It has been shown that the proposed TMR flip-flops are robust to soft errors with a threshold Linear Energy Transfer (LET) from 32.4 MeV·cm²/mg to 62.5 MeV·cm²/mg, depending on the variant. KW - Single event effect KW - fault tolerance KW - triple modular redundancy KW - ASIC KW - design flow KW - rad-hard design Y1 - 2021 U6 - https://doi.org/10.1109/TCSI.2021.3109080 SN - 1549-8328 SN - 1558-0806 SN - 1057-7122 VL - 68 IS - 11 SP - 4796 EP - 4809 PB - Inst. of Electr. and Electronics Engineers CY - New York, NY ER - TY - JOUR A1 - Tavakoli, Hamad A1 - Alirezazadeh, Pendar A1 - Hedayatipour, Ava A1 - Nasib, A. H. Banijamali A1 - Landwehr, Niels T1 - Leaf image-based classification of some common bean cultivars using discriminative convolutional neural networks JF - Computers and electronics in agriculture : COMPAG online ; an international journal N2 - In recent years, many efforts have been made to apply image processing techniques to plant leaf identification.
However, categorizing leaf images at the cultivar/variety level is still a challenging task because of the very low inter-class variability. In this research, we propose an automatic discriminative method based on convolutional neural networks (CNNs) for classifying 12 different cultivars of common beans that belong to three different species. We show that employing advanced loss functions, such as Additive Angular Margin Loss and Large Margin Cosine Loss, instead of the standard softmax loss function for the classification can yield better discrimination between classes and thereby mitigate the problem of low inter-class variability. The method was evaluated by classifying species (level I), cultivars from the same species (level II), and cultivars from different species (level III), based on images of the leaf foreside and backside. The results indicate that the performance of the classification algorithm on the leaf backside image dataset is superior. Maximum mean classification accuracies of 95.86%, 91.37% and 86.87% were obtained at levels I, II and III, respectively. The proposed method outperforms previous related work and provides a reliable approach for plant cultivar identification. KW - Bean KW - Plant identification KW - Digital image analysis KW - VGG16 KW - Loss functions Y1 - 2021 U6 - https://doi.org/10.1016/j.compag.2020.105935 SN - 0168-1699 SN - 1872-7107 VL - 181 PB - Elsevier CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Cabalar, Pedro A1 - Fandiño, Jorge A1 - Fariñas del Cerro, Luis T1 - Splitting epistemic logic programs JF - Theory and practice of logic programming / publ. for the Association for Logic Programming N2 - Epistemic logic programs constitute an extension of the stable model semantics to deal with new constructs called subjective literals. Informally speaking, a subjective literal allows checking whether some objective literal is true in all or some stable models. As can be imagined, the associated semantics has proved to be non-trivial, since the truth of subjective literals may interfere with the set of stable models it is supposed to query. As a consequence, no clear agreement has been reached, and different semantic proposals have been made in the literature. Unfortunately, comparison among these proposals has been limited to a study of their effect on individual examples, rather than identifying general properties to be checked. In this paper, we propose an extension of the well-known splitting property for logic programs to the epistemic case. We formally define when an arbitrary semantics satisfies the epistemic splitting property and examine some of the consequences that can be derived from that, including its relation to conformant planning and to epistemic constraints. Interestingly, we prove (through counterexamples) that most of the existing approaches fail to fulfill the epistemic splitting property, except the original semantics proposed by Gelfond in 1991 and a recent proposal by the authors, called Founded Autoepistemic Equilibrium Logic. KW - knowledge representation and nonmonotonic reasoning KW - logic programming methodology and applications KW - theory Y1 - 2021 U6 - https://doi.org/10.1017/S1471068420000058 SN - 1471-0684 SN - 1475-3081 VL - 21 IS - 3 SP - 296 EP - 316 PB - Cambridge Univ. Press CY - Cambridge [u.a.]
ER - TY - THES A1 - Ashouri, Mohammadreza T1 - TrainTrap BT - a hybrid technique for vulnerability analysis in JAVA Y1 - 2020 ER - TY - JOUR A1 - Everardo Pérez, Flavio Omar A1 - Osorio, Mauricio T1 - Towards an answer set programming methodology for constructing programs following a semi-automatic approach BT - extended and revised version JF - Electronic notes in theoretical computer science N2 - Answer Set Programming (ASP) is a successful rule-based formalism for modeling and solving knowledge-intensive combinatorial (optimization) problems. Despite its success in both academia and industry, open challenges such as automatic source code optimization and software engineering remain. This is because a problem encoded in ASP might not have the desired solving performance compared to an equivalent representation. Motivated by these two challenges, this paper makes three main contributions. First, we propose a development process towards a methodology for implementing ASP programs that is faithful to existing methods. Second, we present ASP encodings that serve as the basis for the development process. Third, we demonstrate the use of ASP to reverse the standard solving process: knowing the answer sets in advance, together with desired strong equivalence properties, we exhaustively reconstruct ASP programs if they exist. This paper was originally motivated by the search for propositional formulas (if they exist) that represent the semantics of a new aggregate operator, in particular a parity aggregate. This aggregate comes as an improvement over the already existing parity (xor) constraints from xorro, which lack expressiveness, even though these constraints fit perfectly for reasoning modes like sampling or model counting. To this end, this extended version covers the fundamentals of parity constraints as well as the xorro system. Hence, we delve deeper into the examples and the proposed methodology for parity constraints. Finally, we discuss our results by showing the only available representation that satisfies the various properties of the classical logic xor operator and is also consistent with the semantics of parity constraints from xorro. KW - answer set programming KW - combinatorial optimization problems KW - parity aggregate operator Y1 - 2020 U6 - https://doi.org/10.1016/j.entcs.2020.10.004 SN - 1571-0661 VL - 354 SP - 29 EP - 44 PB - Elsevier CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Hollmann, Susanne A1 - Frohme, Marcus A1 - Endrullat, Christoph A1 - Kremer, Andreas A1 - D’Elia, Domenica A1 - Regierer, Babette A1 - Nechyporenko, Alina T1 - Ten simple rules on how to write a standard operating procedure JF - PLOS Computational Biology N2 - Research publications and data nowadays should be publicly available on the internet and, theoretically, usable for everyone to develop further research, products, or services. The long-term accessibility of research data is, therefore, fundamental in the economy of the research production process. However, the availability of data by itself is not sufficient; their quality must also be verifiable. Measures to ensure reuse and reproducibility need to include the entire research life cycle, from the experimental design to the generation of data, quality control, statistical analysis, interpretation, and validation of the results. Hence, high-quality records, particularly those providing a chain of documents establishing the verifiable origin of data, are essential elements that can act as a certificate for potential users (customers).
These records also improve the traceability and transparency of data and processes, thereby improving the reliability of results. Standards for data acquisition, analysis, and documentation have been fostered in the last decade, driven by grassroots initiatives of researchers and organizations such as the Research Data Alliance (RDA). Nevertheless, what is still largely missing in academic life science research are agreed procedures for complex routine research workflows. Here, well-crafted documentation such as standard operating procedures (SOPs) offers clear directions and instructions specifically designed to avoid deviations, an absolute necessity for reproducibility. Therefore, this paper provides a standardized workflow that explains step by step how to write an SOP, to be used as a starting point for appropriate research documentation. Y1 - 2020 VL - 16 IS - 9 PB - PLOS CY - San Francisco ER - TY - JOUR A1 - Stede, Manfred T1 - From connectives to coherence relations BT - a case study of German contrastive connectives JF - Revue roumaine de linguistique : RRL = Romanian review of linguistics N2 - The notion of coherence relations is quite widely accepted in general, but concrete proposals differ considerably on the questions of how they should be motivated, which relations are to be assumed, and how they should be defined. This paper takes a "bottom-up" perspective by assessing the contribution made by linguistic signals (connectives), using insights from the relevant literature as well as verification by practical text annotation. We work primarily with the German language here and focus on the realm of contrast. Thus, we suggest a new inventory of contrastive connective functions and discuss their relationship to contrastive coherence relations that have been proposed in earlier work. KW - coherence relation KW - connective KW - contrast KW - concession KW - corpus analysis Y1 - 2020 SN - 0035-3957 VL - 65 IS - 3 SP - 213 EP - 233 PB - Ed. Academiei Române CY - Bucureşti ER - TY - JOUR A1 - Tiwari, Abhishek A1 - Prakash, Jyoti A1 - Groß, Sascha A1 - Hammer, Christian T1 - A large scale analysis of Android BT - Web hybridization JF - The journal of systems and software N2 - Many Android applications embed webpages via WebView components and execute JavaScript code within Android. Hybrid applications leverage dedicated APIs to load a resource and render it in a WebView. Furthermore, Android objects can be shared with the JavaScript world. However, bridging the interfaces of the Android and JavaScript worlds might also incur severe security threats: potentially untrusted webpages and their JavaScript might interfere with the Android environment and its access to native features. No general analysis is currently available to assess the implications of such hybrid apps bridging the two worlds. To understand the semantics and effects of hybrid apps, we perform a large-scale study on the usage of the hybridization APIs in the wild. We analyze and categorize the parameters to hybridization APIs for 7,500 randomly selected and the 196 most popular applications from the Google Playstore, as well as 1000 malware samples. Our results advance the general understanding of hybrid applications and the implications for potential program analyses, as well as the current security situation: we discovered thousands of flows of sensitive data from Android to JavaScript, the vast majority of which could flow to potentially untrustworthy code.
Our analysis identified numerous web pages embedding vulnerabilities, some of which we exploited as examples. Additionally, we discovered a multitude of applications, both benign and malicious, in which potentially untrusted JavaScript code may interfere with (trusted) Android objects. KW - Android hybrid apps KW - static analysis KW - information flow control Y1 - 2020 U6 - https://doi.org/10.1016/j.jss.2020.110775 SN - 0164-1212 SN - 1873-1228 VL - 170 PB - Elsevier CY - New York ER -
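To make the hybridization mechanism discussed in the preceding abstract concrete, the following minimal Android sketch shows how an application object can be shared with the JavaScript world through a WebView, using the standard Android APIs the abstract refers to (addJavascriptInterface and the @JavascriptInterface annotation). The class names, the bridge name "device", the returned value and the URL are illustrative assumptions, not code from the study.

import android.app.Activity;
import android.os.Bundle;
import android.webkit.JavascriptInterface;
import android.webkit.WebView;

public class BridgeDemoActivity extends Activity {

    // Object exposed to page scripts; any public method annotated with
    // @JavascriptInterface becomes callable from JavaScript.
    public static class DeviceBridge {
        @JavascriptInterface
        public String getDeviceId() {
            // Returning sensitive data here creates exactly the kind of
            // Android-to-JavaScript flow the study measures.
            return "device-1234"; // placeholder value, not a real identifier
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true);
        // Hybridization API: shares the Android object with the JavaScript
        // world under the global name "device".
        webView.addJavascriptInterface(new DeviceBridge(), "device");
        // A remote page loaded here means potentially untrusted JavaScript
        // can now call device.getDeviceId().
        webView.loadUrl("https://example.com"); // illustrative URL
        setContentView(webView);
    }
}

In a loaded page, window.device.getDeviceId() would then return the bridged value, which illustrates why flows from Android objects to potentially untrustworthy web code are a central concern of the study.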