TY - THES A1 - Makowski, Silvia T1 - Discriminative Models for Biometric Identification using Micro- and Macro-Movements of the Eyes N2 - Human visual perception is an active process. Eye movements either alternate between fixations and saccades or follow a smooth pursuit movement in the case of moving targets. Besides these macroscopic gaze patterns, the eyes perform involuntary micro-movements during fixations, which are commonly categorized into micro-saccades, drift and tremor. Eye movements are frequently studied in cognitive psychology because they reflect a complex interplay of perception, attention and oculomotor control. A common insight of psychological research is that macro-movements are highly individual. Inspired by this finding, there has been a considerable amount of prior research on oculomotoric biometric identification. However, the accuracy of known approaches is too low and the time needed for identification is too long for any practical application. This thesis explores discriminative models for the task of biometric identification. Discriminative models optimize a quality measure of the predictions and are usually superior to generative approaches in discriminative tasks. However, using discriminative models requires selecting a suitable representation of sequential eye gaze data, e.g., by engineering features or constructing a sequence kernel, and the performance of the classification model strongly depends on this representation. We study two fundamentally different ways of representing eye gaze within a discriminative framework. In the first part of this thesis, we explore the integration of data and psychological background knowledge in the form of generative models to construct representations. To this end, we first develop generative statistical models of gaze behavior during reading and scene viewing that account for viewer-specific distributional properties of gaze patterns. In a second step, we develop a discriminative identification model by deriving Fisher kernel functions from these and several baseline models. We find that an SVM with Fisher kernel is able to reliably identify users based on their eye gaze during reading and scene viewing. However, since the generative models are constrained to use low-frequency macro-movements, they discard a significant amount of information contained in the raw eye tracking signal at a high cost: identification requires about one minute of recorded input, which makes it inapplicable for real-world biometric systems. In the second part of this thesis, we study a purely data-driven modeling approach. Here, we aim at automatically discovering the individual pattern hidden in the raw eye tracking signal. To this end, we develop a deep convolutional neural network, DeepEyedentification, that processes yaw and pitch gaze velocities and learns a representation end-to-end. Compared to prior work, this model increases the identification accuracy by one order of magnitude and reduces the time to identification to only seconds. The DeepEyedentificationLive model further improves upon the identification performance by processing binocular input, and it also detects presentation attacks. We find that by learning a representation, the performance of oculomotoric identification and presentation-attack detection can be driven close to practical relevance for biometric applications.
Eye tracking devices with high sampling frequency and precision are expensive, and the applicability of eye movements as a biometric feature heavily depends on the cost of recording devices. In the last part of this thesis, we therefore study the requirements on data quality by evaluating the performance of the DeepEyedentificationLive network under reduced spatial and temporal resolution. We find that the method still attains a high identification accuracy at a temporal resolution of only 250 Hz and a precision of 0.03 degrees. Reducing both does not have an additive deteriorating effect. KW - Machine Learning Y1 - 2021 ER - TY - JOUR A1 - Schirrmann, Michael A1 - Landwehr, Niels A1 - Giebel, Antje A1 - Garz, Andreas A1 - Dammer, Karl-Heinz T1 - Early detection of stripe rust in winter wheat using deep residual neural networks JF - Frontiers in plant science : FPLS N2 - Stripe rust (Pst) is a major disease of wheat crops that, if left untreated, leads to severe yield losses. The use of fungicides is often essential to control Pst when sudden outbreaks are imminent. Sensors capable of detecting Pst in wheat crops could optimize the use of fungicides and improve disease monitoring in high-throughput field phenotyping. Now, deep learning provides new tools for image recognition and may pave the way for new camera-based sensors that can identify symptoms in the early stages of a disease outbreak within the field. The aim of this study was to teach an image classifier to detect Pst symptoms in winter wheat canopies based on a deep residual neural network (ResNet). For this purpose, a large annotation database was created from images taken by a standard RGB camera that was mounted on a platform at a height of 2 m. Images were acquired while the platform was moved over a randomized field experiment with Pst-inoculated and Pst-free plots of winter wheat. The image classifier was trained with 224 × 224 px patches tiled from the original, unprocessed camera images and was tested on different stages of the disease outbreak. At patch level, the classifier reached a total accuracy of 90%. At image level, it was evaluated with a sliding window using a large stride of 224 px, allowing for fast test performance, and reached a total accuracy of 77%. Even at a stage of very low disease spread (0.5%), at the very beginning of the Pst outbreak, a detection accuracy of 57% was obtained. In the initial phase of the outbreak, with 2 to 4% disease spread, a detection accuracy of 76% was attained. With further optimizations, the image classifier could be implemented in embedded systems and deployed on drones, vehicles or scanning systems for fast mapping of Pst outbreaks.
KW - yellow rust KW - monitoring KW - deep learning KW - wheat crops KW - image recognition KW - camera sensor KW - ResNet KW - smart farming Y1 - 2021 U6 - https://doi.org/10.3389/fpls.2021.469689 SN - 1664-462X VL - 12 PB - Frontiers Media CY - Lausanne ER - TY - JOUR A1 - Gautam, Khem Raj A1 - Zhang, Guoqiang A1 - Landwehr, Niels A1 - Adolphs, Julian T1 - Machine learning for improvement of thermal conditions inside a hybrid ventilated animal building JF - Computers and electronics in agriculture : COMPAG online ; an international journal N2 - In buildings with hybrid ventilation, natural ventilation opening positions (windows), mechanical ventilation rates, heating, and cooling are manipulated to maintain desired thermal conditions. The indoor temperature is regulated solely by ventilation (natural and mechanical) when the external conditions are favorable, in order to save heating and cooling energy. The ventilation parameters are determined by a rule-based control scheme, which is not optimal. This study proposes a methodology to enable real-time optimal control of ventilation parameters. We developed offline prediction models to estimate future thermal conditions from data collected from the building in operation. The developed offline model is then used to find the optimal controllable ventilation parameters in real time to minimize the setpoint deviation in the building. With the proposed methodology, the experimental building's setpoint deviation improved for 87% of the time, on average by 0.53 °C, compared to the current deviations. KW - Animal building KW - Natural ventilation KW - Automatically controlled windows KW - Machine learning KW - Optimization Y1 - 2021 U6 - https://doi.org/10.1016/j.compag.2021.106259 SN - 0168-1699 SN - 1872-7107 VL - 187 PB - Elsevier Science CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Camargo, Tibor de A1 - Schirrmann, Michael A1 - Landwehr, Niels A1 - Dammer, Karl-Heinz A1 - Pflanz, Michael T1 - Optimized deep learning model as a basis for fast UAV mapping of weed species in winter wheat crops JF - Remote sensing / Molecular Diversity Preservation International (MDPI) N2 - Weed maps should be available quickly, reliably, and with high detail to be useful for site-specific management in crop protection and to promote more sustainable agriculture by reducing pesticide use. Here, the optimization of a deep residual convolutional neural network (ResNet-18) for the classification of weed and crop plants in UAV imagery is proposed. The target was to reach sufficient performance on an embedded system while maintaining the same features of the ResNet-18 model as a basis for fast UAV mapping. This would enable online recognition and subsequent mapping of weeds during UAV flight operations. Optimization was achieved mainly by avoiding redundant computations that arise when a classification model is applied to overlapping tiles in a larger input image. The model was trained and tested with imagery obtained from a UAV flight campaign at low altitude over a winter wheat field, and classification was performed at species level with the weed species Matricaria chamomilla L., Papaver rhoeas L., Veronica hederifolia L., and Viola arvensis ssp. arvensis observed in that field. The ResNet-18 model with the optimized image-level prediction pipeline reached a performance of 2.2 frames per second with an NVIDIA Jetson AGX Xavier on the full-resolution UAV image, which would amount to an area output of about 1.78 ha h⁻¹ for continuous field mapping.
The overall accuracy for determining crop, soil, and weed species was 94%. There were some limitations in the detection of species unknown to the model. When shifting from 16-bit to 32-bit model precision, no improvement in classification accuracy was observed, but speed declined strongly, especially when a higher number of filters was used in the ResNet-18 model. Future work should be directed towards the integration of the mapping process on UAV platforms, guiding UAVs autonomously for mapping purposes, and ensuring the transferability of the models to other crop fields. KW - ResNet KW - deep residual networks KW - UAV imagery KW - embedded systems KW - crop KW - monitoring KW - image classification KW - site-specific weed management KW - real-time mapping Y1 - 2021 U6 - https://doi.org/10.3390/rs13091704 SN - 2072-4292 VL - 13 IS - 9 PB - MDPI CY - Basel ER - TY - JOUR A1 - Brede, Nuria A1 - Botta, Nicola T1 - On the correctness of monadic backward induction JF - Journal of functional programming N2 - In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework, and the sources are available as supplementary material. Y1 - 2021 U6 - https://doi.org/10.1017/S0956796821000228 SN - 1469-7653 SN - 0956-7968 VL - 31 PB - Cambridge University Press CY - Cambridge ER - TY - JOUR A1 - Andjelković, Marko A1 - Chen, Junchao A1 - Simevski, Aleksandar A1 - Schrape, Oliver A1 - Krstić, Miloš A1 - Kraemer, Rolf T1 - Monitoring of particle count rate and LET variations with pulse stretching inverters JF - IEEE transactions on nuclear science : a publication of the IEEE Nuclear and Plasma Sciences Society N2 - This study investigates the use of pulse stretching (skew-sized) inverters for monitoring the variation of count rate and linear energy transfer (LET) of energetic particles.
The basic particle detector is a cascade of two pulse stretching inverters, and the required sensing area is obtained by connecting up to 12 two-inverter cells in parallel and employing the required number of parallel arrays. The incident particles are detected as single-event transients (SETs), whereby the SET count rate denotes the particle count rate, while the SET pulsewidth distribution reflects the LET variations. The advantage of the proposed solution is the possibility of sensing LET variations using fully digital processing logic. SPICE simulations conducted on IHP's 130-nm CMOS technology have shown that the SET pulsewidth varies by approximately 550 ps over the LET range from 1 to 100 MeV·cm²·mg⁻¹. The proposed detector is intended for triggering the fault-tolerant mechanisms within a self-adaptive multiprocessing system employed in space. It can be implemented as a standalone detector or integrated on the same chip as the target system. KW - Particle detector KW - pulse stretching inverters KW - single-event transient KW - (SET) count rate KW - SET pulsewidth distribution Y1 - 2021 U6 - https://doi.org/10.1109/TNS.2021.3076400 SN - 0018-9499 SN - 1558-1578 VL - 68 IS - 8 SP - 1772 EP - 1781 PB - Institute of Electrical and Electronics Engineers CY - New York, NY ER - TY - THES A1 - Andjelkovic, Marko T1 - A methodology for characterization, modeling and mitigation of single event transient effects in CMOS standard combinational cells T1 - Eine Methode zur Charakterisierung, Modellierung und Minderung von SET Effekten in kombinierten CMOS-Standardzellen N2 - With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should synergistically address three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection, as a support for dynamic soft error mitigation. Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database.
It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to the state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly. The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on the SET robustness improvement, as well as the introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in the subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design. As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing.
Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are its low detection latency and power consumption, and its immunity to error accumulation. The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results with irradiation experiments. N2 - With the downscaling of the structures of modern CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for integrated circuits (ICs) intended for operation under harsh radiation conditions (e.g., in space). The SET pulses generated in combinational logic can propagate through the circuits and eventually be latched in storage elements (e.g., flip-flops or latches), leading to so-called soft errors and, consequently, to data corruption or system failure. It has therefore become indispensable to address SET effects systematically in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should take both static and dynamic measures into account to ensure the optimal use of the available resources. An efficient soft-error-aware design should thus synergistically address three main aspects: (i) the characterization and modeling of soft errors, (ii) multi-level soft error mitigation and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization, the accuracy of predictive models and the efficiency of mitigation measures are still the critical points. This thesis therefore presents the following original contributions: • A holistic methodology for the SPICE-based characterization of SET effects in standard combinational cells and corresponding gate-level hardening configurations, with a reduced number of simulations and a reduced SET sensitivity database. • Analytical models for SET sensitivity (critical charge, generated SET pulse width and propagated SET pulse width), based on the principle of superposition and fitted to the results of SPICE simulations. • A gate-level SET mitigation approach based on inserting two decoupling cells at the output of a logic gate, demonstrating an increase in critical charge and significant suppression of short SETs. • A comparative characterization of the proposed decoupling-cell SET mitigation technique and seven existing techniques through a quantitative assessment of their impact on the SET robustness of individual logic gates. • A particle detector based on skew-sized pulse stretching inverters for online monitoring of particle flux and LET variations with purely digital readout. The results achieved in this dissertation can serve as a basis for building a comprehensive soft-error-aware database for a given digital library and a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. As a next step, the results obtained from the irradiation experiments will be evaluated.
KW - Single Event Transient KW - radiation hardness design KW - Single Event Transient KW - Strahlungshärte Entwurf Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-534843 ER - TY - JOUR A1 - Schrape, Oliver A1 - Andjelkovic, Marko A1 - Breitenreiter, Anselm A1 - Zeidler, Steffen A1 - Balashov, Alexey A1 - Krstić, Miloš T1 - Design and evaluation of radiation-hardened standard cell flip-flops JF - IEEE transactions on circuits and systems : a publication of the IEEE Circuits and Systems Society: 1, Regular papers N2 - Use of a standard non-rad-hard digital cell library in rad-hard design can be a cost-effective solution for space applications. In this paper, we demonstrate how a standard non-rad-hard flip-flop, as one of the most vulnerable digital cells, can be converted into a rad-hard flip-flop without modifying its internal structure. We present five variants of a Triple Modular Redundancy (TMR) flip-flop: a baseline TMR flip-flop, a latch-based TMR flip-flop, a True Single-Phase Clock (TSPC) TMR flip-flop, a scannable TMR flip-flop and a self-correcting TMR flip-flop. For all variants, multi-bit upsets have been addressed by applying special placement constraints, while Single Event Transient (SET) mitigation was achieved through the use of customized SET filters and the selection of optimal inverter sizes for the clock and reset trees. The proposed flip-flop variants differ in performance, thus making it possible to choose the optimal solution for every sensitive node in the circuit according to the predefined design constraints. Several flip-flop designs have been validated on IHP's 130-nm BiCMOS process by irradiating custom-designed shift registers. It has been shown that the proposed TMR flip-flops are robust to soft errors with a threshold Linear Energy Transfer (LET) from 32.4 MeV·cm²/mg to 62.5 MeV·cm²/mg, depending on the variant. KW - Single event effect KW - fault tolerance KW - triple modular redundancy KW - ASIC KW - design flow KW - radhard design Y1 - 2021 U6 - https://doi.org/10.1109/TCSI.2021.3109080 SN - 1549-8328 SN - 1558-0806 SN - 1057-7122 VL - 68 IS - 11 SP - 4796 EP - 4809 PB - Inst. of Electr. and Electronics Engineers CY - New York, NY ER - TY - JOUR A1 - Tavakoli, Hamad A1 - Alirezazadeh, Pendar A1 - Hedayatipour, Ava A1 - Nasib, A. H. Banijamali A1 - Landwehr, Niels T1 - Leaf image-based classification of some common bean cultivars using discriminative convolutional neural networks JF - Computers and electronics in agriculture : COMPAG online ; an international journal N2 - In recent years, many efforts have been made to apply image processing techniques to plant leaf identification. However, because of the very low inter-class variability, categorizing leaf images at the cultivar/variety level is still a challenging task. In this research, we propose an automatic discriminative method based on convolutional neural networks (CNNs) for classifying 12 different cultivars of common beans that belong to three different species. We show that employing advanced loss functions, such as Additive Angular Margin Loss and Large Margin Cosine Loss, instead of the standard softmax loss function for the classification can yield better discrimination between classes and thereby mitigate the problem of low inter-class variability. The method was evaluated by classifying species (level I), cultivars from the same species (level II), and cultivars from different species (level III), based on images of the leaf foreside and backside.
The results indicate that the performance of the classification algorithm on the leaf backside image dataset is superior. The maximum mean classification accuracies of 95.86, 91.37 and 86.87% were obtained at levels I, II and III, respectively. The proposed method outperforms the previous relevant works and provides a reliable approach for plant cultivar identification. KW - Bean KW - Plant identification KW - Digital image analysis KW - VGG16 KW - Loss KW - functions Y1 - 2021 U6 - https://doi.org/10.1016/j.compag.2020.105935 SN - 0168-1699 SN - 1872-7107 VL - 181 PB - Elsevier CY - Amsterdam [u.a.] ER - TY - THES A1 - Ashouri, Mohammadreza T1 - TrainTrap BT - a hybrid technique for vulnerability analysis in JAVA Y1 - 2020 ER - TY - JOUR A1 - Everardo Pérez, Flavio Omar A1 - Osorio, Mauricio T1 - Towards an answer set programming methodology for constructing programs following a semi-automatic approach BT - extended and revised version JF - Electronic notes in theoretical computer science N2 - Answer Set Programming (ASP) is a successful rule-based formalism for modeling and solving knowledge-intense combinatorial (optimization) problems. Despite its success in both academia and industry, open challenges such as automatic source code optimization and software engineering remain. This is because a problem encoded in ASP might not have the desired solving performance compared to an equivalent representation. Motivated by these two challenges, this paper makes three main contributions. First, we propose a development process towards a methodology for implementing ASP programs, faithful to existing methods. Second, we present ASP encodings that serve as the basis for the development process. Third, we demonstrate the use of ASP to reverse the standard solving process. That is, knowing the answer sets in advance, together with desired strong-equivalence properties, we exhaustively reconstruct ASP programs if they exist. This paper was originally motivated by the search for propositional formulas (if they exist) that represent the semantics of a new aggregate operator, in particular a parity aggregate. This aggregate improves on the existing parity (xor) constraints from xorro, which lack expressiveness, even though these constraints fit perfectly with reasoning modes like sampling or model counting. To this end, this extended version covers the fundamentals of parity constraints as well as the xorro system. Hence, we delve a little deeper into the examples and the proposed methodology over parity constraints. Finally, we discuss our results by showing the only available representation that satisfies the properties of the classical-logic xor operator and is also consistent with the semantics of parity constraints from xorro. KW - answer set programming KW - combinatorial optimization problems KW - parity aggregate operator Y1 - 2020 U6 - https://doi.org/10.1016/j.entcs.2020.10.004 SN - 1571-0661 VL - 354 SP - 29 EP - 44 PB - Elsevier CY - Amsterdam [u.a.] ER - TY - JOUR A1 - Hollmann, Susanne A1 - Frohme, Marcus A1 - Endrullat, Christoph A1 - Kremer, Andreas A1 - D’Elia, Domenica A1 - Regierer, Babette A1 - Nechyporenko, Alina T1 - Ten simple rules on how to write a standard operating procedure JF - PLOS Computational Biology N2 - Research publications and data nowadays should be publicly available on the internet and, theoretically, usable for everyone to develop further research, products, or services.
The long-term accessibility of research data is, therefore, fundamental in the economy of the research production process. However, the availability of data is not sufficient by itself; their quality must also be verifiable. Measures to ensure reuse and reproducibility need to include the entire research life cycle, from the experimental design to the generation of data, quality control, statistical analysis, interpretation, and validation of the results. Hence, high-quality records, particularly for providing a string of documents for the verifiable origin of data, are essential elements that can act as a certificate for potential users (customers). These records also improve the traceability and transparency of data and processes, thereby improving the reliability of results. Standards for data acquisition, analysis, and documentation have been fostered in the last decade, driven by grassroots initiatives of researchers and organizations such as the Research Data Alliance (RDA). Nevertheless, what is still largely missing in life science academic research are agreed procedures for complex routine research workflows. Here, well-crafted documentation like standard operating procedures (SOPs) offers clear direction and instructions specifically designed to avoid deviations, as an absolute necessity for reproducibility. Therefore, this paper provides a standardized workflow that explains step by step how to write an SOP to be used as a starting point for appropriate research documentation. Y1 - 2020 VL - 16 IS - 9 PB - PLOS CY - San Francisco ER - TY - JOUR A1 - Stede, Manfred T1 - From connectives to coherence relations BT - a case study of German contrastive connectives JF - Revue roumaine de linguistique : RRL = Romanian review of linguistics N2 - The notion of coherence relations is quite widely accepted in general, but concrete proposals differ considerably on the questions of how they should be motivated, which relations are to be assumed, and how they should be defined. This paper takes a "bottom-up" perspective by assessing the contribution made by linguistic signals (connectives), using insights from the relevant literature as well as verification by practical text annotation. We work primarily with the German language here and focus on the realm of contrast. Thus, we suggest a new inventory of contrastive connective functions and discuss their relationship to the contrastive coherence relations that have been proposed in earlier work. KW - coherence relation KW - connective KW - contrast KW - concession KW - corpus analysis Y1 - 2020 SN - 0035-3957 VL - 65 IS - 3 SP - 213 EP - 233 PB - Ed. Academiei Române CY - Bucureşti ER - TY - JOUR A1 - Tiwari, Abhishek A1 - Prakash, Jyoti A1 - Groß, Sascha A1 - Hammer, Christian T1 - A large scale analysis of Android BT - Web hybridization JF - The journal of systems and software N2 - Many Android applications embed webpages via WebView components and execute JavaScript code within Android. Hybrid applications leverage dedicated APIs to load a resource and render it in a WebView. Furthermore, Android objects can be shared with the JavaScript world. However, bridging the interfaces of the Android and JavaScript worlds might also incur severe security threats: potentially untrusted webpages and their JavaScript might interfere with the Android environment and its access to native features. No general analysis is currently available to assess the implications of such hybrid apps bridging the two worlds.
To understand the semantics and effects of hybrid apps, we perform a large-scale study on the usage of the hybridization APIs in the wild. We analyze and categorize the parameters to hybridization APIs for 7,500 randomly selected and the 196 most popular applications from the Google Play Store, as well as 1000 malware samples. Our results advance the general understanding of hybrid applications, as well as the implications for potential program analyses and the current security situation: we discovered thousands of flows of sensitive data from Android to JavaScript, the vast majority of which could flow to potentially untrustworthy code. Our analysis identified numerous web pages embedding vulnerabilities, which we exploited by way of example. Additionally, we discovered a multitude of applications, both benign and malign, in which potentially untrusted JavaScript code may interfere with (trusted) Android objects. KW - Android hybrid apps KW - static analysis KW - information flow control Y1 - 2020 U6 - https://doi.org/10.1016/j.jss.2020.110775 SN - 0164-1212 SN - 1873-1228 VL - 170 PB - Elsevier CY - New York ER - TY - JOUR A1 - Bordihn, Henning A1 - Vaszil, György T1 - Deterministic Lindenmayer systems with dynamic control of parallelism JF - International journal of foundations of computer science N2 - M-rate 0L systems are interactionless Lindenmayer systems together with a function assigning to every string a set of multisets of productions that may be applied simultaneously to the string. Some questions that have been left open in the forerunner papers are examined, and the computational power of deterministic M-rate 0L systems is investigated, also taking tabled and extended variants into consideration. KW - parallel rewriting KW - Lindenmayer systems KW - restricted parallelism KW - determinism KW - developmental systems KW - formal languages Y1 - 2019 U6 - https://doi.org/10.1142/S0129054120400031 SN - 0129-0541 SN - 1793-6373 VL - 31 IS - 1 SP - 37 EP - 51 PB - World Scientific CY - Singapore ER - TY - GEN A1 - Hollmann, Susanne A1 - Frohme, Marcus A1 - Endrullat, Christoph A1 - Kremer, Andreas A1 - D’Elia, Domenica A1 - Regierer, Babette A1 - Nechyporenko, Alina T1 - Ten simple rules on how to write a standard operating procedure T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Research publications and data nowadays should be publicly available on the internet and, theoretically, usable for everyone to develop further research, products, or services. The long-term accessibility of research data is, therefore, fundamental in the economy of the research production process. However, the availability of data is not sufficient by itself; their quality must also be verifiable. Measures to ensure reuse and reproducibility need to include the entire research life cycle, from the experimental design to the generation of data, quality control, statistical analysis, interpretation, and validation of the results. Hence, high-quality records, particularly for providing a string of documents for the verifiable origin of data, are essential elements that can act as a certificate for potential users (customers). These records also improve the traceability and transparency of data and processes, thereby improving the reliability of results.
Standards for data acquisition, analysis, and documentation have been fostered in the last decade, driven by grassroots initiatives of researchers and organizations such as the Research Data Alliance (RDA). Nevertheless, what is still largely missing in life science academic research are agreed procedures for complex routine research workflows. Here, well-crafted documentation like standard operating procedures (SOPs) offers clear direction and instructions specifically designed to avoid deviations, as an absolute necessity for reproducibility. Therefore, this paper provides a standardized workflow that explains step by step how to write an SOP to be used as a starting point for appropriate research documentation. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1201 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-525877 SN - 1866-8372 IS - 9 ER - TY - JOUR A1 - Bordihn, Henning A1 - Mitrana, Victor T1 - On the degrees of non-regularity and non-context-freeness JF - Journal of computer and system sciences N2 - We study the derivational complexity of context-free and context-sensitive grammars by counting the maximal number of non-regular and non-context-free rules used in a derivation, respectively. The degree of non-regularity/non-context-freeness of a language is the minimum degree of non-regularity/non-context-freeness of the context-free/context-sensitive grammars generating it. A language has a finite degree of non-regularity iff it is regular. We give a condition for deciding whether the degree of non-regularity of a given unambiguous context-free grammar is finite. The problem becomes undecidable for arbitrary linear context-free grammars. The degree of non-regularity of unambiguous context-free grammars generating non-regular languages, as well as that of grammars generating deterministic context-free languages that are not regular, is of order Ω(n). Context-free non-regular languages of sublinear degree of non-regularity are presented. A language has a finite degree of non-context-freeness if it is context-free. Context-sensitive grammars with a quadratic degree of non-context-freeness are more powerful than those of a linear degree. KW - context-free grammar KW - degree of non-regularity KW - context-sensitive KW - grammar KW - degree of non-context-freeness Y1 - 2020 U6 - https://doi.org/10.1016/j.jcss.2019.09.003 SN - 0022-0000 SN - 1090-2724 VL - 108 SP - 104 EP - 117 PB - Elsevier CY - San Diego, Calif. [u.a.] ER - TY - JOUR A1 - Gebser, Martin A1 - Janhunen, Tomi A1 - Rintanen, Jussi T1 - Declarative encodings of acyclicity properties JF - Journal of logic and computation N2 - Many knowledge representation tasks involve trees or similar structures as abstract datatypes. However, devising compact and efficient declarative representations of such structural properties is non-obvious and can be challenging indeed. In this article, we take a number of acyclicity properties into consideration and investigate various logic-based approaches to encoding them. We use answer set programming as the primary representation language but also consider mappings to related formalisms, such as propositional logic, difference logic and linear programming. We study the compactness of encodings and the resulting computational performance on benchmarks involving acyclic or tree structures.
KW - acyclicity properties KW - logic-based modeling KW - answer set programming KW - satisfiability Y1 - 2015 U6 - https://doi.org/10.1093/logcom/exv063 SN - 0955-792X SN - 1465-363X VL - 30 IS - 4 SP - 923 EP - 952 PB - Oxford Univ. Press CY - Eynsham, Oxford ER - TY - GEN A1 - Xenikoudakis, Georgios A1 - Ahmed, Mayeesha A1 - Harris, Jacob Colt A1 - Wadleigh, Rachel A1 - Paijmans, Johanna L. A. A1 - Hartmann, Stefanie A1 - Barlow, Axel A1 - Lerner, Heather A1 - Hofreiter, Michael T1 - Ancient DNA reveals twenty million years of aquatic life in beavers T2 - Current biology : CB N2 - Xenikoudakis et al. report a partial mitochondrial genome of the extinct giant beaver Castoroides and estimate the origin of aquatic behavior in beavers at approximately 20 million years ago. This time estimate coincides with the extinction of terrestrial beavers and raises the question of whether the two events had a common cause. Y1 - 2020 U6 - https://doi.org/10.1016/j.cub.2019.12.041 SN - 0960-9822 SN - 1879-0445 VL - 30 IS - 3 SP - R110 EP - R111 PB - Current Biology Ltd. CY - London ER - TY - JOUR A1 - Gebser, Martin A1 - Maratea, Marco A1 - Ricca, Francesco T1 - The Seventh Answer Set Programming Competition BT - design and results JF - Theory and practice of logic programming N2 - Answer Set Programming (ASP) is a prominent knowledge representation language with roots in logic programming and non-monotonic reasoning. Biennial ASP competitions are organized in order to furnish challenging benchmark collections and assess the advancement of the state of the art in ASP solving. In this paper, we report on the design and results of the Seventh ASP Competition, jointly organized by the University of Calabria (Italy), the University of Genova (Italy), and the University of Potsdam (Germany), in affiliation with the 14th International Conference on Logic Programming and Non-Monotonic Reasoning (LPNMR 2017). KW - Answer Set Programming KW - competition Y1 - 2019 U6 - https://doi.org/10.1017/S1471068419000061 SN - 1471-0684 SN - 1475-3081 VL - 20 IS - 2 SP - 176 EP - 204 PB - Cambridge Univ. Press CY - Cambridge [u.a.] ER - TY - JOUR A1 - Cabalar, Pedro A1 - Dieguez, Martin A1 - Schaub, Torsten H. A1 - Schuhmann, Anna T1 - Towards metric temporal answer set programming JF - Theory and practice of logic programming N2 - We elaborate upon the theoretical foundations of a metric temporal extension of Answer Set Programming. In analogy to previous extensions of ASP with constructs from Linear Temporal and Dynamic Logic, we accomplish this in the setting of the logic of Here-and-There and its non-monotonic extension, called Equilibrium Logic. More precisely, we develop our logic on the same semantic underpinnings as its predecessors and thus use a simple time domain of bounded time steps. This allows us to compare all variants in a uniform framework and ultimately combine them in a common implementation. Y1 - 2020 U6 - https://doi.org/10.1017/S1471068420000307 SN - 1471-0684 SN - 1475-3081 VL - 20 IS - 5 SP - 783 EP - 798 PB - Cambridge Univ. Press CY - Cambridge [u.a.] ER - TY - JOUR A1 - Fandinno, Jorge A1 - Lifschitz, Vladimir A1 - Lühne, Patrick A1 - Schaub, Torsten H. T1 - Verifying tight logic programs with Anthem and Vampire JF - Theory and practice of logic programming N2 - This paper continues the line of research aimed at investigating the relationship between logic programs and first-order theories.
We extend the definition of program completion to programs with input and output in a subset of the input language of the ASP grounder gringo, study the relationship between stable models and completion in this context, and describe preliminary experiments with the use of two software tools, anthem and vampire, for verifying the correctness of programs with input and output. Proofs of theorems are based on a lemma that relates the semantics of programs studied in this paper to stable models of first-order formulas. Y1 - 2020 U6 - https://doi.org/10.1017/S1471068420000344 SN - 1471-0684 SN - 1475-3081 VL - 20 IS - 5 SP - 735 EP - 750 PB - Cambridge Univ. Press CY - Cambridge [u.a.] ER - TY - GEN A1 - Sahlmann, Kristina A1 - Clemens, Vera A1 - Nowak, Michael A1 - Schnor, Bettina T1 - MUP BT - Simplifying Secure Over-The-Air Update with MQTT for Constrained IoT Devices T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Message Queuing Telemetry Transport (MQTT) is one of the dominating protocols for edge- and cloud-based Internet of Things (IoT) solutions. When a security vulnerability of an IoT device is known, it has to be fixed as soon as possible. This requires a firmware update procedure. In this paper, we propose a secure update protocol for MQTT-connected devices that ensures the freshness of the firmware, authenticates the new firmware and takes constrained devices into account. We show that the update protocol is easy to integrate into an MQTT-based IoT network using a semantic approach. The feasibility of our approach is demonstrated by a detailed performance analysis of our prototype implementation on an IoT device with 32 kB of RAM. In the process, we identify design issues in MQTT 5 that can help to improve support for constrained devices. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1094 KW - Internet of Things KW - security KW - firmware update KW - MQTT KW - edge computing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-489013 SN - 1866-8372 IS - 1094 ER - TY - JOUR A1 - Sahlmann, Kristina A1 - Clemens, Vera A1 - Nowak, Michael A1 - Schnor, Bettina T1 - MUP BT - Simplifying Secure Over-The-Air Update with MQTT for Constrained IoT Devices JF - Sensors N2 - Message Queuing Telemetry Transport (MQTT) is one of the dominating protocols for edge- and cloud-based Internet of Things (IoT) solutions. When a security vulnerability of an IoT device is known, it has to be fixed as soon as possible. This requires a firmware update procedure. In this paper, we propose a secure update protocol for MQTT-connected devices that ensures the freshness of the firmware, authenticates the new firmware and takes constrained devices into account. We show that the update protocol is easy to integrate into an MQTT-based IoT network using a semantic approach. The feasibility of our approach is demonstrated by a detailed performance analysis of our prototype implementation on an IoT device with 32 kB of RAM. In the process, we identify design issues in MQTT 5 that can help to improve support for constrained devices.
KW - Internet of Things KW - security KW - firmware update KW - MQTT KW - edge computing Y1 - 2020 U6 - https://doi.org/10.3390/s21010010 SN - 1424-8220 VL - 21 IS - 1 PB - MDPI CY - Basel ER - TY - JOUR A1 - Bordihn, Henning A1 - Mitrana, Victor A1 - Paun, Andrei A1 - Paun, Mihaela T1 - Hairpin completions and reductions BT - semilinearity properties JF - Natural computing : an innovative journal bridging biosciences and computer sciences ; an international journal N2 - This paper is part of the investigation of some operations on words and languages with motivations coming from DNA biochemistry, namely three variants of hairpin completion and three variants of hairpin reduction. Since not all the hairpin completions or reductions of semilinear languages remain semilinear, we study sufficient conditions for semilinear languages to preserve their semilinearity property after applying the non-iterated hairpin completion or hairpin reduction. A similar approach is then applied to the iterated variants of these operations. Along these lines, we define the hairpin reduction root of a language and show that the hairpin reduction root of a semilinear language is not necessarily semilinear, except for the universal language. A few open problems are finally discussed. KW - DNA hairpin formation KW - Hairpin completions KW - Hairpin reductions KW - Semilinearity property Y1 - 2020 U6 - https://doi.org/10.1007/s11047-020-09797-0 SN - 1572-9796 VL - 20 IS - 2 SP - 193 EP - 203 PB - Springer Science + Business Media B.V. CY - Dordrecht ER - TY - JOUR A1 - Schneider, Jan Niklas A1 - Brick, Timothy R. A1 - Dziobek, Isabel T1 - Distance to the neutral face predicts arousal ratings of dynamic facial expressions in individuals with and without Autism Spectrum Disorder JF - Frontiers in psychology N2 - Arousal is one of the dimensions of core affect and is frequently used to describe experienced or observed emotional states. While arousal ratings of facial expressions are collected in many studies, it is not well understood how arousal is displayed in or interpreted from facial expressions. In the context of socioemotional disorders such as Autism Spectrum Disorder, this poses the question of a differential use of facial information for arousal perception. In this study, we demonstrate how automated face-tracking tools can be used to extract predictors of arousal judgments. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. Based on these results, we tested two measures, average distance to the neutral face and average facial movement speed, within and between neurotypical individuals (N = 401) and individuals with autism (N = 19). Distance to the neutral face was predictive of arousal in both groups. Lower mean arousal ratings were found for the autistic group, but no difference between groups was found in the correlation of the measures with arousal ratings. Results were replicated in a group with high autistic traits. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, emphasizing the specificity of our tested measures. Distance and speed predictors share variability, and thus speed should not be discarded as a predictor of arousal ratings.
KW - arousal KW - face tracking KW - facial expression KW - autism KW - perception KW - perception differences KW - measure development Y1 - 2020 U6 - https://doi.org/10.3389/fpsyg.2020.577494 SN - 1664-1078 VL - 11 PB - Frontiers Research Foundation CY - Lausanne ER - TY - JOUR A1 - Cabalar, Pedro A1 - Fandinno, Jorge A1 - Garea, Javier A1 - Romero, Javier A1 - Schaub, Torsten H. T1 - Eclingo BT - a solver for epistemic logic programs JF - Theory and practice of logic programming N2 - We describe eclingo, a solver for epistemic logic programs under Gelfond 1991 semantics, built upon the Answer Set Programming system clingo. The input language of eclingo uses the syntax extension capabilities of clingo to define subjective literals that, as usual in epistemic logic programs, allow for checking the truth of a regular literal in all or in some of the answer sets of a program. The eclingo solving process follows a guess-and-check strategy. It first generates potential truth values for subjective literals and, in a second step, it checks the obtained result with respect to the cautious and brave consequences of the program. This process is implemented using the multi-shot functionalities of clingo. We have also implemented some optimisations, aiming at reducing the search space and, therefore, increasing eclingo's efficiency in some scenarios. Finally, we compare the efficiency of eclingo with two state-of-the-art solvers for epistemic logic programs on a pair of benchmark scenarios and show that eclingo generally outperforms them. KW - Answer Set Programming KW - Epistemic Logic Programs KW - Non-Monotonic KW - Reasoning KW - Conformant Planning Y1 - 2020 U6 - https://doi.org/10.1017/S1471068420000228 SN - 1471-0684 SN - 1475-3081 VL - 20 IS - 6 SP - 834 EP - 847 PB - Cambridge Univ. Press CY - New York ER - TY - JOUR A1 - Luther, Laura A1 - Tiberius, Victor A1 - Brem, Alexander T1 - User experience (UX) in business, management, and psychology BT - a bibliometric mapping of the current state of research JF - Multimodal technologies and interaction : open access journal N2 - User Experience (UX) describes the holistic experience of a user before, during, and after interaction with a platform, product, or service. UX adds value and appeal beyond mere functionality and is therefore highly relevant for firms. The increased interest in UX has produced a vast amount of scholarly research since 1983. The research field is, therefore, complex and scattered. Conducting a bibliometric analysis, we aim at structuring the field quantitatively and rather abstractly. We employed citation analyses, co-citation analyses, and content analyses to evaluate the productivity and impact of extant research. We suggest that future research should focus more on business- and management-related topics. KW - bibliometric analysis KW - co-citation analysis KW - co-occurrence analysis KW - citation analysis KW - user experience KW - UX Y1 - 2020 U6 - https://doi.org/10.3390/mti4020018 SN - 2414-4088 VL - 4 IS - 2 PB - MDPI CY - Basel ER - TY - JOUR A1 - Lucke, Ulrike A1 - Hafer, Jörg A1 - Hartmann, Niklas T1 - Strategieentwicklung in der Hochschule als partizipativer Prozess BT - Beispiele und Erkenntnisse JF - Potsdamer Beiträge zur Hochschulforschung N2 - Setting strategic goals, as well as assigning and implementing the corresponding measures, is an essential element in maintaining an organization's capacity for innovation.
In recent years, strategy development has also advanced considerably at universities. This concerns various fields of action, and various approaches are being pursued. Using the University of Potsdam as an example, this article singles out three strategy areas addressed in recent years: IT, e-learning and research data. The associated processes were shaped by participation to varying degrees. The experiences gathered are reflected upon, and recommendations for strategy development processes are derived. KW - Innovation KW - Strategie KW - Partizipation KW - IT-Infrastruktur KW - E-Learning KW - Forschungsdatenmanagement Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-492764 SN - 978-3-86956-498-2 SN - 2192-1075 SN - 2192-1083 IS - 6 SP - 99 EP - 117 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Hempel, Sabrina A1 - Adolphs, Julian A1 - Landwehr, Niels A1 - Willink, Dilya A1 - Janke, David A1 - Amon, Thomas T1 - Supervised machine learning to assess methane emissions of a dairy building with natural ventilation JF - Applied Sciences N2 - A reliable quantification of greenhouse gas emissions is a basis for the development of adequate mitigation measures. Protocols for emission measurements and data analysis approaches to extrapolate to accurate annual emission values are a substantial prerequisite in this context. We systematically analyzed the benefit of supervised machine learning methods for projecting methane emissions from a naturally ventilated cattle building with a concrete solid floor and manure scraper located in Northern Germany. We took into account approximately 40 weeks of hourly emission measurements and compared model predictions using eight regression approaches, 27 different sampling scenarios and four measures of model accuracy. Data normalization was applied based on median and quartile range. A correlation analysis was performed to evaluate the influence of individual features. This indicated only a very weak linear relation between the methane emission and the features that are typically used to predict methane emission values of naturally ventilated barns. It further highlighted the added value of including day-time and squared ambient temperature as features. The error of the predicted emission values was in general below 10%. The results from Gaussian processes, ordinary multilinear regression and neural networks were the least robust. More robust results were obtained with multilinear regression with regularization, support vector machines and particularly the ensemble methods gradient boosting and random forest. The latter had the added value of being rather insensitive to the normalization procedure. In the case of multilinear regression, removing variables without a significant linear relation (i.e., keeping only the day-time component) also led to robust modeling results. We concluded that measurement protocols with 7 days and six measurement periods can be considered sufficient to model methane emissions from the dairy barn with a solid floor with manure scraper, particularly when the periods are distributed over the year with a preference for transition periods. Features should be normalized according to median and quartile range and must be carefully selected depending on the modeling approach.
KW - greenhouse gas KW - on-farm evaluation KW - emission factor KW - regression KW - ensemble methods KW - gradient boosting KW - random forest KW - neural networks KW - support vector machines Y1 - 2020 U6 - https://doi.org/10.3390/app10196938 SN - 2076-3417 VL - 10 IS - 19 PB - MDPI CY - Basel ER - TY - JOUR A1 - Strickroth, Sven A1 - Kiy, Alexander T1 - E-Assessment etablieren BT - Auf dem Weg zu (dezentralen) E-Klausuren JF - Potsdamer Beiträge zur Hochschulforschung N2 - Electronic assessments of learning progress, so-called e-assessments, offer many advantages for teachers and students, e.g., with regard to fast feedback or competence-oriented question formats, and make it possible to take examinations independently of place and time. This article presents the introduction of summative assessments, so-called e-exams, using the University of Potsdam as an example, the establishment of a cross-state initiative for e-assessment, and technical options for decentralized electronic exams. The current status, the goals and the chosen step-by-step implementation strategy of the University of Potsdam are outlined. Building on this, the procedure, the cooperation options for exchanging knowledge and experience, and the challenges of the e-assessment initiative are described. Finally, different forms of e-exams and technical options for implementing complex examination environments are classified, discussed with their characteristic advantages and disadvantages, and an integrated solution is proposed. KW - E-assessment KW - Electronic examinations KW - E-exams KW - Digitalization Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-493036 SN - 978-3-86956-498-2 SN - 2192-1075 SN - 2192-1083 IS - 6 SP - 257 EP - 272 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Lucke, Ulrike A1 - Strickroth, Sven T1 - Digitalisierung in Lehre und Studium BT - Eine hochschulweite Perspektive JF - Potsdamer Beiträge zur Hochschulforschung N2 - The largest of the interdisciplinary projects within Potsdam's Qualitätspakt Lehre programme was concerned with establishing digital media across the university as an integral part of teaching and studying. To this end, the subproject E-Learning in Studienbereichen (eLiS) brought together measures in the fields of organizational, technical and content development. Based on the initial situation and objectives, this article presents the results concerning the digitalization of teaching and studying at the University of Potsdam. Five services, most of which have meanwhile been transferred to regular university operation, are presented in more detail as examples: the video platform Media.UP, the mobile app Reflect.UP, the personal learning environment Campus.UP, the self-service portal Cook.UP and the display system Freiraum.UP. For each of them, a technical look "under the hood" is combined with an explanation of the usage options, which is contrasted with a current assessment by teachers and students of the university. The article concludes by embedding the presented developments in a larger context and giving an outlook on the tasks that still lie ahead.
KW - Digital media KW - E-Learning KW - Personal learning environment KW - E-portfolio KW - Mobile app KW - IT infrastructure Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-493024 SN - 978-3-86956-498-2 SN - 2192-1075 SN - 2192-1083 IS - 6 SP - 235 EP - 255 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Dug, Mehmed A1 - Weidling, Stefan A1 - Sogomonyan, Egor A1 - Jokic, Dejan A1 - Krstić, Miloš T1 - Full error detection and correction method applied on pipelined structure using two approaches JF - Journal of circuits, systems and computers N2 - In this paper, two approaches are evaluated using the Full Error Detection and Correction (FEDC) method for a pipelined structure. The approaches are referred to as Full Duplication with Comparison (FDC) and Concurrent Checking with Parity Prediction (CCPP). The aforementioned approaches represent the borderline cases of the FEDC method, implementing the Error Detection Circuit (EDC) in two manners to protect the combinational logic against soft errors of unspecified duration. The FDC approach implements a full duplication of the combinational circuit, as the most complex and expensive implementation of the FEDC method, whereas the CCPP approach implements only a parity prediction bit for soft error detection, being the simplest and cheapest technique. Both approaches are capable of detecting soft errors in the combinational logic, with single faults being injected into the design. On the one hand, the FDC approach managed to detect and correct all injected faults, while the CCPP approach could not detect multiple faults created at the output of the combinational circuit. On the other hand, the FDC approach leads to higher power consumption and area increase compared to the CCPP approach. KW - Fault tolerance KW - FEDC KW - EDC Y1 - 2020 U6 - https://doi.org/10.1142/S0218126620502187 SN - 0218-1266 SN - 1793-6454 VL - 29 IS - 13 PB - World Scientific CY - Singapore ER - TY - JOUR A1 - Li, Yuanqing A1 - Breitenreiter, Anselm A1 - Andjelkovic, Marko A1 - Chen, Junchao A1 - Babic, Milan A1 - Krstić, Miloš T1 - Double cell upsets mitigation through triple modular redundancy JF - Microelectronics Journal N2 - A triple modular redundancy (TMR) based design technique for double cell upsets (DCUs) mitigation is investigated in this paper. This technique adds three extra self-voter circuits to a traditional TMR structure to enable enhanced error correction capability. Fault-injection simulations show that the soft error rate (SER) of the proposed technique is lower than 3% of that of TMR. The implementation of this proposed technique is compatible with the automatic digital design flow, and its applicability and performance are evaluated on a FIFO circuit. KW - Triple modular redundancy (TMR) KW - Double cell upsets (DCUs) Y1 - 2019 U6 - https://doi.org/10.1016/j.mejo.2019.104683 SN - 0026-2692 SN - 1879-2391 VL - 96 PB - Elsevier CY - Oxford ER - TY - THES A1 - Nordmann, Paul-Patrick T1 - Fehlerkorrektur von Speicherfehlern mit Low-Density-Parity-Check-Codes N2 - Die Fehlerkorrektur in der Codierungstheorie beschäftigt sich mit der Erkennung und Behebung von Fehlern bei der Übertragung und auch Sicherung von Nachrichten. Hierbei wird die Nachricht durch zusätzliche Informationen in ein Codewort kodiert. Diese Kodierungsverfahren besitzen verschiedene Ansprüche, wie zum Beispiel die maximale Anzahl der zu korrigierenden Fehler und die Geschwindigkeit der Korrektur.
Ein gängiges Codierungsverfahren ist der BCH-Code, welcher industriell für bis zu vier Fehler korrigierende Codes Verwendung findet. Ein Nachteil dieser Codes ist die technische Durchlaufzeit für die Berechnung der Fehlerstellen mit zunehmender Codelänge. Die Dissertation stellt ein neues Codierungsverfahren vor, bei dem durch spezielle Anordnung kleinerer Codelängen eines BCH-Codes ein langer Code erzeugt wird. Diese Anordnung geschieht über einen weiteren speziellen Code, einen LDPC-Code, welcher für eine schnellere Fehlererkennung konzipiert ist. Hierfür wird ein neues Konstruktionsverfahren vorgestellt, welches einen Code beliebiger Länge mit vorgebbarer Anzahl der zu korrigierenden Fehler liefert. Das vorgestellte Konstruktionsverfahren erzeugt zusätzlich zum schnellen Verfahren der Fehlererkennung auch eine leichte und schnelle Ableitung eines Verfahrens zur Kodierung der Nachricht zum Codewort. Dies ist in der Literatur für die LDPC-Codes bis zum jetzigen Zeitpunkt einmalig. Aufbauend auf der Konstruktion des LDPC-Codes wird ein Verfahren vorgestellt, wie dieser mit einem BCH-Code kombiniert wird, wodurch eine Anordnung des BCH-Codes in Blöcken erzeugt wird. Neben der allgemeinen Beschreibung dieses Codes wird ein konkreter Code für eine 2-Bitfehlerkorrektur beschrieben. Dieser besteht aus zwei Teilen, welche in verschiedenen Varianten beschrieben und verglichen werden. Für bestimmte Längen des BCH-Codes wird ein Problem bei der Korrektur aufgezeigt, welches einer algebraischen Regel folgt. Der BCH-Code wird sehr allgemein beschrieben, doch existiert durch bestimmte Voraussetzungen ein BCH-Code im engeren Sinne, welcher den Standard vorgibt. Dieser BCH-Code im engeren Sinne wird in dieser Dissertation modifiziert, so dass das algebraische Problem bei der 2-Bitfehlerkorrektur bei der Kombination mit dem LDPC-Code nicht mehr existiert. Es wird gezeigt, dass der neue Code nach der Modifikation weiterhin ein BCH-Code im allgemeinen Sinne ist, welcher 2-Bitfehler korrigieren und 3-Bitfehler erkennen kann. Bei der technischen Umsetzung der Fehlerkorrektur wird des Weiteren gezeigt, dass die Durchlaufzeiten des modifizierten Codes im Vergleich zum BCH-Code schneller sind und weiteres Potential für Verbesserungen besteht. Im letzten Kapitel wird gezeigt, dass sich dieser modifizierte Code beliebiger Länge für die Kombination mit dem LDPC-Code eignet, wodurch dieses Verfahren nicht nur flexibler in der Codelänge zu nutzen ist, sondern durch die schnellere Dekodierung auch weitere Vorteile gegenüber einem BCH-Code im engeren Sinne besitzt. N2 - Error correction in coding theory is concerned with the detection and correction of errors in the transmission and also the securing of messages. For this purpose, a message is coded into a code word by means of additional information. These coding methods have different requirements, such as the maximum number of errors to be corrected and the speed of correction. A common coding method is the BCH code, which is used industrially for codes that can correct up to 4-bit errors. A disadvantage of these codes is the run-time for calculating the error positions, which grows with increasing code length. The dissertation presents a new coding method in which a long code is generated by a special arrangement of smaller code lengths of a BCH code. This arrangement is done by means of another special code, an LDPC code, which is designed for faster fault detection.
For this purpose, a new construction method for LDPC codes is presented, which yields a code of any length with a predeterminable arbitrary number of errors to be corrected. In addition to the fast method of error detection, the presented construction method also yields an easy and fast derivation of a method for coding the message into the code word. This is unique in the literature for LDPC codes up to now. Based on the construction of an LDPC code, a procedure is presented for combining it with a BCH code, whereby the BCH code is arranged in blocks. Besides the general description of this code, a concrete code for a 2-bit error correction is described. It consists of two parts, which are described and compared in different variants. For certain lengths of the BCH code a correction problem is shown, which follows an algebraic rule. The BCH code is described in a very general way, but due to certain conditions a BCH code in a narrower sense exists, which sets the standard. This BCH code in a narrower sense is modified in this dissertation, so that the algebraic problem in 2-bit error correction, when combined with the LDPC code, no longer exists. It is shown that after the modification the new code is still a BCH code in the general sense, which can correct 2-bit errors and detect 3-bit errors. In the technical implementation of the error correction it is further shown that the processing times of the modified code are faster compared to the BCH code and have further potential for improvement. In the last chapter it is shown that the modified code of any length is suitable for combination with the LDPC code, according to the procedure already presented. Thus this procedure, the combination of the modified BCH code with the LDPC code, is not only usable for a wider range of code lengths than the BCH code in the narrower sense, but also offers a further advantage due to the faster decoding with modified BCH codes. T2 - Error correction of memory errors with Low-density parity-check codes KW - Codierungstheorie KW - LDPC-Code KW - BCH-Code KW - BCH code KW - Coding theory KW - LDPC code Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-480480 ER - TY - JOUR A1 - Kuentzer, Felipe A. A1 - Krstić, Miloš T1 - Soft error detection and correction architecture for asynchronous bundled data designs JF - IEEE transactions on circuits and systems N2 - In this paper, an asynchronous design for soft error detection and correction in combinational and sequential circuits is presented. The proposed architecture is called Asynchronous Full Error Detection and Correction (AFEDC). A custom design flow with integrated commercial EDA tools generates the AFEDC using the asynchronous bundled-data design style. The AFEDC relies on an Error Detection Circuit (EDC) for protecting the combinational logic and on fault-tolerant latches for protecting the sequential logic. The EDC can be implemented using different detection methods. For this work, two boundary variants are considered, the Full Duplication with Comparison (FDC) and the Partial Duplication with Parity Prediction (PDPP). The AFEDC architecture can handle single events and timing faults of arbitrarily long duration just as the synchronous FEDC can, but it additionally addresses known metastability issues of the FEDC and other similar synchronous architectures and provides a more practical solution for handling the error recovery process.
Two case studies are developed, a carry look-ahead adder and a pipelined non-restoring array divider. Results show that the AFEDC provides fault coverage equivalent to the FEDC while reducing area by 9.6% to 17.6% and increasing energy efficiency by up to 6.5%. KW - circuit faults KW - latches KW - fault tolerance KW - fault tolerant systems KW - timing KW - clocks KW - transient analysis KW - asynchronous design KW - soft errors KW - transient faults KW - bundled data KW - click controller KW - self-checking KW - concurrent checking KW - DMR KW - TMR Y1 - 2020 U6 - https://doi.org/10.1109/TCSI.2020.2998911 SN - 1549-8328 SN - 1558-0806 VL - 67 IS - 12 SP - 4883 EP - 4894 PB - Institute of Electrical and Electronics Engineers CY - New York ER - TY - JOUR A1 - Prasse, Paul A1 - Knaebel, Rene A1 - Machlica, Lukas A1 - Pevny, Tomas A1 - Scheffer, Tobias T1 - Joint detection of malicious domains and infected clients JF - Machine learning N2 - Detection of malware-infected computers and detection of malicious web domains based on their encrypted HTTPS traffic are challenging problems, because only addresses, timestamps, and data volumes are observable. The detection problems are coupled, because infected clients tend to interact with malicious domains. Traffic data can be collected at a large scale, and antivirus tools can be used to identify infected clients in retrospect. Domains, by contrast, have to be labeled individually after forensic analysis. We explore transfer learning based on sluice networks; this allows the detection models to bootstrap each other. In a large-scale experimental study, we find that the model outperforms known reference models and detects previously unknown malware, previously unknown malware families, and previously unknown malicious domains. KW - Machine learning KW - Neural networks KW - Computer security KW - Traffic data KW - HTTPS traffic Y1 - 2019 U6 - https://doi.org/10.1007/s10994-019-05789-z SN - 0885-6125 SN - 1573-0565 VL - 108 IS - 8-9 SP - 1353 EP - 1368 PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Cabalar, Pedro A1 - Fandinno, Jorge A1 - Schaub, Torsten H. A1 - Schellhorn, Sebastian T1 - Gelfond-Zhang aggregates as propositional formulas JF - Artificial intelligence N2 - Answer Set Programming (ASP) has become a popular and widespread paradigm for practical Knowledge Representation thanks to its expressiveness and the available enhancements of its input language. One such enhancement is the use of aggregates, for which different semantic proposals have been made. In this paper, we show that any ASP aggregate interpreted under Gelfond and Zhang's (GZ) semantics can be replaced (under strong equivalence) by a propositional formula. Restricted to the original GZ syntax, the resulting formula is reducible to a disjunction of conjunctions of literals, but the formulation is still applicable even when the syntax is extended to allow for arbitrary formulas (including nested aggregates) in the condition. Once GZ-aggregates are represented as formulas, we establish a formal comparison (in terms of the logic of Here-and-There) to Ferraris' (F) aggregates, which are defined by a different formula translation involving nested implications. In particular, we prove that if we replace an F-aggregate by a GZ-aggregate in a rule head, we do not lose answer sets (although more can be gained).
This extends the previously known result that the opposite happens in rule bodies, i.e., replacing a GZ-aggregate by an F-aggregate in the body may yield more answer sets. Finally, we characterize a class of aggregates for which GZ- and F-semantics coincide. KW - Aggregates KW - Answer Set Programming Y1 - 2019 U6 - https://doi.org/10.1016/j.artint.2018.10.007 SN - 0004-3702 SN - 1872-7921 VL - 274 SP - 26 EP - 43 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Aguado, Felicidad A1 - Cabalar, Pedro A1 - Fandiño, Jorge A1 - Pearce, David A1 - Perez, Gilberto A1 - Vidal-Peracho, Concepcion T1 - Revisiting Explicit Negation in Answer Set Programming JF - Theory and practice of logic programming KW - Answer set programming KW - Non-monotonic reasoning KW - Equilibrium logic KW - Explicit negation Y1 - 2019 U6 - https://doi.org/10.1017/S1471068419000267 SN - 1471-0684 SN - 1475-3081 VL - 19 IS - 5-6 SP - 908 EP - 924 PB - Cambridge Univ. Press CY - New York ER - TY - THES A1 - Abdelwahab Hussein Abdelwahab Elsayed, Ahmed T1 - Probabilistic, deep, and metric learning for biometric identification from eye movements N2 - A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. 
The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or provide a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon state-of-the-art eye movement biometrics by a wide margin. Finally, for the model to identify any subject, not just the set of subjects it is trained on, a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics. N2 - Die Art und Weise, wie wir unsere Augen bewegen, ist individuell charakteristisch. Augenbewegungen können daher zur biometrischen Identifikation verwendet werden. Die Dissertation stellt neuartige Methoden des maschinellen Lernens zur Identifizierung von Probanden anhand ihrer Blickbewegungen während des Betrachtens beliebiger visueller Inhalte vor. Die Arbeit konzentriert sich auf die probabilistische Modellierung des Problems, da dies die besten Ergebnisse in der aktuellsten Literatur liefert. Die Arbeit untersucht das Problem in drei Phasen. In der ersten Phase stützt sich die Arbeit bei der Entwicklung eines probabilistischen Modells auf Wissen über Blickbewegungen aus der psychologischen Literatur. Existierende Studien haben gezeigt, dass die individuelle Verteilung von Blickbewegungsmustern verwendet werden kann, um Individuen genau zu identifizieren. Existierende probabilistische Modelle verwenden feste Verteilungsfamilien in Form von parametrischen Modellen, um diese Verteilungen zu approximieren. Die Verwendung solcher einfacher Verteilungsfamilien hat den Vorteil, dass sie robuste Verteilungsschätzungen auch auf kleinen Mengen von Beobachtungen ermöglicht. Ihre Flexibilität, Unterschiede zwischen Personen zu erfassen, ist jedoch begrenzt. Die Arbeit schlägt daher eine semiparametrische Modellierung der Blickmuster vor, die flexibel und dennoch robust individuelle Verteilungen von Blickbewegungsmustern schätzen kann. Die modellierten Blickmuster können als Domänenwissen verstanden werden, das aus der psychologischen Literatur abgeleitet ist. Beispielsweise werden Verteilungen über Fixationsdauern und Sprungweiten (Sakkaden) bei bestimmten Vor- und Rücksprüngen innerhalb des Textes modelliert.
Das semiparametrische Modell bleibt nahe des parametrischen Modells, wenn nur wenige Daten verfügbar sind, kann jedoch auch vom parametrischen Modell abweichen, wenn genügend Daten verfügbar sind, wodurch die Flexibilität erhöht wird. Die Methode wird auf einem großen Datenbestand evaluiert und zeigt eine signifikante Verbesserung gegenüber dem Stand der Technik der Forschung zur biometrischen Identifikation aus Blickbewegungen. Später ersetzt die Dissertation die zuvor untersuchten aus der psychologischen Literatur abgeleiteten Blickmuster durch ein auf tiefen neuronalen Netzen basierendes Modell, das aus den Rohdaten der Augenbewegungen informativere komplexe Muster lernen kann. Tiefe neuronale Netze sind eine Technik des maschinellen Lernens, bei der in komplexen, mehrschichtigen Modellen schrittweise abstraktere Merkmale aus Rohdaten extrahiert werden. Da frühere Arbeiten gezeigt haben, dass die Verteilung von Blickbewegungsmustern innerhalb einer Blickbewegungssequenz informativ ist, wird eine neue Aggregationsschicht für tiefe neuronale Netze eingeführt, die explizit die Verteilung der gelernten Muster schätzt. Die vorgeschlagene Aggregationsschicht für tiefe neuronale Netze ist nicht auf die Modellierung von Blickbewegungen beschränkt, sondern kann als Verallgemeinerung von existierenden einfacheren Aggregationsschichten in beliebigen Anwendungen eingesetzt werden. Das vorgeschlagene Modell wird in einer umfangreichen Studie unter Verwendung von Augenbewegungen von Probanden evaluiert, die Videomaterial unterschiedlichen Inhalts und unterschiedlicher Länge betrachten. Das Modell verbessert die Identifikationsgenauigkeit im Vergleich zu tiefen neuronalen Netzen mit Standardaggregationsschichten und existierenden probabilistischen Modellen zur Identifikation aus Blickbewegungen. Damit das Modell zum Anwendungszeitpunkt beliebige Probanden identifizieren kann, und nicht nur diejenigen Probanden, mit deren Daten es trainiert wurde, wird ein metrischer Lernansatz entwickelt. Beim metrischen Lernen lernt das Modell eine Funktion, mit der die Ähnlichkeit zwischen Blickbewegungssequenzen geschätzt werden kann. Das metrische Lernen bildet die Instanzen in einen neuen Raum ab, in dem Sequenzen desselben Individuums nahe beieinander liegen und Sequenzen verschiedener Individuen weiter voneinander entfernt sind. Die Dissertation stellt einen neuen metrischen Lernansatz auf Basis tiefer neuronaler Netze vor. Der Ansatz repräsentiert eine Sequenz in einem metrischen Raum durch eine Menge von Verteilungen. Das vorgeschlagene Verfahren ist nicht spezifisch für die Blickbewegungsmodellierung, und wird in unterschiedlichen Anwendungsproblemen empirisch evaluiert. Das Verfahren führt zu genaueren Modellen im Vergleich zu existierenden metrischen Lernverfahren und existierenden Modellen zur Identifikation aus Blickbewegungen.
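To make the quantile layer described in this record more concrete, the following sketch shows one plausible reading of the idea: a sequence of local deep features is pooled by its empirical quantiles rather than by mean or max. This is an illustrative reconstruction, not the thesis implementation; the class name, quantile levels and tensor shapes are assumptions.

# Sketch of a "quantile layer": aggregate a sequence of local deep features
# by their empirical quantiles, generalizing mean/max pooling.
# Illustrative only; the thesis architecture and hyperparameters differ.
import torch
import torch.nn as nn

class QuantileLayer(nn.Module):
    def __init__(self, num_quantiles: int = 8):
        super().__init__()
        # Fixed, evenly spaced quantile levels in (0, 1); an assumption here.
        self.register_buffer("levels", torch.linspace(0.05, 0.95, num_quantiles))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features), e.g. local patterns from a conv encoder.
        # Returns (batch, features * num_quantiles): per-feature empirical
        # quantiles, i.e. a compact description of each feature's distribution.
        q = torch.quantile(x, self.levels, dim=1)  # (num_quantiles, batch, features)
        return q.permute(1, 2, 0).flatten(start_dim=1)

# Usage: pool 1000 time steps of 64 learned features into one fixed-size vector.
layer = QuantileLayer(num_quantiles=8)
features = torch.randn(4, 1000, 64)
print(layer(features).shape)  # torch.Size([4, 512])

With a single quantile level near 1.0 this reduces to something close to max pooling, which matches the abstract's remark that quantile layers can converge to standard pooling layers.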
KW - probabilistic deep metric learning KW - probabilistic deep learning KW - biometrics KW - eye movements KW - biometrische Identifikation KW - Augenbewegungen KW - probabilistische tiefe neuronale Netze KW - probabilistisches tiefes metrisches Lernen Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-467980 ER - TY - GEN A1 - Fandinno, Jorge T1 - Founded (auto)epistemic equilibrium logic satisfies epistemic splitting T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - In a recent line of research, two familiar concepts from logic programming semantics (unfounded sets and splitting) were extrapolated to the case of epistemic logic programs. The property of epistemic splitting provides a natural and modular way to understand programs without epistemic cycles but, surprisingly, was only fulfilled by Gelfond's original semantics (G91), among the many proposals in the literature. On the other hand, G91 may suffer from a kind of self-supported, unfounded derivation when epistemic cycles come into play. Recently, the absence of these derivations was also formalised as a property of epistemic semantics called foundedness. Moreover, a first semantics proved to satisfy foundedness was also proposed, the so-called Founded Autoepistemic Equilibrium Logic (FAEEL). In this paper, we prove that FAEEL also satisfies the epistemic splitting property, something that, together with foundedness, was not fulfilled by any other approach to date. To prove this result, we provide an alternative characterisation of FAEEL as a combination of G91 with a simpler logic we called Founded Epistemic Equilibrium Logic (FEEL), which is somehow an extrapolation of the stable model semantics to the modal logic S5. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1060 KW - answer set programming KW - epistemic specifications KW - epistemic logic programs Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-469685 SN - 1866-8372 IS - 1060 SP - 671 EP - 687 ER - TY - GEN A1 - Aguado, Felicidad A1 - Cabalar, Pedro A1 - Fandinno, Jorge A1 - Pearce, David A1 - Perez, Gilberto A1 - Vidal, Concepcion T1 - Revisiting explicit negation in answer set programming T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, such as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1104 KW - Answer Set Programming KW - non-monotonic reasoning KW - Equilibrium logic KW - explicit negation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-469697 SN - 1866-8372 IS - 1104 SP - 908 EP - 924 ER - TY - GEN A1 - Strickroth, Sven T1 - PLATON BT - Developing a Graphical Lesson Planning System for Prospective Teachers T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Lesson planning is both an important and demanding task, especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems regarding these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of the lesson designs. In this article, a new approach with a graphical, time-based representation with (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher- and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor's and master's preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and self-reflection. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 804 KW - lesson planning KW - lesson preparation KW - support system KW - automatic feedback Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-441887 SN - 1866-8372 IS - 804 ER - TY - JOUR A1 - Metref, Sammy A1 - Cosme, Emmanuel A1 - Le Sommer, Julien A1 - Poel, Nora A1 - Brankart, Jean-Michel A1 - Verron, Jacques A1 - Gomez Navarro, Laura T1 - Reduction of spatially structured errors in Wide-Swath altimetric satellite data using data assimilation JF - Remote sensing N2 - The Surface Water and Ocean Topography (SWOT) mission is a next-generation satellite mission expected to provide a 2 km-resolution observation of the sea surface height (SSH) on a two-dimensional swath. Processing SWOT data will be challenging because of the large amount of data, the mismatch between a high spatial resolution and a low temporal resolution, and the observation errors. The present paper focuses on the reduction of the spatially structured errors of SWOT SSH data. It investigates a new error reduction method and assesses its performance in an observing system simulation experiment. The proposed error-reduction method first projects the SWOT SSH onto a subspace spanned by the SWOT spatially structured errors. This projection is removed from the SWOT SSH to obtain a detrended SSH. The detrended SSH is then processed within an ensemble data assimilation analysis to retrieve a full SSH field. In the latter step, the detrending is applied to both the SWOT data and an ensemble of model-simulated SSH fields. Numerical experiments are performed with synthetic SWOT observations and an ensemble from a North Atlantic 1/60 degree simulation of the ocean circulation (NATL60).
The data assimilation analysis is carried out with an ensemble Kalman filter. The results are assessed with root mean square errors, power spectral density, and spatial coherence. They show that a significant part of the large-scale SWOT errors is reduced. The filter analysis also reduces the small-scale errors and allows for an accurate recovery of the energy of the signal down to 25 km scales. In addition, using the SWOT nadir data to adjust the SSH detrending further reduces the errors. KW - SWOT KW - correlated errors KW - OSSE KW - projection KW - detrending KW - ensemble Kalman filter Y1 - 2019 U6 - https://doi.org/10.3390/rs11111336 SN - 2072-4292 VL - 11 IS - 11 PB - MDPI CY - Basel ER - TY - JOUR A1 - Dimopoulos, Yannis A1 - Gebser, Martin A1 - Lühne, Patrick A1 - Romero Davila, Javier A1 - Schaub, Torsten H. T1 - plasp 3 BT - Towards Effective ASP Planning JF - Theory and practice of logic programming N2 - We describe the new version of the Planning Domain Definition Language (PDDL)-to-Answer Set Programming (ASP) translator plasp. First, it widens the range of accepted PDDL features. Second, it contains novel planning encodings, some inspired by Satisfiability Testing (SAT) planning and others exploiting ASP features such as well-foundedness. All of them are designed for handling multivalued fluents in order to capture both PDDL as well as SAS planning formats. Third, enabled by multi-shot ASP solving, it offers advanced planning algorithms also borrowed from SAT planning. As a result, plasp provides us with an ASP-based framework for studying a variety of planning techniques in a uniform setting. Finally, we demonstrate in an empirical analysis that these techniques have a significant impact on the performance of ASP planning. KW - knowledge representation and nonmonotonic reasoning KW - technical notes and rapid communications KW - answer set programming KW - automated planning KW - action and change Y1 - 2019 U6 - https://doi.org/10.1017/S1471068418000583 SN - 1471-0684 SN - 1475-3081 VL - 19 IS - 3 SP - 477 EP - 504 PB - Cambridge Univ. Press CY - New York ER - TY - GEN A1 - Przybylla, Mareen T1 - Interactive objects in physical computing and their role in the learning process T2 - Constructivist foundations N2 - The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems or drink mixing machines. This offers the opportunity to reflect on teaching physical computing in computer science education, where, similarly, the creation of the product is often focused on so strongly that reflection on the learning process is pushed into the background. Y1 - 2019 SN - 1782-348X VL - 14 IS - 3 SP - 264 EP - 266 PB - Vrije Univ. CY - Brussels ER - TY - JOUR A1 - Pousttchi, Key A1 - Gleiß, Alexander T1 - Surrounded by middlemen - how multi-sided platforms change the insurance industry JF - Electronic markets N2 - Multi-sided platforms (MSP) strongly affect markets and play a crucial part within the digital and networked economy.
Although empirical evidence indicates their occurrence in many industries, research has not investigated the game-changing impact of MSP on traditional markets to a sufficient extent. More specifically, we have little knowledge of how MSP affect value creation and customer interaction in entire markets, exploiting the potential of digital technologies to offer new value propositions. Our paper addresses this research gap and provides an initial systematic approach to analyze the impact of MSP on the insurance industry. For this purpose, we analyze the state of the art in research and practice in order to develop a reference model of the value network for the insurance industry. On this basis, we conduct a case-study analysis to discover and analyze roles which are occupied or even newly created by MSP. As a final step, we categorize MSP with regard to their relation to traditional insurance companies, resulting in a classification scheme with four MSP standard types: Competition, Coordination, Cooperation, Collaboration. KW - Multi-sided platforms KW - Insurance industry KW - Value network KW - Digitalization KW - Customer ownership Y1 - 2019 U6 - https://doi.org/10.1007/s12525-019-00363-w SN - 1019-6781 SN - 1422-8890 VL - 29 IS - 4 SP - 609 EP - 629 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Aguado, Felicidad A1 - Cabalar, Pedro A1 - Fandinno, Jorge A1 - Pearce, David A1 - Perez, Gilberto A1 - Vidal, Concepcion T1 - Forgetting auxiliary atoms in forks JF - Artificial intelligence N2 - In this work we tackle the problem of checking strong equivalence of logic programs that may contain local auxiliary atoms, to be removed from their stable models and to be forbidden in any external context. We call this property projective strong equivalence (PSE). It has recently been proved that not every logic program containing auxiliary atoms can be reformulated, under PSE, as another logic program or formula without them – this is known as strongly persistent forgetting. In this paper, we introduce a conservative extension of Equilibrium Logic and its monotonic basis, the logic of Here-and-There, in which we deal with a new connective '|' we call fork. We provide a semantic characterisation of PSE for forks and use it to show that, in this extension, it is always possible to forget auxiliary atoms under strong persistence. We further define when the obtained fork is representable as a regular formula. KW - Answer set programming KW - Non-monotonic reasoning KW - Equilibrium logic KW - Denotational semantics KW - Forgetting KW - Strong equivalence Y1 - 2019 U6 - https://doi.org/10.1016/j.artint.2019.07.005 SN - 0004-3702 SN - 1872-7921 VL - 275 SP - 575 EP - 601 PB - Elsevier CY - Amsterdam ER - TY - GEN A1 - Cabalar, Pedro A1 - Fandinno, Jorge A1 - Schaub, Torsten H. A1 - Schellhorn, Sebastian T1 - Lower Bound Founded Logic of Here-and-There T2 - Logics in Artificial Intelligence N2 - A distinguishing feature of Answer Set Programming is that all atoms belonging to a stable model must be founded. That is, an atom must not only be true but provably true. This can be made precise by means of the constructive logic of Here-and-There, whose equilibrium models correspond to stable models. One way of looking at foundedness is to regard Boolean truth values as ordered by letting true be greater than false. Then, each Boolean variable takes the smallest truth value that can be proven for it. This idea was generalized by Aziz to ordered domains and applied to constraint satisfaction problems.
As before, the idea is that a variable, say an integer variable, only gets assigned the smallest integer that can be justified. In this paper, we present a logical reconstruction of Aziz' idea in the setting of the logic of Here-and-There. More precisely, we start by defining the logic of Here-and-There with lower bound founded variables along with its equilibrium models and elaborate upon its formal properties. Finally, we compare our approach with related ones and sketch future work. Y1 - 2019 SN - 978-3-030-19570-0 SN - 978-3-030-19569-4 U6 - https://doi.org/10.1007/978-3-030-19570-0_34 SN - 0302-9743 SN - 1611-3349 VL - 11468 SP - 509 EP - 525 PB - Springer CY - Cham ER - TY - JOUR A1 - Waitelonis, Jörg A1 - Jürges, Henrik A1 - Sack, Harald T1 - Remixing entity linking evaluation datasets for focused benchmarking JF - Semantic Web N2 - In recent years, named entity linking (NEL) tools were primarily developed following a general approach, whereas today numerous tools focus on specific domains, such as the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets necessary for the evaluation of NEL tools do not reflect this trend towards specialization. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL [in: Proceedings of the 24th International Conference on World Wide Web (WWW'15), International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 2015, pp. 1133–1143, Semantic Web 9(5) (2018), 605–625] and all its benchmark datasets. Based on these insights we have extended the GERBIL framework to enable a more fine-grained evaluation and in-depth analysis of the available benchmark datasets with respect to different emphases. This paper presents the implementation of an adaptive filter for arbitrary entities and customized benchmark creation as well as the automated determination of typical NEL benchmark dataset properties, such as the extent of content-related ambiguity and diversity. These properties are integrated on different levels, which also makes it possible to tailor customized new datasets out of the existing ones by remixing documents based on desired emphases. Besides a new system library to enrich provided NIF [in: International Semantic Web Conference (ISWC'13), Lecture Notes in Computer Science, Vol. 8219, Springer, Berlin, Heidelberg, 2013, pp. 98–113] datasets with statistical information, best practices for dataset remixing are presented, along with an in-depth analysis of the performance of entity linking systems on special-focus datasets. KW - Entity Linking KW - GERBIL KW - evaluation KW - benchmark Y1 - 2019 U6 - https://doi.org/10.3233/SW-180334 SN - 1570-0844 SN - 2210-4968 VL - 10 IS - 2 SP - 385 EP - 412 PB - IOS Press CY - Amsterdam ER -
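To close, the dataset properties named in this last record, such as content-related ambiguity, can be pictured with a small sketch. The toy annotations and the macro-averaged ambiguity measure below are invented for illustration; GERBIL's actual property definitions are more elaborate.

# Sketch: measure surface-form ambiguity of an entity-linking dataset,
# i.e. how many distinct entities each mention text maps to.
# The toy annotations are invented; GERBIL's real metrics differ in detail.
from collections import defaultdict

annotations = [
    ("Paris", "dbpedia:Paris"),          # city
    ("Paris", "dbpedia:Paris_Hilton"),   # person
    ("Paris", "dbpedia:Paris,_Texas"),   # another city
    ("Berlin", "dbpedia:Berlin"),
    ("CDU", "dbpedia:Christian_Democratic_Union_of_Germany"),
]

entities_per_surface = defaultdict(set)
for surface, entity in annotations:
    entities_per_surface[surface].add(entity)

# Macro-averaged ambiguity: mean number of distinct entities per surface form.
ambiguity = sum(len(e) for e in entities_per_surface.values()) / len(entities_per_surface)
print(f"distinct surface forms: {len(entities_per_surface)}")
print(f"average ambiguity: {ambiguity:.2f}")  # 5 entities / 3 forms ≈ 1.67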