TY - JOUR A1 - Krasnova, Hanna A1 - Veltri, Natasha F. A1 - Spengler, Klaus A1 - Günther, Oliver T1 - "Deal of the Day" platforms: what drives consumer loyalty? JF - Business & information systems engineering : the international journal of Wirtschaftsinformatik N2 - "Deal of the Day" (DoD) platforms have quickly become popular by offering savings on local services, products and vacations. For merchants, these platforms represent a new marketing channel to advertise their products and services and attract new customers. DoD platform providers, however, struggle to maintain a stable market share and profitability, because entry and switching costs are low. To sustain a competitive market position, DoD providers are looking for ways to build a loyal customer base. However, research examining the determinants of user loyalty in this novel context is scarce. To fill this gap, this study employs Grounded Theory methodology to develop a conceptual model of customer loyalty to a DoD provider. In the next step, qualitative insights are enriched and validated using quantitative data from a survey of 202 DoD users. The authors find that customer loyalty is in large part driven by monetary incentives, but can be eroded if impressions from merchant encounters are below expectations. In addition, enhancing the share of deals relevant for consumers, i.e., the signal-to-noise ratio, and mitigating perceived risks of a transaction emerge as challenges. Beyond theoretical value, the results offer practical insights into how customer loyalty to a DoD provider can be promoted. KW - Deal of the Day KW - Loyalty KW - Grounded theory KW - Structural equation modeling Y1 - 2013 U6 - https://doi.org/10.1007/s12599-013-0268-2 SN - 1867-0202 VL - 5 IS - 3 SP - 165 EP - 177 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Brain, Martin A1 - Gebser, Martin A1 - Pührer, Jörg A1 - Schaub, Torsten H. A1 - Tompits, Hans A1 - Woltran, Stefan T1 - "That is illogical, Captain!" : the debugging support tool spock for answer-set programs ; system description Y1 - 2007 ER - TY - JOUR A1 - Jürgensen, Helmut A1 - Konstantinidis, Stavros T1 - (Near-)inverses of sequences N2 - We introduce the notion of a near-inverse of a non-decreasing sequence of positive integers; near-inverses are intended to assume the role of inverses in cases when the latter cannot exist. We prove that the near-inverse of such a sequence is unique; moreover, the relation of being near-inverses of each other is symmetric, i.e., if sequence g is the near-inverse of sequence f, then f is the near-inverse of g. There is a connection, by approximations, between near-inverses of sequences and inverses of continuous strictly increasing real-valued functions which can be exploited to derive simple expressions for near-inverses Y1 - 2006 UR - http://www.informaworld.com/openurl?genre=journal&issn=0020-7160 U6 - https://doi.org/10.1080/00207160500537801 SN - 0020-7160 ER - TY - CHAP A1 - Knoth, Alexander Henning A1 - Kiy, Alexander T1 - (Self-)confident through the introductory study phase with the Reflect App BT - (Self-)confident through the introductory study phase with the Reflect App T2 - CEUR Workshop Proceedings T2 - Selbst-bewusst durch die Studieneingangsphase mit der Reflect-App Y1 - 2014 UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-84921875162&partnerID=MN8TOARS SN - 1613-0073 IS - 1227 SP - 172 EP - 179 PB - CEUR-WS CY - Freiburg ER - TY - CHAP T1 - 11. Workshop Testmethoden und Zuverlässigkeit von Schaltungen und Systemen BT - vom 28.
Februar bis 2. März, Potsdam-Hermannswerder, Inselhotel Y1 - 1999 SN - 978-3-9806494-1-4 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - THES A1 - Holz, Christian T1 - 3D from 2D touch T1 - 3D von 2D-Berührungen N2 - While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. 
We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices. N2 - Die Interaktion mit Computern war in den letzten vierzig Jahren stark von Tastatur und Maus geprägt. Neue Arten von Sensoren ermöglichen Computern nun, Eingaben durch Berührungs-, Sprach- oder 3D-Gestensensoren zu erkennen. Solch neuartige Formen der Interaktion werden häufig unter dem Begriff "natürliche Benutzungsschnittstellen" bzw. "NUIs" (englisch natural user interfaces) zusammengefasst. 2D-NUIs ist vor allem auf Mobilgeräten ein Durchbruch gelungen; über eine Milliarde solcher Geräte lassen sich durch Berührungseingaben bedienen. 3D-NUIs haben sich jedoch bisher nicht auf mobilen Plattformen durchsetzen können, da sie Nutzereingaben vorrangig mit Kameras aufzeichnen. Da Kameras Bilder jedoch erst ab einem gewissen Abstand auflösen können, eignen sie sich nicht als Sensor in einer mobilen Plattform. In dieser Arbeit lösen wir dieses Problem mit Hilfe von 2D-Sensoren, von deren Eingaben wir 3D-Informationen rekonstruieren. Unsere Prototypen zeichnen dabei die 2D-Abdrücke der Objekte, die den Sensor berühren, mit hoher Auflösung auf. Aus diesen Abdrücken leiten sie dann die Textur der Objekte ab. Anhand der Stelle der Objektoberfläche, die den Sensor berührt, rekonstruieren unsere Prototypen schließlich die 3D-Ausrichtung des jeweiligen Objektes. Neben unserem Hauptbeitrag der 3D-Rekonstruktion stellen wir drei Anwendungen unserer Methode vor. (1) Wir präsentieren Geräte, die Berührungseingaben dreimal genauer als existierende Geräte messen und damit Nutzern ermöglichen, dreimal kleinere Ziele zuverlässig mit dem Finger auszuwählen. Wir zeigen dabei, dass sowohl die Haltung des Fingers als auch der Benutzer selbst einen systematischen Einfluss auf die vom Sensor gemessene Position ausübt. Da existierende Geräte weder die Haltung des Fingers noch den Benutzer erkennen, nehmen sie solche Variationen als Eingabeungenauigkeit wahr. Wir stellen ein Modell für Berührungseingabe vor, das diese beiden Faktoren integriert, um damit die gemessenen Eingabepositionen zu präzisieren. Anschließend untersuchen wir, welches mentale Modell Nutzer beim Berühren kleiner Ziele mit dem Finger anwenden. Unsere Ergebnisse deuten auf ein visuelles Modell hin, demzufolge Benutzer Merkmale auf der Oberfläche ihres Fingers an einem Ziel ausrichten. Bei der Analyse von Berührungseingaben mit diesem Modell verschwinden nahezu alle zuvor von uns beobachteten systematischen Effekte. (2) Unsere Prototypen identifizieren Nutzer anhand der biometrischen Merkmale von Fingerabdrücken. Unser Prototyp Fiberio integriert dabei einen Fingerabdruckscanner und einen Bildschirm in die selbe Oberfläche und löst somit das seit Langem bestehende Problem der sicheren Authentifizierung auf Berührungsbildschirmen. Gemeinsam mit der 3D-Rekonstruktion von Eingaben ermöglicht diese Fähigkeit Fiberio, eine Reihe von Anwendungen zu implementieren, die bisher den sperrigen Aufbau aktueller 3D-NUI-Systeme voraussetzten. (3) Um die Flexibilität unserer Methode zu zeigen, implementieren wir sie auf einem großen, berührungsempfindlichen Fußboden, der Objekttexturen bei der Eingabe ebenfalls mit hoher Auflösung aufzeichnet. 
Ähnlich wie zuvor analysiert unser System GravitySpace diese Abdrücke, um Nutzer anhand ihrer Schuhsolen zu identifizieren, Möbelstücke auf dem Boden zu erkennen und Nutzern präzise Eingaben mittels ihrer Schuhe zu ermöglichen. Indem GravitySpace alle Abdrücke klassifiziert, erkennt das System die Körperteile der Benutzer, die sich in Kontakt mit dem Boden befinden. Aus der Anordnung dieser Kontakte schließt GravitySpace dann auf die Körperhaltungen aller Benutzer in 3D. GravitySpace hat daher das Potenzial, Anwendungen für zukünftige 3D-NUI-Systeme auf einer flachen Oberfläche zu implementieren, wie zum Beispiel in zukünftigen intelligenten Wohnungen. Wir schließen diese Arbeit mit einem Ausblick auf zukünftige interaktive Geräte. Dabei konzentrieren wir uns auf den Mobilitätsaspekt aktueller Entwicklungen und beleuchten, wie zukünftige mobile NUI-Geräte Nutzer in Form implantierter Geräte direkt unterstützen können. KW - HCI KW - Berührungseingaben KW - Eingabegenauigkeit KW - Modell KW - Mobilgeräte KW - HCI KW - touch input KW - input accuracy KW - model KW - mobile devices Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-67796 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans A1 - Wang, Kewen T1 - A classification and survey of preference handling approaches in nonmonotonic reasoning N2 - In recent years, there has been a large amount of disparate work concerning the representation and reasoning with qualitative preferential information by means of approaches to nonmonotonic reasoning. Given the variety of underlying systems, assumptions, motivations, and intuitions, it is difficult to compare or relate one approach with another. Here, we present an overview and classification for approaches to dealing with preference. A set of criteria for classifying approaches is given, followed by a set of desiderata that an approach might be expected to satisfy. A comprehensive set of approaches is subsequently given and classified with respect to these sets of underlying principles Y1 - 2004 SN - 0824-7935 ER - TY - JOUR A1 - Wang, Kewen T1 - A comparative study of disjunctive well-founded semantics Y1 - 2001 SN - 3-540-42593-4 ER - TY - JOUR A1 - Schaub, Torsten H. A1 - Wang, Kewen T1 - A comparative study of logic programs with preference Y1 - 2001 ER - TY - JOUR A1 - Schaub, Torsten H. A1 - Wang, Kewen T1 - A comparative study of logic programs with preference Y1 - 2001 SN - 1-558-60777-3 SN - 1045-0823 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans T1 - A compilation of Brewka and Eiter's approach to prioritization Y1 - 2000 SN - 3-540-41131-3 ER - TY - JOUR A1 - Sarsakov, Vladimir A1 - Schaub, Torsten H. A1 - Tompits, Hans A1 - Woltran, Stefan T1 - A compiler for nested logic programming Y1 - 2004 SN - 3-540-20721-X ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans T1 - A compiler for ordered logic programs Y1 - 2000 UR - http://arxiv.org/abs/cs.AI/0003024 ER - TY - THES A1 - Awad, Ahmed Mahmoud Hany Aly T1 - A compliance management framework for business process models T1 - Ein Compliance-Management-Framework für Geschäftsprozessmodelle N2 - Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements.
Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence to compliance leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support process design-time compliance checking. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. Also, the thesis shows how to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user. The feedback is in the form of parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation. N2 - Firmen entwickeln Prozessmodelle, um ihre Geschäftstätigkeit explizit zu beschreiben. Geschäftsprozesse müssen verschiedene Arten von Compliance-Anforderungen einhalten. Solche Compliance-Anforderungen entstammen einer Vielzahl von Quellen, z.B. Verordnungen wie dem Sarbanes-Oxley Act von 2002, internen Richtlinien und Best Practices. Die Nichteinhaltung von Compliance-Anforderungen kann zu gesetzlichen Strafen oder dem Verlust von Wettbewerbsvorteilen und somit dem Verlust von Marktanteilen führen. Im Gegensatz zum klassischen, domänen-unabhängigen Begriff der Korrektheit von Geschäftsprozessen sind Compliance-Anforderungen domänen-spezifisch und ändern sich im Laufe der Zeit. Neue Anforderungen resultieren aus neuen Gesetzen und der Einführung neuer Unternehmensrichtlinien. Aufgrund der Vielzahl der Quellen für Compliance-Anforderungen können sie unterschiedliche Ziele verfolgen und somit widersprüchliche Aussagen treffen. Schließlich betreffen Compliance-Anforderungen verschiedene Aspekte von Geschäftsprozessen, wie Kontrollfluss- und Datenabhängigkeiten. Auf Grund dessen können Compliance-Prüfungen nicht direkt hard-coded werden. Vielmehr ist ein Prozess der wiederholten Modellierung von Compliance-Regeln und ihrer anschließenden automatischen Prüfung gegen die Geschäftsprozesse nötig. Diese Dissertation stellt einen formalen Ansatz zur Überprüfung der Einhaltung von Compliance-Regeln während der Spezifikation von Geschäftsprozessen vor.
Mit visuellen Mustern ist es möglich, Compliance-Regeln hinsichtlich Kontrollfluss- und Datenabhängigkeiten sowie bedingte Regeln zu spezifizieren. Jedes Muster wird in eine Formel der temporalen Logik abgebildet. Die Dissertation behandelt das Problem der Konsistenzprüfung zwischen verschiedenen Compliance-Anforderungen, wie sie sich aus unterschiedlichen Quellen ergeben können. Ebenfalls zeigt diese Dissertation, wie Compliance-Regeln gegen die Geschäftsprozesse automatisch mittels Model Checking geprüft werden. Es wird aufgezeigt, dass zusätzliche Domänen-Kenntnisse notwendig sind, um richtige Entscheidungen zu treffen. Der vorgestellte Ansatz ermöglicht nützliches Feedback für Modellierer im Fall eines Compliance-Verstoßes. Das Feedback wird in Form von Teilen des Prozessmodells gegeben, deren Ausführung die Verletzung verursacht. In einigen Fällen ist der vorgestellte Ansatz in der Lage, den Compliance-Verstoß automatisch zu beheben. KW - Geschäftsprozessmodelle KW - Compliance KW - Temporallogik KW - Verletzung Erklärung KW - Verletzung Auflösung KW - Business Process Models KW - Compliance KW - Temporal Logic KW - Violation Explanation KW - Violation Resolution Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-49222 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. T1 - A consistency-based paradigm for belief change Y1 - 2003 SN - 0004-3702 ER - TY - JOUR A1 - Brüning, Stefan A1 - Schaub, Torsten H. T1 - A connection calculus for handling incomplete information Y1 - 2000 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. T1 - A consistency-based framework for merging knowledge bases Y1 - 2007 SN - 1570-8683 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. T1 - A consistency-based model for belief change: preliminary report Y1 - 2000 SN - 0-262-51112-6 ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. T1 - A consistency-based model for belief change: preliminary report Y1 - 2000 UR - http://xxx.lanl.gov/abs/cs.AI/0003052 ER - TY - JOUR A1 - Besnard, Philippe A1 - Schaub, Torsten H. T1 - A context-based framework for default logics Y1 - 1993 SN - 0-262-51071-5 ER - TY - THES A1 - Luckow, André T1 - A dependable middleware for enhancing the fault tolerance of distributed computations in grid environments Y1 - 2009 CY - Potsdam ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans T1 - A framework for compiling preferences in logic programs Y1 - 2003 ER - TY - JOUR A1 - Kreowsky, Philipp A1 - Stabernack, Christian Benno T1 - A full-featured FPGA-based pipelined architecture for SIFT extraction JF - IEEE access : practical research, open solutions / Institute of Electrical and Electronics Engineers N2 - Image feature detection is a key task in computer vision. Scale Invariant Feature Transform (SIFT) is a prevalent and well-known algorithm for robust feature detection. However, it is computationally demanding, and software implementations are too slow for real-time performance. In this paper, a versatile and pipelined hardware implementation is proposed that is capable of computing keypoints and rotation-invariant descriptors on-chip. All computations are performed in single-precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition.
Full high-definition images can be processed at 84 fps; ultra-high-definition images at 21 fps. KW - Field programmable gate arrays KW - Convolution KW - Signal processing KW - algorithms KW - Kernel KW - Image resolution KW - Histograms KW - Feature extraction KW - Scale-invariant feature transform (SIFT) KW - field-programmable gate array (FPGA) KW - image processing KW - computer vision KW - parallel processing KW - architecture KW - real-time KW - hardware architecture Y1 - 2021 U6 - https://doi.org/10.1109/ACCESS.2021.3104387 SN - 2169-3536 VL - 9 SP - 128564 EP - 128573 PB - Inst. of Electr. and Electronics Engineers CY - New York, NY ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans T1 - A general framework for expressing preferences in causal reasoning and planning N2 - We consider the problem of representing arbitrary preferences in causal reasoning and planning systems. In planning, a preference may be seen as a goal or constraint that is desirable, but not necessary, to satisfy. To begin, we define a very general query language for histories, or interleaved sequences of world states and actions. Based on this, we specify a second language in which preferences are defined. A single preference defines a binary relation on histories, indicating that one history is preferred to the other. From this, one can define global preference orderings on the set of histories, the maximal elements of which are the preferred histories. The approach is very general and flexible; thus it constitutes a base language in terms of which higher-level preferences may be defined. To this end, we investigate two fundamental types of preferences that we call choice and temporal preferences. We consider concrete strategies for these types of preferences and encode them in terms of our framework. We suggest how to express aggregates in the approach, allowing, e.g., the expression of a preference for histories with lowest total action costs. Last, our approach can be used to express other approaches and so serves as a common framework in which such approaches can be expressed and compared. We illustrate this by indicating how an approach due to Son and Pontelli can be encoded in our approach, as well as the language PDDL3. Y1 - 2007 UR - http://logcom.oxfordjournals.org/ U6 - https://doi.org/10.1093/logcom/exm046 SN - 0955-792X ER - TY - JOUR A1 - Delgrande, James Patrick A1 - Schaub, Torsten H. A1 - Tompits, Hans T1 - A generic compiler for ordered logic programs Y1 - 2001 SN - 3-540-42593-4 ER - TY - JOUR A1 - Anger, Christian A1 - Konczak, Kathrin A1 - Linke, Thomas A1 - Schaub, Torsten H. T1 - A Glimpse of Answer Set Programming Y1 - 2005 UR - http://www.cs.uni-potsdam.de/~konczak/Papers/ankolisc05.pdf SN - 0170-4516 ER - TY - CHAP A1 - Kiy, Alexander A1 - Geßner, Hendrik A1 - Lucke, Ulrike A1 - Grünewald, Franka T1 - A Hybrid and Modular Framework for Mobile Campus Applications T2 - i-com N2 - Mobile devices and associated applications (apps) are an indispensable part of daily life and provide access to important information anytime and anywhere. However, the availability of university-wide services in the mobile sector is still poor. If they exist, they usually result from individual activities of students and teachers. Mobile applications can have an essential impact on the improvement of students’ self-organization as well as on the design and enhancement of specific learning scenarios, though.
This article introduces a mobile campus app framework, which integrates central campus services and decentralized learning applications. An analysis of strengths and weaknesses of different approaches is presented to summarize and evaluate them in terms of requirements, development, maintenance and operation. The article discusses the underlying service-oriented architecture that allows transferring the campus app to other universities or institutions at reasonable cost. It concludes with a presentation of the results as well as ongoing discussions and future work KW - Mobile Campus Application KW - Hybrid App KW - Framework KW - Service-oriented Architecture Y1 - 2015 UR - http://www.degruyter.com/view/j/icom.2015.14.issue-1/icom-2015-0016/icom-2015-0016.xml U6 - https://doi.org/10.1515/icom-2015-0016 SN - 2196-6826 VL - 2015 IS - 14 SP - 63 EP - 73 PB - de Gruyter CY - Berlin ER - TY - JOUR A1 - Tiwari, Abhishek A1 - Prakash, Jyoti A1 - Groß, Sascha A1 - Hammer, Christian T1 - A large scale analysis of Android BT - Web hybridization JF - The journal of systems and software N2 - Many Android applications embed webpages via WebView components and execute JavaScript code within Android. Hybrid applications leverage dedicated APIs to load a resource and render it in a WebView. Furthermore, Android objects can be shared with the JavaScript world. However, bridging the interfaces of the Android and JavaScript world might also incur severe security threats: Potentially untrusted webpages and their JavaScript might interfere with the Android environment and its access to native features. No general analysis is currently available to assess the implications of such hybrid apps bridging the two worlds. To understand the semantics and effects of hybrid apps, we perform a large-scale study on the usage of the hybridization APIs in the wild. We analyze and categorize the parameters to hybridization APIs for 7,500 randomly selected and the 196 most popular applications from the Google Playstore as well as 1000 malware samples. Our results advance the general understanding of hybrid applications, as well as implications for potential program analyses, and the current security situation: We discovered thousands of flows of sensitive data from Android to JavaScript, the vast majority of which could flow to potentially untrustworthy code. Our analysis identified numerous web pages embedding vulnerabilities, which we exemplarily exploited. Additionally, we discovered a multitude of applications in which potentially untrusted JavaScript code may interfere with (trusted) Android objects, both in benign and malign applications. KW - Android hybrid apps KW - static analysis KW - information flow control Y1 - 2020 U6 - https://doi.org/10.1016/j.jss.2020.110775 SN - 0164-1212 SN - 1873-1228 VL - 170 PB - Elsevier CY - New York ER - TY - JOUR A1 - Hlawiczka, A. A1 - Gössel, Michael A1 - Sogomonyan, Egor S. T1 - A linear code-preserving signature analyzer COPMISR Y1 - 1997 SN - 0-8186-7810-0 ER - TY - INPR A1 - Arnold, Holger T1 - A linearized DPLL calculus with clause learning (2nd, revised version) N2 - Many formal descriptions of DPLL-based SAT algorithms either do not include all essential proof techniques applied by modern SAT solvers or are bound to particular heuristics or data structures. This makes it difficult to analyze proof-theoretic properties or the search complexity of these algorithms. 
In this paper we try to improve this situation by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus with clause learning. This calculus is independent of implementation details yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms. N2 - Viele formale Beschreibungen DPLL-basierter SAT-Algorithmen enthalten entweder nicht alle wesentlichen Beweistechniken, die in modernen SAT-Solvern implementiert sind, oder sind an bestimmte Heuristiken oder Datenstrukturen gebunden. Dies erschwert die Analyse beweistheoretischer Eigenschaften oder der Suchkomplexität derartiger Algorithmen. Mit diesem Artikel versuchen wir, diese Situation durch die Entwicklung eines nichtdeterministischen Beweiskalküls zu verbessern, der die Arbeitsweise von auf dem DPLL-Kalkül basierenden SAT-Algorithmen mit Klausellernen modelliert. Dieser Kalkül ist unabhängig von Implementierungsdetails, aber dennoch präzise genug, um eine formale Analyse realistischer DPLL-basierter SAT-Algorithmen zu ermöglichen. KW - Automatisches Beweisen KW - Logikkalkül KW - SAT KW - DPLL KW - Klausellernen KW - automated theorem proving KW - logical calculus KW - SAT KW - DPLL KW - clause learning Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-29080 ER - TY - JOUR A1 - Arnold, Holger T1 - A linearized DPLL calculus with learning N2 - This paper describes the proof calculus LD for clausal propositional logic, which is a linearized form of the well-known DPLL calculus extended by clause learning. It is motivated by the demand to model how current SAT solvers built on clause learning work, while abstracting from decision heuristics and implementation details. The calculus is proved sound and terminating. Further, it is shown that both the original DPLL calculus and the conflict-directed backtracking calculus with clause learning, as it is implemented in many current SAT solvers, are complete and proof-confluent instances of the LD calculus. N2 - Dieser Artikel beschreibt den Beweiskalkül LD für aussagenlogische Formeln in Klauselform. Dieser Kalkül ist eine um Klausellernen erweiterte linearisierte Variante des bekannten DPLL-Kalküls. Er soll dazu dienen, das Verhalten von auf Klausellernen basierenden SAT-Beweisern zu modellieren, wobei von Entscheidungsheuristiken und Implementierungsdetails abstrahiert werden soll. Es werden Korrektheit und Terminierung des Kalküls bewiesen. Weiterhin wird gezeigt, dass sowohl der ursprüngliche DPLL-Kalkül als auch der konfliktgesteuerte Rücksetzalgorithmus mit Klausellernen, wie er in vielen aktuellen SAT-Beweisern implementiert ist, vollständige und beweiskonfluente Spezialisierungen des LD-Kalküls sind. KW - SAT KW - DPLL KW - Klausellernen KW - Automatisches Beweisen KW - SAT KW - DPLL KW - Clause Learning KW - Automated Theorem Proving Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-15421 ER - TY - THES A1 - Scherfenberg, Ivonne T1 - A logic-based Framework to enable Attribute Assurance for Digital Identities in Service-oriented Architectures and the Web Y1 - 2012 CY - Potsdam ER - TY - JOUR A1 - Besnard, Philippe A1 - Hunter, Anthony T1 - A logic-based theory of deductive arguments Y1 - 2001 SN - 0004-3702 ER - TY - JOUR A1 - Saposhnikov, V. V. A1 - Saposhnikov, Vl. V.
A1 - Gössel, Michael A1 - Morosov, Andrej T1 - A method of construction of combinational self-checking units with detection of all single faults Y1 - 1999 ER - TY - THES A1 - Andjelkovic, Marko T1 - A methodology for characterization, modeling and mitigation of single event transient effects in CMOS standard combinational cells T1 - Eine Methode zur Charakterisierung, Modellierung und Minderung von SET Effekten in kombinierten CMOS-Standardzellen N2 - With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation. Since the standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for the Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for the SPICE-based standard cell characterization with the reduced number of simulations, improved SET models and optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on characterization results, the fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to the state-of-the-art characterization methodologies which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way the amount of characterization data in the SET sensitivity database is reduced significantly. The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. 
For the SET mitigation in standard cells, it is essential to employ the techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By insertion of two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed to quantify their impact on the SET robustness improvement, as well as introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selection of the most appropriate mitigation solution for a given design. As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at the high radiation levels. To enable such a functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing the pulse stretching inverters connected in parallel enables to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires a purely digital processing logic, in contrast to the standard detectors which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency and power consumption, and immunity to error accumulation. The results achieved in this thesis can serve as a basis for establishment of an overall soft-error-aware database for a given digital library, and a comprehensive multi-level radiation-hard design flow that can be implemented with the standard IC design tools. The following step will be to evaluate the achieved results with the irradiation experiments. N2 - Mit der Verkleinerung der Strukturen moderner CMOS-Technologien sind strahlungsinduzierte Single Event Transient (SET)-Effekte in kombinatorischer Logik zu einem kritischen Zuverlässigkeitsproblem in integrierten Schaltkreisen (ICs) geworden, die für den Betrieb unter rauen Strahlungsbedingungen (z. B. im Weltraum) vorgesehen sind. Die in der Kombinationslogik erzeugten SET-Impulse können durch die Schaltungen propagieren und schließlich in Speicherelementen (z.B. Flip-Flops oder Latches) zwischengespeichert werden, was zu sogenannten Soft-Errors und folglich zu Datenbeschädigungen oder einem Systemausfall führt. Daher ist es in den frühen Phasen des strahlungsharten IC-Designs unerlässlich geworden, die SET-Effekte systematisch anzugehen. 
Im Allgemeinen sollten die Lösungen zur Minderung von Soft-Errors sowohl statische als auch dynamische Maßnahmen berücksichtigen, um die optimale Nutzung der verfügbaren Ressourcen sicherzustellen. Somit sollte ein effizientes Soft-Error-Aware-Design drei Hauptaspekte synergistisch berücksichtigen: (i) die Charakterisierung und Modellierung von Soft-Errors, (ii) eine mehrstufige-Soft-Error-Minderung und (iii) eine Online-Soft-Error-Überwachung. Obwohl signifikante Ergebnisse erzielt wurden, sind die Wirksamkeit der SET-Charakterisierung, die Genauigkeit von Vorhersagemodellen und die Effizienz der Minderungsmaßnahmen immer noch die kritischen Punkte. Daher stellt diese Arbeit die folgenden Originalbeiträge vor: • Eine ganzheitliche Methodik zur SPICE-basierten Charakterisierung von SET-Effekten in kombinatorischen Standardzellen und entsprechenden Härtungskonfigurationen auf Gate-Ebene mit reduzierter Anzahl von Simulationen und reduzierter SET-Sensitivitätsdatenbank. • Analytische Modelle für SET-Empfindlichkeit (kritische Ladung, erzeugte SET-Pulsbreite und propagierte SET-Pulsbreite), basierend auf dem Superpositionsprinzip und Anpassung der Ergebnisse aus SPICE-Simulationen. • Ein Ansatz zur SET-Abschwächung auf Gate-Ebene, der auf dem Einfügen von zwei Entkopplungszellen am Ausgang eines Logikgatters basiert, was den Anstieg der kritischen Ladung und die signifikante Unterdrückung kurzer SETs beweist. • Eine vergleichende Charakterisierung der vorgeschlagenen SET-Abschwächungstechnik mit Entkopplungszellen und sieben bestehenden Techniken durch eine quantitative Bewertung ihrer Auswirkungen auf die Verbesserung der SET-Robustheit einzelner Logikgatter. • Ein Partikeldetektor auf Basis von Impulsdehnungs-Invertern in Skew-Größe zur Online-Überwachung des Partikelflusses und der LET-Variationen mit rein digitaler Anzeige. Die in dieser Dissertation erzielten Ergebnisse können als Grundlage für den Aufbau einer umfassenden Soft-Error-aware-Datenbank für eine gegebene digitale Bibliothek und eines umfassenden mehrstufigen strahlungsharten Designflusses dienen, der mit den Standard-IC-Designtools implementiert werden kann. Im nächsten Schritt werden die mit den Bestrahlungsexperimenten erzielten Ergebnisse ausgewertet. KW - Single Event Transient KW - radiation hardness design KW - Single Event Transient KW - Strahlungshärte Entwurf Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-534843 ER - TY - THES A1 - Wolter, Christian T1 - A methodology for model-driven process security Y1 - 2010 CY - Potsdam ER - TY - JOUR A1 - Brüning, Stefan A1 - Schaub, Torsten H. T1 - A model-based approach to consistency-checking Y1 - 1996 SN - 3-540-61286-6 ER - TY - JOUR A1 - Delgrande, James A1 - Schaub, Torsten H. A1 - Tompits, Hans A1 - Woltran, Stefan T1 - A model-theoretic approach to belief change in answer set programming JF - ACM transactions on computational logic N2 - We address the problem of belief change in (nonmonotonic) logic programming under answer set semantics. Our formal techniques are analogous to those of distance-based belief revision in propositional logic. In particular, we build upon the model theory of logic programs furnished by SE interpretations, where an SE interpretation is a model of a logic program in the same way that a classical interpretation is a model of a propositional formula. Hence we extend techniques from the area of belief revision based on distance between models to belief change in logic programs. 
We first consider belief revision: for logic programs P and Q, the goal is to determine a program R that corresponds to the revision of P by Q, denoted P * Q. We investigate several operators, including (logic program) expansion and two revision operators based on the distance between the SE models of logic programs. It proves to be the case that expansion is an interesting operator in its own right, unlike in classical belief revision where it is relatively uninteresting. Expansion and revision are shown to satisfy a suite of interesting properties; in particular, our revision operators satisfy all or nearly all of the AGM postulates for revision. We next consider approaches for merging a set of logic programs, P_1, ..., P_n. Again, our formal techniques are based on notions of relative distance between the SE models of the logic programs. Two approaches are examined. The first informally selects for each program P_i those models of P_i that vary the least from models of the other programs. The second approach informally selects those models of a program P_0 that are closest to the models of programs P_1, ..., P_n. In this case, P_0 can be thought of as a set of database integrity constraints. We examine these operators with regard to how they satisfy relevant postulate sets. Last, we present encodings for computing the revision as well as the merging of logic programs within the same logic programming framework. This gives rise to a direct implementation of our approach in terms of off-the-shelf answer set solvers. These encodings also reflect the fact that our change operators do not increase the complexity of the base formalism. KW - Theory KW - Answer set programming KW - belief revision KW - belief merging KW - program encodings KW - strong equivalence Y1 - 2013 U6 - https://doi.org/10.1145/2480759.2480766 SN - 1529-3785 VL - 14 IS - 2 PB - Association for Computing Machinery CY - New York ER - TY - JOUR A1 - Sogomonyan, Egor S. A1 - Singh, Adit D. A1 - Gössel, Michael T1 - A multi-mode scannable memory element for high test application efficiency and delay testing Y1 - 1998 ER - TY - JOUR A1 - Sogomonyan, Egor S. A1 - Singh, Adit D. A1 - Gössel, Michael T1 - A multi-mode scannable memory element for high test application efficiency and delay testing Y1 - 1999 ER - TY - THES A1 - Ghasemzadeh, Mohammad T1 - A new algorithm for the quantified satisfiability problem, based on zero-suppressed binary decision diagrams and memoization T1 - Ein neuer Algorithmus für die quantifizierte Aussagenlogik, basierend auf Zero-suppressed BDDs und Memoization N2 - Quantified Boolean formulas (QBFs) play an important role in theoretical computer science. QBF extends propositional logic in such a way that many advanced forms of reasoning can be easily formulated and evaluated. In this dissertation we present ZQSAT, an algorithm for evaluating quantified Boolean formulas. ZQSAT is based on ZBDDs (Zero-Suppressed Binary Decision Diagrams), a variant of BDDs, and an adapted version of the DPLL algorithm. It has been implemented in C using the CUDD (Colorado University Decision Diagram) package. The capability of ZBDDs to store sets of subsets efficiently enabled us to store the clauses of a QBF very compactly and to embed the notion of memoization into the DPLL algorithm. These points led us to implement the search algorithm in such a way that we could store and reuse the results of all previously solved subformulas with little overhead.
ZQSAT can solve some sets of standard QBF benchmark problems (known to be hard for DPLL-based algorithms) faster than the best existing solvers. In addition to prenex-CNF, ZQSAT accepts prenex-NNF formulas. We show and prove how this capability can be exponentially beneficial. N2 - In der Dissertation stellen wir einen neuen Algorithmus vor, welcher Formeln der quantifizierten Aussagenlogik (engl. Quantified Boolean formula, kurz QBF) löst. QBFs sind eine Erweiterung der klassischen Aussagenlogik um die Quantifizierung über aussagenlogische Variablen. Die quantifizierte Aussagenlogik ist dabei eine konservative Erweiterung der Aussagenlogik, d.h. es können nicht mehr Theoreme nachgewiesen werden als in der gewöhnlichen Aussagenlogik. Der Vorteil der Verwendung von QBFs ergibt sich durch die Möglichkeit, Sachverhalte kompakter zu repräsentieren. SAT (die Frage nach der Erfüllbarkeit einer Formel der Aussagenlogik) und QSAT (die Frage nach der Erfüllbarkeit einer QBF) sind zentrale Probleme in der Informatik mit einer Fülle von Anwendungen, wie zum Beispiel in der Graphentheorie, bei Planungsproblemen, nichtmonotonen Logiken oder bei der Verifikation. Insbesondere die Verifikation von Hard- und Software ist ein sehr aktuelles und wichtiges Forschungsgebiet in der Informatik. Unser Algorithmus zur Lösung von QBFs basiert auf sogenannten ZBDDs (engl. Zero-suppressed Binary Decision Diagrams), welche eine Variante der BDDs (engl. Binary Decision Diagrams) sind. BDDs sind eine kompakte Repräsentation von Formeln der Aussagenlogik. Der Algorithmus kombiniert nun bekannte Techniken zum Lösen von QBFs mit der ZBDD-Darstellung unter Verwendung geeigneter Heuristiken und Memoization. Memoization ermöglicht dabei das einfache Wiederverwenden bereits gelöster Teilprobleme. Der Algorithmus wurde unter Verwendung des CUDD-Paketes (Colorado University Decision Diagram) implementiert und unter dem Namen ZQSAT veröffentlicht. In Tests konnten wir nachweisen, dass ZQSAT konkurrenzfähig zu existierenden QBF-Beweisern ist, in einigen Fällen sogar bessere Resultate liefern kann. KW - Binäres Entscheidungsdiagramm KW - Erfüllbarkeitsproblem KW - DPLL KW - Zero-Suppressed Binary Decision Diagram (ZDD) KW - Formeln der quantifizierten Aussagenlogik KW - Erfüllbarkeit einer Formel der Aussagenlogik KW - ZQSAT KW - DPLL KW - Zero-Suppressed Binary Decision Diagram (ZDD) KW - Quantified Boolean Formula (QBF) KW - Satisfiability KW - ZQSAT Y1 - 2005 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-6378 ER - TY - JOUR A1 - Gohlke, Mario T1 - A new approach for model-based recognition using colour regions Y1 - 1995 ER - TY - JOUR A1 - Saposhnikov, V. V. A1 - Morosov, Andrej A1 - Saposhnikov, Vl. V. A1 - Gössel, Michael T1 - A new design method for self-checking unidirectional combinational circuits Y1 - 1998 ER - TY - JOUR A1 - Richter, Peter T1 - A new deterministic approach for the optimization of cable layouts for power supply systems Y1 - 1995 ER - TY - JOUR A1 - Gössel, Michael T1 - A new method of redundancy addition for circuit optimization JF - Preprint / Universität Potsdam, Institut für Informatik Y1 - 1999 SN - 0946-7580 VL - 1999, 08 PB - Univ. CY - Potsdam ER - TY - JOUR A1 - Sogomonyan, Egor S. A1 - Gössel, Michael T1 - A new parity preserving multi-input signature analyser Y1 - 1995 ER - TY - BOOK A1 - Sogomonyan, Egor S. A1 - Marienfeld, Daniel A1 - Ocheretnij, V.
A1 - Gössel, Michael T1 - A new self-checking sum-bit duplicated carry-select adder T3 - Preprint / Universität Potsdam, Institut für Informatik Y1 - 2003 SN - 0946-7580 VL - 2003, 5 PB - Univ. CY - Potsdam ER - TY - JOUR A1 - Gössel, Michael A1 - Sogomonyan, Egor S. T1 - A new self-testing parity checker for ultra-reliable applications Y1 - 1996 ER - TY - JOUR A1 - Gössel, Michael A1 - Sogomonyan, Egor S. A1 - Morosov, Andrej T1 - A new totally error propagating compactor for arbitrary cores with digital interfaces Y1 - 1999 ER - TY - JOUR A1 - Kawanabe, Motoaki A1 - Blanchard, Gilles A1 - Sugiyama, Masashi A1 - Spokoiny, Vladimir G. A1 - Müller, Klaus-Robert T1 - A novel dimension reduction procedure for searching non-Gaussian subspaces N2 - In this article, we consider high-dimensional data which contains a low-dimensional non-Gaussian structure contaminated with Gaussian noise and propose a new linear method to identify the non-Gaussian subspace. Our method NGCA (Non-Gaussian Component Analysis) is based on a very general semi-parametric framework and has a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method Y1 - 2006 UR - http://www.springerlink.com/content/105633/ U6 - https://doi.org/10.1007/11679363_19 SN - 0302-9743 ER -