TY - JOUR
A1 - Thienen, Julia von
A1 - Noweski, Christine
A1 - Meinel, Christoph
A1 - Rauth, Ingo
T1 - The co-evolution of theory and practice in design thinking - or - "Mind the oddness trap!"
Y1 - 2011
SN - 978-3-642-13756-3
ER -
TY - JOUR
A1 - Thienen, Julia von
A1 - Noweski, Christine
A1 - Rauth, Ingo
A1 - Meinel, Christoph
A1 - Lange, Sabine
T1 - If you want to know who you are, tell me where you are : the importance of places
Y1 - 2012
ER -
TY - JOUR
A1 - Thomas, Ivonne
T1 - Reliable digital identities for SOA and the Web
Y1 - 2010
SN - 978-3-86956-036-6
ER -
TY - JOUR
A1 - Thon, Ingo
A1 - Landwehr, Niels
A1 - De Raedt, Luc
T1 - Stochastic relational processes : efficient inference and applications
JF - Machine learning
N2 - One of the goals of artificial intelligence is to develop agents that learn and act in complex environments. Realistic environments typically feature a variable number of objects, relations amongst them, and non-deterministic transition behavior. While standard probabilistic sequence models provide efficient inference and learning techniques for sequential data, they typically cannot fully capture the relational complexity. On the other hand, statistical relational learning techniques are often too inefficient to cope with complex sequential data. In this paper, we introduce a simple model that occupies an intermediate position in this expressiveness/efficiency trade-off. It is based on CP-logic (Causal Probabilistic Logic), an expressive probabilistic logic for modeling causality. However, by specializing CP-logic to represent a probability distribution over sequences of relational state descriptions and employing a Markov assumption, inference and learning become more tractable and effective. Specifically, we show how to solve part of the inference and learning problems directly at the first-order level, while transforming the remaining part into the problem of computing all satisfying assignments for a Boolean formula in a binary decision diagram. We experimentally validate that the resulting technique is able to handle probabilistic relational domains with a substantial number of objects and relations.
KW - Statistical relational learning
KW - Stochastic relational process
KW - Markov processes
KW - Time series
KW - CP-Logic
Y1 - 2011
U6 - https://doi.org/10.1007/s10994-010-5213-8
SN - 0885-6125
VL - 82
IS - 2
SP - 239
EP - 272
PB - Springer
CY - Dordrecht
ER -
TY - JOUR
A1 - Tiwari, Abhishek
A1 - Prakash, Jyoti
A1 - Groß, Sascha
A1 - Hammer, Christian
T1 - A large scale analysis of Android
BT - Web hybridization
JF - The journal of systems and software
N2 - Many Android applications embed webpages via WebView components and execute JavaScript code within Android. Hybrid applications leverage dedicated APIs to load a resource and render it in a WebView. Furthermore, Android objects can be shared with the JavaScript world. However, bridging the interfaces of the Android and JavaScript world might also incur severe security threats: Potentially untrusted webpages and their JavaScript might interfere with the Android environment and its access to native features. No general analysis is currently available to assess the implications of such hybrid apps bridging the two worlds. To understand the semantics and effects of hybrid apps, we perform a large-scale study on the usage of the hybridization APIs in the wild.
We analyze and categorize the parameters to hybridization APIs for 7,500 randomly selected and the 196 most popular applications from the Google Playstore as well as 1,000 malware samples. Our results advance the general understanding of hybrid applications, as well as implications for potential program analyses, and the current security situation: We discovered thousands of flows of sensitive data from Android to JavaScript, the vast majority of which could flow to potentially untrustworthy code. Our analysis identified numerous web pages embedding vulnerabilities, which we exemplarily exploited. Additionally, we discovered a multitude of applications in which potentially untrusted JavaScript code may interfere with (trusted) Android objects, both in benign and malign applications.
KW - Android hybrid apps
KW - static analysis
KW - information flow control
Y1 - 2020
U6 - https://doi.org/10.1016/j.jss.2020.110775
SN - 0164-1212
SN - 1873-1228
VL - 170
PB - Elsevier
CY - New York
ER -
TY - JOUR
A1 - Tran, Son Cao
A1 - Pontelli, Enrico
A1 - Balduccini, Marcello
A1 - Schaub, Torsten
T1 - Answer set planning
BT - a survey
JF - Theory and practice of logic programming
N2 - Answer Set Planning refers to the use of Answer Set Programming (ASP) to compute plans, that is, solutions to planning problems, that transform a given state of the world to another state. The development of efficient and scalable answer set solvers has provided a significant boost to the development of ASP-based planning systems. This paper surveys the progress made during the last two and a half decades in the area of answer set planning, from its foundations to its use in challenging planning domains. The survey explores the advantages and disadvantages of answer set planning. It also discusses typical applications of answer set planning and presents a set of challenges for future research.
KW - planning
KW - knowledge representation and reasoning
KW - logic programming
Y1 - 2022
U6 - https://doi.org/10.1017/S1471068422000072
SN - 1471-0684
SN - 1475-3081
PB - Cambridge University Press
CY - New York
ER -
TY - JOUR
A1 - Troeger, Peter
A1 - Merzky, Andre
T1 - Towards standardized job submission and control in infrastructure clouds
JF - Journal of grid computing
N2 - The submission and management of computational jobs is a traditional part of utility computing environments. End users and developers of domain-specific software abstractions often have to deal with the heterogeneity of such batch processing systems. This led to a number of application programming interface and job description standards in the past, which are implemented and established for cluster and Grid systems. With the recent rise of cloud computing as a new utility computing paradigm, the standardized access to batch processing facilities operated on cloud resources becomes an important issue. Furthermore, the design of such a standard has to consider a tradeoff between feature completeness and the achievable level of interoperability. The article discusses this general challenge, and presents some existing standards with traditional cluster and Grid computing background that may be applicable to cloud environments. We present OCCI-DRMAA as one approach for standardized access to batch processing facilities hosted in a cloud.
KW - Cloud
KW - IaaS
KW - DRMS
KW - DRMAA
KW - OCCI
KW - Batch processing
KW - Job submission
KW - Job monitoring
Y1 - 2014
U6 - https://doi.org/10.1007/s10723-013-9275-2
SN - 1570-7873
SN - 1572-9184
VL - 12
IS - 1
SP - 111
EP - 125
PB - Springer
CY - Dordrecht
ER -
TY - JOUR
A1 - Uflacker, Matthias
T1 - Computational analysis of virtual team collaboration in the early stages of engineering design
Y1 - 2010
SN - 978-3-86956-036-6
ER -
TY - JOUR
A1 - Uflacker, Matthias
A1 - Kowark, Thomas
A1 - Zeier, Alexander
T1 - An instrument for real-time design interaction capture
Y1 - 2011
SN - 978-3-642-13756-3
ER -
TY - JOUR
A1 - van Hooland, Seth
A1 - Verborgh, Ruben
A1 - De Wilde, Max
A1 - Hercher, Johannes
A1 - Mannens, Erik
A1 - Van de Walle, Rik
T1 - Evaluating the success of vocabulary reconciliation for cultural heritage collections
JF - Journal of the American Society for Information Science and Technology
N2 - The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
KW - semantic web
KW - metadata
KW - controlled vocabularies
Y1 - 2013
U6 - https://doi.org/10.1002/asi.22763
SN - 1532-2882
VL - 64
IS - 3
SP - 464
EP - 479
PB - Wiley-Blackwell
CY - Hoboken
ER -
TY - JOUR
A1 - Videla, Santiago
A1 - Guziolowski, Carito
A1 - Eduati, Federica
A1 - Thiele, Sven
A1 - Gebser, Martin
A1 - Nicolas, Jacques
A1 - Saez-Rodriguez, Julio
A1 - Schaub, Torsten H.
A1 - Siegel, Anne
T1 - Learning Boolean logic models of signaling networks with ASP
JF - Theoretical computer science
N2 - Boolean networks provide a simple yet powerful qualitative modeling approach in systems biology. However, manual identification of logic rules underlying the system being studied is in most cases out of reach. Therefore, automated inference of Boolean logical networks from experimental data is a fundamental question in this field. This paper addresses the problem of learning Boolean logic models of immediate-early response in signaling transduction networks from a prior knowledge network describing causal interactions and phosphorylation activities at a pseudo-steady state. The underlying optimization problem has so far been addressed through mathematical programming approaches and the use of dedicated genetic algorithms. In a recent work we have shown severe limitations of stochastic approaches in this domain and proposed to use Answer Set Programming (ASP), considering a simpler problem setting.
Herein, we extend our previous work in order to consider more realistic biological conditions, including numerical datasets, the presence of feedback-loops in the prior knowledge network and the necessity of multi-objective optimization. In order to cope with such extensions, we propose several discretization schemes and elaborate upon our previous ASP encoding. Towards real-world biological data, we evaluate the performance of our approach over in silico numerical datasets based on a real and large-scale prior knowledge network. The correctness of our encoding and discretization schemes is dealt with in Appendices A-B.
KW - Answer set programming
KW - Signaling transduction networks
KW - Boolean logic models
KW - Combinatorial multi-objective optimization
KW - Systems biology
Y1 - 2015
U6 - https://doi.org/10.1016/j.tcs.2014.06.022
SN - 0304-3975
SN - 1879-2294
VL - 599
SP - 79
EP - 101
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Vierheller, Janine
ED - Lamprecht, Anna-Lena
ED - Margaria, Tiziana
T1 - Exploratory Data Analysis
JF - Process Design for Natural Scientists: an agile model-driven approach
N2 - In bioinformatics the term exploratory data analysis refers to different methods for getting an overview of large biological data sets. Hence, it helps to create a framework for further analysis and hypothesis testing. The workflow facilitates this first important step in the analysis of data produced by high-throughput technologies. The results are different plots showing the structure of the measurements. The goal of the workflow is to automate the exploratory data analysis while still guaranteeing flexibility. The basic tool is the free software R.
Y1 - 2014
SN - 978-3-662-45005-5
SN - 1865-0929
IS - 500
SP - 110
EP - 126
PB - Springer
CY - Berlin
ER -
TY - JOUR
A1 - Waitelonis, Jörg
A1 - Jürges, Henrik
A1 - Sack, Harald
T1 - Remixing entity linking evaluation datasets for focused benchmarking
JF - Semantic Web
N2 - In recent years, named entity linking (NEL) tools were primarily developed in terms of a general approach, whereas today numerous tools are focusing on specific domains such as the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets necessary for the evaluation of NEL tools do not reflect this focalizing trend. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL [in: Proceedings of the 24th International Conference on World Wide Web (WWW’15), International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 2015, pp. 1133–1143, Semantic Web 9(5) (2018), 605–625] and all its benchmark datasets. Based on these insights we have extended the GERBIL framework to enable a more fine-grained evaluation and in-depth analysis of the available benchmark datasets with respect to different emphases. This paper presents the implementation of an adaptive filter for arbitrary entities and customized benchmark creation as well as the automated determination of typical NEL benchmark dataset properties, such as the extent of content-related ambiguity and diversity. These properties are integrated on different levels, which also enables tailoring customized new datasets out of the existing ones by remixing documents based on desired emphases.
Besides a new system library to enrich provided NIF [in: International Semantic Web Conference (ISWC’13), Lecture Notes in Computer Science, Vol. 8219, Springer, Berlin, Heidelberg, 2013, pp. 98–113] datasets with statistical information, best practices for dataset remixing are presented, along with an in-depth analysis of the performance of entity linking systems on special focus datasets.
KW - Entity Linking
KW - GERBIL
KW - evaluation
KW - benchmark
Y1 - 2019
U6 - https://doi.org/10.3233/SW-180334
SN - 1570-0844
SN - 2210-4968
VL - 10
IS - 2
SP - 385
EP - 412
PB - IOS Press
CY - Amsterdam
ER -
TY - JOUR
A1 - Walton, Douglas
A1 - Gordon, Thomas F.
T1 - Formalizing informal logic
JF - Informal logic : reasoning and argumentation in theory and practice
N2 - In this paper we investigate the extent to which formal argumentation models can handle ten basic characteristics of informal logic identified in the informal logic literature. By showing how almost all of these characteristics can be successfully modelled formally, we claim that good progress can be made toward the project of formalizing informal logic. Of the formal argumentation models available, we chose the Carneades Argumentation System (CAS), a formal, computational model of argument that uses argument graphs as its basis, structures of a kind very familiar to practitioners of informal logic through their use of argument diagrams.
KW - informal logic
KW - formal argumentation systems
KW - real arguments
KW - premise acceptability
KW - conductive argument
KW - RSA triangle
KW - relevance
KW - sufficiency
Y1 - 2015
SN - 0824-2577
VL - 35
IS - 4
SP - 508
EP - 538
PB - Centre for Research in Reasoning, Argumentation and Rhetoric, University of Windsor
CY - Windsor
ER -
TY - JOUR
A1 - Wang, Kewen
T1 - A comparative study of disjunctive well-founded semantics
Y1 - 2001
SN - 3-540-42593-4
ER -
TY - JOUR
A1 - Wang, Kewen
T1 - Disjunctive well-founded semantics revisited
Y1 - 2001
ER -
TY - JOUR
A1 - Wang, Kewen
T1 - A top-down procedure for disjunctive well-founded semantics
Y1 - 2001
ER -
TY - JOUR
A1 - Wang, Kewen
T1 - A top-down procedure for disjunctive well-founded semantics
Y1 - 2001
SN - 3-540-42254-4
ER -
TY - JOUR
A1 - Wang, Kewen
T1 - Argumentation-based abduction in disjunctive logic programming
Y1 - 2000
ER -
TY - JOUR
A1 - Wang, Kewen
A1 - Zhou, Lizhu
T1 - An extension to GCWA and query evaluation for disjunctive deductive databases
Y1 - 2001
ER -