TY - JOUR
A1 - Childs, Dorothee
A1 - Grimbs, Sergio
A1 - Selbig, Joachim
T1 - Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns
JF - Bioinformatics
N2 - Motivation: Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Results: Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that avoids such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that potential instabilities stem mainly from mutations in the enzyme alpha-ketoglutarate dehydrogenase.
Y1 - 2015
U6 - https://doi.org/10.1093/bioinformatics/btv243
SN - 1367-4803
SN - 1460-2059
VL - 31
IS - 12
SP - 214
EP - 220
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Lemcke, Stefanie
A1 - Haedge, Kora
A1 - Zender, Raphael
A1 - Lucke, Ulrike
T1 - RouteMe: a multilevel pervasive game on mobile ad hoc routing
JF - Personal and ubiquitous computing
N2 - Pervasive educational games have the potential to transfer learning content to real-life experiences beyond lecture rooms by realizing field trips in an augmented or virtual manner. This article introduces the pervasive educational game "RouteMe", which brings the rather abstract topic of routing in ad hoc networks to real-world environments. The game is designed for university-level courses and supports them in a motivating manner to deepen the learning experience. Students slip into the role of either routing nodes or applications with routing demands. On three consecutive levels of difficulty, they are introduced to the game concept, learn the basic routing mechanisms, and become aware of the general limitations and functionality of routing nodes. This paper presents the pedagogical and technical game concept as well as findings from an evaluation in a university setting.
KW - E-learning
KW - Educational game
KW - Mobile application
KW - Pervasive computing
KW - Location awareness
KW - Ad hoc routing
KW - AODV
Y1 - 2015
U6 - https://doi.org/10.1007/s00779-015-0843-2
SN - 1617-4909
SN - 1617-4917
VL - 19
IS - 3-4
SP - 537
EP - 549
PB - Springer
CY - London
ER -
TY - JOUR
A1 - Videla, Santiago
A1 - Guziolowski, Carito
A1 - Eduati, Federica
A1 - Thiele, Sven
A1 - Gebser, Martin
A1 - Nicolas, Jacques
A1 - Saez-Rodriguez, Julio
A1 - Schaub, Torsten H.
A1 - Siegel, Anne
T1 - Learning Boolean logic models of signaling networks with ASP
JF - Theoretical computer science
N2 - Boolean networks provide a simple yet powerful qualitative modeling approach in systems biology. However, manual identification of the logic rules underlying the system being studied is in most cases out of reach. Therefore, automated inference of Boolean logical networks from experimental data is a fundamental question in this field. This paper addresses the problem of learning Boolean logic models of the immediate-early response in signaling transduction networks from a prior knowledge network describing causal interactions and from phosphorylation activities measured at a pseudo-steady state. The underlying optimization problem has so far been addressed through mathematical programming approaches and the use of dedicated genetic algorithms. In recent work, we showed severe limitations of stochastic approaches in this domain and proposed the use of Answer Set Programming (ASP), considering a simpler problem setting. Herein, we extend our previous work to consider more realistic biological conditions, including numerical datasets, the presence of feedback loops in the prior knowledge network, and the necessity of multi-objective optimization. In order to cope with such extensions, we propose several discretization schemes and elaborate upon our previous ASP encoding. As a step towards real-world biological data, we evaluate the performance of our approach on in silico numerical datasets based on a real, large-scale prior knowledge network. The correctness of our encoding and discretization schemes is addressed in Appendices A-B. (C) 2014 Elsevier B.V. All rights reserved.
KW - Answer set programming
KW - Signaling transduction networks
KW - Boolean logic models
KW - Combinatorial multi-objective optimization
KW - Systems biology
Y1 - 2015
U6 - https://doi.org/10.1016/j.tcs.2014.06.022
SN - 0304-3975
SN - 1879-2294
VL - 599
SP - 79
EP - 101
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Lindauer, Marius
A1 - Hoos, Holger H.
A1 - Hutter, Frank
A1 - Schaub, Torsten H.
T1 - An automatically configured algorithm selector
JF - The journal of artificial intelligence research
N2 - Algorithm selection (AS) techniques - which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently - have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly parameterized algorithm framework.
Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate that AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.
Y1 - 2015
SN - 1076-9757
SN - 1943-5037
VL - 53
SP - 745
EP - 778
PB - AI Access Foundation
CY - Marina del Rey
ER -
TY - JOUR
A1 - Walton, Douglas
A1 - Gordon, Thomas F.
T1 - Formalizing informal logic
JF - Informal logic : reasoning and argumentation in theory and practice
N2 - In this paper we investigate the extent to which formal argumentation models can handle ten basic characteristics of informal logic identified in the informal logic literature. By showing how almost all of these characteristics can be successfully modelled formally, we claim that good progress can be made toward the project of formalizing informal logic. Of the formal argumentation models available, we chose the Carneades Argumentation System (CAS), a formal, computational model of argument that uses argument graphs as its basis, structures of a kind very familiar to practitioners of informal logic through their use of argument diagrams.
KW - informal logic
KW - formal argumentation systems
KW - real arguments
KW - premise acceptability
KW - conductive argument
KW - RSA triangle
KW - relevance
KW - sufficiency
Y1 - 2015
SN - 0824-2577
VL - 35
IS - 4
SP - 508
EP - 538
PB - Centre for Research in Reasoning, Argumentation and Rhetoric, University of Windsor
CY - Windsor
ER -
TY - JOUR
A1 - Fichte, Johannes Klaus
A1 - Szeider, Stefan
T1 - Backdoors to tractable answer set programming
JF - Artificial intelligence
N2 - Answer Set Programming (ASP) is an increasingly popular framework for declarative programming that admits the description of problems by means of rules and constraints that form a disjunctive logic program. In particular, many AI problems, such as reasoning in a nonmonotonic setting, can be directly formulated in ASP. Although the main problems of ASP are of high computational complexity (complete for the second level of the Polynomial Hierarchy), several restrictions of ASP have been identified in the literature under which ASP problems become tractable. In this paper we use the concept of backdoors to identify new restrictions that make ASP problems tractable. Small backdoors are sets of atoms that represent "clever reasoning shortcuts" through the search space and capture a hidden structure in the problem input. The concept of backdoors is widely used in theoretical investigations in the areas of propositional satisfiability and constraint satisfaction. We show that it can be fruitfully adapted to ASP. We demonstrate how backdoors can serve as a unifying framework that accommodates several tractable restrictions of ASP known from the literature. Furthermore, we show how backdoors allow us to deploy recent algorithmic results from parameterized complexity theory to the domain of answer set programming. (C) 2015 Elsevier B.V. All rights reserved.
KW - Answer set programming
KW - Backdoors
KW - Computational complexity
KW - Parameterized complexity
KW - Kernelization
Y1 - 2015
U6 - https://doi.org/10.1016/j.artint.2014.12.001
SN - 0004-3702
SN - 1872-7921
VL - 220
SP - 64
EP - 103
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Ahmad, Nadeem
A1 - Shoaib, Umar
A1 - Prinetto, Paolo
T1 - Usability of Online Assistance From Semiliterate Users' Perspective
JF - International journal of human computer interaction
Y1 - 2015
U6 - https://doi.org/10.1080/10447318.2014.925772
SN - 1044-7318
SN - 1532-7590
VL - 31
IS - 1
SP - 55
EP - 64
PB - Taylor & Francis Group
CY - Philadelphia
ER -
TY - JOUR
A1 - Hoos, Holger
A1 - Kaminski, Roland
A1 - Lindauer, Marius
A1 - Schaub, Torsten H.
T1 - aspeed: Solver scheduling via answer set programming
JF - Theory and practice of logic programming
N2 - Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances and to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. This ease of customization also enabled us to swiftly adapt our approach to generate parallel schedules for multi-processor machines.
KW - algorithm schedules
KW - answer set programming
KW - portfolio-based solving
Y1 - 2015
U6 - https://doi.org/10.1017/S1471068414000015
SN - 1471-0684
SN - 1475-3081
VL - 15
SP - 117
EP - 142
PB - Cambridge Univ. Press
CY - New York
ER -
TY - JOUR
A1 - Liang, Feng
A1 - Liu, Yunzhen
A1 - Liu, Hai
A1 - Ma, Shilong
A1 - Schnor, Bettina
T1 - A Parallel Job Execution Time Estimation Approach Based on User Submission Patterns within Computational Grids
JF - International journal of parallel programming
N2 - Scheduling performance in computational grids can benefit considerably from accurate execution time estimation for parallel jobs. Most existing approaches to parallel job execution time estimation, however, require ample past job traces and explicit correlations between the job execution time and outer layout parameters such as the number of consumed processors, the user-estimated execution time and the job ID, which are hard to obtain or to reveal. This paper presents and evaluates a novel execution time estimation approach for parallel jobs, user-behavior clustering for execution time estimation, which gives more accurate execution time estimates for parallel jobs by exploring job similarity and revealing user submission patterns.
Experimental results show that, compared to state-of-the-art algorithms, our approach can improve the accuracy of job execution time estimation by up to 5.6 %, while the time our approach spends on calculation can be reduced by up to 3.8 %.
KW - User submission pattern
KW - Parallel job execution time estimation
KW - Computational grid
Y1 - 2015
U6 - https://doi.org/10.1007/s10766-013-0294-1
SN - 0885-7458
SN - 1573-7640
VL - 43
IS - 3
SP - 440
EP - 454
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Jung, Jörg
A1 - Kiertscher, Simon
A1 - Menski, Sebastian
A1 - Schnor, Bettina
T1 - Self-Adapting Load Balancing for DNS
JF - Journal of networks
N2 - The Domain Name System (DNS) belongs to the core services of the Internet infrastructure. Hence, DNS availability and performance are essential for the operation of the Internet, and replication as well as load balancing are used for the root and top-level name servers. This paper proposes an architecture for credit-based server load balancing (SLB) for DNS. Compared to traditional load balancing algorithms like round robin or least connection, the benefit of credit-based SLB is that the load balancer can adapt more easily to heterogeneous load requests and back-end server capacities. The challenge of this approach is the definition of a suitable credit metric. While this was done before for TCP-based services like HTTP, the problem had not been solved for UDP-based services like DNS. In the following, an approach is presented to define credits for UDP-based services as well. This UDP/DNS approach is implemented within the credit-based SLB implementation salbnet. The presented measurements confirm the benefit of the self-adapting credit-based SLB approach. In our experiments, the mean (first) response time dropped significantly compared to weighted round robin (WRR), from over 4 ms to about 0.6 ms for dynamic pressure relieve (DPR).
KW - Load Balancing
KW - Cluster Computing
KW - Performance Evaluation
Y1 - 2015
U6 - https://doi.org/10.1109/SPECTS.2014.6879994
VL - 10
IS - 4
SP - 222
EP - 231
PB - Kluwer Academic Publishers
CY - Oulu
ER -