Institut für Informatik und Computational Science
In this paper, we present a study comparing the depth-to-diameter (d/D) ratio of small simple craters (200-1000 m) in an area between -88.5 and -90 degrees latitude at the lunar south pole containing Permanently Shadowed Regions (PSRs) versus craters without PSRs. As PSRs can reach temperatures as low as 110 K and are capable of harboring volatiles, especially water ice, we analyzed the d/D ratios and their possible implications for harboring water ice. Variations in the d/D ratios can also be caused by other processes such as degradation, isostatic adjustment, or differences in surface properties. The conducted d/D ratio analysis suggests that craters containing PSRs can be differentiated from craters without PSRs. Thus, a direct relation between d/D ratio, PSRs, and the harboring of water ice might exist. Our results suggest that differences in the target's surface properties may explain the obtained results. The resulting d/D ratios of craters with PSRs can help to select target areas for future In-Situ Resource Utilization (ISRU) missions.
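As a minimal illustration of the kind of comparison described here (the crater records, field layout, and choice of significance test below are assumptions, not the paper's data or method):

```python
# Illustrative sketch: compare d/D distributions of PSR vs. non-PSR craters.
# Crater values are invented; a real analysis would use measured depths/diameters.
import numpy as np
from scipy import stats

# Hypothetical crater records: (depth_m, diameter_m, has_psr)
craters = [
    (180.0, 900.0, True), (150.0, 800.0, True),
    (90.0, 700.0, False), (110.0, 850.0, False),
]

dD_psr = np.array([d / D for d, D, psr in craters if psr])
dD_no_psr = np.array([d / D for d, D, psr in craters if not psr])

# A two-sample test is one plausible way to check for differentiation.
t, p = stats.ttest_ind(dD_psr, dD_no_psr, equal_var=False)
print(f"mean d/D with PSR: {dD_psr.mean():.3f}, "
      f"without PSR: {dD_no_psr.mean():.3f}, p={p:.3f}")
```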
We discuss the relaxation of a class of nonlinear elliptic Cauchy problems with data on a piece S of the boundary surface by means of a variational approach known in the optimal control literature as the "equation error method". By the Cauchy problem is meant any boundary value problem for an unknown function y in a domain X with the property that the data on S, if combined with the differential equations in X, allow one to determine all derivatives of y on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We also admit overdetermined elliptic systems, in which case the set of those Cauchy data on S for which the Cauchy problem is solvable is very "thin". For this reason we discuss a variational setting of the Cauchy problem which always possesses a generalised solution.
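As a schematic illustration only (the operator A, data u_0, u_1, and the exact form of the functional are our assumed notation, not taken from the paper):

```latex
% Schematic equation-error relaxation: instead of solving Ay = 0 in X with
% Cauchy data y = u_0, \partial_\nu y = u_1 on S exactly, one minimises the
% combined residual over an admissible space V,
\[
  \mathcal{E}(y) \;=\; \|Ay\|_{L^2(X)}^2
  \;+\; \|y - u_0\|_{L^2(S)}^2
  \;+\; \|\partial_\nu y - u_1\|_{L^2(S)}^2
  \;\longrightarrow\; \min_{y \in V},
\]
% whose infimum exists even when the Cauchy problem itself is unsolvable,
% in the spirit of the generalised solution mentioned in the abstract.
```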
User Experience (UX) describes the holistic experience of a user before, during, and after interaction with a platform, product, or service. UX adds value and attractiveness beyond mere functionality and is therefore highly relevant for firms. The increased interest in UX has produced a vast amount of scholarly research since 1983. The research field is, therefore, complex and scattered. Conducting a bibliometric analysis, we aim to structure the field quantitatively and rather abstractly. We employed citation analyses, co-citation analyses, and content analyses to evaluate the productivity and impact of extant research. We suggest that future research should focus more on business and management related topics.
Especially for the sciences, the provision of massive parallel CPU capacity is one of the most attractive features of a grid. A major challenge in a distributed, inherently dynamic grid is fault tolerance. The more resources and components involved, the more complicated and error-prone the system becomes. In a grid with potentially thousands of machines connected to each other, the reliability of individual resources cannot be guaranteed. The benefit of the grid is that in case of a failure an application may be migrated and restarted from a checkpoint file on another site. This approach requires a service infrastructure which handles the necessary activities transparently. In this article, we present Migol, a fault-tolerant and self-healing grid middleware for MPI applications. Migol is based on open standards and extends the services of the Globus toolkit to support the fault tolerance of grid applications. Further, the Migol framework itself is designed with a special focus on fault tolerance. For example, Migol replicates critical services and uses a ring-based replication protocol to achieve data consistency. (c) 2007 Elsevier B.V. All rights reserved.
Owing to the loose coupling between replicas, the replica-exchange (RE) class of algorithms should be able to benefit greatly from using as many resources as available. However, the ability to effectively use multiple distributed resources to reduce the time to completion remains a challenge at many levels. Additionally, an implementation of a pleasingly distributed algorithm such as replica-exchange that is independent of infrastructural details does not exist. This paper proposes an extensible and scalable framework based on the Simple API for Grid Applications (SAGA) that provides a general-purpose, opportunistic mechanism to effectively use multiple resources in an infrastructure-independent way. By analysing the requirements of the RE algorithm and the challenges of implementing it on real production systems, we propose a new abstraction (BIGJOB), which forms the basis of the adaptive redistribution and effective scheduling of replicas.
Researchers and developers worldwide have put their efforts into the design, development and use of information and communication technology to support teaching and learning. This research is driven by pedagogical as well as technological disciplines. The most challenging ideas are currently found in the application of mobile, ubiquitous, pervasive, contextualized and seamless technologies for education, which we shall refer to as pervasive education. This article provides a comprehensive overview of the existing work in this field and categorizes it with respect to educational settings. Using this approach, best practice solutions for certain educational settings and open questions for pervasive education are highlighted in order to inspire interested developers and educators. The work is assigned to different fields, identified by the main pervasive technologies used and the educational settings. Based on these assignments we identify areas within pervasive education that are currently disregarded or deemed challenging so that further research and development in these fields are stimulated in a trans-disciplinary approach. (C) 2013 Elsevier B.V. All rights reserved.
Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. But the few existing tools which support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes in the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification tools, leveraging smart, very fast, and domain-specialized data structures and algorithms including Header Space Analysis (HSA). First, we introduce the formal language FPL that enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices - including stateful firewalls - for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outscales state-of-the-art tools by a factor of over 41.
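To give a feel for the general idea of making stateful behaviour explicit as stateless rules (this toy transform is NOT the paper's state shell interweaving algorithm; the rule fields are invented):

```python
# Toy illustration: a stateful "allow and track" rule implicitly permits the
# established return flow; a stateless rewrite must state that flow explicitly.
def to_stateless(rule):
    forward = {"src": rule["src"], "dst": rule["dst"], "dport": rule["dport"]}
    # The implicit "established" return traffic becomes an explicit rule.
    reverse = {"src": rule["dst"], "dst": rule["src"], "sport": rule["dport"]}
    return [forward, reverse]

stateful = {"src": "10.0.0.0/8", "dst": "192.0.2.1", "dport": 443}
print(to_stateless(stateful))
```

A stateless rule set of this shape can then be handed to fast data plane verification engines such as those built on Header Space Analysis.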
In recent years, SPAM has grown into the greatest threat to e-mail communication, yet it is not limited to this communication channel. With the rising number of VoIP lines, their users will also be confronted with SPAM calls (SPIT). In addition to the legal measures currently under discussion, technical countermeasures that can detect and prevent SPAM must be created as well. This contribution presents two extensions to the VoIP protocol SIP which, first, enable providers to convey SPIT assessments of the caller to the called user and, second, give the called party the option to respond to potential SPIT calls with a payment request.
Two mapping populations of a cross between the Arabidopsis thaliana accessions Col-0 and C24 were cultivated and analyzed with respect to the levels of 181 metabolites to elucidate the biological phenomenon of heterosis at the metabolic level. The relative mid-parent heterosis in the F1 hybrids was <20% for most metabolic traits. The first mapping population, consisting of 369 recombinant inbred lines (RILs) and their test cross progeny with both parents, allowed us to determine the position and effect of 147 quantitative trait loci (QTL) for metabolite absolute mid-parent heterosis (aMPH). Furthermore, we identified 153 and 83 QTL for augmented additive (Z(1)) and dominance effects (Z(2)), respectively. We identified putative candidate genes for these QTL using the ARACYC database (http://www.arabidopsis.org/biocyc), and calculated the average degree of dominance, which was within the dominance and over-dominance range for most metabolites. Analyzing a second population of 41 introgression lines (ILs) and their test crosses with the recurrent parent, we identified 634 significant differences in metabolite levels. Nine percent of these effects were classified as over-dominant, according to the mode of inheritance. A comparison of both approaches suggested epistasis as a major contributor to metabolite heterosis in Arabidopsis. A linear combination of metabolite levels was shown to significantly correlate with biomass heterosis (r = 0.62).
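For reference, the standard definition of relative mid-parent heterosis used with trait values of this kind (whether the paper uses exactly this normalization is an assumption):

```latex
% MP is the mid-parent value of the parental lines P_1 (Col-0) and P_2 (C24);
% relative mid-parent heterosis compares the F_1 trait value against MP.
\[
  \mathrm{MP} = \frac{P_1 + P_2}{2}, \qquad
  \mathrm{MPH}\,[\%] = 100 \cdot \frac{F_1 - \mathrm{MP}}{\mathrm{MP}} .
\]
```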
In this project, I constructed a workflow that takes a DNA sequence as input and produces a phylogenetic tree consisting of the input sequence and other sequences found during a database search. In this phylogenetic tree, the sequences are arranged according to similarity. In bioinformatics, constructing phylogenetic trees is often used to explore the evolutionary relationships of genes or organisms and to understand the mechanisms of evolution itself.
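A minimal sketch of the tree-building end of such a workflow using Biopython (the database search and alignment steps are omitted; the file name and the choice of distance model and neighbour joining are assumptions, not necessarily the project's tools):

```python
# Build a similarity-based tree from an existing multiple alignment of the
# input sequence plus its database hits (assumed to be in hits_aligned.fasta).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("hits_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")                     # pairwise distances
constructor = DistanceTreeConstructor(calculator, method="nj")  # neighbour joining
tree = constructor.build_tree(alignment)

Phylo.draw_ascii(tree)  # sequences grouped by similarity
```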
More on nomore
(2002)
Since 2004, increases in computational power described by Moore's law have substantially been realized in the form of additional cores rather than through faster clock speeds. To make effective use of modern hardware when solving hard computational problems, it is therefore necessary to employ parallel solution strategies. In this work, we demonstrate how effective parallel solvers for propositional satisfiability (SAT), one of the most widely studied NP-complete problems, can be produced automatically from any existing sequential, highly parametric SAT solver. Our Automatic Construction of Parallel Portfolios (ACPP) approach uses an automatic algorithm configuration procedure to identify a set of configurations that perform well when executed in parallel. Applied to two prominent SAT solvers, Lingeling and clasp, our ACPP procedure identified 8-core solvers that significantly outperformed their sequential counterparts on a diverse set of instances from the application and hard combinatorial categories of the 2012 SAT Challenge. We further extended our ACPP approach to produce parallel portfolio solvers consisting of several different solvers by combining their configuration spaces. Applied to the component solvers of the 2012 SAT Challenge gold medal winning SAT solver pfolioUZK, our ACPP procedures produced a significantly better-performing parallel SAT solver.
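A hedged sketch of the core idea behind portfolio construction: greedily add the configuration that most improves the portfolio's per-instance best runtime. The runtime data is invented, and the real ACPP approach uses an automatic algorithm configurator rather than this exhaustive greedy loop:

```python
runtimes = {  # configuration -> runtime per benchmark instance (seconds)
    "cfg_a": [10.0, 300.0, 5.0],
    "cfg_b": [200.0, 20.0, 400.0],
    "cfg_c": [15.0, 250.0, 6.0],
}

def portfolio_cost(portfolio):
    """Total runtime when all components run in parallel: per-instance minimum."""
    n = len(next(iter(runtimes.values())))
    return sum(min(runtimes[c][i] for c in portfolio) for i in range(n))

portfolio = []
for _ in range(2):  # build a 2-core portfolio
    best = min((c for c in runtimes if c not in portfolio),
               key=lambda c: portfolio_cost(portfolio + [c]))
    portfolio.append(best)

print(portfolio, portfolio_cost(portfolio))
```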
Algorithm selection (AS) techniques - which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently - have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly-parameterized algorithm framework. Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate that AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.
Scheduling performance in computational grids can potentially benefit greatly from accurate execution time estimation for parallel jobs. Most existing approaches to parallel job execution time estimation, however, require ample past job traces and explicit correlations between the job execution time and outer layout parameters such as the consumed processor numbers, the user-estimated execution time and the job ID, which are hard to obtain or reveal. This paper presents and evaluates a novel execution time estimation approach for parallel jobs, user-behavior clustering for execution time estimation, which can give more accurate execution time estimates for parallel jobs by exploring job similarity and revealing user submission patterns. Experimental results show that, compared to state-of-the-art algorithms, our approach can improve the accuracy of job execution time estimation by up to 5.6%, while the calculation time can be reduced by up to 3.8%.
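A hedged sketch of the general idea: cluster a user's past jobs and predict a new job's runtime from its cluster mean. The features, data, and choice of k-means are illustrative assumptions; the paper's actual clustering features and estimator may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

# Past jobs of one user: (requested CPUs, submission hour) -> observed runtime (s)
features = np.array([[8, 9], [8, 10], [64, 22], [64, 23], [8, 11]])
runtimes = np.array([120.0, 130.0, 3600.0, 3500.0, 125.0])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
cluster_mean = {c: runtimes[model.labels_ == c].mean() for c in set(model.labels_)}

new_job = np.array([[8, 12]])                      # a freshly submitted job
predicted = cluster_mean[model.predict(new_job)[0]]
print(f"estimated runtime: {predicted:.0f} s")
```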
The soft error rate (SER) due to heavy-ion irradiation of a clock tree is investigated in this paper. A method for clock tree SER prediction is developed, which employs a dedicated soft error analysis tool to characterize the single-event transient (SET) sensitivities of clock inverters and other commercial tools to calculate the SER through fault-injection simulations. A test circuit including a flip-flop chain and clock tree in a 65 nm CMOS technology is developed through the automatic ASIC design flow. This circuit is analyzed with the developed method to calculate its clock tree SER. In addition, this circuit is implemented in a 65 nm test chip and irradiated by heavy ions to measure its SER resulting from the SETs in the clock tree. The experimental and calculation results of this case study present good correlation, which verifies the effectiveness of the developed method.
A triple modular redundancy (TMR) based design technique for double cell upsets (DCUs) mitigation is investigated in this paper. This technique adds three extra self-voter circuits into a traditional TMR structure to enable the enhanced error correction capability. Fault-injection simulations show that the soft error rate (SER) of the proposed technique is lower than 3% of that of TMR. The implementation of this proposed technique is compatible with the automatic digital design flow, and its applicability and performance are evaluated on an FIFO circuit.
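For orientation, the majority voting at the heart of any TMR scheme (generic textbook logic, not the paper's self-voter circuit) masks a single upset but is defeated when a double cell upset hits two replicas, which is exactly the gap the proposed technique targets:

```python
# Bitwise 2-of-3 majority voter over three register replicas.
def vote(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

correct = 0b1010
assert vote(correct, correct ^ 0b0100, correct) == correct           # SEU: masked
assert vote(correct ^ 0b0100, correct ^ 0b0100, correct) != correct  # DCU: not masked
```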
When decomposing single-trial electroencephalography it is a challenge to incorporate prior physiological knowledge. Here, we develop a method that uses prior information about the phase-locking property of event-related potentials in a regularization framework to bias a blind source separation algorithm toward an improved separation of single-trial phase-locked responses in terms of an increased signal-to-noise ratio. In particular, we suggest a transformation of the data, using a weighted average of the single trial and the trial-averaged response, that redirects the focus of source separation methods onto the subspace of event-related potentials. The practical benefit with respect to an improved separation of such components from ongoing background activity and extraneous noise is first illustrated on artificial data and finally verified in a real-world application of extracting single-trial somatosensory evoked potentials from multichannel EEG recordings.
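A hedged numeric sketch of the suggested transformation: mixing each single trial with the trial average to emphasise the phase-locked subspace before source separation. The weight w, array shapes, and toy signal are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 16, 200))        # trials x channels x samples
erp = np.sin(np.linspace(0, 3 * np.pi, 200))  # toy phase-locked component
X[:, 0, :] += erp                             # embed it in channel 0

w = 0.5
X_avg = X.mean(axis=0, keepdims=True)         # trial-averaged response
X_tilde = (1 - w) * X + w * X_avg             # weighted single-trial/average mix

# X_tilde now has increased SNR for the phase-locked part and would be fed
# into the blind source separation algorithm instead of the raw trials.
print(X_tilde.shape)
```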
Data recorded in electroencephalogram (EEG)-based brain-computer interface experiments is generally very noisy, non-stationary, and contaminated with artifacts that can deteriorate discrimination/classification methods. In this paper, we extend the common spatial pattern (CSP) algorithm with the aim of alleviating these adverse effects. In particular, we suggest an extension of CSP to the state space, which utilizes the method of time delay embedding. As we will show, this allows for individually tuned frequency filters at each electrode position and, thus, yields an improved and more robust machine learning procedure. The advantages of the proposed method over the original CSP method are verified in terms of an improved information transfer rate (bits per trial) on a set of EEG recordings from experiments with imagined limb movements.
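A hedged sketch of time delay embedding as used to lift CSP to the state space: each channel is augmented with delayed copies of itself, so a subsequent spatial filter effectively acts like a per-channel FIR frequency filter. Shapes and the number of delays are illustrative assumptions:

```python
import numpy as np

def delay_embed(X: np.ndarray, delays: int) -> np.ndarray:
    """X: channels x samples -> (channels*delays) x (samples-delays+1)."""
    C, T = X.shape
    return np.vstack([X[:, d:T - delays + 1 + d] for d in range(delays)])

X = np.random.randn(4, 1000)   # 4-channel toy EEG segment
Xe = delay_embed(X, delays=3)  # 12 "virtual" channels
print(Xe.shape)                # (12, 998)
```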
Pervasive educational games have the potential to transfer learning content to real-life experiences beyond lecture rooms by realizing field trips in an augmented or virtual manner. This article introduces the pervasive educational game "RouteMe" that brings the rather abstract topic of routing in ad hoc networks to real-world environments. The game is designed for university-level courses and supports these courses in a motivating manner to deepen the learning experience. Students slip into the role of either routing nodes or applications with routing demands. On three consecutive levels of difficulty, they are introduced to the game concept, learn the basic routing mechanisms, and become aware of the general limitations and functionality of routing nodes. This paper presents the pedagogical and technical game concept as well as findings from an evaluation in a university setting.
Two common data representations are mostly used in intelligent data analysis, namely the vectorial and the pairwise representation. Pairwise data which satisfy the restrictive conditions of Euclidean spaces can be faithfully translated into a Euclidean vectorial representation by embedding. Non-metric pairwise data with violations of symmetry, reflexivity or the triangle inequality pose a substantial conceptual problem for pattern recognition, since the amount of predictive structural information beyond what can be measured by embeddings is unclear. We show by systematic modeling of non-Euclidean pairwise data that there exist metric violations which can carry valuable problem-specific information. Furthermore, Euclidean and non-metric data can be unified on the level of structural information contained in the data. Stable component analysis selects linear subspaces which are particularly insensitive to data fluctuations. Experimental results from different domains support our pattern recognition strategy.
Incremental Support Vector Machines (SVM) are instrumental in practical applications of online learning. This work focuses on the design and analysis of efficient incremental SVM learning, with the aim of providing a fast, numerically stable and robust implementation. A detailed analysis of convergence and of algorithmic complexity of incremental SVM learning is carried out. Based on this analysis, a new design of storage and numerical operations is proposed, which speeds up the training of an incremental SVM by a factor of 5 to 20. The performance of the new algorithm is demonstrated in two scenarios: learning with limited resources and active learning. Various applications of the algorithm, such as in drug discovery, online monitoring of industrial devices and surveillance of network traffic, can be foreseen.
Characterizing Grids
(2003)
We present a new data model approach to describe the various objects that either represent the Grid infrastructure or make use of it. The data model is based on the experiences and experiments conducted in heterogeneous Grid environments. While very sophisticated data models exist to describe and characterize e.g. compute capacities or web services, we will show that a general description, which combines all of these aspects, is needed to give an adequate representation of objects on a Grid. The Grid Object Description Language (GODsL) is a generic and extensible approach to unify the various aspects that an object on a Grid can have. GODsL provides the content for the XML-based communication in Grid migration scenarios, carried out in the GridLab project. We describe the data model architecture on a general level and focus on the Grid application scenarios.
Lessons Learned
(2014)
This chapter summarizes the experience and the lessons we learned concerning the application of the jABC as a framework for design and execution of scientific workflows. It reports experiences from the domain modeling (especially service integration) and workflow design phases and evaluates the resulting models statistically with respect to the SIB library and hierarchy levels.
The Course's SIB Libraries
(2014)
This chapter gives a detailed description of the service framework underlying all the example projects that form the foundation of this book. It describes the different SIB libraries that we made available for the course “Process modeling in the natural sciences” to provide the functionality that was required for the envisaged applications. The students used these SIB libraries to realize their projects.
We summarize here the main characteristics and features of the jABC framework, used in the case studies as a graphical tool for modeling scientific processes and workflows. As a comprehensive environment for service-oriented modeling and design according to the XMDD (eXtreme Model-Driven Design) paradigm, the jABC offers much more than the pure modeling capability. Associated technologies and plugins provide in fact means for a rich variety of supporting functionality, such as remote service integration, taxonomical service classification, model execution, model verification, model synthesis, and model compilation. We describe here in short both the essential jABC features and the service integration philosophy followed in the environment. In our work over the last years we have seen that this kind of service definition and provisioning platform has the potential to become a core technology in interdisciplinary service orchestration and technology transfer: Domain experts, like scientists not specially trained in computer science, directly define complex service orchestrations as process models and use efficient and complex domain-specific tools in a simple and intuitive way.
A major part of the scientific experiments that are carried out today requires thorough computational support. While database and algorithm providers face the problem of bundling resources to create and sustain powerful computation nodes, the users have to deal with combining sets of (remote) services into specific data analysis and transformation processes. Today’s attention to “big data” amplifies the issues of size, heterogeneity, and process-level diversity/integration. In the last decade, especially workflow-based approaches to deal with these processes have enjoyed great popularity. This book concerns a particularly agile and model-driven approach to manage scientific workflows that is based on the XMDD paradigm. In this chapter we explain the scope and purpose of the book, briefly describe the concepts and technologies of the XMDD paradigm, explain the principal differences to related approaches, and outline the structure of the book.
Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracks between the symbolic and geometric levels make the search inefficient. We address this problem by guiding the symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.
A workflow for visualizing server connections using the Google Maps API was built in the jABC. It makes use of three basic services: an XML-based IP address geolocation web service, a command line tool, and the Static Maps API. The result of the workflow is a URL leading to an image file of a map showing the server connections between a client and a target host.
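A hedged sketch of the final workflow step: composing a Google Static Maps URL that draws a line between the client and target host locations. The coordinates here are invented (a real run would obtain them from the geolocation service), and the API key is a placeholder:

```python
from urllib.parse import urlencode

client = (52.39, 13.06)    # e.g. geolocated client IP
target = (37.42, -122.08)  # e.g. geolocated server IP

params = {
    "size": "640x320",
    "markers": f"{client[0]},{client[1]}|{target[0]},{target[1]}",
    "path": f"{client[0]},{client[1]}|{target[0]},{target[1]}",
    "key": "YOUR_API_KEY",  # placeholder
}
url = "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
print(url)  # the workflow's result: a URL to the rendered map image
```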
In this paper, an asynchronous design for soft error detection and correction in combinational and sequential circuits is presented. The proposed architecture is called Asynchronous Full Error Detection and Correction (AFEDC). A custom design flow with integrated commercial EDA tools generates the AFEDC using the asynchronous bundled-data design style. The AFEDC relies on an Error Detection Circuit (EDC) for protecting the combinational logic and fault-tolerant latches for protecting the sequential logic. The EDC can be implemented using different detection methods. For this work, two boundary variants are considered, the Full Duplication with Comparison (FDC) and the Partial Duplication with Parity Prediction (PDPP). The AFEDC architecture can handle single events and timing faults of arbitrarily long duration as well as the synchronous FEDC, but additionally can address known metastability issues of the FEDC and other similar synchronous architectures and provide a more practical solution for handling the error recovery process. Two case studies are developed, a carry look-ahead adder and a pipelined non-restoring array divider. Results show that the AFEDC provides equivalent fault coverage when compared to the FEDC while reducing area, ranging from 9.6% to 17.6%, and increasing energy efficiency, which can be up to 6.5%.
In this paper two new methods for the design of fault-tolerant pipelined sequential and combinational circuits, called Error Detection and Partial Error Correction (EDPEC) and Full Error Detection and Correction (FEDC), are described. The proposed methods are based on an Error Detection Logic (EDC) in the combinational circuit part combined with fault-tolerant memory elements implemented using fault-tolerant master–slave flip-flops. If a transient error, due to a transient fault in the combinational circuit part, is detected by the EDC, the error signal controls the latching stage of the flip-flops such that the previous correct state of the register stage is retained until the transient error disappears. The system can continue to work in its previous correct state and no additional recovery procedure (with typically reduced clock frequency) is necessary. The target applications are dataflow processing blocks, for which software-based recovery methods cannot be easily applied. The presented architectures address both single events as well as timing faults of arbitrarily long duration. An example of this architecture is developed and described, based on the carry look-ahead adder. The timing conditions are carefully investigated and simulated up to the layout level. The enhancement of the baseline architecture is demonstrated with respect to the achieved fault tolerance for single event and timing faults. It is observed that the number of uncorrected single events is reduced by the EDPEC architecture by a factor of 2.36 compared with the previous solution. The FEDC architecture further reduces the number of uncorrected events to zero and outperforms Triple Modular Redundancy (TMR) with respect to the correction of timing faults. The power overhead of both new architectures is about 26–28% lower than that of TMR.
Image feature detection is a key task in computer vision. Scale Invariant Feature Transform (SIFT) is a prevalent and well known algorithm for robust feature detection. However, it is computationally demanding and software implementations are not applicable for real-time performance. In this paper, a versatile and pipelined hardware implementation is proposed, that is capable of computing keypoints and rotation invariant descriptors on-chip. All computations are performed in single precision floating-point format which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. For full high definition images, 84 fps can be processed. Ultra high definition images can be processed at 21 fps.
"Deal of the Day" (DoD) platforms have quickly become popular by offering savings on local services, products and vacations. For merchants, these platforms represent a new marketing channel to advertise their products and services and attract new customers. DoD platform providers, however, struggle to maintaining a stable market share and profitability, because entry and switching costs are low. To sustain a competitive market position, DoD providers are looking for ways to build a loyal customer base. However, research examining the determinants of user loyalty in this novel context is scarce. To fill this gap, this study employs Grounded Theory methodology to develop a conceptual model of customer loyalty to a DoD provider. In the next step, qualitative insights are enriched and validated using quantitative data from a survey of 202 DoD users. The authors find that customer loyalty is in large part driven by monetary incentives, but can be eroded if impressions from merchant encounters are below expectations. In addition, enhancing the share of deals relevant for consumers, i.e. signal-to-noise ratio, and mitigating perceived risks of a transaction emerge as challenges. Beyond theoretical value, the results offer practical insights into how customer loyalty to a DoD provider can be promoted.
The integration of preferences into answer set programming constitutes an important practical device for distinguishing certain preferred answer sets from non-preferred ones. To this end, we elaborate upon rule dependency graphs and their colorings for characterizing different preference handling strategies found in the literature. We start from a characterization of (three types of) preferred answer sets in terms of totally colored dependency graphs. In particular, we demonstrate that this approach allows us to capture all three approaches to preferences in a uniform setting by means of the concept of a height function. In turn, we develop an exemplary operational characterization of preferred answer sets in terms of operators on partial colorings for one particular strategy. In analogy to the notion of a derivation in proof theory, our operational characterization is expressed as a (non-deterministically formed) sequence of colorings, gradually turning an uncolored graph into a totally colored one.
We investigate the usage of rule dependency graphs and their colorings for characterizing and computing answer sets of logic programs. This approach provides us with insights into the interplay between rules when inducing answer sets. We start with different characterizations of answer sets in terms of totally colored dependency graphs that differ in graph-theoretical aspects. We then develop a series of operational characterizations of answer sets in terms of operators on partial colorings. In analogy to the notion of a derivation in proof theory, our operational characterizations are expressed as (non-deterministically formed) sequences of colorings, turning an uncolored graph into a totally colored one. In this way, we obtain an operational framework in which different combinations of operators result in different formal properties. Among others, we identify the basic strategy employed by the noMoRe system and justify its algorithmic approach. Furthermore, we distinguish operations corresponding to Fitting's operator as well as to the well-founded semantics.
Whether online courses, video-based teaching offerings, mobile applications, or self-developed or commercial Web 2.0 applications: the abundance of digital support offerings is hard to survey. At the same time, mobile devices, web applications and apps offer opportunities to substantially reshape teaching, studying and research. This contribution presents a descriptive framework for the media-didactic design of teaching, learning and research arrangements that emphasizes the technical aspects. Subsequently, different usage scenarios involving digital media are outlined. These are taken as a starting point to present the concept of a system architecture that, on the one hand, makes it possible to provision arbitrary applications automatically and, on the other hand, aggregates the accruing user data across platforms and uses it for the design of virtual teaching and learning spaces.
This paper presents an evaluation of ACPI energy saving modes and, from it, deduces the design and implementation of an energy saving daemon for clusters called cherub. The design of the cherub daemon is modular and extensible. Since the only requirement is a central approach to resource management, cherub is suited for Server Load Balancing (SLB) clusters managed by dispatchers like Linux Virtual Server (LVS) as well as for High Performance Computing (HPC) clusters. Our experimental results show that cherub's scheduling algorithm works well, i.e. it saves energy when possible and avoids state flapping.
In this article, we consider high-dimensional data which contains a low-dimensional non-Gaussian structure contaminated with Gaussian noise and propose a new linear method to identify the non-Gaussian subspace. Our method NGCA (Non-Gaussian Component Analysis) is based on a very general semi-parametric framework and has a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method.
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine because it allows for enabling or disabling certain biological outcomes. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches allow for addressing large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach aiming at the computation of minimal interventions in logical signaling networks relying on Kleene's three-valued logic and fixpoint semantics. We address this problem within answer set programming and show that it greatly outperforms previous work using dedicated algorithms.
(Near-)inverses of sequences
(2006)
We introduce the notion of a near-inverse of a non-decreasing sequence of positive integers; near-inverses are intended to assume the role of inverses in cases when the latter cannot exist. We prove that the near-inverse of such a sequence is unique; moreover, the relation of being near-inverses of each other is symmetric, i.e. if sequence g is the near-inverse of sequence f, then f is the near-inverse of g. There is a connection, by approximations, between near-inverses of sequences and inverses of continuous strictly increasing real-valued functions which can be exploited to derive simple expressions for near-inverses.
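A hedged illustration: one natural candidate for a near-inverse of a non-decreasing sequence f of positive integers is g(n) = min { m : f(m) >= n }; whether this matches the paper's exact definition is an assumption on our part:

```python
# Compute g(1..n_max) for a non-decreasing f given as a function on 1..m_max,
# using the (assumed) generalised-inverse definition g(n) = min{m : f(m) >= n}.
def near_inverse(f, n_max, m_max):
    g = []
    for n in range(1, n_max + 1):
        m = next(m for m in range(1, m_max + 1) if f(m) >= n)
        g.append(m)
    return g

def f(m):
    return m * m  # f(m) = m^2 is non-decreasing

print(near_inverse(f, 10, 10))  # behaves like ceil(sqrt(n)): [1, 2, 2, 2, 3, ...]
```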
Automatic code generation is an essential cornerstone of today's model-driven approaches to software engineering. Thus a key requirement for the success of this technique is the reliability and correctness of code generators. This article describes how we employ standard model checking-based verification to check that code generator models developed within our code generation framework Genesys conform to (temporal) properties. Genesys is a graphical framework for the high-level construction of code generators on the basis of an extensible library of well-defined building blocks along the lines of the Extreme Model-Driven Development paradigm. We will illustrate our verification approach by examining complex constraints for code generators, which even span entire model hierarchies. We also show how this leads to a knowledge base of rules for code generators, which we constantly extend, e.g. by combining constraints into bigger constraints or by deriving common patterns from structurally similar constraints. In our experience, the development of code generators with Genesys boils down to re-instantiating patterns or slightly modifying the graphical process model, activities which are strongly supported by the verification facilities presented in this article.
The Domain Name System belongs to the core services of the Internet infrastructure. Hence, DNS availability and performance is essential for the operation of the Internet and replication as well as load balancing are used for the root and top level name servers.
This paper proposes an architecture for credit-based server load balancing (SLB) for DNS. Compared to traditional load balancing algorithms like round robin or least connection, the benefit of credit-based SLB is that the load balancer can adapt more easily to heterogeneous load requests and back end server capacities. The challenge of this approach is the definition of a suitable credit metric. While this was done before for TCP-based services like HTTP, the problem was not solved for UDP-based services like DNS.
In the following, an approach is presented to define credits for UDP-based services as well. This UDP/DNS approach is implemented within the credit-based SLB implementation salbnet. The presented measurements confirm the benefit of the self-adapting credit-based SLB approach. In our experiments, the mean (first) response time dropped significantly compared to weighted round robin (WRR) (from over 4 ms to about 0.6 ms for dynamic pressure relieve (DPR)).
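A hedged toy model of credit-based dispatching (not the salbnet implementation; the "most credits first" pick and the replenishment comment are simplifying assumptions): each back end advertises credits reflecting its remaining capacity, and the dispatcher debits a credit per forwarded request.

```python
class Backend:
    def __init__(self, name, credits):
        self.name, self.credits = name, credits

def dispatch(backends):
    # Pick the back end with the most credits (assumed heuristic).
    server = max(backends, key=lambda b: b.credits)
    if server.credits <= 0:
        raise RuntimeError("all back ends exhausted")
    server.credits -= 1  # debit; servers replenish credits as they drain queues
    return server.name

pool = [Backend("dns1", 3), Backend("dns2", 1)]
print([dispatch(pool) for _ in range(4)])  # ['dns1', 'dns1', 'dns1', 'dns2']
```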
In this paper we present the design and implementation of the Migol brokering framework. Migol is a Grid middleware which addresses the fault tolerance of long-running and compute-intensive applications. The framework supports, e.g., the automatic and transparent recovery or migration of applications. Another core feature of Migol is the discovery, selection, and allocation of resources using advance reservation. Grid broker systems can significantly benefit from advance reservation. With advance reservation, brokers and users can obtain execution guarantees from local resource management systems (LRMs) without requiring detailed knowledge of current and future workloads or of the resource owner's policies. Migol's Advance Reservation Service (ARS) provides an adapter layer for the reservation capabilities of different LRMs, which is currently not provided by existing Grid middleware platforms. Further, we propose a shortest expected delay (SED) strategy for scheduling advance reservations within the Job Broker Service. SED needs information about the earliest start time of an application, which is currently not supported by LRMs; we added this feature for PBSPro. Migol depends on Globus and its security infrastructure. Our performance experiments show the substantial overhead of this service-oriented approach.
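A hedged sketch of shortest-expected-delay selection: among candidate resources, reserve on the one whose earliest possible start time for the job is soonest. The resource data and the reduction of "expected delay" to a single earliest-start number are simplifying assumptions:

```python
# Earliest feasible start time (seconds from now) per resource, as would be
# reported through an ARS-style adapter layer.
resources = {"clusterA": 120.0, "clusterB": 30.0, "clusterC": 75.0}

def select_resource(earliest_starts):
    """SED-style pick: the resource with the shortest wait until start."""
    return min(earliest_starts, key=earliest_starts.get)

print(select_resource(resources))  # -> clusterB
```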
This paper presents a concept for automated architecture synthesis for adaptive multiprocessors on chip, in particular for Field-Programmable Gate-Array (FPGA) devices. Given a parallel program, the intent is to simultaneously allocate processor resources and the corresponding communication network and, at the same time, to map the parallel application in order to obtain an optimal application-specific architecture. This approach builds upon a previously proposed design platform that automates system integration and FPGA synthesis for such architectures. As a result, the overall concept offers an automated design approach from application mapping to system and FPGA configuration. The automated synthesis is based on combinatorial optimization. Automation is possible because a solvable Integer Linear Programming (ILP) model that captures all necessary design trade-off parameters of such systems has been found. Experimental results studying the feasibility of the automated synthesis indicate that problem sizes typically encountered in the embedded domain can be readily solved. The results obtained underscore the need for automated synthesis in design space exploration.
Student teachers often struggle to keep track of everything that is happening in the classroom, and particularly to notice and respond when students cause disruptions. The complexity of the classroom environment is a potential contributing factor that has not been empirically tested. In this experimental study, we utilized a virtual reality (VR) classroom to examine whether classroom complexity affects the likelihood of student teachers noticing disruptions and how they react after noticing. Classroom complexity was operationalized as the number of disruptions and the existence of overlapping disruptions (multidimensionality) as well as the existence of parallel teaching tasks (simultaneity). Results showed that student teachers (n = 50) were less likely to notice the scripted disruptions, and also less likely to respond to the disruptions in a comprehensive and effortful manner, when facing greater complexity. These results may have implications for both teacher training and the design of VR for training or research purposes. This study contributes to the field in two respects: 1) it reveals how features of the classroom environment can affect student teachers' noticing of and reaction to disruptions; and 2) it extends the functionality of the VR environment from a teacher training tool to a testbed of fundamental classroom processes that are difficult to manipulate in real life.
Software development plan
(1996)
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances, and is known to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. Also, the fact that our approach can be customized easily enabled us to swiftly adapt it to generate parallel schedules for multi-processor machines.
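As a hedged sketch of the underlying optimisation problem (solved here with a simple greedy loop in Python rather than with the paper's ASP encodings, and on invented runtime data): assign each solver a time slice within a global budget so that the number of solved benchmark instances is maximised.

```python
BUDGET = 100
runtimes = {  # solver -> runtime per instance (seconds); None = timeout
    "s1": [5, None, 80, None],
    "s2": [None, 10, None, None],
    "s3": [None, None, 30, 40],
}

def solved(schedule):
    """Instances solved when each solver runs for its allotted slice."""
    return {i for s, t in schedule.items()
              for i, r in enumerate(runtimes[s]) if r is not None and r <= t}

schedule = {s: 0 for s in runtimes}
improved = True
while improved:
    improved = False
    used = sum(schedule.values())
    # Candidate slice extensions that still fit into the global budget.
    candidates = [(s, r) for s in runtimes for r in runtimes[s]
                  if r is not None and r > schedule[s]
                  and used - schedule[s] + r <= BUDGET]
    if candidates:
        s, r = max(candidates, key=lambda sr: len(solved({**schedule, sr[0]: sr[1]})))
        if len(solved({**schedule, s: r})) > len(solved(schedule)):
            schedule[s] = r
            improved = True

print(schedule, "->", len(solved(schedule)), "instances solved")
```

The ordered, non-uniform schedules described in the abstract additionally fix the sequence in which the slices are executed; the greedy loop above only illustrates the slice-assignment side of the problem.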