Institut für Informatik und Computational Science
Document Type
- Article (534)
Language
- English (534)
Is part of the Bibliography
- yes (534)
Keywords
- Answer set programming (10)
- answer set programming (8)
- Answer Set Programming (6)
- Machine learning (3)
- formal languages (3)
- monitoring (3)
- Analytical models (2)
- Automata systems (2)
- E-learning (2)
- Equilibrium logic (2)
- Event mapping (2)
- Fault tolerance (2)
- Lindenmayer systems (2)
- Non-monotonic reasoning (2)
- Parameterized complexity (2)
- Process mining (2)
- ResNet (2)
- Theory (2)
- bioinformatics (2)
- cooperating systems (2)
- online learning (2)
- radhard design (2)
- security (2)
- (FPGA) (1)
- (SET) count rate (1)
- 2-tag system (1)
- 3D modeling (1)
- 3D visualization (1)
- AODV (1)
- ASIC (1)
- Absorbed dose (1)
- Abstraction (1)
- Access control (1)
- Active evaluation (1)
- Ad hoc routing (1)
- Adaptivity (1)
- Advanced Video Codec (AVC) (1)
- Aggregates (1)
- Algorithm configuration (1)
- Algorithm portfolios (1)
- Algorithms (1)
- Android hybrid apps (1)
- Animal building (1)
- Anti-cancer drugs (1)
- Argumentation structure (1)
- Assessment (1)
- Augmentation (1)
- Augmented and virtual reality (1)
- Automated parallelization (1)
- Automatically controlled windows (1)
- Backdoors (1)
- Batch processing (1)
- Bayesian inference (1)
- Bean (1)
- Benchmark testing (1)
- Blind users (1)
- Boolean logic models (1)
- Business process intelligence (1)
- CP-Logic (1)
- Campus (1)
- Circuit faults (1)
- Clock tree (1)
- Cloud (1)
- Cluster Computing (1)
- Cluster computing (1)
- Code generation (1)
- Coherent phonons (1)
- Combinatorial multi-objective optimization (1)
- Complex optimization (1)
- Complexity (1)
- Computational complexity (1)
- Computational grid (1)
- Computer security (1)
- Computing with DNA (1)
- Conformant Planning (1)
- Conrad Hal Waddington (1)
- Constraint satisfaction (1)
- Context awareness (1)
- Contextualized learning (1)
- Continuous Testing (1)
- Continuous Versioning (1)
- Convolution (1)
- Course timetabling (1)
- Customer ownership (1)
- D-galactosamine (1)
- DMR (1)
- DNA hairpin formation (1)
- DRMAA (1)
- DRMS (1)
- Data augmentation (1)
- Data federation (1)
- Database (1)
- Deal of the Day (1)
- Debugging (1)
- Decidability (1)
- Declare (1)
- Deep learning (1)
- Defining characteristics of physical computing (1)
- Denotational semantics (1)
- Design (1)
- Design for testability (DFT) (1)
- Digital image analysis (1)
- Digitalization (1)
- Dose rate (1)
- Double cell upsets (DCUs) (1)
- Dynamical X-ray theory (1)
- E-teaching (1)
- EDC (1)
- EEG (1)
- Earthquake modeling (1)
- Educational game (1)
- Educational timetabling (1)
- Encoding (1)
- Engines (1)
- Entity Linking (1)
- Epigenetic landscape (1)
- Epistemic Logic Programs (1)
- Evaluation (1)
- Evolution (1)
- Experimentation (1)
- Explicit negation (1)
- Explore-first Programming (1)
- Extensibility (1)
- Extreme Model-Driven Development (1)
- FEDC (1)
- FPGA (1)
- Fault Localization (1)
- Fault tolerant systems (1)
- Feature extraction (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Flip-flops (1)
- Forgetting (1)
- Freshmen (1)
- GERBIL (1)
- Gaussian process (1)
- Gesture input (1)
- Green computing (1)
- Grounded theory (1)
- H.264 (1)
- Hairpin completions (1)
- Hairpin reductions (1)
- Hardware accelerator (1)
- Hawkes process (1)
- Heat diffusion (1)
- Hierarchically configurable mask register (1)
- Histograms (1)
- Https traffic (1)
- Human Factors (1)
- Hurricane Sandy (1)
- IaaS (1)
- Identifiers (1)
- Image and video stylization (1)
- Image resolution (1)
- Imperative calculi (1)
- Improving classroom (1)
- Incoherent phonons (1)
- Incremental answer set programming (1)
- Inference (1)
- Information federation (1)
- Information integration (1)
- Information retrieval (1)
- Information security (1)
- Insurance industry (1)
- Integrated circuit modeling (1)
- Interface design (1)
- Internet of Things (1)
- Job monitoring (1)
- Job submission (1)
- Kernel (1)
- Kernelization (1)
- Key input (1)
- Knowledge representation (1)
- L systems (1)
- LBA problem (1)
- Landmark visibility (1)
- Literature mining (1)
- Liver neoplasms (1)
- Load Balancing (1)
- Localization (1)
- Location awareness (1)
- Logic programming (1)
- Loss (1)
- Low Latency (1)
- Loyalty (1)
- MQTT (1)
- Machine Learning (1)
- Markov processes (1)
- Masking of X-values (1)
- Media in education (1)
- Meta-Programming (1)
- Metric learning (1)
- Minimal perturbation problems (1)
- Mobile application (1)
- Mobile devices (1)
- Mobile learning (1)
- Model checking (1)
- Modeling (1)
- Modelling (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multiple interpretation scheme (1)
- N-temperature model (1)
- NUI (1)
- Nash equilibrium (1)
- Natural language processing (1)
- Natural ventilation (1)
- Network (1)
- Network security (1)
- Neural networks (1)
- Non-Monotonic (1)
- Nonmonotonic reasoning (1)
- OCCI (1)
- OSSE (1)
- Operation problem (1)
- Optimization (1)
- Parallel SAT solving (1)
- Parallel job execution time estimation (1)
- Particle detector (1)
- Pedagogical issues (1)
- Pedestrian navigation (1)
- Performance Evaluation (1)
- Personalization (1)
- Pervasive computing (1)
- Pervasive game (1)
- Pervasive learning (1)
- Phantoms (1)
- Planar tactile display (1)
- Plant identification (1)
- Polarization (1)
- Preference Handling (1)
- Process model analysis (1)
- Product lifecycle management (1)
- Programming (1)
- Programming by optimization (1)
- Prototyping (1)
- RADFET (1)
- REST (1)
- RSA triangle (1)
- Radiation hardness (1)
- Random access memory (1)
- Ranking (1)
- Reasoning (1)
- Region of Interest (1)
- Reproducibility of results (1)
- Reversibility (1)
- SET pulsewidth distribution (1)
- SOA (1)
- SWOT (1)
- Sampling (1)
- Scalability (1)
- Scale-invariant feature transform (SIFT) (1)
- Scientific images (1)
- Screen reader (1)
- Seamless learning (1)
- Search problems (1)
- Security (1)
- Self-adaptive MPSoC (1)
- Self-exciting point process (1)
- Semantic data (1)
- Semantic web (1)
- Semilinearity property (1)
- Sequence embeddings (1)
- Service orientation (1)
- Sharing (1)
- Signal processing (1)
- Signaling transduction networks (1)
- Simulations (1)
- Single event effect (1)
- Single event upsets (1)
- Single-event transient (SET) (1)
- Spatio-temporal ETAS model (1)
- Splicing (1)
- Splicing processor (1)
- Statistical relational learning (1)
- Stochastic relational process (1)
- Strong equivalence (1)
- Structural equation modeling (1)
- Systems biology (1)
- Systems of parallel communicating (1)
- TMR (1)
- Theory formation (1)
- Thermoelasticity (1)
- Time series (1)
- Tomography (1)
- Tools (1)
- Tracking (1)
- Traffic data (1)
- Tree decomposition (1)
- Treewidth (1)
- Treewidth-aware reductions (1)
- Triple modular redundancy (TMR) (1)
- Tumor types (1)
- Turing machine (1)
- Type and effect systems (1)
- UAV imagery (1)
- UX (1)
- Ubiquitous learning (1)
- Ultrafast dynamics (1)
- Unary languages (1)
- Uniform Access Principle (1)
- Usability testing (1)
- User submission pattern (1)
- User-centred design (1)
- VGG16 (1)
- Value network (1)
- Verification (1)
- Visual metaphor (1)
- Wireless Sensor Networks (1)
- Word embeddings (1)
- X-masking (1)
- X-ray computed (1)
- X-values (1)
- accepting grammars (1)
- action and change (1)
- activity (1)
- acute liver failure (1)
- acyclicity properties (1)
- adversarial classification (1)
- algorithm schedules (1)
- algorithms (1)
- analysis (1)
- anti-cancer drugs (1)
- anxiety (1)
- approximate model counting (1)
- architecture (1)
- argument mining (1)
- arousal (1)
- artistic rendering (1)
- asynchronous design (1)
- autism (1)
- automata (1)
- automated guided vehicle routing (1)
- automated planning (1)
- automatic feedback (1)
- behavioral abstraction (1)
- belief merging (1)
- belief revision (1)
- benchmark (1)
- bibliometric analysis (1)
- block representation (1)
- bootstrapping (1)
- brain-computer interface (1)
- bundled data (1)
- camera sensor (1)
- car assembly operations (1)
- cellular automata (1)
- circuit Faults (1)
- citation analysis (1)
- click controller (1)
- clocks (1)
- co-citation analysis (1)
- co-occurrence analysis (1)
- code generation (1)
- coherence relation (1)
- collaborative learning (1)
- combinatorial optimization problems (1)
- combined task and motion planning (1)
- common spatial patterns (1)
- competition (1)
- compliance (1)
- computational thinking (1)
- computer science education (1)
- computer vision (1)
- concession (1)
- concurrent checking (1)
- conductive argument (1)
- connective (1)
- consistency (1)
- consistency checking (1)
- consistency measures (1)
- context-free grammar (1)
- context-sensitive (1)
- contrast (1)
- controlled vocabularies (1)
- corpus analysis (1)
- correlated errors (1)
- course timetabling (1)
- craters (1)
- crop (1)
- decidability questions (1)
- declarative problem solving (1)
- deep learning (1)
- deep neural networks (1)
- deep residual networks (1)
- degree of non-context-freeness (1)
- degree of non-regularity (1)
- degree of non-regulation (1)
- depression (1)
- design flow (1)
- determinism (1)
- detrending (1)
- developmental systems (1)
- diagnosis (1)
- digitally-enabled pedagogies (1)
- domain-specific APIs (1)
- drug discovery (1)
- drug-sensitivity prediction (1)
- dynamic service binding (1)
- e-learning (1)
- eLectures (1)
- economic ripples (1)
- edge computing (1)
- educational systems (1)
- educational timetabling (1)
- embedded systems (1)
- emission factor (1)
- endothelin (1)
- endothelin-converting enzyme (1)
- ensemble kalman filter (1)
- ensemble methods (1)
- error propagation (1)
- evaluation (1)
- event-related desynchronization (1)
- evolution (1)
- external ambiguity (1)
- extreme weather (1)
- face tracking (1)
- facial expression (1)
- fault tolerance (1)
- field-programmable gate array (1)
- finite model computation (1)
- finite state sequential transducers (1)
- firmware update (1)
- formal (1)
- formal argumentation systems (1)
- functions (1)
- gap-filling (1)
- geovisualization (1)
- gradient boosting (1)
- grammar (1)
- greenhouse gas (1)
- hardware accelerator (1)
- hardware architecture (1)
- higher education (1)
- hybrid solving (1)
- ice harboring (1)
- image classification (1)
- image processing (1)
- image recognition (1)
- imaging (1)
- impacts (1)
- incremental SVM (1)
- informal and formal learning (1)
- informal logic (1)
- information flow control (1)
- internal ambiguity (1)
- intrusion detection (1)
- key competences in physical computing (1)
- kidney cancer (1)
- knowledge representation and nonmonotonic reasoning (1)
- knowledge representation and reasoning (1)
- latches (1)
- leftmost derivations (1)
- lesson planning (1)
- lesson preparation (1)
- linear programming (1)
- logic programming (1)
- logic-based modeling (1)
- loop formulas (1)
- loose programming (1)
- loss propagation (1)
- lunar exploration (1)
- machine learning (1)
- machine learning algorithms (1)
- manipulation planning (1)
- measure development (1)
- media (1)
- metabolic network (1)
- metabolism (1)
- metabolomics (1)
- metadata (1)
- metastasis (1)
- mobile learning (1)
- mobile technologies and apps (1)
- natural disasters (1)
- natural language generation (1)
- neighborhood (1)
- neural networks (1)
- neutral endopeptidase (1)
- nonphotorealistic rendering (NPR) (1)
- o-ambiguity (1)
- on-farm evaluation (1)
- organisational evolution (1)
- paper prototyping (1)
- parallel processing (1)
- parallel rewriting (1)
- parity aggregate operator (1)
- parsing (1)
- pdf forms (1)
- perception (1)
- perception differences (1)
- physical computing (1)
- physical computing tools (1)
- planning (1)
- plug-ins (1)
- policy evaluation (1)
- portfolio-based solving (1)
- predictive models (1)
- premise acceptability (1)
- process model alignment (1)
- process modeling (1)
- program encodings (1)
- programmed grammars (1)
- projection (1)
- proof complexity (1)
- pruritus (1)
- pulse stretching inverters (1)
- quality of life (1)
- quantum (1)
- random forest (1)
- real arguments (1)
- real-time (1)
- real-time mapping (1)
- reference (1)
- referential effectiveness (1)
- regression (1)
- regular language (1)
- relevance (1)
- reliability (1)
- reliability analysis (1)
- resources (1)
- restricted parallelism (1)
- satisfiability (1)
- selective fault tolerance (1)
- self-adaptive multiprocessing system (1)
- self-checking (1)
- semantic web (1)
- simplicity (1)
- single event upset (1)
- single event upsets (1)
- single-event transient (1)
- single-trial-analysis (1)
- site-specific weed management (1)
- sleep quality (1)
- smart farming (1)
- soft errors (1)
- solar particle event (1)
- space missions (1)
- stable model semantics (1)
- state complexity (1)
- static analysis (1)
- static prediction games (1)
- strong equivalence (1)
- sufficiency (1)
- suicidal ideations (1)
- supply chains (1)
- support system (1)
- support vector machines (1)
- tableau calculi (1)
- teacher training (1)
- teaching (1)
- technical notes and rapid communications (1)
- tele-teaching (1)
- test response compaction (1)
- theory of computation (1)
- timing (1)
- tools (1)
- transient Faults (1)
- transient analysis (1)
- triangulated irregular networks (1)
- triple modular redundancy (1)
- unfounded sets (1)
- user experience (1)
- verification (1)
- video annotation (1)
- virtual mobility (1)
- wheat crops (1)
- work productivity (1)
- yellow rust (1)
Circumscribing inconsistency
(1997)
This paper presents an evaluation of ACPI energy-saving modes and, based on it, derives the design and implementation of an energy-saving daemon for clusters called cherub. The design of the cherub daemon is modular and extensible. Since its only requirement is a central approach to resource management, cherub is suited for Server Load Balancing (SLB) clusters managed by dispatchers such as Linux Virtual Server (LVS), as well as for High Performance Computing (HPC) clusters. Our experimental results show that cherub's scheduling algorithm works well, i.e. it saves energy whenever possible and avoids state-flapping.
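The state-flapping avoidance described above is essentially a hysteresis policy. The following Python sketch illustrates one way such a policy could look; the thresholds, the node interface, and the load-history window are hypothetical assumptions for illustration, not taken from the cherub implementation.

# Illustrative hysteresis-based power policy; NOT the actual cherub implementation.
# Thresholds, the Node interface, and the observation window are hypothetical.
from collections import deque

IDLE_THRESHOLD = 0.10   # average load below which a node becomes a suspend candidate
WINDOW = 12             # number of load samples considered, e.g. 12 x 5 s = one minute

class Node:
    def __init__(self, name):
        self.name = name
        self.sleeping = False
        self.history = deque(maxlen=WINDOW)

    def record_load(self, load):
        self.history.append(load)

    def avg_load(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

def schedule(nodes, pending_jobs):
    # Suspend only nodes that were idle over the whole observation window; wake nodes
    # only when demand exceeds the awake capacity. Averaging over WINDOW samples acts
    # as hysteresis against state-flapping.
    awake = [n for n in nodes if not n.sleeping]
    asleep = [n for n in nodes if n.sleeping]
    for node in awake:
        if len(node.history) == WINDOW and node.avg_load() < IDLE_THRESHOLD:
            node.sleeping = True      # e.g. put the node into an ACPI sleep state
            node.history.clear()
    if pending_jobs > len(awake) and asleep:
        asleep[0].sleeping = False    # e.g. wake the node via Wake-on-LAN

cluster = [Node(f"node{i}") for i in range(4)]
for n in cluster:
    for _ in range(WINDOW):
        n.record_load(0.02)
schedule(cluster, pending_jobs=0)
print([n.sleeping for n in cluster])  # all True: the idle nodes were suspended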
Characterizing Grids
(2003)
We present a new data model approach to describe the various objects that either represent the Grid infrastructure or make use of it. The data model is based on the experiences and experiments conducted in heterogeneous Grid environments. While very sophisticated data models exist to describe and characterize e.g. compute capacities or web services, we will show that a general description, which combines all of these aspects, is needed to give an adequate representation of objects on a Grid. The Grid Object Description Language (GODsL) is a generic and extensible approach to unify the various aspects that an object on a Grid can have. GODsL provides the content for the XML-based communication in Grid migration scenarios, carried out in the GridLab project. We describe the data model architecture on a general level and focus on the Grid application scenarios.
Business process management
(2006)
We give a new view on building content clusters from page pair models. We measure the heuristic importance between every pair of pages by computing the distance of their accessed positions in usage sessions. We also compare our page pair models with the classical pair models used in information theory and natural language processing, and give different evaluation methods for building reasonable content communities. Finally, we interpret the advantages and disadvantages of our models from detailed experimental results.
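To make the positional distance idea concrete, the minimal Python sketch below averages, over all sessions, the distance between the access positions of each page pair; the session format and the plain averaging are illustrative assumptions, not the authors' exact model.

# Minimal sketch: average positional distance of page pairs within usage sessions.
# Session format and the use of a plain average are illustrative assumptions.
from itertools import combinations
from collections import defaultdict

sessions = [
    ["home", "news", "sports", "contact"],
    ["home", "sports", "news"],
    ["news", "sports"],
]

pair_distances = defaultdict(list)
for session in sessions:
    positions = {}
    for i, page in enumerate(session):
        positions.setdefault(page, i)          # position of the first access
    for a, b in combinations(sorted(positions), 2):
        pair_distances[(a, b)].append(abs(positions[a] - positions[b]))

# Smaller average distance = pages tend to be accessed close together,
# suggesting they belong to the same content community.
closeness = {pair: sum(d) / len(d) for pair, d in pair_distances.items()}
print(sorted(closeness.items(), key=lambda kv: kv[1]))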
The correctness of model transformations is a crucial element for model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches, it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. While the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static and runtime checks can be employed to guarantee these criteria.
While the maturity of process mining algorithms increases and more process mining tools enter the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Current approaches for event log abstraction try to abstract from the events in an automated way that does not capture the required domain knowledge to fit business activities. This can lead to misinterpretation of discovered process models. We developed an approach that aims to abstract an event log to the same abstraction level that is needed by the business. We use domain knowledge extracted from existing process documentation to semi-automatically match events and activities. Our abstraction approach is able to deal with n:m relations between events and activities and also supports concurrency. We evaluated our approach in two case studies with a German IT outsourcing company.
Noninvasive electroencephalogram (EEG) recordings provide easy and safe access to human neocortical processes which can be exploited for a brain-computer interface (BCI). At present, however, the use of BCIs is severely limited by low bit-transfer rates. We systematically analyze and develop two recent concepts, both capable of enhancing the information gain from multichannel scalp EEG recordings: 1) the combination of classifiers, each specifically tailored for different physiological phenomena, e.g., slow cortical potential shifts, such as the premovement Bereitschaftspotential, or differences in spatio-spectral distributions of brain activity (i.e., focal event-related desynchronizations), and 2) behavioral paradigms inducing the subjects to generate one out of several brain states (multiclass approach) which all bear a distinctive spatio-temporal signature well discriminable in the standard scalp EEG. We derive information-theoretic predictions and demonstrate their relevance in experimental data. We show that a suitably arranged interaction between these concepts can significantly boost BCI performance.
Recently, blind source separation (BSS) methods have been highly successful when applied to biomedical data. This paper reviews the concept of BSS and demonstrates its usefulness in the context of event-related MEG measurements. In a first experiment we apply BSS to artifact identification of raw MEG data and discuss how the quality of the resulting independent component projections can be evaluated. The second part of our study considers averaged data of event-related magnetic fields. Here, it is particularly important to monitor and thus avoid possible overfitting due to limited sample size. A stability assessment of the BSS decomposition allows us to solve this task, and an additional grouping of the BSS components reveals interesting structure that could ultimately be used for a better physiological modeling of the data.
We propose two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem. The first method is based on the concept of maximal correlation: we apply the alternating conditional expectation (ACE) algorithm, a powerful technique from nonparametric statistics, to approximately invert the componentwise nonlinear functions. The second method is a Gaussianizing transformation, which is motivated by the fact that linearly mixed signals before the nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient, procedure works as well as the ACE method. Using the framework provided by ACE, convergence can be proven. The optimal transformations obtained by ACE coincide with the sought-after inverse functions of the nonlinearities. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations testing "ACE-TD" and "Gauss-TD" on realistic examples are performed with excellent results.
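The Gaussianizing transformation can be illustrated with a simple rank-based mapping of each observed channel to standard-normal quantiles. The sketch below is a generic illustration of this idea, assuming an empirical-CDF-based transform; it is not the authors' exact "Gauss-TD" procedure.

# Rank-based Gaussianization: map each observed channel through its empirical CDF
# and then through the standard-normal quantile function. Illustrative sketch only.
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    # Componentwise monotone transform making the marginal approximately N(0,1).
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1   # ranks in 1..n
    u = ranks / (n + 1.0)                   # empirical CDF values in (0,1)
    return norm.ppf(u)                      # standard-normal quantiles

# Example: a nonlinearly distorted signal becomes approximately Gaussian again,
# after which a linear BSS method (e.g. temporal decorrelation) can be applied.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
observed = np.tanh(1.5 * s)                 # post-nonlinear observation
restored = gaussianize(observed)
print(observed.std(), restored.std())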
The emergence of drug resistance remains one of the most challenging issues in the treatment of HIV-1 infection. The extreme replication dynamics of HIV facilitates its escape from the selective pressure exerted by the human immune system and by the applied combination drug therapy. This article reviews computational methods whose combined use can support the design of optimal antiretroviral therapies based on viral genotypic and phenotypic data. Genotypic assays are based on the analysis of mutations associated with reduced drug susceptibility, but are difficult to interpret due to the numerous mutations and mutational patterns that confer drug resistance. Phenotypic resistance or susceptibility can be experimentally evaluated by measuring the inhibition of the viral replication in cell culture assays. However, this procedure is expensive and time-consuming.
Answer Set Programming (ASP) is an increasingly popular framework for declarative programming that admits the description of problems by means of rules and constraints that form a disjunctive logic program. In particular, many AI problems such as reasoning in a nonmonotonic setting can be directly formulated in ASP. Although the main problems of ASP are of high computational complexity, complete for the second level of the Polynomial Hierarchy, several restrictions of ASP have been identified in the literature under which ASP problems become tractable.
In this paper we use the concept of backdoors to identify new restrictions that make ASP problems tractable. Small backdoors are sets of atoms that represent "clever reasoning shortcuts" through the search space and represent a hidden structure in the problem input. The concept of backdoors is widely used in theoretical investigations in the areas of propositional satisfiability and constraint satisfaction. We show that it can be fruitfully adapted to ASP. We demonstrate how backdoors can serve as a unifying framework that accommodates several tractable restrictions of ASP known from the literature. Furthermore, we show how backdoors allow us to deploy recent algorithmic results from parameterized complexity theory to the domain of answer set programming.
Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist from this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield a single model only, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appealing to Answer Set Programming (ASP), overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by some of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor, providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic algorithms. First, it is declarative and thus transparent for biological experts. Second, it is elaboration tolerant and thus allows for an easy exploration and incorporation of biological constraints. Third, it allows for exploring the entire space of possible models. Finally, our approach offers excellent performance, matching existing, special-purpose systems.
Since 2004, increases in computational power described by Moore's law have substantially been realized in the form of additional cores rather than through faster clock speeds. To make effective use of modern hardware when solving hard computational problems, it is therefore necessary to employ parallel solution strategies. In this work, we demonstrate how effective parallel solvers for propositional satisfiability (SAT), one of the most widely studied NP-complete problems, can be produced automatically from any existing sequential, highly parametric SAT solver. Our Automatic Construction of Parallel Portfolios (ACPP) approach uses an automatic algorithm configuration procedure to identify a set of configurations that perform well when executed in parallel. Applied to two prominent SAT solvers, Lingeling and clasp, our ACPP procedure identified 8-core solvers that significantly outperformed their sequential counterparts on a diverse set of instances from the application and hard combinatorial category of the 2012 SAT Challenge. We further extended our ACPP approach to produce parallel portfolio solvers consisting of several different solvers by combining their configuration spaces. Applied to the component solvers of the 2012 SAT Challenge gold medal winning SAT Solver pfolioUZK, our ACPP procedures produced a significantly better-performing parallel SAT solver.
This paper presents a concept for automated architecture synthesis for adaptive multiprocessors on chip, in particular for Field-Programmable Gate Array (FPGA) devices. Given a parallel program, the intent is to simultaneously allocate processor resources and the corresponding communication network and, at the same time, to map the parallel application in order to obtain an optimal application-specific architecture. This approach builds upon a previously proposed design platform that automates system integration and FPGA synthesis for such architectures. As a result, the overall concept offers an automated design approach from application mapping to system and FPGA configuration. The automated synthesis is based on combinatorial optimization. Automation is possible because a solvable Integer Linear Programming (ILP) model that captures all necessary design trade-off parameters of such systems has been found. Experimental results studying the feasibility of the automated synthesis indicate that problem sizes encountered in the embedded domain can be readily solved. The results obtained underscore the need for an automated synthesis for design space exploration.
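The flavor of such an ILP-based mapping can be conveyed with a deliberately simplified model: assign tasks to processors so that a per-processor capacity bound is respected while the number of allocated processors is minimized. This toy formulation, and the use of PuLP to express it, is an assumption for illustration only; the paper's model additionally captures the communication network and further design trade-off parameters.

# Toy ILP: allocate processors and map tasks so that processor capacity is respected
# and the number of allocated processors is minimal. Illustrative sketch only.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

tasks = {"t1": 3, "t2": 2, "t3": 4, "t4": 1}   # hypothetical task loads
procs = ["p1", "p2", "p3"]
CAPACITY = 5

prob = LpProblem("mpsoc_mapping", LpMinimize)
use = {p: LpVariable(f"use_{p}", cat=LpBinary) for p in procs}
assign = {(t, p): LpVariable(f"map_{t}_{p}", cat=LpBinary) for t in tasks for p in procs}

prob += lpSum(use[p] for p in procs)                         # minimize allocated processors
for t in tasks:
    prob += lpSum(assign[(t, p)] for p in procs) == 1        # each task mapped exactly once
for p in procs:
    prob += lpSum(tasks[t] * assign[(t, p)] for t in tasks) <= CAPACITY * use[p]

prob.solve()
print({p: value(use[p]) for p in procs})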
Automatic code generation is an essential cornerstone of today's model-driven approaches to software engineering. Thus a key requirement for the success of this technique is the reliability and correctness of code generators. This article describes how we employ standard model checking-based verification to check that code generator models developed within our code generation framework Genesys conform to (temporal) properties. Genesys is a graphical framework for the high-level construction of code generators on the basis of an extensible library of well-defined building blocks along the lines of the Extreme Model-Driven Development paradigm. We will illustrate our verification approach by examining complex constraints for code generators, which even span entire model hierarchies. We also show how this leads to a knowledge base of rules for code generators, which we constantly extend, e.g. by combining constraints into larger constraints or by deriving common patterns from structurally similar constraints. In our experience, the development of code generators with Genesys boils down to re-instantiating patterns or slightly modifying the graphical process model, activities which are strongly supported by the verification facilities presented in this article.
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances, and is known to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. Also, the fact that our approach can be customized easily enabled us to swiftly adapt it to generate parallel schedules for multi-processor machines.
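A non-uniform, ordered schedule of the kind described above can also be approximated greedily from benchmarking data: repeatedly add the (solver, time-slice) pair that solves the most not-yet-covered instances within the remaining budget. The Python sketch below shows this greedy baseline on hypothetical runtime data; it is explicitly not the ASP encoding proposed in the paper, only a simple point of comparison.

# Greedy baseline for building an ordered, non-uniform solver schedule from runtimes.
# This is NOT the paper's ASP encoding, just an illustrative approximation.
runtimes = {            # runtimes[solver][instance] in seconds (hypothetical data)
    "A": {"i1": 3, "i2": 50, "i3": 2},
    "B": {"i1": 40, "i2": 4, "i3": 60},
}
BUDGET = 60
SLICES = [5, 10, 20, 40]

def greedy_schedule(runtimes, budget, slices):
    schedule, remaining = [], budget
    unsolved = {i for per in runtimes.values() for i in per}
    while remaining > 0 and unsolved:
        best = None
        for solver, per in runtimes.items():
            for t in slices:
                if t > remaining:
                    continue
                solved = {i for i in unsolved if per.get(i, float("inf")) <= t}
                gain = len(solved)
                if best is None or gain > best[0] or (gain == best[0] and t < best[1]):
                    best = (gain, t, solver, solved)
        if best is None or best[0] == 0:
            break
        gain, t, solver, solved = best
        schedule.append((solver, t))
        unsolved -= solved
        remaining -= t
    return schedule

print(greedy_schedule(runtimes, BUDGET, SLICES))   # e.g. [('A', 5), ('B', 5)]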
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high performance Boolean solving capacities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between ASP and CP solver through elaborated learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose, which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since the corresponding training was also carried out on image data of similarly high quality. However, if the image data contains image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts which occur due to image and video compression. Typical application scenarios for video compression are narrowband transmission channels for which video coding is required but a subsequent classification is to be carried out on the receiver side. In this paper we present a special H.264/Advanced Video Codec (AVC) based video codec that allows certain regions of a picture to be coded with near-constant picture quality in order to allow reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We have combined this feature with lowest-latency operation, which is usually also required in remote-control application scenarios. The codec has been implemented as a fully hardwired, High Definition capable hardware architecture suitable for Field Programmable Gate Arrays.
We construct a new RC phase shift network based Chua's circuit, which exhibits a period-doubling bifurcation route to chaos. Using coupled versions of such a phase-shift network based Chua's oscillators, we describe a new method for achieving complete synchronization (CS), approximate lag synchronization (LS), and approximate anticipating synchronization (AS) without delay or parameter mismatch. Employing the Pecora and Carroll approach, chaos synchronization is achieved in coupled chaotic oscillators, where the drive system variables control the response system. As a result, AS or LS or CS is demonstrated without using a variable delay line both experimentally and numerically.
Answer Set Programming enjoys increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al. Commun ACM 54(12):92–103, (2011)] is seeing a rapid proliferation in academia and industry due to its easy and flexible way to model and solve knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. They are represented as answer sets, each being a set of atoms representing a solution. The declarative approach of ASP allows a user to concentrate on a problem's specification rather than the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques, since complex problems can be expressed in a succinct and elaboration tolerant way. This is eased by the tuning of ASP's modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al. AI Mag 37(3):53–68, 2016; Falkner et al. Industrial applications of answer set programming. Künstliche Intelligenz (2018)].
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head has the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. For many combinations of problem instances and formulations, we succeeded in either improving or matching the previously best known bounds.
Answer set planning
(2022)
Answer Set Planning refers to the use of Answer Set Programming (ASP) to compute plans, that is, solutions to planning problems, that transform a given state of the world to another state. The development of efficient and scalable answer set solvers has provided a significant boost to the development of ASP-based planning systems. This paper surveys the progress made during the last two and a half decades in the area of answer set planning, from its foundations to its use in challenging planning domains. The survey explores the advantages and disadvantages of answer set planning. It also discusses typical applications of answer set planning and presents a set of challenges for future research.
And/Or reasoning graphs for determining prime implicants in multi-level combinational networks
(1997)
We address the problem of Finite Model Computation (FMC) of first-order theories and show that FMC can efficiently and transparently be solved by taking advantage of a recent extension of Answer Set Programming (ASP), called incremental Answer Set Programming (iASP). The idea is to use the incremental parameter in iASP programs to account for the domain size of a model. The FMC problem is then successively addressed for increasing domain sizes until an answer set, representing a finite model of the original first-order theory, is found. We implemented a system based on the iASP solver iClingo and demonstrate its competitiveness by showing that it slightly outperforms the winner of the FNT division of CADE's 2009 Automated Theorem Proving (ATP) competition on the respective benchmark collection.
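The core control loop can be pictured as iterative deepening over candidate domain sizes. The Python sketch below uses a hypothetical has_model(theory, n) oracle standing in for grounding and solving; it only illustrates the incremental search strategy, not the iClingo-based implementation itself.

# Iterative deepening over domain sizes; illustrative control loop only.
# has_model(theory, n) is a hypothetical oracle standing in for grounding and solving
# the first-order theory over a domain of n elements (the paper uses iClingo for this).
def find_finite_model(theory, has_model, max_size=50):
    for n in range(1, max_size + 1):
        model = has_model(theory, n)
        if model is not None:
            return n, model          # smallest domain size admitting a model
    return None

# Toy usage: "there exist two distinct elements" is satisfiable exactly from size 2 on.
toy_theory = "exists x, y: x != y"
oracle = lambda th, n: {"domain": list(range(n))} if n >= 2 else None
print(find_finite_model(toy_theory, oracle))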
An Extended Query language for action languages (and its application to aggregates and preferences)
(2006)
Algorithm selection (AS) techniques - which involve choosing from a set of algorithms the one expected to solve a given problem instance most efficiently - have substantially improved the state of the art in solving many prominent AI problems, such as SAT, CSP, ASP, MAXSAT and QBF. Although several AS procedures have been introduced, not too surprisingly, none of them dominates all others across all AS scenarios. Furthermore, these procedures have parameters whose optimal values vary across AS scenarios. This holds specifically for the machine learning techniques that form the core of current AS procedures, and for their hyperparameters. Therefore, to successfully apply AS to new problems, algorithms and benchmark sets, two questions need to be answered: (i) how to select an AS approach and (ii) how to set its parameters effectively. We address both of these problems simultaneously by using automated algorithm configuration. Specifically, we demonstrate that we can automatically configure claspfolio 2, which implements a large variety of different AS approaches and their respective parameters in a single, highly-parameterized algorithm framework. Our approach, dubbed AutoFolio, allows researchers and practitioners across a broad range of applications to exploit the combined power of many different AS methods. We demonstrate AutoFolio can significantly improve the performance of claspfolio 2 on 8 out of the 13 scenarios from the Algorithm Selection Library, leads to new state-of-the-art algorithm selectors for 7 of these scenarios, and matches state-of-the-art performance (statistically) on all other scenarios. Compared to the best single algorithm for each AS scenario, AutoFolio achieves average speedup factors between 1.3 and 15.4.
An asymptotic analysis and improvement of AdaBoost in the binary classification case (in Japanese)
(2000)
A multiple interpretation scheme is an ordered sequence of morphisms. The ordered multiple interpretation of a word is obtained by concatenating the images of that word in the given order of morphisms. The arbitrary multiple interpretation of a word is the semigroup generated by the images of that word. These interpretations are naturally extended to languages. Four types of ambiguity of multiple interpretation schemata on a language are defined: o-ambiguity, internal ambiguity, weakly external ambiguity and strongly external ambiguity. We investigate the problem of deciding whether a multiple interpretation scheme is ambiguous on regular languages.
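As a concrete illustration, both interpretations can be computed directly from the morphism images. The Python sketch below follows the definitions above for letter-to-string morphisms; the arbitrary interpretation, being an infinite semigroup in general, is shown truncated to products of length at most two.

# Ordered vs. arbitrary multiple interpretation of a word under a sequence of morphisms.
# Morphisms are given letter by letter and extended homomorphically to words.
def apply_morphism(morphism, word):
    # Extend a letter-to-string morphism to a word.
    return "".join(morphism[c] for c in word)

h1 = {"a": "ab", "b": "b"}
h2 = {"a": "a",  "b": "ba"}
scheme = [h1, h2]          # an ordered sequence of morphisms
word = "ab"

# Ordered multiple interpretation: concatenate the images in the given order.
images = [apply_morphism(h, word) for h in scheme]
ordered = "".join(images)                      # 'abb' + 'aba' = 'abbaba'

# Arbitrary multiple interpretation: the semigroup generated by the images,
# shown here truncated to all products of length at most two.
semigroup_truncated = set(images) | {a + b for a in images for b in images}

print(ordered)
print(sorted(semigroup_truncated))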
Owing to the loose coupling between replicas, the replica-exchange (RE) class of algorithms should be able to benefit greatly from using as many resources as available. However, the ability to effectively use multiple distributed resources to reduce the time to completion remains a challenge at many levels. Additionally, an implementation of a pleasingly distributed algorithm such as replica-exchange, which is independent of infrastructural details, does not exist. This paper proposes an extensible and scalable framework based on Simple API for Grid Applications that provides a general-purpose, opportunistic mechanism to effectively use multiple resources in an infrastructure-independent way. By analysing the requirements of the RE algorithm and the challenges of implementing it on real production systems, we propose a new abstraction (BIGJOB), which forms the basis of the adaptive redistribution and effective scheduling of replicas.
Evaluating the quality of ranking functions is a core task in web search and other information retrieval domains. Because query distributions and item relevance change over time, ranking models often cannot be evaluated accurately on held-out training data. Instead, considerable effort is spent on manually labeling the relevance of query results for test queries in order to track ranking performance. We address the problem of estimating ranking performance as accurately as possible on a fixed labeling budget. Estimates are based on a set of most informative test queries selected by an active sampling distribution. Query labeling costs depend on the number of result items as well as item-specific attributes such as document length. We derive cost-optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
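For reference, the Discounted Cumulative Gain of a ranked result list can be computed as below. The gain and discount variant shown (2^rel - 1 over log2(rank + 1)) is one common convention and is assumed here for illustration; it is not necessarily the exact variant used in the paper.

# Discounted Cumulative Gain for one query; the gain/discount variant shown
# (2^rel - 1 over log2(rank + 1)) is a common convention assumed for illustration.
import math

def dcg(relevances, k=None):
    rels = relevances[:k] if k else relevances
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(relevances, k=None):
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels of the results returned for one query, in ranked order.
print(dcg([3, 2, 3, 0, 1], k=5))
print(ndcg([3, 2, 3, 0, 1], k=5))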
Information integration across company borders becomes increasingly important for the success of product lifecycle management in industry and complex supply chains. Semantic technologies are about to play a crucial role in this integrative process. However, cross-company data exchange requires mechanisms to enable fine-grained access control definition and enforcement, preventing unauthorized leakage of confidential data across company borders. Currently available semantic repositories are not sufficiently equipped to satisfy this important requirement. This paper presents an infrastructure for controlled sharing of semantic data between cooperating business partners. First, we motivate the need for access control in semantic data federations by a case study in the industrial service sector. Furthermore, we present an architecture for controlling access to semantic repositories that is based on our newly developed SemForce security service. Finally, we show the practical feasibility of this architecture by an implementation and several performance experiments.
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. By implementing a re-executable model the manual effort of a multi-criteria site analysis can be reduced. The aim is to determine the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the “Center for Spatial Information Science and Systems” (CSISS). This paper discusses effort, benefits and problems associated with the use of the web services.
We discuss the relaxation of a class of nonlinear elliptic Cauchy problems with data on a piece S of the boundary surface by means of a variational approach known in the optimal control literature as "equation error method". By the Cauchy problem is meant any boundary value problem for an unknown function y in a domain X with the property that the data on S, if combined with the differential equations in X, allow one to determine all derivatives of y on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We also admit overdetermined elliptic systems, in which case the set of those Cauchy data on S for which the Cauchy problem is solvable is very "thin". For this reason we discuss a variational setting of the Cauchy problem which always possesses a generalised solution.
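In schematic form, the equation error method replaces the ill-posed Cauchy problem by a minimization of the equation residual over functions that reproduce the Cauchy data on S. The display below is a generic sketch of such a functional, assuming an operator A with Dirichlet data g_0 and Neumann data g_1 on S; it is an illustrative form, not the precise functional analyzed in the paper.

% Schematic equation-error functional for the Cauchy problem  A y = f  in X,
% with Cauchy data  y = g_0  and  \partial_\nu y = g_1  on S (illustrative form only):
\min_{y}\ J(y) = \tfrac{1}{2}\,\| A y - f \|_{L^2(X)}^{2}
\quad \text{subject to} \quad
y|_{S} = g_0, \qquad \partial_\nu y|_{S} = g_1 .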
Researchers and developers worldwide have put their efforts into the design, development and use of information and communication technology to support teaching and learning. This research is driven by pedagogical as well as technological disciplines. The most challenging ideas are currently found in the application of mobile, ubiquitous, pervasive, contextualized and seamless technologies for education, which we shall refer to as pervasive education. This article provides a comprehensive overview of the existing work in this field and categorizes it with respect to educational settings. Using this approach, best practice solutions for certain educational settings and open questions for pervasive education are highlighted in order to inspire interested developers and educators. The work is assigned to different fields, identified by the main pervasive technologies used and the educational settings. Based on these assignments we identify areas within pervasive education that are currently disregarded or deemed challenging so that further research and development in these fields are stimulated in a trans-disciplinary approach.
We present a new technique for uniquely identifying a single failing vector in an interval of test vectors. This technique is applicable to combinational circuits and to scan-BIST in sequential circuits with multiple scan chains. The proposed method relies on the linearity properties of the MISR and on the use of two test sequences, which are both applied to the circuit under test. The second test sequence is derived from the first in a straightforward manner and the same test pattern source is used for both test sequences. If an interval contains only a single failing vector, the algebraic analysis is guaranteed to identify it. We also show analytically that if an interval contains two failing vectors, the probability that this case is interpreted as one failing vector is very low. We present experimental results for the ISCAS benchmark circuits to demonstrate the use of the proposed method for identifying failing test vectors.
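The linearity that such methods exploit can be seen in a bit-level MISR model: the signature of the XOR-superposition of two response streams equals the XOR of their signatures. The sketch below shows a generic MISR over GF(2) with a hypothetical feedback polynomial; it demonstrates this superposition property only and is not the paper's identification algorithm.

# Generic multiple-input signature register (MISR) over GF(2), illustrating linearity:
# misr(a XOR b) == misr(a) XOR misr(b) for equal-length response streams.
# The 8-bit feedback polynomial used here is a hypothetical choice for illustration.
FEEDBACK = 0b10111000
WIDTH = 8

def parity(x):
    return bin(x).count("1") & 1

def misr(responses, width=WIDTH, feedback=FEEDBACK):
    # Compact a sequence of width-bit responses into one signature.
    state = 0
    for r in responses:
        fb = parity(state & feedback)                  # feedback bit from tapped positions
        state = ((state << 1) | fb) & ((1 << width) - 1)
        state ^= r & ((1 << width) - 1)                # inject the parallel response bits
    return state

a = [0x12, 0x34, 0x56]
b = [0x0F, 0x00, 0xA0]
xored = [x ^ y for x, y in zip(a, b)]
assert misr(xored) == misr(a) ^ misr(b)                # linearity over GF(2)
print(hex(misr(a)), hex(misr(b)), hex(misr(xored)))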
A polynomial translation of logic programs with nested expressions into disjunctive logic programs
(2002)
Scheduling performance in computational grids can benefit greatly from accurate execution time estimation for parallel jobs. Most existing approaches to parallel job execution time estimation, however, require ample past job traces and explicit correlations between the job execution time and outer layout parameters such as the number of consumed processors, the user-estimated execution time and the job ID, which are hard to obtain or reveal. This paper presents and evaluates a novel execution time estimation approach for parallel jobs, user-behavior clustering for execution time estimation, which gives more accurate execution time estimates for parallel jobs by exploring job similarity and revealing user submission patterns. Experimental results show that, compared to state-of-the-art algorithms, our approach can improve the accuracy of job execution time estimation by up to 5.6%, while the time our approach spends on calculation can be reduced by up to 3.8%.
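One way to picture the user-behavior clustering idea: cluster historical jobs by simple submission features and predict a new job's runtime as the mean runtime of its cluster. The feature choice, the number of clusters, and the use of k-means below are illustrative assumptions, not the paper's exact method.

# Illustrative sketch: cluster jobs by submission features and estimate a new job's
# runtime as the mean runtime of its cluster. Features and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Columns: requested processors, user-estimated runtime (s); rows: historical jobs.
X = np.array([[8, 600], [8, 650], [64, 7200], [64, 6800], [16, 1200], [16, 1100]])
actual_runtime = np.array([550, 610, 7000, 6500, 1000, 1050])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
cluster_mean = {c: actual_runtime[km.labels_ == c].mean() for c in range(km.n_clusters)}

new_job = np.array([[16, 1300]])
predicted = cluster_mean[int(km.predict(new_job)[0])]
print(predicted)   # estimated execution time for the new job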
In this article, we consider high-dimensional data which contains a low-dimensional non-Gaussian structure contaminated with Gaussian noise, and propose a new linear method to identify the non-Gaussian subspace. Our method, NGCA (Non-Gaussian Component Analysis), is based on a very general semi-parametric framework and has a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method.
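A loose illustration of the underlying problem: after whitening, a non-Gaussian direction can be found by scoring candidate projections with a non-Gaussianity index such as excess kurtosis. The projection-pursuit-style toy below shows this on synthetic data; it is an illustrative assumption for intuition only and not the NGCA estimator itself.

# Toy illustration: find a non-Gaussian direction in whitened data by scoring random
# candidate directions with excess kurtosis. This is NOT the NGCA estimator.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
signal = rng.uniform(-1, 1, n) ** 3          # non-Gaussian one-dimensional structure
noise = rng.standard_normal((n, 9))          # nine Gaussian noise dimensions
X = np.column_stack([signal, noise])

# Whitening: zero mean, identity covariance.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
Z = Xc @ W

def excess_kurtosis(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

# Score random unit directions; the best should roughly align with the non-Gaussian component.
best = max((rng.standard_normal(Z.shape[1]) for _ in range(500)),
           key=lambda d: abs(excess_kurtosis(Z @ (d / np.linalg.norm(d)))))
best = best / np.linalg.norm(best)
print(np.round(best, 2))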