Refine
Has Fulltext
- no (27)
Year of publication
- 2013 (27)
Document Type
- Article (16)
- Doctoral Thesis (8)
- Monograph/Edited Volume (2)
- Preprint (1)
Is part of the Bibliography
- yes (27)
Keywords
- Theory (2)
- Active evaluation (1)
- Answer Set Programming (1)
- Answer set programming (1)
- Cluster computing (1)
- Continuous Testing (1)
- Continuous Versioning (1)
- Deal of the Day (1)
- Debugging (1)
- Design (1)
- Evolution (1)
- Experimentation (1)
- Explore-first Programming (1)
- Fault Localization (1)
- Green computing (1)
- Grounded theory (1)
- Human Factors (1)
- Image and video stylization (1)
- Information federation (1)
- Information retrieval (1)
- Information security (1)
- Landmark visibility (1)
- Loyalty (1)
- Pedestrian navigation (1)
- Prototyping (1)
- Ranking (1)
- Semantic web (1)
- Service orientation (1)
- Structural equation modeling (1)
- Usability testing (1)
- User-centred design (1)
- answer set programming (1)
- artistic rendering (1)
- belief merging (1)
- belief revision (1)
- controlled vocabularies (1)
- course timetabling (1)
- educational timetabling (1)
- metadata (1)
- nonphotorealistic rendering (NPR) (1)
- program encodings (1)
- proof complexity (1)
- semantic web (1)
- strong equivalence (1)
- tableau calculi (1)
Institute
- Institut für Informatik und Computational Science (27)
The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and for deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies that are already part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Art and Architecture Thesaurus (AAT) with the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
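For readers unfamiliar with how OpenRefine talks to a vocabulary service, the batch query format of its reconciliation API can be sketched as follows. This is a minimal illustration, not the paper's actual workflow; the endpoint URL is a placeholder, since LCSH and AAT reconciliation services are hosted by different providers.

```python
import json

# Placeholder endpoint; real LCSH/AAT reconciliation services are
# provided by third parties and their URLs vary.
ENDPOINT = "https://example.org/reconcile/lcsh"

def build_queries(terms, limit=3):
    """Build a batch payload in the OpenRefine reconciliation API shape:
    a JSON object mapping query keys (q0, q1, ...) to query records."""
    return {f"q{i}": {"query": term, "limit": limit}
            for i, term in enumerate(terms)}

# The payload below would be POSTed to ENDPOINT as the `queries`
# form parameter; candidate matches come back keyed by q0, q1, ...
payload = {"queries": json.dumps(build_queries(["Printmaking", "Woodcuts"]))}
```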
Programmers make many changes to a program before eventually finding a good solution for a given task. In this course of change, every intermediate development state can be of value, for example when a promising idea suddenly turns out to be inappropriate or the interplay of objects proves more complex than initially expected before the changes were made. Programmers would benefit from tool support that provides immediate access to the source code and run-time state of previous development states of interest. We present IDE extensions, implemented for Squeak/Smalltalk, to preserve, retrieve, and work with this information. With such tool support, programmers can work without worry because they can rely on tools that help them with whatever their explorations reveal. They no longer have to follow certain best practices merely to avoid undesired consequences of changing code.
Multi tenancy for cloud-based in-memory column databases: workload management and data placement
(2013)
Evaluating the quality of ranking functions is a core task in web search and other information retrieval domains. Because query distributions and item relevance change over time, ranking models often cannot be evaluated accurately on held-out training data. Instead, considerable effort is spent on manually labeling the relevance of query results for test queries in order to track ranking performance. We address the problem of estimating ranking performance as accurately as possible on a fixed labeling budget. Estimates are based on a set of most informative test queries selected by an active sampling distribution. Query labeling costs depend on the number of result items as well as item-specific attributes such as document length. We derive cost-optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
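For reference, Discounted Cumulative Gain, one of the two performance measures named in the abstract above, can be computed as in this minimal sketch. The exponential gain and logarithmic discount shown are the common variants; the paper's exact definitions may differ.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted Cumulative Gain over the top-k ranked items:
    sum of (2^rel - 1) / log2(rank + 1) for ranks 1..k."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

# Graded relevance labels of a ranked result list, best guess first.
scores = [3, 2, 0, 1]
value = dcg_at_k(scores, 3)
```

Estimating this quantity from a labeled sample, rather than computing it exactly, is what the cost-optimal sampling distributions in the paper are designed for.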
If sites, cities, and landscapes are captured at different points in time using technology such as LiDAR, large collections of 3D point clouds result. Their efficient storage, processing, analysis, and presentation constitute a challenging task because of limited computation, memory, and time resources. In this work, we present an approach to detect changes in massive 3D point clouds based on an out-of-core spatial data structure that is designed to store data acquired at different points in time and to efficiently attribute 3D points with distance information. Based on this data structure, we present and evaluate different processing schemes optimized for performing the calculation on the CPU and GPU. In addition, we present a point-based rendering technique adapted for attributed 3D point clouds, to enable effective out-of-core real-time visualization of the computation results. Our approach enables conclusions to be drawn about temporal changes in large highly accurate 3D geodata sets of a captured area at reasonable preprocessing and rendering times. We evaluate our approach with two data sets from different points in time for the urban area of a city, describe its characteristics, and report on applications.
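The per-point distance attribution at the core of this change detection can be illustrated with a brute-force sketch. The paper relies on an out-of-core spatial data structure and CPU/GPU processing schemes instead; this quadratic version is only meant to show the idea.

```python
import math

def nearest_distance(point, cloud):
    """Euclidean distance from one 3D point to its nearest neighbor in a cloud."""
    return min(math.dist(point, q) for q in cloud)

def attribute_changes(cloud_t0, cloud_t1):
    """Attribute every point of the newer epoch (t1) with its distance
    to the older epoch (t0); large distances indicate change."""
    return [(p, nearest_distance(p, cloud_t0)) for p in cloud_t1]

old_epoch = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
new_epoch = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0)]
attributed = attribute_changes(old_epoch, new_epoch)
```

The distance attribute can then drive a point-based renderer, e.g. by mapping it to color.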
Derived algebraic systems
(2013)
Simplicity is a mindset, a way of looking at solutions, an extremely wide-ranging philosophical stance on the world, and thus a deeply rooted cultural paradigm. The culture of "less" can be profoundly disruptive, cutting out existing "standard" elements from products and business models, thereby revolutionizing entire markets.