004 Datenverarbeitung; Informatik
Refine
Year of publication
- 2013 (82)
Document Type
- Article (38)
- Monograph/Edited Volume (21)
- Doctoral Thesis (17)
- Conference Proceeding (2)
- Master's Thesis (2)
- Postprint (2)
Language
- English (53)
- German (28)
- Multiple languages (1)
Keywords
- Modellierung (3)
- CSCW (2)
- Cloud Computing (2)
- Datenbank (2)
- Datenschutz (2)
- E-Learning (2)
- Evolution (2)
- Forschungsprojekte (2)
- Future SOC Lab (2)
- HCI (2)
- Hauptspeicherdatenbank (2)
- ISSEP (2)
- In-Memory Technologie (2)
- Informatics Education (2)
- MOOCs (2)
- Modeling (2)
- Multicore Architekturen (2)
- Online Course (2)
- Online-Learning (2)
- Online-Lernen (2)
- Onlinekurs (2)
- Tele-Lab (2)
- Tele-Teaching (2)
- Verifikation (2)
- openHPI (2)
- tele-TASK (2)
- topics (2)
- verification (2)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3DCityDB (1)
- Adaptive hypermedia (1)
- Anfragesprache (1)
- Anisotroper Kuwahara Filter (1)
- Anomalien (1)
- Aspect-Oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- BPM (1)
- BPMN (1)
- Batchverarbeitung (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- CS Ed Research (1)
- CS at school (1)
- CS curriculum (1)
- CityGML (1)
- Citymodel (1)
- Cloud computing (1)
- Clusteranalyse (1)
- Comparing programming environments (1)
- Computer Networks (1)
- Computernetzwerke (1)
- Constraints (1)
- Contracts (1)
- Course of Study (1)
- Data Modeling (1)
- Data Privacy (1)
- Database (1)
- Database Cost Model (1)
- Databases (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenbanken (1)
- Datenintegration (1)
- Datenmodellierung (1)
- Datensicht (1)
- Design Thinking (1)
- Differenz von Gauss Filtern (1)
- Digitale Whiteboards (1)
- Disambiguierung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Fehlerbeseitigung (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formale Verifikation (1)
- General subject “Information” (1)
- Geschäftsanwendungen (1)
- Gruppierung von Prozessinstanzen (1)
- ICT (1)
- ICT curriculum (1)
- IPv4 (1)
- IPv6 (1)
- In-Memory Database (1)
- In-Memory technology (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Infinite State (1)
- Information Ethics (1)
- Infrastructure (1)
- Infrastruktur (1)
- Inklusionsabhängigkeit (1)
- Interactive Rendering (1)
- Interaktives Rendering (1)
- Internet (1)
- Internet Protocol (1)
- Internet Service Provider (1)
- Internet applications (1)
- Internetanwendungen (1)
- Invarianten (1)
- Invariants (1)
- Java Security Framework (1)
- Konferenz (1)
- Kontext (1)
- Laufzeitmodelle (1)
- Leistungsfähigkeit (1)
- Linguistisch (1)
- Link-Entdeckung (1)
- Lively Kernel (1)
- Megamodell (1)
- Megamodels (1)
- Mischmodelle (1)
- Mobile Application Development (1)
- Mobilgeräte (1)
- Model Execution (1)
- Model-Driven Engineering (1)
- Modeling Languages (1)
- Modell (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Models at Runtime (1)
- Multicore architectures (1)
- Navigation (1)
- Nebenläufigkeit (1)
- Network Politics (1)
- Netzpolitik (1)
- Nicht-photorealistisches Rendering (1)
- Object Constraint Programming (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Owner-Retained Access Control (ORAC) (1)
- Performance (1)
- Policy Languages (1)
- Policy Sprachen (1)
- PostGIS (1)
- Primary informatics (1)
- Problem solving (1)
- Problem solving strategies (1)
- Process Enactment (1)
- Process Mining (1)
- Process Modeling (1)
- Programmierkonzepte (1)
- Programming environments for children (1)
- Programming learning (1)
- Prozessausführung (1)
- Prozessmodellierung (1)
- Prozessmodellsuche (1)
- Präsentation (1)
- Python (2)
- Query (1)
- Realzeitsysteme (1)
- Relevanz (1)
- Research Projects (1)
- Runtime Binding (1)
- SQL (1)
- Scalability (1)
- Schema-Entdeckung (1)
- Scientific understanding of Information (1)
- Search Algorithms (1)
- Selektion (1)
- Self-Adaptive Software (1)
- Semantische Analyse (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-orientierte Systeme (1)
- Similarity Measures (1)
- Similarity Search (1)
- Skalierbarkeit (1)
- SoaML (1)
- Softwaretest (1)
- Spaltenlayout (1)
- Stadtmodell (1)
- Suchverfahren (1)
- Systems of Systems (1)
- Teaching problem solving strategies (1)
- Test-getriebene Fehlernavigation (1)
- Theorembeweisen (1)
- Timed Automata (1)
- Trajectories (1)
- Transaktionen (1)
- Unbegrenzter Zustandsraum (1)
- Unifikation (1)
- Verification (1)
- Verteiltes Arbeiten (1)
- Videoanalyse (1)
- Videometadaten (1)
- Vorhersage (1)
- Web applications (1)
- Web of Data (1)
- anisotropic Kuwahara filter (1)
- anomalies (1)
- answer set programming (1)
- back-in-time (1)
- batch processing (1)
- behavioral specification (1)
- circuits (1)
- cloud computing (1)
- clustering (1)
- coherence-enhancing filtering (1)
- collaboration (1)
- competence (1)
- computational thinking (1)
- computer science (1)
- computing science education (1)
- concept of algorithm (1)
- conference (1)
- constructionism (1)
- context awareness (1)
- course timetabling (1)
- cscw (1)
- data integration (1)
- data view (1)
- debugging (1)
- dependency discovery (1)
- design thinking (1)
- difference of Gaussians (1)
- digital whiteboard (1)
- educational timetabling (1)
- embedded-systems (1)
- engaged computing (1)
- entity alignment (1)
- evolution (1)
- fehlende Daten (1)
- flow-based bilateral filter (1)
- general secondary education (1)
- gesture (1)
- graph clustering (1)
- in-memory technology (1)
- inclusion dependency (1)
- index (1)
- informatics curricula (1)
- informatics education (1)
- informatics in upper secondary education (1)
- input accuracy (1)
- instruction (1)
- interaction (1)
- interactive simulation (1)
- interface (1)
- international comparison (1)
- international study (1)
- lesson (1)
- linguistic (1)
- link discovery (1)
- logic programming (1)
- machine learning (1)
- mandatory computer science foundations (1)
- map/reduce (1)
- maschinelles Lernen (1)
- misconceptions (1)
- missing data (1)
- mixture models (1)
- mobile (1)
- mobile devices (1)
- model (1)
- model-based prototyping (1)
- modelling (1)
- models (1)
- multicore architectures (1)
- non-photorealistic rendering (1)
- prediction (1)
- presentation (1)
- primary school (1)
- process instance grouping (1)
- process mining (1)
- process model search (1)
- proving (1)
- querying (1)
- rapid prototyping (1)
- real-time systems (1)
- relevance (1)
- remote collaboration (1)
- requirements engineering (1)
- research projects (1)
- schema discovery (1)
- science (1)
- selection (1)
- semantic analysis (1)
- service-oriented systems (1)
- sets (1)
- similarity (1)
- situated learning (1)
- social networking (1)
- sorting (1)
- spreadsheets (1)
- stochastic Petri nets (1)
- stochastische Petri Netze (1)
- systems biology (1)
- systems of systems (1)
- teacher (1)
- teacher education (1)
- teacher training (1)
- teaching material (1)
- test items (1)
- test-driven fault navigation (1)
- testing (1)
- theorem (1)
- timed automata (1)
- touch input (1)
- transduction (1)
- video analysis (1)
- video metadata (1)
- word sense disambiguation (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine, because doing so allows certain biological outcomes to be enabled or disabled. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches can address large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach, aiming at the computation of minimal interventions in logical signaling networks, relying on Kleene's three-valued logic and fixpoint semantics. We address this problem within answer set programming and show that it greatly outperforms previous work using dedicated algorithms.
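As a minimal illustration of the qualitative machinery involved, the following Python sketch implements Kleene's three-valued logic (true, false, unknown) and shows how fixing a single input can disable an output in a toy signaling rule. All names and the example network are invented for illustration; the paper itself works with answer set programming, not Python.

```python
# Minimal sketch of Kleene's three-valued logic (true, false, unknown),
# as used for qualitative reasoning over logical signaling networks.

T, F, U = "true", "false", "unknown"

def k_not(a):
    return {T: F, F: T, U: U}[a]

def k_and(a, b):
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return U

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))

# Toy signaling rule: the output is active iff (ligand AND receptor).
# With the receptor state unknown, the output stays unknown; an
# intervention that fixes the receptor to false disables the output.
print(k_and(T, U))  # -> unknown
print(k_and(T, F))  # -> false (output disabled by the intervention)
```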
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. A modeling language for course timetabling must be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding must be extensible, so that new constraints can be captured and constraints can be switched between hard and soft, and flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach naturally satisfies these requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by one or two rules. Each hard constraint is expressed using ASP integrity constraints and aggregates. Each soft constraint S is expressed by rules whose head is of the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best known bounds, we succeeded in either improving or matching the bounds for many combinations of problem instances and formulations.
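To make the penalty(S, V, C) pattern concrete, here is a hypothetical Python analogue for one ITC-2007 soft constraint, RoomCapacity (one penalty point per student above a room's capacity). The data layout and function names are invented; the paper expresses each such constraint as one or two ASP rules, not Python code.

```python
# Hypothetical Python analogue of the penalty(S, V, C) pattern: for a
# soft constraint S, detect each violation V and compute its cost C.

def room_capacity_penalties(assignment, students, capacity):
    """RoomCapacity soft constraint (ITC-2007): each student above the
    room's capacity counts one penalty point."""
    penalties = []
    for (course, timeslot, room) in assignment:
        overflow = students[course] - capacity[room]
        if overflow > 0:
            # (S, V, C): constraint name, violation witness, cost
            penalties.append(("room_capacity", (course, timeslot, room), overflow))
    return penalties

assignment = [("c1", 3, "rA"), ("c2", 3, "rB")]
students = {"c1": 40, "c2": 25}
capacity = {"rA": 30, "rB": 30}
print(room_capacity_penalties(assignment, students, capacity))
# -> [('room_capacity', ('c1', 3, 'rA'), 10)]
```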
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving content and sharing it with others who cannot be in the same location. They also lack further digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate the enablers of and obstacles to the adoption of digital whiteboard systems. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from former research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I present three studies on the use of Tele-Board with different user groups and foci. I use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases it is not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer. However, I found that these perceived benefits differ greatly with each user and usage context. If a tool can be used in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries, with a constantly growing user base. Its use, advantages, and disadvantages are described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I present a detailed analysis of digital whiteboard use in different contexts, with design implications for future systems.
HPI Future SOC Lab
(2013)
The “HPI Future SOC Lab” is a cooperation between the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners. The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies. This technical report presents the results of research projects carried out in 2012. Selected projects presented their results on June 18 and November 26, 2012, at the Future SOC Lab Day events.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on the unifiability of predicates and can also use a linguistic approach for the selection. The aim of the technique is to reduce the set of formulae and increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation, and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines the frequencies of lexemes, and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it; with minimal adaptations, it can also be used for higher-order and modal logic. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover. The CASC results and the benchmarks with the problems of the CASC of the year 2012 (CASC-J6) show that the concept has a positive impact on the performance of automated theorem provers. Benchmarks with two different theorem provers that use different calculi have also shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven to be competitive, to some extent, with the concept of SinE, and even helped one of the theorem provers solve problems in the CASC that SinE selection solved more slowly or not at all. Finally, the evaluation implies that combining the unification-based and linguistic selection yields further improvements, even though no problem-specific optimisation was done.
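The linguistic selection can be pictured with a small, hedged tf-idf sketch in Python: each formula is treated as a bag of lexemes and scored against the lexemes of the conjecture. Function names and the toy data are invented and do not reflect TEMPLAR's actual implementation.

```python
# Hedged sketch of tf-idf-based formula relevance: formulae with lexemes
# that are shared with the conjecture but rare overall score highest.

import math
from collections import Counter

def tfidf_scores(formulae, conjecture_lexemes):
    n = len(formulae)
    # Document frequency: in how many formulae does each lexeme occur?
    df = Counter(lex for f in formulae for lex in set(f))
    scores = []
    for f in formulae:
        tf = Counter(f)
        score = sum(
            (tf[lex] / len(f)) * math.log(n / df[lex])
            for lex in conjecture_lexemes if lex in tf
        )
        scores.append(score)
    return scores

formulae = [["group", "inverse", "identity"],
            ["ring", "ideal"],
            ["group", "homomorphism"]]
# Rank formulae by relevance to a conjecture about groups and identities.
print(tfidf_scores(formulae, {"group", "identity"}))
```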
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results become more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the information constituting a context in video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata pertaining to a context should be limited by content-based segmentation boundaries. The evaluation results support both hypotheses and show increased recall and precision for annotated entities, especially for metadata originating from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for semantic video exploration.
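The confidence-ordered disambiguation loop described above might look roughly as follows in Python. The scoring and candidate-lookup functions are passed in as stand-ins; the thesis derives them from descriptive texts and semantic relationships in a knowledge base, so this is a structural sketch only, with invented names.

```python
# Sketch of confidence-ordered disambiguation: metadata items are
# processed from high to low confidence, and already-disambiguated
# items serve as context reference points for later ones.

def disambiguate(items, candidates_for, relatedness):
    """items: list of (text, confidence); candidates_for(text) returns
    candidate entities; relatedness(entity, context) scores coherence."""
    context = []
    resolved = {}
    for text, conf in sorted(items, key=lambda i: i[1], reverse=True):
        best = max(candidates_for(text),
                   key=lambda e: relatedness(e, context),
                   default=None)
        if best is not None:
            resolved[text] = best
            context.append(best)  # reference point for later items
    return resolved

items = [("Berlin", 0.9), ("Python", 0.4)]
cands = {"Berlin": ["city:Berlin", "band:Berlin"],
         "Python": ["language:Python", "snake:Python"]}
def relatedness(entity, context):
    # Toy coherence measure: count context entities of the same kind.
    return sum(entity.split(":")[0] == c.split(":")[0] for c in context)
print(disambiguate(items, cands.get, relatedness))
```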
Constraints allow developers to specify desired properties of systems in a number of domains, and have those properties be maintained automatically. This results in compact, declarative code, avoiding scattered code to check and imperatively re-satisfy invariants. Despite these advantages, constraint programming is not yet widespread, with standard imperative programming still the norm. There is a long history of research on integrating constraint programming with the imperative paradigm. However, this integration typically does not unify the constructs for encapsulation and abstraction from both paradigms. This impedes re-use of modules, as client code written in one paradigm can only use modules written to support that paradigm. Modules require redundant definitions if they are to be used in both paradigms. We present a language – Babelsberg – that unifies the constructs for encapsulation and abstraction by using only object-oriented method definitions for both declarative and imperative code. Our prototype – Babelsberg/R – is an extension to Ruby, and continues to support Ruby's object-oriented semantics. It allows programmers to add constraints to existing Ruby programs in incremental steps by placing them on the results of normal object-oriented message sends. It is implemented by modifying a state-of-the-art Ruby virtual machine. The performance of standard object-oriented code without constraints is only modestly impacted, with typically less than 10% overhead compared with the unmodified virtual machine. Furthermore, our architecture for adding multiple constraint solvers allows Babelsberg to deal with constraints in a variety of domains. We argue that our approach provides a useful step toward making constraint solving a generic tool for object-oriented programmers. We also provide example applications, written in our Ruby-based implementation, which use constraints in a variety of application domains, including interactive graphics, circuit simulations, data streaming with both hard and soft constraints on performance, and configuration file management.
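As a rough, language-shifted sketch of the object-constraint idea (Babelsberg/R itself extends Ruby), the following Python toy shows a constraint that is declared once and then re-satisfied after an imperative assignment. In Babelsberg the re-satisfaction is implicit and handled by a constraint solver; here it is reduced to explicit one-way propagation, and all class and method names are invented.

```python
# Toy sketch of maintaining a declared property across imperative updates.
# Not Babelsberg/R's actual API; real solving uses a constraint solver.

class Point:
    def __init__(self, x):
        self.x = x

class OffsetConstraint:
    """Maintain b.x == a.x + offset by one-way propagation."""
    def __init__(self, a, b, offset):
        self.a, self.b, self.offset = a, b, offset
        self.satisfy()

    def satisfy(self):
        self.b.x = self.a.x + self.offset

a, b = Point(0), Point(0)
c = OffsetConstraint(a, b, 10)
a.x = 5
c.satisfy()   # in Babelsberg, re-satisfaction happens implicitly
print(b.x)    # -> 15
```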
The automation of business processes helps companies execute their processes more efficiently. In existing business process management systems, the instances of a process are executed completely independently of one another. However, synchronizing instances with similar characteristics, e.g., the same data, can reduce execution costs. For example, if an online retailer receives two orders from the same customer with the same delivery address, they can be packaged and shipped together to save shipping costs. In this paper, we borrow concepts from the database domain and introduce data views for business processes in order to identify instances that can be synchronized. Based on these data views, we introduce the concept of batch regions. A batch region enables context-aware instance synchronization across several connected activities. The proposed concept is evaluated with a case study that compares the costs of normal process execution with those of batch processing.
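A hedged Python sketch of the data-view idea: orders are projected onto (customer, delivery address), and instances that agree on this view form a candidate batch. The record layout and function names are invented for illustration; the paper defines data views and batch regions formally.

```python
# Illustrative sketch: group process instances by a data view so that
# instances sharing the same view value can be batched together.

from collections import defaultdict

def batch_by_data_view(orders, view=lambda o: (o["customer"], o["address"])):
    groups = defaultdict(list)
    for order in orders:
        groups[view(order)].append(order)
    # Only groups with more than one instance benefit from batching.
    return [batch for batch in groups.values() if len(batch) > 1]

orders = [
    {"id": 1, "customer": "alice", "address": "Main St 1"},
    {"id": 2, "customer": "alice", "address": "Main St 1"},
    {"id": 3, "customer": "bob",   "address": "Oak Ave 2"},
]
print(batch_by_data_view(orders))  # orders 1 and 2 form a batch
```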
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom, to do what, and why can a software system be designed and realized that supports the stakeholders in doing their work. To capture and structure the requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure the consistency of stakeholder scenarios, such methods introduce additional overhead, since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes, as used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, by observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
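A minimal sketch of what simulating a formal behavioral specification can amount to: a state machine driven by stakeholder events, whose resulting trace could be animated in a domain-specific visualization. The states, events, and transitions below are invented examples, not taken from the thesis.

```python
# Sketch: a behavioral specification as a transition table, simulated by
# replaying stakeholder events and recording the visited states.

SPEC = {
    ("idle",                 "request_review"): "waiting_for_reviewer",
    ("waiting_for_reviewer", "accept"):         "in_review",
    ("in_review",            "approve"):        "approved",
}

def simulate(spec, start, events):
    state, trace = start, [start]
    for event in events:
        state = spec.get((state, event), state)  # ignore undefined events
        trace.append(state)
    return trace

print(simulate(SPEC, "idle", ["request_review", "accept", "approve"]))
# -> ['idle', 'waiting_for_reviewer', 'in_review', 'approved']
```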