004 Datenverarbeitung; Informatik
Refine
Year of publication
- 2013 (82)
Document Type
- Article (38)
- Monograph/Edited Volume (21)
- Doctoral Thesis (17)
- Conference Proceeding (2)
- Master's Thesis (2)
- Postprint (2)
Language
- English (53)
- German (28)
- Multiple languages (1)
Keywords
- Modellierung (3)
- CSCW (2)
- Cloud Computing (2)
- Datenbank (2)
- Datenschutz (2)
- E-Learning (2)
- Evolution (2)
- Forschungsprojekte (2)
- Future SOC Lab (2)
- HCI (2)
- Hauptspeicherdatenbank (2)
- ISSEP (2)
- In-Memory Technologie (2)
- Informatics Education (2)
- MOOCs (2)
- Modeling (2)
- Multicore Architekturen (2)
- Online Course (2)
- Online-Learning (2)
- Online-Lernen (2)
- Onlinekurs (2)
- Tele-Lab (2)
- Tele-Teaching (2)
- Verifikation (2)
- openHPI (2)
- tele-TASK (2)
- topics (2)
- verification (2)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3DCityDB (1)
- Adaptive hypermedia (1)
- Anfragesprache (1)
- Anisotroper Kuwahara Filter (1)
- Anomalien (1)
- Aspect-Oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- BPM (1)
- BPMN (1)
- Batchverarbeitung (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- CS Ed Research (1)
- CS at school (1)
- CS curriculum (1)
- CityGML (1)
- Citymodel (1)
- Cloud computing (1)
- Clusteranalyse (1)
- Comparing programming environments (1)
- Computer Networks (1)
- Computernetzwerke (1)
- Constraints (1)
- Contracts (1)
- Course of Study (1)
- Data Modeling (1)
- Data Privacy (1)
- Database (1)
- Database Cost Model (1)
- Databases (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenbanken (1)
- Datenintegration (1)
- Datenmodellierung (1)
- Datensicht (1)
- Design Thinking (1)
- Differenz von Gauss Filtern (1)
- Digitale Whiteboards (1)
- Disambiguierung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Fehlerbeseitigung (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formale Verifikation (1)
- General subject “Information” (1)
- Geschäftsanwendungen (1)
- Gruppierung von Prozessinstanzen (1)
- ICT (1)
- ICT curriculum (1)
- IPv4 (1)
- IPv6 (1)
- In-Memory Database (1)
- In-Memory technology (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Infinite State (1)
- Information Ethics (1)
- Infrastructure (1)
- Infrastruktur (1)
- Inklusionsabhängigkeit (1)
- Interactive Rendering (1)
- Interaktives Rendering (1)
- Internet (1)
- Internet Protocol (1)
- Internet Service Provider (1)
- Internet applications (1)
- Internetanwendungen (1)
- Invarianten (1)
- Invariants (1)
- Java Security Framework (1)
- Konferenz (1)
- Kontext (1)
- Laufzeitmodelle (1)
- Leistungsfähigkeit (1)
- Linguistisch (1)
- Link-Entdeckung (1)
- Lively Kernel (1)
- Megamodell (1)
- Megamodels (1)
- Mischmodelle (1)
- Mobile Application Development (1)
- Mobilgeräte (1)
- Model Execution (1)
- Model-Driven Engineering (1)
- Modeling Languages (1)
- Modell (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Models at Runtime (1)
- Multicore architectures (1)
- Navigation (1)
- Nebenläufigkeit (1)
- Network Politics (1)
- Netzpolitik (1)
- Nicht-photorealistisches Rendering (1)
- Object Constraint Programming (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Owner-Retained Access Control (ORAC) (1)
- Performance (1)
- Policy Languages (1)
- Policy Sprachen (1)
- PostGIS (1)
- Primary informatics (1)
- Problem solving (1)
- Problem solving strategies (1)
- Process Enactment (1)
- Process Mining (1)
- Process Modeling (1)
- Programmierkonzepte (1)
- Programming environments for children (1)
- Programming learning (1)
- Prozessausführung (1)
- Prozessmodellierung (1)
- Prozessmodellsuche (1)
- Präsentation (1)
- Python (2)
- Query (1)
- Realzeitsysteme (1)
- Relevanz (1)
- Research Projects (1)
- Runtime Binding (1)
- SQL (1)
- Scalability (1)
- Schema-Entdeckung (1)
- Scientific understanding of Information (1)
- Search Algorithms (1)
- Selektion (1)
- Self-Adaptive Software (1)
- Semantische Analyse (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-orientierte Systeme (1)
- Similarity Measures (1)
- Similarity Search (1)
- Skalierbarkeit (1)
- SoaML (1)
- Softwaretest (1)
- Spaltenlayout (1)
- Stadtmodell (1)
- Suchverfahren (1)
- Systems of Systems (1)
- Teaching problem solving strategies (1)
- Test-getriebene Fehlernavigation (1)
- Theorembeweisen (1)
- Timed Automata (1)
- Trajectories (1)
- Transaktionen (1)
- Unbegrenzter Zustandsraum (1)
- Unifikation (1)
- Verification (1)
- Verteiltes Arbeiten (1)
- Videoanalyse (1)
- Videometadaten (1)
- Vorhersage (1)
- Web applications (1)
- Web of Data (1)
- anisotropic Kuwahara filter (1)
- anomalies (1)
- answer set programming (1)
- back-in-time (1)
- batch processing (1)
- behavioral specification (1)
- circuits (1)
- cloud computing (1)
- clustering (1)
- coherence-enhancing filtering (1)
- collaboration (1)
- competence (1)
- computational thinking (1)
- computer science (1)
- computing science education (1)
- concept of algorithm (1)
- conference (1)
- constructionism (1)
- context awareness (1)
- course timetabling (1)
- cscw (1)
- data integration (1)
- data view (1)
- debugging (1)
- dependency discovery (1)
- design thinking (1)
- difference of Gaussians (1)
- digital whiteboard (1)
- educational timetabling (1)
- embedded-systems (1)
- engaged computing (1)
- entity alignment (1)
- evolution (1)
- fehlende Daten (1)
- flow-based bilateral filter (1)
- general secondary education (1)
- gesture (1)
- graph clustering (1)
- in-memory technology (1)
- inclusion dependency (1)
- index (1)
- informatics curricula (1)
- informatics education (1)
- informatics in upper secondary education (1)
- input accuracy (1)
- instruction (1)
- interaction (1)
- interactive simulation (1)
- interface (1)
- international comparison (1)
- international study (1)
- lesson (1)
- linguistic (1)
- link discovery (1)
- logic programming (1)
- machine learning (1)
- mandatory computer science foundations (1)
- map/reduce (1)
- maschinelles Lernen (1)
- misconceptions (1)
- missing data (1)
- mixture models (1)
- mobile (1)
- mobile devices (1)
- model (1)
- model-based prototyping (1)
- modelling (1)
- models (1)
- multicore architectures (1)
- non-photorealistic rendering (1)
- prediction (1)
- presentation (1)
- primary school (1)
- process instance grouping (1)
- process mining (1)
- process model search (1)
- proving (1)
- querying (1)
- rapid prototyping (1)
- real-time systems (1)
- relevance (1)
- remote collaboration (1)
- requirements engineering (1)
- research projects (1)
- schema discovery (1)
- science (1)
- selection (1)
- semantic analysis (1)
- service-oriented systems (1)
- sets (1)
- similarity (1)
- situated learning (1)
- social networking (1)
- sorting (1)
- spreadsheets (1)
- stochastic Petri nets (1)
- stochastische Petri Netze (1)
- systems biology (1)
- systems of systems (1)
- teacher (1)
- teacher education (1)
- teacher training (1)
- teaching material (1)
- test items (1)
- test-driven fault navigation (1)
- testing (1)
- theorem (1)
- timed automata (1)
- touch input (1)
- transduction (1)
- video analysis (1)
- video metadata (1)
- word sense disambiguation (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have long regarded these as the most suitable subjects for bilingual instruction, largely due to the intercultural perspective that a bilingual approach provides. Though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It seems only a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step, it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime; by interpreting them, they are executed directly to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
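To make the idea of an interpreted runtime megamodel concrete, here is a minimal Python sketch, not EUREMA's actual notation or API: runtime models and adaptation activities (simple monitor/analyze/plan/execute functions over hypothetical model names such as "reflection") are kept as explicit elements of a megamodel object, and a small interpreter runs them as one feedback loop.

```python
# Hypothetical sketch (not EUREMA's notation): a megamodel whose elements
# are runtime models and adaptation activities, interpreted as a feedback loop.

class RuntimeModel:
    """A named runtime model reflecting some aspect of the adaptable software."""
    def __init__(self, name, content=None):
        self.name = name
        self.content = content or {}

def monitor(models):
    # Update the reflection model from (simulated) sensor data.
    models["reflection"].content["load"] = 0.9

def analyze(models):
    # Write an issue into the analysis model if the load is too high.
    load = models["reflection"].content.get("load", 0.0)
    models["analysis"].content["overloaded"] = load > 0.8

def plan(models):
    # Derive an adaptation step from the analysis model.
    if models["analysis"].content.get("overloaded"):
        models["plan"].content["action"] = "add_server"

def execute(models):
    # Effect the planned change on the adaptable software (simulated here).
    action = models["plan"].content.pop("action", None)
    if action:
        print("executing adaptation:", action)

class Megamodel:
    """Keeps models and activities explicit and interprets them at runtime."""
    def __init__(self, models, activities):
        self.models = models          # runtime models by name
        self.activities = activities  # ordered adaptation activities

    def run_feedback_loop(self):
        for activity in self.activities:   # interpret, don't compile
            activity(self.models)

models = {name: RuntimeModel(name) for name in ("reflection", "analysis", "plan")}
loop = Megamodel(models, [monitor, analyze, plan, execute])
loop.run_feedback_loop()   # prints: executing adaptation: add_server
```

Because the interpreter walks the megamodel at runtime, the lists of models and activities could be exchanged between iterations, which hints at the kind of dynamic adjustment of feedback loops the article describes.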
The process of introducing compulsory ICT education at primary school level in the Czech Republic should be completed next year. Programming and Information, two topics from the basics of computer science, have been included in a new textbook. The question is whether the new chapters of the textbook are comprehensible for primary school teachers, who have undergone no training in computer science. The paper reports on a pilot verification project in which pre-service primary school teachers were trained to teach these informatics topics.
Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
(2013)
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods:
- The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area.
- The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively.
- The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye-lenses and the combination of planar and non-planar projections.
- The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps to combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents.
- The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.
The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the information forming a context in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item. The lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic exploration of videos.
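As a rough illustration of the two hypotheses, the sketch below (hypothetical data and a toy relatedness score, not the thesis' context model) processes metadata items in descending order of confidence and lets already resolved entities act as the context against which the next, less reliable item is disambiguated.

```python
# Illustrative sketch only: confidence-ordered processing of metadata items,
# with previously resolved entities serving as context for disambiguation.

def disambiguate(item, candidates, context):
    """Pick the candidate entity most related to the current context."""
    def score(candidate):
        # Toy relatedness: count shared related-entity identifiers.
        return len(candidate["related"] & context)
    return max(candidates, key=score)

def analyze_metadata(items, candidate_index):
    # items: list of {"text": ..., "confidence": ...}
    # candidate_index: surface text -> candidate entities with related-entity sets
    context = set()
    annotations = {}
    for item in sorted(items, key=lambda i: i["confidence"], reverse=True):
        candidates = candidate_index.get(item["text"], [])
        if not candidates:
            continue
        entity = disambiguate(item, candidates, context)
        annotations[item["text"]] = entity["uri"]
        context |= entity["related"]   # grow the context for later items
    return annotations

candidate_index = {
    "Jaguar": [
        {"uri": "dbpedia:Jaguar_Cars", "related": {"dbpedia:Coventry"}},
        {"uri": "dbpedia:Jaguar", "related": {"dbpedia:Felidae"}},
    ],
    "Coventry": [{"uri": "dbpedia:Coventry", "related": {"dbpedia:Coventry"}}],
}
items = [
    {"text": "Coventry", "confidence": 0.9},   # reliable title metadata
    {"text": "Jaguar", "confidence": 0.4},     # noisy user tag
]
print(analyze_metadata(items, candidate_index))
```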
.NET Gadgeteer Workshop
(2013)
Cost models are an essential part of database systems, as they are the basis of query performance optimization. Based on predictions made by cost models, the fastest query execution plan can be chosen and executed, or algorithms can be tuned and optimised. In-memory databases shift the focus from disk accesses to main-memory accesses and CPU costs, compared to disk-based systems where input and output costs dominate the overall costs and other processing costs are often neglected. However, modelling memory accesses is fundamentally different and common models no longer apply. This work presents a detailed parameter evaluation for the plan operators scan with equality selection, scan with range selection, positional lookup, and insert in in-memory column stores. Based on this evaluation, a cost model based on cache misses for estimating the runtime of the considered plan operators using different data structures is developed. Considered are uncompressed columns, bit-compressed and dictionary-encoded columns with sorted and unsorted dictionaries. Furthermore, tree indices on the columns and dictionaries are discussed. Finally, partitioned columns consisting of one partition with a sorted and one with an unsorted dictionary are investigated. New values are inserted in the unsorted dictionary partition and moved periodically by a merge process to the sorted partition. An efficient attribute merge algorithm is described, supporting the update performance required to run enterprise applications on read-optimised databases. Further, a memory-traffic-based cost model for the merge process is provided.
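The flavor of such a cache-miss-based estimate can be sketched as follows; the constants and formulas are illustrative assumptions rather than the parameter values evaluated in the paper.

```python
# A minimal sketch of a cache-miss-based cost estimate for a full-column
# scan with equality selection; constants and formulas are illustrative
# assumptions, not the cost model developed in the work above.

import math

CACHE_LINE_BYTES = 64           # typical cache-line size
MISS_PENALTY_NS = 100           # assumed cost of one main-memory access

def scan_cache_misses(num_rows, value_bytes):
    """A sequential scan touches every cache line the column occupies."""
    return math.ceil(num_rows * value_bytes / CACHE_LINE_BYTES)

def scan_cost_ns(num_rows, value_bytes):
    return scan_cache_misses(num_rows, value_bytes) * MISS_PENALTY_NS

# Uncompressed 4-byte integers vs. 2-byte dictionary value-ids: the narrower
# encoding halves the number of cache lines read, and hence the scan cost.
rows = 10_000_000
print("uncompressed :", scan_cost_ns(rows, 4) / 1e6, "ms (estimated)")
print("dict-encoded :", scan_cost_ns(rows, 2) / 1e6, "ms (estimated)")
```

The same pattern extends to range selections, positional lookups, and inserts by counting the cache lines each access pattern touches.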
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user. Data privacy continues to be a very important topic, as our dependency on electronic communication maintains its current growth, and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increases the likelihood of data misuse. Early privacy protection techniques, such as anonymous email and payment systems, have focused on data avoidance and anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over further distribution and use of personal private data. Previous efforts to integrate privacy awareness into data processing workflows have focused on the extension of existing access control frameworks with privacy aware functions or have analysed specific individual problems such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist and can be studied to prove their effectiveness for privacy protection. Second-level issues that stem from the practical application of the implemented mechanisms, such as usability, life-time data management, and changes in trustworthiness, have received very little attention so far, mainly because they require actual implementations to be studied. Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released. Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy. We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC has been proposed by McCollum et al. as an alternate access control mechanism that leaves the authority over access decisions with the originator of the data. The data owner is given control over the release policy for his or her personal data, and he or she can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow the deterministic policy evaluation by different entities. The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont et al. and others. We developed a unique policy combination approach that takes usability aspects for the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: a Default Policy provides basic privacy protection if no specific rules have been entered by the data owner; an Owner Policy part allows the customisation of the default policy by the data owner; and a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies, which, for example, would exclude him or her from further access to the private data. The combined evaluation of these three policy parts yields the necessary access decision. The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation. We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework. Privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager. When we later extended our work to implement server-side protection mechanisms, we found several drawbacks of privacy enforcement through the Java Security Framework. We solved this problem by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications and provide a way to enforce data owner-defined privacy policies for business applications.
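A minimal sketch of the three-part policy combination follows, with an invented rule format and precedence order; the actual framework defines its own policy language and evaluation semantics.

```python
# Hedged sketch of the three-part policy combination (rule syntax and
# precedence here are illustrative, not the framework's actual format):
# the Safety Policy overrides the Owner Policy, which overrides the
# Default Policy; anything not permitted is denied.

def evaluate(policy, subject, action):
    """Return 'permit', 'deny', or None if this policy part is silent."""
    return policy.get((subject, action))

def access_decision(default_policy, owner_policy, safety_policy,
                    subject, action):
    for part in (safety_policy, owner_policy, default_policy):  # precedence
        decision = evaluate(part, subject, action)
        if decision is not None:
            return decision
    return "deny"   # default-deny if no part matches

default_policy = {("anyone", "read"): "deny"}
owner_policy   = {("shop", "read"): "permit",
                  ("owner", "read"): "deny"}       # disadvantageous rule
safety_policy  = {("owner", "read"): "permit"}     # owner keeps access

print(access_decision(default_policy, owner_policy, safety_policy,
                      "owner", "read"))   # permit -- Safety Policy wins
print(access_decision(default_policy, owner_policy, safety_policy,
                      "shop", "read"))    # permit -- Owner Policy applies
```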
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. However, this is not necessarily the case in environments where logging may be compromised because it is performed manually. For example, hospital staff may need to manually enter information regarding the patient’s treatment. As a result, events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs. This way, we facilitate analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
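A drastically simplified sketch of the repair step is shown below, assuming a single expected path through the model; the actual method aligns traces against stochastic Petri nets and uses Bayesian networks to also estimate timestamps of the inserted events, none of which appears here.

```python
# Drastically simplified illustration of log repair: align an incomplete
# trace against one expected path of the process model and re-insert the
# skipped activities (timestamp estimation and probabilistic alignment
# from the paper are intentionally omitted).

def repair_trace(trace, model_path):
    """Insert model activities that the trace skipped, keeping their order."""
    repaired, i = [], 0
    for activity in model_path:
        if i < len(trace) and trace[i] == activity:
            repaired.append(trace[i]); i += 1          # observed event
        else:
            repaired.append(f"{activity}*")            # repaired (missing) event
    repaired.extend(trace[i:])                         # keep unexplained extras
    return repaired

model_path = ["register", "triage", "treat", "discharge"]
observed   = ["register", "treat"]                     # gaps from manual logging
print(repair_trace(observed, model_path))
# ['register', 'triage*', 'treat', 'discharge*']
```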
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grained epistemic modules for learning Python programming. We define the necessary traces to be built in order to study the navigation trajectories that pupils will draw across this hypermedia. It may be browsed by pupils either as course support or as extra help for solving the list of exercises (mainly for algorithmics discovery). By leaving the locus of control to the learner, we want to observe the different trajectories they finally draw through our system. These trajectories may be abstracted and interpreted as strategies and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method, and the expected results.
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners’ problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
The automation of business processes helps companies execute their processes more efficiently. In existing business process management systems, the instances of a process are executed completely independently of one another. However, synchronizing instances with similar characteristics, such as the same data, can reduce execution costs. For example, if an online retailer receives two orders from the same customer with the same shipping address, these can be packaged and shipped together to save shipping costs. In this paper, we use concepts from the database domain and introduce data views for business processes in order to identify instances that can be synchronized. Based on these data views, we introduce the concept of batch regions. A batch region enables context-aware instance synchronization across several connected activities. The introduced concept is evaluated with a case study that compares the costs of normal process execution and batch processing.
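As an illustration of how a data view groups instances for batched execution, the following sketch uses invented order attributes (customer and shipping address) as the view; it is not the formal batch-region semantics from the paper.

```python
# Illustrative sketch (simplified, hypothetical attribute names): a data
# view projects each process instance onto the attributes relevant for
# batching; instances with the same view can be executed together in one
# batch region, e.g. orders shipped together to save shipping cost.

from collections import defaultdict

def data_view(instance):
    # Project the instance data onto the batching-relevant attributes.
    return (instance["customer"], instance["shipping_address"])

def assign_batches(instances):
    batches = defaultdict(list)
    for instance in instances:
        batches[data_view(instance)].append(instance["order_id"])
    return batches

orders = [
    {"order_id": 1, "customer": "C1", "shipping_address": "Main St 5"},
    {"order_id": 2, "customer": "C1", "shipping_address": "Main St 5"},
    {"order_id": 3, "customer": "C2", "shipping_address": "Oak Ave 9"},
]
for view, batch in assign_batches(orders).items():
    print(view, "-> ship orders together:", batch)
```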
We present a concept for better integrating practical teaching into student teacher education in Computer Science. As an introduction to the workshop, different possible scenarios are discussed on the basis of examples. Afterwards, workshop participants will have the opportunity to discuss the application of these concepts in other settings.
This talk will describe My Digital Life (TU100), a distance learning module that introduces computer science through immediate engagement with ubiquitous computing (ubicomp). It will describe some of the principles and concepts we have adopted for this modern computing introduction: the idea of the ‘informed digital citizen’; engagement through narrative; playful pedagogy; making the power of ubicomp available to novices; setting technical skills in real contexts. It will also trace how the pedagogy is informed by experiences and research in Computer Science education.
The correction of software failures tends to be very cost-intensive because their debugging is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: starting with a test case that reproduces the observable failure, they have to follow failure causes on the infection chain back to the root cause (defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition, and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis-testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers’ needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
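The anomaly detection that feeds the navigation can be pictured as a coverage-based ranking of suspicious system parts. The sketch below uses a generic Ochiai-style score over passing and failing tests as a stand-in; it is not necessarily the exact metric implemented in the Path Tools framework.

```python
# Illustrative only: rank methods by how suspicious their test coverage
# looks, using a generic Ochiai-style score over passing/failing tests.

import math

def suspiciousness(covered_by, total_failed):
    """Ochiai score for one method from the outcomes of tests that executed it."""
    failed = sum(1 for outcome in covered_by if outcome == "fail")
    passed = len(covered_by) - failed
    if failed == 0 or total_failed == 0:
        return 0.0
    return failed / math.sqrt(total_failed * (failed + passed))

# coverage: method -> outcomes of the tests that executed it (toy data)
coverage = {
    "Cart.total":   ["fail", "fail", "pass"],
    "Cart.add":     ["fail", "pass", "pass", "pass"],
    "Logger.write": ["pass", "pass"],
}
total_failed = 2
ranking = sorted(coverage,
                 key=lambda m: suspiciousness(coverage[m], total_failed),
                 reverse=True)
for method in ranking:
    print(f"{method:12s} {suspiciousness(coverage[method], total_failed):.2f}")
```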
There are two common approaches to implementing a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages, but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem; it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing object models and execution models of dynamic object-oriented languages to the Maxine VM and the application of this approach to Squeak/Smalltalk. The representation of objects in and the execution of dynamic object-oriented languages pose challenges to the Maxine VM, which lacks the variation points necessary to enable an effortless and straightforward implementation of such languages' execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to unveil such missing variation points.
Scientific writing is an important skill for computer science and computer engineering professionals. In this paper we present a concept for a writing across the curriculum program directed towards scientific writing. The program is built around a hierarchy of learning outcomes. The hierarchy is constructed through analyzing the learning outcomes in relation to competencies that are needed to fulfill them.
Scalable compatibility for embedded real-time components via language progressive timed automata
(2013)
The correct composition of individually developed components of embedded real-time systems is a challenge, since non-functional properties have to be considered in addition to functional ones. One example is the compatibility of real-time properties, which play a decisive role in embedded systems. Today, the compatibility of such properties is checked in elaborate integration and configuration tests at the end of the development process, and in the worst case these tests fail. For this reason, a number of formal methods have been developed that allow an early analysis of the real-time properties of components, so that incompatibilities of real-time properties can be ruled out in later phases. However, existing methods require a set of conditions to be met that real systems can hardly satisfy, or the analysis techniques they use do not scale to larger systems. This work presents an approach that is based on the formal model of the timed automaton and does not impose conditions that a real system can hardly satisfy. The presented approach includes a framework that allows a modular analysis in which only communicating components have to be checked pairwise against each other. This enables a scalable analysis of real-time properties that does not require conditions which real systems can only partially fulfil.
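The modular structure of the analysis (not the timed-automata verification itself) can be pictured as follows: only pairs of components that actually communicate are checked against each other, so the effort grows with the number of channels rather than with the global state space. The timing check used here is a deliberately naive placeholder, not the language-progressive timed-automata check developed in the work.

```python
# Very rough sketch of the modular, pairwise analysis structure only.
# The real approach verifies language-progressive timed automata; the
# "period vs. deadline" test below is a stand-in placeholder.

from itertools import combinations

components = {
    "sensor":     {"period_ms": 10, "deadline_ms": 5},
    "controller": {"period_ms": 10, "deadline_ms": 8},
    "actuator":   {"period_ms": 20, "deadline_ms": 15},
}
channels = {("sensor", "controller"), ("controller", "actuator")}

def compatible(producer, consumer):
    # Placeholder check: the consumer must react within the producer's period.
    return consumer["deadline_ms"] <= producer["period_ms"]

def check_pairwise(components, channels):
    ok = True
    for a, b in combinations(components, 2):
        if (a, b) not in channels and (b, a) not in channels:
            continue                       # only communicating pairs are checked
        if not compatible(components[a], components[b]):
            print(f"incompatible timing: {a} -> {b}")
            ok = False
    return ok

print("system compatible:", check_pairwise(components, channels))
```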
Japan launched the new Course of Study in April 2012, which has been carried out in elementary schools and junior high schools. It will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors point out what role experts in informatics and information studies education should play in general education centered around information studies, which is meant to help the people of the nation lead an active, powerful, and flexible life through to a satisfying end.