Institut für Informatik und Computational Science
Year of publication
- 2013 (58)
Document Type
- Article (37)
- Doctoral Thesis (15)
- Monograph/Edited Volume (2)
- Conference Proceeding (2)
- Master's Thesis (1)
- Preprint (1)
Keywords
- Theory (2)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- Active evaluation (1)
- Anisotroper Kuwahara Filter (1)
- Answer Set Programming (1)
- Answer set programming (1)
- Aspect-Oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Berührungseingaben (1)
- Cloud Computing (1)
- Cloud computing (1)
- Cluster computing (1)
- Clusteranalyse (1)
- Continuous Testing (1)
- Continuous Versioning (1)
- Data Privacy (1)
- Datenschutz (1)
- Deal of the Day (1)
- Debugging (1)
- Design (1)
- Differenz von Gauss Filtern (1)
- E-Learning (1)
- Eingabegenauigkeit (1)
- Evolution (1)
- Experimentation (1)
- Explore-first Programming (1)
- Fault Localization (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Green computing (1)
- Grounded theory (1)
- HCI (1)
- Human Factors (1)
- Image and video stylization (1)
- Information federation (1)
- Information retrieval (1)
- Information security (1)
- Interactive Rendering (1)
- Interaktives Rendering (1)
- Internet applications (1)
- Internetanwendungen (1)
- Java Security Framework (1)
- Landmark visibility (1)
- Learning Analytics (1)
- Leistungsfähigkeit (1)
- Life-Long Learning (1)
- Linguistisch (1)
- Loyalty (1)
- Mischmodelle (1)
- Mobilgeräte (1)
- Modell (1)
- Nicht-photorealistisches Rendering (1)
- Owner-Retained Access Control (ORAC) (1)
- Pedestrian navigation (1)
- Performance (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Prototyping (1)
- Ranking (1)
- Relevanz (1)
- Scalability (1)
- Selektion (1)
- Semantic web (1)
- Service orientation (1)
- Skalierbarkeit (1)
- Structural equation modeling (1)
- Theorembeweisen (1)
- Unifikation (1)
- Usability testing (1)
- User-centred design (1)
- Vorhersage (1)
- Web of Data (1)
- anisotropic Kuwahara filter (1)
- answer set programming (1)
- artistic rendering (1)
- belief merging (1)
- belief revision (1)
- clustering (1)
- coherence-enhancing filtering (1)
- controlled vocabularies (1)
- course timetabling (1)
- difference of Gaussians (1)
- educational timetabling (1)
- entity alignment (1)
- flow-based bilateral filter (1)
- graph clustering (1)
- input accuracy (1)
- lebenslanges Lernen (1)
- linguistic (1)
- machine learning (1)
- map/reduce (1)
- maschinelles Lernen (1)
- metadata (1)
- mixture models (1)
- mobile devices (1)
- model (1)
- non-photorealistic rendering (1)
- nonphotorealistic rendering (NPR) (1)
- prediction (1)
- program encodings (1)
- proof complexity (1)
- proving (1)
- relevance (1)
- selection (1)
- semantic web (1)
- strong equivalence (1)
- tableau calculi (1)
- theorem (1)
- topics (1)
- touch input (1)
This contribution presents the teaching and learning concept for fostering software engineering competencies in the mechatronics degree program at the Hochschule Aschaffenburg. The concept is multi-stage, with lecture, seminar, and project sequences. Challenges and potential improvements are identified and presented. Finally, an overview is given of how teaching and learning concepts can be developed further within the scope of a recently started research project.
Where girls take the role of boys in CS - attitudes of CS students in a female-dominated environment
(2013)
A survey has been carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female-dominated environment, showing the differences between male and female students across academic years. We also compare the attitudes of freshman students from two different cultures (the University of Baghdad, Iraq, and the University of Potsdam, Germany).
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. A modeling language for course timetabling must be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding must be extensible, both for capturing new constraints and for switching them between hard and soft, and flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach naturally satisfies these requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head has the form penalty(S,V,C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best known bounds, we either improved or matched the bounds for many combinations of problem instances and formulations.
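As a concrete illustration of this encoding style, here is a minimal, hypothetical sketch, not the paper's ITC-2007 encoding: a toy instance with one hard constraint written as an integrity constraint and one invented "spread" soft constraint whose head has the penalty(S,V,C) form, solved via the Python API of the clingo ASP solver (assuming clingo is installed, e.g. `pip install clingo`).

```python
# Toy illustration of the paper's encoding pattern: hard constraints as
# integrity constraints, soft constraints as rules with a penalty(S,V,C)
# head, and a #minimize statement summing the penalty costs.
from clingo import Control

PROGRAM = """
lecture(l1). lecture(l2). timeslot(1..2). room(r1).

% Choice rule: assign each lecture exactly one timeslot and room.
1 { assigned(L,T,R) : timeslot(T), room(R) } 1 :- lecture(L).

% Hard constraint: no two lectures may share a room and timeslot.
:- assigned(L1,T,R), assigned(L2,T,R), L1 < L2.

% Invented soft constraint "spread": violation V with cost C = 1
% whenever a lecture sits in timeslot 2.
penalty(spread, violated(L), 1) :- assigned(L,2,_), lecture(L).

% Minimize the total penalty cost over all soft constraints.
#minimize { C,S,V : penalty(S,V,C) }.
"""

def main():
    ctl = Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    # Prints each candidate answer set; the last one printed is optimal.
    ctl.solve(on_model=lambda m: print("Answer:", m.symbols(shown=True)))

if __name__ == "__main__":
    main()
```

With two lectures, two timeslots, and one room, the hard constraint forces the lectures into different timeslots, so the optimal answer set incurs exactly one penalty atom with cost 1.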
We present the design and first results of a novel computer science course for geodesy students. The concept combines three didactic ideas: context orientation, peer tutoring, and practical relevance. Over two semesters, the students are to understand and learn to apply important foundations of computer science. Tight integration of the assignments with a context relevant to non-computer scientists, together with a very high proportion of independent student work, is intended to increase motivation for topics outside the students' own discipline. The results show that the course was very successful.
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles that allow easy access to and reuse of data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings, some of which we tackle in the main part of this work.

First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals.

Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approaches are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion. We also illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from a very large number of sources.

Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. This concept alignment method must also execute very fast, to serve as preprocessing for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input through comparison of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw relation conclusions, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web.

In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
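To make the "group first, then compare within groups" idea behind HCM-style alignment concrete, here is a minimal sketch under stated assumptions: it is not the thesis's algorithm, and the token-based grouping key and Jaccard similarity are invented stand-ins for the knowledge representations and similarity computations described above.

```python
# Illustrative blocking-then-matching sketch: cheap grouping keys limit the
# expensive pairwise comparisons to small candidate groups.
from itertools import combinations

def tokens(label: str) -> frozenset:
    # Stand-in "knowledge representation": the set of lowercased words.
    return frozenset(label.lower().split())

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def align(concepts: list[str], threshold: float = 0.5) -> list[tuple[str, str]]:
    # Blocking: concepts sharing any token land in a common group, so the
    # quadratic comparison runs only inside (hopefully small) groups.
    groups: dict[str, list[str]] = {}
    for c in concepts:
        for t in tokens(c):
            groups.setdefault(t, []).append(c)
    matches = set()
    for members in groups.values():
        for a, b in combinations(members, 2):
            if jaccard(tokens(a), tokens(b)) >= threshold:
                matches.add(tuple(sorted((a, b))))
    return sorted(matches)

if __name__ == "__main__":
    # "berlin city" and "city of berlin" share 2 of 3 tokens (0.67 >= 0.5).
    print(align(["berlin city", "city of berlin", "potsdam"]))
```

A real system would add the relation reasoning and contradiction detection mentioned in the abstract within each group; the sketch only shows why grouping keeps the overall comparison tractable at Web scale.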
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in its workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference, due to consolidation in the cloud environment, complicates the performance management of Internet applications.

In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with the workload. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scaling thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system that finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement.

To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim, in which we implemented scalability components mirroring those of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from the real environment, and the workload is generated from the access logs of the 1998 FIFA World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource provisioning overhead, with only a 9% increase in cost.
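The proactive, threshold-based scaling idea can be sketched as follows. This is an illustrative stand-in, not the dissertation's system: the moving-average forecast substitutes for the proposed time-series forecasting algorithm, and the ProactiveScaler class, its thresholds, and the window size are all invented for the example.

```python
# Sketch of proactive horizontal scaling: decide on the *forecast* utilization
# rather than the last sample, so capacity is provisioned before a spike hits.
from collections import deque

class ProactiveScaler:
    def __init__(self, scale_out_at=0.75, scale_in_at=0.30, window=5):
        self.scale_out_at = scale_out_at  # forecast utilization that adds a VM
        self.scale_in_at = scale_in_at    # forecast utilization that removes a VM
        self.history = deque(maxlen=window)

    def forecast(self) -> float:
        # Moving-average stand-in for the proposed forecasting algorithm.
        return sum(self.history) / len(self.history)

    def decide(self, utilization: float, vms: int) -> int:
        """Return the new VM count for the latest utilization sample."""
        self.history.append(utilization)
        predicted = self.forecast()
        if predicted > self.scale_out_at:
            return vms + 1                # provision ahead of the predicted spike
        if predicted < self.scale_in_at and vms > 1:
            return vms - 1                # release capacity before paying for idle VMs
        return vms

if __name__ == "__main__":
    scaler = ProactiveScaler()
    vms = 2
    for u in [0.6, 0.7, 0.8, 0.9, 0.95]:  # steadily rising utilization
        vms = scaler.decide(u, vms)
    print(vms)  # 3: the forecast crosses 0.75 on the last sample
```

The dissertation's optimization additionally searches for the threshold values themselves; the sketch only shows how a forecast turns reactive threshold rules into proactive ones.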