004 Data processing; Computer science
Document Type
- Article (332)
- Monograph/Edited Volume (165)
- Doctoral Thesis (157)
- Conference Proceeding (52)
- Postprint (50)
- Master's Thesis (10)
- Other (7)
- Preprint (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (587)
- German (192)
- Multiple languages (2)
Keywords
- Informatik (21)
- machine learning (17)
- Didaktik (15)
- Hochschuldidaktik (14)
- Ausbildung (13)
- answer set programming (13)
- Cloud Computing (11)
- cloud computing (11)
- Hasso-Plattner-Institut (10)
- Hasso Plattner Institute (9)
Institute
- Institut für Informatik und Computational Science (270)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (215)
- Hasso-Plattner-Institut für Digital Engineering GmbH (127)
- Extern (65)
- Fachgruppe Betriebswirtschaftslehre (27)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (19)
- Institut für Mathematik (16)
- Bürgerliches Recht (12)
- Digital Engineering Fakultät (8)
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The range of methods applied in this study covers spatio-temporal GIS analysis as well as 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance, estimations of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel density estimation (KDE) techniques, LISA-based verification of KDE hotspots as well as geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for efficient communication of the complex findings of geospatial crime scene analysis.
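To give a concrete flavour of the hotspot identification step, the following sketch estimates a kernel density surface over hypothetical, projected incident coordinates and flags grid cells above a percentile threshold as hotspot candidates. It is a minimal illustration in Python with scikit-learn; the bandwidth, grid resolution and threshold are assumptions made for the example and not the parameters used in the thesis, and the subsequent LISA-based verification is omitted.

```python
# Minimal KDE hotspot sketch (illustrative parameters, not those of the thesis).
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical projected incident coordinates (x, y) in metres.
incidents = np.array([
    [100.0, 200.0], [110.0, 205.0], [105.0, 198.0],   # a small cluster
    [500.0, 800.0], [510.0, 790.0],                    # a second cluster
    [900.0, 100.0],                                    # an isolated incident
])

# Fit a 2D Gaussian kernel density estimate; the bandwidth is an assumption.
kde = KernelDensity(kernel="gaussian", bandwidth=50.0).fit(incidents)

# Evaluate the density on a regular grid covering the study area.
xs = np.linspace(0, 1000, 100)
ys = np.linspace(0, 1000, 100)
xx, yy = np.meshgrid(xs, ys)
grid = np.column_stack([xx.ravel(), yy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(xx.shape)

# Flag the top 5% of grid cells as hotspot candidates (threshold is illustrative).
threshold = np.percentile(density, 95)
hotspots = density >= threshold
print(f"{hotspots.sum()} of {hotspots.size} grid cells flagged as hotspot candidates")
```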
"Forschung meets Business" - diese Kombination hat in den vergangenen Jahren immer wieder zu zahlreichen interessanten und fruchtbaren Diskussionen geführt. Mit dem Symposium "Sicherheit in Service-orientierten Architekturen" führt das Hasso-Plattner-Institut diese Tradition fort und lud alle Interessenten zu einem zweitägigen Symposium nach Potsdam ein, um gemeinsam mit Fachvertretern aus der Forschung und Industrie über die aktuellen Entwicklungen im Bereich Sicherheit von SOA zu diskutieren. Die im Rahmen dieses Symposiums vorgestellten Beiträge fokussieren sich auf die Sicherheitsthemen "Sichere Digitale Identitäten und Identitätsmanagement", "Trust Management", "Modell-getriebene SOA-Sicherheit", "Datenschutz und Privatsphäre", "Sichere Enterprise SOA", und "Sichere IT-Infrastrukturen".
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process information but are not involved in modeling. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts. We hypothesize that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses. They are assessed in a controlled experiment using questionnaires, process model feedback tests and video analysis. The experiment compares our approach to structured interviews in a repeated-measurement design. We executed the experiment with 17 student clerks from a trade school; they represent potential users of the tool. Six out of fourteen hypotheses showed a significant difference due to the method applied. Subjects reported more fun and more insights into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation. Moreover, subjects took more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights and instant feedback on preliminary results.
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations such as the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time; new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
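As an illustration of how visual compliance patterns can be mapped to temporal logic, the sketch below instantiates a few control-flow pattern templates as LTL formulas over activity names. The pattern names and formulas follow common usage in the compliance-checking literature and are assumptions for this example; they are not the exact pattern catalogue or mapping defined in the thesis.

```python
# Hedged sketch: instantiating control-flow compliance patterns as LTL formulas.
# The templates are illustrative; the thesis defines its own visual pattern catalogue.

LTL_TEMPLATES = {
    # "Whenever A is executed, B must eventually follow."
    "response":   "G ({A} -> F {B})",
    # "B may only be executed after A has been executed." (W = weak until)
    "precedence": "(!{B}) W {A}",
    # "A must never be executed."
    "absence":    "G (!{A})",
}

def instantiate(pattern: str, **activities: str) -> str:
    """Fill an LTL template with concrete activity names from the process model."""
    return LTL_TEMPLATES[pattern].format(**activities)

# Example: a rule requiring identity verification after account opening.
rule = instantiate("response", A="open_account", B="verify_identity")
print(rule)  # G (open_account -> F verify_identity)
# The resulting formula would then be handed to a model checker together
# with a behavioural model of the business process.
```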
With the 4th conference on computer science education in higher education (Hochschuldidaktik Informatik), a series is continued that began in Stuttgart in 1998 under the title "Informatik und Ausbildung". Since then, these conferences have served university teachers of computer science as a forum for information and discourse on current didactic and educational-policy developments in computer science education. Current topics include, in particular, questions of the educational relevance of computer science content and the challenge posed by a stronger competence orientation in computer science. The submissions to HDI 2010 in Paderborn illustrate diverse efforts to address relevant problems of computer science didactics at universities in Germany (and partly also abroad). The breadth of the spectrum of submissions, however, also caused difficulties in reviewing. In the end, only three of the numerous submissions convinced the reviewers strongly enough to be accepted unreservedly as full papers. Nine further submissions received predominantly positive reviews despite some criticism, so we included them in the conference as short papers or discussion papers.
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values for independently extracting value parts. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
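The following sketch conveys the general idea of filling a missing infobox attribute from article text. It is a deliberately simplified, hand-written pattern matcher and only a stand-in: it does not reflect iPopulator's actual extraction model, and the article, attributes and patterns are invented for the example.

```python
# Simplified illustration of populating missing infobox attributes from article text.
# iPopulator learns per-attribute extractors; these hand-written patterns are a stand-in.
import re

article_text = (
    "Example Corp is a fictional software company. "
    "It was founded in 1998 in Potsdam and employs about 250 people."
)

# Infobox with unfilled attributes, as it might appear for a company article.
infobox = {"name": "Example Corp", "founded": None, "employees": None}

PATTERNS = {
    "founded":   re.compile(r"founded in (\d{4})"),
    "employees": re.compile(r"employs about ([\d,]+) people"),
}

for attribute, pattern in PATTERNS.items():
    if infobox.get(attribute) is None:
        match = pattern.search(article_text)
        if match:
            infobox[attribute] = match.group(1)

print(infobox)  # {'name': 'Example Corp', 'founded': '1998', 'employees': '250'}
```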
On 24 and 25 June 2010, the 3rd German IPv6 Summit 2010 took place at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH in Potsdam; this technical report serves as its documentation. As the national arm of the worldwide IPv6 Forum, the German IPv6 Council promotes the transition to the new generation of the Internet and, in this context, brought together national and international experts from industry, academia and public administration to raise awareness of IPv6 as a topic for the future and to take stock of the progress achieved so far. The limits of the old IPv4 Internet protocol have become more apparent than ever over the past two years. Whereas 11% of all allocatable IPv4 addresses were still available at last year's 2nd IPv6 Summit, this figure has since shrunk to only 6%. This year's guest of honour was the "European father" of the Internet, Prof. Peter T. Kirstein of University College London, whose keynote was complemented by further contributions from high-ranking representatives of politics, academia and industry.
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
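As a rough intuition for why such criteria are needed, the sketch below mimics a backtracking-free forward translation: each source element must be matched by exactly one rule, otherwise the run is rejected rather than backtracked over. The rule representation is a strong simplification invented for this example and is not the TGG formalism or the implementation considered in the paper.

```python
# Highly simplified intuition for backtracking-free forward translation.
# Real TGG rules match graph patterns; here a "rule" just maps a source node type
# to a target node, which is enough to illustrate the determinism requirement.

RULES = [
    ("Class",     lambda node: {"kind": "Table",  "name": node["name"]}),
    ("Attribute", lambda node: {"kind": "Column", "name": node["name"]}),
]

def forward_translate(source_nodes):
    """Translate source nodes one by one; fail instead of backtracking."""
    target_nodes = []
    for node in source_nodes:
        applicable = [rule for node_type, rule in RULES if node_type == node["type"]]
        if len(applicable) != 1:
            # A backtracking semantics would explore alternatives here; the
            # implementation instead requires rule sets that ensure determinism.
            raise ValueError(f"no unique rule for source element of type {node['type']}")
        target_nodes.append(applicable[0](node))
    return target_nodes

source = [{"type": "Class", "name": "Person"}, {"type": "Attribute", "name": "age"}]
print(forward_translate(source))
```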
Background: The development of bioinformatics databases, algorithms, and tools throughout the last years has led to a highly distributed world of bioinformatics services. Without adequate management and development support, in silico researchers are hardly able to exploit the potential of building complex, specialized analysis processes from these services. The Semantic Web aims at thoroughly equipping individual data and services with machine-processable meta-information, while workflow systems support the construction of service compositions. However, even in this combination, in silico researchers currently would have to deal manually with the service interfaces, the adequacy of the semantic annotations, type incompatibilities, and the consistency of service compositions. Results: In this paper, we demonstrate by means of two examples how Semantic Web technology together with adequate domain modelling frees in silico researchers from dealing with interfaces, types, and inconsistencies. In Bio-jETI, bioinformatics services can be graphically combined into complex services without worrying about details of their interfaces or about type mismatches in the composition. These issues are taken care of at the semantic level by Bio-jETI's model checking and synthesis features. Whenever possible, they automatically resolve type mismatches in the considered service setting. Otherwise, they graphically indicate impossible or incorrect service combinations. In the latter case, the workflow developer may either modify the service composition using semantically similar services, or ask for help in developing the missing mediator that correctly bridges the detected type gap. Newly developed mediators should then be adequately annotated semantically and added to the service library for later reuse in similar situations. Conclusion: We show the power of semantic annotations in an adequately modelled and semantically enabled domain setting. Using model checking and synthesis methods, users may orchestrate complex processes from a wealth of heterogeneous services without worrying about interfaces and (type) consistency. The success of this method strongly depends on a careful semantic annotation of the provided services and on its consistent exploitation for analysis, validation, and synthesis. We are convinced that these annotations will become standard, as they will be a precondition for the success and widespread use of (preferred) services in the Semantic Web.
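The kind of consistency issue addressed here can be sketched abstractly: the snippet below describes services by their input and output types and reports where consecutive services in a composition do not fit together, which is the point at which a mediator would be needed. The service names and types are invented for illustration; the real Bio-jETI performs such checks on semantically annotated services using model checking and synthesis.

```python
# Abstract sketch of type-consistency checking in a linear service composition.
# Service names and types are invented; Bio-jETI checks semantic annotations and
# can help synthesize missing mediator services.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    input_type: str
    output_type: str

def check_composition(pipeline):
    """Report type mismatches between consecutive services in a linear composition."""
    mismatches = []
    for upstream, downstream in zip(pipeline, pipeline[1:]):
        if upstream.output_type != downstream.input_type:
            mismatches.append((upstream.name, downstream.name,
                               upstream.output_type, downstream.input_type))
    return mismatches

pipeline = [
    Service("FetchSequence", "AccessionNumber", "DNASequence"),
    Service("Translate",     "DNASequence",     "ProteinSequence"),
    Service("RenderPlot",    "AlignmentResult", "Image"),   # incompatible input
]

for up, down, produced, expected in check_composition(pipeline):
    print(f"Type mismatch: {up} produces {produced}, but {down} expects {expected}; "
          f"a mediator service would be needed here.")
```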