004 Data Processing; Computer Science
Institute: Hasso-Plattner-Institut für Digital Engineering gGmbH
Linked Open Data (LOD) comprises many, often large, public data sets and knowledge bases. These data sets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. In the context of LOD, however, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, originally applied to sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that, in combination, result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage by providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another way to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. By interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. These predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world data sets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining-configuration methodology.
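The subject-based flavor of such a mining configuration can be sketched in a few lines: each subject's predicate set becomes one transaction, and Apriori-style rules over these transactions expose schema dependencies (e.g., entities with a `born` predicate typically also have `name`). The triples and thresholds below are invented for illustration and do not come from the evaluated data sets.

```python
from collections import defaultdict
from itertools import combinations

# Toy RDF triples (subject, predicate, object); illustrative data only.
triples = [
    ("alice", "type", "Person"), ("alice", "name", "Alice"), ("alice", "born", "1980"),
    ("bob", "type", "Person"), ("bob", "name", "Bob"), ("bob", "born", "1975"),
    ("hpi", "type", "Organisation"), ("hpi", "name", "HPI"),
]

# "Subject" mining configuration: one transaction per subject, whose items
# are the predicates used with that subject.
transactions = defaultdict(set)
for s, p, o in triples:
    transactions[s].add(p)

def support(itemset, transactions):
    """Fraction of transactions containing every item of the set."""
    hits = sum(1 for items in transactions.values() if itemset <= items)
    return hits / len(transactions)

def association_rules(transactions, min_support=0.5, min_confidence=0.9):
    """Frequent predicate pairs turned into high-confidence rules lhs -> rhs."""
    items = set().union(*transactions.values())
    rules = []
    for a, b in combinations(sorted(items), 2):
        pair = frozenset((a, b))
        if support(pair, transactions) < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = support(pair, transactions) / support(frozenset((lhs,)), transactions)
            if conf >= min_confidence:
                rules.append((lhs, rhs, conf))
    return rules

rules = association_rules(transactions)
```

On this toy data the mined rules include `name -> type` and `born -> name`, i.e., exactly the kind of schema regularity that auto-completion can suggest to an editing user.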
Unique column combinations of a relational database table are sets of columns that contain only unique value combinations. Discovering such combinations is a fundamental research problem with many applications in data management and knowledge discovery. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small data sets or samples. In this paper, the well-known GORDIAN algorithm and Apriori-based algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
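The level-wise, Apriori-style search with pruning can be illustrated as follows. This is a minimal sketch of unique-column-combination discovery, not the HCA algorithm itself; the toy relation and column names are assumptions.

```python
def minimal_uniques(rows, columns):
    """Apriori-style, level-wise discovery of minimal unique column
    combinations: a combination is unique if its projection contains no
    duplicate rows; only non-unique combinations are extended, and any
    candidate covered by an already-found unique is pruned as non-minimal."""
    def is_unique(combo):
        projection = [tuple(row[c] for c in combo) for row in rows]
        return len(set(projection)) == len(projection)

    minimal = []
    candidates = [(c,) for c in sorted(columns)]
    while candidates:
        next_level = set()
        for combo in candidates:
            if any(set(u) <= set(combo) for u in minimal):
                continue  # superset of a known unique: cannot be minimal
            if is_unique(combo):
                minimal.append(combo)
            else:
                for c in sorted(columns):
                    if c > combo[-1]:  # extend in canonical column order
                        next_level.add(combo + (c,))
        candidates = sorted(next_level)
    return minimal

# Toy relation; no single column is unique, but two pairs are.
rows = [
    {"first": "Ann", "last": "Lee", "city": "Berlin"},
    {"first": "Ann", "last": "Roe", "city": "Berlin"},
    {"first": "Bob", "last": "Lee", "city": "Potsdam"},
]
uniques = minimal_uniques(rows, ["first", "last", "city"])
```

The brute-force cost sits in `is_unique`, which projects the whole table per candidate; the statistics-based pruning of the paper attacks exactly this cost.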
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels cause frustration and cost time during the development and maintenance of ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way, we are able to expand attribute labels consisting of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, into readable names, such as UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that it suggests high-quality decryptions for cryptic attribute labels in a given schema.
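A much-simplified sketch of the recommender idea: learn token-level expansions from attribute-label pairs already mapped in existing workflows, then expand a cryptic label token by token. The mapping corpus below is invented for illustration, and the real approach ranks competing candidate decryptions rather than keeping a single dictionary entry per abbreviation.

```python
# Toy corpus of (cryptic, readable) attribute-label pairs, as they might be
# found in existing ETL mappings; data is illustrative only.
mapped_pairs = [
    ("UNP_AMT", "UNPAID_AMOUNT"),
    ("PEN_AMT", "PENALTY_AMOUNT"),
    ("INT_RATE", "INTEREST_RATE"),
]

def is_abbreviation(abbr, word):
    """True if `abbr` is a (proper) subsequence of `word`, e.g. AMT/AMOUNT."""
    it = iter(word)
    return abbr != word and all(ch in it for ch in abbr)

# Learn token-level expansions by aligning underscore-separated tokens.
expansions = {}
for cryptic, readable in mapped_pairs:
    for abbr, word in zip(cryptic.split("_"), readable.split("_")):
        if is_abbreviation(abbr, word):
            expansions[abbr] = word

def decrypt(label):
    """Expand each known abbreviation; unknown tokens are kept as-is."""
    return "_".join(expansions.get(tok, tok) for tok in label.split("_"))

decrypted = decrypt("UNP_PEN_INT")  # -> "UNPAID_PENALTY_INTEREST"
```

Tokens without a learned expansion pass through unchanged, so partial decryptions degrade gracefully.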
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP can be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations redesigned with JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications; however, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
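JCop itself extends Java; the following Python sketch only illustrates the underlying COP idea of dynamically activated layers holding partial method definitions. All names here (`Layer`, `activate`, `layered`) are illustrative and do not mirror JCop's actual syntax.

```python
import contextlib

active_layers = []  # dynamically scoped stack of currently active layers

class Layer:
    """A named container for partial method definitions."""
    def __init__(self, name):
        self.name, self.methods = name, {}
    def refine(self, method_name):
        def register(func):
            self.methods[method_name] = func  # partial method for this layer
            return func
        return register

@contextlib.contextmanager
def activate(layer):
    """Activate a layer for the dynamic extent of a with-block."""
    active_layers.append(layer)
    try:
        yield
    finally:
        active_layers.pop()

def layered(func):
    """Dispatch to the innermost active layer refining this method,
    falling back to the base definition when no layer applies."""
    def wrapper(*args, **kwargs):
        for layer in reversed(active_layers):
            if func.__name__ in layer.methods:
                return layer.methods[func.__name__](*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper

offline = Layer("offline")  # context: no network accessibility

@layered
def fetch_status():
    return "status from network"

@offline.refine("fetch_status")
def fetch_status_cached():
    return "status from local cache"
```

Inside `with activate(offline): ...` the call `fetch_status()` answers from the cache; outside, the base behavior is restored automatically, which is the dynamic-adaptation control that JCop's aspect language makes declarative.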
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations such as the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to a loss of competitive advantage and thus a loss of market share. Unlike the classical, domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks into tools. Rather, a repeatable process of modeling compliance rules and automatically checking them against business processes is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped to a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
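To illustrate the kind of compliance pattern the abstract mentions, the following is a minimal sketch of checking a "response" pattern, i.e., every occurrence of activity A must eventually be followed by activity B, corresponding to the LTL formula G(A -> F B) evaluated over finite traces. The activity names and traces are invented for illustration; the thesis itself uses model checking on process models, not direct trace replay.

```python
def satisfies_response(trace, a, b):
    """True if every occurrence of activity a in the trace is
    eventually followed by an occurrence of activity b."""
    expecting_b = False
    for activity in trace:
        if activity == a:
            expecting_b = True   # a seen, b now required later
        elif activity == b:
            expecting_b = False  # obligation discharged
    return not expecting_b

# Hypothetical rule: "check_identity" must follow "open_account".
compliant = ["open_account", "check_identity", "approve"]
violating = ["open_account", "approve"]
```

A model checker generalizes this idea from single traces to all possible executions of a process model, which is what makes design-time checking possible.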
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at better process understanding and improvement, this event data can be used to analyze processes with process mining techniques. Process models can be automatically discovered, and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation, and the understanding of process discovery results. Current approaches try to abstract from events in an automated way, but fail to capture the domain knowledge required to map events to business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level required by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a preprocessing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry and on a large industrial process model collection with simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness toward non-conforming execution logs.
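One linguistic signal such matching approaches can exploit is the similarity between event names and activity labels. The following is an illustrative sketch only, not the thesis algorithm (which additionally uses behavioral information and process documentation); the names and threshold are made up.

```python
# Sketch: map each event name to its most similar activity label,
# using simple string similarity as a stand-in for richer
# linguistic matching.
from difflib import SequenceMatcher

def match_events(event_names, activity_labels, threshold=0.6):
    """Return a dict mapping event names to the most similar
    activity label, keeping only matches above the threshold."""
    mapping = {}
    for event in event_names:
        def score(label):
            return SequenceMatcher(None, event.lower(), label.lower()).ratio()
        best = max(activity_labels, key=score)
        if score(best) >= threshold:
            mapping[event] = best
    return mapping
```

In practice, a purely lexical matcher like this is only one component; combining it with behavioral constraints is what makes the mapping reliable enough for conformance checking.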
In recent years, the increased interest in application areas such as social networks has resulted in a rising popularity of graph-based approaches for storing and processing large amounts of interconnected data. To extract useful information from the growing network structures, efficient querying techniques are required.
In this paper, we propose an approach for graph pattern matching that allows a uniform handling of arbitrary constraints over the query vertices. Our technique builds on a previously introduced matching algorithm, which takes concrete host graph information into account to dynamically adapt the employed search plan during query execution. The dynamic algorithm is combined with an existing static approach for search plan generation, resulting in a hybrid technique which we further extend by a more sophisticated handling of filtering effects caused by constraint checks. We evaluate the presented concepts empirically based on an implementation for our graph pattern matching tool, the Story Diagram Interpreter, with queries and data provided by the LDBC Social Network Benchmark. Our results suggest that the hybrid technique may improve search efficiency in several cases, and rarely reduces efficiency.
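The core ideas of the abstract, checking vertex constraints to prune candidates and ordering the search by selectivity, can be sketched as follows. This is a deliberately naive illustration with invented data; the Story Diagram Interpreter's hybrid static/dynamic search-plan technique is far more elaborate.

```python
# Sketch: constraint-aware matching of a single edge pattern, plus
# a selectivity-based ordering of query vertices (most constrained
# first), the basic idea behind adaptive search plans.

def match_edge_pattern(edges, constraint_src, constraint_dst):
    """Return all host-graph edges (u, v) whose endpoints satisfy
    the given vertex constraints."""
    return [(u, v) for (u, v) in edges
            if constraint_src(u) and constraint_dst(v)]

def plan_order(candidates):
    """Order query vertices by ascending candidate-set size, so the
    most selective vertex is bound first."""
    return sorted(candidates, key=lambda qv: len(candidates[qv]))

edges = [(1, 2), (2, 3), (3, 4)]
# Invented constraints: even source vertex, odd destination vertex.
matches = match_edge_pattern(edges, lambda u: u % 2 == 0,
                             lambda v: v % 2 == 1)
```

The dynamic aspect described in the paper goes further: candidate-set sizes are re-estimated during execution, so the plan can be reordered as filtering effects of earlier constraint checks become visible.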
Data integration aims to combine data from different sources and to provide users with a unified view on these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide the information necessary for data integration. We focus on inclusion dependencies (INDs) in general and on a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Furthermore, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs, which allow a certain fraction of the values of A not to be included in B, and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds.
The challenge of this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions that satisfy quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
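The IND definition quoted in the abstract, “A in B” holds iff the values of A form a subset of the values of B, can be demonstrated with a few lines. This naive sketch builds each attribute's value set in one pass and then tests set inclusion; the thesis algorithm avoids such pairwise testing, so this only illustrates the semantics, not the method. Table contents are invented.

```python
# Sketch: discover all unary INDs in a single table, where an
# IND (A, B) means every value of attribute A also occurs in B.

def discover_inds(table):
    """table: dict mapping attribute name -> list of values.
    Returns all attribute pairs (A, B) with A != B such that
    the value set of A is a subset of the value set of B."""
    value_sets = {attr: set(vals) for attr, vals in table.items()}
    return [(a, b)
            for a in value_sets for b in value_sets
            if a != b and value_sets[a] <= value_sets[b]]

# Hypothetical example: a foreign-key-like inclusion.
table = {"order_customer": [1, 2, 2], "customer_id": [1, 2, 3]}
```

Extracting foreign-key candidates from discovered INDs, as the thesis does, then amounts to filtering such pairs with additional heuristics.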
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
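The precision/recall-inspired quality measures for conditions can be sketched in simplified form. This is a loose illustration under assumed inputs (a tuple set, a condition predicate, and a predicate telling whether a tuple satisfies the inclusion); the paper's concrete definitions differ in detail.

```python
# Sketch: precision-like quality of a covering condition (how many
# selected tuples actually satisfy the inclusion) and recall-like
# quality of a completeness condition (how many included tuples the
# condition selects).

def condition_quality(tuples, condition, satisfies):
    selected = [t for t in tuples if condition(t)]
    included = [t for t in tuples if satisfies(t)]
    covering = (sum(1 for t in selected if satisfies(t)) / len(selected)
                if selected else 0.0)      # precision-like measure
    completeness = (sum(1 for t in included if condition(t)) / len(included)
                    if included else 0.0)  # recall-like measure
    return covering, completeness
```

Given such measures, the discovery algorithms search the space of condition attributes and values for conditions whose quality exceeds the user-given thresholds.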