004 Data processing; Computer science
Refine
Year of publication
Document Type
- Monograph/Edited Volume (157)
Language
- English (118)
- German (37)
- Multiple languages (2)
Is part of the Bibliography
- yes (157)
Keywords
- Hasso-Plattner-Institut (10)
- Hasso Plattner Institute (9)
- Forschungskolleg (8)
- Klausurtagung (8)
- Service-oriented Systems Engineering (8)
- cloud computing (7)
- openHPI (7)
- Cloud Computing (6)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- Identitätsmanagement (5)
- In-Memory Technologie (5)
- Multicore Architekturen (5)
- Ph.D. retreat (5)
- cloud (5)
- cyber-physical systems (5)
- quantitative analysis (5)
- service-oriented systems engineering (5)
- Modellierung (4)
- Research School (4)
- Smalltalk (4)
- Virtualisierung (4)
- digital education (4)
- digitale Bildung (4)
- identity management (4)
- multicore architectures (4)
- nested graph conditions (4)
- probabilistic timed systems (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research projects (4)
- research school (4)
- Cloud (3)
- Computer Networks (3)
- Computernetzwerke (3)
- Datenintegration (3)
- Datenschutz (3)
- Design Thinking (3)
- Digitalisierung (3)
- E-Learning (3)
- Graphtransformationen (3)
- IPv4 (3)
- IPv6 (3)
- In-Memory technology (3)
- Infrastructure (3)
- Infrastruktur (3)
- Innovation (3)
- Internet Protocol (3)
- Lively Kernel (3)
- MOOCs (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Ph.D. Retreat (3)
- Privacy (3)
- Sicherheit (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Verifikation (3)
- Werkzeuge (3)
- conference (3)
- digitalization (3)
- graph transformation (3)
- graph transformation systems (3)
- machine learning (3)
- maschinelles Lernen (3)
- privacy (3)
- security (3)
- tele-TASK (3)
- verification (3)
- virtual machines (3)
- virtualization (3)
- ACINQ (2)
- ASIC (2)
- AUTOSAR (2)
- Australian securities exchange (2)
- BCCC (2)
- BPM (2)
- BPMN (2)
- BTC (2)
- Betriebssysteme (2)
- BitShares (2)
- Bitcoin Core (2)
- Blockchain Auth (2)
- Blockchain-Konsortium R3 (2)
- Blockkette (2)
- Blockstack (2)
- Blockstack ID (2)
- Blumix-Plattform (2)
- Blöcke (2)
- Bounded Model Checking (2)
- Byzantine Agreement (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Colored Coins (2)
- DAO (2)
- DPoS (2)
- Data Integration (2)
- Delegated Proof-of-Stake (2)
- Distributed Proof-of-Research (2)
- E-Wallet (2)
- ECDSA (2)
- Eris (2)
- Ether (2)
- Ethereum (2)
- European Union (2)
- Europäische Union (2)
- Federated Byzantine Agreement (2)
- Fehlertoleranz (2)
- FollowMyVote (2)
- Fork (2)
- Formale Verifikation (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Gridcoin (2)
- HPI Schul-Cloud (2)
- Hard Fork (2)
- Hashed Timelock Contracts (2)
- Hauptspeicherdatenbank (2)
- IT-Infrastruktur (2)
- IT-infrastructure (2)
- Internet (2)
- Internet Service Provider (2)
- Internet der Dinge (2)
- Internet of Things (2)
- IoT (2)
- Japanese Blockchain Consortium (2)
- Japanisches Blockchain-Konsortium (2)
- Java (2)
- Kette (2)
- Konferenz (2)
- Konsensalgorithmus (2)
- Konsensprotokoll (2)
- Lightning Network (2)
- Live-Programmierung (2)
- Lock-Time-Parameter (2)
- MERLOT (2)
- MOOC (2)
- Micropayment-Kanäle (2)
- Microsoft Azure (2)
- Model Synchronization (2)
- Model-Driven Engineering (2)
- Modeling (2)
- Modellprüfung (2)
- NASDAQ (2)
- NameID (2)
- Namecoin (2)
- Off-Chain-Transaktionen (2)
- Onename (2)
- Online Course (2)
- Online-Learning (2)
- OpenBazaar (2)
- Oracles (2)
- Orphan Block (2)
- P2P (2)
- Peer-to-Peer Netz (2)
- Peercoin (2)
- PoB (2)
- PoS (2)
- PoW (2)
- Process Modeling (2)
- Proof-of-Burn (2)
- Proof-of-Stake (2)
- Proof-of-Work (2)
- Prozessmodellierung (2)
- Python (2)
- Ressourcenoptimierung (2)
- Ripple (2)
- SCP (2)
- SHA (2)
- SPV (2)
- Schule (2)
- Schwierigkeitsgrad (2)
- Simplified Payment Verification (2)
- Skalierbarkeit der Blockchain (2)
- Slock.it (2)
- Soft Fork (2)
- Steemit (2)
- Stellar Consensus Protocol (2)
- Storj (2)
- Studie (2)
- SysML (2)
- The Bitfury Group (2)
- The DAO (2)
- Timed Automata (2)
- Transaktion (2)
- Two-Way-Peg (2)
- Unspent Transaction Output (2)
- Verlässlichkeit (2)
- Versionsverwaltung (2)
- Verträge (2)
- Virtuelle Maschinen (2)
- Visualisierung (2)
- Watson IoT (2)
- Zielvorgabe (2)
- Zookos Dreieck (2)
- Zookos triangle (2)
- altchain (2)
- alternative chain (2)
- artificial intelligence (2)
- atomic swap (2)
- batch processing (2)
- bidirectional payment channels (2)
- bitcoins (2)
- blockchain (2)
- blockchain consortium (2)
- blockchain-übergreifend (2)
- blocks (2)
- blumix platform (2)
- bounded model checking (2)
- chain (2)
- confirmation period (2)
- consensus algorithm (2)
- consensus protocol (2)
- contest period (2)
- continuous integration (2)
- contracts (2)
- cross-chain (2)
- cyber-physische Systeme (2)
- data profiling (2)
- debugging (2)
- decentralized autonomous organization (2)
- dependability (2)
- dezentrale autonome Organisation (2)
- difficulty (2)
- difficulty target (2)
- digital enlightenment (2)
- digital learning platform (2)
- digital sovereignty (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- doppelter Hashwert (2)
- double hashing (2)
- fault tolerance (2)
- federated voting (2)
- graph constraints (2)
- hashrate (2)
- in-memory technology (2)
- incremental graph pattern matching (2)
- innovation (2)
- intelligente Verträge (2)
- inter-chain (2)
- k-inductive invariant checking (2)
- kontinuierliche Integration (2)
- lebenslanges Lernen (2)
- ledger assets (2)
- lifelong learning (2)
- live programming (2)
- merged mining (2)
- merkle root (2)
- micropayment (2)
- micropayment channels (2)
- miner (2)
- mining (2)
- mining hardware (2)
- minting (2)
- model checking (2)
- modeling (2)
- modellgetriebene Entwicklung (2)
- nonce (2)
- off-chain transaction (2)
- operating systems (2)
- peer-to-peer network (2)
- pegged sidechains (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- programming (2)
- quorum slices (2)
- real-time systems (2)
- rootstock (2)
- scalability of blockchain (2)
- scarce tokens (2)
- sidechain (2)
- smalltalk (2)
- smart contracts (2)
- timed automata (2)
- tools (2)
- transaction (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtuelle Maschinen (2)
- visualization (2)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Agile (1)
- Agilität (1)
- Aktivitäten (1)
- Algorithmen (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Anfragesprache (1)
- Angriffe (1)
- Anwendungsvirtualisierung (1)
- Apriori (1)
- Architektur (1)
- Artem Erkomaishvili (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Softwareentwicklung (1)
- Association Rule Mining (1)
- Assoziationsregeln (1)
- Asynchrone Schaltung (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Ausführung von Modellen (1)
- Auswirkungen (1)
- Authentifizierung (1)
- Bahnwesen (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Behavior change (1)
- Benutzerinteraktion (1)
- Beschränkungen und Abhängigkeiten (1)
- Biometrie (1)
- Bisimulation (1)
- Bitcoin (1)
- Blockchain (1)
- Blockheizkraftwerke (1)
- Bounded Backward Model Checking (1)
- CEP (1)
- CSC (1)
- CSCW (1)
- Change Management (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Constraints (1)
- Context-oriented Programming (1)
- Contracts (1)
- Controller-Resynthese (1)
- Creative (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- Data Dependency (1)
- Data Modeling (1)
- Data Profiling (1)
- Data Quality (1)
- Data Warehouse (1)
- Database Cost Model (1)
- Datenabhängigkeiten (1)
- Datenanalyse (1)
- Datenbank (1)
- Datenbank-Kostenmodell (1)
- Datenflusskorrektheit (1)
- Datenmodellierung (1)
- Datenqualität (1)
- Datensatz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datensicht (1)
- Datenvertraulichkeit (1)
- Datenvisualisierung (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Dekubitus (1)
- Denkweise (1)
- Differential Privacy (1)
- Digital Engineering (1)
- Discrimination Networks (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Duplicate Detection (1)
- Duplikaterkennung (1)
- Dynamic Type System (1)
- Dynamische Typ Systeme (1)
- EHR (1)
- EPA (1)
- Echtzeit (1)
- Echtzeitsysteme (1)
- Effizienz (1)
- Eingebettete Systeme (1)
- Elektronische Patientenakte (1)
- Energiesparen (1)
- Entwurfsmuster (1)
- Ereignisse (1)
- Erfüllbarkeitsanalyse (1)
- Erkennen von Meta-Daten (1)
- Evolution (1)
- Evolution in MDE (1)
- Extract-Transform-Load (ETL) (1)
- FIDO (1)
- FRP (1)
- Fallstudie (1)
- Feedback Loops (1)
- Fehlerinjektion (1)
- Fehlersuche (1)
- Functional Lenses (1)
- Generalized Discrimination Networks (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- German schools (1)
- Geschäftsanwendungen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmanagement (1)
- Gesetze (1)
- GitHub (1)
- Graph-Constraints (1)
- Graph-basierte Suche (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Gruppierung von Prozessinstanzen (1)
- HENSHIN (1)
- HPI Forschung (1)
- HPI research (1)
- Hasso-Plattner-Institute (1)
- Heuristiken (1)
- Homomorphe Verschlüsselung (1)
- Häkeln (1)
- Ideation (1)
- Ideenfindung (1)
- Identity Management (1)
- Identität (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- In-Memory (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Individuen (1)
- Infinite State (1)
- Informatik (1)
- Informatikdidaktik (1)
- Informatiksystem (1)
- Informatikunterricht (1)
- Information Extraction (1)
- Information Systems (1)
- Informationsextraktion (1)
- Informationssysteme (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Interdisciplinary Teams (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- JCop (1)
- Kausalität (1)
- Kollaborationen (1)
- Kompetenz (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kreativität (1)
- Kunstanalyse (1)
- Künstliche Intelligenz (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Leadership (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lernsoftware (1)
- Link Discovery (1)
- Link-Entdeckung (1)
- Linked Data (1)
- Linked Open Data (1)
- Liveness (1)
- Lösungsraum (1)
- MDE Ansatz (1)
- MDE settings (1)
- Management (1)
- Marktübersicht (1)
- Measurement (1)
- Megamodell (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Messung (1)
- Metadata Discovery (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Middleware (1)
- Mindset (1)
- Mobile Application Development (1)
- Model Execution (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellreparatur (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Morphic (1)
- Multi-Instanzen (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Muster (1)
- Musterabgleich (1)
- Nebenläufigkeit (1)
- Nested Graph Conditions (1)
- Netzneutralität (1)
- Netzwerkprotokolle (1)
- Newspeak (1)
- OAuth (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objektlebenszyklus-Synchronisation (1)
- OpenID Connect (1)
- Optimierungen (1)
- Organisationsveränderung (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Pattern Matching (1)
- Patterns (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Posenabschätzung (1)
- Privatsphäre (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Mining (1)
- Prognosen (1)
- Programmieren (1)
- Programmiererlebnis (1)
- Programmierkonzepte (1)
- Programmierung (1)
- Programming Languages (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessoren (1)
- Quanten-Computing (1)
- Quantitative Analysen (1)
- Realzeitsysteme (1)
- Regressionstests (1)
- Research Projects (1)
- Reverse Engineering (1)
- Ruby (1)
- Runtime Binding (1)
- Runtime-monitoring (1)
- SOA Security (1)
- SOA Sicherheit (1)
- SQL (1)
- STG decomposition (1)
- STG-Dekomposition (1)
- Sammlungsdatentypen (1)
- Savanne (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Scrollytelling (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Self-Adaptive Software (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Provider (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-orientierte Systeme (1)
- Sichere Digitale Identitäten (1)
- Signalflankengraph (SFG oder STG) (1)
- Simulation (1)
- Single-Sign-On (1)
- Skript-Entwicklungsumgebungen (1)
- SoaML (1)
- Software-Testen (1)
- Software/Hardware Co-Design (1)
- Softwarearchitektur (1)
- Softwareproduktlinien (1)
- Softwaretests (1)
- Solution Space (1)
- Sozialen Medien (1)
- Spaltenlayout (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Sprachspezifikation (1)
- Squeak (1)
- Standardisierung (1)
- Standards (1)
- Synchronisation (1)
- System of Systems (1)
- Systemsoftware (1)
- Tableaumethode (1)
- Telemedizin (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Threshold Cryptography (1)
- Tools (1)
- Trajektorien (1)
- Transaktionen (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Travis CI (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Unbegrenzter Zustandsraum (1)
- Unterricht mit digitalen Medien (1)
- Unveränderlichkeit (1)
- VUCA-World (1)
- Verbindungsnetzwerke (1)
- Verhaltensabstraktion (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verzögerungs-Verbreitung (1)
- Virtual Desktop Infrastructure (1)
- Virtual machines (1)
- Visualisierungskonzept-Exploration (1)
- Wartung von Graphdatenbanksichten (1)
- Web applications (1)
- Web-Anwendungen (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wüstenbildung (1)
- Zugriffskontrolle (1)
- access control (1)
- activity instance state propagation (1)
- adaptive Systeme (1)
- adaptive systems (1)
- adoption (1)
- agil (1)
- algorithms (1)
- analog-to-digital conversion (1)
- application virtualization (1)
- apriori (1)
- architecture (1)
- art analysis (1)
- asset management (1)
- ausführbare Semantiken (1)
- authentication (1)
- basic cloud storage services (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- big data services (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- bounded backward model checking (1)
- business process management (1)
- business process model abstraction (1)
- business process modeling (1)
- business processes (1)
- case study (1)
- causality (1)
- change management (1)
- cloud security (1)
- cloud storage (1)
- cogeneration units (1)
- collaboration (1)
- collection types (1)
- compositional analysis (1)
- computational ethnomusicology (1)
- computer vision (1)
- computer-aided design (1)
- computergestützte Musikethnologie (1)
- confidentiality (1)
- conformance checking (1)
- consensus protocols (1)
- consistency restoration (1)
- continuous testing (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- crochet (1)
- cultural heritage (1)
- cyber-physikalische Systeme (1)
- data center management (1)
- data flow correctness (1)
- data in business processes (1)
- data integration (1)
- data modeling (1)
- data security (1)
- data set (1)
- data view (1)
- data visualization (1)
- deadline propagation (1)
- decentral identities (1)
- decubitus (1)
- deep learning (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dependable computing (1)
- dependencies (1)
- desertification (1)
- design thinking (1)
- dezentrale Identitäten (1)
- differential privacy (1)
- diffusion (1)
- digital picture archive (1)
- digital unterstützter Unterricht (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitales Bildarchiv (1)
- direct manipulation (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- dynamic typing (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- e-learning (1)
- efficiency (1)
- eindeutig (1)
- electronic health record (1)
- embedded-systems (1)
- energy savings (1)
- erfahrbare Medien (1)
- events (1)
- evolution in MDE (1)
- executable semantics (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- fault injection (1)
- feedback loops (1)
- fehlende Daten (1)
- font engineering (1)
- font rendering (1)
- forecasts (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- gefaltete neuronale Netze (1)
- generalized discrimination networks (1)
- getypte Attributierte Graphen (1)
- global model management (1)
- globales Modellmanagement (1)
- graph databases (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- heuristics (1)
- homomorphic encryption (1)
- human-centered (1)
- hybrid graph-transformation-systems (1)
- hybride Graph-Transformations-Systeme (1)
- identity (1)
- immutable values (1)
- in-memory database (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- interactive media (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- key discovery (1)
- kompositionale Analyse (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- kulturelles Erbe (1)
- künstliche Intelligenz (1)
- language specification (1)
- law (1)
- leadership (1)
- lebenszentriert (1)
- left recursion (1)
- life-centered (1)
- lively kernel (1)
- liveness (1)
- location-based (1)
- management (1)
- many-core (1)
- market study (1)
- maschinelles Sehen (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- metadata discovery (1)
- metadata quality (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- middleware (1)
- missing data (1)
- model generation (1)
- model repair (1)
- model transformation (1)
- model-driven engineering (1)
- monitoring (1)
- morphic (1)
- multi factor authentication (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-version models (1)
- multi-family residential buildings (1)
- multidisziplinäre Teams (1)
- musical scales (1)
- musikalische Tonleitern (1)
- nested application conditions (1)
- network protocols (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- online course (1)
- online-learning (1)
- optimizations (1)
- organizational change (1)
- orts-basiert (1)
- packrat parsing (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Rechnen (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partielle Anwendungsbedingungen (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- petri net (1)
- polyglot execution environments (1)
- pose estimation (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- process elicitation (1)
- process instance (1)
- process instance grouping (1)
- process mining (1)
- process modeling languages (1)
- processor hardware (1)
- profiling (1)
- programming experience (1)
- prototyping (1)
- public cloud storage services (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantum computing (1)
- railways (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time (1)
- rechnerunterstütztes Konstruieren (1)
- regression testing (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- resource optimization (1)
- reverse engineering (1)
- runtime adaptations (1)
- runtime monitoring (1)
- s/t-pattern sequences (1)
- satisfiabilitiy solving (1)
- savanna (1)
- schema discovery (1)
- school (1)
- scripting environments (1)
- scrollytelling (1)
- selbstbestimmte Identitäten (1)
- self-sovereign identity (1)
- semantics preservation (1)
- sequence properties (1)
- serialization (1)
- service-oriented systems (1)
- signal transition graph (1)
- simulation (1)
- small talk (1)
- smartphone (1)
- software architecture (1)
- software product lines (1)
- software testing (1)
- software tests (1)
- software/hardware co-design (1)
- specification of timed graph transformations (1)
- speed independent (1)
- squeak (1)
- standardization (1)
- standards (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- stochastic Petri nets (1)
- stochastische Petri Netze (1)
- study (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- system of systems (1)
- systems software (1)
- t.BPM (1)
- tableau method (1)
- tangible media (1)
- tele-lab (1)
- tele-teaching (1)
- telemedicine (1)
- temporal logic (1)
- test case prioritization (1)
- test results (1)
- threshold cryptography (1)
- tiefes Lernen (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- trajectories (1)
- transformation level (1)
- transformation sequences (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- unique (1)
- user interaction (1)
- user-generated content (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Anwendungsbedingungen (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- view maintenance (1)
- virtual desktop infrastructure (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- wearables (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
- überprüfbare Nachweise (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (112)
- Hasso-Plattner-Institut für Digital Engineering GmbH (39)
- Extern (6)
- Institut für Informatik und Computational Science (3)
- Institut für Geowissenschaften (1)
- Kommunalwissenschaftliches Institut (1)
- Lehreinheit für Wirtschafts-Arbeit-Technik (1)
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem with many applications in data management and knowledge discovery. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
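The Apriori-style pruning the abstract alludes to can be sketched in a few lines: the column lattice is explored bottom-up, and because every superset of a unique combination is unique but not minimal, only non-unique combinations are expanded. This is an illustrative sketch under that one pruning rule, not the HCA algorithm itself; `is_unique` and `minimal_uccs` are hypothetical helper names.

```python
from itertools import combinations

def is_unique(rows, cols):
    """True if the value combination over `cols` contains no duplicates."""
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in cols)
        if key in seen:
            return False
        seen.add(key)
    return True

def minimal_uccs(rows, columns):
    """Level-wise (Apriori-style) search for minimal unique column
    combinations. Only non-unique combinations are expanded, because any
    superset of a unique combination is unique but not minimal."""
    minimal = []
    level = [(c,) for c in sorted(columns)]
    while level:
        non_unique = []
        for combo in level:
            if is_unique(rows, combo):
                minimal.append(combo)
            else:
                non_unique.append(combo)
        # Apriori join: merge combos sharing a prefix; keep a candidate
        # only if all of its immediate subsets are non-unique.
        prev = set(non_unique)
        level = []
        for i, a in enumerate(non_unique):
            for b in non_unique[i + 1:]:
                if a[:-1] == b[:-1] and a[-1] < b[-1]:
                    cand = a + (b[-1],)
                    if all(s in prev for s in combinations(cand, len(cand) - 1)):
                        level.append(cand)
    return minimal
```

On a table where `id` is a key and `(first, last)` happens to be duplicate-free, the sketch returns exactly those two combinations and never tests any of their supersets.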
Technical report
(2019)
The design and implementation of service-oriented architectures raise numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures, manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. These achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. retreat of the Research School gives each member the opportunity to present the current state of their research and to outline a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is understanding the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations; hard-to-understand labels cause frustration and lengthen the time needed to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to expand attribute labels composed of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, into readable form, here UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that it suggests high-quality decryptions for cryptic attribute labels in a given schema.
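The core of such a decryption can be illustrated with a toy sketch: split a cryptic label into tokens and expand each token via a dictionary of abbreviation-to-word mappings. In the described approach these mappings are mined from mapped attribute labels of existing workflows; the hand-written `EXPANSIONS` table and the function name below are stand-ins for illustration only.

```python
# Hand-written stand-in for the mined abbreviation dictionary; a real
# system would derive these pairs from existing ETL workflow mappings.
EXPANSIONS = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "AMT": "AMOUNT",
}

def decrypt_label(label, expansions=EXPANSIONS):
    """Expand each underscore-separated token; unknown tokens pass through."""
    tokens = label.split("_")
    return "_".join(expansions.get(t, t) for t in tokens)
```

For example, the abstract's label UNP_PEN_INT expands to UNPAID_PENALTY_INTEREST, while tokens without a known expansion are left unchanged.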
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet programming languages do not sufficiently support its representation. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In principle, COP can manage any context-specific behavior, but its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations using JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications; however, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
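To make the general COP idea concrete (this is an illustrative Python sketch, not JCop's actual Java-based syntax), behavioral variations can be grouped into "layers" that are activated only for the dynamic extent of a computation:

```python
# Minimal context-oriented programming sketch: layers hold behavioral
# variations; active layers refine the base behavior in activation order.
active_layers = []

class Greeter:
    def greet(self):
        result = "Hello"
        # apply the variation of every currently active layer, in order
        for layer in active_layers:
            if "greet" in layer:
                result = layer["greet"](result)
        return result

# a layer refining `greet` for a "formal" context
formal_layer = {"greet": lambda s: s + ", dear colleague"}

def with_layer(layer, thunk):
    """Activate `layer` only while `thunk` runs (dynamic scoping)."""
    active_layers.append(layer)
    try:
        return thunk()
    finally:
        active_layers.remove(layer)
```

Calling `greet` normally yields the base behavior, while wrapping the same call in `with_layer(formal_layer, ...)` yields the context-dependent variation; afterwards the base behavior is restored automatically.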
For the study »Qualitative Untersuchung zur Akzeptanz des neuen Personalausweises und Erarbeitung von Vorschlägen zur Verbesserung der Usability der Software AusweisApp« (a qualitative study on the acceptance of the new German ID card and on proposals for improving the usability of the AusweisApp software), an innovation team applied the Design Thinking method to the question "How can we make the AusweisApp intuitive and understandable for users?" First, the acceptance of the new ID card was examined: citizens were surveyed about their knowledge of and expectations regarding the new ID card, their general use of it, their use of the online identification function, and the usability of the AusweisApp. In addition, users were observed while working with the current AusweisApp and interviewed afterwards, which provided deep insight into their needs. The results of this qualitative study were used to develop improvement proposals for the AusweisApp that match citizens' needs. The proposed optimizations were implemented as prototypes and tested with potential users. The tests showed that the new features make it considerably easier for citizens to start using the online identification function. Overall, the degree of acceptance of the new ID card diverges strongly: respondents' attitudes ranged from scepticism to endorsement, and the new ID card is a topic that polarizes citizens. The user tests uncovered numerous opportunities to improve the existing service design, both around the new ID card itself and in connection with the software used. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate on and validate its proposals. The resulting proposals concern the AusweisApp.
The new features essentially comprise: direct access to the service providers; extensive help (tooltips, FAQ, wizard, video); a history function; and a sample service that lets users experience the online identification function. Above all, the new version of the AusweisApp is meant to offer users fields of application for their new ID card and thus added value. Developing further features for the AusweisApp can help the new ID card unfold its full potential.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
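The report's central concepts above, one dot per individual and a bidirectional mapping between dots and records, can be sketched in a few lines. This is an illustrative toy (the record fields, layout, and function names `pick` and `select_rect` are invented here), not the Lively4 prototypes themselves.

```python
# Illustrative sketch: each survey respondent is rendered as one dot, and
# a bidirectional mapping lets a click or drag selection navigate back to
# the underlying records. Data and layout are made up for this example.

records = [
    {"id": 1, "region": "north", "answer": "yes"},
    {"id": 2, "region": "south", "answer": "no"},
    {"id": 3, "region": "north", "answer": "no"},
]

# Forward mapping: one dot per record (positions from some layout pass).
dots = [{"x": i * 10, "y": 0, "record": rec} for i, rec in enumerate(records)]

def pick(x, y, radius=3):
    """Reverse mapping: return the record behind the dot at (x, y)."""
    for dot in dots:
        if abs(dot["x"] - x) <= radius and abs(dot["y"] - y) <= radius:
            return dot["record"]
    return None

def select_rect(x0, y0, x1, y1):
    """Drag selection: all records whose dots fall inside the rectangle."""
    return [d["record"] for d in dots
            if x0 <= d["x"] <= x1 and y0 <= d["y"] <= y1]

print(pick(10, 0))                     # record behind the second dot
print(len(select_rect(0, -1, 25, 1)))  # number of dots in the selection
```

Keeping the reverse mapping explicit, rather than rendering dots as fire-and-forget pixels, is what makes direct interactions like clicking, grouping, and filtering on individuals possible.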
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which, importantly, must allow developers to live with temporary inconsistencies. In model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, also come into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision", in which students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to this data is the lack of annotated training data: the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for training deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results by using further publicly available datasets, such as the WikiArt dataset, for training the machine learning models.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope, defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions, and present a new use case that shows their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall, and propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
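The precision- and recall-inspired quality measures can be sketched as follows. The concrete measure definitions, the toy relation, and the function names here are illustrative assumptions modeled loosely on the abstract, not the paper's exact formulas: "precision" scores how many tuples selected by a condition actually satisfy the inclusion, "recall" how many of the included tuples the condition covers.

```python
# Hedged sketch of quality measures for a conditional inclusion dependency
# (CIND). Definitions and data are illustrative, inspired by precision
# and recall as mentioned in the abstract.

ref_keys = {"p1", "p2", "p3"}          # values of the referenced attribute

# Tuples of the dependent relation: (condition attribute, FK-like attribute)
tuples = [
    ("book", "p1"), ("book", "p2"), ("cd", "x9"),
    ("book", "x7"), ("cd", "p3"),
]

def precision(condition):
    """Share of tuples matching the condition whose value is included."""
    matching = [v for c, v in tuples if c == condition]
    if not matching:
        return 0.0
    return sum(v in ref_keys for v in matching) / len(matching)

def recall(condition):
    """Share of all included tuples that the condition covers."""
    included = [(c, v) for c, v in tuples if v in ref_keys]
    if not included:
        return 0.0
    return sum(c == condition for c, v in included) / len(included)

print(precision("book"))  # 2 of the 3 'book' tuples are included
print(recall("book"))     # 2 of the 3 included tuples are 'book' tuples
```

An algorithm in the spirit of the abstract would then search over condition attributes and values for conditions whose scores exceed given thresholds.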
Data obtained from foreign data sources often comes with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. Discovering such relationships is difficult because, in principle, for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. With Spider, we present an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics for reducing the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which hold for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain - our driving motivation.
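For intuition, unary IND discovery can be sketched with plain set containment. Note that this is only an illustration of what Spider computes, not how it computes it: the actual algorithm merges DBMS-sorted value streams to prune candidates early, whereas this toy (with invented table and attribute names) compares distinct value sets directly. A partial-IND check in the spirit of the abstract is included as well.

```python
# Illustrative sketch of unary inclusion dependency (IND) discovery.
# Spider itself merges sorted value streams; this simplified version
# compares distinct value sets, which yields the same unary INDs.

tables = {
    "orders.customer_id":  {1, 2, 3},
    "imports.customer_id": {1, 2, 99},   # dirty data: one stray value
    "customers.id":        {1, 2, 3, 4},
    "customers.city":      {"Berlin", "Potsdam"},
}

def find_unary_inds(attrs):
    """Return all pairs (A, B), A != B, with values(A) a subset of values(B)."""
    return [(a, b)
            for a in attrs for b in attrs
            if a != b and attrs[a] <= attrs[b]]

def is_partial_ind(a, b, attrs, max_violations=1):
    """Partial IND: A is included in B except for at most max_violations values."""
    return len(attrs[a] - attrs[b]) <= max_violations

print(find_unary_inds(tables))
# orders.customer_id is contained in customers.id, making it a
# foreign-key candidate; the dirty imports column only satisfies
# a partial IND with one violating value (99).
```

Composite INDs generalize this to tuples of attributes, which blows up the candidate space and is why pruning heuristics matter in practice.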
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and the coupling introduced by information technology. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach, while differential equations define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior between such discrete changes. In addition, we demonstrate that automated analysis techniques known for inductive invariants of timed graph transformation systems can be extended to also cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
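The interleaving of continuous dynamics and discrete rule steps can be illustrated with a tiny hybrid simulation loop. The dynamics, the guard, and the mode names below are made-up toy values (not from the paper), and explicit Euler integration stands in for whatever solver a real tool would use.

```python
# Illustrative sketch of hybrid behavior: a differential equation is
# integrated (explicit Euler) between discrete "rule" steps that change
# the mode, as in hybrid graph transformation systems. Toy values only.

dt = 0.01
x, mode = 0.0, "approach"

def dynamics(x, mode):
    # dx/dt depends on the current mode (e.g., a vehicle closing a gap).
    return 1.0 if mode == "approach" else 0.0

for _ in range(200):
    x += dt * dynamics(x, mode)     # continuous evolution
    if mode == "approach" and x >= 1.0:
        mode = "coupled"            # discrete step: topology/mode change

print(mode, round(x, 2))
```

In the formalism of the paper, the discrete step would be a graph transformation rule rewriting the system topology, and the continuous part would be governed by the differential equations attached to the current graph structure.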
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In dynamic collaborations, the roles may join and leave the collaboration at runtime, which can result in complex structural dynamics and makes it very hard to ensure correct and safe operation. In this paper we present our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results on the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme that enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad hoc networking to coordinate and optimize their joint behavior.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. The required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts, and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment, and presents an approach to generating code based on a user's live edits.
SandBlocks
(2020)
Today, visual programming languages are used hardly at all in favor of textual programming languages, even though visual languages offer several advantages: these range from avoiding syntax errors and using concrete domain-specific notation to better readability and maintainability of programs. Nevertheless, professional software developers rely almost exclusively on textual programming languages.
To let developers exploit these advantages of visual programming languages without having to give up the textual languages they know, the idea arose to make textual and visual program elements usable together within one programming language. It is then up to developers when and how they use visual elements in their program code.
This work presents the SandBlocks framework, which enables this combined use of visual and textual program elements. Besides an evaluation of visual programming languages, it describes the technical integration of visual program elements into the Squeak/Smalltalk system, gives insights into their implementation and use in live programming systems, and discusses their use in different domains.
The complexity of today's business processes and the amount of data to be managed place high demands on the development and maintenance of business applications. Their size stems, among other things, from the large number of model entities and the associated user interfaces for editing and analyzing the data. This report presents novel concepts, and their implementation, for simplifying the development of such large business applications. First, we propose unifying the database and the runtime environment of a dynamic object-oriented programming language. To this end, we organize the memory layout of objects in the manner of a columnar in-memory database and, building on that, integrate transactions as well as a declarative query language seamlessly into the same runtime environment. Transactional and analytical queries can thus be implemented in the same object-oriented high-level language and still be executed close to the data. Second, we describe programming language constructs that allow user interfaces and user interactions to be described generically, independently of concrete model entities. To make use of this abstract description, the domain models are enriched with previously implicit information. New models only need to be extended with a little information to reuse already existing user interfaces and interactions; adaptations that should apply to a single model can be defined incrementally, independently of the default behavior. Third, with a further programming language construct we enable the coherent description of application workflows, such as ordering processes. Our programming concept encapsulates user interactions in synchronous function calls and thus makes processes representable as a coherent sequence of computations and interactions.
Fourth, we demonstrate a concept that lets end users formulate complex analytical queries more intuitively. It is based on the idea that end users view a query as the configuration of a chart: a user describes a query by describing what the chart should display. Charts described according to this concept contain sufficient information to generate a query from them, and with respect to execution time the generated queries are equivalent to queries formulated in conventional query languages. We implement this query model in a prototype that builds on the concepts introduced above.
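The query-as-chart-configuration idea can be sketched as follows. The chart specification fields, the function name `chart_to_sql`, and the generated SQL shape are illustrative assumptions for this sketch, not the report's actual prototype.

```python
# Hedged sketch of "query as chart configuration": the user describes what
# the chart should show, and an aggregate query is generated from that
# description. Spec fields and SQL shape are assumptions for illustration.

chart = {
    "table":  "orders",
    "x":      "region",           # one bar per region
    "y":      ("amount", "SUM"),  # bar height: summed order amount
    "filter": "year = 2023",
}

def chart_to_sql(spec):
    """Generate an aggregate query from a chart configuration."""
    value, agg = spec["y"]
    sql = f"SELECT {spec['x']}, {agg}({value}) FROM {spec['table']}"
    if spec.get("filter"):
        sql += f" WHERE {spec['filter']}"
    sql += f" GROUP BY {spec['x']}"
    return sql

print(chart_to_sql(chart))
# SELECT region, SUM(amount) FROM orders WHERE year = 2023 GROUP BY region
```

The point of the concept is that the chart description is complete enough that no separate query language needs to be learned, while the generated query can still be optimized and executed like a hand-written one.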
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, and model queries. For the relational database case, relational algebra and generalized discrimination networks have been studied to find appropriate decompositions of complex queries into subqueries and orderings of these subqueries for query evaluation or for incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept, inspired by the relational case, for decomposing arbitrarily complex queries into simpler subqueries and ordering these subqueries in the form of generalized discrimination networks for graph queries. The approach employs graph transformation rules for the nodes of the network, so we can build on the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
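The decomposition-with-caching idea behind discrimination networks can be illustrated with a two-level toy network: a leaf node caches single-edge matches and an inner node joins them into two-edge paths, so that inserting one edge only triggers local re-evaluation. This sketch (edges as string pairs, the `insert_edge` helper) is an invented simplification, not the paper's formal construction with graph transformation rules.

```python
# Illustrative sketch of a (generalized) discrimination network: a graph
# query is decomposed into subqueries whose intermediate matches are
# cached, so a graph change only triggers local re-evaluation.

edges = {("a", "b"), ("b", "c"), ("b", "d")}

# Leaf node of the network: cache all single-edge matches.
leaf = set(edges)

# Inner node: join leaf matches into two-edge paths x -> y -> z.
paths = {(x, y, z) for (x, y) in leaf for (y2, z) in leaf if y == y2}

def insert_edge(u, v):
    """Incremental maintenance: propagate one new edge through the
    network instead of recomputing all paths from scratch."""
    leaf.add((u, v))
    # New paths must involve the inserted edge, so only join against it.
    paths.update((x, u, v) for (x, u2) in leaf if u2 == u)
    paths.update((u, v, z) for (v2, z) in leaf if v2 == v)

print(sorted(paths))            # paths before the update
insert_edge("c", "e")
print(("b", "c", "e") in paths)
```

Ordering and sharing such join nodes across queries is exactly the kind of operationalization question the discrimination network formalism addresses.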
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as for relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. To ensure that database views yield query results consistent with the data from which they are derived, however, the views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach to efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from the graph data stored in graph databases. The discrimination network allows generic maintenance rules, realized as graph transformations, to be derived automatically for maintaining graph views whenever the graph data from which the views are derived changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.