Refine
Year of publication
Document Type
- Article (188)
- Monograph/Edited Volume (122)
- Doctoral Thesis (46)
- Other (31)
- Conference Proceeding (11)
- Preprint (4)
- Postprint (2)
- Review (2)
- Part of a Book (1)
Language
- English (365)
- German (40)
- Multiple languages (2)
Keywords
- Cloud Computing (9)
- cloud computing (8)
- Hasso-Plattner-Institut (7)
- Datenintegration (6)
- Forschungskolleg (6)
- Hasso Plattner Institute (6)
- Klausurtagung (6)
- Modellierung (6)
- Service-oriented Systems Engineering (6)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- In-Memory Technologie (5)
- Multicore Architekturen (5)
- data profiling (5)
- machine learning (5)
- BPMN (4)
- COVID-19 (4)
- Geschäftsprozessmanagement (4)
- Model-Driven Engineering (4)
- Privacy (4)
- Research School (4)
- Verifikation (4)
- Virtualisierung (4)
- Visualization (4)
- business process management (4)
- data integration (4)
- design thinking (4)
- graph transformation (4)
- middleware (4)
- performance (4)
- process mining (4)
- verification (4)
- virtual machines (4)
- 3D point clouds (3)
- Betriebssysteme (3)
- Computer Networks (3)
- Computernetzwerke (3)
- Data Integration (3)
- Datenschutz (3)
- Design Thinking (3)
- E-Learning (3)
- Graphtransformationen (3)
- IPv4 (3)
- IPv6 (3)
- Identitätsmanagement (3)
- Infrastructure (3)
- Infrastruktur (3)
- Internet Protocol (3)
- Lively Kernel (3)
- MOOCs (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Modeling (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Ph.D. Retreat (3)
- Ph.D. retreat (3)
- Process Mining (3)
- Prozessmodellierung (3)
- SQL (3)
- Security (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Virtuelle Maschinen (3)
- Visualisierung (3)
- cartographic design (3)
- cloud (3)
- conference (3)
- digital health (3)
- duplicate detection (3)
- heuristics (3)
- identity management (3)
- incremental graph pattern matching (3)
- model transformation (3)
- multicore architectures (3)
- openHPI (3)
- operating systems (3)
- prediction (3)
- privacy (3)
- programming (3)
- research projects (3)
- run time analysis (3)
- security (3)
- self-healing (3)
- service-oriented systems engineering (3)
- similarity measures (3)
- tele-TASK (3)
- theory (3)
- virtualization (3)
- visualization (3)
- 3D city models (2)
- 3D geovisualization (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmen (2)
- Algorithms (2)
- Aspektorientierte Softwareentwicklung (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- BIM (2)
- BPM (2)
- Bayesian networks (2)
- Big Data (2)
- Bitcoin (2)
- Blockchain (2)
- CSC (2)
- CSCW (2)
- Case management (2)
- Classification (2)
- Cloud (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Competition (2)
- Compliance checking (2)
- Cyber-Physical Systems (2)
- Data Profiling (2)
- Data integration (2)
- Data profiling (2)
- Design (2)
- Discrimination Networks (2)
- EHR (2)
- Event processing (2)
- Evolution (2)
- Formale Verifikation (2)
- Functional dependencies (2)
- Graphtransformationssysteme (2)
- HCI (2)
- Hauptspeicherdatenbank (2)
- In-Memory technology (2)
- Informationsextraktion (2)
- Internet (2)
- Internet Service Provider (2)
- Internet of Things (2)
- Java (2)
- JavaScript (2)
- Kollaborationen (2)
- Konferenz (2)
- Laufzeitmodelle (2)
- Link-Entdeckung (2)
- Machine (2)
- Machine learning (2)
- Megamodell (2)
- Metadata Discovery (2)
- Middleware (2)
- Model Synchronization (2)
- Modell (2)
- Nested graph conditions (2)
- Online Course (2)
- Online-Learning (2)
- Performance (2)
- Point-based rendering (2)
- Process Modeling (2)
- Process modeling (2)
- RDF (2)
- Ressourcenoptimierung (2)
- Runtime analysis (2)
- SPARQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Service-Orientierte Architekturen (2)
- Sicherheit (2)
- Smalltalk (2)
- Structuring (2)
- Studie (2)
- SysML (2)
- Systemsoftware (2)
- Virtualization (2)
- abstraction (2)
- adaptive Systeme (2)
- adaptive systems (2)
- analysis (2)
- authentication (2)
- batch processing (2)
- big data services (2)
- clinical (2)
- cloud security (2)
- collaboration (2)
- complexity (2)
- confidentiality (2)
- contracts (2)
- data (2)
- data matching (2)
- data quality (2)
- data wrangling (2)
- databases (2)
- debugging (2)
- dependency discovery (2)
- design (2)
- dialysis (2)
- digital identity (2)
- discrimination networks (2)
- distributed systems (2)
- dynamic (2)
- dynamic programming (2)
- e-learning (2)
- electronic health record (2)
- entity resolution (2)
- evaluation (2)
- fabrication (2)
- feedback loops (2)
- functional dependency (2)
- genetic programming (2)
- graph constraints (2)
- image-based representation (2)
- in-memory technology (2)
- law (2)
- missing data (2)
- model (2)
- model-driven engineering (2)
- modeling (2)
- modellgetriebene Entwicklung (2)
- multi-core (2)
- nested application conditions (2)
- nested graph conditions (2)
- non-photorealistic rendering (2)
- real-time rendering (2)
- real-time systems (2)
- record linkage (2)
- requirements engineering (2)
- research school (2)
- risk aversion (2)
- scalability (2)
- schema discovery (2)
- self-sovereign identity (2)
- service-oriented architecture (2)
- service-oriented systems (2)
- simulation (2)
- software engineering (2)
- software reference architecture (2)
- spatial data infrastructure (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- systems of systems (2)
- systems software (2)
- testing (2)
- tools (2)
- virtual 3D city models (2)
- virtuelle 3D-Stadtmodelle (2)
- virtuelle Maschinen (2)
- "Big Data"-Dienste (1)
- 2.5D Treemaps (1)
- 3D (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Point Clouds (1)
- 3D Point clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D geovirtual environments (1)
- 3D information visualization (1)
- 3D printing (1)
- 3D semiotic model (1)
- 3D semiotics (1)
- 3D visualization (1)
- ACINQ (1)
- AKI (1)
- APX-hardness (1)
- ASIC (1)
- Aaron Wildavsky (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accounting (1)
- Actor (1)
- Actor model (1)
- Address matching (1)
- Analyse (1)
- Anfragepaare (1)
- Anfragesprache (1)
- Angriffe (1)
- Anisotroper Kuwahara Filter (1)
- Anomalien (1)
- Anomaly detection (1)
- Anti-patterns (1)
- Anwendungsvirtualisierung (1)
- Application (1)
- Approximation algorithms (1)
- Apriori (1)
- Architektur (1)
- Artificial neural networks (1)
- Aspect-oriented Programming (1)
- Association Rule Mining (1)
- Association rule mining (1)
- Asynchronous circuit (1)
- Attention span (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributed graph transformation (1)
- Attributed graphs (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Australian securities exchange (1)
- Authentication (1)
- Authentifizierung (1)
- B2B process integration (1)
- BCCC (1)
- BPMN-Q (1)
- BTC (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Behavior (1)
- Behavior equivalence (1)
- Behavioral querying (1)
- Behaviour Analysis (1)
- Behavioural Abstraction (1)
- Behavioural analysis (1)
- Benchmarking (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Biclustering (1)
- Bidirectional order dependencies (1)
- Bildverarbeitung (1)
- Billing (1)
- Biomedicine (1)
- Biometrie (1)
- Bisimulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockchains (1)
- Blockheizkraftwerke (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bluetooth (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Body sensor networks (1)
- Building Information Models (1)
- Business Process Modeling Notation (1)
- Business Process Models (1)
- Business process diagram (1)
- Business process management (1)
- Business process model (1)
- Business process modeling (1)
- Business processes (1)
- Byzantine Agreement (1)
- CCS Concepts (1)
- CEP (1)
- Cake cutting (1)
- Carrera Digital D132 (1)
- Causal Behavioural Profiles (1)
- Causality (1)
- Change Data Capture (1)
- Change Management (1)
- Change propagation (1)
- Charging (1)
- Choreographies (1)
- Chronic heart failure (1)
- CityGML (1)
- Climate change (1)
- Close-Up (1)
- Cloud Datenzentren (1)
- Cloud Native Applications (1)
- Cloud Storage Broker (1)
- Cloud access control and resource management (1)
- Cloud computing (1)
- Cloud-Security (1)
- CoExist (1)
- Coccinelle (1)
- Colored Coins (1)
- Communication systems (1)
- Commute pattern (1)
- Commute process (1)
- Complexity theory (1)
- Compliance (1)
- Compliance measurement (1)
- Composition (1)
- Computational modeling (1)
- Computational photography (1)
- Computer (1)
- Computer crime (1)
- Computergrafik (1)
- Concurrency (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Consistency perception (1)
- Constraints (1)
- Context-oriented Programming (1)
- Context-oriented programming (1)
- ContextJS (1)
- Contracts (1)
- Controller-Resynthese (1)
- Convolution (1)
- Coordinated and Multiple Views (1)
- CorMID (1)
- Correlation (1)
- Critical pairs (1)
- Crowd-Resourcing (1)
- Cryptography (1)
- Cultural theory (1)
- Currencies (1)
- Cyber-Physical-Systeme (1)
- Cyber-physical-systems (1)
- DAO (1)
- DPoS (1)
- Data (1)
- Data Dependency (1)
- Data Mining (1)
- Data Modeling (1)
- Data Quality (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data exchange (1)
- Data mining (1)
- Data modeling (1)
- Data models (1)
- Data-centric (1)
- Database Cost Model (1)
- Databases (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenanalyse (1)
- Datenbank (1)
- Datenbank-Kostenmodell (1)
- Datenbanken (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodellierung (1)
- Datenobjekte (1)
- Datenqualität (1)
- Datenreinigung (1)
- Datensicht (1)
- Datenvertraulichkeit (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Deep learning (1)
- Delegated Proof-of-Stake (1)
- Delphi study (1)
- Delta preservation (1)
- Dependency discovery (1)
- Design Management (1)
- Design Thinking Diskurse (1)
- Detail plus Overview (1)
- Deurema Modellierungssprache (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digitale Whiteboards (1)
- Disambiguierung (1)
- Distributed (1)
- Distributed 3D geovisualization (1)
- Distributed Proof-of-Research (1)
- Distributed computing (1)
- Distributed debugging (1)
- Distributed programming (1)
- DoS (1)
- Duplicate Detection (1)
- Duplikaterkennung (1)
- Duration prediction (1)
- Dynamic Data (1)
- Dynamic Data Structures (1)
- Dynamic Pricing (1)
- Dynamic Type System (1)
- Dynamic adaptation (1)
- Dynamic analysis (1)
- Dynamic pricing (1)
- Dynamic pricing and advertising (1)
- Dynamische Typ Systeme (1)
- E-Wallet (1)
- E-commerce (1)
- ECDSA (1)
- EPA (1)
- Echtzeit (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Effizienz (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Electrocardiography (1)
- Electronic prescription (1)
- Elektronische Patientenakte (1)
- Energieeffizienz (1)
- Energiesparen (1)
- Energy-aware (1)
- Enterprise Search (1)
- Entity resolution (1)
- Entwicklungswerkzeuge (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfüllbarkeitsanalyse (1)
- Eris (1)
- Erkennen von Meta-Daten (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- Event normalization (1)
- Events (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Evolutionary computation (1)
- Exception handling (1)
- Exclusiveness (1)
- Experimentation (1)
- Extract-Transform-Load (ETL) (1)
- Eye-tracking (1)
- FIDO (1)
- FMC-QE (1)
- FRP (1)
- Facial mimicry (1)
- Fallstudie (1)
- Feature extraction (1)
- Feature selection (1)
- Federated Byzantine Agreement (1)
- Federated learning (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Feedback heuristics (1)
- Fehlende Daten (1)
- Fehlerbeseitigung (1)
- Fehlerinjektion (1)
- Fehlersuche (1)
- Fehlertoleranz (1)
- Finite horizon (1)
- First hitting time (1)
- Fitness level method (1)
- Fitness-distance correlation (1)
- Flexible Resource Manager (1)
- Flexible processes (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- FollowMyVote (1)
- Foreign Keys (1)
- Foreign Keys Discovery (1)
- Fork (1)
- Formal Methods (1)
- Functional Lenses (1)
- GPU (1)
- GPU-based Real-time Rendering (1)
- Gamification (1)
- Gator Netzwerk (1)
- Gator networks (1)
- Gebäudemodelle (1)
- Geländemodelle (1)
- Gender Inequality (1)
- Gene expression (1)
- General Earth and Planetary Sciences (1)
- Generalisierung (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geometry Draping (1)
- Geovisualization (1)
- Geschäftsanwendungen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Grammatikalische Inferenz (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph queries (1)
- Graph repair (1)
- Graph rewriting (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-basierte Suche (1)
- Graph-basiertes Ranking (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphtransformation (1)
- Gridcoin (1)
- Gruppierung von Prozessinstanzen (1)
- HENSHIN (1)
- HITS (1)
- HMM (1)
- HPI Forschung (1)
- HPI research (1)
- Haptics (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasso-Plattner-Institute (1)
- Hauptspeicher Technologie (1)
- Herodotos (1)
- HiGHmed (1)
- Hidden Markov models (1)
- History of pattern occurrences (1)
- Homomorphe Verschlüsselung (1)
- Hospitalisation (1)
- Human behaviour (1)
- IDS (1)
- IDS management (1)
- IFC (1)
- IMD (1)
- IMU (1)
- IOPS (1)
- IT-Infrastruktur (1)
- IT-Security (1)
- IT-Sicherheit (1)
- IT-infrastructure (1)
- Identity Management (1)
- Identity management systems (1)
- Identität (1)
- Image (1)
- Image filtering (1)
- Image-based rendering (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- In-memory (1)
- Inclusion Dependency (1)
- Inclusion Dependency Discovery (1)
- Inclusion dependencies (1)
- Incremental Discovery (1)
- Incrementally Inclusion Dependencies Discovery (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Individuen (1)
- Indoor Models (1)
- Industries (1)
- Industry 4.0 (1)
- Industry Foundation Classes (1)
- Infinite State (1)
- Information Extraction (1)
- Information Systems (1)
- Information Visualization (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inklusionsabhängigkeiten Entdeckung (1)
- Inkrementelle Graphmustersuche (1)
- Innovation (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Integrity Verification (1)
- Interaction (1)
- Interaction modeling (1)
- Interactive Rendering (1)
- Interactive Visualization (1)
- Interaktionsmodel (1)
- Interaktives Rendering (1)
- Internet applications (1)
- Internet der Dinge (1)
- Internetanwendungen (1)
- Interviews (1)
- Introspection (1)
- Intrusion detection (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Inventory holding costs (1)
- Inventory systems (1)
- IoT (1)
- JCop (1)
- JIT compilers (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Kartografisches Design (1)
- Kette (1)
- Knowledge bases (1)
- Knowledge-intensive processes (1)
- Komplexität (1)
- Komplexitätsbewältigung (1)
- Komposition (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Kontext (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LOD (1)
- LSTM (1)
- Label analysis (1)
- Lakes (1)
- Landmarken (1)
- Languages (1)
- Languages Model-driven engineering (1)
- Laser Cutten (1)
- Laser cutting (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learning (1)
- Learning behavior (1)
- Least privilege principle (1)
- Lecture video recording (1)
- Leistungsfähigkeit (1)
- Leistungsvorhersage (1)
- Level of abstraction (1)
- Licenses (1)
- Lightning Network (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Live forensics (1)
- Live-Programmierung (1)
- Location-based services (1)
- Lock-Time-Parameter (1)
- Log conformance (1)
- Logiksynthese (1)
- M-adhesive categories (1)
- M-adhesive transformation systems (1)
- MDE Ansatz (1)
- MDE settings (1)
- MOOC (1)
- Management (1)
- Markov decision process (1)
- Markov decision process; (1)
- Markov model (1)
- Marktübersicht (1)
- Mary Douglas (1)
- Massive Open Online Courses (1)
- Matroids (1)
- Measurement (1)
- Megamodel (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Memory management (1)
- Metadaten Entdeckung (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamaterials (1)
- Micropayment-Kanäle (1)
- Microsoft Azure (1)
- Mind2 (1)
- Mixed workload (1)
- Mobile Application Development (1)
- Mobile device (1)
- Mobile sensing (1)
- Mobilgeräte (1)
- Model Consistency (1)
- Model Execution (1)
- Model Management (1)
- Model equivalence (1)
- Model generation (1)
- Model refinement (1)
- Model repair (1)
- Model synchronisation (1)
- Model transformation (1)
- Model verification (1)
- Model-driven (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Modular decomposition (1)
- Monitoring language (1)
- Morphic (1)
- Multi-Instanzen (1)
- Multi-objective optimization (1)
- Multi-perspective Views (1)
- Multicore architectures (1)
- Multimedia retrieval (1)
- Multiscale modeling (1)
- Muster (1)
- Musterabgleich (1)
- Mustererkennung (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NASDAQ (1)
- NP-completeness (1)
- NameID (1)
- Namecoin (1)
- Natural language (1)
- Natural language processing (1)
- Nebenläufigkeit (1)
- Nested Graph Conditions (1)
- Network graph (1)
- Network monitoring (1)
- Network topology (1)
- Netzneutralität (1)
- Neural Networks (1)
- Newspeak (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- Nigeria (1)
- Nutzungsinteresse (1)
- O (1)
- OAuth (1)
- OLTP (1)
- Object Constraint Programming (1)
- Object Versioning (1)
- Object-Oriented Programming (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objektlebenszyklus-Synchronisation (1)
- Off-Chain-Transaktionen (1)
- Onename (1)
- Open implementations (1)
- OpenBazaar (1)
- OpenID Connect (1)
- Operational reporting (1)
- Optimal Control (1)
- Optimal stochastic and deterministic (1)
- Optimierungen (1)
- Optionality (1)
- Oracles (1)
- Order Relations (1)
- Order dependencies (1)
- Organisationsveränderung (1)
- Orphan Block (1)
- Out-of-core (1)
- Outlier detection (1)
- Overview plus Detail (1)
- P2P (1)
- PPMI (parkinson's progression markers initiative) (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel programming (1)
- Parallele Datenverarbeitung (1)
- Parallelization (1)
- Pattern Matching (1)
- Patterns (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Performance Prediction (1)
- Performance analysis (1)
- Personal fabrication (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petri net unfolding (1)
- Petrinetz (1)
- Plugs (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Prediction (1)
- Price Cycles (1)
- Price collusion (1)
- Privilege separation concept (1)
- Probabilistische Modelle (1)
- Process (1)
- Process Enactment (1)
- Process Modelling (1)
- Process Monitoring (1)
- Process choreographies (1)
- Process compliance (1)
- Process mining (1)
- Process model (1)
- Process model alignment (1)
- Process model consistency (1)
- Process model repositories (1)
- Process model search (1)
- Processing strategies (1)
- Prognosen (1)
- Programmierkonzepte (1)
- Programmierung (1)
- Programming (1)
- Programming Environments (1)
- Programming Languages (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Property paths (1)
- Protocols (1)
- Prozess (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Präsentation (1)
- Python (1)
- QRS detection (1)
- Quality of service (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Question answering (1)
- Queuing Theory (1)
- R Shiny (1)
- R package (1)
- RT_PREEMT patch (1)
- RT_PREEMT-Patch (1)
- Racket (1)
- Random graphs (1)
- Reaction Time (1)
- Real-time Rendering (1)
- Realzeitsysteme (1)
- Record and refinement (1)
- Record and replay (1)
- Record linkage (1)
- Reinforcement learning (1)
- Relational data (1)
- Remote patient management (1)
- Research Projects (1)
- Resource description framework (1)
- Resource management (1)
- Response Strategies (1)
- Ressourcenmanagement (1)
- Rete Netzwerk (1)
- Rete networks (1)
- Ripple (1)
- Risk control (1)
- Robust optimization (1)
- Role-based access control (1)
- Root cause analysis (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime WCET Analysis (1)
- S-indd++ (1)
- SAP HANA (1)
- SCED (1)
- SCP (1)
- SHA (1)
- SOA Security (1)
- SOA Security Pattern (1)
- SOA Sicherheit (1)
- SPV (1)
- Safety Critical Systems (1)
- Sammlungsdatentypen (1)
- Satisfiability (1)
- Scalability (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schule (1)
- Schwierigkeitsgrad (1)
- Scientific Publication Indicators (1)
- Scope (1)
- Search Algorithms (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security Modelling (1)
- Security-as-a-Service (1)
- Self-Adaptive Software (1)
- Self-aware computing systems (1)
- Self-configuration (1)
- Semantics (1)
- Semantische Analyse (1)
- Sequential anomaly (1)
- Sequenzen von s/t-Pattern (1)
- Service Provider (1)
- Service composition (1)
- Service detection (1)
- Service orchestration (1)
- Service-Oriented (1)
- Service-Oriented Architecture (1)
- Service-oriented Architectures (1)
- Service-oriented computing (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systme (1)
- Sexism (1)
- Sichere Digitale Identitäten (1)
- Sicherheitsmodellierung (1)
- Signal-to-noise ratio (1)
- Signalflankengraph (SFG or STG) (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Single-Sign-On (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skript-Entwicklungsumgebungen (1)
- Slock.it (1)
- SoaML (1)
- Social environment (1)
- Soft Fork (1)
- Software (1)
- Software Engineering (1)
- Software-Testen (1)
- Softwareanalyse (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Sozialen Medien (1)
- Spaltenlayout (1)
- Speicheroptimierungen (1)
- Sprachspezifikation (1)
- Squeak (1)
- Standards (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stilisierung (1)
- Stochastic Petri nets (1)
- Storj (1)
- Structural Decomposition (1)
- Structured modeling (1)
- Strukturierung (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- System architecture (1)
- System of Systems (1)
- Systeme von Systemen (1)
- Systems of Systems (1)
- TRIPOD (1)
- Tableau method (1)
- Tableaumethode (1)
- Task analysis (1)
- Telemonitoring (1)
- Temporal Logic (1)
- Temporal orientation (1)
- Temporallogik (1)
- Terrain Visualization (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Texturen (1)
- Texturing (1)
- The Bitfury Group (1)
- The DAO (1)
- Theory (1)
- Three-dimensional displays (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Timed Automata (1)
- Tool survey (1)
- Trace inclusion (1)
- Traceability (1)
- Tracking (1)
- Transaktion (1)
- Transaktionen (1)
- Transformation (1)
- Transformation tool contest (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Travis CI (1)
- Treemaps (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Two dimensional displays (1)
- Two-Way-Peg (1)
- Ubiquitous computing (1)
- Unbegrenzter Zustandsraum (1)
- Unified cloud model (1)
- Unique column combination (1)
- Unique column combinations (1)
- Unspent Transaction Output (1)
- Unveränderlichkeit (1)
- Usage Interest (1)
- VIL (1)
- Verbindungsnetzwerke (1)
- Verhalten (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Verlässlichkeit (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verträge (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Video OCR (1)
- Video classification (1)
- Video indexing (1)
- Videoanalyse (1)
- Videometadaten (1)
- View navigation (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual 3D city model (1)
- Virtual 3D scenes (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machine (1)
- Virtual camera control (1)
- Virtual machines (1)
- Visual modeling (1)
- Vocabulary (1)
- Vorhersage (1)
- Vulnerability Assessment (1)
- W[3]-completeness (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Water Science and Technology (1)
- Watson IoT (1)
- Web Sites (1)
- Web applications (1)
- Web browsers (1)
- Web navigational language (1)
- Web of Data (1)
- Web safeness (1)
- Web-Anwendungen (1)
- Web-based rendering (1)
- Webseite (1)
- Well-structuredness (1)
- Werkzeuge (1)
- Wikipedia (1)
- Wohlstrukturiertheit (1)
- Zeitbehaftete Petri Netze (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zugriffskontrolle (1)
- abdominal imaging (1)
- abundance (1)
- access control (1)
- accessibility (1)
- accounting (1)
- action recognition (1)
- activity instance state propagation (1)
- activity recognition (1)
- acute renal failure (1)
- adaptation rules (1)
- address normalization (1)
- address parsing (1)
- adolescent (1)
- adoption (1)
- affective interfaces (1)
- algorithm (EDA) (1)
- algorithmic (1)
- algorithms (1)
- altchain (1)
- alternative chain (1)
- anisotropic Kuwahara filter (1)
- anniversary (1)
- annotation (1)
- anomalies (1)
- app (1)
- application (1)
- application virtualization (1)
- approximation (1)
- apriori (1)
- architecture (1)
- architecture recovery (1)
- architecture-based adaptation (1)
- architectures (1)
- argumentation research (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- association rule mining (1)
- asynchronous circuit (1)
- atmospheric pressure chemical ionization (1)
- atomic swap (1)
- attack graph (1)
- attacks (1)
- attention mechanism (1)
- attribute assurance (1)
- ausführbare Semantiken (1)
- authentication protocol (1)
- back pain (1)
- back-in-time (1)
- basic cloud storage services (1)
- battery (1)
- battery-depletion attack (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour compatibility (1)
- behaviour equivalence (1)
- behavioural models (1)
- benchmark testing (1)
- beschreibende Feldstudie (1)
- bibliometrics (1)
- bidirectional payment channels (1)
- bidirectional shortest path (1)
- big data (1)
- billing (1)
- biomarker detection (1)
- biomechanics (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- bitcoins (1)
- blind (1)
- bloat control (1)
- blockchain (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- Bluemix platform (1)
- bpm (1)
- bridge management systems (1)
- bug tracking (1)
- building information modeling (1)
- building models (1)
- business process architecture (1)
- business process model abstraction (1)
- business process modeling (1)
- business processes (1)
- cancer therapy (1)
- cartography-oriented visualization (1)
- case study (1)
- · Computing (1)
- chain (1)
- change blindness (1)
- change management (1)
- changeability (1)
- charging (1)
- child (1)
- chronic dialysis (1)
- cleansing (1)
- clinical nephrology (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster resource management (1)
- clustering (1)
- co-location (1)
- cogeneration units (1)
- cognition (1)
- cognitive (1)
- coherence-enhancing filtering (1)
- cohort (1)
- collaborations (1)
- collaborative tagging (1)
- collection types (1)
- color palettes (1)
- communication network (1)
- complex emotions (1)
- complexity dichotomy (1)
- comprehension (1)
- computational design (1)
- computer graphics (1)
- computer science (1)
- computer vision (1)
- conceptualization (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditional functional dependencies (1)
- conditions (1)
- confirmation period (1)
- conflicts and dependencies in (1)
- conformance analysis (1)
- conformance checking (1)
- consensus algorithm (1)
- consensus protocol (1)
- consistency (1)
- contest period (1)
- context awareness (1)
- continuous integration (1)
- continuous testing (1)
- contract (1)
- control (1)
- control resynthesis (1)
- controlled experiment (1)
- coronavirus 2019 (1)
- corporate takeovers (1)
- cost-effectiveness (1)
- cost-utility analysis (1)
- creativity and innovation management (1)
- critical pairs (1)
- cross-chain (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cscw (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-physical systems (1)
- damage detection (1)
- data center management (1)
- data analysis (1)
- data cleaning (1)
- data cleansing (1)
- data correctness checking (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data in business processes (1)
- data migration (1)
- data modeling (1)
- data objects (1)
- data preparation (1)
- data security (1)
- data states (1)
- data transformation (1)
- data view (1)
- data-driven demand (1)
- database systems (1)
- database technology (1)
- datasets (1)
- deadline propagation (1)
- decentralized autonomous organization (1)
- deduplication (1)
- deep learning (1)
- delay propagation (1)
- denial-of-service attack (1)
- dental caries classification (1)
- dependability (1)
- dependable computing (1)
- dependencies (1)
- dependency (1)
- design management (1)
- design thinking discourse (1)
- deterministic properties (1)
- deterministic random walk (1)
- deurema modeling language (1)
- development tools (1)
- dezentrale autonome Organisation (1)
- diabetes (1)
- difference of Gaussians (1)
- differential (1)
- differential privacy (1)
- difficulty (1)
- difficulty target (1)
- diffusion (1)
- digital education (1)
- digital health app (1)
- digital interventions (1)
- digital startup (1)
- digital therapy (1)
- digital whiteboard (1)
- digitale Bildung (1)
- direct manipulation (1)
- direkte Manipulation (1)
- distributed computing (1)
- distributed data-parallel processing (1)
- distributed ledger technology (1)
- distribution algorithm (1)
- doppelter Hashwert (1)
- double hashing (1)
- dynamic typing (1)
- dynamic consolidation (1)
- dynamic pricing (1)
- dynamic programming languages (1)
- dynamic reconfiguration (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- e-lecture (1)
- eHealth (1)
- ePA (1)
- ecological momentary assessment (1)
- ecosystems (1)
- efficiency (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- electronic health records (1)
- electronic mail (1)
- embedded systems (1)
- embedded-systems (1)
- embedding (1)
- emotional (1)
- empathy (1)
- empirical studies (1)
- empirische Studien (1)
- end-stage kidney disease (1)
- endogenous (1)
- energy efficiency (1)
- energy savings (1)
- engine (1)
- engineering (1)
- enrichment calculation (1)
- enterprise search (1)
- entity alignment (1)
- enumeration complexity (1)
- epistasis (1)
- erfahrbare Medien (1)
- estimation-of-distribution (1)
- estimation-of-distribution algorithm (1)
- ethics (1)
- event abstraction (1)
- event log (1)
- events (1)
- evolution (1)
- evolution in MDE (1)
- exact exponential-time algorithms (1)
- executable semantics (1)
- experience report (1)
- experimental design (1)
- experiments (1)
- expertise (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- exploratory programming (1)
- expression (1)
- external knowledge bases (1)
- factual (1)
- failure model (1)
- failure profile (1)
- failure profile model (1)
- fair division (1)
- fault injection (1)
- fault tolerance (1)
- feature selection (1)
- federated voting (1)
- feedback loop modeling (1)
- fehlende Daten (1)
- flow-based bilateral filter (1)
- flux (1)
- focus plus context visualization (1)
- folksonomy (1)
- force-feedback (1)
- forecasts (1)
- forensics (1)
- formal framework (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- fully concurrent bisimulation (1)
- functional dependencies (1)
- functional languages (1)
- functional lenses (1)
- functional programming (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- game theory (1)
- gaming (1)
- ganzheitlich (1)
- gene (1)
- gene selection (1)
- generalization (1)
- generative multi-discriminative networks (1)
- genetic algorithms (1)
- geocoding (1)
- geographic information systems (1)
- geospatial artificial intelligence (1)
- geospatial data (1)
- geospatial digital twins (1)
- gesture (1)
- grammar inference (1)
- grammar-based compression (1)
- graph clustering (1)
- graph databases (1)
- graph languages (1)
- graph pattern matching (1)
- graph queries (1)
- graph replacement categories (1)
- graph transformation systems (1)
- graph transformations (1)
- graph-based ranking (1)
- hashrate (1)
- health apps (1)
- health information privacy concern (1)
- health personnel (1)
- healthcare (1)
- heuristic algorithms (1)
- history (1)
- holistic (1)
- home-based studies (1)
- homomorphic encryption (1)
- hospital (1)
- human computer interaction (1)
- human immunodeficiency virus (1)
- hybrid graph-transformation-systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyperbolic geometry (1)
- identity (1)
- identity broker (1)
- image captioning (1)
- image processing (1)
- imbalanced learning (1)
- immutable values (1)
- implantable medical device (1)
- implants (1)
- in-memory database (1)
- inattentional blindness (1)
- inclusion (1)
- inclusion dependency (1)
- index (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- informatics (1)
- information extraction (1)
- information storage and (1)
- inkrementelle Graphmustersuche (1)
- inkrementelles Graph Pattern Matching (1)
- innovation (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- intelligente Verträge (1)
- inter-chain (1)
- interaction (1)
- interactive editing (1)
- interactive simulation (1)
- interconnect (1)
- interface (1)
- interpretable machine learning (1)
- invariant checking (1)
- invasive aspects (1)
- job (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant checking (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktives Invariant-Checking (1)
- key discovery (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge management (1)
- kontinuierliche Integration (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- künstliche Intelligenz (1)
- label-free (1)
- landmarks (1)
- language specification (1)
- languages (1)
- laser cutting (1)
- layered architecture (1)
- lazy transformation (1)
- leadership (1)
- lean startup approach (1)
- learning (1)
- ledger assets (1)
- link discovery (1)
- linked data (1)
- live programming (1)
- lively kernel (1)
- local confluence (1)
- location-based (1)
- logic (1)
- logic synthesis (1)
- low back pain (1)
- low-code development approaches (1)
- lower bound (1)
- mHealth (1)
- main memory computing (1)
- management (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- market study (1)
- maschinelles Lernen (1)
- mass isotopologue distribution (1)
- matching dependencies (1)
- maximal structuring (1)
- mean-variance optimization (1)
- medical (1)
- medical identity theft (1)
- medical malpractice (1)
- mehrdimensionale Belangtrennung (1)
- memory optimization (1)
- memory-based clustering (1)
- memory-based correlation (1)
- memory-based databases (1)
- mental disorders (1)
- mental health (1)
- merged mining (1)
- merkle root (1)
- metadata discovery (1)
- metadata quality (1)
- methodologie (1)
- metric learning (1)
- micropayment (1)
- micropayment channels (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mixed-methods (1)
- mobile (1)
- mobile application (1)
- mobile devices (1)
- mobile health (1)
- mobile phone (1)
- model generation (1)
- model interpreter (1)
- model verification (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modeling language (1)
- modellgetriebene Softwareentwicklung (1)
- modelling (1)
- models at runtime (1)
- modular counting (1)
- modularity (1)
- molecular tumor board (1)
- monitoring (1)
- morphic (1)
- mortality (1)
- motion analysis (1)
- motion capture (1)
- motivations (1)
- multi core data processing (1)
- multi factor authentication (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-perspective visualization (1)
- multi-product pricing (1)
- multi-family residential buildings (1)
- multilevel systems (1)
- multimodal representations (1)
- multimodal sensing (1)
- multiview classification (1)
- mutation (1)
- mutli-task learning (1)
- network creation games (1)
- networks (1)
- neural (1)
- nichtlineare Projektionen (1)
- non-repudiation (1)
- nonce (1)
- nonlinear projections (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- off-chain transaction (1)
- one-time password (1)
- online course (1)
- online-learning (1)
- optimizations (1)
- organizational change (1)
- organizations (1)
- orthopedic (1)
- orthopedic; (1)
- orts-basiert (1)
- ownership (1)
- pain management (1)
- panorama (1)
- parallel (1)
- parallel computing (1)
- paralleles Rechnen (1)
- parameterized complexity (1)
- parkinson's disease (1)
- parsimonious reduction (1)
- partial application conditions (1)
- partielle Anwendungsbedingungen (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal electronic health records (1)
- personal satisfaction (1)
- personalized medicine (1)
- petri net (1)
- phishing (1)
- physical activity assessment (1)
- point clouds (1)
- portability (1)
- power-law (1)
- prefetching (1)
- presentation (1)
- prevention (1)
- price of anarchy (1)
- pricing (1)
- prior knowledge (1)
- probabilistic (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process instance (1)
- process instance grouping (1)
- process model search (1)
- process modeling (1)
- process modeling languages (1)
- process refinement (1)
- process scheduling (1)
- process-aware digital twin cockpit (1)
- process-awareness (1)
- processes (1)
- processing (1)
- processor hardware (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming language (1)
- programs (1)
- proportional division (1)
- proteomics (1)
- protocols (1)
- prototypes (1)
- public cloud storage services (1)
- public health medicine (1)
- quantitative analysis (1)
- query matching (1)
- query optimisation (1)
- query rewriting (1)
- querying (1)
- quorum slices (1)
- random I (1)
- random forest (1)
- random graphs (1)
- ranking (1)
- rapid prototyping (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time (1)
- recursive tuning (1)
- reflection (1)
- reinforcement learning (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- remodularization (1)
- remote collaboration (1)
- remote monitoring (1)
- representation learning (1)
- requirements (1)
- research challenges (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- restoration (1)
- restricted edges (1)
- retrieval (1)
- reusable aspects (1)
- reuse (1)
- reward (1)
- robustness (1)
- rootstock (1)
- rotor-router model (1)
- runtime adaptations (1)
- runtime analysis (1)
- runtime behavior (1)
- runtime models (1)
- s/t-pattern sequences (1)
- satisfiabilitiy solving (1)
- scalability of blockchain (1)
- scale-free networks (1)
- scarce tokens (1)
- school (1)
- school (environment) (1)
- science mapping (1)
- scripting environments (1)
- search plan generation (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- self-adaptive software (1)
- self-driving (1)
- self-learning scheduler (1)
- self-supervised learning (1)
- semantic (1)
- semantic analysis (1)
- semantics preservation (1)
- sensor (1)
- sensor data (1)
- separation of concerns (1)
- service-oriented (1)
- severe acute respiratory (1)
- sidechain (1)
- signal transition graph (1)
- similarity (1)
- similarity learning (1)
- simulator (1)
- single vertex discrepancy (1)
- single-case experimental design (1)
- small files (1)
- smallest grammar problem (1)
- smalltalk (1)
- smart card (1)
- smart contracts (1)
- smartphone (1)
- sociology (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software testing (1)
- software tests (1)
- software visualization (1)
- software/instrumentation (1)
- spamming (1)
- speed independence (1)
- speed independent (1)
- stable matchings (1)
- staging (1)
- standards (1)
- static analysis (1)
- statische Analyse (1)
- statistics (1)
- statutes and laws (1)
- steganography (1)
- straight-line (1)
- structured process model (1)
- study (1)
- style description languages (1)
- stylization (1)
- substitution effects (1)
- supervised machine learning (1)
- surveillance (1)
- symbolic graph transformation (1)
- synchronization (1)
- syndrome coronavirus 2 (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tableau method (1)
- tangible media (1)
- teamwork (1)
- technology adoption (1)
- technology pivot (1)
- tele-lab (1)
- tele-teaching (1)
- terrain models (1)
- test-driven fault navigation (1)
- textures (1)
- threshold cryptography (1)
- timed automata (1)
- topics (1)
- tort law (1)
- touch input (1)
- traceability (1)
- tracing (1)
- transaction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- transversal hypergraph (1)
- tree conjecture (1)
- triple graph grammars (1)
- trust (1)
- trust model (1)
- tuple spaces (1)
- typed graph transformation systems (1)
- typography (1)
- unequal shares (1)
- unique (1)
- unsupervised methods (1)
- use-cases (1)
- user interaction (1)
- utility (1)
- verschachtelte Anwednungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- versioning (1)
- verteilte Datenbanken (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtual reality (1)
- virtualisierte IT-Infrastruktur (1)
- visually impaired (1)
- vulnerabilities (1)
- wearable movement sensor (1)
- wearables (1)
- web application (1)
- web-applications (1)
- weight (1)
- word clouds (1)
- word sense disambiguation (1)
- workflow (1)
- workload prediction (1)
- zero-power defense (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (407)
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. 
Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations such as the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities, each with different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and automatically checking them against business processes is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped to a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
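As an illustration of the pattern-based approach described in this abstract, the common "response" pattern (whenever a trigger activity occurs, a response activity must eventually follow, i.e., G(trigger → F response)) can be checked over a finite execution trace. This is a minimal sketch with hypothetical activity names, not the thesis' model-checking machinery:

```python
def satisfies_response(trace, trigger, response):
    """Design-time check of the 'response' compliance pattern,
    G(trigger -> F response), on a finite execution trace."""
    pending = False
    for activity in trace:
        if activity == trigger:
            pending = True        # an obligation is opened
        elif activity == response:
            pending = False       # all open obligations are discharged
    return not pending

# Hypothetical order-handling traces:
compliant = satisfies_response(["open", "check", "archive"], "open", "archive")
violating = satisfies_response(["open", "check"], "open", "archive")
```

In a full model checker such a formula would be evaluated against all possible executions of the process model rather than a single trace.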
There is an increasing interest in fusing data from heterogeneous sources. Combining data sources increases the utility of existing datasets, generating new information and creating services of higher quality. A central issue in working with heterogeneous sources is data migration: in order to share and process data in different engines, resource-intensive and complex movements and transformations between computing engines, services, and stores are necessary.
Muses is a distributed, high-performance data migration engine that is able to interconnect distributed data stores by forwarding, transforming, repartitioning, or broadcasting data among distributed engines' instances in a resource-, cost-, and performance-adaptive manner. As such, it performs seamless information sharing across all participating resources in a standard, modular manner. We show an overall improvement of 30 % for pipelining jobs across multiple engines, even when we count the overhead of Muses in the execution time. This performance gain implies that Muses can be used to optimise large pipelines that leverage multiple engines.
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MID) within the mass spectra, which need to be corrected during data evaluation as they would otherwise impair enrichment calculation. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
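The enrichment calculation that superimposed distributions would impair can be sketched as the weighted mean label content of a normalized MID. This is a minimal illustration of the general formula, not the described R package's API:

```python
def isotopic_enrichment(mid):
    """Mean labeling fraction of a mass isotopologue distribution
    [M+0, M+1, ..., M+n] for a fragment with n labelable positions."""
    n = len(mid) - 1
    total = sum(mid)
    # Weighted mean number of labels, normalized by the n positions.
    return sum(i * abundance for i, abundance in enumerate(mid)) / (n * total)

# Fully unlabeled material yields 0.0, fully labeled material yields 1.0.
```

Uncorrected fragment or rearrangement signals distort the abundances entering this formula, which is why the superimposed MIDs must be deconvolved first.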
Text displayed in a video is an essential part of the high-level semantic information of the video content. Therefore, video text can be used as a valuable source for automated video indexing in digital video libraries. In this paper, we propose a workflow for video text detection and recognition. In the text detection stage, we have developed a fast localization-verification scheme, in which an edge-based multi-scale text detector first identifies potential text candidates with a high recall rate. Then, detected candidate text lines are refined by using an image entropy-based filter. Finally, Stroke Width Transform (SWT)- and Support Vector Machine (SVM)-based verification procedures are applied to eliminate the false alarms. For text recognition, we have developed a novel skeleton-based binarization method in order to separate text from complex backgrounds and make it processable for standard OCR (Optical Character Recognition) software. The operability and accuracy of the proposed text detection and binarization methods have been evaluated using publicly available test data sets.
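The entropy-based filtering step mentioned above can be illustrated with the Shannon entropy of a candidate region's gray-value histogram; flat, low-entropy regions are unlikely to contain text. A minimal sketch under that assumption (the threshold and patch representation are illustrative):

```python
from collections import Counter
from math import log2

def patch_entropy(pixels):
    """Shannon entropy (in bits) of the gray-value histogram of a patch,
    given as a flat sequence of pixel intensities."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A flat background patch has zero entropy; a patch with many distinct
# gray values has high entropy, so thresholding on entropy can discard
# candidate regions that are unlikely to contain text.
flat = [128] * 64
busy = list(range(64))
```

In the actual workflow this filter only refines the candidates produced by the edge-based detector; the SWT- and SVM-based verification stages make the final decision.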
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, this graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer long response times when the number of entities and relationships in the graph data or the size of the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived whenever the graph data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. However, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. Still, other kinds of discrimination networks exist that can perform better than Rete networks in time and space. Currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests, enabling users to steer the trade-off between memory consumption and execution time of the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Finally, this thesis provides a performance evaluation, which shows that (a) the incremental maintenance algorithm scales as the graph data grows large, and (b) generalized discrimination network structures can outperform Rete network structures in both time and space for incremental graph pattern matching.
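The core idea behind condition-test caching can be sketched with a toy example (this is not the thesis' generalized networks): partial matches of a two-edge path pattern a→b→c are stored per edge, so that inserting an edge only joins against cached neighbors instead of re-running the full pattern search.

```python
class TwoEdgePatternIndex:
    """Incrementally maintains matches of the path pattern a -> b -> c.
    Partial matches (single edges) are cached, so each edge insertion
    only joins against the relevant cached neighbors."""

    def __init__(self):
        self.out_edges = {}   # src -> set of dst (cached partial matches)
        self.in_edges = {}    # dst -> set of src
        self.matches = set()  # complete (a, b, c) matches

    def add_edge(self, src, dst):
        self.out_edges.setdefault(src, set()).add(dst)
        self.in_edges.setdefault(dst, set()).add(src)
        # New edge used as the first hop: join with cached edges leaving dst.
        for c in self.out_edges.get(dst, ()):
            self.matches.add((src, dst, c))
        # New edge used as the second hop: join with cached edges entering src.
        for a in self.in_edges.get(src, ()):
            self.matches.add((a, src, dst))
```

The memory/time trade-off the thesis studies appears even here: the two caches cost memory proportional to the number of edges, but they make each update local rather than a global re-match.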
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a graph repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows restricting the generation of graph repairs to delta-preserving graph repairs, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes if and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach using a case study from the graph database domain.
The importance of reporting is ever increasing in today's fast-paced market environments, and the availability of up-to-date information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing (OLTP) systems, with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide the flexibility, detail, and timeliness needed for today's operational reporting. As technology advances, this separation has to be re-evaluated, and means to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing means to evaluate the performance of enterprise data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the reason for separating the two workloads onto different systems. In this article, we introduce the concepts, logical data schema, transactions, and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing, globally operating enterprises.
This contribution presents a quantitative evaluation procedure for information retrieval models and the results of this procedure applied to the enhanced Topic-based Vector Space Model (eTVSM). Since the eTVSM is an ontology-based model, its effectiveness heavily depends on the quality of the underlying ontology. Therefore, the model has been tested with different ontologies to evaluate their impact on the effectiveness of the eTVSM. At the highest level of abstraction, the following results were observed during our evaluation: First, we confirmed the theoretically deduced statement that the eTVSM has an effectiveness similar to the classic Vector Space Model if a trivial ontology is used (every term is a concept and is independent of any other concept). Second, we were able to show that the effectiveness of the eTVSM increases if an ontology is used that only resolves synonyms. We were able to derive such an ontology automatically from the WordNet ontology. Third, we observed that more powerful ontologies automatically derived from WordNet dramatically dropped the effectiveness of the eTVSM, even clearly below the effectiveness level of the Vector Space Model. Fourth, we were able to show that a manually created and optimized ontology raises the effectiveness of the eTVSM to a level clearly above the best effectiveness levels we have found in the literature for the Latent Semantic Indexing model with comparable document sets.
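The synonym-resolution effect described in the second result can be illustrated with a toy concept mapping in place of a real ontology. The terms and mapping below are hypothetical, and this is plain cosine similarity over term or concept vectors, not the full eTVSM:

```python
from math import sqrt
from collections import Counter

SYNONYMS = {"car": "automobile", "auto": "automobile"}  # toy ontology

def concept_vector(terms, ontology):
    """Map each term to its concept (synonyms collapse) and count."""
    return Counter(ontology.get(t, t) for t in terms)

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

doc = ["car", "engine"]
query = ["automobile", "engine"]
# The plain term-based VSM treats "car" and "automobile" as unrelated,
# while the synonym ontology maps both to the same concept.
```

Collapsing synonyms into concepts is exactly what raises retrieval effectiveness here, mirroring the evaluation's second observation.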
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used primarily to track changes of objects over time for comparison, allowing for routine classification and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a service-oriented methodology.
Modern communication systems are becoming increasingly dynamic and complex. In this article, a novel mechanism for next-generation charging and billing is presented that enables self-configurability for accounting systems consisting of heterogeneous components. The mechanism is required to be simple, effective, efficient, scalable, and fault-tolerant. Based on simulation results, it is shown that the proposed simple distributed mechanism is competitive with the usual cost-based or random mechanisms under realistic assumptions and in all but extreme workload situations, while fulfilling the posed requirements.
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
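For intuition, the analyzed algorithm can be sketched as follows: the UMDA samples a population from independent per-bit frequencies, selects the best individuals, and updates the frequencies to the selected individuals' bit-wise means. The parameter values below are illustrative toys, not the quasilinear population sizes the guarantee assumes:

```python
import random

def leading_ones(x):
    """LEADINGONES: number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def umda(n, pop_size=200, parents=100, iterations=500, seed=1):
    """Univariate marginal distribution algorithm: sample pop_size bit strings
    from independent per-bit frequencies, select the best `parents` of them,
    and replace the frequencies by the parents' bit-wise means, clamped to
    [1/n, 1 - 1/n] to avoid premature fixation."""
    rng = random.Random(seed)
    freqs = [0.5] * n
    best = 0
    for _ in range(iterations):
        pop = [[1 if rng.random() < p else 0 for p in freqs]
               for _ in range(pop_size)]
        pop.sort(key=leading_ones, reverse=True)
        best = max(best, leading_ones(pop[0]))
        if best == n:
            break
        selected = pop[:parents]
        freqs = [min(1 - 1 / n, max(1 / n, sum(x[i] for x in selected) / parents))
                 for i in range(n)]
    return best
```

The clamping of the frequencies corresponds to the standard border restriction; the low-genetic-drift regime of the theorem additionally requires the population size to grow quasilinearly with n.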
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
Model transformation is one of the key tasks in model-driven engineering and relies on the efficient matching and modification of graph-based data structures; its sibling graph rewriting has been used to successfully model problems in a variety of domains. Over the last years, a wide range of graph and model transformation tools have been developed all of them with their own particular strengths and typical application domains. In this paper, we give a survey and a comparison of the model and graph transformation tools that participated at the Transformation Tool Contest 2011. The reader gains an overview of the field and its tools, based on the illustrative solutions submitted to a Hello World task, and a comparison alongside a detailed taxonomy. The article is of interest to researchers in the field of model and graph transformation, as well as to software engineers with a transformation task at hand who have to choose a tool fitting to their needs. All solutions referenced in this article provide a SHARE demo. It supported the peer-review process for the contest, and now allows the reader to test the tools online.
In clinical practice, only a few reliable measurement instruments are available for monitoring knee joint rehabilitation. Advances to replace motion capturing with sensor data measurement have been made in the last years. Thus, a systematic review of the literature was performed, focusing on the implementation, diagnostic accuracy, and facilitators and barriers of integrating wearable sensor technology in clinical practices based on a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. For critical appraisal, the COSMIN Risk of Bias tool for reliability and measurement of error was used. PUBMED, Prospero, Cochrane database, and EMBASE were searched for eligible studies. Six studies reporting reliability aspects in using wearable sensor technology at any point after knee surgery in humans were included. All studies reported excellent results with high reliability coefficients, high limits of agreement, or a few detectable errors. They used different or partly inappropriate methods for estimating reliability or missed reporting essential information. Therefore, a moderate risk of bias must be considered. Further quality criterion studies in clinical settings are needed to synthesize the evidence for providing transparent recommendations for the clinical use of wearable movement sensors in knee joint rehabilitation.
Selection of initial points, the number of clusters, and finding proper cluster centers are still the main challenges in clustering processes. In this paper, we suggest a genetic algorithm-based method that searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with similar structure: elements in a group have the same size, while elements in different groups are of different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes, where the genes hold information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance against several random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
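The size-mixing crossover idea can be illustrated as follows; this is a hypothetical sketch of how parents with different numbers of genes might be recombined (fitness evaluation and selection are omitted, and the operator details are invented for the example, not the authors' implementation):

```python
import random

def crossover(parent_a, parent_b, rng):
    """One-point crossover for variable-length chromosomes whose genes
    are cluster centers: the child may inherit a different number of
    genes than either parent, moving it between sub-populations."""
    cut_a = rng.randint(0, len(parent_a))
    cut_b = rng.randint(0, len(parent_b))
    child = parent_a[:cut_a] + parent_b[cut_b:]
    # guard against an empty child by inheriting one random gene
    return child if child else [rng.choice(parent_a + parent_b)]

def mutate(chromosome, rng, sigma=0.1):
    """Gaussian jitter on one randomly chosen cluster center."""
    i = rng.randrange(len(chromosome))
    x, y = chromosome[i]
    chromosome[i] = (x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
    return chromosome

rng = random.Random(1)
a = [(0.0, 0.0), (5.0, 5.0)]              # 2-gene parent: 2 centers
b = [(1.0, 1.0), (4.0, 4.0), (9.0, 9.0)]  # 3-gene parent: 3 centers
child = mutate(crossover(a, b, rng), rng)
```

Because the cut points are chosen independently on each parent, offspring lengths vary, which is exactly what transfers information between the fixed-size sub-populations.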
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which improves access to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework, which provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused to compromise production networks, we present security management solutions that prevent misuse of laboratory resources through security isolation at the system and network levels.
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute conventional teaching in laboratories, but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
Virtual 3D city models increasingly cover whole city areas; hence, the perception of complex urban structures becomes increasingly difficult. Using abstract visualization, complexity of these models can be hidden where its visibility is unnecessary, while important features are maintained and highlighted for better comprehension and communication. We present a technique to automatically generalize a given virtual 3D city model consisting of building models, an infrastructure network and optional land coverage data; this technique creates several representations of increasing levels of abstraction. Using the infrastructure network, our technique groups building models and replaces them with cell blocks, while preserving local landmarks. By computing a landmark hierarchy, we reduce the set of initial landmarks in a spatially balanced manner for use in higher levels of abstraction. In four application examples, we demonstrate smooth visualization of transitions between precomputed representations; dynamic landmark highlighting according to virtual camera distance; an implementation of a cognitively enhanced route representation, and generalization lenses to combine precomputed representations in focus + context visualization.
Business process management is experiencing a large uptake by industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff become involved in actual modeling practice, it is crucial to assure model quality and homogeneity, and to provide suitable aids for creating models. In this paper, we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns: chunks of actions that often appear together in business processes. In particular, we specify action patterns and demonstrate how they can be identified from existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. We challenge our approach by applying it to the collection of process models from the SAP Reference Model.
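As a rough illustration, pairwise action patterns can be mined from a model repository with simple support/confidence counting; the thresholds and the invoice-related actions below are made up for the example, and the paper's approach covers richer association rules than just pairs:

```python
from itertools import combinations

def action_patterns(models, min_support=0.5, min_confidence=0.8):
    """Mine pairwise action patterns: rules A -> B where A and B co-occur
    in at least min_support of the models, and B appears in at least
    min_confidence of the models that contain A."""
    n = len(models)
    counts, pair_counts = {}, {}
    for actions in models:
        s = set(actions)
        for a in s:
            counts[a] = counts.get(a, 0) + 1
        for a, b in combinations(sorted(s), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n >= min_support:
            if c / counts[a] >= min_confidence:
                rules.append((a, b, c / n, c / counts[a]))
            if c / counts[b] >= min_confidence:
                rules.append((b, a, c / n, c / counts[b]))
    return rules

# toy repository: each model is the set of actions it contains
models = [
    {"check invoice", "approve invoice", "pay invoice"},
    {"check invoice", "approve invoice"},
    {"check invoice", "reject invoice"},
]
rules = action_patterns(models)
```

On this toy repository, the only surviving rule says that models containing "approve invoice" also contain "check invoice", which is the kind of suggestion a recommender could surface while a user is modeling.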
Fragmentation of peptides leaves characteristic patterns in mass spectrometry data, which can be used to identify protein sequences, but this method is challenging for mutated or modified sequences, for which limited information exists. Altenburg et al. use an ad hoc learning approach to learn relevant patterns directly from unannotated fragmentation spectra.
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins.
Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge.
Here, to elevate unrestricted learning from spectra, we introduce 'ad hoc learning of fragmentation' (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments.
We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task.
Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
Duplicate detection is the task of identifying all groups of records within a data set that represent the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition behind such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
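A minimal sketch of the sorted-neighborhood idea with one possible adaptive-window strategy (grow the window in similar regions, shrink it otherwise); the string similarity measure, thresholds, and window bounds are illustrative assumptions, not the adaptation strategies evaluated in the paper:

```python
import difflib

def similarity(a, b):
    """Simple string similarity in [0, 1] via difflib."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def adaptive_snm(records, key, threshold=0.8, min_window=2, max_window=10):
    """Sorted Neighborhood with an adaptive window: sort by key, then
    widen the window while neighboring records are still similar and
    narrow it again in low-similarity regions."""
    records = sorted(records, key=key)
    duplicates = []
    window = min_window
    i = 0
    while i < len(records) - 1:
        # compare the current record with every record in its window
        for j in range(i + 1, min(i + window, len(records))):
            if similarity(records[i], records[j]) >= threshold:
                duplicates.append((records[i], records[j]))
        # adapt: widen where the next neighbor is similar, narrow otherwise
        if similarity(records[i], records[i + 1]) >= threshold:
            window = min(window + 1, max_window)
        else:
            window = max(window - 1, min_window)
        i += 1
    return duplicates

recs = ["john smith", "john smyth", "jon smith", "mary jones", "marie jones"]
dups = adaptive_snm(recs, key=lambda r: r)
```

Like any SNM variant, this is a heuristic: pairs that never share a window (here, the first and third record) are missed, which is the usual trade of completeness for far fewer comparisons.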
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data's full potential. In this paper, we present a demo that allows experiencing this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
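The level-wise, Apriori-style search with minimality pruning can be sketched as follows; this brute-force version is only for illustration and lacks the efficient candidate generation and statistics-based pruning the paper contributes:

```python
from itertools import combinations

def is_unique(rows, cols):
    """A column combination is unique iff its value projection
    contains no duplicate tuples."""
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in cols)
        if key in seen:
            return False
        seen.add(key)
    return True

def minimal_uccs(rows, columns):
    """Level-wise discovery of minimal unique column combinations:
    any superset of a found UCC is trivially unique too, so it is
    pruned from the search."""
    uccs = []
    for size in range(1, len(columns) + 1):
        for combo in combinations(columns, size):
            if any(set(u) <= set(combo) for u in uccs):
                continue  # pruned: contains a smaller UCC
            if is_unique(rows, combo):
                uccs.append(combo)
    return uccs

rows = [
    {"first": "Ada",  "last": "Lovelace", "city": "London"},
    {"first": "Ada",  "last": "Byron",    "city": "London"},
    {"first": "Alan", "last": "Turing",   "city": "London"},
]
result = minimal_uccs(rows, ["first", "last", "city"])
```

On this toy table only the "last" column is unique, so every combination containing it is pruned and the single-column UCC is reported as minimal.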
Affect-aware word clouds
(2020)
Word clouds are widely used for non-analytic purposes, such as introducing a topic to students, or creating a gift with personally meaningful text. Surveys show that users prefer tools that yield word clouds with a stronger emotional impact. Fonts and color palettes are powerful typographical signals that may determine this impact. Typically, these signals are assigned randomly, or are expected to be chosen by the users. We present an affect-aware font and color palette selection methodology that aims to facilitate more informed choices. We infer associations of fonts with a set of eight affects, and evaluate the resulting data in a series of user studies, both on individual words as well as in word clouds. Relying on a recent study to procure affective color palettes, we carry out a similar user study to understand the impact of color choices on word clouds. Our findings suggest that both fonts and color palettes are powerful tools contributing to the affects evoked by a word cloud. The experiments further confirm that the novel datasets we propose are successful in enabling this. We also find that, for the majority of the affects, both signals need to be congruent to create a stronger impact. Based on this data, we implement a prototype that allows users to specify a desired affect and recommends congruent fonts and color palettes for the word cloud.
Background:
Early reports indicate that AKI is common among patients with coronavirus disease 2019 (COVID-19) and associated with worse outcomes. However, AKI among hospitalized patients with COVID-19 in the United States is not well described.
Methods:
This retrospective, observational study involved a review of data from electronic health records of patients aged ≥18 years with laboratory-confirmed COVID-19 admitted to the Mount Sinai Health System from February 27 to May 30, 2020. We describe the frequency of AKI and dialysis requirement, AKI recovery, and adjusted odds ratios (aORs) of AKI with mortality.
Results:
Of 3993 hospitalized patients with COVID-19, AKI occurred in 1835 (46%) patients; 347 (19%) of the patients with AKI required dialysis. The proportions with stages 1, 2, or 3 AKI were 39%, 19%, and 42%, respectively. A total of 976 (24%) patients were admitted to intensive care, and 745 (76%) of these experienced AKI. Of the 435 patients with AKI and urine studies, 84% had proteinuria, 81% had hematuria, and 60% had leukocyturia. Independent predictors of severe AKI were CKD, male sex, and higher serum potassium at admission. In-hospital mortality was 50% among patients with AKI versus 8% among those without AKI (aOR, 9.2; 95% confidence interval, 7.5 to 11.3). Of survivors with AKI who were discharged, 35% had not recovered to baseline kidney function by the time of discharge. An additional 28 of 77 (36%) patients who had not recovered kidney function at discharge did so on posthospital follow-up.
Conclusions:
AKI is common among patients hospitalized with COVID-19 and is associated with high mortality. Of all patients with AKI, only 30% survived with recovery of kidney function by the time of discharge.
For the present study, »Qualitative Untersuchung zur Akzeptanz des neuen Personalausweises und Erarbeitung von Vorschlägen zur Verbesserung der Usability der Software AusweisApp« (qualitative study on the acceptance of the new German ID card and development of proposals for improving the usability of the AusweisApp software), an innovation team used the Design Thinking method to work on the question "How can we make the AusweisApp intuitive and comprehensible for users?" First, the acceptance of the new ID card was tested: citizens were surveyed about their knowledge of and expectations regarding the new ID card, as well as about their general use of the new ID card, their use of the online identification function, and the usability of the AusweisApp. In addition, users were observed while using the current AusweisApp and interviewed afterwards, which provided deep insight into their needs. The results of the qualitative study were used to develop improvement proposals for the AusweisApp that meet citizens' needs. The proposals for optimizing the AusweisApp were implemented as prototypes and tested with potential users. The tests showed that the developed features make it considerably easier for citizens to use the online identification function. Overall, the study found that the degree of acceptance of the new ID card diverges strongly: respondents' attitudes ranged from skepticism to approval, making the new ID card a topic that polarizes citizens. The user tests revealed numerous opportunities for improving the existing service design, both around the new ID card itself and in connection with the software used. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate on and verify its proposals. The elaborated proposals relate to the AusweisApp.
The new features essentially comprise: direct access to the service providers; extensive assistance (tooltips, FAQ, wizard, video); a history function; and an example service that makes the online identification function tangible. Above all, the new version of the AusweisApp should offer users concrete fields of application for their new ID card and thus added value. Elaborating further functions of the AusweisApp can help the new ID card realize its full potential.
Version Control Systems (VCS) allow developers to manage changes to software artifacts. Developers interact with VCSs through a variety of client programs, such as graphical front-ends or command line tools. It is desirable to use the same version control client program against different VCSs. Unfortunately, no established abstraction over VCS concepts exists. Instead, VCS client programs implement ad-hoc solutions to support interaction with multiple VCSs. This thesis presents Pur, an abstraction over version control concepts that allows building rich client programs that can interact with multiple VCSs. We provide an implementation of this abstraction and validate it by implementing a client application.
Intrusion Detection Systems (IDS) have been widely deployed in practice for detecting malicious behavior on network communication and hosts. False-positive alerts are a common problem for most IDS approaches, and the usual solution is to enhance the detection process by correlation and clustering of alerts. To meet practical requirements, this process needs to finish fast, which is a challenging task, as the amount of alerts in large-scale IDS deployments is significantly high. We identify data storage and processing algorithms to be the most important factors influencing the performance of clustering and correlation. We propose and implement a highly efficient alert correlation platform. For storage, a column-based database, an in-memory alert storage, and memory-based index tables lead to significant performance improvements. For processing, algorithms are designed and implemented that are optimized for in-memory databases, e.g., an attack graph-based correlation algorithm. The platform can be distributed over multiple processing units to share memory and processing power. A standardized interface is designed to provide a unified view of result reports for end users. The efficiency of the platform is tested in practical experiments with several alert storage approaches, multiple algorithms, as well as a local and a distributed deployment.
Virtual 3D city models serve as integration platforms for complex geospatial and georeferenced information and as a medium for effective communication of spatial information. In order to explore these information spaces, navigation techniques for controlling the virtual camera are required to facilitate wayfinding and movement. However, navigation is not a trivial task, and many available navigation techniques do not support users effectively and efficiently with their respective skills and tasks. In this article, we present an assisting, constrained navigation technique for multiscale virtual 3D city models that is based on three basic principles: users point to navigate, users are led by suggestions, and the technique exploits semantic, multiscale, hierarchical structurings of city models. The technique particularly supports users with low navigation and virtual camera control skills but is also valuable for experienced users. It supports exploration, search, inspection, and presentation tasks, is easy to learn and use, supports orientation, is efficient, and yields effective view properties. In particular, the technique is suitable for interactive kiosks and mobile devices with a touch display and low computing resources, and for use in mobile situations where users only have restricted resources for operating the application. We demonstrate the validity of the proposed navigation technique by presenting an implementation and evaluation results. The implementation is based on service-oriented architectures, standards, and image-based representations, and allows exploring massive virtual 3D city models, particularly on mobile devices with limited computing resources. Results of a user study comparing the proposed navigation technique with standard techniques suggest that the proposed technique provides the targeted properties, and that it is more advantageous to novice than to expert users.
1 Introduction
1.1 Project formulation
1.2 Our contribution
2 Pedagogical Aspect
2.1 Modern teaching
2.2 Our Contribution
2.2.1 Autonomous and exploratory learning
2.2.2 Human machine interaction
2.2.3 Short multimedia clips
3 Ontology Aspect
3.1 Ontology driven expert systems
3.2 Our contribution
3.2.1 Ontology language
3.2.2 Concept Taxonomy
3.2.3 Knowledge base annotation
3.2.4 Description Logics
4 Natural language approach
4.1 Natural language processing in computer science
4.2 Our contribution
4.2.1 Explored strategies
4.2.2 Word equivalence
4.2.3 Semantic interpretation
4.2.4 Various problems
5 Information Retrieval Aspect
5.1 Modern information retrieval
5.2 Our contribution
5.2.1 Semantic query generation
5.2.2 Semantic relatedness
6 Implementation
6.1 Prototypes
6.2 Semantic layer architecture
6.3 Development
7 Experiments
7.1 Description of the experiments
7.2 General characteristics of the three sessions, instructions and procedure
7.3 First Session
7.4 Second Session
7.5 Third Session
7.6 Discussion and conclusion
8 Conclusion and future work
8.1 Conclusion
8.2 Open questions
A Description Logics
B Probabilistic context-free grammars
In order to achieve their business goals, organizations heavily rely on the operational excellence of their business processes. In traditional scenarios, business processes are usually well-structured, clearly specifying when and how certain tasks have to be executed. Flexible and knowledge-intensive processes are gathering momentum, where a knowledge worker drives the execution of a process case and determines the exact process path at runtime. In the case of an exception, the knowledge worker decides on an appropriate handling. While there is initial work on exception handling in well-structured business processes, exceptions in case management have not been sufficiently researched. This paper proposes an exception handling framework for stage-oriented case management languages, namely Guard Stage Milestone Model, Case Management Model and Notation, and Fragment-based Case Management. The effectiveness of the framework is evaluated with two real-world use cases showing that it covers all relevant exceptions and proposed handling strategies.
Mobile sensing technology allows us to investigate human behaviour on a daily basis. In this study, we examined temporal orientation, which refers to the capacity of thinking or talking about personal events in the past and future. We utilise the mksense platform, which allows us to use the experience-sampling method. Individuals' thoughts and their relationship with smartphone Bluetooth data are analysed to understand in which contexts people are influenced by social environments, such as the people they spend the most time with. As an exploratory study, we analyse the influence of social conditions through a collection of Bluetooth data and survey information from participants' smartphones. Preliminary results show that people are likely to focus on past events when interacting with closely related people, and to focus on future planning when interacting with strangers. Similarly, people experience present temporal orientation when accompanied by known people. We believe that these findings are linked to emotions since, in its most basic state, emotion is a state of physiological arousal combined with an appropriate cognition. In this contribution, we envision a smartphone application for automatically inferring human emotions based on users' temporal orientation using Bluetooth sensors, briefly elaborate on the influential factors of temporal orientation episodes, and conclude with a discussion and lessons learned.
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built an open, continuous-time framework to simulate dynamic pricing competition, called Price Wars. The microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log, which allows it to provide different performance measures, enabling users to compare profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which ease the evaluation of mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies that use demand learning to optimize their pricing.
Context-oriented programming (COP) provides dedicated support for defining and composing variations to a basic program behavior. A variation, which is defined within a layer, can be de-/activated for the dynamic extent of a code block. While this mechanism allows for control flow-specific scoping, expressing behavior adaptations can demand alternative scopes. For instance, adaptations can depend on dynamic object structure rather than control flow. We present scenarios for behavior adaptation and identify the need for new scoping mechanisms. The increasing number of scoping mechanisms calls for new language abstractions representing them. We suggest to open the implementation of scoping mechanisms so that developers can extend the COP language core according to their specific needs. Our open implementation moves layer composition into objects to be affected and with that closer to the method dispatch to be changed. We discuss the implementation of established COP scoping mechanisms using our approach and present new scoping mechanisms developed for our enhancements to Lively Kernel.
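A control-flow-scoped layer mechanism of the kind COP provides can be sketched in a few lines; this is a hypothetical minimal model (the names `Layer`, `with_layer`, and `layered` are invented for the example), not the Lively Kernel implementation or any established COP framework:

```python
import contextlib
import threading

class Layer:
    """A layer bundles partial method definitions (variations)."""
    def __init__(self, name):
        self.name = name
        self.variations = {}  # (class, method name) -> function

    def refine(self, cls, name):
        def register(func):
            self.variations[(cls, name)] = func
            return func
        return register

_active = threading.local()

@contextlib.contextmanager
def with_layer(layer):
    """Activate a layer for the dynamic extent of a code block."""
    stack = getattr(_active, "stack", [])
    _active.stack = stack + [layer]
    try:
        yield
    finally:
        _active.stack = stack

def layered(func):
    """Dispatch to the innermost active layer's variation, if any."""
    def wrapper(self, *args, **kwargs):
        for layer in reversed(getattr(_active, "stack", [])):
            variation = layer.variations.get((type(self), func.__name__))
            if variation:
                return variation(self, *args, **kwargs)
        return func(self, *args, **kwargs)
    return wrapper

class Greeter:
    @layered
    def greet(self):
        return "Hello"

formal = Layer("formal")

@formal.refine(Greeter, "greet")
def formal_greet(self):
    return "Good day"

g = Greeter()
base = g.greet()
with with_layer(formal):
    adapted = g.greet()
```

The `with_layer` block scopes the adaptation to the control flow; alternative scoping mechanisms of the kind the paper discusses would replace the thread-local stack lookup, e.g. by consulting the receiver's object structure instead.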
Driven by the ever-growing flood of digital information, more and more applications rely on inexpensive cloud storage services. The number of providers offering such services has increased considerably in recent years. To find the right provider for a given application, various criteria must be weighed individually. This study presents and compares a selection of providers of established basic storage services. For the comparison, criteria are extracted that apply to every provider examined and thus allow an assessment that is as objective as possible; these include cost, legal aspects, security, performance, and the interfaces provided. The presented criteria can be used to evaluate cloud storage providers for a concrete use case.
Systems of Systems (SoS) have received a lot of attention recently. In this thesis, we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis, we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often, SoS are combinations of software systems and physical systems, so a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and basically relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e., adding and removing components and services, is inadequate; therefore, our approach directly supports the modelling and verification of evolution.
Enforcing security policies on distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. Key language features are predicates and functions that extract results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overhead minimal. We implemented a compiler for AspectKE* and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
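The core idea of access control enforced at tuple-space operations can be illustrated with a minimal Python sketch. This mimics neither AspectKE*'s syntax nor its static analysis; the class, policy shape, and example tuples are all invented for illustration, showing only the interception point where an enforcement "aspect" advises a `take` operation.

```python
# Hypothetical sketch: an enforcement "aspect" intercepts take operations
# on a tuple space and checks a policy before the operation proceeds.
# All names and the policy format are illustrative assumptions.

class PolicyError(Exception):
    pass

class TupleSpace:
    def __init__(self, policy):
        self.tuples = []
        self.policy = policy  # the enforcement "aspect"

    def write(self, principal, tup):
        self.tuples.append(tup)

    def take(self, principal, pattern):
        # join point: the policy decides before the tuple is removed
        if not self.policy(principal, "take", pattern):
            raise PolicyError(f"{principal} may not take {pattern}")
        for t in self.tuples:
            if t[0] == pattern:
                self.tuples.remove(t)
                return t
        return None

# example policy: only "admin" may take patient records
policy = lambda who, op, pat: pat != "patient" or who == "admin"
space = TupleSpace(policy)
space.write("alice", ("patient", "record-1"))
print(space.take("admin", "patient"))  # prints ('patient', 'record-1')
```

A real enforcement mechanism would, as the abstract describes, additionally consult static-analysis results about the future behavior of the requesting process rather than just the current operation.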
Table of Contents: 1 Introduction 2 Aspect-Oriented Programming 2.1 A System as a Set of Properties 2.2 Aspects 2.3 Aspect Weavers 2.4 Advantages of Aspect-Oriented Programming 2.5 Categorisation of Techniques and Tools for Aspect-Oriented Programming 3 Techniques and Tools for the Analysis of Aspect-Oriented Software Programs 3.1 Virtual Source File 3.2 FEAT 3.3 JQuery 3.4 Aspect Mining Tool 4 Techniques and Tools for the Design of Aspect-Oriented Software Programs 4.1 Concern Space Modeling Schema 4.2 Modelling Aspects with UML 4.3 CoCompose 4.4 Codagen Architect 5 Techniques and Tools for the Implementation of Aspect-Oriented Software Programs 5.1 Static Aspect Weavers 5.2 Dynamic Aspect Weavers 6 Summary
Cost models play an important role for the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures are dominating today's server landscape, there is still a lack of parallel cost models that represent NUMA system sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
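The flavor of such a two-step cost model can be sketched in a few lines of Python. The latency numbers, the fixed cache-hit cost, and the two-level refinement are purely illustrative assumptions, not values or formulas from the work itself: step one classifies an access by node locality alone, step two refines the estimate with a hardware-counter indicator (here, a last-level-cache miss rate).

```python
# Toy NUMA cost model. All constants are illustrative assumptions.

LOCAL_NS = 90    # assumed latency of a node-local DRAM access (ns)
REMOTE_NS = 150  # assumed latency of a remote-node DRAM access (ns)
CACHE_NS = 10    # assumed latency of a last-level-cache hit (ns)

def access_cost(thread_node: int, page_node: int) -> int:
    """Step 1: coarse estimate based only on node locality."""
    return LOCAL_NS if thread_node == page_node else REMOTE_NS

def refined_cost(thread_node: int, page_node: int, llc_miss_rate: float) -> float:
    """Step 2: refine with a hardware-counter indicator (LLC miss rate).
    Only misses pay the DRAM latency; hits are served from cache."""
    dram = access_cost(thread_node, page_node)
    return llc_miss_rate * dram + (1.0 - llc_miss_rate) * CACHE_NS

print(access_cost(0, 0))        # prints 90  (local access)
print(access_cost(0, 1))        # prints 150 (remote access)
print(refined_cost(0, 1, 0.2))  # prints 38.0 (mostly cache hits)
```

In a real system the miss rate would come from hardware event counters (e.g. via Linux `perf`), which is exactly the kind of low-overhead indicator the proposed tools collect.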
ATIB
(2021)
Identity management is a principal component of securing online services. In the evolution of traditional identity management patterns, the identity provider has remained a Trusted Third Party (TTP). The service provider and the user need to trust a particular identity provider for correct attributes, among other demands. This paradigm changed with the advent of blockchain-based Self-Sovereign Identity (SSI) solutions that primarily focus on the users. SSI reduces the functional scope of the identity provider to that of an attribute provider while enabling attribute aggregation. Beyond that, the development of new protocols that disregard established ones, together with a significantly fragmented landscape of SSI solutions, poses considerable challenges for adoption by service providers. We propose an Attribute Trust-enhancing Identity Broker (ATIB) to leverage the potential of SSI for trust-enhancing attribute aggregation. Furthermore, ATIB abstracts from any particular SSI solution and offers standard protocols, thereby facilitating adoption by service providers. Despite the brokered integration approach, we show that ATIB provides a high security posture. Additionally, ATIB does not compromise the ten foundational SSI principles for the users.
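Trust-enhancing attribute aggregation can be illustrated with a small Python sketch. The credential structure, issuer names, and numeric trust levels below are hypothetical and do not reflect ATIB's actual interfaces; the sketch only shows the idea of merging attributes from several credentials while keeping, per claim, the value backed by the most trusted issuer.

```python
# Hypothetical sketch of brokered attribute aggregation.
# Issuer names, trust levels, and the credential format are invented.

TRUST = {"gov-registry": 3, "university": 2, "self-attested": 1}

def aggregate(credentials):
    """Merge credential attributes, keeping the most trusted value per claim."""
    claims = {}
    for cred in credentials:
        level = TRUST.get(cred["issuer"], 0)
        for name, value in cred["attributes"].items():
            if name not in claims or level > claims[name][1]:
                claims[name] = (value, level)
    return {n: {"value": v, "trust": t} for n, (v, t) in claims.items()}

creds = [
    {"issuer": "self-attested", "attributes": {"name": "Alice", "age": "21"}},
    {"issuer": "gov-registry", "attributes": {"age": "20"}},
]
# the government-issued age overrides the self-attested one
print(aggregate(creds))
```

A broker in the spirit of the abstract would then expose such an aggregated claim set to service providers over standard protocols instead of an SSI-specific one.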
Most of the microelectronic circuits fabricated today are synchronous, i.e. they are driven by one or several clock signals. Synchronous circuit design faces several fundamental challenges such as high-speed clock distribution, integration of multiple cores operating at different clock rates, reduction of power consumption, and dealing with voltage, temperature, manufacturing and runtime variations. Asynchronous or clockless design plays a key role in alleviating these challenges; however, the design and test of asynchronous circuits is much more difficult in comparison to their synchronous counterparts. A driving force for widespread use of asynchronous technology is the availability of mature EDA (Electronic Design Automation) tools which provide an entire automated design flow, starting from an HDL (Hardware Description Language) specification and yielding the final circuit layout. Even though much progress has been made in developing such EDA tools for asynchronous circuit design during the last two decades, their maturity level as well as their acceptance is still not comparable with that of tools for synchronous circuit design. In particular, logic synthesis (which implies the application of Boolean minimisation techniques) for the entire system's control path can significantly improve the efficiency of the resulting asynchronous implementation, e.g. in terms of chip area and performance. However, logic synthesis, in particular for asynchronous circuits, suffers from complexity problems. Signal Transition Graphs (STGs) are labelled Petri nets that are widely used to specify the interface behaviour of speed-independent (SI) circuits - a robust subclass of asynchronous circuits. STG decomposition is a promising approach to tackle complexity problems like state space explosion in logic synthesis of SI circuits.
The (structural) decomposition of STGs is guided by a partition of the output signals and generates a usually much smaller component STG for each partition member, i.e. a component STG with a much smaller state space than the initial specification. However, decomposition can result in component STGs that, in isolation, have so-called irreducible CSC conflicts (i.e. these components are no longer SI-synthesisable) even if the specification has none. A new approach is presented to avoid such conflicts by introducing internal communication between the components. So far, STG decomposition has been guided by the finest output partition, i.e. one output per component. However, this might not yield optimal circuit implementations. Efficient heuristics are presented that determine coarser partitions leading to improved circuits in terms of chip area. Correctness proofs are given for the new algorithms, and their implementations are incorporated into the decomposition tool DESIJ. The presented techniques are successfully applied to several benchmarks - including 'real-life' specifications arising in the context of control resynthesis - and delivered promising results.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Anonymising sparse datasets syntactically, with methods that rely on statistical significance, makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem, from both the privacy and the information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
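The idea of greedily searching for a small set of identifying attributes can be sketched in Python. The greedy criterion below (always pick the attribute that increases row distinctness the most) and the toy records are assumptions for illustration, not the paper's exact algorithm or data; the sketch only shows how a combination of attributes can become a unique identifier even when no single attribute is one.

```python
# Illustrative greedy search for a small attribute set whose value
# combination uniquely identifies every row. The selection criterion
# and the example records are invented for illustration.

def distinct_groups(rows, attrs):
    """Number of distinct value combinations the attribute set induces."""
    return len({tuple(r[a] for a in attrs) for r in rows})

def greedy_identifying_set(rows, attributes):
    chosen = []
    while distinct_groups(rows, chosen) < len(rows):
        # pick the attribute that increases distinctness the most
        best = max(attributes, key=lambda a: distinct_groups(rows, chosen + [a]))
        if distinct_groups(rows, chosen + [best]) == distinct_groups(rows, chosen):
            break  # no attribute helps further; rows contain duplicates
        chosen.append(best)
    return chosen

rows = [
    {"zip": "14482", "age": 34, "sex": "F"},
    {"zip": "14482", "age": 34, "sex": "M"},
    {"zip": "10115", "age": 41, "sex": "F"},
]
# no single attribute separates all three rows, but zip plus sex does
print(greedy_identifying_set(rows, ["zip", "age", "sex"]))  # prints ['zip', 'sex']
```

In the anonymisation setting described above, such identifying combinations are what must be eliminated or generalised to prevent re-identification while touching as few attributes as possible.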
(1) On the necessity of splitting computer science as practised so far into a foundational science and an engineering science (2) What is engineering culture? (3) The communication problem of computer scientists and their inability to perceive it (4) Peculiarities of software engineering in comparison with the classical engineering disciplines (5) Software engineering plans can be comprehensible even to non-experts (6) Principles for Planning Curricula in Software Engineering
Graphs are ubiquitous in computer science. Moreover, in various application fields, graphs are equipped with attributes to express additional information such as names of entities or weights of relationships. Due to the pervasiveness of attributed graphs, it is highly important to have the means to express properties on attributed graphs to strengthen modeling capabilities and to enable analysis. Firstly, we introduce a new logic of attributed graph properties, in which the graph part and the attribution part are neatly separated. The graph part is equivalent to first-order logic on graphs as introduced by Courcelle. It employs graph morphisms to allow the specification of complex graph patterns. The attribution part is added to this graph part by reverting to the symbolic approach to graph attribution, where attributes are represented symbolically by variables whose possible values are specified by a set of constraints making use of algebraic specifications. Secondly, we extend our refutationally complete tableau-based reasoning method as well as our symbolic model generation approach for graph properties to attributed graph properties. Because the new logic neatly separates the graph and attribution parts, and because the categorical constructions are employed only at a more abstract level, the graph part of the algorithms can remain essentially unchanged. For the integration of the attribution part into the algorithms, we use an oracle, allowing for the flexible adoption of different available SMT solvers in the actual implementation. Finally, our automated reasoning approach for attributed graph properties is implemented in the tool AutoGraph, which integrates, in particular, the SMT solver Z3 for the attribute part of the properties. We motivate and illustrate our work with a particular application scenario on graph database query validation.
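The oracle idea can be made concrete with a minimal Python sketch: the graph part of the algorithm hands the symbolic attribute constraints to a solver behind a fixed interface. Here a brute-force search over a small finite domain stands in for an SMT solver such as Z3; the interface, the domain bound, and the example constraints are assumptions for illustration only.

```python
# Sketch of an attribute-constraint "oracle": given variables and a
# conjunction of constraints, return a satisfying assignment or None.
# A brute-force search over a small domain stands in for an SMT solver.
from itertools import product

def oracle_satisfiable(variables, constraints, domain=range(-5, 6)):
    """Return a satisfying assignment for the conjunction, or None."""
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return env
    return None

# attribute constraints of a symbolic graph pattern: x > 0 and x + y == 3
model = oracle_satisfiable(
    ["x", "y"],
    [lambda e: e["x"] > 0, lambda e: e["x"] + e["y"] == 3],
)
print(model)  # prints {'x': 1, 'y': 2}
```

Keeping the solver behind such an interface is what allows the graph part of the reasoning algorithms to stay unchanged while different SMT back ends are swapped in.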
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, as it avoids the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and at greater complexity, they show some kind of behavioral equivalence or refinement between the source and target models of the transformation. Both kinds of behavior preservation verification goals have been presented with automatic tool support for the instance level, i.e. for a given source and target model specified by the model transformation. However, until now, no automatic verification approach has been available at the transformation level, i.e. for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new, sophisticated approach for the automatic verification of behavior preservation, captured by bisimulation or simulation, for model transformations specified by triple graph grammars with semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformation, and that the resulting checking problem can be addressed by our own invariant checker, even for a complex example in which a sequence chart is transformed into communicating automata. We further discuss the current limitations of invariant checking for graph transformation and motivate further lines of future work in this direction.
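The notion of bisimulation used as the behavior preservation criterion can be illustrated with a naive partition-refinement check on two tiny labelled transition systems. The transition systems and the simple refinement loop below are invented examples, not the report's invariant-checking technique; they only show what it means for two states to be behaviorally equivalent.

```python
# Naive partition refinement: two states are bisimilar iff they end up
# in the same block of the stable partition. Example systems are invented.

def bisimilar(states, trans, s0, t0):
    """trans: set of (state, label, state). Returns True iff s0 ~ t0."""
    blocks = [set(states)]  # start with one block containing all states
    changed = True
    while changed:
        changed = False

        def signature(s):
            # which blocks each label can reach from s
            return frozenset(
                (a, i) for (p, a, q) in trans if p == s
                for i, b in enumerate(blocks) if q in b
            )

        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) != len(blocks):
            changed = True  # some block was split; refine again
        blocks = new_blocks
    return any(s0 in b and t0 in b for b in blocks)

# s0 --a--> s1   versus   t0 --a--> t1, t0 --a--> t2 : bisimilar
states = {"s0", "s1", "t0", "t1", "t2"}
trans = {("s0", "a", "s1"), ("t0", "a", "t1"), ("t0", "a", "t2")}
print(bisimilar(states, trans, "s0", "t0"))  # prints True
```

Reducing such an equivalence to invariant checking, as the report does, avoids enumerating state spaces explicitly and thereby lifts the argument from single instances to all models of the transformation.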