Refine
Year of publication
Document Type
- Article (184)
- Monograph/Edited Volume (89)
- Doctoral Thesis (42)
- Other (30)
- Conference Proceeding (11)
- Preprint (4)
- Postprint (2)
- Review (2)
- Part of a Book (1)
Language
- English (365)
Keywords
- Cloud Computing (7)
- Hasso-Plattner-Institut (7)
- Datenintegration (6)
- Forschungskolleg (6)
- Hasso Plattner Institute (6)
- Klausurtagung (6)
- Modellierung (6)
- Service-oriented Systems Engineering (6)
- cloud computing (6)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- In-Memory Technologie (5)
- Multicore Architekturen (5)
- data profiling (5)
- machine learning (5)
- BPMN (4)
- COVID-19 (4)
- Geschäftsprozessmanagement (4)
- Model-Driven Engineering (4)
- Research School (4)
- Verifikation (4)
- Visualization (4)
- business process management (4)
- data integration (4)
- graph transformation (4)
- middleware (4)
- performance (4)
- process mining (4)
- virtual machines (4)
- 3D point clouds (3)
- Betriebssysteme (3)
- Data Integration (3)
- Graphtransformationen (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Modeling (3)
- Ph.D. Retreat (3)
- Ph.D. retreat (3)
- Privacy (3)
- Process Mining (3)
- Prozessmodellierung (3)
- SQL (3)
- Security (3)
- Tripel-Graph-Grammatik (3)
- Virtualisierung (3)
- Virtuelle Maschinen (3)
- cartographic design (3)
- design thinking (3)
- digital health (3)
- duplicate detection (3)
- heuristics (3)
- incremental graph pattern matching (3)
- model transformation (3)
- multicore architectures (3)
- operating systems (3)
- prediction (3)
- privacy (3)
- programming (3)
- research projects (3)
- run time analysis (3)
- security (3)
- self-healing (3)
- service-oriented systems engineering (3)
- similarity measures (3)
- theory (3)
- verification (3)
- 3D city models (2)
- 3D geovisualization (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithms (2)
- Aspektorientierte Softwareentwicklung (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- BIM (2)
- Bayesian networks (2)
- Big Data (2)
- Bitcoin (2)
- Blockchain (2)
- CSC (2)
- CSCW (2)
- Case management (2)
- Classification (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Competition (2)
- Compliance checking (2)
- Cyber-Physical Systems (2)
- Data Profiling (2)
- Data integration (2)
- Data profiling (2)
- Design Thinking (2)
- Discrimination Networks (2)
- EHR (2)
- Event processing (2)
- Evolution (2)
- Functional dependencies (2)
- Graphtransformationssysteme (2)
- HCI (2)
- In-Memory technology (2)
- Java (2)
- JavaScript (2)
- Kollaborationen (2)
- Laufzeitmodelle (2)
- Link-Entdeckung (2)
- Lively Kernel (2)
- Machine (2)
- Machine learning (2)
- Megamodell (2)
- Metadata Discovery (2)
- Middleware (2)
- Model Synchronization (2)
- Modell (2)
- Nested graph conditions (2)
- Performance (2)
- Point-based rendering (2)
- Process Modeling (2)
- Process modeling (2)
- RDF (2)
- Ressourcenoptimierung (2)
- Runtime analysis (2)
- SPARQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Service-Orientierte Architekturen (2)
- Sicherheit (2)
- Smalltalk (2)
- Structuring (2)
- SysML (2)
- Systemsoftware (2)
- Virtualization (2)
- Visualisierung (2)
- abstraction (2)
- adaptive Systeme (2)
- adaptive systems (2)
- analysis (2)
- big data services (2)
- clinical (2)
- cloud security (2)
- collaboration (2)
- complexity (2)
- confidentiality (2)
- data (2)
- data matching (2)
- data quality (2)
- data wrangling (2)
- databases (2)
- debugging (2)
- dependency discovery (2)
- dialysis (2)
- digital identity (2)
- discrimination networks (2)
- distributed systems (2)
- dynamic (2)
- dynamic programming (2)
- electronic health record (2)
- entity resolution (2)
- evaluation (2)
- fabrication (2)
- feedback loops (2)
- functional dependency (2)
- genetic programming (2)
- graph constraints (2)
- image-based representation (2)
- in-memory technology (2)
- law (2)
- missing data (2)
- model (2)
- model-driven engineering (2)
- modeling (2)
- modellgetriebene Entwicklung (2)
- multi-core (2)
- nested application conditions (2)
- nested graph conditions (2)
- non-photorealistic rendering (2)
- real-time rendering (2)
- record linkage (2)
- requirements engineering (2)
- research school (2)
- risk aversion (2)
- scalability (2)
- schema discovery (2)
- self-sovereign identity (2)
- service-oriented architecture (2)
- service-oriented systems (2)
- simulation (2)
- software engineering (2)
- software reference architecture (2)
- spatial data infrastructure (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- systems of systems (2)
- systems software (2)
- testing (2)
- virtualization (2)
- virtuelle Maschinen (2)
- visualization (2)
- "Big Data"-Dienste (1)
- 2.5D Treemaps (1)
- 3D (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Point Clouds (1)
- 3D Point clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D geovirtual environments (1)
- 3D information visualization (1)
- 3D printing (1)
- 3D semiotic model (1)
- 3D semiotics (1)
- 3D visualization (1)
- AKI (1)
- APX-hardness (1)
- Aaron Wildavsky (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accounting (1)
- Actor (1)
- Actor model (1)
- Address matching (1)
- Algorithmen (1)
- Analyse (1)
- Anfragepaare (1)
- Anisotroper Kuwahara Filter (1)
- Anomalien (1)
- Anomaly detection (1)
- Anti-patterns (1)
- Application (1)
- Approximation algorithms (1)
- Apriori (1)
- Architektur (1)
- Artificial neural networks (1)
- Aspect-oriented Programming (1)
- Association Rule Mining (1)
- Association rule mining (1)
- Asynchronous circuit (1)
- Attention span (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributed graph transformation (1)
- Attributed graphs (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Authentication (1)
- B2B process integration (1)
- BPM (1)
- BPMN-Q (1)
- Batchprozesse (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Behavior (1)
- Behavior equivalence (1)
- Behavioral querying (1)
- Behaviour Analysis (1)
- Behavioural Abstraction (1)
- Behavioural analysis (1)
- Benchmarking (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Biclustering (1)
- Bidirectional order dependencies (1)
- Bildverarbeitung (1)
- Billing (1)
- Biomedicine (1)
- Bisimulation (1)
- Blockchains (1)
- Bluetooth (1)
- Body sensor networks (1)
- Building Information Models (1)
- Business Process Modeling Notation (1)
- Business Process Models (1)
- Business process diagram (1)
- Business process management (1)
- Business process model (1)
- Business process modeling (1)
- Business processes (1)
- CCS Concepts (1)
- CEP (1)
- Cake cutting (1)
- Carrera Digital D132 (1)
- Causal Behavioural Profiles (1)
- Causality (1)
- Change Data Capture (1)
- Change Management (1)
- Change propagation (1)
- Charging (1)
- Choreographies (1)
- Chronic heart failure (1)
- Climate change (1)
- Close-Up (1)
- Cloud (1)
- Cloud Datenzentren (1)
- Cloud Native Applications (1)
- Cloud Storage Broker (1)
- Cloud access control and resource management (1)
- Cloud computing (1)
- Cloud-Security (1)
- CoExist (1)
- Coccinelle (1)
- Communication systems (1)
- Commute pattern (1)
- Commute process (1)
- Complexity theory (1)
- Compliance (1)
- Compliance measurement (1)
- Composition (1)
- Computational modeling (1)
- Computational photography (1)
- Computer (1)
- Computer crime (1)
- Concurrency (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Consistency perception (1)
- Constraints (1)
- Context-oriented Programming (1)
- Context-oriented programming (1)
- ContextJS (1)
- Contracts (1)
- Controller-Resynthese (1)
- Convolution (1)
- Coordinated and Multiple Views (1)
- CorMID (1)
- Correlation (1)
- Critical pairs (1)
- Crowd-Resourcing (1)
- Cryptography (1)
- Cultural theory (1)
- Currencies (1)
- Cyber-Physical-Systeme (1)
- Cyber-physical-systems (1)
- Data (1)
- Data Dependency (1)
- Data Mining (1)
- Data Modeling (1)
- Data Quality (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data exchange (1)
- Data mining (1)
- Data modeling (1)
- Data models (1)
- Data-centric (1)
- Database Cost Model (1)
- Databases (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenanalyse (1)
- Datenbank-Kostenmodell (1)
- Datenbanken (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodellierung (1)
- Datenobjekte (1)
- Datenqualität (1)
- Datenreinigung (1)
- Datenvertraulichkeit (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Deep learning (1)
- Delphi study (1)
- Delta preservation (1)
- Dependency discovery (1)
- Design (1)
- Detail plus Overview (1)
- Deurema Modellierungssprache (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digitale Whiteboards (1)
- Disambiguierung (1)
- Distributed (1)
- Distributed 3D geovisualization (1)
- Distributed computing (1)
- Distributed debugging (1)
- Distributed programming (1)
- DoS (1)
- Duplicate Detection (1)
- Duplikaterkennung (1)
- Duration prediction (1)
- Dynamic Data (1)
- Dynamic Data Structures (1)
- Dynamic Pricing (1)
- Dynamic Type System (1)
- Dynamic adaptation (1)
- Dynamic analysis (1)
- Dynamic pricing (1)
- Dynamic pricing and advertising (1)
- Dynamische Typ Systeme (1)
- E-Learning (1)
- E-commerce (1)
- EPA (1)
- Echtzeit (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Eingabegenauigkeit (1)
- Electrocardiography (1)
- Electronic prescription (1)
- Elektronische Patientenakte (1)
- Energieeffizienz (1)
- Energy-aware (1)
- Entity resolution (1)
- Entwicklungswerkzeuge (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfüllbarkeitsanalyse (1)
- Erkennen von Meta-Daten (1)
- Estimation-of-distribution algorithm (1)
- Event normalization (1)
- Events (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Evolutionary computation (1)
- Exception handling (1)
- Exclusiveness (1)
- Experimentation (1)
- Extract-Transform-Load (ETL) (1)
- Eye-tracking (1)
- FMC-QE (1)
- FRP (1)
- Facial mimicry (1)
- Fallstudie (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Feedback heuristics (1)
- Fehlende Daten (1)
- Fehlerbeseitigung (1)
- Fehlersuche (1)
- Finite horizon (1)
- First hitting time (1)
- Fitness level method (1)
- Fitness-distance correlation (1)
- Flexible Resource Manager (1)
- Flexible processes (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Foreign Keys (1)
- Foreign Keys Discovery (1)
- Formal Methods (1)
- Formale Verifikation (1)
- Functional Lenses (1)
- GPU (1)
- GPU-based Real-time Rendering (1)
- Gamification (1)
- Gator Netzwerk (1)
- Gator networks (1)
- Gebäudemodelle (1)
- Geländemodelle (1)
- Gender Inequality (1)
- Gene expression (1)
- General Earth and Planetary Sciences (1)
- Generalisierung (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geometry Draping (1)
- Geovisualization (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph queries (1)
- Graph repair (1)
- Graph rewriting (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-basierte Suche (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphtransformation (1)
- HENSHIN (1)
- HITS (1)
- HMM (1)
- Haptics (1)
- Hasso-Plattner-Institute (1)
- Hauptspeicher Technologie (1)
- Hauptspeicherdatenbank (1)
- Herodotos (1)
- HiGHmed (1)
- Hidden Markov models (1)
- History of pattern occurrences (1)
- Homomorphe Verschlüsselung (1)
- Hospitalisation (1)
- Human behaviour (1)
- IDS (1)
- IDS management (1)
- IFC (1)
- IMD (1)
- IMU (1)
- IOPS (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Identity management systems (1)
- Image (1)
- Image filtering (1)
- Image-based rendering (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- In-memory (1)
- Inclusion Dependency (1)
- Inclusion Dependency Discovery (1)
- Inclusion dependencies (1)
- Incremental Discovery (1)
- Incrementally Inclusion Dependencies Discovery (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Individuen (1)
- Indoor Models (1)
- Industries (1)
- Industry 4.0 (1)
- Industry Foundation Classes (1)
- Infinite State (1)
- Information Extraction (1)
- Information Systems (1)
- Information Visualization (1)
- Informationsextraktion (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inklusionsabhängigkeiten Entdeckung (1)
- Inkrementelle Graphmustersuche (1)
- Innovation (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Integrity Verification (1)
- Interaction (1)
- Interaction modeling (1)
- Interactive Rendering (1)
- Interactive Visualization (1)
- Interaktionsmodel (1)
- Interaktives Rendering (1)
- Internet applications (1)
- Internet of Things (1)
- Internetanwendungen (1)
- Interviews (1)
- Introspection (1)
- Intrusion detection (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Inventory holding costs (1)
- Inventory systems (1)
- JCop (1)
- JIT compilers (1)
- Kartografisches Design (1)
- Knowledge bases (1)
- Knowledge-intensive processes (1)
- Komplexität (1)
- Komplexitätsbewältigung (1)
- Komposition (1)
- Kontext (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LOD (1)
- LSTM (1)
- Label analysis (1)
- Lakes (1)
- Landmarken (1)
- Languages (1)
- Languages Model-driven engineering (1)
- Laser Cutten (1)
- Laser cutting (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learning (1)
- Learning behavior (1)
- Least privilege principle (1)
- Lecture video recording (1)
- Leistungsfähigkeit (1)
- Leistungsvorhersage (1)
- Level of abstraction (1)
- Licenses (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Live forensics (1)
- Live-Programmierung (1)
- Location-based services (1)
- Log conformance (1)
- Logiksynthese (1)
- M-adhesive categories (1)
- M-adhesive transformation systems (1)
- MDE Ansatz (1)
- MDE settings (1)
- MOOC (1)
- MOOCs (1)
- Management (1)
- Markov decision process (1)
- Markov decision process (1)
- Markov model (1)
- Mary Douglas (1)
- Massive Open Online Courses (1)
- Matroids (1)
- Measurement (1)
- Megamodel (1)
- Megamodels (1)
- Mehrkernsysteme (1)
- Memory management (1)
- Metadaten Entdeckung (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamaterials (1)
- Mind2 (1)
- Mixed workload (1)
- Mobile Application Development (1)
- Mobile device (1)
- Mobile sensing (1)
- Mobilgeräte (1)
- Model Consistency (1)
- Model Execution (1)
- Model Management (1)
- Model equivalence (1)
- Model generation (1)
- Model refinement (1)
- Model repair (1)
- Model synchronisation (1)
- Model transformation (1)
- Model verification (1)
- Model-driven (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Modular decomposition (1)
- Monitoring language (1)
- Morphic (1)
- Multi-Instanzen (1)
- Multi-objective optimization (1)
- Multi-perspective Views (1)
- Multicore architectures (1)
- Multimedia retrieval (1)
- Multiscale modeling (1)
- Muster (1)
- Musterabgleich (1)
- Mustererkennung (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NP-completeness (1)
- Natural language (1)
- Natural language processing (1)
- Nested Graph Conditions (1)
- Network graph (1)
- Network monitoring (1)
- Network topology (1)
- Neural Networks (1)
- Newspeak (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- Nigeria (1)
- Nutzungsinteresse (1)
- O (1)
- OLTP (1)
- Object Constraint Programming (1)
- Object Versioning (1)
- Object-Oriented Programming (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objektlebenszyklus-Synchronisation (1)
- Online Course (1)
- Online-Learning (1)
- Online-Lernen (1)
- Onlinekurs (1)
- Open implementations (1)
- Operational reporting (1)
- Optimal Control (1)
- Optimal stochastic and deterministic (1)
- Optionality (1)
- Order Relations (1)
- Order dependencies (1)
- Organisationsveränderung (1)
- Out-of-core (1)
- Outlier detection (1)
- Overview plus Detail (1)
- PPMI (parkinson's progression markers initiative) (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel programming (1)
- Parallelization (1)
- Pattern Matching (1)
- Patterns (1)
- Performance Prediction (1)
- Performance analysis (1)
- Personal fabrication (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petri net unfolding (1)
- Petrinetz (1)
- Plugs (1)
- Prediction (1)
- Price Cycles (1)
- Price collusion (1)
- Privilege separation concept (1)
- Probabilistische Modelle (1)
- Process (1)
- Process Enactment (1)
- Process Modelling (1)
- Process Monitoring (1)
- Process choreographies (1)
- Process compliance (1)
- Process mining (1)
- Process model (1)
- Process model alignment (1)
- Process model consistency (1)
- Process model repositories (1)
- Process model search (1)
- Processing strategies (1)
- Programmierung (1)
- Programming (1)
- Programming Environments (1)
- Programming Languages (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Property paths (1)
- Protocols (1)
- Prozess (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Präsentation (1)
- QRS detection (1)
- Quality of service (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Question answering (1)
- Queuing Theory (1)
- R Shiny (1)
- R package (1)
- RT_PREEMT patch (1)
- RT_PREEMT-Patch (1)
- Racket (1)
- Random graphs (1)
- Reaction Time (1)
- Real-time Rendering (1)
- Record and refinement (1)
- Record and replay (1)
- Record linkage (1)
- Reinforcement learning (1)
- Relational data (1)
- Remote patient management (1)
- Research Projects (1)
- Resource description framework (1)
- Resource management (1)
- Response Strategies (1)
- Ressourcenmanagement (1)
- Rete Netzwerk (1)
- Rete networks (1)
- Risk control (1)
- Robust optimization (1)
- Role-based access control (1)
- Root cause analysis (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime WCET Analysis (1)
- S-indd++ (1)
- SAP HANA (1)
- SCED (1)
- SOA Security Pattern (1)
- Safety Critical Systems (1)
- Sammlungsdatentypen (1)
- Satisfiability (1)
- Scalability (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Scientific Publication Indicators (1)
- Scope (1)
- Search Algorithms (1)
- Security Modelling (1)
- Security-as-a-Service (1)
- Self-Adaptive Software (1)
- Self-aware computing systems (1)
- Self-configuration (1)
- Semantics (1)
- Semantische Analyse (1)
- Sequential anomaly (1)
- Sequenzen von s/t-Pattern (1)
- Service composition (1)
- Service detection (1)
- Service orchestration (1)
- Service-Oriented (1)
- Service-Oriented Architecture (1)
- Service-oriented Architectures (1)
- Service-oriented computing (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systme (1)
- Sexism (1)
- Sicherheitsmodellierung (1)
- Signal-to-noise ratio (1)
- Signalflankengraph (SFG oder STG) (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simulation (1)
- Skalierbarkeit (1)
- SoaML (1)
- Social environment (1)
- Software (1)
- Software Engineering (1)
- Softwareanalyse (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Sozialen Medien (1)
- Speicheroptimierungen (1)
- Sprachspezifikation (1)
- Squeak (1)
- Standards (1)
- Stilisierung (1)
- Stochastic Petri nets (1)
- Structural Decomposition (1)
- Structured modeling (1)
- Strukturierung (1)
- Studie (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- System architecture (1)
- System of Systems (1)
- Systeme von Systemen (1)
- Systems of Systems (1)
- TRIPOD (1)
- Tableau method (1)
- Tableaumethode (1)
- Task analysis (1)
- Tele-Lab (1)
- Tele-Teaching (1)
- Telemonitoring (1)
- Temporal Logic (1)
- Temporal orientation (1)
- Temporallogik (1)
- Terrain Visualization (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Texturing (1)
- Theory (1)
- Three-dimensional displays (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Tool survey (1)
- Trace inclusion (1)
- Traceability (1)
- Tracking (1)
- Transformation (1)
- Transformation tool contest (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Travis CI (1)
- Treemaps (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Two dimensional displays (1)
- Ubiquitous computing (1)
- Unbegrenzter Zustandsraum (1)
- Unified cloud model (1)
- Unique column combination (1)
- Unique column combinations (1)
- Unveränderlichkeit (1)
- Usage Interest (1)
- VIL (1)
- Verbindungsnetzwerke (1)
- Verhalten (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Video OCR (1)
- Video classification (1)
- Video indexing (1)
- Videoanalyse (1)
- Videometadaten (1)
- View navigation (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual 3D city model (1)
- Virtual 3D scenes (1)
- Virtual Machine (1)
- Virtual camera control (1)
- Virtual machines (1)
- Visual modeling (1)
- Vocabulary (1)
- Vorhersage (1)
- Vulnerability Assessment (1)
- W[3]-completeness (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Water Science and Technology (1)
- Web Sites (1)
- Web applications (1)
- Web browsers (1)
- Web navigational language (1)
- Web of Data (1)
- Web safeness (1)
- Web-Anwendungen (1)
- Web-based rendering (1)
- Webseite (1)
- Well-structuredness (1)
- Wikipedia (1)
- Wohlstrukturiertheit (1)
- Zeitbehaftete Petri Netze (1)
- Zugriffskontrolle (1)
- abdominal imaging (1)
- abundance (1)
- access control (1)
- accessibility (1)
- accounting (1)
- action recognition (1)
- activity instance state propagation (1)
- activity recognition (1)
- acute renal failure (1)
- adaptation rules (1)
- address normalization (1)
- address parsing (1)
- adolescent (1)
- adoption (1)
- affective interfaces (1)
- algorithm (EDA) (1)
- algorithmic (1)
- anisotropic Kuwahara filter (1)
- anniversary (1)
- annotation (1)
- anomalies (1)
- app (1)
- application (1)
- approximation (1)
- apriori (1)
- architecture (1)
- architecture recovery (1)
- architecture-based adaptation (1)
- architectures (1)
- argumentation research (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- association rule mining (1)
- asynchronous circuit (1)
- atmospheric pressure chemical ionization (1)
- attack graph (1)
- attacks (1)
- attention mechanism (1)
- attribute assurance (1)
- ausführbare Semantiken (1)
- authentication (1)
- authentication protocol (1)
- back pain (1)
- back-in-time (1)
- batch processing (1)
- battery (1)
- battery-depletion attack (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalenc (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour compatibility (1)
- behaviour equivalence (1)
- behavioural models (1)
- benchmark testing (1)
- beschreibende Feldstudie (1)
- bibliometrics (1)
- bidirectional shortest path (1)
- big data (1)
- billing (1)
- biomarker detection (1)
- biomechanics (1)
- bisimulation (1)
- bitcoin (1)
- blind (1)
- bloat control (1)
- bpm (1)
- bridge management systems (1)
- bug tracking (1)
- building information modeling (1)
- building models (1)
- business process architecture (1)
- business process model abstraction (1)
- business processes (1)
- cancer therapy (1)
- cartography-oriented visualization (1)
- case study (1)
- center dot Computing (1)
- change blindness (1)
- change management (1)
- changeability (1)
- charging (1)
- child (1)
- chronic dialysis (1)
- cleansing (1)
- clinical nephrology (1)
- cloud (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster resource management (1)
- clustering (1)
- co-location (1)
- cognition (1)
- cognitive (1)
- coherence-enhancing filtering (1)
- cohort (1)
- collaborations (1)
- collaborative tagging (1)
- collection types (1)
- color palettes (1)
- communication network (1)
- complex emotions (1)
- complexity dichotomy (1)
- comprehension (1)
- computational design (1)
- computer science (1)
- computer vision (1)
- conceptualization (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditional functional dependencies (1)
- conditions (1)
- conflicts and dependencies in (1)
- conformance analysis (1)
- conformance checking (1)
- consistency (1)
- context awareness (1)
- continuous integration (1)
- continuous testing (1)
- contract (1)
- contracts (1)
- control (1)
- control resynthesis (1)
- controlled experiment (1)
- coronavirus 2019 (1)
- corporate takeovers (1)
- cost-effectiveness (1)
- cost-utility analysis (1)
- creativity and innovation management (1)
- critical pairs (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cscw (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-physical systems (1)
- damage detection (1)
- data analysis (1)
- data center management (1)
- data cleaning (1)
- data cleansing (1)
- data correctness checking (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data migration (1)
- data objects (1)
- data preparation (1)
- data states (1)
- data transformation (1)
- data-driven demand (1)
- database systems (1)
- database technology (1)
- datasets (1)
- deadline propagation (1)
- deduplication (1)
- deep learning (1)
- delay propagation (1)
- denial-of-service attack (1)
- dental caries classification (1)
- dependable computing (1)
- dependencies (1)
- dependency (1)
- design (1)
- deterministic properties (1)
- deterministic random walk (1)
- deurema modeling language (1)
- development tools (1)
- diabetes (1)
- difference of Gaussians (1)
- differential (1)
- differential privacy (1)
- diffusion (1)
- digital health app (1)
- digital interventions (1)
- digital startup (1)
- digital therapy (1)
- digital whiteboard (1)
- direct manipulation (1)
- direkte Manipulation (1)
- distributed computing (1)
- distributed data-parallel processing (1)
- distributed ledger technology (1)
- distribution algorithm (1)
- dynamic typing (1)
- dynamic consolidation (1)
- dynamic pricing (1)
- dynamic programming languages (1)
- dynamic reconfiguration (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- e-learning (1)
- e-lecture (1)
- eHealth (1)
- ePA (1)
- ecological momentary assessment (1)
- ecosystems (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- electronic health records (1)
- electronic mail (1)
- embedded systems (1)
- embedding (1)
- emotional (1)
- empathy (1)
- empirical studies (1)
- empirische Studien (1)
- end-stage kidney disease (1)
- endogenous (1)
- energy efficiency (1)
- engine (1)
- engineering (1)
- enrichment calculation (1)
- entity alignment (1)
- enumeration complexity (1)
- epistasis (1)
- erfahrbare Medien (1)
- estimation-of-distribution (1)
- estimation-of-distribution algorithm (1)
- ethics (1)
- event abstraction (1)
- event log (1)
- events (1)
- evolution (1)
- evolution in MDE (1)
- exact exponential-time algorithms (1)
- executable semantics (1)
- experience report (1)
- experimental design (1)
- experiments (1)
- expertise (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- exploratory programming (1)
- expression (1)
- external knowledge bases (1)
- factual (1)
- failure model (1)
- failure profile (1)
- failure profile model (1)
- fair division (1)
- feature selection (1)
- feedback loop modeling (1)
- fehlende Daten (1)
- flow-based bilateral filter (1)
- flux (1)
- focus plus context visualization (1)
- folksonomy (1)
- force-feedback (1)
- forensics (1)
- formal framework (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- fully concurrent bisimulation (1)
- functional dependencies (1)
- functional languages (1)
- functional lenses (1)
- functional programming (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- game theory (1)
- gaming (1)
- ganzheitlich (1)
- gene (1)
- gene selection (1)
- generalization (1)
- generative multi-discriminative networks (1)
- genetic algorithms (1)
- geocoding (1)
- geographic information systems (1)
- geospatial artificial intelligence (1)
- geospatial data (1)
- geospatial digital twins (1)
- gesture (1)
- grammar-based compression (1)
- graph clustering (1)
- graph databases (1)
- graph languages (1)
- graph pattern matching (1)
- graph queries (1)
- graph replacement categories (1)
- graph transformation systems (1)
- graph transformations (1)
- health apps (1)
- health information privacy concern (1)
- health personnel (1)
- healthcare (1)
- heuristic algorithms (1)
- history (1)
- holistic (1)
- home-based studies (1)
- homomorphic encryption (1)
- hospital (1)
- human computer interaction (1)
- human immunodeficiency virus (1)
- hybrid graph-transformation-systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyperbolic geometry (1)
- identity broker (1)
- identity management (1)
- image captioning (1)
- image processing (1)
- imbalanced learning (1)
- immutable values (1)
- implantable medical device (1)
- implants (1)
- in-memory database (1)
- inattentional blindness (1)
- inclusion (1)
- inclusion dependency (1)
- index (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- informatics (1)
- information storage and (1)
- inkrementelle Graphmustersuche (1)
- inkrementelles Graph Pattern Matching (1)
- innovation (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- interaction (1)
- interactive editing (1)
- interactive simulation (1)
- interconnect (1)
- interface (1)
- interpretable machine learning (1)
- invariant checking (1)
- invasive aspects (1)
- job (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant checking (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktives Invariant-Checking (1)
- key discovery (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge management (1)
- kontinuierliche Integration (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- künstliche Intelligenz (1)
- label-free (1)
- landmarks (1)
- language specification (1)
- languages (1)
- laser cutting (1)
- layered architecture (1)
- lazy transformation (1)
- leadership (1)
- lean startup approach (1)
- learning (1)
- link discovery (1)
- linked data (1)
- live programming (1)
- local confluence (1)
- location-based (1)
- logic (1)
- logic synthesis (1)
- low back pain (1)
- low-code development approaches (1)
- lower bound (1)
- mHealth (1)
- main memory computing (1)
- management (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maschinelles Lernen (1)
- mass isotopologue distribution (1)
- matching dependencies (1)
- maximal structuring (1)
- mean-variance optimization (1)
- medical (1)
- medical identity theft (1)
- medical malpractice (1)
- mehrdimensionale Belangtrennung (1)
- memory optimization (1)
- memory-based clustering (1)
- memory-based correlation (1)
- memory-based databases (1)
- mental disorders (1)
- mental health (1)
- metadata discovery (1)
- metadata quality (1)
- methodologie (1)
- metric learning (1)
- mixed-methods (1)
- mobile (1)
- mobile application (1)
- mobile devices (1)
- mobile health (1)
- mobile phone (1)
- model generation (1)
- model interpreter (1)
- model verification (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modeling language (1)
- modellgetriebene Softwareentwicklung (1)
- modelling (1)
- models at runtime (1)
- modular counting (1)
- modularity (1)
- molecular tumor board (1)
- monitoring (1)
- morphic (1)
- mortality (1)
- motion analysis (1)
- motion capture (1)
- motivations (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-perspective visualization (1)
- multi-product pricing (1)
- multilevel systems (1)
- multimodal representations (1)
- multimodal sensing (1)
- multiview classification (1)
- mutation (1)
- multi-task learning (1)
- network creation games (1)
- networks (1)
- neural (1)
- non-repudiation (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- one-time password (1)
- openHPI (1)
- organizational change (1)
- organizations (1)
- orthopedic (1)
- orthopedic; (1)
- orts-basiert (1)
- ownership (1)
- pain management (1)
- panorama (1)
- parallel (1)
- parallel computing (1)
- paralleles Rechnen (1)
- parameterized complexity (1)
- parkinson's disease (1)
- parsimonious reduction (1)
- partial application conditions (1)
- partielle Anwendungsbedingungen (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal electronic health records (1)
- personal satisfaction (1)
- personalized medicine (1)
- petri net (1)
- phishing (1)
- physical activity assessment (1)
- point clouds (1)
- portability (1)
- power-law (1)
- prefetching (1)
- presentation (1)
- prevention (1)
- price of anarchy (1)
- pricing (1)
- prior knowledge (1)
- probabilistic (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process instance (1)
- process model search (1)
- process modeling (1)
- process refinement (1)
- process scheduling (1)
- process-aware digital twin cockpit (1)
- process-awareness (1)
- processes (1)
- processing (1)
- processor hardware (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming language (1)
- programs (1)
- proportional division (1)
- proteomics (1)
- protocols (1)
- prototypes (1)
- public health medicine (1)
- quantitative analysis (1)
- query matching (1)
- query optimisation (1)
- query rewriting (1)
- querying (1)
- random I (1)
- random forest (1)
- random graphs (1)
- ranking (1)
- rapid prototyping (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time (1)
- real-time systems (1)
- recursive tuning (1)
- reflection (1)
- reinforcement learning (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- remodularization (1)
- remote collaboration (1)
- remote monitoring (1)
- representation learning (1)
- requirements (1)
- research challenges (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- restoration (1)
- restricted edges (1)
- retrieval (1)
- reusable aspects (1)
- reuse (1)
- reward (1)
- robustness (1)
- rotor-router model (1)
- runtime adaptations (1)
- runtime analysis (1)
- runtime behavior (1)
- runtime models (1)
- s/t-pattern sequences (1)
- satisfiabilitiy solving (1)
- scale-free networks (1)
- school (environment) (1)
- science mapping (1)
- search plan generation (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- self-adaptive software (1)
- self-driving (1)
- self-learning scheduler (1)
- self-supervised learning (1)
- semantic (1)
- semantic analysis (1)
- semantics preservation (1)
- sensor (1)
- sensor data (1)
- separation of concerns (1)
- service-oriented (1)
- severe acute respiratory (1)
- signal transition graph (1)
- similarity (1)
- similarity learning (1)
- simulator (1)
- single vertex discrepancy (1)
- single-case experimental design (1)
- small files (1)
- smallest grammar problem (1)
- smalltalk (1)
- smart card (1)
- sociology (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software tests (1)
- software visualization (1)
- software/instrumentation (1)
- spamming (1)
- speed independence (1)
- speed independent (1)
- stable matchings (1)
- staging (1)
- standards (1)
- static analysis (1)
- statische Analyse (1)
- statistics (1)
- statutes and laws (1)
- steganography (1)
- straight-line (1)
- structured process model (1)
- study (1)
- style description languages (1)
- stylization (1)
- substitution effects (1)
- supervised machine learning (1)
- surveillance (1)
- symbolic graph transformation (1)
- synchronization (1)
- syndrome coronavirus 2 (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tableau method (1)
- tangible media (1)
- teamwork (1)
- technology adoption (1)
- technology pivot (1)
- tele-TASK (1)
- terrain models (1)
- test-driven fault navigation (1)
- threshold cryptography (1)
- tools (1)
- topics (1)
- tort law (1)
- touch input (1)
- traceability (1)
- tracing (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- transversal hypergraph (1)
- tree conjecture (1)
- triple graph grammars (1)
- trust (1)
- trust model (1)
- tuple spaces (1)
- typed graph transformation systems (1)
- typography (1)
- unequal shares (1)
- unique (1)
- unsupervised methods (1)
- use-cases (1)
- user interaction (1)
- utility (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- versioning (1)
- verteilte Datenbanken (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual 3D city models (1)
- virtual groups (1)
- virtual reality (1)
- virtualisierte IT-Infrastruktur (1)
- virtuelle 3D-Stadtmodelle (1)
- visually impaired (1)
- vulnerabilities (1)
- wearable movement sensor (1)
- web application (1)
- web-applications (1)
- weight (1)
- word clouds (1)
- word sense disambiguation (1)
- workflow (1)
- workload prediction (1)
- zero-power defense (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (365)
Mise-Unseen
(2019)
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen that unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable by combining gaze with common masking techniques.
Editorial
(2019)
Ubiquitous computing has proven its relevance and efficiency in improving the user experience across a myriad of situations. It is now the ineluctable solution to keep pace with the ever-changing environments in which current systems operate. Despite the achievements of ubiquitous computing, this discipline is still overlooked in business process management. This is surprising, since many of today’s challenges in this domain can be addressed by methods and techniques from ubiquitous computing, for instance user context and dynamic aspects of resource locations. This paper takes a first step to integrate methods and techniques from ubiquitous computing in business process management. To do so, we propose discovering commute patterns via process mining. Through our proposition, we can deduce the users’ significant locations, routes, travel times and travel modes. This information can be a stepping-stone toward helping the business process management community embrace the latest achievements in ubiquitous computing, mainly in location-based services. To corroborate our claims, a user study was conducted. The significant places, routes, travel modes and commuting times of our test subjects were inferred with high accuracies. All in all, ubiquitous computing can enrich processes with new capabilities that go beyond what has been established in business process management so far.
Interactive Close-Up Rendering for Detail plus Overview Visualization of 3D Digital Terrain Models
(2019)
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data varying with respect to geometrical scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real-time.
The availability of detailed virtual 3D building models, including representations of indoor elements, enables a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically driven configuration in the context of 3D indoor models.
A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or 3D digital terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
Kyub
(2019)
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint "boxel", a structure we found to be capable of withstanding over 500kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub's various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. In our user study, non-engineers rated the learnability of kyub 6.1/7.
In this paper, we establish the underlying foundations of mechanisms that are composed of cell structures-known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to unobvious and non-linear behavior of the mechanism, which impedes user design. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints highly reduces the search space of possible cell arrangements.
Mobile sensing technology allows us to investigate human behaviour on a daily basis. In this study, we examined temporal orientation, which refers to the capacity of thinking or talking about personal events in the past and future. We utilise the mksense platform, which allows us to use the experience-sampling method. Individuals' thoughts and their relationship with smartphones' Bluetooth data are analysed to understand in which contexts people are influenced by social environments, such as the people they spend the most time with. As an exploratory study, we analyse the influence of social conditions through a collection of Bluetooth data and survey information from participants' smartphones. Preliminary results show that people are likely to focus on past events when interacting with close-related people, and to focus on future planning when interacting with strangers. Similarly, people experience present temporal orientation when accompanied by known people. We believe that these findings are linked to emotions since, in its most basic state, emotion is a state of physiological arousal combined with an appropriate cognition. In this contribution, we envision a smartphone application for automatically inferring human emotions based on users' temporal orientation using Bluetooth sensors. We briefly elaborate on the factors influencing temporal orientation episodes and conclude with a discussion and lessons learned.
Currently, we can observe a transformation of our technical world into a networked one, in which, besides embedded systems interacting with the physical world, the interconnection of these nodes in the cyber world becomes a reality. In parallel, there is nowadays a strong trend to employ artificial intelligence techniques, in particular machine learning, to make software behave smartly. Often, cyber-physical systems must be self-adaptive at the level of the individual system to operate as elements in open, dynamic, and deviating overall structures and to adapt to open and dynamic contexts while being developed, operated, evolved, and governed independently.
In this presentation, we will first discuss the envisioned future scenarios for cyber-physical systems with an emphasis on the synergies networking can offer, and then characterize the challenges that result for the design, production, and operation of these systems. We will then discuss to what extent our current capabilities, in particular concerning software engineering, match these challenges, and where substantial improvements for software engineering are crucial. In today's software engineering for embedded systems, models are used to plan systems upfront to maximize envisioned properties on the one hand and minimize cost on the other hand. When applying the same ideas to software for smart cyber-physical systems, it soon turned out that these systems often exhibit more subtle links between the involved models and the requirements, users, and environment. Self-adaptation and runtime models have been advocated as concepts to cover the demands that result from these subtler links. Lately, both trends have been brought together more thoroughly by the notion of self-aware computing systems. We will review the underlying causes, discuss some of our work in this direction, and outline related open challenges and potential for future approaches to software engineering for smart cyber-physical systems.
Selfish Network Creation focuses on modeling real world networks from a game-theoretic point of view. One of the classic models by Fabrikant et al. (2003) is the network creation game, where agents correspond to nodes in a network which buy incident edges for the price of alpha per edge to minimize their total distance to all other nodes. The model is well-studied but still has intriguing open problems. The most famous conjectures state that the price of anarchy is constant for all alpha and that for alpha >= n all equilibrium networks are trees. We introduce a novel technique for analyzing stable networks for high edge-price alpha and employ it to improve on the best known bound for the latter conjecture. In particular we show that for alpha > 4n - 13 all equilibrium networks must be trees, which implies a constant price of anarchy for this range of alpha. Moreover, we also improve the constant upper bound on the price of anarchy for equilibrium trees.
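The cost an agent minimizes in this model can be sketched in a few lines: alpha per bought edge plus the summed graph distances to all other nodes. The graph encoding, the `owner` map, and the example network below are illustrative assumptions, not material from the paper:

```python
from collections import deque

def agent_cost(n_nodes, edges, owner, agent, alpha):
    """Cost of one agent in a network creation game: alpha per edge the
    agent bought, plus the sum of graph distances to all other nodes."""
    adj = {v: set() for v in range(n_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # BFS distances from the agent's node
    dist = {agent: 0}
    q = deque([agent])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    bought = sum(1 for e in edges if owner[e] == agent)
    return alpha * bought + sum(dist.values())
```

In an equilibrium (stable) network, no agent can lower this cost by buying, swapping, or deleting its own edges; the tree conjecture states that for large alpha all such networks are trees.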
Duplicate detection algorithms produce clusters of database records, each cluster representing a single real-world entity. As most of these algorithms use pairwise comparisons, the resulting (transitive) clusters can be inconsistent: Not all records within a cluster are sufficiently similar to be classified as duplicates. Thus, one of many subsequent clustering algorithms can further improve the result. We explain in detail, compare, and evaluate many of these algorithms and introduce three new clustering algorithms in the specific context of duplicate detection. Two of our three new algorithms use the structure of the input graph to create consistent clusters. Our third algorithm, like many other clustering algorithms, focuses on the edge weights instead. For evaluation, in contrast to related work, we experiment on true real-world datasets and additionally examine in great detail various pair-selection strategies used in practice. While no overall winner emerges, we are able to identify the best approaches for different situations. In scenarios with larger clusters, our proposed algorithm, Extended Maximum Clique Clustering (EMCC), and Markov Clustering show the best results. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
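To illustrate why pairwise decisions need a subsequent clustering step, the naive baseline is the transitive closure of the pairwise duplicate decisions, sketched below as plain union-find (the record IDs are made up). It merges everything connected by some chain of decisions, no matter how dissimilar the chain's endpoints are, which is exactly the inconsistency the paper's clustering algorithms address:

```python
from collections import defaultdict

def transitive_clusters(pairs):
    """Union-find over pairwise duplicate decisions: the naive
    transitive-closure clustering that subsequent algorithms refine."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in pairs:
        union(a, b)

    clusters = defaultdict(set)
    for x in parent:
        clusters[find(x)].add(x)
    return sorted(sorted(c) for c in clusters.values())
```

Here r1 and r3 end up in one cluster although they were never directly compared; graph-structure-based algorithms such as EMCC would re-examine such clusters.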
3D point cloud technology facilitates the automated and highly detailed acquisition of real-world environments such as assets, sites, and countries. We present a web-based system for the interactive exploration and inspection of arbitrarily large 3D point clouds. Our approach is able to render 3D point clouds with billions of points using spatial data structures and level-of-detail representations. Point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering, e.g., based on semantics. A set of interaction techniques allows users to collaboratively work with the data (e.g., measuring distances and annotating). Additional value is provided by the system’s ability to display additional, context-providing geodata alongside 3D point clouds and to integrate processing and analysis operations. We have evaluated the presented techniques in case studies with different data sets from aerial, mobile, and terrestrial acquisition with up to 120 billion points to show their practicality and feasibility.
SpringFit
(2019)
Joints are crucial to laser cutting as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to changes in stiffness, as they occur when switching to a different material. We present a software tool called springFit that resolves this problem by replacing the problematic press fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used springFit to convert 14 models downloaded from the web.
Using Hidden Markov Models for the accurate linguistic analysis of process model activity labels
(2019)
Many process model analysis techniques rely on the accurate analysis of the natural language contents captured in the models’ activity labels. Since these labels are typically short and diverse in terms of their grammatical style, standard natural language processing tools are not suitable to analyze them. While a dedicated technique for the analysis of process model activity labels was proposed in the past, it suffers from considerable limitations. First of all, its performance varies greatly among data sets with different characteristics and it cannot handle uncommon grammatical styles. What is more, adapting the technique requires in-depth domain knowledge. We use this paper to propose a machine learning-based technique for activity label analysis that overcomes the issues associated with this rule-based state of the art. Our technique conceptualizes activity label analysis as a tagging task based on a Hidden Markov Model. By doing so, the analysis of activity labels no longer requires the manual specification of rules. An evaluation using a collection of 15,000 activity labels demonstrates that our machine learning-based technique outperforms the state of the art in all aspects.
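The tagging task described above can be sketched as Viterbi decoding over a Hidden Markov Model. The tag set ("A" for action, "O" for business object), the toy probabilities, and the example label in the usage note are hypothetical, not the paper's trained model:

```python
def viterbi(words, states, start_p, trans_p, emit_p):
    """Most likely tag sequence for a short activity label under an HMM.
    Unseen words get a small smoothing probability."""
    # V[i][s] = (best probability, best tag path) ending in state s at word i
    V = [{s: (start_p[s] * emit_p[s].get(words[0], 1e-6), [s]) for s in states}]
    for w in words[1:]:
        row = {}
        for s in states:
            row[s] = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s].get(w, 1e-6),
                 V[-1][prev][1] + [s])
                for prev in states)
        V.append(row)
    return max(V[-1].values())[1]
```

With transition and emission probabilities estimated from tagged labels, a label like "approve invoice" would be decoded as action followed by business object, with no hand-written rules involved.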
Evaluating the performance of self-adaptive systems (SAS) is challenging due to their complexity and interaction with the often highly dynamic environment. In the context of self-healing systems (SHS), employing simulators has been shown to be the most dominant means for performance evaluation. Simulating a SHS also requires realistic fault injection scenarios. We study the state of the practice for evaluating the performance of SHS by means of a systematic literature review. We present the current practice and point out that a more thorough and careful treatment in evaluating the performance of SHS is required.
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data’s full potential. In this paper, we present a demo which allows experiencing this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. The VM's OS is started by the user via interfaces provided by the cloud provider in a public or private cloud. In a peer-to-peer cloud, the VM is started by the host admin. After the VM is running, the user can get remote access to the VM to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image or even replace it with a similar one to obtain sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started using the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Anonymising sparse datasets with syntactic methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets, we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprised of 109 attributes, demonstrate the effectiveness of our approach.
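The idea of finding a minimal set of identifying attributes can be sketched as a greedy shrink: start from all attributes and drop any attribute whose removal still leaves every record uniquely identifiable. This is a simplification of the paper's search over partial unique attribute combinations, and the attribute names and toy records are invented for illustration:

```python
def minimal_identifying_set(rows, attributes):
    """Greedily shrink the attribute set while its projection still
    uniquely identifies every row. Sketch only; not the paper's exact
    algorithm for maximal partial unique attribute combinations."""
    def is_unique(attrs):
        seen = {tuple(r[a] for a in attrs) for r in rows}
        return len(seen) == len(rows)

    keep = list(attributes)
    for a in attributes:
        trial = [x for x in keep if x != a]
        if trial and is_unique(trial):
            keep = trial  # a was redundant for identification
    return keep
```

Attributes outside the returned set cannot re-identify a record on their own, so anonymisation effort can concentrate on the identifying ones.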
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself does not include any defenses against denial-of-sleep attacks, which are attacks that aim at depriving victim devices of entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send plenty of CoAP messages to it, thereby forcing the IoT device to expend energy for receiving and processing these CoAP messages. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en-route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but can also perform a wide range of further checks. Second, we propose a couple of such further checks, which, as compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
LoANs
(2019)
Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. The reason for this success is mainly grounded in the availability of large scale, fully annotated datasets, but the creation of such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object, the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we are using no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset, based on a few videos (e.g. downloaded from YouTube) and artificially generated data.
Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise usage by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper, we propose a unified cloud access control model that provides an abstraction of CSPs' services for centralized and automated cloud resource and access control management in multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources by assigning the necessary privileges and access control lists for cloud resources and CSB stakeholders, respectively, following the privilege separation concept and the least privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB), and the evaluation results show that it provides system- and cloud-level security services for CfB and centralized resource and access control management in multiple CSPs.
Many universities record the lectures being held in their facilities to preserve knowledge and to make it available to their students and, at least for some universities and classes, to the broad public. The way with the least effort is to record the whole lecture, which in our case usually is 90 min long. This saves the labor and time of cutting and rearranging lectures scenes to provide short learning videos as known from Massive Open Online Courses (MOOCs), etc. Many lecturers fear that recording their lectures and providing them via an online platform might lead to less participation in the actual lecture. Also, many teachers fear that the lecture recordings are not used with the same focus and dedication as lectures in a lecture hall. In this work, we show that in our experience, full lectures have an average watching duration of just a few minutes and explain the reasons for that and why, in most cases, teachers do not have to worry about that.
High-throughput RNA sequencing produces large gene expression datasets whose analysis leads to a better understanding of diseases like cancer. The nature of RNA-Seq data poses challenges to its analysis in terms of high dimensionality, noise, and the complexity of the underlying biological processes. Researchers apply traditional machine learning approaches, e.g., hierarchical clustering, to analyze this data. Until the validation of the results, however, the analysis is based on the provided data only and completely misses the biological context, even though gene expression data follows particular patterns determined by the underlying biological processes. In our research, we aim to integrate the available biological knowledge earlier in the analysis process. We want to adapt state-of-the-art data mining algorithms to consider the biological context in their computations and to deliver meaningful results for researchers.
From face to face
(2019)
Despite advances in the conceptualisation of facial mimicry, its role in the processing of social information is a matter of debate. In the present study, we investigated the relationship between mimicry and cognitive and emotional empathy. To assess mimicry, facial electromyography was recorded for 70 participants while they completed the Multifaceted Empathy Test, which presents complex context-embedded emotional expressions. As predicted, inter-individual differences in emotional and cognitive empathy were associated with the level of facial mimicry. For positive emotions, the intensity of the mimicry response scaled with the level of state emotional empathy. Mimicry was stronger for the emotional empathy task compared to the cognitive empathy task. The specific empathy condition could be successfully detected from facial muscle activity at the level of single individuals using machine learning techniques. These results support the view that mimicry occurs, depending on the social context, as a tool for affiliation, and that it is involved in cognitive as well as emotional empathy.
We elaborate on the possibilities and needs to integrate design thinking into requirements engineering, drawing from our research and project experiences. We suggest three approaches for tailoring and integrating design thinking and requirements engineering with complementary synergies and point at open challenges for research and practice.
Applications with different characteristics in the cloud may have different resource preferences. However, traditional resource allocation and scheduling strategies rarely take the characteristics of applications into account. I/O-intensive applications are a typical application type, and frequent I/O accesses, especially small files randomly accessing the disk, may lead to an inefficient use of resources and reduce the quality of service (QoS) of applications. We therefore propose a weight allocation strategy based on the available resources that a physical server can provide as well as the characteristics of the applications. Using the obtained weights, we present a resource allocation and scheduling strategy based on the specific application characteristics in the data center. Extensive experiments show that the strategy is effective and can guarantee a high number of I/O operations per second (IOPS) in a cloud data center with high QoS. Additionally, the strategy can efficiently improve the utilization of the disk and the resources of the data center without affecting the service quality of applications.
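A weight-based placement decision of the general kind described above could, under simplifying assumptions, be sketched as follows. The resource attributes, weight values, and server names are hypothetical illustrations, not the paper's actual formula or data.

```python
# Illustrative sketch: score each physical server by its free capacity
# weighted by the application's resource preferences, then place the
# application on the best-scoring server. An I/O-intensive application
# weights free IOPS capacity heavily (all numbers are made up).

def placement_score(free_resources, app_weights):
    """Weighted sum of a server's free capacity fractions per resource."""
    return sum(app_weights[r] * free_resources.get(r, 0.0)
               for r in app_weights)

def choose_server(servers, app_weights):
    """Pick the server with the highest weighted free capacity."""
    return max(servers, key=lambda name: placement_score(servers[name], app_weights))

# Free capacity per server as fractions of total (hypothetical values).
servers = {
    "host-a": {"cpu": 0.6, "mem": 0.5, "iops": 0.2},
    "host-b": {"cpu": 0.3, "mem": 0.4, "iops": 0.9},
}
# An I/O-intensive application cares mostly about available IOPS.
io_heavy = {"cpu": 0.1, "mem": 0.1, "iops": 0.8}
```

With these numbers, `host-b` wins despite having less free CPU and memory, because the weights steer the decision toward the resource the application actually stresses.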
Background Heart failure (HF) is a complex, chronic condition that is associated with debilitating symptoms, all of which necessitate close follow-up by health care providers. Lack of disease monitoring may result in increased mortality and more frequent hospital readmissions for decompensated HF. Remote patient management (RPM) in this patient population may help to detect early signs and symptoms of cardiac decompensation, thus enabling a prompt initiation of the appropriate treatment and care before a manifestation of HF decompensation. Objective The objective of the present article is to describe the design of a new trial investigating the impact of RPM on unplanned cardiovascular hospitalisations and mortality in HF patients. Methods The TIM-HF2 trial is designed as a prospective, randomised, controlled, parallel group, open (with randomisation concealment), multicentre trial with pragmatic elements introduced for data collection. Eligible patients with HF are randomised (1:1) to either RPM + usual care or to usual care only and are followed for 12 months. The primary outcome is the percentage of days lost due to unplanned cardiovascular hospitalisations or all-cause death. The main secondary outcomes are all-cause and cardiovascular mortality. Conclusion The TIM-HF2 trial will provide important prospective data on the potential beneficial effect of telemedical monitoring and RPM on unplanned cardiovascular hospitalisations and mortality in HF patients.
Blockchain technology offers a sizable promise to rethink the way interorganizational business processes are managed because of its potential to realize execution without a central party serving as a single point of trust (and failure). To stimulate research on this promise and its limits, in this article, we outline the challenges and opportunities of blockchain for business process management (BPM). We first reflect on how blockchains could be used in the context of the established BPM lifecycle, and second on how they might become relevant beyond it. We conclude our discourse with a summary of seven research directions for investigating the application of blockchain technology in the context of BPM.
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. It can be used primarily to track changes of objects over time for comparison, allowing for routine classification and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a service-oriented methodology.
In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to even out the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). Also, we argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP's full-fledged input language with temporal operators. This includes future as well as past operators whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of the ASP system clingo.
Under consideration for publication in Theory and Practice of Logic Programming (TPLP)
We introduce the asprilo framework to facilitate experimental studies of approaches addressing complex dynamic applications. For this purpose, we have chosen the domain of robotic intra-logistics. This domain is not only highly relevant in the context of today's fourth industrial revolution but moreover combines a multitude of challenging issues within a single uniform framework. This includes multi-agent planning and reasoning about action, change, resources, strategies, etc. In return, asprilo allows users to study alternative solutions with regard to effectiveness and scalability. Although asprilo relies on Answer Set Programming and Python, it is readily usable by any system complying with its fact-oriented interface format. This makes it attractive for benchmarking and teaching well beyond logic programming. More precisely, asprilo consists of a versatile benchmark generator, a solution checker, and a visualizer, as well as a set of reference encodings featuring various ASP techniques. Importantly, the visualizer's animation capabilities are indispensable for complex scenarios like intra-logistics in order to inspect valid as well as invalid solution candidates. It also allows for graphically editing benchmark layouts that can then be used as a basis for generating benchmark suites.
Given a query record, record matching is the problem of finding database records that represent the same real-world object. In the easiest scenario, a database record is completely identical to the query. In most cases, however, problems do arise, for instance, as a result of data errors, data integrated from multiple sources, or data received from restrictive form fields. These problems are usually difficult, because they require a variety of actions, including field segmentation, decoding of values, and similarity comparisons, each requiring some domain knowledge. In this article, we study the problem of matching records that contain address information, including attributes such as Street-address and City. To facilitate this matching process, we propose a domain-specific procedure to, first, enrich each record with a more complete representation of the address information through geocoding and reverse geocoding and, second, select the best similarity measure for each address attribute, which finally helps the classifier achieve the best F-measure. We report on our experience in selecting geocoding services and discovering similarity measures for a concrete but common industrial use case.
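The per-attribute measure selection can be sketched on toy data: compute each candidate measure's F-measure on labeled record pairs for one attribute and keep the winner. This is an illustrative reconstruction under simple assumptions (a fixed 0.5 match threshold, two common measures), not the paper's industrial pipeline.

```python
# Pick, per address attribute, the similarity measure with the best
# F-measure on labeled pairs (toy data; attribute values are made up).

def levenshtein_sim(a, b):
    """Normalized edit-distance similarity in [0, 1]."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return 1 - prev[n] / max(m, n, 1)

def jaccard_sim(a, b):
    """Token-set overlap similarity in [0, 1]; robust to word reordering."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def f_measure(measure, labeled_pairs, threshold=0.5):
    """F-measure of 'similarity >= threshold' as a match predictor."""
    tp = fp = fn = 0
    for a, b, is_match in labeled_pairs:
        predicted = measure(a, b) >= threshold
        tp += predicted and is_match
        fp += predicted and not is_match
        fn += (not predicted) and is_match
    p = tp / max(tp + fp, 1)
    r = tp / max(tp + fn, 1)
    return 2 * p * r / max(p + r, 1e-9)

def best_measure(labeled_pairs, measures):
    """The measure maximizing F-measure on this attribute's labeled pairs."""
    return max(measures, key=lambda m: f_measure(m, labeled_pairs))

# Toy labeled pairs for a street-address attribute: token reordering
# (house number moved) favours the token-based measure.
pairs = [
    ("5 Main Street Berlin", "Berlin Main Street 5", True),
    ("Main Street", "Oak Avenue", False),
]
```

On these pairs the Jaccard measure scores a perfect F-measure, since reordered address tokens leave the token set unchanged, while character-level edit distance penalizes the reordering.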
Graphs are ubiquitous in computer science. Moreover, in various application fields, graphs are equipped with attributes to express additional information such as names of entities or weights of relationships. Due to the pervasiveness of attributed graphs, it is highly important to have the means to express properties on attributed graphs to strengthen modeling capabilities and to enable analysis. Firstly, we introduce a new logic of attributed graph properties, where the graph part and attribution part are neatly separated. The graph part is equivalent to first-order logic on graphs as introduced by Courcelle. It employs graph morphisms to allow the specification of complex graph patterns. The attribution part is added to this graph part by reverting to the symbolic approach to graph attribution, where attributes are represented symbolically by variables whose possible values are specified by a set of constraints making use of algebraic specifications. Secondly, we extend our refutationally complete tableau-based reasoning method as well as our symbolic model generation approach for graph properties to attributed graph properties. Due to the new logic mentioned above, neatly separating the graph and attribution parts, and the categorical constructions employed only on a more abstract level, we can leave the graph part of the algorithms seemingly unchanged. For the integration of the attribution part into the algorithms, we use an oracle, allowing for flexible adoption of different available SMT solvers in the actual implementation. Finally, our automated reasoning approach for attributed graph properties is implemented in the tool AutoGraph integrating in particular the SMT solver Z3 for the attribute part of the properties. We motivate and illustrate our work with a particular application scenario on graph database query validation.
Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave in a remarkably similar way: Starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called single vertex discrepancy. Cooper and Spencer [Combin. Probab. Comput., 15 (2006), pp. 815-822] showed that on Z(d), the single vertex discrepancy is only a constant c(d). Other authors also determined the precise value of c(d) for d = 1, 2. All of these results, however, assume that initially all tokens are only placed on one partition of the bipartite graph Z(d). We show that this assumption is crucial by proving that, otherwise, the single vertex discrepancy can become arbitrarily large. For all dimensions d >= 1 and arbitrary discrepancies l >= 0, we construct configurations that reach a discrepancy of at least l.
DualPanto
(2018)
We present a new haptic device that enables blind users to continuously track the absolute position of moving objects in spatial virtual environments, as is the case in sports or shooter games. Users interact with DualPanto by operating the "me" handle with one hand and by holding on to the "it" handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the "me" handle, spatial registration enables users to track moving objects by having the device guide the output hand. This allows blind players of a 1-on-1 soccer game to race for the ball or evade an opponent; it allows blind players of a shooter game to aim at an opponent and dodge shots. In our user study, blind participants reported very high enjoyment when using the device to play (6.5/7).
Generating a novel and descriptive caption of an image is attracting increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM; Long Short-Term Memory) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirroring are proposed to prevent over-fitting when training deep models. To understand how our models "translate" an image to a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial for increasing model generality and gaining performance. We further demonstrate that transfer learning with the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to adjust prices automatically in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built Price Wars, an open continuous-time framework to simulate dynamic pricing competition. The microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log, which allows us to provide different performance measures enabling users to compare the profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which eases evaluating the mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies that use demand learning to optimize their pricing.
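A simple rule-based repricing strategy, of the kind such artificial merchants typically implement, can be sketched in a few lines. The rule, parameter names, and numbers below are illustrative assumptions, not the platform's actual merchant code.

```python
# Illustrative rule-based repricing: undercut the cheapest competitor by a
# small amount, but never price below cost plus a minimum margin.

def reprice(own_cost, competitor_prices, undercut=0.01, margin=0.10):
    """Return the new price: cheapest competitor minus `undercut`,
    floored at `own_cost * (1 + margin)`."""
    floor = own_cost * (1 + margin)
    if not competitor_prices:
        # No competition observed: fall back to a default markup.
        return round(floor * 2, 2)
    candidate = min(competitor_prices) - undercut
    return round(max(candidate, floor), 2)
```

Even a rule this simple already exhibits the price-war dynamics the platform is built to study: two merchants running it against each other undercut one another until both hit their cost floors.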
Currently available wearables are usually based on a single sensor node with integrated capabilities for classifying different activities. The next generation of cooperative wearables could be able not only to identify activities, but also to evaluate them qualitatively using the data of several sensor nodes attached to the body, in order to provide detailed feedback for improving the execution. Especially within the application domains of sports and health care, such immediate feedback on the execution of body movements is crucial for (re-)learning and improving motor skills. To enable such systems for a broad range of activities, generalized approaches for human motion assessment within sensor networks are required. In this paper, we present a generalized trainable activity assessment chain (AAC) for the online assessment of periodic human activity within a wireless body area network. AAC evaluates the execution of the separate movements of a previously trained activity on a fine-grained quality scale. We connect the qualitative assessment with human knowledge by projecting the AAC onto the hierarchical decomposition of motion performed by the human body, as well as by establishing the assessment on a kinematic evaluation of biomechanically distinct motion fragments. We evaluate AAC in a real-world setting and show that it successfully distinguishes the movements of a correctly performed activity from faulty executions and provides detailed reasons for its assessment.
Mixed-projection treemaps
(2017)
This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for tree-structured data and contributes new concepts for real-time rendering of and interaction with treemaps. The technique operates by tilting the graphical elements representing inner nodes using affine transformations and animated state transitions. We explain how to mix orthogonal and perspective projections within a single treemap. Finally, we show application examples that benefit from the reduced interaction overhead.
Distributed applications are hard to debug because timing-dependent network communication is a source of non-deterministic behavior. Current approaches to debugging non-deterministic failures include post-mortem debugging as well as record and replay. However, the former impairs system performance to gather data, whereas the latter requires developers to understand the timing-dependent communication at a lower level of abstraction than they develop at. Furthermore, both approaches require intrusive core library modifications to gather data from live systems. In this paper, we present the Peek-At-Talk debugger for investigating non-deterministic failures with low overhead in a systematic, top-down method, with a particular focus on tool-building issues in the following areas: First, we show how our debugging framework Path Tools guides developers from failures to their root causes and gathers run-time data with low overhead. Second, we present Peek-At-Talk, an extension to our Path Tools framework that records non-deterministic communication and refines behavioral data connecting source code with network events. Finally, we scope changes to the core library to record network communication without impacting other network applications.
In this extended abstract, we analyze the current challenges for the envisioned self-adaptive CPS. In addition, we outline our results in approaching these challenges with SMARTSOS [10], a generic approach based on extensions of graph transformation systems that employs open and adaptive collaborations and models at runtime for trustworthy self-adaptation, self-organization, and evolution at the level of the individual systems and the system-of-systems, taking the independent development, operation, management, and evolution of these systems into account.
Linked Data on the Web represents an immense source of knowledge suitable to be automatically processed and queried. In this respect, there are different approaches for Linked Data querying that differ on the degree of centralization adopted. On one hand, the SPARQL query language, originally defined for querying single datasets, has been enhanced with features to query federations of datasets; however, this attempt is not sufficient to cope with the distributed nature of data sources available as Linked Data. On the other hand, extensions or variations of SPARQL aim to find trade-offs between centralized and fully distributed querying. The idea is to partially move the computational load from the servers to the clients. Despite the variety and the relative merits of these approaches, as of today, there is no standard language for querying Linked Data on the Web. A specific requirement for such a language to capture the distributed, graph-like nature of Linked Data sources on the Web is support for graph navigation. Recently, SPARQL has been extended with a navigational feature called property paths (PPs). However, the semantics of SPARQL restricts the scope of navigation via PPs to single RDF graphs. This restriction limits the applicability of PPs for querying distributed Linked Data sources on the Web. To fill this gap, in this paper we provide formal foundations for evaluating PPs on the Web, thus contributing to the definition of a query language for Linked Data. We first introduce a family of reachability-based query semantics for PPs that distinguish between navigation on the Web and navigation at the data level. Thereafter, we consider another, alternative query semantics that couples Web graph navigation and data level navigation; we call it context-based semantics. Given these semantics, we find that for some PP-based SPARQL queries a complete evaluation on the Web is not possible.
To study this phenomenon we introduce a notion of Web-safeness of queries, and prove a decidable syntactic property that enables systems to identify queries that are Web-safe. In addition to establishing these formal foundations, we conducted an experimental comparison of the context-based semantics and a reachability-based semantics. Our experiments show that when evaluating a PP-based query under the context-based semantics one experiences a significantly smaller number of dereferencing operations, but the computed query result may contain fewer solutions.
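The data-level navigation that PPs generalize to the Web can be illustrated with a minimal evaluator for a Kleene-star path over an in-memory triple set. This is a sketch under strong simplifying assumptions (a single local graph, string-typed terms, one `p*` path form only), not the paper's Web-scale semantics.

```python
# Evaluate the property path p* (zero or more hops over predicate p)
# from a start node via BFS over a local set of RDF-style triples.

from collections import deque

def eval_path_star(triples, start, predicate):
    """All nodes reachable from `start` via zero or more `predicate` edges."""
    adj = {}
    for s, p, o in triples:
        if p == predicate:
            adj.setdefault(s, []).append(o)
    seen, queue = {start}, deque([start])   # zero hops: start is reachable
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A toy graph (hypothetical resources, FOAF-style predicate name).
triples = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("carol", "mail", "c@x.org"),
}
```

Evaluating the same path on the Web, rather than on one graph, is exactly where the paper's reachability-based and context-based semantics come in: each hop may then require dereferencing a new document, and completeness is no longer guaranteed.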
Many markets are characterized by pricing competition. Typically, the competitors involved adjust their prices in response to one another with different frequencies. We analyze stochastic dynamic pricing models under competition for the sale of durable goods. Given a competitor's pricing strategy, we show how to derive optimal response strategies that take the anticipated price adjustments of the competitor into account. We study the resulting price cycles and the associated expected long-term profits. We show that reaction frequencies have a major impact on a strategy's performance. In order not to act predictably, our model also allows for randomized reaction times. Additionally, we study to which extent the optimal response strategies of active competitors are affected by additional passive competitors that use constant prices. It turns out that optimized feedback strategies effectively avoid a decline in price. They help to gain profits, especially when aggressive competitors are involved.
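The flavor of a response strategy can be conveyed with a myopic one-period sketch: given the competitor's current price, pick the own price that maximizes expected profit under a demand model. This is an illustrative toy, with a made-up sale-probability function and price grid; the paper's actual model is a stochastic dynamic program that also anticipates the competitor's future adjustments.

```python
# Myopic best response: maximize (price - cost) * P(sale | prices)
# over a discrete price grid, under a simple illustrative demand model.

def sale_probability(own_price, competitor_price):
    """Toy demand model: being cheaper sharply raises the sale chance."""
    return 0.8 if own_price < competitor_price else 0.2

def best_response(competitor_price, cost, grid):
    """Grid price with the highest expected one-period profit."""
    def expected_profit(p):
        return (p - cost) * sale_probability(p, competitor_price)
    return max(grid, key=expected_profit)

grid = [round(5 + 0.5 * i, 2) for i in range(21)]  # prices 5.00 .. 15.00
```

Note the structural effect the paper analyzes: against a competitor at 10.0, the best response undercuts just below; against a very cheap competitor, undercutting no longer pays and the best response jumps to the high end, which is exactly how price cycles arise when both sides react.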
Cost models play an important role in the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures dominate today's server landscape, there is still a lack of parallel cost models that represent NUMA systems sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
As virtualization drives the automation of networking, the validation of security properties becomes more and more challenging, eventually ruling out manual inspection. While formal verification in Software-Defined Networks is provided by comprehensive tools with high-speed reverification capabilities, such as NetPlumber, the presence of middlebox functionality like firewalls is not considered. These tools also lack the ability to handle dynamic protocol elements like IPv6 extension header chains. In this work, we provide suitable modeling abstractions to enable both the inclusion of firewalls and the handling of dynamic protocol elements. As an example, we model the Linux ip6tables/netfilter packet filter and also provide abstractions for an application-layer gateway. Finally, we present a prototype of our formal verification system FaVe.
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, this graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, this graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or the size of the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived whenever the graph data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. At the same time, precisely these partial graph pattern matches enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. However, other kinds of discrimination networks exist that can perform better in time and space than Rete networks. Currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests, enabling users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Finally, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales as the graph data becomes large, and b) generalized discrimination network structures can outperform Rete network structures in both time and space for incremental graph pattern matching.
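The core trade-off — storing partial matches so that a change only needs to be joined against them — can be illustrated on the smallest non-trivial pattern, a two-edge path a->b->c. This is a minimal Rete-flavoured sketch of the general idea, not the thesis's generalized networks or its maintenance algorithm; all names are illustrative.

```python
# Incrementally maintain all matches of the two-edge pattern a->b->c.
# Partial (single-edge) matches are stored per vertex, so a newly
# inserted edge is joined only against them instead of re-matching
# the whole graph from scratch.

class TwoHopView:
    def __init__(self):
        self.out_edges = {}   # partial matches: source -> set of targets
        self.in_edges = {}    # partial matches: target -> set of sources
        self.matches = set()  # full matches (a, b, c)

    def insert_edge(self, u, v):
        self.out_edges.setdefault(u, set()).add(v)
        self.in_edges.setdefault(v, set()).add(u)
        # New edge used as first hop: join with stored second hops from v.
        for w in self.out_edges.get(v, set()):
            self.matches.add((u, v, w))
        # New edge used as second hop: join with stored first hops into u.
        for a in self.in_edges.get(u, set()):
            self.matches.add((a, u, v))
```

The memory/time trade-off the thesis generalizes is visible even here: the stored single-edge indices cost space proportional to the graph, but each insertion touches only the neighbourhoods of the new edge.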
While offering significant expressive power, graph transformation systems often come with rather limited capabilities for automated analysis, particularly if systems with many possible initial graphs and large or infinite state spaces are concerned. One approach that tries to overcome these limitations is inductive invariant checking. However, the verification of inductive invariants often requires extensive knowledge about the system in question and faces the approach-inherent challenges of locality and lack of context.
To address that, this report discusses k-inductive invariant checking for graph transformation systems as a generalization of inductive invariants. The additional context acquired by taking multiple (k) steps into account is the key difference to inductive invariant checking and is often enough to establish the desired invariants without requiring the iterative development of additional properties.
To analyze possibly infinite systems in a finite fashion, we introduce a symbolic encoding for transformation traces using a restricted form of nested application conditions. As its central contribution, this report then presents a formal approach and algorithm to verify graph constraints as k-inductive invariants. We prove the approach's correctness and demonstrate its applicability by means of several examples evaluated with a prototypical implementation of our algorithm.
Today, software has become an intrinsic part of complex distributed embedded real-time systems. The next generation of embedded real-time systems will interconnect today's unconnected systems via complex software parts and the service-oriented paradigm. Besides timed behavior and probabilistic behavior, this therefore requires structure dynamics, where the architecture can be subject to changes at run time, e.g., when dynamic binding of service end-points is employed or complex collaborations are established dynamically. However, a modeling and analysis approach that combines all these necessary aspects does not exist so far.
To fill the identified gap, we propose Probabilistic Timed Graph Transformation Systems (PTGTSs) as a high-level description language that supports all the necessary aspects of structure dynamics, timed behavior, and probabilistic behavior. We introduce the formal model of PTGTSs in this paper and present a mapping of models with finite state spaces to probabilistic timed automata (PTA) that makes it possible to use the PRISM model checker to analyze PTGTS models with respect to PTCTL properties.
Graphs are ubiquitous in Computer Science. For this reason, in many areas, it is very important to have the means to express and reason about graph properties. In particular, we want to be able to check automatically if a given graph property is satisfiable. Actually, in most application scenarios it is desirable to be able to explore graphs satisfying the graph property if they exist or even to get a complete and compact overview of the graphs satisfying the graph property.
We show that the tableau-based reasoning method for graph properties, as introduced by Lambers and Orejas, paves the way for a symbolic model generation algorithm for graph properties. Graph properties are formulated in a dedicated logic making use of graphs and graph morphisms, which is equivalent to first-order logic on graphs as introduced by Courcelle. Our parallelizable algorithm gradually generates a finite set of so-called symbolic models, where each symbolic model describes a set of finite graphs (i.e., finite models) satisfying the graph property. The set of symbolic models jointly describes all finite models for the graph property (complete) and does not describe any finite graph violating the graph property (sound). Moreover, no symbolic model is already covered by another one (compact). Finally, the algorithm is able to generate a minimal finite model from each symbolic model immediately and allows for an exploration of further finite models. The algorithm is implemented in the new tool AutoGraph.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic "Operating the Cloud". Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI's Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry-related discussions. "Operating the Cloud" aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
On the occasion of this symposium, we called for submissions of research papers and practitioners' reports. The research papers realized during the fourth HPI cloud symposium "Operating the Cloud" 2016 are compiled in these proceedings. We thank the authors for their exciting presentations and insights into their current work and research.
Moreover, we look forward to more interesting submissions for the upcoming symposium later in the year.
The correctness of model transformations is crucial for the model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, as it avoids the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and at greater complexity, establish some kind of behavioral equivalence or refinement between the source and target model of the transformation. Automatic tool support has been presented for both kinds of verification goals at the instance level, i.e., for a given source and target model specified by the model transformation. However, no automatic verification approach has been available at the transformation level so far, i.e., for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new, sophisticated approach for the automatic verification of behavior preservation, captured by bisimulation and simulation, respectively, for model transformations specified by triple graph grammars with semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformation, and that the resulting checking problem can be addressed by our own invariant checker, even for a complex example in which a sequence chart is transformed into communicating automata. We further discuss today's limitations of invariant checking for graph transformation and motivate further lines of future work in this direction.
Developing large software projects is a complicated task and can be demanding for developers. Continuous integration is a common practice for reducing complexity: by integrating and testing changes often, changesets are kept small and therefore easy to comprehend. Travis CI is a service that offers continuous integration and continuous deployment in the cloud. Software projects are built, tested, and deployed using the Travis CI infrastructure without interrupting the development process. This report describes how Travis CI works, presents how time-driven, periodic building and CI data visualization can be implemented, and proposes a way of dealing with dependency problems.
After almost two decades of development, modern Security Information and Event Management (SIEM) systems still face issues with the normalisation of heterogeneous data sources, a high number of false positive alerts, and long analysis times, especially in large-scale networks with high volumes of security events. In this paper, we present our own prototype of a SIEM system that is capable of dealing with these issues. For efficient data processing, our system employs in-memory data storage (SAP HANA) and our own technologies from previous work, such as the Object Log Format (OLF) and high-speed event normalisation. We analyse normalised data using a combination of three different approaches to security analysis: misuse detection, query-based analytics, and anomaly detection. Compared to the previous work, we have significantly improved our unsupervised anomaly detection algorithms. Most importantly, we have developed a novel hybrid outlier detection algorithm that returns ranked clusters of anomalies. It lets an operator of a SIEM system concentrate on the several top-ranked anomalies, instead of digging through an unsorted bundle of suspicious events. We propose to use anomaly detection in combination with signatures and queries applied to the same data, rather than as a full replacement for misuse detection. In this case, the majority of attacks will be captured with misuse detection, whereas anomaly detection will highlight previously unknown behaviour or attacks. We also propose that only the most suspicious event clusters need to be checked by an operator, whereas other anomalies, including false positive alerts, do not need to be explicitly checked if they have a lower ranking. We have proved our concepts and algorithms on a dataset of 160 million events from a network segment of a large multinational company and suggest that our approach and methods are highly relevant for modern SIEM systems.
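As an illustration of the ranked-cluster idea (not the authors' hybrid algorithm, whose details the abstract does not give), a minimal sketch that groups pre-scored events by cluster and ranks clusters by mean anomaly score might look like this; all names and the scoring scheme are assumptions:

```python
from collections import defaultdict

def rank_anomaly_clusters(events, top_n=3):
    """Rank clusters of anomalous events for operator triage.

    `events` is a list of (cluster_id, anomaly_score) pairs, where the
    cluster assignments and scores would come from an upstream outlier
    detector. Clusters are ranked by their mean anomaly score, so an
    operator can inspect only the top-ranked clusters and safely ignore
    lower-ranked ones (including likely false positives).
    """
    clusters = defaultdict(list)
    for cluster_id, score in events:
        clusters[cluster_id].append(score)
    ranked = sorted(
        clusters.items(),
        key=lambda item: sum(item[1]) / len(item[1]),  # mean score
        reverse=True,
    )
    return [cluster_id for cluster_id, _ in ranked[:top_n]]
```

The ranking step is what turns "an unsorted bundle of suspicious events" into a prioritized work queue for the operator.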
The creation, collection, and retention of knowledge in digital communities is an activity that currently needs to be explicitly targeted as a secure method of keeping intellectual capital growing in the digital era. In particular, we consider it relevant to analyze and evaluate the empathetic cognitive personalities and behaviors that individuals now exhibit with the change from face-to-face communication (F2F) to computer-mediated communication (CMC) online. This document proposes a cyber-humanistic approach to enhance the traditional SECI knowledge management model. A cognitive perception is added to its cyclical process, following design thinking interaction, to improve the method by which knowledge is continuously created, converted, and shared. In building a cognitive-centered model, we specifically focus on the effective identification of and response to the cognitive stimulation of individuals, as they are the intellectual generators and multipliers of knowledge in the online environment. Our target is to identify how geographically distributed digital organizations should align individuals' cognitive abilities to promote iteration and improve interaction as a reliable stimulant of collective intelligence. The new model focuses on analyzing the four different stages of knowledge processing, where individuals with sympathetic cognitive personalities can significantly boost knowledge creation in a virtual social system. For organizations, this means that multidisciplinary individuals can maximize their extensive potential by externalizing their knowledge in the correct stage of the knowledge creation process and by collaborating with their appropriate sympathetically cognitive remote peers.
Embedded smart home
(2017)
The popularity of MOOCs has increased considerably in recent years. A typical MOOC consists of video content, self-tests after each video, and homework, which is normally in multiple-choice format. After solving this homework for every week of the MOOC, the final exam certificate can be issued once the student has reached a sufficient score. There have also been some attempts to include graded practical tasks, such as programming, in MOOCs. Nevertheless, until now there has been no known way to teach embedded systems programming in a MOOC in which the programming can be done in a remote lab and the tasks can additionally be graded. This embedded programming includes communication over GPIO pins to control LEDs and measure sensor values. We started a MOOC called "Embedded Smart Home" as a pilot to prove that real hardware programming can be taught in a MOOC environment under real-life MOOC conditions with over 6000 students. Furthermore, students who own real hardware can program on it and have their results graded in the MOOC. Finally, we evaluate our approach and analyze student acceptance of offering a course on embedded programming this way. We also analyze the hardware usage and working time of students solving tasks to find out whether real hardware programming is an advantage and a motivating achievement that supports students' learning success.
The selection of initial points, the number of clusters, and finding proper cluster centers are still the main challenges in clustering. In this paper, we suggest a genetic-algorithm-based method that searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with a similar structure: elements in a group have the same size, while elements in different groups have different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes, where these genes hold the corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance against several random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in a genetic algorithm can lead to better results.
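The size-agnostic crossover described above can be sketched as follows. This is a hypothetical illustration rather than the authors' operator, and the explicit cut-point parameters exist only to keep the sketch deterministic; in a real GA they would be drawn at random:

```python
import random

def variable_length_crossover(parent_a, parent_b, cut_a=None, cut_b=None):
    """One-point crossover for chromosomes of different lengths.

    Each gene would encode a candidate cluster center. Cutting the two
    parents at independent points lets offspring lengths differ from
    both parents, which transfers information between the
    sub-populations of different chromosome sizes.
    """
    if cut_a is None:
        cut_a = random.randint(1, len(parent_a))
    if cut_b is None:
        cut_b = random.randint(1, len(parent_b))
    child_1 = parent_a[:cut_a] + parent_b[cut_b:]
    child_2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child_1, child_2
```

For example, crossing a 3-gene parent with a 2-gene parent at cut points 1 and 1 yields offspring of lengths 2 and 3, so the number of clusters itself is subject to evolutionary search.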
The identification of vulnerabilities relies on detailed information about the target infrastructure. Gathering the necessary information is a crucial step that requires intensive scanning or mature expertise and knowledge about the system, even though the information may already be available in a different context. In this paper, we propose a new method to detect vulnerabilities that reuses existing information and eliminates the need for a comprehensive scan of the target system. Since our approach can identify vulnerabilities without the additional effort of a scan, we increase the overall performance of the detection. Because of this reuse and the removal of active testing procedures, our approach can be classified as passive vulnerability detection. We explain the approach and illustrate the additional possibility of increasing the security awareness of users. To this end, we applied the approach to an experimental setup and extracted security-relevant information from web logs.
This paper discusses a new approach for designing and deploying Security-as-a-Service (SecaaS) applications using cloud native design patterns. Current SecaaS approaches do not efficiently handle the increasing threats to computer systems and applications. For example, requests for security assessments increase drastically after a high-risk security vulnerability is disclosed. In such scenarios, SecaaS applications are unable to scale dynamically to serve the requests. A root cause of this challenge is the use of architectures not specifically fitted to cloud environments. Cloud native design patterns resolve this challenge by enabling properties such as massive scalability and resiliency via the combination of microservice patterns and cloud-focused design patterns. However, adopting these patterns is a complex process, during which several security issues are introduced. In this work, we investigate these security issues, and we redesign and deploy a monolithic SecaaS application using cloud native design patterns while considering appropriate, layered security countermeasures, i.e., at the application and cloud networking layers. Our prototype implementation outperforms traditional, monolithic applications with an average Scanner Time of 6 minutes, without compromising security. Our approach can be employed for designing secure, scalable, and performant SecaaS applications that effectively handle unexpected increases in security assessment requests.
Securing e-prescription from medical identity theft using steganography and antiphishing techniques
(2017)
Drug prescription is among the health care processes that usually reference patients' medical and insurance information, among other personal data. Because this information is vital and delicate, it should be adequately protected from identity thieves. This article aims at securing Electronic Prescription (EP) in order to minimize the theft of patients' data and foster patients' trust in EP systems.
This paper presents a steganography and antiphishing technique for preventing medical identity theft in EP. The proposed EP system design focuses on the security features of the prescriber and dispenser modules of EP by ensuring that the prescriber sends the patient's prescription in a safe manner and to the right dispenser without the interference of fake third parties. A hexadecimal steganography image system is used to cover and secure the sent prescription details. Malicious electronic dispensing systems are prevented through an authentication technique in which a dispenser uses a captcha together with a one-time password and the web server's encrypted token for prescriber device authentication. The steganography system is evaluated using the Peak Signal-to-Noise Ratio (PSNR).
The system implementation results showed that steganography and antiphishing techniques are capable of providing a secure EP system.
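The PSNR metric used in the evaluation has a standard definition; a minimal sketch for 8-bit images, independent of the paper's implementation, might look like this:

```python
import math

def psnr(original, stego, max_value=255):
    """Peak Signal-to-Noise Ratio between two equally sized 8-bit images.

    Images are given as nested lists of pixel values. A higher PSNR
    means the stego image is closer to the original, i.e. the embedded
    prescription data is less perceptible.
    """
    flat_a = [p for row in original for p in row]
    flat_b = [p for row in stego for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)
```

A uniform per-pixel error of 1 on an 8-bit image, for instance, yields a PSNR of roughly 48 dB, comfortably above the 30-40 dB range commonly considered imperceptible.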
Massive Open Online Courses (MOOCs) have left their mark on education in recent years. At the Hasso Plattner Institute (HPI) in Potsdam, Germany, we are actively developing a MOOC platform, which provides our research with a plethora of e-learning topics, such as learning analytics, automated assessment, peer assessment, teamwork, online proctoring, and gamification. We run several instances of this platform. On openHPI, we provide our own courses from within the HPI context. Further instances are openSAP, openWHO, and mooc.HOUSE, which is the smallest of these platforms, targeting customers with a less extensive course portfolio. In 2013, we started to work on the gamification of our platform. By now, we have implemented about two thirds of the features that we initially evaluated as useful for our purposes. About a year ago, we activated the implemented gamification features on mooc.HOUSE. Before activating the features on openHPI as well, we examined and re-evaluated our initial considerations based on the data we have collected so far and the changes in other contexts of our platforms.
Recently, due to an increasing demand for functionality and flexibility, previously isolated systems have become interconnected to gain powerful adaptive Systems of Systems (SoS) solutions with an overall robust, flexible, and emergent behavior. An adaptive SoS comprises a variety of different system types, ranging from small embedded to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy, and optimizes its behavior to reach its goals. On the other hand, systems must cooperate with each other to enrich the overall functionality and jointly perform on the SoS level, reaching global goals that cannot be satisfied by one system alone. Due to the difficulties of local and global behavior optimization, conflicts may arise between systems that have to be resolved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by considering its adaptation capabilities, in the form of feedback loops, as first-class entities. Moreover, this thesis adopts the Models@runtime approach to integrate the knowledge available in the systems as runtime models into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS to reason about individual system functionality and how it emerges via collaborations into an overall joint SoS behavior. The modeling language approach thus enables the specification of local adaptive system behavior, the integration of knowledge in the form of runtime models, and the joint interactions via collaborations, placing the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and pinpoint possible system threats. Moreover, a simulation framework is presented that allows the direct execution of the modeled SoS architecture. The analysis rules and the simulation framework can thus be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling up-to-date research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
Behavioural Models
(2016)
This textbook introduces the basis for modelling and analysing discrete dynamic systems, such as computer programmes, soft- and hardware systems, and business processes. The underlying concepts are introduced and concrete modelling techniques are described, such as finite automata, state machines, and Petri nets. The concepts are related to concrete application scenarios, among which business processes play a prominent role.
The book consists of three parts, the first of which addresses the foundations of behavioural modelling. After a general introduction to modelling, it introduces transition systems as a basic formalism for representing the behaviour of discrete dynamic systems. This section also discusses causality, a fundamental concept for modelling and reasoning about behaviour. In turn, Part II forms the heart of the book and is devoted to models of behaviour. It details both sequential and concurrent systems and introduces finite automata, state machines and several different types of Petri nets. One chapter is especially devoted to business process models, workflow patterns and BPMN, the industry standard for modelling business processes. Lastly, Part III investigates how the behaviour of systems can be analysed. To this end, it introduces readers to the concept of state spaces. Further chapters cover the comparison of behaviour and the formal analysis and verification of behavioural models.
The book was written for students of computer science and software engineering, as well as for programmers and system analysts interested in the behaviour of the systems they work on. It takes readers on a journey from the fundamentals of behavioural modelling to advanced techniques for modelling and analysing sequential and concurrent systems, and thus provides them with a deep understanding of the concepts and techniques introduced and how they can be applied to concrete application scenarios.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality, with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic renderings particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines, using the shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides seamless transitions between them according to user task, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
3D point clouds are a digital representation of our world and are used in a variety of applications. They are captured with LiDAR or derived by image-matching approaches to obtain surface information of objects, e.g., indoor scenes, buildings, infrastructure, cities, and landscapes. We present novel interaction and visualization techniques for heterogeneous, time-variant, and semantically rich 3D point clouds. Interactive and view-dependent see-through lenses are introduced as exploration tools to enhance the recognition of objects, semantics, and temporal changes within 3D point cloud depictions. We also develop filtering and highlighting techniques that dissolve occlusion to give context-specific insights. All techniques can be combined with an out-of-core real-time rendering system for massive 3D point clouds. We have evaluated the presented approach with 3D point clouds from different application domains. The results show its usability and how different visualization and exploration tasks can be improved for a variety of domain-specific applications.
Personal fabrication tools, such as 3D printers, are on the way of enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which tends to take overnight due to the slow 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users as they lack the experience to reason about the consequences of their design decisions.
In this dissertation, we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: Computing started with machines that executed a program in one go before returning the result to the user. By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program receiving feedback about every action in real-time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
We start with fabricating an object in one go and investigate how to tighten the feedback-cycle on an object-level: We contribute a method called low-fidelity fabrication, which saves up to 90% fabrication time by creating objects as fast low-fidelity previews, which are sufficient to evaluate key design aspects. Depending on what is currently being tested, we propose different conversions that enable users to focus on different parts: faBrickator allows for a modular design in the early stages of prototyping; when users move on, WirePrint allows quickly testing an object's shape, while Platener allows testing an object's technical function. We present an interactive editor for each technique and explain the underlying conversion algorithms.
By interacting on smaller units, such as a single element of an object, we explore what it means to transition from systems that fabricate objects in one go to turn-taking systems. We start with a 2D system called constructable: Users draw with a laser pointer onto the workpiece inside a laser cutter. The drawing is captured with an overhead camera. As soon as the user finishes drawing an element, such as a line, the constructable system beautifies the path and cuts it, resulting in physical output after every editing step. We extend constructable towards 3D editing by developing a novel laser-cutting technique for 3D objects called LaserOrigami that works by heating up the workpiece with the defocused laser until the material becomes compliant and bends down under gravity. While constructable and LaserOrigami allow for fast physical feedback, the interaction is still best described as turn-taking since it consists of two discrete steps: users first create an input and afterwards the system provides physical output.
By decreasing the interaction unit even further to a single feature, we can achieve real-time physical feedback: Input by the user and output by the fabrication device are so tightly coupled that no visible lag exists. This allows us to explore what it means to transition from turn-taking interfaces, which only allow exploring one option at a time, to direct manipulation interfaces with real-time physical feedback, which allow users to explore the entire space of options continuously with a single interaction. We present a system called FormFab, which allows for such direct control. FormFab is based on the same principle as LaserOrigami: It uses a workpiece that when warmed up becomes compliant and can be reshaped. However, FormFab achieves the reshaping not based on gravity, but through a pneumatic system that users can control interactively. As users interact, they see the shape change in real-time.
We conclude this dissertation by extrapolating the current evolution into a future in which large numbers of people use the new technology to create objects. We see two additional challenges on the horizon: sustainability and intellectual property. We investigate sustainability by demonstrating how to print less and instead patch physical objects. We explore questions around intellectual property with a system called Scotty that transfers objects without creating duplicates, thereby preserving the designer's copyright.
We contribute to the theoretical understanding of randomized search heuristics by investigating their optimization behavior on satisfiable random k-satisfiability instances, both in the planted solution model and in the uniform model conditional on satisfiability. Denoting the number of variables by n, our main technical result is that the simple (1+1) evolutionary algorithm with high probability finds a satisfying assignment efficiently when the clause-variable density is at least logarithmic. For low-density instances, evolutionary algorithms seem to be less effective, and all we can show is a subexponential upper bound on the runtime below a certain density threshold. We complement these mathematical results with numerical experiments on a broader density spectrum. They indicate that, indeed, the (1+1) EA is less efficient on lower densities. Our experiments also suggest that the implicit constants hidden in our main runtime guarantee are low. Our main result extends and considerably improves the result obtained by Sutton and Neumann (Lect Notes Comput Sci 8672:942-951, 2014) in terms of runtime, minimum density, and clause length. These improvements are made possible by establishing a close fitness-distance correlation in certain parts of the search space. This approach might be of independent interest and could be useful for other average-case analyses of randomized search heuristics. While the notion of a fitness-distance correlation has been around for a long time, to the best of our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.
When realizing a programming language as a VM, implementing behavior as part of the VM, as primitives, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance and ease of implementation, reuse, and changeability might now be reconsidered.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM, a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that due to the tracing just-in-time compiler, the performance gap has narrowed to within one order of magnitude.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework where programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast to that, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging, or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them meaningfully while preserving their respective benefits?
As a solution to this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we explore the design, implementation, and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We present Transmorphic, a functional take on the Morphic GUI framework, where the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensiveness and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always within reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
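The core idea, representing a morph as a symbolic description and expressing direct manipulation as a pure transformation of that description rather than as state mutation, can be sketched as follows. The dictionary-based morph format and the `resize` operation are hypothetical stand-ins for Transmorphic's actual declarative language:

```python
def make_morph(kind, **props):
    """A morph as plain, immutable-by-convention data (hypothetical format)."""
    return {"kind": kind, "props": props, "submorphs": []}

def resize(description, morph_path, new_extent):
    """Direct manipulation (here: resizing) modelled as a pure transformation
    of the symbolic description. Returns a new description; the old one is
    left untouched, so it can still serve as the source-level artifact."""
    if not morph_path:  # empty path addresses the morph itself
        props = {**description["props"], "extent": new_extent}
        return {**description, "props": props}
    head, *rest = morph_path
    subs = list(description["submorphs"])
    subs[head] = resize(subs[head], rest, new_extent)
    return {**description, "submorphs": subs}

world = make_morph("rectangle", extent=(100, 100))
world["submorphs"].append(make_morph("label", text="hi"))
# Grabbing a resize handle on the first submorph would emit:
world2 = resize(world, [0], (50, 20))
```

Because the manipulation produces a new description instead of mutating objects, the result can be diffed against the previous description and written back into the application's source, which is the essence of the bidirectional mapping described above.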
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic “Operating the Cloud”. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Hence, HPI’s Future SOC Lab is an ideal environment to host this event, which is also supported by BITKOM.
On the occasion of this workshop, we called for submissions of research papers and practitioners’ reports. “Operating the Cloud” aims to be a platform for productive discussions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
These workshop proceedings publish the results of the third HPI cloud symposium “Operating the Cloud” 2015. We thank the authors for their exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2016.
Complexity in software systems is a major factor driving development and maintenance costs. To master this complexity, software is divided into modules that can be developed and tested separately. In order to support this separation of modules, each module should provide a clean and concise public interface. Therefore, the ability to selectively hide functionality using access control is an important feature in a programming language intended for complex software systems.
Software systems are increasingly distributed, adding not only to their inherent complexity, but also presenting security challenges. The object-capability approach addresses these challenges by defining language properties providing only minimal capabilities to objects. One programming language that is based on the object-capability approach is Newspeak, a dynamic programming language designed for modularity and security. The Newspeak specification describes access control as one of Newspeak’s properties, because it is a requirement for the object-capability approach. However, access control, as defined in the Newspeak specification, is currently not enforced in its implementation.
This work introduces an access control implementation for Newspeak, enabling the security of object-capabilities and enhancing modularity. We describe our implementation of access control for Newspeak. We adapted the runtime environment, the reflective system, the compiler toolchain, and the virtual machine. Finally, we describe a migration strategy for the existing Newspeak code base, so that our access control implementation can be integrated with minimal effort.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, and model queries. For relational databases, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, and thus we can make use of the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
The design and implementation of service-oriented architectures raises a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Texture mapping is a key technology in computer graphics. For the visual design of 3D scenes, in particular, effective texturing depends significantly on how important contents are expressed, e.g., by preserving global salient structures, and how their depiction is cognitively processed by the user in an application context. Edge-preserving image filtering is one key approach to address these concerns. Much research has focused on applying image filters in a post-process stage to generate artistically stylized depictions. However, these approaches generally do not preserve depth cues, which are important for the perception of 3D visualization (e.g., texture gradient). To this end, filtering is required that processes texture data coherently with respect to linear perspective and spatial relationships. In this work, we present an approach for texturing 3D scenes with perspective coherence by arbitrary image filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture mapping and (2) for each mipmap level separately to enable a progressive level of abstraction, using (3) direct interaction interfaces to parameterize the visualization according to spatial, semantic, and thematic data. We demonstrate the potentials of our method by several applications using touch or natural language inputs to serve the different interests of users in specific information, including illustrative visualization, focus+context visualization, geometric detail removal, and semantic depth of field. The approach supports frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering. In addition, it seamlessly integrates into real-time rendering pipelines and is extensible for custom interaction techniques. (C) 2015 Elsevier Ltd. All rights reserved.
Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques
(2015)
In economy, society, and personal life, map-based interactive geospatial visualization has become a natural element of a growing number of applications and systems. The visualization of 3D geospatial information, however, raises the question of how to represent the information in an effective way. Considerable research has been done in technology-driven directions in the fields of cartography and computer graphics (e.g., design principles, visualization techniques). Here, non-photorealistic rendering (NPR) represents a promising visualization category, situated between both fields, that offers a large number of degrees of freedom for the cartography-oriented visual design of complex 2D and 3D geospatial information in a given application context. Still today, however, specifications and techniques for mapping cartographic design principles to the state-of-the-art rendering pipeline of 3D computer graphics remain to be explored. This paper revisits cartographic design principles for 3D geospatial visualization and introduces an extended 3D semiotic model that complies with the general, interactive visualization pipeline. Based on this model, we propose NPR techniques to interactively synthesize cartographic renditions of basic feature types, such as terrain, water, and buildings. In particular, this includes a novel iconification concept to seamlessly interpolate between photorealistic and cartographic representations of 3D landmarks. Our work concludes with a discussion of open challenges in this field of research, including topics such as user interaction and evaluation.
Business processes are vital to managing organizations as they sustain a company's competitiveness. Consequently, these organizations maintain collections of hundreds or thousands of process models for streamlining working procedures and facilitating process implementation. Yet, the management of large process model collections requires effective searching capabilities. Recent research focused on similarity search of process models, but querying process models is still a largely open topic. This article presents an approach to querying process models that takes a process example as input and discovers all models that allow replaying the behavior of the query. To this end, we provide a notion of behavioral inclusion that is based on trace semantics and abstraction. Additional to deciding a match, a closeness score is provided that describes how well the behavior of the query is represented in the model and can be used for ranking. The article introduces the formal foundations of the approach and shows how they are applied to querying large process model collections. An experimental evaluation has been conducted that confirms the suitability of the solution as well as its applicability and scalability in practice.
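The notion of behavioral inclusion can be illustrated with a drastically simplified sketch: both query and model are given as labeled transition systems, the model's traces are projected onto the query's task labels (the abstraction step), and a naive closeness score is the share of query traces the model can replay. The bounded trace enumeration and the ratio-based score below are illustrative stand-ins for the paper's trace semantics and ranking:

```python
def traces(lts, start, end, depth):
    """All label sequences from start to end of length <= depth.
    lts: dict mapping a state to a list of (label, next_state) pairs."""
    out = set()
    def walk(state, prefix):
        if state == end:
            out.add(tuple(prefix))
        if len(prefix) >= depth:
            return
        for label, nxt in lts.get(state, []):
            walk(nxt, prefix + [label])
    walk(start, [])
    return out

def project(trace, alphabet):
    """Abstraction: hide labels that do not occur in the query."""
    return tuple(t for t in trace if t in alphabet)

def closeness(query_traces, model_traces):
    """Fraction of query traces replayable by the (projected) model."""
    alphabet = {t for tr in query_traces for t in tr}
    projected = {project(tr, alphabet) for tr in model_traces}
    hits = sum(1 for tr in query_traces if tr in projected)
    return hits / len(query_traces) if query_traces else 1.0

# Query "A then B" matches a model "A then X then B" after abstraction.
query = {0: [("A", 1)], 1: [("B", 2)]}
model = {0: [("A", 1)], 1: [("X", 2)], 2: [("B", 3)]}
score = closeness(traces(query, 0, 2, 5), traces(model, 0, 3, 5))
```

A score of 1.0 indicates a full match; values below 1.0 could be used for ranking partially matching models, in the spirit of the closeness score described above.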
During the execution of business processes several events happen that are recorded in the company's information systems. These events deliver insights into process executions so that process monitoring and analysis can be performed resulting, for instance, in prediction of upcoming process steps or the analysis of the run time of single steps. While event capturing is trivial when a process engine with integrated logging capabilities is used, manual process execution environments do not provide automatic logging of events, so that typically external devices, like bar code scanners, have to be used. As experience shows, these manual steps are error-prone and induce additional work. Therefore, we use object state transitions as additional monitoring information, so-called object state transition events. Based on these object state transition events, we reason about the enablement and termination of activities and provide the basis for process monitoring and analysis in terms of a large event log. In this paper, we present the concept to utilize information from these object state transition events for capturing process progress. Furthermore, we discuss a methodology to create the required design time artifacts that then are used for monitoring at run time. In a proof-of-concept implementation, we show how the design time and run time side work and prove applicability of the introduced concept of object state transition events. (C) 2015 Elsevier B.V. All rights reserved.
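The central idea, deriving activity-level monitoring events from observed object state transitions, can be sketched as follows. The mapping table is a hypothetical stand-in for the design-time artifacts the approach derives; real deployments would also reason about activity enablement, not only termination:

```python
# Hypothetical design-time mapping: which object state transition
# signals the completion of which activity.
TRANSITION_TO_ACTIVITY = {
    ("order", "created", "packed"): "Pack order",
    ("order", "packed", "shipped"): "Ship order",
}

def to_event_log(object_events):
    """object_events: list of (object_type, old_state, new_state, timestamp)
    observed in the information systems. Returns inferred activity
    termination events, forming the basis of an event log for monitoring."""
    log = []
    for obj, old, new, ts in object_events:
        activity = TRANSITION_TO_ACTIVITY.get((obj, old, new))
        if activity is not None:  # transitions without a mapping are ignored
            log.append({"activity": activity, "timestamp": ts,
                        "lifecycle": "complete"})
    return log
```

This way, process progress is captured without bar code scanners or other dedicated logging steps: any system that records object state changes feeds the monitoring directly.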
We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket's JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for Racket proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites.
The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop; we discuss two strategies for identifying loops and measure their impact.
Communication between organizations is formalized as process choreographies in daily business. While the correct ordering of exchanged messages can be modeled and enacted with current choreography techniques, no approach exists to describe and automate the exchange of data between processes in a choreography using messages. This paper describes an entirely model-driven approach for BPMN, introducing a few concepts that suffice to model the four aspects of data exchange: data retrieval, data transformation, message exchange, and correlation. For automation, this work utilizes a recent concept to enact data dependencies in internal processes. We present a modeling guideline to derive local process models from a given choreography; their operational semantics allows to correctly enact the entire choreography from the derived models alone, including the exchange of data. Targeting successful interactions, we discuss means to ensure correct process choreography modeling. Finally, we implemented our approach by extending the camunda BPM platform and show its feasibility by realizing all service interaction patterns using only model-based concepts. (C) 2015 Elsevier Ltd. All rights reserved.
We examine a special class of dynamic pricing and advertising models for the sale of perishable goods, including marginal unit costs and inventory holding costs. The time horizon is assumed to be finite and we allow several model parameters to be dependent on time. For the stochastic version of the model, we derive closed-form expressions of the value function as well as of the optimal pricing and advertising policy in feedback form. Moreover, we show that for small unit shares, the model converges to a deterministic version of the problem, whose explicit solution is characterized by an overage and an underage case. We quantify the close relationship between the open-loop solution of the deterministic model and the expected evolution of optimally controlled stochastic sales processes. For both models, we derive sensitivity results. We find that in the case of positive holding costs, on average, optimal prices increase in time and advertising rates decrease. Furthermore, we analytically verify the excellent quality of optimal feedback policies of deterministic models applied in stochastic models. (C) 2015 Elsevier B.V. All rights reserved.
Companies need to efficiently manage their business processes to deliver products and services in time. Therefore, they monitor the progress of individual cases to be able to timely detect undesired deviations and to react accordingly. For example, companies can decide to speed up process execution by raising alerts or by using additional resources, which increases the chance that a certain deadline or service level agreement can be met. Central to such process control is accurate prediction of the remaining time of a case and the estimation of the risk of missing a deadline.
To achieve this goal, we use a specific kind of stochastic Petri nets that can capture arbitrary duration distributions. Thereby, we are able to achieve higher prediction accuracy than related approaches. Further, we evaluate the approach in comparison to state of the art approaches and show the potential of exploiting a so far untapped source of information: the elapsed time since the last observed event. Real-world case studies in the financial and logistics domain serve to illustrate and evaluate the approach presented. (C) 2015 Elsevier Ltd. All rights reserved.
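The "so far untapped source of information", the elapsed time since the last observed event, can be exploited by conditioning the activity's duration distribution on the fact that the activity is still running. The Monte Carlo sketch below is a simplified illustration of that idea, not the paper's stochastic Petri net machinery; the uniform duration in the usage example is a hypothetical stand-in for an arbitrary, possibly non-Markovian distribution:

```python
import random

def remaining_time(sample_duration, elapsed, n=10_000, seed=0):
    """Monte Carlo estimate of the expected remaining duration of the
    current activity, given that `elapsed` time units have already passed.
    Conditioning on D > elapsed is done by rejection sampling.
    `sample_duration(rng)` draws one duration from an arbitrary
    distribution, mirroring arbitrary duration distributions in
    stochastic Petri nets."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        d = sample_duration(rng)
        if d > elapsed:          # keep only runs in which the activity
            draws.append(d - elapsed)  # is still active after `elapsed`
    return sum(draws) / n

# Duration uniform on [0, 10]; 6 time units already elapsed.
# The conditional remaining time is uniform on [0, 4], so the
# estimate should be close to 2.
est = remaining_time(lambda rng: rng.uniform(0.0, 10.0), elapsed=6.0)
```

Ignoring the elapsed time (i.e., predicting the unconditional mean of 5 minus nothing) would overestimate the remaining time here, which is exactly the effect the conditioning corrects.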
Business process management (BPM) is a systematic and structured approach to model, analyze, control, and execute business operations, also referred to as business processes, that are carried out to achieve business goals. Central to BPM are conceptual models. Most prominently, process models describe which tasks are to be executed by whom, utilizing which information, to reach a business goal. Process models generally cover the perspectives of control flow, resource, data flow, and information systems.
Execution of business processes leads to the work actually being carried out. Automating them increases efficiency and is usually supported by process engines. This, though, requires the coverage of control flow, resource assignments, and process data. While the first two perspectives are well supported in current process engines, data handling needs to be implemented and maintained manually. However, model-driven data handling promises to ease implementation, reduce error-proneness through graphical visualization, and reduce development efforts through code generation.
This thesis addresses the modeling, analysis, and execution of data in business processes and presents a novel approach to execute data-annotated process models entirely model-driven. As a first step and formal grounding for the process execution, a conceptual framework for the integration of processes and data is introduced. This framework is complemented by operational semantics through a Petri net mapping extended with data considerations. Model-driven data execution comprises the handling of complex data dependencies, process data, and data exchange in case of communication between multiple process participants. This thesis introduces concepts from the database domain into BPM to enable the distinction of data operations, to specify relations between data objects of the same as well as of different types, to correlate modeled data nodes as well as received messages to the correct run-time process instances, and to generate messages for inter-process communication. The underlying approach, which is not limited to a particular process description language, has been implemented as a proof of concept.
Automation of data handling in business processes requires data-annotated and correct process models. Targeting the former, algorithms are introduced to extract information about data nodes, their states, and data dependencies from control flow information and to annotate the process model accordingly. Usually, not all required information can be extracted from control flow information, since some data manipulations are not specified. This requires further refinement of the process model. Given a set of object life cycles specifying allowed data manipulations, automated refinement of the process model towards containment of all data manipulations is enabled. Process models are an abstraction focusing on specific aspects in detail; e.g., the control flow and data flow views are often represented through activity-centric and object-centric process models. This thesis introduces algorithms for roundtrip transformations enabling the stakeholder to add information to the process model in whichever view is most appropriate.
Targeting process model correctness, this thesis introduces the notion of weak conformance, which checks for consistency between given object life cycles and the process model, such that the process model may only utilize data manipulations specified directly or indirectly in an object life cycle. The notion is computed via soundness checking of a hybrid representation integrating control flow and data flow correctness checking. To make a process model executable, identified violations must be corrected. Therefore, an approach is proposed that identifies, for each violation, multiple alternative changes to the process model or the object life cycles.
Utilizing the results of this thesis, business processes can be executed entirely model-driven from the data perspective in addition to the control flow and resource perspectives already supported before. Thereby, the model creation is supported by algorithms partly automating the creation process while model consistency is ensured by data correctness checks.
An increasing demand for functionality and flexibility leads to the integration of previously isolated system solutions into a so-called System of Systems (SoS). Furthermore, the overall SoS should be adaptive in order to react to changing requirements and environmental conditions. Because an SoS is composed of different independent systems that may join or leave the overall SoS at arbitrary points in time, the SoS structure varies during the system's lifetime, and the overall SoS behavior emerges from the capabilities of the contained subsystems. In such complex system ensembles, new demands arise for understanding the interaction among subsystems, the coupling of shared system knowledge, and the influence of local adaptation strategies on the overall resulting system behavior. In this report, we formulate research questions focusing on the modeling of interactions between system parts inside an SoS. Furthermore, we define our notion of important system types and terms by reviewing the current state of the art in the literature. Having established a common understanding of SoS, we discuss a set of typical SoS characteristics and derive general requirements for a collaboration modeling language. Additionally, we survey a broad spectrum of real scenarios and frameworks from the literature and discuss how these scenarios cope with different characteristics of SoS. Finally, we discuss the state of the art of existing modeling languages that cope with collaborations for different system types such as SoS.
Business Process Management has become an integral part of modern organizations in the private and public sector for improving their operations. In the course of Business Process Management efforts, companies and organizations assemble large process model repositories with many hundreds and thousands of business process models bearing a large amount of information. With the advent of large business process model collections, new challenges arise, such as structuring and managing a large number of process models, their maintenance, and their quality assurance.
This is covered by business process architectures, which have been introduced for organizing and structuring business process model collections. A variety of business process architecture approaches have been proposed that align business processes along aspects of interest, e.g., goals, functions, or objects. They provide a high-level categorization of single processes, ignoring their interdependencies and thus hiding valuable information. The production of goods or the delivery of services is often realized by a complex system of interdependent business processes. Hence, taking a holistic view of business process interdependencies becomes a major necessity to organize, analyze, and assess the impact of their re-/design. Visualizing business process interdependencies reveals hidden and implicit information in a process model collection.
In this thesis, we present a novel Business Process Architecture approach for representing and analyzing business process interdependencies on an abstract level. We propose a formal definition of our Business Process Architecture approach, design correctness criteria, and develop analysis techniques for assessing their quality. We describe a methodology for applying our Business Process Architecture approach top-down and bottom-up. This includes techniques for Business Process Architecture extraction from, and decomposition to, process models while considering consistency issues between the business process architecture and process model levels. Using our extraction algorithm, we present a novel technique to identify and visualize data interdependencies in Business Process Data Architectures. Our Business Process Architecture approach provides business process experts, managers, and other users of a process model collection with an overview that allows reasoning about a large set of process models as well as understanding and analyzing their interdependencies in a facilitated way. In this regard, we evaluated our Business Process Architecture approach in an experiment and provide implementations of selected techniques.
We report our experience in implementing SqueakJS, a bit-compatible implementation of Squeak/Smalltalk written in pure JavaScript. SqueakJS runs entirely in the Web browser with a virtual file system that can be directed to a server or client-side storage. Our implementation is notable for the simplicity and performance gained through adaptation to the host object memory, and for the deployment leverage gained through the Lively Web development environment. We present several novel techniques as well as performance measurements for the resulting virtual machine. Much of this experience is potentially relevant to preserving other dynamic language systems and making them available in a browser-based environment.
We present object versioning as a generic approach to preserve access to previous development and application states. Version-aware references can manage the modifications made to the target object and record versions as desired. Such references can be provided without modifications to the virtual machine. We used proxies to implement the proposed concepts and demonstrate the Lively Kernel running on top of this object versioning layer. This enables Lively users to undo the effects of direct manipulation and other programming actions.
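A version-aware reference of this kind can be sketched with an ordinary proxy, without any VM support. The snapshot-per-mutation strategy below is a minimal illustration under the assumption of plain attribute-based objects; the actual Lively Kernel layer is considerably more elaborate:

```python
class VersionedRef:
    """A version-aware reference: a proxy that snapshots the target
    object's attributes before each mutation, so previous states can be
    restored. Reads and writes go through the proxy transparently."""
    def __init__(self, target):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_versions", [])

    def __getattr__(self, name):
        # Reads are forwarded to the current state of the target.
        return getattr(object.__getattribute__(self, "_target"), name)

    def __setattr__(self, name, value):
        # Record a version before applying the modification.
        target = object.__getattribute__(self, "_target")
        object.__getattribute__(self, "_versions").append(dict(vars(target)))
        setattr(target, name, value)

    def undo(self):
        """Restore the most recently recorded version, if any."""
        target = object.__getattribute__(self, "_target")
        versions = object.__getattribute__(self, "_versions")
        if versions:
            vars(target).clear()
            vars(target).update(versions.pop())

class Morph:
    def __init__(self):
        self.color = "blue"

m = VersionedRef(Morph())
m.color = "red"   # records the "blue" state, then mutates
m.undo()          # the morph is blue again
```

As in the paper's approach, clients hold the reference rather than the object itself, so undoing the effects of direct manipulation is a matter of asking the reference to restore an earlier version.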
Biologists often pose queries to search engines and biological databases to obtain answers related to ongoing experiments. This is known to be a time-consuming, and sometimes frustrating, task in which more than one query is posed and many databases are consulted to arrive at possible answers for a single fact. Question answering is an alternative to this process: it allows queries to be posed as questions, integrates various resources of different nature, and returns an exact answer to the user. We have surveyed the current solutions on question answering for Biology, present an overview of the methods which are usually employed, and give insights on how to boost the performance of systems in this domain. (C) 2014 Elsevier Inc. All rights reserved.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases make it possible to refer directly to the graph structure of such graph data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as for relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to dealing with this performance issue is to employ database views that consist of pre-computed answers to common and frequently posed queries. But to ensure that database views yield consistent query results with respect to the data from which they are derived, these views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise, the effort to create and maintain views may not pay off in comparison to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views. They are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network, which can model nested graph conditions, including negative application conditions and recursion, that specify the content of graph views derived from the graph data stored by graph databases. From the discrimination network, generic maintenance rules based on graph transformations can be derived automatically; these rules maintain the graph views whenever the underlying graph data changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
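The core idea of incrementally maintaining pre-computed pattern matches can be illustrated in miniature. The report's actual approach uses generalized discrimination networks with negative application conditions and recursion; this Python sketch handles only insertions for a single two-hop pattern, as an assumption-laden simplification:

```python
from collections import defaultdict

class TwoHopView:
    """Materialized view of the pattern a -> b -> c, maintained
    incrementally as edges are inserted (insertions only, for brevity)."""

    def __init__(self):
        self.out_edges = defaultdict(set)   # u -> {v}: edges leaving u
        self.in_edges = defaultdict(set)    # v -> {u}: edges entering v
        self.matches = set()                # materialized (a, b, c) matches

    def add_edge(self, u, v):
        # The new edge (u, v) can complete a match as the first hop ...
        for c in self.out_edges[v]:
            self.matches.add((u, v, c))
        # ... or as the second hop.
        for a in self.in_edges[u]:
            self.matches.add((a, u, v))
        self.out_edges[u].add(v)
        self.in_edges[v].add(u)

view = TwoHopView()
view.add_edge("a", "b")
view.add_edge("b", "c")
print(view.matches)   # -> {('a', 'b', 'c')}
```

Each insertion touches only the neighborhood of the new edge instead of re-running the pattern match over the whole graph, which is the payoff view maintenance aims for.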
Graph transformation systems are a powerful formal model to capture model transformations and systems with infinite state spaces, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also extend the expressive power considerably by supporting more complex negative application conditions, and we achieve higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool on three case studies.
Babelsberg/RML
(2015)
New programming language designs are often evaluated on concrete implementations. However, in order to draw conclusions about the language design from the evaluation of concrete programming languages, these implementations need to be verified against the formalism of the design. Furthermore, we have to ensure that the design actually meets its stated goals. A useful tool for the latter has been to create an executable semantics from the formalism that can run a test suite of examples. However, this mechanism has so far not allowed an implementation to be verified against the design.
Babelsberg is a new design for a family of object-constraint languages. Recently, we have developed a formal semantics to clarify some issues in the design of those languages. Supplementing this work, we report here on how this formalism is turned into an executable operational semantics using the RML system. Furthermore, we show how we extended the executable semantics into a framework that can generate test suites for the concrete Babelsberg implementations, providing traceability from the design to the language. Finally, we discuss how these test suites helped us find and correct mistakes in the Babelsberg implementation for JavaScript.
Traditionally, business process management systems only execute and monitor business process instances based on events that originate from the process engine itself or from connected client applications. However, environmental events may also influence business process execution. Recent research shows how the technological improvements in both areas, business process management and complex event processing, can be combined and harmonized. The series of technical reports included in this collection provides insights into that combination with respect to technical feasibility and improvements, based on real-world use cases originating from the EU-funded GET Service project, a project targeting transport optimization and greenhouse-gas reduction in the logistics domain. Each report is complemented by a working prototype.
This collection introduces six use cases from the logistics domain. Multiple transports – each being a single process instance – may be affected by the same events at the same point in time because they (partly) share the same transportation route, vehicle, or mode (e.g., containers from multiple process instances on the same ship), so that these instances can be (partly) treated as a batch. The first use case therefore shows the influence of events on process instances processed in a batch. Sharing an entire route may, for instance, result from the transports originating from the same business process (e.g., transporting three containers, each treated as a single process instance because they travel on three separate trucks), which leads to multi-instance process executions. The second use case shows how to handle monitoring and progress calculation in this context. Frequent changes of deadlines are crucial to transportation processes. The third use case shows how to deal with such frequent process changes by propagating them along and beyond the process scope to identify probable deadline violations. While monitoring transport processes, disruptions may be detected that introduce delays. Use case four shows how to propagate such delays in a non-linear fashion along the process instance to predict its end time. Non-linearity is crucial in logistics because of buffer times and missed connections on intermodal transports (a one-hour delay may result in missing a ship that does not depart every hour). Finally, use cases five and six show the utilization of location-based process monitoring. Use case five enriches transport processes with real-time route and traffic event information to improve monitoring and planning capabilities. Use case six shows the inclusion of spatio-temporal events, using the example of unexpected weather events.
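The non-linear delay propagation of use case four can be sketched with a toy model. All departure schedules and durations below are invented for illustration; the point is only that with fixed departure times a small initial delay can snowball into a missed connection:

```python
def propagate_delay(start_time, legs):
    """Propagate a delay non-linearly along an intermodal transport chain.
    Each leg is (departure_times, duration): the transport leaves at the
    next scheduled departure at or after arrival of the previous leg.
    Times are in hours; all numbers are illustrative."""
    t = start_time
    for departures, duration in legs:
        t = min(d for d in departures if d >= t) + duration
    return t

legs = [
    ([0, 1, 2, 3, 4, 5, 6, 7, 8], 6),  # truck: hourly departures, 6 h drive
    ([10, 34], 40),                    # ship: only two departures, 40 h crossing
]

print(propagate_delay(0, legs))   # on time: arrives at hour 50
print(propagate_delay(5, legs))   # 5 h late at the start -> ship missed: hour 74
```

A five-hour initial delay here turns into a twenty-four-hour delay at the destination, which is exactly why a linear delay model is insufficient for intermodal transports.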
HPI Future SOC Lab
(2015)
The Future SOC Lab at HPI is a cooperation between the Hasso Plattner Institute and various industry partners. Its mission is to enable and promote exchange between the research community and industry.
The lab provides interested researchers with an infrastructure of state-of-the-art hardware and software, free of charge, for research purposes. This includes technologies that are in some cases not yet available on the market and that would normally be unaffordable in a regular university setting, e.g., servers with up to 64 cores and 2 TB of main memory. These offerings are aimed in particular at researchers in computer science and business information systems. Focus areas include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of the research projects conducted in 2015. Selected projects presented their results on April 15, 2015 and November 4, 2015 at the Future SOC Lab Day events.
Parts without a whole?
(2015)
This explorative study gives a descriptive overview of what organizations do and experience when they say they practice design thinking. It looks at how the concept has been appropriated in organizations and also describes patterns of design thinking adoption. The authors use a mixed-method research design fed by two sources: questionnaire data and semi-structured personal expert interviews. The study proceeds in six parts: (1) design thinking's entry points into organizations; (2) understandings of the descriptor; (3) its fields of application and organizational localization; (4) its perceived impact; (5) reasons for its discontinuation or failure; and (6) attempts to measure its success. In conclusion, the report challenges managers to be more conscious of their current design thinking practice. The authors suggest a co-evolution of the concept's introduction with innovation capability building and the respective changes in leadership approaches. It is argued that this might help unfold design thinking's hidden potentials as well as prevent unintended side-effects such as discontented teams or the dwindling authority of managers.
The design and implementation of service-oriented architectures raise numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
The design and implementation of service-oriented architectures raise numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Service-oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. Service-oriented Systems Engineering denotes a current research topic in the field of IT-Systems Engineering with high potential in academic research and industrial application.
The annual Ph.D. Retreat of the Research School provides all members the opportunity to present the current state of their research and to give an outline of prospective Ph.D. projects. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Network Topology Discovery and Inventory Listing are two of the primary features of modern network monitoring systems (NMSs). Current NMSs rely heavily on active scanning techniques for discovering and mapping network information. Although this approach works, it introduces some major drawbacks, such as the performance impact it can exact, especially in larger network environments. As a consequence, scans are often run less frequently, which can result in stale information being presented and used by the network monitoring system. Alternatively, some NMSs rely on agents being deployed on the hosts they monitor. In this article, we present a new approach to Network Topology Discovery and Network Inventory Listing using only passive monitoring and scanning techniques. The proposed techniques rely solely on the event logs produced by the hosts and network devices present within a network. Finally, we discuss some of the advantages and disadvantages of our approach.
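The basic mechanism of passive discovery can be sketched briefly. The log format, device names, and field layout below are entirely invented for illustration; the article's techniques consume whatever event logs hosts and network devices actually emit:

```python
import re
from collections import defaultdict

# Hypothetical link-event log lines; a real deployment would consume
# syslog, DHCP, ARP, or similar events collected passively.
logs = [
    "2015-06-01T10:00:00 switch1 LINK: port 3 neighbor host-a (aa:bb:cc:00:00:01)",
    "2015-06-01T10:00:05 switch1 LINK: port 4 neighbor host-b (aa:bb:cc:00:00:02)",
    "2015-06-01T10:01:00 switch2 LINK: port 1 neighbor switch1 (aa:bb:cc:00:00:03)",
]

pattern = re.compile(
    r"^\S+ (\S+) LINK: port \d+ neighbor (\S+) \(([0-9a-f:]+)\)")

topology = defaultdict(set)   # device -> set of observed neighbors
inventory = {}                # device -> MAC address seen in the logs

for line in logs:
    m = pattern.match(line)
    if m:
        device, neighbor, mac = m.groups()
        topology[device].add(neighbor)
        inventory[neighbor] = mac

print(dict(topology))   # adjacency map built without any active scan
```

No packets are injected into the network at any point: both the topology map and the inventory are side products of logs the devices produce anyway, which is what avoids the performance impact of active scanning.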