Refine
Document Type
- Article (188)
- Doctoral Thesis (100)
- Other (83)
- Monograph/Edited Volume (40)
- Postprint (22)
- Conference Proceeding (3)
- Part of a Book (1)
- Habilitation Thesis (1)
- Report (1)
Keywords
- MOOC (42)
- digital education (37)
- e-learning (36)
- Digitale Bildung (34)
- online course creation (34)
- online course design (34)
- Kursdesign (33)
- Micro Degree (33)
- Online-Lehre (33)
- Onlinekurs (33)
- Onlinekurs-Produktion (33)
- micro degree (33)
- micro-credential (33)
- online teaching (33)
- machine learning (20)
- maschinelles Lernen (9)
- Cloud Computing (6)
- E-Learning (6)
- deep learning (6)
- evaluation (6)
- Smalltalk (5)
- cloud computing (5)
- 3D printing (4)
- BPMN (4)
- Blockchain (4)
- Digitalisierung (4)
- Duplikaterkennung (4)
- Forschungsprojekte (4)
- Future SOC Lab (4)
- In-Memory Technologie (4)
- Künstliche Intelligenz (4)
- MOOCs (4)
- Machine Learning (4)
- Multicore Architekturen (4)
- Scrum (4)
- artificial intelligence (4)
- blockchain (4)
- business process management (4)
- cyber-physical systems (4)
- duplicate detection (4)
- fabrication (4)
- image processing (4)
- inertial measurement unit (4)
- innovation (4)
- künstliche Intelligenz (4)
- multicore architectures (4)
- natural language processing (4)
- openHPI (4)
- probabilistic timed systems (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- quantitative analysis (4)
- research projects (4)
- smart contracts (4)
- 3D visualization (3)
- 3D-Visualisierung (3)
- Business process models (3)
- DMN (3)
- Data profiling (3)
- Datenaufbereitung (3)
- Datenqualität (3)
- Design Thinking (3)
- Dynamic pricing (3)
- Geschäftsprozessmanagement (3)
- HPI Schul-Cloud (3)
- Hasso Plattner Institute (3)
- Hasso-Plattner-Institut (3)
- Innovation (3)
- Maschinelles Lernen (3)
- Natural language processing (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- Teamwork (3)
- cloud (3)
- computer vision (3)
- course design (3)
- creativity (3)
- data preparation (3)
- data profiling (3)
- digital health (3)
- digital transformation (3)
- digitale Bildung (3)
- digitalization (3)
- entity resolution (3)
- graph transformation systems (3)
- intrusion detection (3)
- real-time rendering (3)
- reinforcement learning (3)
- tiefes Lernen (3)
- trajectory data (3)
- user experience (3)
- user-generated content (3)
- 3D point clouds (2)
- Agile (2)
- Android (2)
- Angriffserkennung (2)
- Anomalieerkennung (2)
- Answer set programming (2)
- Artificial Intelligence (2)
- Behavioral economics (2)
- Big Data (2)
- Bounded Model Checking (2)
- Clinical decision support (2)
- Cloud-Security (2)
- Data Profiling (2)
- Datenbank (2)
- Datenbanksysteme (2)
- Datenvisualisierung (2)
- Debugging (2)
- Decision models (2)
- Deep Learning (2)
- Dynamic programming (2)
- Economic evaluation (2)
- Electronic health record (2)
- Energy (2)
- Entitätsauflösung (2)
- Entitätsverknüpfung (2)
- Entscheidungsfindung (2)
- European Union (2)
- Europäische Union (2)
- Fabrikation (2)
- Feature selection (2)
- Forschungskolleg (2)
- Gene expression (2)
- Graphentransformationssysteme (2)
- IDS (2)
- Identitätsmanagement (2)
- In-Memory (2)
- In-Memory technology (2)
- Information flow control (2)
- Internet der Dinge (2)
- Internet of Things (2)
- Java (2)
- Kanban (2)
- Klausurtagung (2)
- Learning Analytics (2)
- Lecture Video Archive (2)
- MAC security (2)
- MERLOT (2)
- Massive Open Online Course (MOOC) (2)
- Metadaten (2)
- Metanome (2)
- Modellprüfung (2)
- Oligopoly competition (2)
- OptoGait (2)
- P2P (2)
- Peer Assessment (2)
- Peer-to-Peer ridesharing (2)
- Ph.D. retreat (2)
- Prior knowledge (2)
- Programmieren (2)
- Python (2)
- RGB-D cameras (2)
- Reproducible benchmarking (2)
- SIEM (2)
- Secure Configuration (2)
- Security (2)
- Service-oriented Systems Engineering (2)
- Sicherheit (2)
- Smart micro-grids (2)
- Social Media Analysis (2)
- Squeak (2)
- Taxonomy (2)
- Treemaps (2)
- Versionsverwaltung (2)
- Visualisierung (2)
- Visualization (2)
- Werkzeuge (2)
- X-ray (2)
- Zebris (2)
- acute kidney injury (2)
- anomaly detection (2)
- artificial intelligence (2)
- artificial intelligence for health (2)
- assignments (2)
- benchmark (2)
- bounded model checking (2)
- business processes (2)
- capstone course (2)
- categories (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- clustering (2)
- collaborative work (2)
- comparison of devices (2)
- computer-mediated therapy (2)
- cyber-physische Systeme (2)
- data integration (2)
- data pipeline (2)
- data quality (2)
- database (2)
- database systems (2)
- deduplication (2)
- deferred choice (2)
- design thinking (2)
- digital enlightenment (2)
- digital learning platform (2)
- digital sovereignty (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- digitale Transformation (2)
- disorder recognition (2)
- distributed systems (2)
- eindeutige Spaltenkombination (2)
- end-stage kidney disease (2)
- entity linking (2)
- everyday life (2)
- federated learning (2)
- flexibility (2)
- formal semantics (2)
- formal verification (2)
- formale Verifikation (2)
- framework (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gait analysis (2)
- gait analysis algorithm (2)
- genome-wide association (2)
- geospatial data (2)
- hate speech detection (2)
- human activity recognition (2)
- human motion (2)
- identity management (2)
- image stylization (2)
- in-memory technology (2)
- inclusion dependencies (2)
- index selection (2)
- interactive media (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- labeling (2)
- learning path (2)
- lebenslanges Lernen (2)
- lifelong learning (2)
- liveness (2)
- location prediction algorithm (2)
- maschinelles Sehen (2)
- medical documentation (2)
- memory (2)
- metadata (2)
- mobile mapping (2)
- model checking (2)
- model-driven engineering (2)
- modularization (2)
- motion capture (2)
- multiple modalities (2)
- named entity recognition (2)
- nested graph conditions (2)
- neurological disorders (2)
- non-photorealistic rendering (2)
- online learning (2)
- oracles (2)
- outlier detection (2)
- parameterized complexity (2)
- peer assessment (2)
- pervasive healthcare (2)
- privacy and security (2)
- privacy attack (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- process mining (2)
- programmable matter (2)
- proteomics (2)
- public dataset (2)
- quality assessment (2)
- query optimization (2)
- rapid eGFRcrea decline (2)
- real-time (2)
- research school (2)
- retrospective (2)
- risk-aware dispatching (2)
- security (2)
- security analytics (2)
- self-paced learning (2)
- self-sovereign identity (2)
- sensor data (2)
- service-oriented systems engineering (2)
- simulation (2)
- social media (2)
- software development (2)
- software engineering (2)
- software process improvement (2)
- speech (2)
- stable matching (2)
- study (2)
- style transfer (2)
- text mining (2)
- thinking styles (2)
- transparency (2)
- transport network companies (2)
- treemaps (2)
- trust (2)
- trustworthiness (2)
- typed attributed graphs (2)
- unique column combinations (2)
- user interaction (2)
- variable geometry truss (2)
- version control (2)
- virtuelle Realität (2)
- visualization (2)
- voice (2)
- web-based rendering (2)
- workflow patterns (2)
- (modular) counting (1)
- 0-day (1)
- 3D Druck (1)
- 3D Point Clouds (1)
- 3D Visualization (1)
- 3D city model (1)
- 3D geovirtual environment (1)
- 3D geovisualization (1)
- 3D geovisualization system (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D-Druck (1)
- 3D-Einbettung (1)
- 3D-Geovisualisierung (1)
- 3D-Geovisualisierungssystem (1)
- 3D-Punktwolke (1)
- 3D-Punktwolken (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3D-embedding (1)
- 3D-geovirtuelle Umgebung (1)
- ACINQ (1)
- AI Lab (1)
- AMIGOS dataset (1)
- APT (1)
- ASIC (1)
- Abgleich von Abhängigkeiten (1)
- Abhängigkeit (1)
- Abschlussbericht (1)
- Access Control (1)
- Activity-oriented Optimization (1)
- Adoption effects (1)
- Adressnormalisierung (1)
- Advanced Persistent Threats (1)
- Advertising (1)
- Agent-based model (1)
- Agile methods (1)
- Agilität (1)
- Aktivitäten (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithms (1)
- Alzheimer's Disease (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Anfrageoptimierung (1)
- Annahme (1)
- Anomaly detection (1)
- Anwendungsbedingungen (1)
- Application Container Security (1)
- Approximation algorithms (1)
- Architecture synthesis (1)
- Architectures (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Artistic Image Stylization (1)
- Arzt-Patient-Beziehung (1)
- Attributsicherung (1)
- Audit (1)
- Aufzählungsalgorithmen (1)
- Auktion (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Auswirkungen (1)
- Autoimmune (1)
- Automated parsing (1)
- Automatic domain term extraction (1)
- Automatisierung (1)
- Average-Case Analyse (1)
- BCCC (1)
- BIM (1)
- BRCA1 (1)
- BTC (1)
- Back Pain (1)
- Bahnwesen (1)
- Bandwidth (1)
- Batch activity (1)
- Batch processing (1)
- Batch-Aktivität (1)
- Bayesian Networks (1)
- Bedrohungsanalyse (1)
- Bedrohungserkennung (1)
- Bedrohungsmodell (1)
- Behavior change (1)
- Behavioral equivalence and refinement (1)
- Benchmark (1)
- Benutzerinteraktion (1)
- Beschriftung (1)
- Betriebssysteme (1)
- Bewegung (1)
- Big Five Model (1)
- Big Five model (1)
- Bildungstechnologien (1)
- Bildverarbeitung (1)
- Binary Classification (1)
- Binäre Klassifikation (1)
- Biomarker-Erkennung (1)
- Bisimulation and simulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain Auth (1)
- Blockchain Governance (1)
- Blockchain-Konsortium R3 (1)
- Blockchain-enabled Governance (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bluemix-Plattform (1)
- Blöcke (1)
- Boolean Networks (1)
- Boolean satisfiability (1)
- Boolsche Erfüllbarkeit (1)
- Bounded Backward Model Checking (1)
- Brand Personality (1)
- Building Management (1)
- Business Process Management (1)
- Business process choreographies (1)
- Business process improvement (1)
- Business process modeling (1)
- Business process simulation (1)
- Byzantine Agreement (1)
- C++ tool (1)
- CMOS technology (1)
- COVID-19 (1)
- Car safety management (1)
- Cardinality estimation (1)
- Case Management (1)
- Causal inference (1)
- Causal structure learning (1)
- Causality (1)
- Cheating attacks (1)
- Chernoff-Hoeffding theorem (1)
- Chile (1)
- Chronic Nonspecific Low (1)
- Circular economy (1)
- CityGML (1)
- Clinical Data (1)
- Clinical predictive modeling (1)
- Clinical risk prediction (1)
- Cloud (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Cloud computing (1)
- CloudRAID (1)
- CloudRAID for Business (1)
- Clusteranalyse (1)
- Clustering (1)
- Co-production (1)
- Collaborative learning (1)
- Colored Coins (1)
- Colored Petri Net (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Complexity (1)
- Compound Values (1)
- Computational Photography (1)
- Computer architecture (1)
- Computergrafik (1)
- Computervision (1)
- Computing (1)
- Conceptual Fit (1)
- Conceptual modeling (1)
- Contamination (1)
- ContextErlang (1)
- Creative (1)
- Critical pair analysis (CPA) (1)
- Cross-platform (1)
- Crowd Resourcing (1)
- Cryptography (1)
- Cultural factors (1)
- Curex (1)
- Custom Writable Class (1)
- Cyber-Sicherheit (1)
- Cyber-physikalische Systeme (1)
- Cybersecurity (1)
- Cybersecurity e-Learning (1)
- Cybersicherheit (1)
- Cybersicherheit E-Learning (1)
- DAG (1)
- DALY (1)
- DAO (1)
- DBMS (1)
- DPoS (1)
- DSA (1)
- Data Analytics (1)
- Data Structure Optimization (1)
- Data breach (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data modeling (1)
- Data partitioning (1)
- Data processing (1)
- Data profiling application (1)
- Data quality (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-driven price anticipation (1)
- Data-driven strategies (1)
- Database (1)
- Dateistruktur (1)
- Daten-Analytik (1)
- Datenabgleich (1)
- Datenanalyse (1)
- Datenbankoptimierung (1)
- Datenbereinigung (1)
- Datenintegration (1)
- Datenmodelle (1)
- Datensatz (1)
- Datensatzverknüpfung (1)
- Datenschutz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datenstromverarbeitung (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenverwaltung (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Deanonymisierung (1)
- Decision support (1)
- Declarative modelling (1)
- Deduplikation (1)
- Deep Kernel Learning (1)
- Deep learning (1)
- Dekubitus (1)
- Delays (1)
- Delegated Proof-of-Stake (1)
- Denial of sleep (1)
- Denkstile (1)
- Denkweise (1)
- Derecho (1)
- Design (1)
- Design-Forschung (1)
- Dezentrale Applikationen (1)
- Differential Expression Analysis (1)
- Digital Engineering (1)
- Digital Twin (1)
- Digital education (1)
- Digitaler Zwilling (1)
- Digitization (1)
- Dimensionsreduktion (1)
- Direkte Manipulation (1)
- Disadvantaged communities (1)
- Dissertation (1)
- Distance Learning (1)
- Distance learning (1)
- Distributed Proof-of-Research (1)
- Distributed snapshot algorithm (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diverse solution enumeration (1)
- Document classification (1)
- Dokument Analyse (1)
- Domain Objects (1)
- Domänenspezifische Modellierung (1)
- Drift (1)
- Dubletten (1)
- Durchsetzbarkeit (1)
- Dynamic pricing competition (1)
- Dynamik (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- E-Wallet (1)
- E-commerce (1)
- E-health (1)
- ECDSA (1)
- EMG (1)
- Echtzeit (1)
- Echtzeit-Rendering (1)
- Education technologies (1)
- Educational Data Mining (1)
- Educational Technology (1)
- Educational data mining (1)
- Effect measurement (1)
- Einbettungen (1)
- Einbruchserkennung (1)
- Electrical products (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Endpunktsicherheit (1)
- Energy efficiency (1)
- Entdeckung (1)
- Enterprise File Synchronization and Share (1)
- Entropy (1)
- Entscheidungskorrektheit (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entstehung (1)
- Entwicklung digitaler Innovationseinheiten (1)
- Enumeration algorithm (1)
- Equity (1)
- Ereignisabonnement (1)
- Ereignisnormalisierung (1)
- Erfüllbarkeitsschwellwert (1)
- Eris (1)
- Erkennung von Metadaten (1)
- Erkennung von Quasi-Identifikatoren (1)
- Estimation-of-Distribution-Algorithmen (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- European reference networks (1)
- Evaluation (1)
- Expert knowledge (1)
- Exploration (1)
- Expressive rendering (1)
- Extensibility (1)
- Eye-tracking (1)
- Fabrication (1)
- Facility Management (1)
- Federated Byzantine Agreement (1)
- Feedback control loop (1)
- Fehlende Werte (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fertigungsunternehmen (1)
- Field study (1)
- First-hitting time (1)
- Flash (1)
- FollowMyVote (1)
- Forecasting (1)
- Foreign key (1)
- Fork (1)
- Formal modelling (1)
- Formal verification of behavior preservation (1)
- Functional dependencies (1)
- Funktionale Abhängigkeiten (1)
- G-estimation (1)
- GMDH (1)
- GPU (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- GTEx (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudeinformationsmodellierung (1)
- Gebäudemanagement (1)
- Gen-Expression (1)
- Gene Expression Data Analysis (1)
- General Earth and Planetary Sciences (1)
- General demand function (1)
- Generalized Discrimination Networks (1)
- Genomics education (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geospatial intelligence (1)
- Gerichtsbarkeit (1)
- German schools (1)
- Geschäftsprozess (1)
- Geschäftsprozess-Choreografien (1)
- Geschäftsprozessarchitekturen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- GraalVM (1)
- Graph Algorithms (1)
- Graph Theory (1)
- Graph algorithm (1)
- Graph logic (1)
- Graph logics (1)
- Graph transformation (1)
- Graph transformation (double pushout approach) (1)
- Graph transformations (1)
- Graph-Mining (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphentheorie (1)
- Graphreparatur (1)
- Graphtransformationen (1)
- Graphtransformationssysteme (1)
- Grid stability (1)
- Gridcoin (1)
- Großformat (1)
- HENSHIN (1)
- HLS (1)
- HTML5 (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Hauptspeicher Datenmanagement (1)
- Heart Valve Diseases (1)
- Herzklappenerkrankungen (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- Hitting Sets (1)
- Holant problems (1)
- Home appliances (1)
- Homomorphismen (1)
- Human Computer Interaction (1)
- Hybrid (1)
- Hyperbolic Geometry (1)
- Hyrise (1)
- Häkeln (1)
- ICT (1)
- IEEE 802.15.4 (1)
- IT Softwareentwicklung (1)
- IT project (1)
- IT systems engineering (1)
- IT-Infrastruktur (1)
- IT-Sicherheit (1)
- IT-infrastructure (1)
- IT-security (1)
- Ideation (1)
- Ideenfindung (1)
- Identity leak (1)
- Identität (1)
- Image Abstraction (1)
- Image Processing (1)
- Imbalanced medical image semantic segmentation (1)
- Immobilien 4.0 (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung (1)
- Implementierung in Organisationen (1)
- Indexauswahl (1)
- Indoor Point Clouds (1)
- Indoor environments (1)
- Indoor-Punktwolken (1)
- Industry 4.0 (1)
- Information Retrieval (1)
- Information system (1)
- Informationsextraktion (1)
- Informationsflüsse (1)
- Informationsraum (1)
- Informationsraumverletzungen (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Institutions (1)
- Integrative Gene Selection (1)
- Intent analysis (1)
- Interacting processes (1)
- Interactive Media (1)
- Interactive control (1)
- Interdisciplinary Teams (1)
- Interessengrad-Techniken (1)
- Internet (1)
- Internet of things (1)
- Interoperability (1)
- Interpretability (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Invariant checking (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- JavaScript (1)
- K-12 (1)
- KI-Labor (1)
- Kalman filtering (1)
- Kardinalitätsschätzung (1)
- Karten (1)
- Kategorien (1)
- Kausalität (1)
- Kernelization (1)
- Kette (1)
- Klassifikation (1)
- Klassifizierung (1)
- Klinische Daten (1)
- Knowledge Bases (1)
- Kognitionswissenschaft (1)
- Kollaboration (1)
- Komplexitätstheorie (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Konstruktion von Wissensbasen (1)
- Konstruktion von Wissensgraphen (1)
- Korpusexploration (1)
- Korrektheit (1)
- Kraft (1)
- Kreativität (1)
- Kryptografie (1)
- Kultur (1)
- Kunstanalyse (1)
- LC-MS (1)
- Large networks (1)
- Large scale analytics (1)
- Laserscanning (1)
- Laserschneiden (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Law (1)
- Learning Factory (1)
- Learning analytics (1)
- Learning experience (1)
- Lebendigkeit (1)
- Lecture Recording (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lernerlebnis (1)
- Level-of-detail visualization (1)
- LiDAR (1)
- Lightning Network (1)
- Linear model (1)
- Link layer security (1)
- Literature review (1)
- Live-Migration (1)
- Live-Programmierung (1)
- Lively Kernel (1)
- Liveness (1)
- Load modeling (1)
- Lock-Time-Parameter (1)
- Log data (1)
- Lossy networks (1)
- Low-processing capable devices (1)
- Lösungsraum (1)
- MOOC Remote Lab (1)
- MQTT (1)
- MS (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Manufacturing (1)
- Markov model (1)
- Maschinen (1)
- Maschinenlernen (1)
- Massive Open Online Courses (1)
- Massive open online courses (1)
- Measurement (1)
- Medical research (1)
- Medien Bias (1)
- Mediumzugriffskontrolle (1)
- Meltdown (1)
- Memory Dumping (1)
- Mensch Computer Interaktion (1)
- Mensch-Maschine Interaktion (1)
- Merkmalsauswahl (1)
- Messung (1)
- Meta-Selbstanpassung (1)
- Metacrate (1)
- Metamaterialien (1)
- Metamaterials (1)
- Micro-grid networks (1)
- Microbiome (1)
- Micropayment-Kanäle (1)
- Microservice (1)
- Microservices Security (1)
- Microsoft Azure (1)
- Mindset (1)
- Minimal hitting set (1)
- Minimum spanning tree (1)
- Missing values (1)
- Mixed Reality (1)
- Mobile Learning (1)
- Mobile Mapping (1)
- Mobile applications (1)
- Mobile devices (1)
- Mobile-Mapping (1)
- Mobiles (1)
- Model checking (1)
- Model extraction (1)
- Modelle mit mehreren Versionen (1)
- Modellreparatur (1)
- Monitoring (1)
- Monte Carlo simulation (1)
- Motion Mapping (1)
- Moving Target Defense (1)
- Multi-objective optimization (1)
- Multidisciplinary Teams (1)
- Multiview classification (1)
- Multiziel (1)
- N-of-1 trials (1)
- NASDAQ (1)
- NETCONF (1)
- NLP (1)
- NP-hardness (1)
- Nachrichten (1)
- NameID (1)
- Namecoin (1)
- Named-Entity-Erkennung (1)
- Nash equilibrium (1)
- Natural Language Processing (1)
- Natural language analysis (1)
- Navigational logics (1)
- Nephrology (1)
- Network Science (1)
- Network analysis (1)
- Network creation games (1)
- Netzoptimierung (1)
- Netzwerkprotokolle (1)
- Netzwerksicherheit (1)
- Neural Networks (1)
- Neural networks (1)
- New Public Governance (1)
- Non-photorealistic Rendering (1)
- Non-photorealistic rendering (1)
- Nutzer-Engagement (1)
- Nutzerinteraktion (1)
- Objects (1)
- Objekte (1)
- Off-Chain-Transaktionen (1)
- Offline-Enabled (1)
- Onename (1)
- Online Learning Environments (1)
- Online survey (1)
- Online-Gerichte (1)
- Online-Lernen (1)
- Online-Persönlichkeit (1)
- Ontology (1)
- Open source (1)
- OpenBazaar (1)
- Opinion mining (1)
- Optimal control (1)
- Optimierung (1)
- Oracles (1)
- Ordinances (1)
- Organisationsstudien (1)
- Orphan Block (1)
- Overlapping community detection (1)
- PRISM model checker (1)
- PTCTL (1)
- PTM (1)
- Parallel independence (1)
- Parallel processing (1)
- Parallelized algorithm (1)
- Pareto-Verteilung (1)
- Parking search (1)
- Patents (1)
- Patient (1)
- Patientenermündigung (1)
- Pattern (1)
- Pattern Recognition (1)
- Peer assessment (1)
- Peer-feedback (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Performance evaluation (1)
- Performanz (1)
- Personal genotyping (1)
- Personality Prediction (1)
- Personalized medicine (1)
- PhD thesis (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Point Clouds (1)
- Politik (1)
- Polystore (1)
- Popular matching (1)
- Posenabschätzung (1)
- Power auctioning (1)
- Power consumption characterization (1)
- Power demand (1)
- Predictive Modeling (1)
- Predictive models (1)
- Pricing (1)
- Primary biliary cholangitis (1)
- Primary key (1)
- Primary sclerosing cholangitis (1)
- Prior Knowledge (1)
- Privacy (1)
- Privatsphäre (1)
- Probabilistic timed automata (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Execution (1)
- Process architecture (1)
- Process discovery (1)
- Process landscape (1)
- Process map (1)
- Process mining (1)
- Process model (1)
- Process-related data (1)
- Profilerstellung für Daten (1)
- Programmiererlebnis (1)
- Programmierung (1)
- Programmierwerkzeuge (1)
- Programming course (1)
- Project-based learning (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Proteom (1)
- Proteomics (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozessmodelle (1)
- Prozessmodellierung (1)
- Prädiktive Modellierung (1)
- Psychiatric disorders (1)
- Psychological Emotions (1)
- Psychotherapie (1)
- Punktwolken (1)
- QALY (1)
- Quanten-Computing (1)
- Query optimization (1)
- Query-Optimierung (1)
- REST-Interaktionen (1)
- RESTful choreographies (1)
- RESTful interactions (1)
- RL (1)
- RNAseq (1)
- Random process (1)
- Randomized clinical trials (1)
- Real Estate 4.0 (1)
- Real Walking (1)
- Real-time rendering (1)
- Recht (1)
- Reconfigurable architecture (1)
- Recurrent generative (1)
- Recursion (1)
- Recycling investments (1)
- Refactoring (1)
- Regressionstests (1)
- Rekursion (1)
- Relational model transformation (1)
- Repräsentationslernen (1)
- Requisit (1)
- Resource Allocation (1)
- Resource Management (1)
- Resource constrained smart micro-grids (1)
- Response strategies (1)
- Reverse Engineering (1)
- Ripple (1)
- Roadmap (1)
- Ruby (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SAFE (1)
- SCP (1)
- SET effects (1)
- SHA (1)
- SPV (1)
- SWIRL (1)
- Satz von Chernoff-Hoeffding (1)
- Savanne (1)
- Scalability (1)
- Schelling segregation (1)
- Schema discovery (1)
- Schema-Entdeckung (1)
- Schlafentzugsangriffe (1)
- School (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schule (1)
- Schwachstelle (1)
- Schwierigkeitsgrad (1)
- Scrollytelling (1)
- Secondary Education (1)
- Security analytics (1)
- Selbst-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Enrichment (1)
- Semantic Web (1)
- Semantic enrichment (1)
- Semantische Anreicherung (1)
- Sensor Analytics (1)
- Sensor networks (1)
- Sensor-Analytik (1)
- Sequenzeigenschaften (1)
- Serialisierung (1)
- Servers (1)
- Service-Oriented Architecture (1)
- Service-Oriented Systems (1)
- Service-Orientierte Systeme (1)
- Service-oriented (1)
- Serviceorientierte Architektur (SOA) (1)
- Servicification (1)
- Siamesische Neuronale Netzwerke (1)
- Sicherheitsanalyse (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Simulation study (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skriptsprachen (1)
- Slock.it (1)
- Smart Contracts (1)
- Smart Home Education (1)
- Smart cities (1)
- Social (1)
- Social Bots erkennen (1)
- Soft Fork (1)
- Software (1)
- Software engineering (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalytik (1)
- Softwarevisualisierung (1)
- Solution Space (1)
- Source Code Readability (1)
- Soziale Medien (1)
- Spatial data handling systems (1)
- Spatio-Temporal Data (1)
- Spatio-temporal data analysis (1)
- Spatio-temporal visualization (1)
- Specification (1)
- Spectre (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieltheorie (1)
- Spin system (1)
- Sprachlernen im Limes (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- StackOverflow (1)
- Standard (1)
- Standardisierung (1)
- Stapelverarbeitung (1)
- Static analysis (1)
- Statistical process mining (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stochastic differential games (1)
- Storj (1)
- Storytelling (1)
- Strategic cognition (1)
- Style transfer (1)
- Subject-oriented learning (1)
- Suchtberatung und -therapie (1)
- Supervised Learning (1)
- Supervised deep neural (1)
- Survey (1)
- System design (1)
- Systemmedizin (1)
- Systems Medicine (1)
- TCGA (1)
- Team Assessment (1)
- Team based assignment (1)
- Team-based Learning (1)
- Teamarbeit (1)
- Technology mapping (1)
- Teile und Herrsche (1)
- Telemedizin (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Bitfury Group (1)
- The DAO (1)
- Theorie (1)
- Theory (1)
- Threat Models (1)
- Tiefes Lernen (1)
- Time series analysis (1)
- Time series data (1)
- Timed Automata (1)
- Tools (1)
- Topic modeling (1)
- Tragfähigkeit (1)
- Training (1)
- Trajectory Data Management (1)
- Trajectory visualization (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktion (1)
- Transferlernen (1)
- Transportability (1)
- Transversal hypergraph (1)
- Transversal-Hypergraph (1)
- Tree maintenance (1)
- Tripel-Graph-Grammatiken (1)
- Triple graph grammars (1)
- Two-Way-Peg (1)
- U-Förmiges Lernen (1)
- U-Shaped-Learning (1)
- Ubiquitous (1)
- Ubiquitous business process (1)
- Unbiasedness (1)
- Unified logging system (1)
- Unique column combination (1)
- Unspent Transaction Output (1)
- Unternehmensdateien synchronisieren und teilen (1)
- Unterricht mit digitalen Medien (1)
- User Experience (1)
- Utility-Funktionen (1)
- V2X (1)
- VUCA-World (1)
- Validation (1)
- Verarbeitung natürlicher Sprache (1)
- Verbundwerte (1)
- Verhaltensforschung (1)
- Verhaltensänderung (1)
- Verifikation induktiver Invarianten (1)
- Verlässlichkeit (1)
- Verteilte Systeme (1)
- Vertrauen (1)
- Verträge (1)
- Veränderungsanalyse (1)
- Video annotations (1)
- Virtual Laboratory (1)
- Virtual Machine (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtuelle Maschinen (1)
- Virtuelles Labor (1)
- Visual Analytics (1)
- Visual analytics (1)
- Visualisierungskonzept-Exploration (1)
- Vorhersage (1)
- Vorhersagemodelle (1)
- Vorhersagemodellierung (1)
- Vulnerability analysis (1)
- WALA (1)
- W[3]-Completeness (1)
- Walking (1)
- Water Science and Technology (1)
- Watson IoT (1)
- Wearable (1)
- Web-basiertes Rendering (1)
- Weighted clustering coefficient (1)
- Werkzeugbau (1)
- Wicked Problems (1)
- Wireless sensor networks (1)
- Wirtschaftsinformatik Projekte (1)
- Wissensbasis (1)
- Wissensgraph (1)
- Wissensgraphen (1)
- Wissensgraphen Verfeinerung (1)
- Wissensmanagement (1)
- Wissenstransfer (1)
- Wissensvalidierung (1)
- Wolke (1)
- Word embedding (1)
- Wüstenbildung (1)
- XIC extraction (1)
- YANG (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zooko's triangle (1)
- Zufallsgraphen (1)
- Zugriffskontrolle (1)
- Zustandsverwaltung (1)
- Zählen (1)
- accelerator architectures (1)
- acceptability (1)
- accuracy (1)
- action problems (1)
- active layers (1)
- active touch (1)
- acyclic preferences (1)
- addiction care (1)
- address normalization (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- adversarial network (1)
- agil (1)
- agile (1)
- algorithms (1)
- allocation problem (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analysis (1)
- application conditions (1)
- approximate counting (1)
- apt (1)
- architectural adaptation (1)
- architecture-based software adaptation (1)
- architectures (1)
- architekturbasierte Softwareanpassung (1)
- archive analysis (1)
- art analysis (1)
- artistic image stylization (1)
- aspect-oriented programming (1)
- asset management (1)
- atomic swap (1)
- attribute assurance (1)
- auction (1)
- automation (1)
- autonomous (1)
- availability (1)
- average-case analysis (1)
- bachelor project (1)
- batch activity (1)
- batch processing (1)
- behavior psychotherapy (1)
- behavioral sciences (1)
- behaviourally correct learning (1)
- benchmarking (1)
- benutzergenerierte Inhalte (1)
- bestärkendes Lernen (1)
- bidirectional payment channels (1)
- bildbasierte Repräsentation (1)
- bildbasiertes Rendering (1)
- bioinformatics (1)
- bioinformatics tool (1)
- biologisches Vorwissen (1)
- biomarker detection (1)
- bitcoins (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- Bluemix platform (1)
- bounded backward model checking (1)
- brand personality (1)
- breast-cancer (1)
- business process (1)
- business process architectures (1)
- business process choreographies (1)
- business process managament (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- chain (1)
- change detection (1)
- classification zone (1)
- classification zone violations (1)
- clinical exome (1)
- cliquy tree (1)
- cloud monitoring (1)
- code generation (1)
- cognitive load (1)
- cognitive patterns (1)
- cognitive science (1)
- collaboration (1)
- collaborative learning (1)
- collective intelligence (1)
- columnar databases (1)
- combinational logic (1)
- communication (1)
- comparison (1)
- competence (1)
- complex event processing (1)
- complex networks (1)
- complexity (1)
- complexity theory (1)
- compositional analysis (1)
- computational design (1)
- computational hardness (1)
- computational mass spectrometry (1)
- computational models (1)
- computational photography (1)
- computer graphics (1)
- computer science (1)
- computer science education (1)
- computer-aided design (1)
- computergestützte Gestaltung (1)
- computervermittelte Therapie (1)
- computing (1)
- confirmation period (1)
- confluence (1)
- conformance checking (1)
- connectivity (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constrained optimization (1)
- content gamification (1)
- contest period (1)
- context (1)
- context groups (1)
- contextual-variability modeling (1)
- continuous integration (1)
- contracts (1)
- convolutional neural networks (1)
- coping ability (1)
- corpus exploration (1)
- corticospinal tract (1)
- cost (1)
- cost-effectiveness (1)
- cost-utility analysis (1)
- courts of justice (1)
- crochet (1)
- cross-chain (1)
- cross-platform (1)
- cultural heritage (1)
- culture (1)
- cumulative culture (1)
- curex (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- data (1)
- data analytics (1)
- data cleaning (1)
- data dependencies (1)
- data management (1)
- data matching (1)
- data mining (1)
- data models (1)
- data privacy (1)
- data processing (1)
- data science (1)
- data set (1)
- data synthesis (1)
- data transfer (1)
- data visualisation (1)
- data visualization (1)
- data wrangling (1)
- data-driven (1)
- database optimization (1)
- database replication (1)
- database tuning (1)
- datengetrieben (1)
- de-anonymisation (1)
- debugging (1)
- decentral identities (1)
- decentralized applications (1)
- decentralized autonomous organization (1)
- decision making (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision soundness (1)
- decision-aware process models (1)
- decoupling cells (1)
- decubitus (1)
- deep Gaussian processes (1)
- deep kernel learning (1)
- degree-of-interest techniques (1)
- demand learning (1)
- demografische Informationen (1)
- demographic information (1)
- denial of sleep (1)
- dependability (1)
- dependency (1)
- desertification (1)
- design (1)
- design behaviour (1)
- design cognition (1)
- design research (1)
- developing countries (1)
- development artifacts (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- dialect (1)
- die arabische Welt (1)
- difficulty (1)
- difficulty target (1)
- diffusion MRI (1)
- digital fabrication (1)
- digital health app (1)
- digital innovation (1)
- digital innovation units (1)
- digital learning (1)
- digital picture archive (1)
- digital product innovation (1)
- digital therapy (1)
- digital unterstützter Unterricht (1)
- digital whiteboard (1)
- digitale Fabrikation (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitale Innovation (1)
- digitale Innovationseinheit (1)
- digitale Produktinnovation (1)
- digitales Bildarchiv (1)
- digitales Lernen (1)
- digitales Whiteboard (1)
- digitalisation (1)
- digitalización (1)
- dimensionality reduction (1)
- direct manipulation (1)
- disability-adjusted life years (1)
- discovery (1)
- discrete-event model (1)
- diseño (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed performance monitoring (1)
- divide-and-conquer (1)
- doctor-patient relationship (1)
- document analysis (1)
- domain-specific knowledge graphs (1)
- domain-specific modeling (1)
- dominant matching (1)
- domänenspezifisches Wissensgraphen (1)
- doppelter Hashwert (1)
- double hashing (1)
- drahtloses Netzwerk (1)
- drift theory (1)
- dry fasting (1)
- dsps (1)
- dynamic AOP (1)
- dynamic causal modeling (1)
- dynamic service adaptation (1)
- dynamic systems (1)
- dynamics (1)
- dynamische Systeme (1)
- e-Commerce (1)
- e-commerce (1)
- eLearning (1)
- eating behaviour (1)
- economic (1)
- edge-weighted networks (1)
- education (1)
- electrical muscle stimulation (1)
- electrodermal activity (1)
- elektrische Muskelstimulation (1)
- embeddings (1)
- emergence (1)
- emotion classification (1)
- emotion measurement (1)
- emotional cognitive dynamics (1)
- emotional kognitive Dynamiken (1)
- endpoint security (1)
- enforceability (1)
- entity-component-system (1)
- entscheidungsbewusste Prozessmodelle (1)
- enumeration algorithms (1)
- erzeugende gegnerische Netzwerke (1)
- escalation of commitment (1)
- eskalierendes Commitment (1)
- espbench (1)
- estimation-of-distribution algorithms (1)
- estudios de organización (1)
- event normalization (1)
- event subscription (1)
- evolution of digital innovation units (1)
- evolutionary computation (1)
- exact algorithms (1)
- expectation maximisation algorithm (1)
- experience (1)
- exploration (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- extend (1)
- extension problems (1)
- eye-tracking (1)
- facial mimicry (1)
- fault tolerance (1)
- feature selection (1)
- federated voting (1)
- field study (1)
- file structure (1)
- final report (1)
- font engineering (1)
- font rendering (1)
- force (1)
- forest number (1)
- formal testing (1)
- fortschrittliche Angriffe (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- game dynamics (1)
- game theory (1)
- gameful learning (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression (1)
- gene selection (1)
- generalized discrimination networks (1)
- generative adversarial networks (1)
- genomics (1)
- geographical distribution (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- getypte Attributierte Graphen (1)
- gigantische Netzwerke (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph (1)
- graph analysis (1)
- graph conditions (1)
- graph constraints (1)
- graph inference (1)
- graph mining (1)
- graph neural networks (1)
- graph repair (1)
- graph theory (1)
- graph transformations (1)
- graph-transformations (1)
- graphical query language (1)
- graphische neuronale Netze (1)
- grobe Protokolle (1)
- group-based behavior adaptation (1)
- haptic feedback (1)
- haptisches Feedback (1)
- hardware (1)
- hashrate (1)
- health app (1)
- health behaviour (1)
- healthcare (1)
- hepatitis (1)
- heuristics (1)
- hierarchical data (1)
- hierarchische Daten (1)
- higher education (1)
- history-aware runtime models (1)
- hitting sets (1)
- homomorphisms (1)
- human computer interaction (1)
- human-centered (1)
- human-centered design (1)
- human-computer interaction (1)
- human-robot interaction (1)
- human-scale (1)
- hybrid (1)
- hybrid systems (1)
- hyperbolic geometry (1)
- hyperbolic random graphs (1)
- hyperbolische Geometrie (1)
- hyperbolische Zufallsgraphen (1)
- identity (1)
- image abstraction (1)
- image-based rendering (1)
- image-based representation (1)
- imbalanced learning (1)
- immediacy (1)
- implementation (1)
- implied methods (1)
- in-memory (1)
- in-memory data management (1)
- inclusion dependency (1)
- incremental graph query evaluation (1)
- independency tree (1)
- inductive invariant checking (1)
- industry (1)
- industry 4.0 (1)
- information extraction (1)
- information flows (1)
- information systems projects (1)
- inkrementelle Ausführung von Graphanfragen (1)
- innovación (1)
- insula (1)
- integer linear programming (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interactive visualization (1)
- interaktive Medien (1)
- interaktive Visualisierung (1)
- interdisziplinäre Teams (1)
- intermittent food restriction (1)
- internet of things (1)
- internet topology (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intervention (1)
- intransitivity (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invention (1)
- invention mechanism (1)
- inventory management (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant (1)
- k-inductive invariant checking (1)
- k-induktive Invariante (1)
- k-induktive Invariantenprüfung (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- key establishment (1)
- key management (1)
- key revocation (1)
- knowledge base (1)
- knowledge base construction (1)
- knowledge graph (1)
- knowledge graph construction (1)
- knowledge graph refinement (1)
- knowledge graphs (1)
- knowledge transfer (1)
- knowledge validation (1)
- kollaboratives Arbeiten (1)
- kollaboratives Lernen (1)
- komplexe Ereignisverarbeitung (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliche Integration (1)
- kulturelles Erbe (1)
- label-free quantification (1)
- language learning in the limit (1)
- large scale mechanism (1)
- large-scale mechanism (1)
- laser cutting (1)
- laserscanning (1)
- law (1)
- learner engagement (1)
- learning (1)
- learning factories (1)
- learning platform (1)
- learning styles (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- level-replacement systems (1)
- life-centered (1)
- linear programming (1)
- link layer security (1)
- literature review (1)
- live migration (1)
- live programming (1)
- lively groups (1)
- load balancing (1)
- load-bearing (1)
- logic rules (1)
- logische Regeln (1)
- low back pain (1)
- low-duty-cycling (1)
- mHealth (1)
- machine (1)
- machines (1)
- male infertility (1)
- management (1)
- manufacturing (1)
- manufacturing companies (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- massive networks (1)
- massive open online courses (1)
- matching dependencies (1)
- measurement (1)
- media (1)
- media bias (1)
- medical image analysis (1)
- medium access control (1)
- medizinische Bildanalyse (1)
- medizinische Dokumentation (1)
- mehrsprachige Ausführungsumgebungen (1)
- mehrstufiger Angriff (1)
- menschenzentriert (1)
- menschenzentriertes Design (1)
- mental models (1)
- merged mining (1)
- merkle root (1)
- meta self-adaptation (1)
- metaanalysis (1)
- metabolomics (1)
- metacognition (1)
- metacrate (1)
- metadata detection (1)
- metamaterials (1)
- metanome (1)
- methods (1)
- metric temporal graph logic (1)
- metric temporal logic (1)
- metric termporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- micropayment (1)
- micropayment channels (1)
- microstructures (1)
- mindfulness (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mixed data (1)
- mixed methods (1)
- mobile app (1)
- mobile applications (1)
- mobile health (1)
- model (1)
- model repair (1)
- model-driven software engineering (1)
- modellgesteuerte Entwicklung (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modification stoichiometry (1)
- motion and force (1)
- mpmUCC (1)
- multi-objective (1)
- multi-step attack (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multimodal wireless sensor network (1)
- mutations (1)
- named entity mining (1)
- narrative (1)
- natürliche Sprachverarbeitung (1)
- network optimization (1)
- network protocols (1)
- network security (1)
- networks (1)
- news (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nicht-uniforme Verteilung (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- non-uniform distribution (1)
- nonce (1)
- note-taking (1)
- novelty detection (1)
- null results (1)
- nutzergenerierte Inhalte (1)
- object-oriented languages (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- oligopoly competition (1)
- omics (1)
- oneM2M (1)
- online courts (1)
- online personality (1)
- open innovation (1)
- operating systems (1)
- optical character recognition (1)
- optimization (1)
- order dependencies (1)
- organization studies (1)
- orthopedic (1)
- oxytocin (1)
- packrat parsing (1)
- pain (1)
- parallel and sequential independence (1)
- parallel processing (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- parietal operculum (1)
- parsing expression grammars (1)
- partial order resolution (1)
- partial replication (1)
- partielle Replikation (1)
- partition functions (1)
- patent (1)
- patent analysis (1)
- patient empowerment (1)
- peer evaluation (1)
- peer feedback (1)
- peer review (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance (1)
- performance models of virtual machines (1)
- persistent memory (1)
- persistenter Speicher (1)
- personality prediction (1)
- phosphoproteomics (1)
- photoplethysmography (1)
- physiological signals (1)
- pmem (1)
- point-based rendering (1)
- politics (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- polynomials (1)
- polystore (1)
- popular matching (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- post-translational modification (1)
- power law (1)
- predicated generic functions (1)
- predicted spectra (1)
- prediction (1)
- prediction models (1)
- price of anarchy (1)
- primary healthcare (1)
- prior knowledge (1)
- privacy (1)
- probabilistic machine learning (1)
- probabilistic routing (1)
- probabilistic sensitivity analysis (1)
- probabilistisches maschinelles Lernen (1)
- process execution (1)
- process modeling (1)
- process models (1)
- production networks (1)
- programmierbare Materie (1)
- programming experience (1)
- programming tools (1)
- programs (1)
- progressive rendering (1)
- progressives Rendering (1)
- project based learning (1)
- props (1)
- proteomics graph networks (1)
- prototyping (1)
- pseudo-Boolean optimization (1)
- pseudoboolesche Optimierung (1)
- psychopy experiments (1)
- psychotherapy (1)
- qualitative model (1)
- qualitatives Modell (1)
- quality-adjusted life years (1)
- quantification (1)
- quantum computing (1)
- quasi-identifier discovery (1)
- quorum slices (1)
- radiation hardening (1)
- railways (1)
- rainbow connection (1)
- random graphs (1)
- random k-SAT (1)
- reactive object queries (1)
- rechnerunterstütztes Konstruieren (1)
- recommendations (1)
- reconfigurable systems (1)
- record linkage (1)
- regression testing (1)
- rekeying (1)
- relational structures (1)
- relationale Strukturen (1)
- reliability (1)
- religiously motivated (1)
- remote sensing (1)
- reported outcome measures (1)
- representation learning (1)
- reverse engineering (1)
- reward (1)
- risk (1)
- robot voice (1)
- rootstock (1)
- rough logs (1)
- run time analysis (1)
- runtime models (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- satisfiability threshold (1)
- savanna (1)
- scalability (1)
- scalability of blockchain (1)
- scalable (1)
- scarce tokens (1)
- schwach überwachtes maschinelles Lernen (1)
- screening tools (1)
- scripting languages (1)
- scrollytelling (1)
- secure data flow diagrams (DFDsec) (1)
- secure multi-execution (1)
- selbst-souveräne Identitäten (1)
- selbstanpassende Systeme (1)
- selbstbestimmte Identitäten (1)
- selbstheilende Systeme (1)
- selbstüberwachtes Lernen (1)
- self-adaptive software (1)
- self-adaptive systems (1)
- self-driving (1)
- self-efficacy (1)
- self-government (1)
- self-healing (1)
- semantic classification (1)
- semantic representations (1)
- semantische Klassifizierung (1)
- semantische Repräsentationen (1)
- sensors (1)
- sequence properties (1)
- serialization (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architecture (SOA) (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- siamese neural networks (1)
- sichere Datenflussdiagramme (DFDsec) (1)
- sidechain (1)
- situational awareness (1)
- skalierbar (1)
- small talk (1)
- smalltalk (1)
- soccer analytics (1)
- social bot detection (1)
- social interaction (1)
- social media analysis (1)
- social modulation (1)
- social networking (1)
- social robot (1)
- software analytics (1)
- software evolution (1)
- software visualization (1)
- software/hardware co-design (1)
- somatosensation (1)
- soundness (1)
- sozialen Medien (1)
- soziales Netzwerk (1)
- spaltenorientierte Datenbanken (1)
- spatial aggregation (1)
- spatio-temporal data management (1)
- specification of timed graph transformations (1)
- spectrum clustering (1)
- squeak (1)
- standard (1)
- standardization (1)
- stark verhaltenskorrekt sperrend (1)
- state management (1)
- state space modelling (1)
- static source-code analysis (1)
- statische Quellcodeanalyse (1)
- statistics (1)
- stochastic process (1)
- storytelling (1)
- stream processing (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- super stable matching (1)
- support vector machine (1)
- susceptibility (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- tabellarische Dateien (1)
- tabular data (1)
- task realization strategies (1)
- taxonomy (1)
- teamwork (1)
- technology (1)
- telemedicine (1)
- telework (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- test case prioritization (1)
- test results (1)
- text classification (1)
- the Arab world (1)
- theory (1)
- threat analysis (1)
- threat detection (1)
- threat model (1)
- tiefe Gauß-Prozesse (1)
- time horizon (1)
- timed automata (1)
- timed graph (1)
- tissue-awareness (1)
- tool building (1)
- tools (1)
- toxic comment classification (1)
- tractography (1)
- trajectories (1)
- transaction (1)
- transcriptomics (1)
- transfer learning (1)
- transformation (1)
- transversal hypergraph (1)
- tribunales de justicia (1)
- tribunales en línea (1)
- triple graph grammars (1)
- typed attributed symbolic graphs (1)
- typisierte attributierte Graphen (1)
- uBPMN (1)
- ubiquitous business process model and notation (uBPMN) (1)
- ubiquitous business process modeling (1)
- ubiquitous computing (ubicomp) (1)
- ubiquitous decision-aware business process (1)
- ubiquitous decisions (1)
- unbalancierter Datensatz (1)
- uncertainty (1)
- unique column combination (1)
- univariat (1)
- univariate (1)
- unsupervised (1)
- unsupervised learning (1)
- upper bound (1)
- usability (1)
- user engagement (1)
- user research framework (1)
- user-centered design (1)
- utility functions (1)
- variants (1)
- variational inference (1)
- variationelle Inferenz (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- verstärkendes Lernen (1)
- verteilte Berechnung (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- veteran (1)
- virtual (1)
- virtual machines (1)
- virtual reality (1)
- virtuell (1)
- virtuelle Maschinen (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vocational training (1)
- vulnerability (1)
- wake-up radio (1)
- weak supervision (1)
- weakly (1)
- wearable EEG (muse and neurosity crown) (1)
- wearables (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- well-being (1)
- wireless networks (1)
- zero-day (1)
- zufälliges k-SAT (1)
- Überwachtes Lernen (1)
- überprüfbare Nachweise (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering GmbH (439)
“One video fit for all”
(2023)
Online learning in mathematics has always been challenging, especially for mathematics in STEM education. This paper presents how to make “one fit for all” lecture videos for mathematics in STEM education. In general, we believe that there is no such thing as a “one fit for all” video: the curriculum requires a high level of prior knowledge in mathematics from high school, and the variation in prior knowledge among STEM students is often large. This creates challenges for both online and on-campus teaching. This article presents experiments and research on a video format in which students get a real-time feeling and which fits their needs given their existing prior knowledge. They can ask questions and receive answers during the video without feeling that they must jump between different sources, which helps to reduce unnecessary distractions. The fundamental video format presented here is that of dynamic branching videos, which has so far received little attention in educational research. One reason might be that this field is quite new for higher education, and the platforms available so far place relatively high demands on teachers’ video-editing skills. The videos were implemented for engineering students taking the Linear Algebra course at the Norwegian University of Science and Technology in spring 2023. Feedback from the students, gathered via anonymous surveys so far (N = 21), is very positive. Given its high suitability for online teaching, this video format might lead the trend of online learning in the future. The design and implementation of dynamic videos for mathematics in higher education was first presented at the EMOOCs conference 2023.
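The branching mechanism behind such videos can be sketched as a small decision graph; the segment names, questions, and answers below are invented purely for illustration:

```python
# Hypothetical sketch of a dynamic branching video: each segment may end with
# a question, and the viewer's answer selects the next segment. Segment names
# and questions are invented, not taken from the course described above.
segments = {
    "intro":        {"question": "Comfortable with matrix multiplication?",
                     "yes": "eigenvalues", "no": "matmul_recap"},
    "matmul_recap": {"question": None, "next": "eigenvalues"},
    "eigenvalues":  {"question": None, "next": None},  # end of lecture
}

def play(start, answers):
    """Walk the branching graph, consuming one answer per question."""
    path, node, answers = [], start, list(answers)
    while node is not None:
        path.append(node)
        seg = segments[node]
        node = seg[answers.pop(0)] if seg["question"] else seg.get("next")
    return path

# A student with strong prior knowledge skips the recap:
assert play("intro", ["yes"]) == ["intro", "eigenvalues"]
# A student without it sees the extra segment first:
assert play("intro", ["no"]) == ["intro", "matmul_recap", "eigenvalues"]
```

The same graph serves all students, which is what makes a single branching video "one fit for all" despite heterogeneous prior knowledge.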
“Ick bin een Berlina”
(2024)
Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.
Methods: Our study examined the impact of the Berlin dialect on the perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (mean age = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either Berlin dialect or standard German and assessed its trustworthiness and competence.
Results: We found a positive relationship between participants’ self-reported Berlin dialect proficiency and their trust in the dialect-speaking robot. Only when controlling for demographic factors was there a positive association between participants’ dialect proficiency and dialect performance and their assessment of the competence of the standard-German-speaking robot. Participants’ age, gender, length of residency in Berlin, and the device used to respond also influenced the assessments. Finally, the robot’s competence positively predicted its trustworthiness.
Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.
We present fully polynomial time approximation schemes for a broad class of Holant problems with complex edge weights, which we call Holant polynomials. We transform these problems into partition functions of abstract combinatorial structures known as polymers in statistical physics. Our method involves establishing zero-free regions for the partition functions of polymer models and using the most significant terms of the cluster expansion to approximate them. Results of our technique include new approximation and sampling algorithms for a diverse class of Holant polynomials in the low-temperature regime (i.e. small external field) and approximation algorithms for general Holant problems with small signature weights. Additionally, we give randomised approximation and sampling algorithms with faster running times for more restrictive classes. Finally, we improve the known zero-free regions for a perfect matching polynomial.
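As a hedged sketch of the machinery the abstract refers to (standard polymer-model notation from statistical physics; the concrete polymers and weights for Holant polynomials are defined in the paper itself), the partition function of a polymer model and its cluster expansion take the form

```latex
\Xi \;=\; \sum_{\substack{\Gamma \subseteq \mathcal{P} \\ \text{pairwise compatible}}}
\;\prod_{\gamma \in \Gamma} w_\gamma ,
\qquad
\log \Xi \;=\; \sum_{\Gamma' \in \mathcal{C}} \phi(\Gamma') \prod_{\gamma \in \Gamma'} w_\gamma ,
```

where \(\mathcal{P}\) is the set of polymers with weights \(w_\gamma\), \(\mathcal{C}\) is the set of clusters (ordered tuples of pairwise incompatible polymers), and \(\phi\) is the Ursell function. When \(\Xi\) is zero-free in the relevant region, truncating the second sum at clusters of logarithmically bounded size gives the small additive error underlying the approximation scheme.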
xMOOCs
(2023)
The World Health Organization designed OpenWHO.org to provide an inclusive and accessible online environment that equips learners across the globe with critical, up-to-date information so that they can effectively protect themselves in health emergencies. The platform thus focuses on the eXtended Massive Open Online Course (xMOOC) modality – content-focused and expert-driven, one-to-many in model, and self-paced for scalable learning. In this paper, we describe how OpenWHO utilized xMOOCs to reach mass audiences during the COVID-19 pandemic; the paper specifically examines the accessibility, language inclusivity and adaptability of hosted xMOOCs. As of February 2023, OpenWHO had 7.5 million enrolments across 200 xMOOCs on health emergency, epidemic, pandemic and other public health topics available across 65 languages, including 46 courses targeted at the COVID-19 pandemic. Our results suggest that the xMOOC modality allowed OpenWHO to expand learning during the pandemic to previously underrepresented groups, including women, participants aged 70 and older, and learners younger than 20. The OpenWHO use case shows that xMOOCs should be considered when there is a need for massive knowledge transfer in health emergency situations, yet the approach should be context-specific according to the type of health emergency, targeted population and region. Our evidence also supports previous calls to put intervention elements that contribute to removing barriers to access at the core of learning and health information dissemination. Equity must be the fundamental principle and organizing criterion for public health work.
Modern server systems with large NUMA architectures necessitate (i) distributing data over the available computing nodes and (ii) NUMA-aware query processing to enable effective parallel processing in database systems. As these architectures incur significant latency and throughput penalties for accessing non-local data, queries should be executed as close as possible to the data. To further increase both performance and efficiency, data that is not relevant for the query result should be skipped as early as possible. One way to achieve this goal is horizontal partitioning to improve static partition pruning. As part of our ongoing work on workload-driven partitioning, we have implemented a recent approach called aggressive data skipping and extended it to handle both analytical and transactional access patterns. In this paper, we evaluate this approach with the workload and data of a production enterprise system of a Global 2000 company. The results show that over 80% of all tuples can be skipped on average, while the resulting partitioning schemata are surprisingly stable over time.
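The static partition pruning that data skipping builds on can be illustrated with per-partition min/max metadata (zone maps). This is a minimal sketch of that mechanism, not the paper's aggressive-skipping algorithm:

```python
# Minimal sketch of static partition pruning with min/max "zone maps" per
# horizontal partition. Not the paper's algorithm, only the basic mechanism
# that aggressive data skipping builds on.
def build_partitions(rows, size):
    """Split rows into fixed-size partitions and record min/max per partition."""
    parts = [rows[i:i + size] for i in range(0, len(rows), size)]
    return [(min(p), max(p), p) for p in parts]

def scan_eq(partitions, value):
    """Scan only partitions whose [min, max] range can contain `value`."""
    hits, skipped = [], 0
    for lo, hi, rows in partitions:
        if value < lo or value > hi:
            skipped += 1          # pruned without touching any tuple
            continue
        hits.extend(r for r in rows if r == value)
    return hits, skipped

parts = build_partitions(sorted(range(100)), size=10)   # well-clustered data
hits, skipped = scan_eq(parts, 42)
assert hits == [42] and skipped == 9   # 9 of 10 partitions skipped
```

How many partitions can actually be skipped depends on how well the partitioning scheme clusters the workload's predicates, which is exactly what workload-driven partitioning tries to optimize.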
Workload-Driven Fragment Allocation for Partially Replicated Databases Using Linear Programming
(2019)
In replication schemes, replica nodes can process read-only queries on snapshots of the master node without violating transactional consistency. By analyzing the workload, we can identify query access patterns and replicate data according to its access frequency. In this paper, we define a linear programming (LP) model to calculate the set of partial replicas with the lowest overall memory capacity while evenly balancing the query load. Furthermore, we propose a scalable decomposition heuristic to calculate solutions for larger problem sizes. While guaranteeing the same performance as state-of-the-art heuristics, our decomposition approach calculates allocations with up to 23% lower memory footprint for the TPC-H benchmark.
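A toy instance of this allocation problem can be sketched as follows. Real instances use an LP/ILP solver; exhaustive search over the tiny assignment space stands in for it here, and all fragment sizes and load shares are invented:

```python
from itertools import product

# Toy fragment allocation: assign each query class to one of two replica
# nodes so that total memory (sum of sizes of all stored fragments) is
# minimal while no node exceeds a load bound. All numbers are made up.
frag_size = {"A": 4, "B": 2, "C": 3}
queries = {            # query -> (fragments accessed, load share)
    "q1": ({"A", "B"}, 0.4),
    "q2": ({"B"},      0.3),
    "q3": ({"C"},      0.3),
}

def best_allocation(n_nodes=2, max_load=0.7):
    best = None
    for assign in product(range(n_nodes), repeat=len(queries)):
        nodes = [set() for _ in range(n_nodes)]
        load = [0.0] * n_nodes
        for (frags, share), node in zip(queries.values(), assign):
            nodes[node] |= frags      # node must store what its queries read
            load[node] += share
        if max(load) > max_load:
            continue                  # violates the balance constraint
        mem = sum(frag_size[f] for ns in nodes for f in ns)
        if best is None or mem < best[0]:
            best = (mem, [sorted(ns) for ns in nodes])
    return best

mem, layout = best_allocation()
assert mem == 9   # each fragment stored exactly once: 4 + 2 + 3
```

Grouping q1 and q2 on one node lets them share fragment B, which is the kind of co-location an LP objective over memory footprint rewards.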
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) quality. My contribution focuses on the network part, where I study the relation between acoustic and network quality when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality through unnecessary delays. Hence, I develop RL solutions that select the subset of microphones under network constraints while the speaker is moving and still provide good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
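The RL admission controller from part one can be illustrated, under heavy simplification, with a tabular Q-learner that decides whether to admit an arriving application at a given load level; states, rewards, and dynamics below are toy assumptions, not the thesis model:

```python
import random

# Toy tabular Q-learning admission controller: states are coarse load
# levels, actions are admit (1) / reject (0). Rewards and dynamics are
# invented for illustration only.
random.seed(0)
LEVELS, ALPHA, GAMMA, EPS = 4, 0.2, 0.9, 0.1      # load levels 0..3
Q = {(s, a): 0.0 for s in range(LEVELS) for a in (0, 1)}

def step(load, admit):
    """Admitting earns reward, but admitting at saturation is penalized."""
    if not admit:
        return max(load - 1, 0), 0.0               # a job departs, no gain
    return (load + 1, 1.0) if load < LEVELS - 1 else (load, -5.0)

load = 0
for _ in range(5000):
    a = random.randint(0, 1) if random.random() < EPS else \
        max((0, 1), key=lambda x: Q[(load, x)])
    nxt, r = step(load, a)
    Q[(load, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(load, a)])
    load = nxt

# The learned policy admits at low load and rejects near saturation:
assert Q[(0, 1)] > Q[(0, 0)] and Q[(LEVELS - 1, 0)] > Q[(LEVELS - 1, 1)]
```

The same loop shape generalizes to the richer state spaces (wireless and computation resources) an actual admission controller would observe.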
Clustering is important in education for identifying groups of objects and uncovering correlated patterns in educational datasets. MOOCs provide a rich source of educational datasets that enable a wide selection of clustering options and an opportunity for cohort analyses. In this experience paper, five research studies on clustering in MOOCs are reviewed, drawing out the reasoning, methods, and student clusters that reflect certain kinds of learning behaviour. The collection of varied clusters shows that each study identifies and defines clusters according to distinctive engagement patterns. Implications and a summary are provided at the end of the paper.
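The kind of engagement-based clustering these studies perform can be sketched with a minimal k-means over two invented features (videos watched, quizzes taken); the data, features, and choice of k are assumptions for illustration:

```python
import random

# Toy k-means over two invented engagement features to illustrate how
# MOOC studies recover behavioural cohorts. Data and k are made up.
random.seed(1)
active  = [(random.gauss(90, 5), random.gauss(40, 3)) for _ in range(30)]
sampler = [(random.gauss(10, 5), random.gauss(2, 1))  for _ in range(30)]
points = active + sampler

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm: assign to nearest center, then recenter."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

centers, groups = kmeans(points, centers=[points[0], points[-1]])
# The two behavioural cohorts ("active" vs "sampler") are recovered:
assert sorted(len(g) for g in groups) == [30, 30]
```

Interpreting what each recovered cluster means pedagogically (e.g. "completers" vs "samplers") is exactly the step where the five reviewed studies diverge.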
Which event happened first?
(2021)
First come, first served: Critical choices between alternative actions are often made based on events external to an organization, and reacting promptly to their occurrence can be a major advantage over the competition. In Business Process Management (BPM), such deferred choices can be expressed in process models, and they are an important aspect of process engines. Blockchain-based process execution approaches are no exception to this, but are severely limited by the inherent properties of the platform: The isolated environment prevents direct access to external entities and data, and the non-continual runtime based entirely on atomic transactions impedes the monitoring and detection of events. In this paper we provide an in-depth examination of the semantics of deferred choice, and transfer them to environments such as the blockchain. We introduce and compare several oracle architectures able to satisfy certain requirements, and show that they can be implemented using state-of-the-art blockchain technology.
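The first-come-first-served semantics of a deferred choice can be sketched independently of any particular blockchain platform; the class below is a hypothetical simplification of the oracle/contract interaction, not the paper's architecture:

```python
# Sketch of deferred-choice semantics: whichever subscribed event is
# observed first resolves the choice; later or unsubscribed events are
# ignored. The oracle/contract split is collapsed into one class here.
class DeferredChoice:
    def __init__(self, branches):
        self.branches = set(branches)   # events the process is waiting for
        self.winner = None

    def notify(self, event):            # invoked by an oracle transaction
        if self.winner is None and event in self.branches:
            self.winner = event
            return True                 # this event resolved the choice
        return False                    # too late, or not subscribed

choice = DeferredChoice({"goods_arrived", "order_cancelled"})
assert choice.notify("order_cancelled") is True   # first event wins
assert choice.notify("goods_arrived") is False    # ignored afterwards
assert choice.winner == "order_cancelled"
```

On a blockchain, the interesting part is precisely what this sketch hides: since there is no continual runtime, some oracle architecture must turn external event occurrences into transactions that invoke `notify`, and ordering among competing transactions decides which event "came first".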
What Stays in Mind?
(2018)
In an effort to describe and produce different formats for video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted "talk-and-chalk" or a "talking head" version of a learning unit. Since these production styles comprise various sub-elements, this paper deconstructs the constituent elements of video production in the context of educational live-streams. Drawing on over 700 videos from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), we identified 92 features across eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less-studied features such as social media connections and camera perspectives that change depending on the topic being covered. Overall, the research results enable an analysis of common video production styles and provide a toolbox for categorizing new formats, independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
The goal of this paper is to study the demand factors driving enrollment in massive open online courses. Using course-level data from a French MOOC platform, we study the course-, teacher-, and institution-related characteristics that influence the enrollment decision of students, in a setting where enrollment is open to all students without administrative barriers. Coverage of the course in social and traditional media is a key driver. In addition, the language of instruction and the (estimated) amount of work needed to complete the course also have a significant impact. The data also suggest that the presence of same-side externalities is limited. Finally, preferences of national and international students tend to differ on several dimensions.
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps.
The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and that then render those received 3D models. However, these applications have to implement most stages of the visualization pipeline on the client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations.
Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models: It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models: This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models: It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction, and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models: It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths via a map-based user interface and configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can be updated automatically when the underlying data changes.
5. Service-Based Camera Interaction: It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build such web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for the design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems, a key challenge of today's IT systems engineering in building scalable IT solutions.
Quantifying neurological disorders from voice is a rapidly growing field of research and holds promise for unobtrusive and large-scale disorder monitoring. The data recording setup and data analysis pipelines are both crucial aspects to effectively obtain relevant information from participants. Therefore, we performed a systematic review to provide a high-level overview of practices across various neurological disorders and highlight emerging trends. PRISMA-based literature searches were conducted through PubMed, Web of Science, and IEEE Xplore to identify publications in which original (i.e., newly recorded) datasets were collected. Disorders of interest were psychiatric disorders such as bipolar disorder, depression, and stress, neurodegenerative disorders such as amyotrophic lateral sclerosis, Alzheimer's, and Parkinson's disease, and speech impairments (aphasia, dysarthria, and dysphonia). Of the 43 retrieved studies, Parkinson's disease is represented most prominently, with 19 discovered datasets. Free speech and read speech tasks are most commonly used across disorders. Besides popular feature extraction toolkits, many studies utilise custom-built feature sets. Correlations of acoustic features with psychiatric and neurodegenerative disorders are presented. In terms of analysis, statistical testing of individual features for significance is commonly used, as well as predictive modeling approaches, especially with support vector machines and a small number of artificial neural networks. An emerging trend and recommendation for future studies is to collect data in everyday life to facilitate longitudinal data collection and to capture the behavior of participants more naturally. Another emerging trend is to record modalities in addition to voice, which can potentially increase analytical performance.
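The feature-disorder correlations reported in such reviews are typically Pearson coefficients. As a minimal, self-contained sketch (the jitter and severity values below are invented and do not come from any reviewed study):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: jitter (a common voice feature) vs. a clinical severity score.
jitter   = [0.4, 0.7, 1.1, 1.5, 2.0]
severity = [10, 14, 22, 25, 33]
r = pearson(jitter, severity)
print(round(r, 3))
```

In practice, such coefficients are accompanied by significance tests and computed over far larger cohorts.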
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
Founded in 2013, OpenClassrooms is a French online learning company that offers both paid courses and free MOOCs on a wide range of topics, including computer science and education. In 2021, in partnership with the EDA research unit, OpenClassrooms shared a database to address the problem of how to increase persistence in their paid courses, which consist of a series of MOOCs and human mentoring. Our statistical analysis aims to identify reasons for dropout that are due to the course design rather than demographic predictors or external factors. We aim to identify at-risk students, i.e. those who are on the verge of dropping out at a specific moment. To achieve this, we use learning analytics to characterize student behavior. We conducted data analysis on a sample of data related to the "Web Designers" and "Instructional Design" courses. By visualizing the student flow and constructing speed and acceleration predictors, we can identify which parts of the course need to be calibrated and when particular attention should be paid to these at-risk students.
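Speed and acceleration predictors of the kind mentioned above can be sketched from unit-completion timestamps. The data, threshold, and function names below are invented for illustration; the study's actual predictors may be defined differently:

```python
def progress_indicators(days):
    """`days`: increasing day offsets at which a student completed successive units.
    Speed = units per day between consecutive completions; acceleration = change in speed."""
    speeds = [1.0 / (b - a) for a, b in zip(days, days[1:])]
    accelerations = [s2 - s1 for s1, s2 in zip(speeds, speeds[1:])]
    return speeds, accelerations

def at_risk(days, slowdown=0.0):
    """Flag a student whose latest speed change is a slowdown (negative acceleration)."""
    _, acc = progress_indicators(days)
    return bool(acc) and acc[-1] < slowdown

# Hypothetical completion days for one student: intervals of 2, 2, then 5 days.
print(at_risk([0, 2, 4, 9]))  # -> True
```

A real pipeline would aggregate such indicators per course section to locate the parts that need recalibration.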
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware, but the physical space it requires, as a one-to-one mapping of physical space to virtual space allows for the most immersive way of navigating in virtual reality. Such "real-walking" requires the physical space to be of the same size and shape as the virtual world represented. This generally prevents real-walking applications from running in any space that they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences: a graph structure that expresses the spatial and logical relationships between virtual locations, virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph using a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering our virtual objects as nodes of our bipartite directed graph that represents the logical ordering of events of the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
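As a heavily simplified, invented illustration of such constraint-based mapping (the real system solves over a richer graph of locations, elements, and interactions), a sketch that assigns virtual locations to distinct physical zones under a single area constraint might look like this:

```python
def map_locations(locations, zones):
    """Assign each virtual location (name -> required area) to a distinct
    physical zone (name -> available area) via backtracking search.
    Returns a dict location -> zone, or None if no valid assignment exists."""
    names = list(locations)

    def search(i, used, assignment):
        if i == len(names):
            return dict(assignment)
        loc = names[i]
        for zone, area in zones.items():
            if zone not in used and area >= locations[loc]:
                assignment[loc] = zone
                result = search(i + 1, used | {zone}, assignment)
                if result:
                    return result
                del assignment[loc]
        return None

    return search(0, frozenset(), {})

# Hypothetical areas in square meters.
virtual = {"cave": 9.0, "bridge": 4.0}
physical = {"living_room": 10.0, "hallway": 4.5}
print(map_locations(virtual, physical))
```

The thesis's solver additionally handles shape, contained physical objects, and multi-user scheduling, none of which this toy captures.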
We see substantial real-world impact in our systems. Today's commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for the developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
The true cost of virtual reality applications arises not primarily from the required hardware, but from the use of physical space, since the one-to-one mapping of physical onto virtual space enables the most immersive form of navigation. This experience, known as "real-walking", requires the physical space to match the virtual world in size and shape. As a result, real-walking applications cannot be used in places for which they were not developed.
To reduce the demand for physical space, developers of virtual reality applications let their users navigate in various ways, for instance by means of a treadmill, distorted mappings of physical to virtual space, handheld controllers, or gesture-based techniques. All of these solutions reduce the demand for physical space, but do not reach the degree of immersion that real-walking offers.
Our approach aims to virtualize physical space: instead of accessing physical space directly, we let applications formulate their need for space in an abstract way, which our software systems then map onto the available physical space. This allows real-walking applications to use spaces of different sizes and shapes, and spaces containing different physical objects. We also enable simultaneous use of the same space by multiple users of different real-walking applications.
Our systems achieve this through a representation of real-walking experiences that is independent of the given tracking volume: a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements within those locations, and user interactions with those elements. When the application is instantiated in a specific physical space, this graph structure and a constraint solver are used to derive a custom mapping of the virtual elements onto the physical space. To use space repeatedly, our system splits virtual scenes and overlays virtual geometry. The system derives this split by hierarchically clustering our virtual objects, which, as nodes of our bipartite directed graph, represent the logical ordering of all events. We use preemptive scheduling between applications for the simultaneous use of physical space. We present several application examples that enable real-walking in physical spaces of different sizes and shapes that contain different physical objects or other users.
We see substantial potential in our systems. Today's virtual reality applications are designed so that they can be operated on any tracking volume, but out of commercial necessity they do not include real-walking. Developers thereby miss the opportunity to create higher immersion. By making it possible to bring real-walking to any tracking volume, we give developers the opportunity to bring real-walking to their users.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the applications or running services, which enhances the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without an obvious impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of hardware failures, including server, switch, and storage failures. This increases the reliability of using cloud services compared to classical data-center environments.
In this thesis, we briefly discuss the topic of dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Cloud computing load balancing, power saving, and fault tolerance features all depend on live migration to optimize virtual and physical resource usage. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments; however, its cost cannot be ignored. Live migration cost includes migration time, downtime, network overhead, increased power consumption, and CPU overhead.
IT admins often initiate live migrations of virtual machines without any estimate of the migration cost, which can lead to resource bottlenecks, higher migration cost, and migration failures. The first problem that we discuss in this thesis is how to model the cost of virtual machine live migration. Secondly, we investigate how machine learning techniques can help cloud admins obtain an estimate of this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that cloud admins can integrate with cloud administration portals to answer the research questions raised above.
Our research methodology for achieving the project objectives is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to propose a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction and the prediction of data-center network utilization. Live migration with persistent memory clusters is also discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed in this thesis can be practically integrated into the VMware vSphere cluster portal, so that IT admins can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
Testing results show that our proposed approach for predicting VM live migration cost achieves acceptable accuracy, with less than 20% prediction error, and can easily be implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed migration timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization can help network admins save migration time while utilizing a higher network rate and achieving a higher probability of success.
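The thesis's prediction models are trained on VMware test-bed measurements; as a stand-in, here is a minimal k-nearest-neighbour sketch with invented features and samples (the actual feature set, data, and learning method in the thesis differ):

```python
def predict_migration_time(history, vm, k=2):
    """k-nearest-neighbour estimate of live-migration time.
    `history`: list of ((memory_gb, dirty_rate_mbps), migration_time_s) samples;
    `vm`: (memory_gb, dirty_rate_mbps) of the VM about to be migrated."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Average the migration times of the k most similar past migrations.
    nearest = sorted(history, key=lambda s: dist(s[0], vm))[:k]
    return sum(t for _, t in nearest) / len(nearest)

# Invented samples: (memory in GB, page-dirty rate in Mbit/s) -> observed seconds.
samples = [((4, 50), 30.0), ((8, 50), 55.0), ((8, 200), 90.0), ((16, 100), 120.0)]
print(predict_migration_time(samples, (8, 60)))  # -> 42.5
```

An admin-facing integration would surface such an estimate in the management portal before the migration is confirmed.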
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both servers with persistent memory and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs backed by persistent memory.
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
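The split between a volatile DRAM index and a sequentially written PMem log can be mimicked in a toy form: a Python dict stands in for the hash index and an append-only list for the PMem log. This ignores actual persistence, concurrency, and Viper's specific access patterns; it only illustrates the division of roles:

```python
class HybridKVS:
    """Toy analogue of a DRAM-index / PMem-log design: a volatile hash index
    maps keys to offsets in an append-only (persistent-style) record log."""
    def __init__(self):
        self.index = {}  # "DRAM": key -> offset into the log
        self.log = []    # "PMem": sequentially appended (key, value) records

    def put(self, key, value):
        self.index[key] = len(self.log)
        self.log.append((key, value))  # sequential write only, never in place

    def get(self, key):
        off = self.index.get(key)
        return self.log[off][1] if off is not None else None

    def recover_index(self):
        """Rebuild the volatile index by scanning the log, as after a restart;
        later records for the same key win."""
        self.index = {k: i for i, (k, _) in enumerate(self.log)}

kv = HybridKVS()
kv.put("a", 1); kv.put("b", 2); kv.put("a", 3)  # an update appends a new record
kv.index.clear(); kv.recover_index()            # simulate crash + recovery
print(kv.get("a"), kv.get("b"))  # -> 3 2
```

The point of the design is that random-access updates hit only the fast volatile index, while the slower persistent medium sees purely sequential appends.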
With rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system - especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms, and their implementation, application scenarios, or verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
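The thesis works symbolically on graph transformation systems and nested graph constraints; to illustrate only the k-induction principle itself (base case plus step case), here is a sketch over an explicit-state finite transition system, with an invented example in which a property is 2-inductive but not 1-inductive:

```python
def k_inductive(states, initials, successors, prop, k):
    """Check whether `prop` is a k-inductive invariant of a finite transition system.
    Base case: prop holds during the first k steps from every initial state.
    Step case: every path of k consecutive prop-states is followed only by prop-states."""
    # Base case: breadth-first unrolling, k levels deep.
    level = set(initials)
    for _ in range(k):
        if not all(prop(s) for s in level):
            return False
        level = {t for s in level for t in successors(s)}
    # Step case: enumerate all paths of k states that each satisfy prop,
    # including paths starting in unreachable states (the over-approximation
    # that makes induction sound without computing the reachable state space).
    paths = [[s] for s in states if prop(s)]
    for _ in range(k - 1):
        paths = [p + [t] for p in paths for t in successors(p[-1]) if prop(t)]
    return all(prop(t) for p in paths for t in successors(p[-1]))

# Tiny system: 0 loops on itself, 1 -> 2 -> 2; only state 0 is reachable.
succ = {0: [0], 1: [2], 2: [2]}.__getitem__
safe = lambda s: s != 2
print(k_inductive([0, 1, 2], [0], succ, safe, k=1))  # False: 1 is a spurious counterexample
print(k_inductive([0, 1, 2], [0], succ, safe, k=2))  # True: no prop-path of length 2 ends in 1
```

The extra path of context at k=2 rules out the unreachable state 1, mirroring how longer rule-application paths accumulate context in the symbolic setting.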
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
Verbal focus shifts
(2018)
Previous studies on design behaviour indicate that focus shifts positively influence ideational productivity. In this study we want to take a closer look at how these focus shifts look on the verbal level. We describe a mutually influencing relationship between mental focus shifts and verbal low coherent statements. In a case study based on the DTRS11 dataset we identify 297 low coherent statements via a combined topic modelling and manual approach. We introduce a categorization of the different instances of low coherent statements. The results indicate that designers tend to shift topics within an existing design issue instead of completely disrupting it. (C) 2018 Elsevier Ltd. All rights reserved.
Most machine learning methods provide only point estimates when being queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different to the data that the machine learning model has been trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example are deep Gaussian processes which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
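Exact GP regression, as referenced above, reduces to linear algebra on the kernel matrix; the O(n³) Cholesky factorisation is what restricts exact inference to small data regimes. A minimal NumPy sketch (the RBF kernel choice, lengthscale, and noise level are illustrative, not the thesis's settings):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Exact GP regression posterior mean and variance (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)            # O(n^3): the bottleneck for large n
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([0.5]))
```

For training sets beyond a few thousand points this factorisation becomes intractable, which is what motivates the sparse and variational approximations discussed in the background chapters.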
V-Edge
(2022)
As we move from 5G to 6G, edge computing is one of the concepts that needs revisiting. Its core idea is still intriguing: Instead of sending all data and tasks from an end user's device to the cloud, possibly covering thousands of kilometers and introducing delays lower-bounded by propagation speed, edge servers deployed in close proximity to the user (e.g., at some base station) serve as a proxy for the cloud. This is particularly interesting for upcoming machine-learning-based intelligent services, which require substantial computational and networking performance for continuous model training. However, this promising idea is hampered by the limited number of such edge servers. In this article, we discuss a way forward, namely the V-Edge concept. V-Edge helps bridge the gap between cloud, edge, and fog by virtualizing all available resources, including the end users' devices, and making these resources widely available. Thus, V-Edge acts as an enabler for novel microservices as well as cooperative computing solutions in next-generation networks. We introduce the general V-Edge architecture, and we characterize some of the key research challenges to overcome in order to enable widespread and intelligent edge services.
MOOCs have been produced using a variety of instructional design approaches and frameworks. This paper presents experiences from the instructional approach based on the ADDIE model applied to designing and producing MOOCs in the Erasmus+ strategic partnership on Open Badge Ecosystem for Research Data Management (OBERRED). Specifically, this paper describes the case study of the production of the MOOC “Open Badges for Open Science”, delivered on the European MOOC platform EMMA. The key goal of this MOOC is to help learners develop a capacity to use Open Badges in the field of Research Data Management (RDM). To produce the MOOC, the ADDIE model was applied as a generic instructional design model and a systematic approach to design and development, following the five design phases: Analysis, Design, Development, Implementation, Evaluation. This paper outlines the MOOC production, including the methods, templates, and tools used in this process, such as the interactive micro-content created with H5P in the form of Open Educational Resources and the digital credentials created with Open Badges and issued to MOOC participants upon successful completion of MOOC levels. The paper also outlines the results of the qualitative evaluation, which applied the cognitive walkthrough methodology to elicit user requirements. The paper ends with conclusions about the pros and cons of using the ADDIE model in MOOC production and formulates recommendations for further work in this area.
Despite advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical contexts. Among other reasons, this is due to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort of an American research hospital. To help account for the performance differences observed, we utilized interpretability methods based on feature importance, which allowed experts to scrutinize model behavior both at the global and local level, making it possible to gain further insights into why it did not behave as expected on the validation cohort. The knowledge gleaned upon derivation can be potentially useful to assist model updates during validation for more generalizable and simpler models. We argue that interpretability methods should be considered by practitioners as a further tool to help explain performance differences and inform model updates in validation studies.
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen learned skills.
This paper investigates the applicability of CMOS decoupling cells for mitigating the Single Event Transient (SET) effects in standard combinational gates. The concept is based on the insertion of two decoupling cells between the gate's output and the power/ground terminals. To verify the proposed hardening approach, extensive SPICE simulations have been performed with standard combinational cells designed in IHP's 130 nm bulk CMOS technology. Obtained simulation results have shown that the insertion of decoupling cells results in the increase of the gate's critical charge, thus reducing the gate's soft error rate (SER). Moreover, the decoupling cells facilitate the suppression of SET pulses propagating through the gate. It has been shown that the decoupling cells may be a competitive alternative to gate upsizing and gate duplication for hardening the gates with lower critical charge and multiple (3 or 4) inputs, as well as for filtering the short SET pulses induced by low-LET particles.
A path in an edge-colored graph is rainbow if no two edges of it are colored the same, and the graph is rainbow-connected if there is a rainbow path between each pair of its vertices. The minimum number of colors needed to rainbow-connect a graph G is the rainbow connection number of G, denoted by rc(G). A simple way to rainbow-connect a graph G is to color the edges of a spanning tree with distinct colors and then re-use any of these colors to color the remaining edges of G. This proves that rc(G) <= |V(G)| - 1. We ask whether there is a stronger connection between tree-like structures and rainbow coloring than is implied by the above trivial argument. For instance, is it possible to find an upper bound of t(G) - 1 for rc(G), where t(G) is the number of vertices in the largest induced tree of G? The answer turns out to be negative, as there are counter-examples showing that even c · t(G) is not an upper bound for rc(G) for any given constant c. In this work we show that if we consider the forest number f(G), the number of vertices in a maximum induced forest of G, instead of t(G), then surprisingly we do get an upper bound. More specifically, we prove that rc(G) <= f(G) + 2. Our result indicates a stronger connection between rainbow connection and tree-like structures than was suggested by the simple spanning-tree-based upper bound.
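The spanning-tree argument behind rc(G) <= |V(G)| - 1 is constructive and can be sketched in a few lines (a hypothetical edge-list representation; colors are integers):

```python
from collections import deque

def spanning_tree_coloring(n, edges):
    """Color a spanning tree's edges with distinct colors, then reuse one
    of those colors on all remaining edges; every tree path is rainbow,
    witnessing rc(G) <= n - 1 for a connected graph on n vertices."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # build a BFS spanning tree rooted at vertex 0
    parent = {0: None}
    queue = deque([0])
    tree_edges = []
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                tree_edges.append((u, w))
                queue.append(w)
    coloring = {}
    for i, e in enumerate(tree_edges):
        coloring[frozenset(e)] = i                 # distinct color per tree edge
    for u, v in edges:
        coloring.setdefault(frozenset((u, v)), 0)  # reuse color 0 elsewhere
    return coloring

# 4-cycle: a spanning tree has 3 edges, so at most 3 colors are used
c = spanning_tree_coloring(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```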
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
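Extend and SWIRL are considerably more sophisticated, but the basic shape of heuristic index selection against a what-if cost estimator can be sketched as follows (the cost model and candidate sizes below are toy stand-ins, not part of the actual algorithms):

```python
def greedy_index_selection(candidates, workload_cost, budget):
    """Greedy sketch: repeatedly add the candidate index with the best
    estimated cost improvement per storage unit until the budget is hit.
    `workload_cost(config)` is a hypothetical what-if cost estimator."""
    config, used = set(), 0
    current = workload_cost(config)
    while True:
        best, best_gain = None, 0.0
        for idx, size in candidates.items():
            if idx in config or used + size > budget:
                continue
            gain = (current - workload_cost(config | {idx})) / size
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:          # nothing fits or nothing improves
            return config
        config.add(best)
        used += candidates[best]
        current = workload_cost(config)

# toy example: per-index sizes and (additive) workload cost savings
candidates = {"idx_a": 2, "idx_b": 3, "idx_c": 5}
savings = {"idx_a": 10, "idx_b": 12, "idx_c": 4}
cost = lambda cfg: 100 - sum(savings[i] for i in cfg)
chosen = greedy_index_selection(candidates, cost, budget=5)
```

Real workloads exhibit index interaction (the additive cost model above ignores it), which is precisely one of the evaluation criteria surveyed in the thesis.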
Universitat Politècnica de València’s Experience with EDX MOOC Initiatives During the Covid Lockdown
(2021)
In March 2020, when massive lockdowns started to be enforced around the world to contain the spread of the COVID-19 pandemic, edX launched two initiatives providing free certificates for its courses: RAP, for member institutions, and OCE, for any accredited academic institution. In this paper we analyze how Universitat Politècnica de València contributed its courses to both initiatives, providing almost 14,000 free certificate codes in total, and how UPV used the RAP initiative as a customer, describing the mechanism used to distribute more than 22,000 codes for free certificates to more than 7,000 UPV community members, which led to the achievement of more than 5,000 free certificates. We also comment on the results of a post-initiative survey answered by 1,612 UPV members about 3,241 edX courses, in which they reported a satisfaction of 4.69 out of 5 with the initiative.
Unified logging system for monitoring multiple cloud storage providers in cloud storage broker
(2018)
With the increasing demand for personal and enterprise data storage services, a Cloud Storage Broker (CSB) provides cloud storage using multiple Cloud Service Providers (CSPs) with guaranteed Quality of Service (QoS), such as data availability and security. However, monitoring cloud storage usage across multiple CSPs has become a challenge for CSBs due to the lack of a standardized logging format for cloud services, which causes each CSP to implement its own format. In this paper we propose a unified logging system that can be used by a CSB to monitor cloud storage usage across multiple CSPs. We gather cloud storage log files from three different CSPs and normalise them into our proposed log format, which can be used for further analysis. We show that our work enables a coherent view suitable for data navigation, monitoring, and analytics.
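The normalisation step can be sketched as a mapping from provider-specific field names onto one unified schema (the provider keys and field names below are hypothetical; real CSP log schemas differ):

```python
# Hypothetical per-provider field mappings onto a unified log schema.
FIELD_MAPS = {
    "aws":   {"ts": "eventTime", "op": "eventName",     "size": "bytesTransferred"},
    "gcp":   {"ts": "timestamp", "op": "methodName",    "size": "payload_bytes"},
    "azure": {"ts": "time",      "op": "operationName", "size": "contentLength"},
}

def normalise(provider, entry):
    """Map one provider-specific log entry onto the unified format."""
    m = FIELD_MAPS[provider]
    return {
        "provider": provider,
        "timestamp": entry[m["ts"]],
        "operation": entry[m["op"]],
        "bytes": int(entry.get(m["size"], 0)),
    }

rec = normalise("gcp", {"timestamp": "2018-05-01T10:00:00Z",
                        "methodName": "storage.objects.get",
                        "payload_bytes": "2048"})
```

Once all entries share one schema, downstream monitoring and analytics can treat the three CSPs as a single log stream.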
In discrete manufacturing, knowledge about causal relationships makes it possible to avoid unforeseen production downtimes by identifying their root causes. Learning causal structures from real-world settings remains challenging due to high-dimensional data, a mix of discrete and continuous variables, and requirements for preprocessing log data under the causal perspective. In our work, we address these challenges by proposing a process for causal reasoning based on raw machine log data from production monitoring. Within this process, we define a set of transformation rules to extract independent and identically distributed observations. Further, we incorporate a variable selection step to handle high dimensionality and a discretization step to include continuous variables. We enrich a commonly used causal structure learning algorithm with domain-related orientation rules, which provides a basis for causal reasoning. We demonstrate the process on a real-world dataset from a globally operating precision mechanical engineering company. The dataset contains over 40 million log entries from production monitoring of a single machine. In this context, we determine the causal structures embedded in operational processes. Further, we examine causal effects to support machine operators in avoiding unforeseen production stops, i.e., by preventing machine operators from drawing false, experience-based conclusions about the impacting factors of unforeseen production stops.
An instance of the marriage problem is given by a graph G = (A ∪ B, E), together with, for each vertex of G, a strict preference order over its neighbors. A matching M of G is popular in the marriage instance if M does not lose a head-to-head election against any matching where vertices are voters. Every stable matching is a min-size popular matching; another subclass of popular matchings that always exists and can be easily computed is the set of dominant matchings. A popular matching M is dominant if M wins the head-to-head election against any larger matching. Thus, every dominant matching is a max-size popular matching, and it is known that the set of dominant matchings is the linear image of the set of stable matchings in an auxiliary graph. Results from the literature seem to suggest that stable and dominant matchings behave, from a complexity theory point of view, in a very similar manner within the class of popular matchings. The goal of this paper is to show that there are instead differences in the tractability of stable and dominant matchings and to investigate further their importance for popular matchings. First, we show that it is easy to check if all popular matchings are also stable; however, it is co-NP-hard to check if all popular matchings are also dominant. Second, we show how some new and recent hardness results on popular matching problems can be deduced from the NP-hardness of certain problems on stable matchings, also studied in this paper, thus showing that stable matchings can be employed to show not only positive results on popular matchings (as is known) but also most negative ones. Problems for which we show new hardness results include finding a min-size (resp., max-size) popular matching that is not stable (resp., dominant). A known result for which we give a new and simple proof is the NP-hardness of finding a popular matching when G is nonbipartite.
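Stable matchings, the starting point of the comparison above, can be computed with the classical Gale-Shapley deferred-acceptance algorithm; a compact sketch for complete strict preference lists (the instance below is illustrative):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: returns a stable matching (proposer -> partner)
    for complete strict preference lists of equal size."""
    # rank[w][m] = position of m in w's preference list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}  # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up, her old partner is free again
            engaged[w] = m
        else:
            free.append(m)            # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
match = gale_shapley(men, women)
```

The resulting matching is stable and hence, per the abstract, a min-size popular matching of the instance.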
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changing operational paradigms, and requiring an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing
industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU’s evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
In the context of black-box optimization, black-box complexity is used for understanding the inherent difficulty of a given optimization problem. Central to our understanding of nature-inspired search heuristics in this context is the notion of unbiasedness. Specialized black-box complexities have been developed in order to better understand the limitations of these heuristics - especially of (population-based) evolutionary algorithms (EAs). In contrast to this, we focus on a model for algorithms explicitly maintaining a probability distribution over the search space: so-called estimation-of-distribution algorithms (EDAs). We consider the recently introduced n-Bernoulli-lambda-EDA framework, which subsumes, for example, the commonly known EDAs PBIL, UMDA, lambda-MMAS(IB), and cGA. We show that an n-Bernoulli-lambda-EDA is unbiased if and only if its probability distribution satisfies a certain invariance property under isometric automorphisms of [0, 1]^n. By restricting how an n-Bernoulli-lambda-EDA can perform an update, in a way common to many examples, we derive more concise characterizations, which are easy to verify. We demonstrate this by showing that our examples above are all unbiased.
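One of the subsumed examples, the compact genetic algorithm (cGA), illustrates the framework's structure: a vector of n Bernoulli frequencies is sampled and shifted toward the better of two offspring. A minimal sketch (parameter values and the OneMax fitness are illustrative):

```python
import random

def cga(n, K, fitness, steps=3000, rng=None):
    """Compact genetic algorithm, an instance of the n-Bernoulli-lambda-EDA
    framework: sample two offspring from the frequency vector and shift
    each differing frequency by 1/K toward the winner."""
    rng = rng or random.Random(42)
    p = [0.5] * n
    for _ in range(steps):
        x = [rng.random() < pi for pi in p]
        y = [rng.random() < pi for pi in p]
        if fitness(y) > fitness(x):
            x, y = y, x               # x is the winner
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1 / K) if x[i] else -(1 / K)
                # border restriction keeps frequencies in [1/n, 1 - 1/n]
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
    return p

onemax = lambda bits: sum(bits)       # number of 1-bits
p = cga(n=10, K=20, fitness=onemax)
```

The update only depends on sampled search points, not on bit positions or values per se, which is the intuition behind the unbiasedness characterization in the paper.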
TrussFormer
(2019)
We present TrussFormer, an integrated end-to-end system that allows users to 3D print large-scale kinetic structures, i.e., structures that involve motion and deal with dynamic forces. TrussFormer builds on TrussFab, from which it inherits the ability to create static large-scale truss structures from 3D printed connectors and PET bottles. TrussFormer adds movement to these structures by placing linear actuators into them: either manually, wrapped in reusable components called assets, or by demonstrating the intended movement. TrussFormer verifies that the resulting structure is mechanically sound and will withstand the dynamic forces resulting from the motion. To fabricate the design, TrussFormer generates the underlying hinge system that can be printed on standard desktop 3D printers. We demonstrate TrussFormer with several example objects, including a 6-legged walking robot and a 4m-tall animatronics dinosaur with 5 degrees of freedom.
TrussFormer
(2018)
We present TrussFormer, an integrated end-to-end system that allows users to 3D print large-scale kinetic structures, i.e., structures that involve motion and deal with dynamic forces. TrussFormer builds on TrussFab, from which it inherits the ability to create static large-scale truss structures from 3D printed connectors and PET bottles. TrussFormer adds movement to these structures by placing linear actuators into them: either manually, wrapped in reusable components called assets, or by demonstrating the intended movement. TrussFormer verifies that the resulting structure is mechanically sound and will withstand the dynamic forces resulting from the motion. To fabricate the design, TrussFormer generates the underlying hinge system that can be printed on standard desktop 3D printers. We demonstrate TrussFormer with several example objects, including a 6-legged walking robot and a 4m-tall animatronics dinosaur with 5 degrees of freedom.
TRIPOD
(2021)
Inertial measurement units (IMUs) enable easy-to-operate and low-cost data recording for gait analysis. When combined with treadmill walking, a large number of steps can be collected in a controlled environment without the need for a dedicated gait analysis laboratory. In order to evaluate existing and novel IMU-based gait analysis algorithms for treadmill walking, a reference dataset that includes IMU data as well as reliable ground truth measurements for multiple participants and walking speeds is needed. This article provides a reference dataset consisting of 15 healthy young adults who walked on a treadmill at three different speeds. Data were acquired using seven IMUs placed on the lower body, two different reference systems (Zebris FDMT-HQ and OptoGait), and two RGB cameras. Additionally, in order to validate an existing IMU-based gait analysis algorithm using the dataset, an adaptable modular data analysis pipeline was built. Our results show agreement between the pressure-sensitive Zebris and the photoelectric OptoGait system (r = 0.99), demonstrating the quality of our reference data. As a use case, the performance of an algorithm originally designed for overground walking was tested on treadmill data using the data pipeline. The accuracy of stride length and stride time estimations was comparable to that reported in other studies with overground data, indicating that the algorithm is equally applicable to treadmill data. The Python source code of the data pipeline is publicly available, and the dataset will be provided by the authors upon request, enabling future evaluations of IMU gait analysis algorithms without the need to record new data.
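On a treadmill, basic stride parameters follow directly from detected gait events; a simplified sketch (hypothetical heel-strike timestamps, and a constant-belt-speed approximation that real pipelines such as the one validated here refine):

```python
import numpy as np

def stride_times(heel_strikes):
    """Stride time = interval between consecutive heel strikes of the
    same foot (timestamps in seconds, one foot)."""
    return np.diff(np.asarray(heel_strikes, dtype=float))

def stride_lengths(belt_speed_m_s, times):
    """On a treadmill at constant belt speed, stride length can be
    approximated as belt speed times stride time."""
    return belt_speed_m_s * times

# hypothetical heel-strike events at a steady 1.1 s cadence, belt at 1.2 m/s
t = stride_times([0.0, 1.1, 2.2, 3.3])
l = stride_lengths(1.2, t)
```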
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools and offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Open edX is an incredible platform to deliver MOOCs and SPOCs, designed to be robust and to support hundreds of thousands of students at the same time. Nevertheless, it lacks a lot of the fine-grained functionality needed to handle students individually in an on-campus course. This short session will present the ongoing project undertaken by the six public universities of the Region of Madrid plus the Universitat Politècnica de València, in the framework of a national initiative called UniDigital, funded by the Ministry of Universities of Spain within the Plan de Recuperación, Transformación y Resiliencia of the European Union. This project, led by three of these Spanish universities (UC3M, UPV, UAM), is investing more than half a million euros with the purpose of bringing the Open edX platform closer to the functionalities required for an LMS to support on-campus teaching. The aim of the project is to coordinate the development with the Open edX community, so that these developments are incorporated into the core of the Open edX platform in its next releases. The functionalities to be developed include, among others, a complete redesign of platform analytics to make them real-time, the creation of dashboards based on these analytics, the integration of a system for customized automatic feedback, the improvement of exams and tasks and the extension of grading capabilities, improvements in the graphical interfaces for both students and teachers, the extension of the emailing capabilities, a redesign of the file management system, the integration of H5P content, the integration of a tool to create mind maps, the creation of a system to detect students at risk, and the integration of an advanced voice assistant and a gamification mobile app. The idea is to transform a first-class MOOC platform into the next on-campus LMS.
transferGWAS
(2022)
Motivation:
Medical images can provide rich information about diseases and their biology. However, investigating their association with genetic variation requires non-standard methods. We propose transferGWAS, a novel approach to perform genome-wide association studies directly on full medical images. First, we learn semantically meaningful representations of the images based on a transfer learning task, during which a deep neural network is trained on independent but similar data. Then, we perform genetic association tests with these representations.
Results:
We validate the type I error rates and power of transferGWAS in simulation studies of synthetic images. Then we apply transferGWAS in a genome-wide association study of retinal fundus images from the UK Biobank. This first-of-a-kind GWAS of full imaging data yielded 60 genomic regions associated with retinal fundus images, of which 7 are novel candidate loci for eye-related traits and diseases.
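The second stage, the genetic association test, can be sketched for a single SNP and a single learned image feature as a univariate regression (the data below is synthetic; the actual study applies standard GWAS tooling to multi-dimensional representations):

```python
import numpy as np
from math import erf, sqrt

def association_test(dosage, feature):
    """Univariate linear regression of one learned image feature on
    genotype dosage (0/1/2); two-sided p-value for the slope via a
    normal approximation to the t-statistic (avoids a SciPy dependency)."""
    x = dosage - dosage.mean()
    y = feature - feature.mean()
    n = len(x)
    beta = (x @ y) / (x @ x)                      # least-squares slope
    resid = y - beta * x
    se = sqrt((resid @ resid) / ((n - 2) * (x @ x)))
    t = beta / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
    return beta, p

rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, size=500).astype(float)
feature = 0.4 * dosage + rng.normal(size=500)     # synthetic causal SNP
beta, p = association_test(dosage, feature)
```

With a true effect present, the test recovers a positive slope at a small p-value; under the null, p-values are uniform, which is what the type I error simulations check.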
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. In contrast to other online posts, news discussions are related to particular news articles, comments refer to each other, and individual conversations emerge. However, the misuse by spammers, haters, and trolls makes costly content moderation necessary. Sentiment analysis can not only support moderation but also help to understand the dynamics of online discussions. A subtask of content moderation is the identification of toxic comments. To this end, we describe the concept of toxicity and characterize its subclasses. Further, we present various deep learning approaches, including datasets and architectures, tailored to sentiment analysis in online discussions. One way to make these approaches more comprehensible and trustworthy is fine-grained instead of binary comment classification. On the downside, more classes require more training data. Therefore, we propose to augment training data by using transfer learning. We discuss real-world applications, such as semi-automated comment moderation and troll detection. Finally, we outline future challenges and current limitations in light of most recent research publications.
Version control is a widely used practice among software developers. It reduces the risk of changing their software and allows them to manage different configurations and to collaborate with others more efficiently. This is amplified by code sharing platforms such as GitHub or Bitbucket. Most version control systems track files (e.g., Git, Mercurial, and Subversion do), but some programming environments do not operate on files, but on objects instead (many Smalltalk implementations do). Users of such environments want to use version control for their objects anyway. Specialized version control systems, such as the ones available for Smalltalk systems (e.g., ENVY/Developer and Monticello), focus on a small subset of objects that can be versioned. Most of these systems concentrate on the tracking of methods, classes, and configurations of these. Other user-defined and user-built objects are either not eligible for version control at all, can only be tracked through complicated workarounds, or are forced into a fixed, domain-unspecific serialization format that does not equally suit all kinds of objects. Moreover, version control systems that are specific to a programming environment require their own code sharing platforms; popular, well-established platforms for file-based version control systems cannot be used, or adapter solutions need to be implemented and maintained.
To improve the situation for version control of arbitrary objects, a framework for tracking, converting, and storing objects is presented in this report. It allows editions of objects to be stored in an exchangeable, existing backend version control system, so the platforms of the backend version control system can be reused. Users and objects have control over how objects are captured for the purpose of version control, and domain-specific requirements can be implemented. The storage format (i.e., the file format when file-based backend version control systems are used) can also vary from one object to another. Different editions of objects can be compared, and sets of changes can be applied to graphs of objects. A generic way of capturing and restoring that supports most kinds of objects is described: it models each object as a collection of slots. Thus, users can begin to track their objects without first having to implement version control supplements for their own kinds of objects. The proposed architecture is evaluated using a prototype implementation that can be used to track objects in Squeak/Smalltalk with Git. The prototype improves the suboptimal standing of user objects with respect to version control described above and also simplifies some version control tasks for classes and methods. It also raises new problems, which are likewise discussed in this report.
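The slot-based capture and comparison of object editions can be sketched as follows. This is a language-neutral toy in Python rather than Squeak/Smalltalk, and all names are illustrative, not the framework's API:

```python
class Point:
    # example domain object with two named slots
    def __init__(self, x, y):
        self.x, self.y = x, y

def capture(obj):
    # Generic capture: model an object as a collection of named slots
    # (here via the instance dictionary; a real system would also
    # handle indexed slots and let objects customize their capture).
    return {"class": type(obj).__name__, "slots": dict(vars(obj))}

def diff(edition_a, edition_b):
    # Compare two captured editions slot by slot and report changes
    # as slot_name -> (old_value, new_value).
    changes = {}
    for name in set(edition_a["slots"]) | set(edition_b["slots"]):
        old = edition_a["slots"].get(name)
        new = edition_b["slots"].get(name)
        if old != new:
            changes[name] = (old, new)
    return changes
```

Capturing an object before and after a mutation and diffing the two editions yields exactly the changed slots, e.g. `{"y": (2, 5)}` after setting `p.y = 5`.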
A distance education or e-learning platform should be able to provide a virtual laboratory that lets participants gain hands-on experience and practice their skills remotely. This is especially true in cybersecurity e-learning, where participants need to be able to attack or defend an IT system. For hands-on exercises, the virtual laboratory environment must resemble the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory; a node is usually realized as a virtual machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning because a VM needs a significant and fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability can be increased by using the available resources more efficiently and by providing more resources; increasing scalability means increasing the number of simultaneous users.
In this thesis, we propose two approaches to use the available resources more efficiently. The first replaces virtual machines (VMs) with containers wherever possible. The second shares the load with the user's on-premise machine, which then represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to provide more resources: using public cloud services, and gathering resources from the crowd, which we refer to as the Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on VMs. The contributor must give the credentials of the VM's admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, and remove them from, the CRVL system automatically. A team placement algorithm must also be investigated to optimize resource usage while giving the best service to the users. Because the CRVL system does not manage the contributor's host machine, it must ensure that the VM integration will not harm that system and that the training material is stored securely on the contributor's side, so that no one can take the training material away without permission. We investigate ways to handle these kinds of threats.
We propose three approaches to protect the VM from a malicious host admin. To verify the integrity of a VM before its integration into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, the host admin could access the VM's data via Random Access Memory (RAM) through live memory dumping or Spectre and Meltdown attacks. To make it harder for a malicious host admin to extract sensitive data from RAM, we propose a method that continually moves sensitive data in RAM. We also propose a method to monitor the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. Our use case is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning, which we use as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
Identity management is at the forefront of applications’ security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and identity federations. Within this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, initially, we investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These patterns comprise the major entities linked by a decentralised identity provider. By comparison to the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling by a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies of trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method rests on the principal characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem.
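One plausible reading of such a probabilistic trust function, combining correctness and validity per issuer and then aggregating over independent issuers, can be sketched as follows. The product rule and the noisy-OR aggregation here are assumptions for illustration, not the thesis's exact formalisation:

```python
def issuer_assurance(correctness, validity):
    # Per-issuer assurance from the two principal characteristics;
    # a simple product of probabilities (illustrative assumption).
    return correctness * validity

def aggregate(assurances):
    # Trust-enhancing aggregation over independent issuers attesting
    # the same attribute: the attestation fails only if every single
    # issuer's attestation fails (noisy-OR, illustrative assumption).
    fail = 1.0
    for a in assurances:
        fail *= (1.0 - a)
    return 1.0 - fail
```

Under this reading, adding a second independent issuer never lowers the aggregate assurance, which matches the intuition of trust-enhancing attribute aggregation.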
To practically apply the trust function and solve several challenges that arise for the service provider when adopting self-sovereign identity solutions, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from any single solution. Standard identity and access management protocols form the interface for applications. We conclude that using the broker on the side of the service provider does not undermine self-sovereign principles, but fosters the advancement of the ecosystem. The identity broker is applied to sample web applications with distinct attribute requirements to showcase its usefulness for authentication and attribute-based access control within a case study.
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O Link Compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations are focused on On-the-Fly compression in both Single-Node and Scale-Out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated On-the-Fly I/O Link Compression for many workloads in a Scale-Out scenario, and for some even in a Single-Node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
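The feasibility argument can be illustrated with a first-order pipeline model: with streaming overlap, the slowest of the compress, transfer, and decompress stages dominates, so on-the-fly compression pays off whenever that bottleneck beats the raw copy. The bandwidth figures and the max-of-stages assumption below are illustrative, not the paper's analysis:

```python
def plain_transfer(size, link_bw):
    # seconds to move `size` bytes over a link of `link_bw` bytes/s
    return size / link_bw

def compressed_transfer(size, link_bw, ratio, comp_bw, decomp_bw):
    # On-the-fly pipeline: stages overlap, so the slowest stage
    # dominates; the compressed payload is size / ratio bytes.
    return max(size / comp_bw,
               (size / ratio) / link_bw,
               size / decomp_bw)

def worthwhile(size, link_bw, ratio, comp_bw, decomp_bw):
    # compression pays off when the pipelined time beats the raw copy
    return (compressed_transfer(size, link_bw, ratio, comp_bw, decomp_bw)
            < plain_transfer(size, link_bw))
```

For example, a 2:1 ratio over a 1 GB/s link helps when the compressor sustains well above link speed, but a compressor slower than the link becomes the new bottleneck and cancels the gain.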
Monitoring is a key prerequisite for self-adaptive software and many other forms of operating software. Monitoring relevant lower-level phenomena, like the occurrences of exceptions and diagnosis data, requires carefully examining which detailed information is really necessary and feasible to monitor. Adaptive monitoring permits observing a greater variety of details with less overhead, if most of the time the MAPE-K loop can operate using only a small subset of all those details. However, building such adaptive monitoring is a major engineering effort on its own that further complicates the development of self-adaptive software. The proposed approach overcomes the outlined problems by providing generic adaptive monitoring via runtime models. It reduces the effort to introduce and apply adaptive monitoring by avoiding additional development effort for controlling the monitoring adaptation. Although the generic approach is independent of the monitoring purpose, it still allows for substantial savings regarding monitoring resource consumption, as demonstrated by an example.
The identification of vulnerabilities in IT infrastructures is a crucial problem in enhancing security, because many incidents result from already known vulnerabilities that could have been resolved. Thus, the initial identification of vulnerabilities has to be used to directly resolve the related weaknesses and mitigate attack possibilities. The nature of vulnerability information requires a collection and normalization of the information prior to any utilization, because the information is widely distributed across different sources with their own unique formats. Therefore, a comprehensive vulnerability model was defined and different sources were integrated into one database. Furthermore, different analytic approaches have been designed and implemented into the HPI-VDB, which directly benefit from the comprehensive vulnerability model and especially from the logical preconditions and postconditions.
Firstly, different approaches to detect vulnerabilities in both the IT systems of average users and the corporate networks of large companies are presented. These approaches mainly focus on the identification of all installed applications, since this is a fundamental step in the detection. The detection is realized differently depending on the target use case, taking into account the experience of the user as well as the layout and capabilities of the target infrastructure. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to represent the results using attack graphs are illustrated in a comparison between traditional attack graphs and a simplified graph version, which was integrated into the database as well. The implementation of these use cases for vulnerability information pays particular attention to usability. Besides the analytic approaches, high data quality of the vulnerability information had to be achieved and guaranteed. The problems of receiving incomplete or unreliable vulnerability information are addressed with different correction mechanisms; corrections are carried out via correlation or lookup mechanisms in reliable sources or identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows an automatic derivation of important characteristics from the textual description of the vulnerabilities.
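The role of logical preconditions and postconditions in deriving an attack graph can be sketched by forward chaining: a vulnerability is exploitable once all of its preconditions hold in the attacker's state, and exploiting it adds its postconditions. This is an illustrative toy, not the HPI-VDB implementation, and all identifiers are hypothetical:

```python
def reachable_states(initial, vulns):
    # Forward chaining over logical pre-/postconditions: repeatedly
    # apply any not-yet-exploited vulnerability whose preconditions
    # are a subset of the current state, adding its postconditions,
    # until a fixed point is reached. Returns the final state and
    # the order in which vulnerabilities became exploitable.
    state = set(initial)
    exploited = []
    changed = True
    while changed:
        changed = False
        for name, pre, post in vulns:
            if name not in exploited and pre <= state:
                state |= post
                exploited.append(name)
                changed = True
    return state, exploited
```

Chaining the exploited vulnerabilities to the conditions they enable yields a simplified attack graph: each entry in the exploitation order is a node whose incoming edges are its satisfied preconditions.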
Information technology and digital solutions as enablers in the tourism sector require continuous development of skills, as digital transformation is characterized by fast change, complexity, and uncertainty. This research investigates how a cMOOC concept could support the tourism industry. A consortium of three universities, a tourism association, and a tourist attraction investigates the online learning needs and habits of tourism industry stakeholders in the field of digitalization in a cross-border study in the Baltic Sea region. The multi-national survey (n = 244) reveals a high interest in participating in an online learning community, with two-thirds of respondents seeing opportunities to contribute to such a community beyond merely consuming knowledge. The paper identifies preferred ways of learning, motivating and hampering aspects, as well as types of possible contributions.
Ubiquitous business processes are the new generation of processes that pervade the physical space and interact with their environments using a minimum of human involvement. Although they are now widely deployed in industry, their deployment is still ad hoc: they are implemented after an arbitrary modeling phase or no modeling phase at all. The absence of a solid modeling phase backing up the implementation generates many loopholes that are stressed in the literature. Here, we tackle the issue of modeling ubiquitous business processes. We propose patterns to represent recent ubiquitous computing features. These patterns are the outcome of an analysis we conducted in the field of human-computer interaction to examine how the features are actually deployed. The patterns' understandability, ease of use, usefulness, and completeness are examined via a user experiment. The results are positive on all four measures. Hence, the patterns may serve as the backbone of ubiquitous business process modeling in industrial applications.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision, which can result in misunderstandings. Often, the resulting scrollytelling will not match the designer’s vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions for authoring interactive content that we examined are not optimally suited for creating highly customized scrollytellings whose elements can still all be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor to author scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling.
Because the editor lives inside the lively.next environment, both can work with a set of tools familiar to them and suited to their respective roles. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. Developers and content designers thus benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
Design thinking is acknowledged as a thriving innovation practice and something more: a deep understanding of innovation processes. At the same time, quite how and why design thinking works, in scientific terms, appeared an open question at first. Over recent years, empirical research has achieved great progress in illuminating the principles that make design thinking successful. Lately, the community began to explore an additional approach. Rather than setting up novel studies, investigations into the history of design thinking hold the promise of adding systematically to our comprehension of basic principles. This chapter makes a start in revisiting design thinking history with the aim of explicating scientific understandings that inform design thinking practices today. It offers a summary of creative thinking theories that were brought to Stanford Engineering in the 1950s by John E. Arnold.
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class is estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model in order to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector where each component denotes the probability to sample a 1 for that position in order to generate a bit string.
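Such a univariate EDA can be sketched with the compact genetic algorithm (cGA), one of the algorithms analyzed later: its model is an n-dimensional frequency vector, nudged by 1/K toward the winner of each competition of two sampled bit strings. The clamping to [0, 1] is a simplification; theoretical analyses typically restrict frequencies to [1/n, 1 − 1/n]:

```python
import random

def onemax(x):
    # benchmark fitness: the number of ones in the bit string
    return sum(x)

def sample(p):
    # draw a bit string from the univariate model p
    return [1 if random.random() < pi else 0 for pi in p]

def cga(n, K, steps):
    # Compact genetic algorithm: sample two solutions per iteration,
    # then shift each frequency by 1/K toward the winner's bit
    # wherever the two samples disagree.
    p = [0.5] * n
    best = None
    for _ in range(steps):
        x, y = sample(p), sample(p)
        if onemax(x) < onemax(y):
            x, y = y, x  # make x the winner
        for i in range(n):
            if x[i] != y[i]:
                p[i] += 1 / K if x[i] == 1 else -1 / K
                p[i] = min(1.0, max(0.0, p[i]))
        if best is None or onemax(x) > onemax(best):
            best = x
    return best, p
```

Over time the frequency vector drifts toward the all-ones corner on OneMax, which is exactly the kind of dynamics the run-time results below quantify.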
My contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For the MMAS-fp, we generalize this result to linear functions. Further, we prove a run time of Ω(n log(n)) for the algorithm UMDA on (noise-free) OneMax. Lastly, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in O(n log(n)) time, which is a novelty for heuristics in the domain we consider.
The COVID-19 pandemic emergency has forced a profound reshaping of our lives. Our way of working and studying has been disrupted, accelerating the shift to the digital world. To properly adapt to this change, we need to outline and implement new, urgent strategies and approaches that put learning at the center, supporting workers and students in further developing “future proof” skills. Universities and educational institutions have recently demonstrated that they can play an important role in this context, also leveraging the potential of Massive Open Online Courses (MOOCs), which proved to be an important vehicle of flexibility and adaptation in a general context characterised by several constraints. Since March 2020, we have witnessed an exponential growth in MOOC enrollment numbers, with “traditional” students interested in different topics not necessarily integrated into their curricular studies. To support students and faculty development during the pandemic, Politecnico di Milano focused on one main dimension: faculty development for a better integration of digital tools and contents in the e-learning experience. The current discussion focuses on how to improve the integration of MOOCs into in-presence activities to create meaningful learning and teaching experiences, thereby leveraging blended learning approaches to engage both students and external stakeholders and equip them with skills relevant to future jobs.
About 15 years ago, the first Massive Open Online Courses (MOOCs) appeared and revolutionized online education with more interactive and engaging course designs. Yet, keeping learners motivated and ensuring high satisfaction is one of the challenges today's course designers face. Therefore, many MOOC providers employed gamification elements, which, however, only boost extrinsic motivation briefly and are limited by platform support. In this article, we introduce and evaluate a gameful learning design we used in several iterations of computer science education courses. For each of the courses on the fundamentals of the Java programming language, we developed a self-contained, continuous story that accompanies learners through their learning journey and helps visualize key concepts. Furthermore, we share our approach to creating the surrounding story in our MOOCs and provide a guideline for educators to develop their own stories. Our data and the long-term evaluation spanning four Java courses between 2017 and 2021 indicate the openness of learners toward storified programming courses in general and highlight those elements that had the highest impact. While only a few learners did not like the story at all, most learners consumed the additional story elements we provided. However, learners' interest in influencing the story through majority voting was negligible and did not show a considerable positive impact, so we continued with a fixed story instead. We did not find evidence that learners just participated in the narrative because they worked on all materials. Instead, for 10-16% of learners, the story was their main course motivation. We also investigated differences in the presentation format and concluded that several longer audio-book style videos were most preferred by learners in comparison to animated videos or different textual formats.
Surprisingly, the availability of a coherent story embedding examples and providing a context for the practical programming exercises also led to a slightly higher ranking in the perceived quality of the learning material (by 4%). With our research in the context of storified MOOCs, we advance gameful learning designs, foster learner engagement and satisfaction in online courses, and help educators ease knowledge transfer for their learners.
The MOOC-CEDIA Observatory
(2021)
In the last few years, a substantial number of Massive Open Online Courses (MOOCs) have been made available to the worldwide community, mainly by European and North American (i.e., United States) universities. Since their emergence, the adoption of these educational resources has been widely studied by several research groups and universities with the aim of understanding their evolution and impact on educational models over time. In the case of Latin America, data from the MOOC-UC Observatory (updated until 2018) shows that the adoption of these courses by universities in the region has been slow and heterogeneous. In the specific case of Ecuador, although some data is available, there is a lack of information regarding the construction, publication, and adoption of such courses by universities in the country. Moreover, there are no up-to-date studies designed to identify and analyze the barriers and factors affecting the adoption of MOOCs in the country. The aim of this work is to present the MOOC-CEDIA Observatory, a web platform that offers interactive visualizations on the adoption of MOOCs in Ecuador. The main results of the study show that: (1) until 2020 there have been 99 MOOCs in Ecuador; (2) the domains of MOOCs are mostly related to applied sciences, social sciences, and natural sciences, with the humanities being the least covered; (3) Open edX and Moodle are the most widely used platforms to deploy such courses. It is expected that the conclusions drawn from this analysis will allow the design of recommendations aimed at promoting the creation and use of quality MOOCs in Ecuador and help institutions chart the route for their adoption, both for internal use by their communities and by society in general.
Recent findings suggest a role of oxytocin in the tendency to spontaneously mimic the emotional facial expressions of others. Oxytocin-related increases of facial mimicry, however, seem to depend on contextual factors. Given previous literature showing that people preferentially mimic emotional expressions of individuals associated with high (vs. low) rewards, we examined whether the reward value of the mimicked agent is one factor influencing the oxytocin effects on facial mimicry. To test this hypothesis, 60 male adults received 24 IU of either intranasal oxytocin or placebo in a double-blind, between-subject experiment. Next, the value of male neutral faces was manipulated using an associative learning task with monetary rewards. After the reward associations were learned, participants watched videos of the same faces displaying happy and angry expressions. Facial reactions to the emotional expressions were measured with electromyography. We found that participants judged the face identities associated with high reward values as more pleasant than those associated with low reward values. However, happy expressions of low-rewarding faces were more spontaneously mimicked than those of high-rewarding faces. Contrary to our expectations, we did not find a significant direct effect of intranasal oxytocin on facial mimicry, nor on the reward-driven modulation of mimicry. Our results support the notion that mimicry is a complex process that depends on contextual factors, but they failed to provide conclusive evidence of a role of oxytocin in the modulation of facial mimicry.
Schelling's classical segregation model gives a coherent explanation for the widespread phenomenon of residential segregation. We introduce an agent-based saturated open-city variant, the Flip Schelling Process (FSP), in which agents, placed on a graph, have one out of two types and, based on the predominant type in their neighborhood, decide whether to change their types; akin to a new agent arriving as soon as another agent leaves the vertex. We investigate the probability that an edge {u,v} is monochrome, i.e., that both vertices u and v have the same type in the FSP, and we provide a general framework for analyzing the influence of the underlying graph topology on residential segregation. In particular, for two adjacent vertices, we show that a highly decisive common neighborhood, i.e., a common neighborhood where the absolute value of the difference between the number of vertices with different types is high, supports segregation and, moreover, that large common neighborhoods are more decisive. As an application, we study the expected behavior of the FSP on two common random graph models with and without geometry: (1) For random geometric graphs, we show that the existence of an edge {u,v} makes a highly decisive common neighborhood for u and v more likely. Based on this, we prove the existence of a constant c>0 such that the expected fraction of monochrome edges after the FSP is at least 1/2+c. (2) For Erdős–Rényi graphs we show that large common neighborhoods are unlikely and that the expected fraction of monochrome edges after the FSP is at most 1/2+o(1). Our results indicate that the cluster structure of the underlying graph has a significant impact on the obtained segregation strength.
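A toy simulation of the flip dynamics and the monochrome-edge statistic might look as follows. This is a sequential-update sketch under stated assumptions (ties keep the current type), not the paper's exact process:

```python
import random

def flip_schelling(adj, types, rounds):
    # Sequential flip dynamics: each vertex adopts the predominant
    # type in its neighborhood (ties keep the current type), akin to
    # a leaving agent being replaced by a majority-type newcomer.
    # `adj` maps each vertex to its neighbor list; `types` maps each
    # vertex to 0 or 1 and is updated in place.
    for _ in range(rounds):
        order = list(adj)
        random.shuffle(order)
        for v in order:
            ones = sum(types[u] for u in adj[v])
            zeros = len(adj[v]) - ones
            if ones > zeros:
                types[v] = 1
            elif zeros > ones:
                types[v] = 0
    return types

def monochrome_fraction(adj, types):
    # fraction of edges {u, v} whose endpoints share a type
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    same = sum(1 for u, v in edges if types[u] == types[v])
    return same / len(edges)
```

Running this on random geometric versus Erdős–Rényi graphs and comparing the resulting monochrome fractions gives an empirical feel for the topology effect the analysis proves.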
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. The required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts, and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment, and presents an approach to generate code based on a user’s live-edits.
In this chapter, we provide a framework to specify how cheating attacks can be conducted successfully on power marketing schemes in resource-constrained smart micro-grids. This is an important problem because such cheating attacks can destabilise the micro-grid and, in the worst case, result in its breakdown. We consider three aspects in relation to modelling cheating attacks on power auctioning schemes. First, we aim to specify exactly how, in spite of the resource-constrained character of the micro-grid, cheating can be conducted successfully. Second, we consider how mitigations can be modelled to prevent cheating, and third, we discuss methods of maintaining grid stability and reliability even in the presence of cheating attacks. We use an Automated-Cheating-Attack (ACA) conception to build a taxonomy of cheating attacks based on the idea of adversarial acquisition of surplus energy. Adversarial acquisitions of surplus energy allow malicious users to pay less for access to more power than the quota allowed for the price paid. The impact on honest users is the lack of an adequate supply of energy to meet power demand requests. We conclude with a discussion of the performance overhead of provoking, detecting, and mitigating such attacks efficiently.
This paper presents a new design for MOOCs for professional development of the skills needed to meet the UN Sustainable Development Goals: the CoMOOC, or Co-designed Massive Open Online Collaboration. The CoMOOC model is based on co-design with multiple stakeholders, including end-users within the professional communities the CoMOOC aims to reach. This paper shows how the CoMOOC model could help the tertiary sector deliver on the UN Sustainable Development Goals (UNSDGs), including but not limited to SDG 4 Education, by providing a more effective vehicle for professional development at the scale that the UNSDGs require. Interviews with professionals using MOOCs and design-based research with professionals have informed the development of the CoMOOC model. This research shows that open, online, collaborative learning experiences are highly effective for building professional community knowledge. Moreover, it shows that the collaborative learning design at the heart of the CoMOOC model is feasible cross-platform. Research with teachers working in crisis contexts in Lebanon, many of whom were refugees, is presented to show how this form of large-scale, co-designed, online learning can support professionals even in the most challenging contexts, such as mass displacement, where expertise is urgently required.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems but also with coworkers and customers from a non-IT background. The "bachelor project" is a team-based project conducted in close collaboration with selected industry partners. The authors have hosted several of these teams since the spring term of 2014/15. In the paper at hand, we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack used by the teams is evaluated to understand how self-organized students work in IT-related projects. We show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
Thai MOOC academy
(2023)
Thai MOOC Academy is a national digital learning platform that has served as a mechanism for promoting lifelong learning in Thailand since 2017. It has recently undergone significant improvements and upgrades, including the implementation of a credit bank system and a learner e-portfolio system interconnected with the platform. Thai MOOC Academy is introducing a national credit bank system for accreditation and management, which allows for the transfer of expected learning outcomes and educational qualifications between formal, non-formal, and informal education. The credit bank system has five distinct features: issuing forgery-proof certificates, recording learning results, transferring external credits within the same wallet, accumulating learning results, and creating a QR code for verification purposes. The paper discusses the features and future potential of Thai MOOC Academy as it is extended towards a sandbox for the national credit bank system in Thailand.
Technical report
(2019)
The design and implementation of service-oriented architectures raise a wide range of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member with the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work itself but also the social context for employees and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which employees' action problems are the starting point of training activities in learning factories. In this contribution, the subject-oriented learning theory is exemplified, and the respective advantages for vocational training in learning factories are pointed out both theoretically and practically. In particular, the individual action problems of learners and the infrastructure are emphasized as starting points for learning processes and competence development.
StudyMe
(2022)
N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals. Although several tools for N-of-1 trials exist, there is a gap in supporting non-experts in conducting their own user-centric trials. In this study, we present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me and offers users flexibility and guidance in configuring every component of their trials. We also present research that informed the development of StudyMe, focusing on trial creation. Through an initial survey with 272 participants, we learned that individuals are interested in a variety of personal health aspects and have unique ideas on how to improve them. In an iterative, user-centered development process with intermediate user tests, we developed StudyMe that features an educational part to communicate N-of-1 trial concepts. A final empirical evaluation of StudyMe showed that all participants were able to create their own trials successfully using StudyMe and the app achieved a very good usability rating. Our findings suggest that StudyMe provides a significant step towards enabling individuals to apply a systematic science-oriented approach to personalize health-related interventions and behavior modifications in their everyday lives.
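The multi-crossover structure described above can be illustrated with a short sketch. The phase lengths, outcome scale, and helper names below are illustrative assumptions, not part of StudyMe's actual implementation:

```python
import random
from statistics import mean

def abab_schedule(n_cycles, phase_len):
    """Build a multi-crossover (ABAB...) schedule: each cycle has an
    intervention phase ('A') followed by a control phase ('B')."""
    schedule = []
    for _ in range(n_cycles):
        schedule += ["A"] * phase_len + ["B"] * phase_len
    return schedule

def phase_means(schedule, observations):
    """Compare the tracked outcome between intervention and control days."""
    a = [obs for phase, obs in zip(schedule, observations) if phase == "A"]
    b = [obs for phase, obs in zip(schedule, observations) if phase == "B"]
    return mean(a), mean(b)

# Hypothetical self-tracking data: sleep quality scores on a 1-10 scale.
schedule = abab_schedule(n_cycles=2, phase_len=7)   # a 28-day trial
random.seed(42)
observations = [random.gauss(7 if p == "A" else 6, 1) for p in schedule]
a_mean, b_mean = phase_means(schedule, observations)
print(f"intervention mean: {a_mean:.2f}, control mean: {b_mean:.2f}")
```

A real N-of-1 trial app would typically also randomize the phase order and insert washout periods between phases.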
In 1997, Henry Lieberman stated that debugging is the dirty little secret of computer science. Since then, several promising debugging technologies have been developed, such as back-in-time debuggers and automatic fault localization methods. However, the most recent study of the state of the art in debugging is more than 15 years old, so it is unclear whether these new approaches have been adopted in practice. For that reason, we investigate the current state of debugging in a comprehensive study. First, we review the available literature and learn about current approaches and study results. Second, we observe several professional developers while debugging and interview them about their experiences. Third, we create a questionnaire that serves as the basis for a larger online debugging survey. Based on these results, we present new insights into debugging practice that help to suggest new directions for future research.
“How can a course structure be redesigned, based on empirical data, to enhance learning effectiveness through a student-centered approach using objective criteria?” was the research question we asked. “Digital Twins for Virtual Commissioning of Production Machines” is a course using several innovative concepts, including an in-depth practical part with online experiments, called virtual labs. The teaching-learning concept is continuously evaluated. Card sorting is a popular method for designing information architectures (IA), “a practice of effectively organizing, structuring, and labeling the content of a website or application into a structure that enables efficient navigation” [11]. In the presented higher education context, a so-called hybrid card sort was used, in which each participant had to sort 70 cards into seven predefined categories or create new categories themselves. Twelve out of 28 students voluntarily participated in the process, and short interviews were conducted after the activity. The analysis of the category mapping yields a quantitative measure of the (dis-)similarity of the keywords in specific categories using hierarchical cluster analysis (HCA). The learning designer can then interpret the results to make decisions about the number, labeling, and order of sections in the course.
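The analysis described above can be sketched roughly as follows. The card names, participant sorts, and the naive single-linkage routine are hypothetical stand-ins; an actual study would feed the dissimilarity matrix to a standard HCA implementation such as scipy's `linkage`:

```python
from itertools import combinations

def dissimilarity(sorts, cards):
    """Pairwise dissimilarity: the share of participants who placed two
    cards in *different* categories (0 = always together, 1 = never)."""
    n = len(sorts)
    dis = {}
    for a, b in combinations(cards, 2):
        together = sum(1 for s in sorts if s[a] == s[b])
        dis[frozenset((a, b))] = 1 - together / n
    return dis

def single_linkage(cards, dis, k):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    with the smallest minimum pairwise dissimilarity until k remain."""
    clusters = [{c} for c in cards]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dis[frozenset((a, b))]
                               for a in clusters[ij[0]] for b in clusters[ij[1]]),
        )
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Hypothetical sorts from three participants (card -> chosen category).
sorts = [
    {"PLC": "hardware", "sensor": "hardware", "OPC UA": "software", "twin": "software"},
    {"PLC": "control",  "sensor": "control",  "OPC UA": "comms",    "twin": "model"},
    {"PLC": "hardware", "sensor": "hardware", "OPC UA": "comms",    "twin": "model"},
]
cards = ["PLC", "sensor", "OPC UA", "twin"]
dis = dissimilarity(sorts, cards)
print(single_linkage(cards, dis, k=2))
```

Cards that participants consistently sorted together end up in the same cluster, which is what guides the re-labeling and re-ordering of course sections.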
Complex networks are ubiquitous in nature and society. They appear in vastly different domains, for instance as social networks, biological interactions, or communication networks. Yet in spite of their different origins, these networks share many structural characteristics. For instance, their degree distribution typically follows a power law. This means that the fraction of vertices of degree k is proportional to k^(−β) for some constant β, making these networks highly inhomogeneous. Furthermore, they also typically have high clustering, meaning that a link between two nodes is more likely to appear if they have a neighbor in common.
To mathematically study the behavior of such networks, they are often modeled as random graphs. Many of the popular models, like inhomogeneous random graphs or Preferential Attachment, excel at producing a power-law degree distribution. Clustering, on the other hand, is either not present in these models or artificially enforced.
Hyperbolic random graphs bridge this gap by assuming an underlying geometry of the graph: each vertex is assigned coordinates in the hyperbolic plane, and two vertices are connected if they are nearby. Clustering then emerges as a natural consequence: two nodes joined by an edge are close by and therefore have many neighbors in common. On the other hand, the exponential expansion of space in the hyperbolic plane naturally produces a power-law degree sequence. Due to the hyperbolic geometry, however, rigorous treatment of this model can quickly become mathematically challenging.
In this thesis, we improve the understanding of hyperbolic random graphs by studying their structural and algorithmic properties. Our main contribution is threefold. First, we analyze the emergence of cliques in this model. We find that whenever the power-law exponent satisfies 2 < β < 3, there exists a clique of polynomial size in n. For β >= 3, on the other hand, the size of the largest clique is logarithmic, which sharply contrasts with previous models that exhibit cliques of constant size in this case. We also provide efficient algorithms for finding cliques if the hyperbolic node coordinates are known. Second, we analyze the diameter, i.e., the longest shortest path in the graph. We find that it is of order O(polylog(n)) if 2 < β < 3 and O(log n) if β > 3. To complement these findings, we also show that the diameter is of order at least Ω(log n). Third, we provide an algorithm for embedding a real-world graph into the hyperbolic plane using only its graph structure. To ensure good quality of the embedding, we perform extensive computational experiments on generated hyperbolic random graphs. Further, as a proof of concept, we embed the Amazon product recommendation network and observe that products from the same category are mapped close together.
In this paper, we analyze stochastic dynamic pricing and advertising differential games in special oligopoly markets with constant price and advertising elasticity. We consider the sale of perishable as well as durable goods and include adoption effects in the demand. Based on a unique stochastic feedback Nash equilibrium, we derive closed-form solution formulas of the value functions and the optimal feedback policies of all competing firms. Efficient simulation techniques are used to evaluate optimally controlled sales processes over time. This way, the evolution of optimal controls as well as the firms' profit distributions are analyzed. Moreover, we are able to compare feedback solutions of the stochastic model with its deterministic counterpart. We show that the market power of the competing firms is exactly the same as in the deterministic version of the model. Further, we discover two fundamental effects that determine the relation between both models. First, the volatility in demand results in a decline of expected profits compared to the deterministic model. Second, we find that saturation effects in demand have an opposite character. We show that the second effect can be strong enough to either exactly balance or even overcompensate the first one. As a result, we are able to identify cases in which feedback solutions of the deterministic model provide useful approximations of solutions of the stochastic model.
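As a toy illustration of simulating a sales process under constant price elasticity of demand (not the paper's equilibrium computation), one could use a simple Monte Carlo sketch; the demand scale and elasticity values are arbitrary assumptions:

```python
import random

def simulate_sales(price, horizon, scale=50.0, elasticity=2.0, seed=0):
    """Simulate a discrete-time sales process where the per-period purchase
    probability follows a constant-elasticity demand intensity
    lambda(p) = scale * p**(-elasticity)."""
    rng = random.Random(seed)
    sales = 0
    revenue = 0.0
    for _ in range(horizon):
        intensity = scale * price ** (-elasticity)  # expected demand rate
        if rng.random() < min(intensity, 1.0):      # at most one sale/period
            sales += 1
            revenue += price
    return sales, revenue

for p in (8.0, 12.0, 16.0):
    s, rev = simulate_sales(p, horizon=1000, seed=1)
    print(f"price {p:5.1f}: {s:4d} sales, revenue {rev:8.1f}")
```

Running such simulations many times yields empirical profit distributions, which is the kind of evaluation the paper performs for the optimally controlled (feedback-policy) processes.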
JavaScript is the most popular programming language for web applications. Static analysis of JavaScript applications is highly challenging due to the language's dynamic constructs and event-driven asynchronous executions, which also give rise to many security-related bugs. Several static analysis tools to detect such bugs exist; however, research has not yet reported much on the precision and scalability trade-offs of these analyzers. As a further obstacle, JavaScript programs structured in Node.js modules need to be collected for analysis, but existing bundlers are either specific to their respective analysis tools or not particularly suitable for static analysis.
An ever-increasing number of prediction models is published every year in different medical specialties. Prognostic or diagnostic in nature, these models support medical decision making by utilizing one or more items of patient data to predict outcomes of interest, such as mortality or disease progression. While different computer tools exist that support clinical predictive modeling, I observed that the state of the art is lacking in the extent to which the needs of research clinicians are addressed. When it comes to model development, current support tools either 1) target specialist data engineers, requiring advanced coding skills, or 2) cater to a general-purpose audience, therefore not addressing the specific needs of clinical researchers. Furthermore, barriers to data access across institutional silos, cumbersome model reproducibility, and extended experiment-to-result times significantly hamper validation of existing models. Similarly, without access to interpretable explanations, which allow a given model to be fully scrutinized, acceptance of machine learning approaches will remain limited. Adequate tool support, i.e., a software artifact more targeted at the needs of clinical modeling, can help mitigate the challenges identified with respect to model development, validation, and interpretation. To this end, I conducted interviews with modeling practitioners in health care to better understand the modeling process itself and ascertain in what aspects adequate tool support could advance the state of the art. The functional and non-functional requirements identified served as the foundation for a software artifact that can be used for modeling outcome and risk prediction in health research. To establish the appropriateness of this approach, I implemented a use case study in the nephrology domain for acute kidney injury, which was validated in two different hospitals.
Furthermore, I conducted a user evaluation to ascertain whether such an approach provides benefits compared to the state of the art and the extent to which clinical practitioners could benefit from it. Finally, when updating models for external validation, practitioners need to apply feature selection approaches to pinpoint the most relevant features, since electronic health records tend to contain many candidate predictors. Building upon interpretability methods, I developed an explanation-driven recursive feature elimination approach. This method was comprehensively evaluated against state-of-the-art feature selection methods. This thesis's main contributions are therefore threefold: 1) designing and developing a software artifact tailored to the specific needs of the clinical modeling domain, 2) demonstrating its application in a concrete case in the nephrology context, and 3) developing and evaluating a new feature selection approach, applicable in a validation context, that builds upon interpretability methods. In conclusion, I argue that appropriate tooling, which relies on standardization and parametrization, can support rapid model prototyping and collaboration between clinicians and data scientists in clinical predictive modeling.
Squimera
(2017)
Software development tools that work and behave consistently across different programming languages are helpful for developers, because they do not have to familiarize themselves with new tooling whenever they decide to use a new language. Also, being able to combine multiple programming languages in a program increases reusability, as developers do not have to recreate software frameworks and libraries in the language they develop in and can reuse existing software instead.
However, developers often have a broad choice with regard to tools, some of which are designed for only one specific programming language. Various integrated development environments support multiple languages but are usually unable to provide a consistent programming experience due to differing features of the language runtimes. Furthermore, common mechanisms that allow reuse of software written in other languages usually use the operating system or a network connection as the abstraction layer. Tools, however, often cannot support such indirections well and are therefore less useful, for example, in debugging scenarios.
In this report, we present a novel approach that aims to improve the programming experience with regard to working with multiple high-level programming languages. As part of this approach, we reuse the tools of a Smalltalk programming environment for other languages and build a multi-language virtual execution environment which is able to provide the same runtime capabilities for all languages.
The prototype system Squimera is an implementation of our approach and demonstrates that it is possible to reuse development tools, so that they behave in the same way across all supported programming languages. In addition, it provides convenient means to reuse and even mix software libraries and frameworks written in different languages without breaking the debugging experience.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for the implementation and automation of processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
All of these notions, however, only consider the control-flow structure of a process model. This poses a problem given that, with the recent release and ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by corresponding decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different models: process models and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models but additionally take complementary decision models into account. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are a standardized means in DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
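The idea of determining a decision table's possible output values can be sketched as follows; the table contents, the first-hit policy, and the function names are hypothetical illustrations, not a DMN-standard API:

```python
def evaluate(table, inputs):
    """Evaluate a decision table under a first-hit policy.  Each rule maps
    input names to predicates; None stands for the '-' (match anything) entry."""
    for conditions, output in table:
        if all(test is None or test(inputs[name])
               for name, test in conditions.items()):
            return output
    return None  # no rule hit

def possible_outputs(table):
    """The outputs a table can ever produce; decision-aware soundness checks
    compare such sets against the conditions of downstream gateways."""
    return {output for _, output in table}

# Hypothetical claim-handling table: routing from amount and customer status.
table = [
    ({"amount": lambda a: a < 1000, "vip": None},             "auto-approve"),
    ({"amount": lambda a: a >= 1000, "vip": lambda v: v},     "fast-track"),
    ({"amount": lambda a: a >= 1000, "vip": lambda v: not v}, "manual-review"),
]
print(evaluate(table, {"amount": 500, "vip": False}))  # -> auto-approve
print(possible_outputs(table))
```

If a gateway in the process model branches on an output value that is not in `possible_outputs`, or lacks a branch for one that is, a decision soundness violation of the kind studied in the thesis arises.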
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving a considerable amount of data behind them to be analyzed. Researchers recently started exploring this shared social media data to understand online users better and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Facebook public data such as linguistic status updates, metadata of liked objects, profile pictures, and emotion or reaction records were adopted to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilize the engineered features correlated with the Big Five personality traits. The final predictive performances improved the prediction accuracy compared to state-of-the-art approaches, and the models were evaluated based on established benchmarks in the domain. The research experiments were conducted with ethical and privacy considerations in mind. Furthermore, the research aims to raise awareness about privacy among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
In the second part of the thesis, variation in personality development is studied within a cross-platform environment comprising Facebook and Twitter. The personality profiles constructed on these social platforms are compared to evaluate the effect of the platform used on a user's personality development. Likewise, personality continuity and stability analyses are performed using samples from the two social media platforms. The implemented experiments are based on ten-year longitudinal samples, aiming to understand users' long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
Studies indicate that reliable access to power is an important enabler for economic growth. To this end, modern energy management systems have seen a shift from reliance on time-consuming manual procedures to highly automated management, with current energy provisioning systems being run as cyber-physical systems. Operating energy grids as cyber-physical systems offers the advantage of increased reliability and dependability, but also raises issues of security and privacy. In this chapter, we provide an overview of the contents of this book, showing the interrelation between the topics of the chapters in terms of smart energy provisioning. We begin by discussing the concept of smart grids in general, then narrow our focus to smart micro-grids in particular. Lossy networks also provide an interesting framework for enabling the implementation of smart micro-grids in remote/rural areas, where deploying standard smart grids is economically and structurally infeasible. To this end, we consider an architectural design for a smart micro-grid suited to devices with low processing capabilities. We model malicious behaviour and propose mitigation measures based on properties that distinguish normal from malicious behaviour.
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the hands-on experience necessary to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim of improving the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as in preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students regardless of their individual needs and learning styles. Any personalization of content or individual feedback regarding the problems students encounter is mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination on the part of MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions, improving social interaction and balancing content difficulty.
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
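The basic single-column statistics listed above can be computed with a short sketch; the example column values are hypothetical:

```python
def profile_column(values):
    """Compute basic single-column profiling statistics: cardinality,
    null count, min/max values, and the inferred data type."""
    non_null = [v for v in values if v is not None]
    types = {type(v).__name__ for v in non_null}
    return {
        "cardinality": len(set(non_null)),   # number of distinct values
        "nulls": len(values) - len(non_null),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "data_type": types.pop() if len(types) == 1 else ("mixed" if types else None),
    }

print(profile_column([3, 1, None, 3, 7, None]))
# -> {'cardinality': 3, 'nulls': 2, 'min': 1, 'max': 7, 'data_type': 'int'}
```

Such exact computation scales poorly to very large columns, which is why approximate techniques like the cardinality estimators studied later in the thesis exist.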
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility of creating parallel or distributed versions of these algorithms to cope with the growing size of modern datasets.
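To illustrate the kind of algorithm studied here, a minimal sketch of one classic sketch-based estimator, K-Minimum Values (KMV), is shown below; it is one example of the algorithm family, not necessarily one of the twelve evaluated in the thesis:

```python
import hashlib

def kmv_estimate(values, k=256):
    """K-Minimum-Values cardinality estimate.

    Each value is hashed to [0, 1); duplicates collapse to the same hash.
    If h_k is the k-th smallest hash, the number of distinct values is
    estimated as (k - 1) / h_k.
    """
    hashes = set()
    for v in values:
        digest = hashlib.sha1(str(v).encode()).digest()
        hashes.add(int.from_bytes(digest[:8], "big") / 2**64)
    smallest = sorted(hashes)[:k]
    if len(smallest) < k:      # fewer than k distinct values: count is exact
        return len(smallest)
    return int((k - 1) / smallest[-1])

print(kmv_estimate(range(10)))   # 10 (exact below k distinct values)
print(kmv_estimate(range(100_000)))  # approximate, typically within a few percent
```

The standard error of the estimate shrinks roughly with 1/sqrt(k), which is exactly the kind of theoretical guarantee such an experimental study puts to the test.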
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative of the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, the top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally show the ghost and fake FDs phenomenon caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
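A minimal illustration of how missing values distort FD discovery (a simplified sketch, not the thesis's formal definitions): an FD can appear to hold only because a null is treated as its own value, yet be violated under a possible completion of the data.

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds,
    treating None as an ordinary (distinct) value."""
    seen = {}
    for row in rows:
        key = row[lhs]
        if key in seen and seen[key] != row[rhs]:
            return False
        seen.setdefault(key, row[rhs])
    return True

# zip -> city appears to hold because the missing zip acts as its own value...
rows = [
    {"zip": "14482", "city": "Potsdam"},
    {"zip": None,    "city": "Berlin"},
]
assert fd_holds(rows, "zip", "city")

# ...but if the missing zip were actually "14482", the FD would be violated:
completed = [dict(r, zip=r["zip"] or "14482") for r in rows]
assert not fd_holds(completed, "zip", "city")
```

The proposed genuineness scores estimate how likely a discovered FD is to survive such completions.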
Currently we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments such as web services, service-oriented computing, and open-source software, by an explosion of geodata and geospatial applications, and by rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence in these rapidly changing, heterogeneous developments. This paper introduces the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selecting these services and linking them to user profiles, application tasks, data resources, or additional software allows for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with the geospatial expertise required for the effective, professional solution of a defined application problem. Providing users with geospatial intelligence in the form of web-based, modular services is a completely different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, at the centre and at the disposal of users, regardless of their expertise.
The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing the state of the built environment due to non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model - Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas and desks), contained in 3D point cloud scans, is tested and evaluated. The results show that the presented approach can classify common office furniture up to an acceptable degree of accuracy, and is suitable for quick and robust semantics approximation - based on RGB (red, green and blue color channel) cubemap images of the octree partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched using object annotations derived from multiview classification results. Furthermore, the presented approach is suited for semantic enrichment of lower resolution indoor point clouds acquired using commodity mobile devices.
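The aggregation step of such a multiview approach can be sketched compactly: per-view classifier outputs are averaged into a single object label. The logits and class names below are purely illustrative, and `classify_multiview` is a hypothetical helper rather than part of the presented architecture:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_multiview(view_logits, classes):
    """Aggregate per-view classifier outputs into one object label.

    `view_logits` has shape (n_views, n_classes): one logit vector per
    rendered cubemap view; class probabilities are averaged over the views.
    """
    probs = softmax(np.asarray(view_logits, dtype=float))
    mean_probs = probs.mean(axis=0)
    return classes[int(mean_probs.argmax())]

classes = ["chair", "sofa", "desk"]
logits = [[2.0, 0.1, 0.3],   # view 1 strongly suggests "chair"
          [0.5, 0.4, 0.2],   # view 2 is ambiguous
          [1.8, 0.2, 0.1]]   # view 3 also suggests "chair"
print(classify_multiview(logits, classes))  # chair
```

In the presented system, the per-view predictions would come from the retrained Inception V3 model applied to cubemap renderings of the octree-partitioned point cloud.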
3D geovisualization systems (3DGeoVSs) that use 3D geovirtual environments as a conceptual and technical framework are increasingly used for various applications. They facilitate obtaining insights from ubiquitous geodata by exploiting human abilities that other methods cannot provide. 3DGeoVSs are often complex and evolving systems required to be adaptable and to leverage distributed resources. Designing a 3DGeoVS based on service-oriented architectures, standards, and image-based representations (SSI) facilitates resource sharing and the agile and efficient construction and change of interoperable systems. In particular, exploiting image-based representations (IReps) of 3D views on geodata supports taking full advantage of the potential of such system designs by providing an efficient, decoupled, interoperable, and increasingly applied representation.
However, there is insufficient knowledge on how to build service-oriented, standards-based 3DGeoVSs that exploit IReps. This insufficiency is substantially due to technology and interoperability gaps between the geovisualization domain and further domains that such systems rely on.
This work presents a coherent framework of contributions that support designing the software architectures of targeted systems and exploiting IReps for providing, styling, and interacting with geodata. The contributions uniquely integrate existing concepts from multiple domains and novel contributions for identified limitations. The proposed software reference architecture (SRA) for 3DGeoVSs based on SSI facilitates designing concrete software architectures of such systems. The SRA describes the decomposition of 3DGeoVSs into a network of services and integrates the following contributions to facilitate exploiting IReps effectively and efficiently. The proposed generalized visualization pipeline model generalizes the prevalent visualization pipeline model and overcomes its expressiveness limitations with respect to transforming IReps. The proposed approach for image-based provisioning enables generating and supplying service consumers with image-based views (IViews). IViews act as first-class data entities in the communication between services and provide a suitable IRep and encoding of geodata. The proposed approach for image-based styling separates concerns of styling from image generation and enables styling geodata uniformly represented as IViews specified as algebraic compositions of high-level styling operators. The proposed approach for interactive image-based novel view generation enables generating new IViews from existing IViews in response to interactive manipulations of the viewing camera and includes an architectural pattern that generalizes common novel view generation. The proposed interactive assisting, constrained 3D navigation technique demonstrates how a navigation technique can be built that supports users in navigating multiscale virtual 3D city models, operates in 3DGeoVSs based on SSI as an application of the SRA, can exploit IReps, and can support collaborating services in exploiting IReps.
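The idea of specifying styling as algebraic compositions of high-level operators can be sketched as plain function composition over image-based views. This is a toy illustration under strong assumptions (grayscale pixel grids, hypothetical operator names), not the work's actual operator algebra:

```python
def compose(*ops):
    """Compose styling operators into one operator (applied right to left)."""
    def styled(image):
        for op in reversed(ops):
            image = op(image)
        return image
    return styled

def pixelwise(f):
    """Lift a per-pixel function to an operator on a grayscale image."""
    return lambda img: [[f(p) for p in row] for row in img]

brighten = pixelwise(lambda p: min(255, p + 40))
invert = pixelwise(lambda p: 255 - p)

style = compose(invert, brighten)        # brighten first, then invert
print(style([[0, 100], [200, 255]]))     # [[215, 115], [15, 0]]
```

Separating such operator compositions from image generation is what allows styling concerns to be moved out of the rendering service.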
The validity of the contributions is supported by proof-of-concept prototype implementations and applications and effectiveness and efficiency studies including a user study. Results suggest that this work promises to support designing 3DGeoVSs based on SSI that are more effective and efficient and that can exploit IReps effectively and efficiently. This work presents a template software architecture and key building blocks for building novel IT solutions and applications for geodata, e.g., as components of spatial data infrastructures.
Enterprises reach out for collaborations with other organizations in order to offer complex products and services to the market. Such collaboration and coordination between different organizations is, to a large extent, facilitated by information technology. The BPMN process choreography is a modeling language for specifying the exchange of information and services between different organizations at the business level. Recently, there has been a surge in the use of the REST architectural style for provisioning services on the web, but few systematic engineering approaches for designing their collaboration. In this paper, we address this gap in a comprehensive way by defining a semi-automatic method for the derivation of RESTful choreographies from process choreographies. The method is based on natural language analysis techniques to derive interactions from the textual information in process choreographies. The proposed method is evaluated in terms of effectiveness, showing that the intervention of a web engineer is required for only about 10% of all generated RESTful interactions.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as in medical imaging. Annotation costs form a barrier in extending deep learning to clinically-relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
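How unlabeled samples yield free supervision can be illustrated with one classic pretext task, rotation prediction; this is a generic example of self-supervision, not one of the thesis's proposed methods:

```python
import numpy as np

def rotation_pretext_batch(images):
    """Build a self-supervised pretext task from unlabeled images.

    Each image is rotated by 0/90/180/270 degrees, and the rotation index
    serves as a free label. A network trained to predict the rotation must
    learn generic features of the content, without any manual annotation.
    """
    inputs, labels = [], []
    for img in images:
        for r in range(4):
            inputs.append(np.rot90(img, k=r))
            labels.append(r)
    return inputs, labels

unlabeled = [np.arange(9).reshape(3, 3)]   # one toy "scan"
x, y = rotation_pretext_batch(unlabeled)
print(len(x), y)  # 4 [0, 1, 2, 3]
```

The features acquired this way are then reused for the downstream task, where only a small number of expert annotations is needed.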
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantity, e.g., MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g., genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g., in dental X-rays, than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing costs companies revenue or impairs customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A particular data quality problem, and the reason for many of the issues mentioned above, is duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. Because these representations differ from each other, they are hard to match by a computer. Moreover, the number of required comparisons to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
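The core of such a configuration, a similarity measure plus a threshold applied to all record pairs, can be sketched in a few lines; the Jaccard measure and the threshold value below are illustrative choices, not the thesis's:

```python
def jaccard(a, b):
    """Token-based Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def find_duplicates(records, threshold=0.5):
    """Naive pairwise duplicate detection: O(n^2) comparisons."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

records = ["Jane M. Doe", "Doe Jane", "John Smith"]
print(find_duplicates(records))  # [(0, 1)]
```

Picking the measure and threshold well is exactly the manual, experience-driven step that this thesis seeks to automate.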
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help understanding the respective datasets, but they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pairwise comparisons in the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions; additionally, it needs no manual configuration. Furthermore, this approach can be applied independently of the attribute types.
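The effect of a partitioning key can be sketched as follows: only records that share a key value are compared, shrinking the quadratic comparison count. The hand-written key below is a toy stand-in for the keys the thesis learns automatically:

```python
from collections import defaultdict
from itertools import combinations

def partition(records, key):
    """Group record indices by a partitioning (blocking) key."""
    blocks = defaultdict(list)
    for idx, rec in enumerate(records):
        blocks[key(rec)].append(idx)
    return blocks

def candidate_pairs(records, key):
    """Only pairs within the same partition are compared."""
    pairs = []
    for block in partition(records, key).values():
        pairs.extend(combinations(block, 2))
    return pairs

records = ["Jane Doe", "Jane D.", "John Smith", "Jon Smith"]
# Hypothetical key: first letter of the last token.
key = lambda r: r.split()[-1][0].lower()
print(len(candidate_pairs(records, key)), "of", 4 * 3 // 2, "pairs compared")
```

A good key keeps true duplicates in the same partition (effectiveness) while producing small partitions (efficiency); balancing both automatically is the contribution described above.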
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for reaching consensus with less effort.
Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.
With the fast rise of cloud computing adoption in the past few years, more companies are migrating their confidential files from their private data centers to the cloud to support the enterprise's digital transformation process. Enterprise file synchronization and share (EFSS) is one of the solutions offered for enterprises to store their files in the cloud with secure and easy file sharing and collaboration between employees. However, the rapidly increasing number of cyberattacks on the cloud means that a company's files in the cloud might be stolen or leaked to the public. It is then the responsibility of the EFSS system to ensure that the company's confidential files are accessible only to authorized employees.
CloudRAID is a secure personal cloud storage research collaboration project that provides data availability and confidentiality in the cloud. It combines erasure and cryptographic techniques to securely store files as multiple encrypted file chunks in various cloud service providers (CSPs). However, several aspects of CloudRAID's concept are unsuitable for secure and scalable enterprise cloud storage solutions, particularly key management system, location-based access control, multi-cloud storage management, and cloud file access monitoring.
This Ph.D. thesis focuses on CloudRAID for Business (CfB) as it resolves four main challenges of CloudRAID's concept for a secure and scalable EFSS system. First, the key management system is implemented using the attribute-based encryption scheme to provide secure and scalable intra-company and inter-company file-sharing functionalities. Second, an Internet-based location file access control functionality is introduced to ensure files could only be accessed at pre-determined trusted locations. Third, a unified multi-cloud storage resource management framework is utilized to securely manage cloud storage resources available in various CSPs for authorized CfB stakeholders. Lastly, a multi-cloud storage monitoring system is introduced to monitor the activities of files in the cloud using the generated cloud storage log files from multiple CSPs.
In summary, this thesis helps the CfB system provide holistic security for a company's confidential files at the cloud level, system level, and file level, ensuring that only the authorized company and its employees can access the files.
Cloud storage brokerage is an abstraction aimed at providing value-added services. However, Cloud Service Brokers are challenged by several security issues, including enlarged attack surfaces due to the integration of disparate components and API interoperability issues. Therefore, appropriate security risk assessment methods are required to identify and evaluate these security issues and to examine the efficiency of countermeasures. A possible approach to satisfying these requirements is the employment of threat modeling concepts, which have been successfully applied in traditional paradigms. In this work, we employ threat models including attack trees, attack graphs, and Data Flow Diagrams against a Cloud Service Broker (CloudRAID) and analyze these security threats and risks. Furthermore, we propose an innovative technique for combining Common Vulnerability Scoring System (CVSS) and Common Configuration Scoring System (CCSS) base scores in probabilistic attack graphs to cater for configuration-based vulnerabilities, which are typically leveraged for attacking cloud storage systems. This approach is necessary since existing schemes do not provide sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two common attacks against cloud storage: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then used in Attack Graph Metric-based risk assessment. Our experimental evaluation shows that our approach addresses the aforementioned gaps and provides efficient security hardening options. Therefore, our proposals can be employed to improve cloud security.
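The way base scores feed a probabilistic attack graph can be sketched with a common simplification: each base score (0-10) is converted to an exploitation probability p = score / 10, and probabilities are multiplied along an attack path under an independence assumption. The concrete scores below are illustrative, not the paper's devised values:

```python
def path_probability(base_scores):
    """Probability of compromising a full attack path.

    Uses the common simplification p = base_score / 10 per step and
    assumes the steps succeed independently.
    """
    p = 1.0
    for score in base_scores:
        p *= score / 10.0
    return p

# Hypothetical two-step path: a CCSS-scored misconfiguration (enumeration)
# followed by a CVSS-scored vulnerability (exploitation).
ccss_enumeration = 6.4
cvss_exploitation = 8.0
print(round(path_probability([ccss_enumeration, cvss_exploitation]), 3))  # 0.512
```

Including CCSS scores in this computation is what lets configuration-based weaknesses, otherwise invisible to CVSS-only metrics, contribute to the path risk.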
Mobile operating systems, such as Google's Android, have become a fixed part of our daily lives and are entrusted with a plethora of private information. Congruously, their data protection mechanisms have been improved steadily over the last decade and, in particular for Android, the research community has explored various enhancements and extensions to the access control model. However, the vast majority of those solutions have been concerned with controlling access to data, but equally important is the question of how to control the flow of data once released. Ignoring control over the dissemination of data between applications or between components of the same app opens the door for attacks, such as permission re-delegation or privacy-violating third-party libraries. Controlling information flows is a long-standing problem, and one of the most recent and practice-oriented approaches to information flow control is secure multi-execution.
In this paper, we present Ariel, the design and implementation of an IFC architecture for Android based on the secure multi-execution of apps. Ariel demonstrably extends Android's system with support for executing multiple instances of apps, and it is equipped with a policy lattice derived from the protection levels of Android's permissions as well as an I/O scheduler to achieve control over data flows between application instances. We demonstrate how secure multi-execution with Ariel can help to mitigate two prominent attacks on Android, permission re-delegations and malicious advertisement libraries.
In the field of Business Process Management (BPM), modeling business processes and related data is a critical issue, since process activities need to manage data stored in databases. The connection between processes and data is usually handled at the implementation level, even though modeling both processes and data at the conceptual level should help designers improve business process models and identify requirements for implementation. Especially in data- and decision-intensive contexts, business process activities need to access data stored both in databases and data warehouses. In this paper, we complete our approach for defining a novel conceptual view that bridges process activities and data. The proposed approach allows the designer to model the connection between business processes and database models and to define the operations to perform, providing interesting insights on the overall connected perspective and hints for identifying activities that are crucial for decision support.
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in the industry. In this paper, we describe a capstone course design that adds hands-on learning of the lean principles advocated by Kanban to a capstone project run with Scrum. This ensures both that students are aware of recent process frameworks and ideas and that they gain a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our own observations. We analyze the development artifacts created by students during the course with respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Its support is extremely valuable in our data-driven world, so that more time can be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To this end, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that cannot be processed by existing algorithms. The key to those improvements are not only algorithmic innovations, such as novel pruning rules or traversal strategies, but also algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
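What an inclusion dependency is can be made concrete with a brute-force sketch for the unary case; the real algorithms in the thesis avoid this quadratic, memory-heavy approach, and the column names below are invented for illustration:

```python
def inclusion_dependencies(columns):
    """Find unary inclusion dependencies: every value of column A also
    appears in column B (A is then a foreign-key candidate for B).

    `columns` maps column names to lists of values.
    """
    value_sets = {col: set(vals) for col, vals in columns.items()}
    inds = []
    for dep, dep_vals in value_sets.items():
        for ref, ref_vals in value_sets.items():
            if dep != ref and dep_vals <= ref_vals:
                inds.append((dep, ref))
    return inds

columns = {
    "orders.customer_id": [1, 2, 2, 3],
    "customers.id": [1, 2, 3, 4],
    "customers.age": [25, 31, 47, 52],
}
print(inclusion_dependencies(columns))  # [('orders.customer_id', 'customers.id')]
```

Already for unary INDs, the number of candidate pairs grows quadratically with the number of columns, which motivates both the pruning rules and the distributed designs mentioned above.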
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In that way, we intend to allow for easy repeatability of our research results and also provide them for actual usage in real-world data-related projects.
Boolean Satisfiability (SAT) is one of the problems at the core of theoretical computer science. It was the first problem proven to be NP-complete by Cook and, independently, by Levin. Nowadays it is conjectured that SAT cannot be solved in sub-exponential time. Thus, it is generally assumed that SAT and its restricted version k-SAT are hard to solve. However, state-of-the-art SAT solvers can solve even huge practical instances of these problems in a reasonable amount of time.
Why is SAT hard in theory, but easy in practice? One approach to answering this question is investigating the average runtime of SAT. In order to analyze this average runtime the random k-SAT model was introduced. The model generates all k-SAT instances with n variables and m clauses with uniform probability. Researching random k-SAT led to a multitude of insights and tools for analyzing random structures in general. One major observation was the emergence of the so-called satisfiability threshold: A phase transition point in the number of clauses at which the generated formulas go from asymptotically almost surely satisfiable to asymptotically almost surely unsatisfiable. Additionally, instances around the threshold seem to be particularly hard to solve.
In this thesis we analyze a more general model of random k-SAT that we call non-uniform random k-SAT. In contrast to the classical model, each of the n Boolean variables now has a distinct probability of being drawn. For each of the m clauses, we draw k variables according to the variable distribution and choose their signs uniformly at random. Non-uniform random k-SAT gives us more control over the distribution of Boolean variables in the resulting formulas. This allows us to tailor distributions to the ones observed in practice. Notably, non-uniform random k-SAT contains the previously proposed models random k-SAT, power-law random k-SAT, and geometric random k-SAT as special cases.
We analyze the satisfiability threshold in non-uniform random k-SAT depending on the variable probability distribution. Our goal is to derive conditions on this distribution under which an equivalent of the satisfiability threshold conjecture holds. We start with the arguably simpler case of non-uniform random 2-SAT. For this model we show under which conditions a threshold exists, whether it is sharp or coarse, and what the leading constant of the threshold function is. These are exactly the three ingredients one needs in order to prove or disprove the satisfiability threshold conjecture. For non-uniform random k-SAT with k ≥ 3 we only prove sufficient conditions under which a threshold exists. We also show some properties of the variable probabilities under which the threshold is sharp in this case. These are the first results on the threshold behavior of non-uniform random k-SAT.
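The generative model described above can be sketched in a few lines; the power-law-like weights below are purely illustrative, and `non_uniform_k_sat` is a hypothetical helper name:

```python
import random

def non_uniform_k_sat(n, m, k, weights, seed=0):
    """Generate a non-uniform random k-SAT formula.

    Each clause draws k distinct variables according to `weights`
    (the per-variable probabilities) and negates each one uniformly at
    random. A literal is encoded as +v or -v for variable v in 1..n.
    """
    rng = random.Random(seed)
    variables = list(range(1, n + 1))
    formula = []
    for _ in range(m):
        clause_vars = set()
        while len(clause_vars) < k:   # rejection sampling for distinctness
            clause_vars.add(rng.choices(variables, weights=weights)[0])
        formula.append([v if rng.random() < 0.5 else -v
                        for v in clause_vars])
    return formula

weights = [1 / v for v in range(1, 7)]   # heavier weight on low-index variables
formula = non_uniform_k_sat(n=6, m=4, k=3, weights=weights)
print(formula)
```

With uniform weights this reduces to classical random k-SAT; power-law or geometric weight sequences recover the other special cases named above.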