Hasso-Plattner-Institut für Digital Engineering GmbH
Document Type
- Article (194)
- Doctoral Thesis (100)
- Other (84)
- Monograph/Edited Volume (42)
- Postprint (22)
- Conference Proceeding (4)
- Part of a Book (1)
- Habilitation Thesis (1)
- Report (1)
Keywords
- MOOC (42)
- digital education (37)
- e-learning (36)
- Digitale Bildung (34)
- online course creation (34)
- online course design (34)
- Kursdesign (33)
- Micro Degree (33)
- Online-Lehre (33)
- Onlinekurs (33)
- Onlinekurs-Produktion (33)
- micro degree (33)
- micro-credential (33)
- online teaching (33)
- machine learning (21)
- maschinelles Lernen (10)
- Cloud Computing (7)
- E-Learning (6)
- cloud computing (6)
- deep learning (6)
- evaluation (6)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- In-Memory Technologie (5)
- Multicore Architekturen (5)
- Smalltalk (5)
- artificial intelligence (5)
- künstliche Intelligenz (5)
- multicore architectures (5)
- research projects (5)
- 3D printing (4)
- BPMN (4)
- Blockchain (4)
- Digitalisierung (4)
- Duplikaterkennung (4)
- Künstliche Intelligenz (4)
- MOOCs (4)
- Machine Learning (4)
- Scrum (4)
- blockchain (4)
- business process management (4)
- cyber-physical systems (4)
- duplicate detection (4)
- fabrication (4)
- image processing (4)
- inertial measurement unit (4)
- innovation (4)
- natural language processing (4)
- openHPI (4)
- probabilistic timed systems (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- quantitative analysis (4)
- smart contracts (4)
- 3D visualization (3)
- 3D-Visualisierung (3)
- Business process models (3)
- DMN (3)
- Data profiling (3)
- Datenaufbereitung (3)
- Datenqualität (3)
- Design Thinking (3)
- Dynamic pricing (3)
- Geschäftsprozessmanagement (3)
- HPI Schul-Cloud (3)
- Hasso Plattner Institute (3)
- Hasso-Plattner-Institut (3)
- Innovation (3)
- Maschinelles Lernen (3)
- Natural language processing (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- Teamwork (3)
- cloud (3)
- computer vision (3)
- course design (3)
- creativity (3)
- data preparation (3)
- data profiling (3)
- digital health (3)
- digital transformation (3)
- digitale Bildung (3)
- digitalization (3)
- entity resolution (3)
- graph transformation systems (3)
- in-memory technology (3)
- intrusion detection (3)
- real-time rendering (3)
- reinforcement learning (3)
- tiefes Lernen (3)
- trajectory data (3)
- transparency (3)
- user experience (3)
- user-generated content (3)
- 3D point clouds (2)
- Agile (2)
- Android (2)
- Angriffserkennung (2)
- Anomalieerkennung (2)
- Answer set programming (2)
- Artificial Intelligence (2)
- Behavioral economics (2)
- Big Data (2)
- Bounded Model Checking (2)
- Clinical decision support (2)
- Cloud-Security (2)
- Data Profiling (2)
- Datenbank (2)
- Datenbanksysteme (2)
- Datenvisualisierung (2)
- Debugging (2)
- Decision models (2)
- Deep Learning (2)
- Dynamic programming (2)
- Economic evaluation (2)
- Electronic health record (2)
- Energy (2)
- Entitätsauflösung (2)
- Entitätsverknüpfung (2)
- Entscheidungsfindung (2)
- European Union (2)
- Europäische Union (2)
- Fabrikation (2)
- Feature selection (2)
- Forschungskolleg (2)
- Gene expression (2)
- Graphentransformationssysteme (2)
- IDS (2)
- Identitätsmanagement (2)
- In-Memory (2)
- In-Memory technology (2)
- Information flow control (2)
- Internet der Dinge (2)
- Internet of Things (2)
- Java (2)
- Kanban (2)
- Klausurtagung (2)
- Learning Analytics (2)
- Lecture Video Archive (2)
- MAC security (2)
- MERLOT (2)
- Massive Open Online Course (MOOC) (2)
- Metadaten (2)
- Metanome (2)
- Modellprüfung (2)
- Oligopoly competition (2)
- OptoGait (2)
- P2P (2)
- Peer Assessment (2)
- Peer-to-Peer ridesharing (2)
- Ph.D. retreat (2)
- Prior knowledge (2)
- Programmieren (2)
- Python (2)
- RGB-D cameras (2)
- Reproducible benchmarking (2)
- SIEM (2)
- Secure Configuration (2)
- Security (2)
- Service-oriented Systems Engineering (2)
- Sicherheit (2)
- Smart micro-grids (2)
- Social Media Analysis (2)
- Squeak (2)
- Taxonomy (2)
- Treemaps (2)
- Versionsverwaltung (2)
- Visualisierung (2)
- Visualization (2)
- Werkzeuge (2)
- X-ray (2)
- Zebris (2)
- acute kidney injury (2)
- anomaly detection (2)
- artificial intelligence (2)
- artificial intelligence for health (2)
- assignments (2)
- benchmark (2)
- bounded model checking (2)
- business processes (2)
- capstone course (2)
- categories (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- clustering (2)
- collaborative work (2)
- comparison of devices (2)
- computer-mediated therapy (2)
- cyber-physische Systeme (2)
- data integration (2)
- data pipeline (2)
- data quality (2)
- database (2)
- database systems (2)
- deduplication (2)
- deferred choice (2)
- design thinking (2)
- digital enlightenment (2)
- digital learning platform (2)
- digital sovereignty (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- digitale Transformation (2)
- disorder recognition (2)
- distributed systems (2)
- eindeutige Spaltenkombination (2)
- end-stage kidney disease (2)
- entity linking (2)
- everyday life (2)
- federated learning (2)
- flexibility (2)
- formal semantics (2)
- formal verification (2)
- formale Verifikation (2)
- framework (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gait analysis (2)
- gait analysis algorithm (2)
- game dynamics (2)
- genome-wide association (2)
- geospatial data (2)
- hate speech detection (2)
- human activity recognition (2)
- human motion (2)
- identity management (2)
- image stylization (2)
- inclusion dependencies (2)
- index selection (2)
- interactive media (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- labeling (2)
- learning path (2)
- lebenslanges Lernen (2)
- lifelong learning (2)
- liveness (2)
- location prediction algorithm (2)
- maschinelles Sehen (2)
- medical documentation (2)
- memory (2)
- metadata (2)
- mobile mapping (2)
- model checking (2)
- model-driven engineering (2)
- modularization (2)
- motion capture (2)
- multiple modalities (2)
- named entity recognition (2)
- nested graph conditions (2)
- neurological disorders (2)
- non-photorealistic rendering (2)
- online learning (2)
- oracles (2)
- outlier detection (2)
- parameterized complexity (2)
- peer assessment (2)
- pervasive healthcare (2)
- price of anarchy (2)
- privacy (2)
- privacy and security (2)
- privacy attack (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- process mining (2)
- programmable matter (2)
- proteomics (2)
- public dataset (2)
- quality assessment (2)
- query optimization (2)
- rapid eGFRcrea decline (2)
- real-time (2)
- research school (2)
- retrospective (2)
- risk-aware dispatching (2)
- security (2)
- security analytics (2)
- self-paced learning (2)
- self-sovereign identity (2)
- sensor data (2)
- service-oriented systems engineering (2)
- simulation (2)
- social media (2)
- software development (2)
- software engineering (2)
- software process improvement (2)
- speech (2)
- stable matching (2)
- study (2)
- style transfer (2)
- text mining (2)
- thinking styles (2)
- transport network companies (2)
- treemaps (2)
- trust (2)
- trustworthiness (2)
- typed attributed graphs (2)
- unique column combinations (2)
- user interaction (2)
- variable geometry truss (2)
- version control (2)
- virtuelle Realität (2)
- visualization (2)
- voice (2)
- web-based rendering (2)
- workflow patterns (2)
- (modular) counting (1)
- 0-day (1)
- 3D Druck (1)
- 3D Point Clouds (1)
- 3D Visualization (1)
- 3D city model (1)
- 3D geovirtual environment (1)
- 3D geovisualization (1)
- 3D geovisualization system (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D-Druck (1)
- 3D-Einbettung (1)
- 3D-Geovisualisierung (1)
- 3D-Geovisualisierungssystem (1)
- 3D-Punktwolke (1)
- 3D-Punktwolken (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3D-embedding (1)
- 3D-geovirtuelle Umgebung (1)
- ACINQ (1)
- AI Act (1)
- AI Lab (1)
- AMIGOS dataset (1)
- APT (1)
- ASIC (1)
- Abgleich von Abhängigkeiten (1)
- Abhängigkeit (1)
- Abschlussbericht (1)
- Access Control (1)
- Activity-oriented Optimization (1)
- Adoption effects (1)
- Adressnormalisierung (1)
- Advanced Persistent Threats (1)
- Advertising (1)
- Agent-based model (1)
- Agile methods (1)
- Agilität (1)
- Aktivitäten (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithms (1)
- Alzheimer's Disease (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Anfrageoptimierung (1)
- Annahme (1)
- Anomaly detection (1)
- Anwendungsbedingungen (1)
- Application Container Security (1)
- Approximation algorithms (1)
- Architecture synthesis (1)
- Architectures (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Artistic Image Stylization (1)
- Arzt-Patient-Beziehung (1)
- Attributsicherung (1)
- Audit (1)
- Aufzählungsalgorithmen (1)
- Auktion (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Auswirkungen (1)
- Autoimmune (1)
- Automated parsing (1)
- Automatic domain term extraction (1)
- Automatisierung (1)
- Average-Case Analyse (1)
- BCCC (1)
- BIM (1)
- BRCA1 (1)
- BTC (1)
- Back Pain (1)
- Bahnwesen (1)
- Bandwidth (1)
- Batch activity (1)
- Batch processing (1)
- Batch-Aktivität (1)
- Bayesian Networks (1)
- Bedrohungsanalyse (1)
- Bedrohungserkennung (1)
- Bedrohungsmodell (1)
- Behavior change (1)
- Behavioral equivalence and refinement (1)
- Benchmark (1)
- Benutzerinteraktion (1)
- Beschriftung (1)
- Betriebssysteme (1)
- Bewegung (1)
- Big Five Model (1)
- Big Five model (1)
- Bildungstechnologien (1)
- Bildverarbeitung (1)
- Binary Classification (1)
- Binäre Klassifikation (1)
- Biomarker-Erkennung (1)
- Bisimulation and simulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain Auth (1)
- Blockchain Governance (1)
- Blockchain-Konsortium R3 (1)
- Blockchain-enabled Governance (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bluemix-Plattform (1)
- Blöcke (1)
- Boolean Networks (1)
- Boolean satisfiability (1)
- Boolesche Erfüllbarkeit (1)
- Bounded Backward Model Checking (1)
- Brand Personality (1)
- Building Management (1)
- Business Process Management (1)
- Business process choreographies (1)
- Business process improvement (1)
- Business process modeling (1)
- Business process simulation (1)
- Byzantine Agreement (1)
- C++ tool (1)
- CMOS technology (1)
- COVID-19 (1)
- Car safety management (1)
- Cardinality estimation (1)
- Case Management (1)
- Causal inference (1)
- Causal structure learning (1)
- Causality (1)
- Cheating attacks (1)
- Chernoff-Hoeffding theorem (1)
- Chile (1)
- Chronic Nonspecific Low (1)
- Circular economy (1)
- CityGML (1)
- Clinical Data (1)
- Clinical predictive modeling (1)
- Clinical risk prediction (1)
- Cloud (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Cloud computing (1)
- CloudRAID (1)
- CloudRAID for Business (1)
- Clusteranalyse (1)
- Clustering (1)
- Co-production (1)
- Collaborative learning (1)
- Colored Coins (1)
- Colored Petri Net (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Complexity (1)
- Compound Values (1)
- Computational Photography (1)
- Computer architecture (1)
- Computergrafik (1)
- Computervision (1)
- Computing (1)
- Conceptual Fit (1)
- Conceptual modeling (1)
- Contamination (1)
- ContextErlang (1)
- Creative (1)
- Critical pair analysis (CPA) (1)
- Cross-platform (1)
- Crowd Resourcing (1)
- Cryptography (1)
- Cultural factors (1)
- Curex (1)
- Custom Writable Class (1)
- Cyber-Sicherheit (1)
- Cyber-physikalische Systeme (1)
- Cybersecurity (1)
- Cybersecurity e-Learning (1)
- Cybersicherheit (1)
- Cybersicherheit E-Learning (1)
- DAG (1)
- DALY (1)
- DAO (1)
- DBMS (1)
- DPoS (1)
- DSA (1)
- Data Analytics (1)
- Data Structure Optimization (1)
- Data breach (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data modeling (1)
- Data partitioning (1)
- Data processing (1)
- Data profiling application (1)
- Data quality (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-driven price anticipation (1)
- Data-driven strategies (1)
- Database (1)
- Dateistruktur (1)
- Daten-Analytik (1)
- Datenabgleich (1)
- Datenanalyse (1)
- Datenbankoptimierung (1)
- Datenbereinigung (1)
- Datenintegration (1)
- Datenmodelle (1)
- Datensatz (1)
- Datensatzverknüpfung (1)
- Datenschutz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datenstromverarbeitung (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenverwaltung (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Deanonymisierung (1)
- Decision support (1)
- Declarative modelling (1)
- Deduplikation (1)
- Deep Kernel Learning (1)
- Deep learning (1)
- Dekubitus (1)
- Delays (1)
- Delegated Proof-of-Stake (1)
- Denial of sleep (1)
- Denkstile (1)
- Denkweise (1)
- Derecho (1)
- Design (1)
- Design-Forschung (1)
- Dezentrale Applikationen (1)
- Differential Expression Analysis (1)
- Digital Engineering (1)
- Digital Twin (1)
- Digital World (1)
- Digital education (1)
- Digitaler Zwilling (1)
- Digitization (1)
- Dimensionsreduktion (1)
- Direkte Manipulation (1)
- Disadvantaged communities (1)
- Dissertation (1)
- Distance Learning (1)
- Distance learning (1)
- Distributed Proof-of-Research (1)
- Distributed snapshot algorithm (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diverse solution enumeration (1)
- Document classification (1)
- Dokument Analyse (1)
- Domain Objects (1)
- Domänenspezifische Modellierung (1)
- Drift (1)
- Dubletten (1)
- Durchsetzbarkeit (1)
- Dynamic pricing competition (1)
- Dynamik (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- E-Wallet (1)
- E-commerce (1)
- E-health (1)
- ECDSA (1)
- EMG (1)
- Echtzeit (1)
- Echtzeit-Rendering (1)
- Education technologies (1)
- Educational Data Mining (1)
- Educational Technology (1)
- Educational data mining (1)
- Effect measurement (1)
- Einbettungen (1)
- Einbruchserkennung (1)
- Electrical products (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Endpunktsicherheit (1)
- Energy efficiency (1)
- Entdeckung (1)
- Enterprise File Synchronization and Share (1)
- Entropy (1)
- Entscheidungskorrektheit (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entstehung (1)
- Entwicklung digitaler Innovationseinheiten (1)
- Enumeration algorithm (1)
- Equity (1)
- Ereignisabonnement (1)
- Ereignisnormalisierung (1)
- Erfüllbarkeitsschwellwert (1)
- Eris (1)
- Erkennung von Metadaten (1)
- Erkennung von Quasi-Identifikatoren (1)
- Estimation-of-Distribution-Algorithmen (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- European reference networks (1)
- Evaluation (1)
- Expert knowledge (1)
- Exploration (1)
- Expressive rendering (1)
- Extensibility (1)
- Eye-tracking (1)
- Fabrication (1)
- Facility Management (1)
- Federated Byzantine Agreement (1)
- Feedback control loop (1)
- Fehlende Werte (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fertigungsunternehmen (1)
- Field study (1)
- First-hitting time (1)
- Flash (1)
- FollowMyVote (1)
- Forecasting (1)
- Foreign key (1)
- Fork (1)
- Formal modelling (1)
- Formal verification of behavior preservation (1)
- Functional dependencies (1)
- Funktionale Abhängigkeiten (1)
- G-estimation (1)
- GMDH (1)
- GPU (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- GTEx (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudeinformationsmodellierung (1)
- Gebäudemanagement (1)
- Gen-Expression (1)
- Gene Expression Data Analysis (1)
- General Earth and Planetary Sciences (1)
- General demand function (1)
- Generalized Discrimination Networks (1)
- Genomics education (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geospatial intelligence (1)
- Gerichtsbarkeit (1)
- German schools (1)
- Geschäftsprozess (1)
- Geschäftsprozess-Choreografien (1)
- Geschäftsprozessarchitekturen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- GraalVM (1)
- Graph Algorithms (1)
- Graph Theory (1)
- Graph algorithm (1)
- Graph logic (1)
- Graph logics (1)
- Graph transformation (1)
- Graph transformation (double pushout approach) (1)
- Graph transformations (1)
- Graph-Mining (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphentheorie (1)
- Graphreparatur (1)
- Graphtransformationen (1)
- Graphtransformationssysteme (1)
- Grid stability (1)
- Gridcoin (1)
- Großformat (1)
- HENSHIN (1)
- HLS (1)
- HTML5 (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Hauptspeicher Datenmanagement (1)
- Heart Valve Diseases (1)
- Herzklappenerkrankungen (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- Hitting Sets (1)
- Holant problems (1)
- Home appliances (1)
- Homomorphismen (1)
- Human Computer Interaction (1)
- Hybrid (1)
- Hyperbolic Geometry (1)
- Hyrise (1)
- Häkeln (1)
- ICT (1)
- IEEE 802.15.4 (1)
- IT-Softwareentwicklung (1)
- IT project (1)
- IT systems engineering (1)
- IT-Infrastruktur (1)
- IT-Sicherheit (1)
- IT-infrastructure (1)
- IT-security (1)
- Ideation (1)
- Ideenfindung (1)
- Identity leak (1)
- Identität (1)
- Image Abstraction (1)
- Image Processing (1)
- Imbalanced medical image semantic segmentation (1)
- Immobilien 4.0 (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung (1)
- Implementierung in Organisationen (1)
- Indexauswahl (1)
- Indoor Point Clouds (1)
- Indoor environments (1)
- Indoor-Punktwolken (1)
- Industry 4.0 (1)
- Information Retrieval (1)
- Information system (1)
- Informationsextraktion (1)
- Informationsflüsse (1)
- Informationsraum (1)
- Informationsraumverletzungen (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Institutions (1)
- Integrative Gene Selection (1)
- Intent analysis (1)
- Interacting processes (1)
- Interactive Media (1)
- Interactive control (1)
- Interdisciplinary Teams (1)
- Interessengrad-Techniken (1)
- Internet (1)
- Internet of things (1)
- Interoperability (1)
- Interpretability (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Invariant checking (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- JavaScript (1)
- K-12 (1)
- KI-Labor (1)
- Kalman filtering (1)
- Kardinalitätsschätzung (1)
- Karten (1)
- Kategorien (1)
- Kausalität (1)
- Kernelization (1)
- Kette (1)
- Klassifikation (1)
- Klassifizierung (1)
- Klinische Daten (1)
- Knowledge Bases (1)
- Kognitionswissenschaft (1)
- Kollaboration (1)
- Komplexitätstheorie (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Konstruktion von Wissensbasen (1)
- Konstruktion von Wissensgraphen (1)
- Korpusexploration (1)
- Korrektheit (1)
- Kraft (1)
- Kreativität (1)
- Kryptografie (1)
- Kultur (1)
- Kunstanalyse (1)
- LC-MS (1)
- Large networks (1)
- Large scale analytics (1)
- Laserscanning (1)
- Laserschneiden (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Law (1)
- Learning Factory (1)
- Learning analytics (1)
- Learning experience (1)
- Lebendigkeit (1)
- Lecture Recording (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lernerlebnis (1)
- Level-of-detail visualization (1)
- LiDAR (1)
- Lightning Network (1)
- Linear model (1)
- Link layer security (1)
- Literature review (1)
- Live-Migration (1)
- Live-Programmierung (1)
- Lively Kernel (1)
- Liveness (1)
- Load modeling (1)
- Lock-Time-Parameter (1)
- Log data (1)
- Lossy networks (1)
- Low-processing capable devices (1)
- Lösungsraum (1)
- MOOC Remote Lab (1)
- MQTT (1)
- MS (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Manufacturing (1)
- Markov model (1)
- Maschinen (1)
- Maschinenlernen (1)
- Massive Open Online Courses (1)
- Massive open online courses (1)
- Measurement (1)
- Medical research (1)
- Medien Bias (1)
- Mediumzugriffskontrolle (1)
- Meltdown (1)
- Memory Dumping (1)
- Mensch Computer Interaktion (1)
- Mensch-Maschine Interaktion (1)
- Merkmalsauswahl (1)
- Messung (1)
- Meta-Selbstanpassung (1)
- Metacrate (1)
- Metamaterialien (1)
- Metamaterials (1)
- Micro-grid networks (1)
- Microbiome (1)
- Micropayment-Kanäle (1)
- Microservice (1)
- Microservices Security (1)
- Microsoft Azure (1)
- Mindset (1)
- Minimal hitting set (1)
- Minimum spanning tree (1)
- Missing values (1)
- Mixed Reality (1)
- Mobile Learning (1)
- Mobile Mapping (1)
- Mobile applications (1)
- Mobile devices (1)
- Mobile-Mapping (1)
- Mobiles (1)
- Model checking (1)
- Model extraction (1)
- Modelle mit mehreren Versionen (1)
- Modellreparatur (1)
- Monitoring (1)
- Monte Carlo simulation (1)
- Motion Mapping (1)
- Moving Target Defense (1)
- Multi-objective optimization (1)
- Multidisciplinary Teams (1)
- Multiview classification (1)
- Multiziel (1)
- N-of-1 trials (1)
- NASDAQ (1)
- NETCONF (1)
- NLP (1)
- NP-hardness (1)
- Nachrichten (1)
- NameID (1)
- Namecoin (1)
- Named-Entity-Erkennung (1)
- Nash equilibrium (1)
- Natural Language Processing (1)
- Natural language analysis (1)
- Navigational logics (1)
- Nephrology (1)
- Network Science (1)
- Network analysis (1)
- Network creation games (1)
- Netzoptimierung (1)
- Netzwerkprotokolle (1)
- Netzwerksicherheit (1)
- Neural Networks (1)
- Neural networks (1)
- New Public Governance (1)
- Non-photorealistic Rendering (1)
- Non-photorealistic rendering (1)
- Nutzer-Engagement (1)
- Nutzerinteraktion (1)
- Objects (1)
- Objekte (1)
- Off-Chain-Transaktionen (1)
- Offline-Enabled (1)
- Onename (1)
- Online Learning Environments (1)
- Online survey (1)
- Online-Gerichte (1)
- Online-Lernen (1)
- Online-Persönlichkeit (1)
- Ontology (1)
- Open source (1)
- OpenBazaar (1)
- Opinion mining (1)
- Optimal control (1)
- Optimierung (1)
- Oracles (1)
- Ordinances (1)
- Organisationsstudien (1)
- Orphan Block (1)
- Overlapping community detection (1)
- PRISM model checker (1)
- PTCTL (1)
- PTM (1)
- Parallel independence (1)
- Parallel processing (1)
- Parallelized algorithm (1)
- Pareto-Verteilung (1)
- Parking search (1)
- Patents (1)
- Patient (1)
- Patientenermündigung (1)
- Pattern (1)
- Pattern Recognition (1)
- Peer assessment (1)
- Peer-feedback (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Performance evaluation (1)
- Performanz (1)
- Personal genotyping (1)
- Personality Prediction (1)
- Personalized medicine (1)
- PhD thesis (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Point Clouds (1)
- Politik (1)
- Polystore (1)
- Popular matching (1)
- Posenabschätzung (1)
- Power auctioning (1)
- Power consumption characterization (1)
- Power demand (1)
- Predictive Modeling (1)
- Predictive models (1)
- Pricing (1)
- Primary biliary cholangitis (1)
- Primary key (1)
- Primary sclerosing cholangitis (1)
- Prior Knowledge (1)
- Privacy (1)
- Privatsphäre (1)
- Probabilistic timed automata (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Execution (1)
- Process architecture (1)
- Process discovery (1)
- Process landscape (1)
- Process map (1)
- Process mining (1)
- Process model (1)
- Process-related data (1)
- Profilerstellung für Daten (1)
- Programmiererlebnis (1)
- Programmierung (1)
- Programmierwerkzeuge (1)
- Programming course (1)
- Project-based learning (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Proteom (1)
- Proteomics (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozessmodelle (1)
- Prozessmodellierung (1)
- Prädiktive Modellierung (1)
- Psychiatric disorders (1)
- Psychological Emotions (1)
- Psychotherapie (1)
- Punktwolken (1)
- QALY (1)
- Quanten-Computing (1)
- Query optimization (1)
- Query-Optimierung (1)
- REST-Interaktionen (1)
- RESTful choreographies (1)
- RESTful interactions (1)
- RL (1)
- RNAseq (1)
- Random process (1)
- Randomized clinical trials (1)
- Real Estate 4.0 (1)
- Real Walking (1)
- Real-time rendering (1)
- Recht (1)
- Reconfigurable architecture (1)
- Recurrent generative (1)
- Recursion (1)
- Recycling investments (1)
- Refactoring (1)
- Regressionstests (1)
- Rekursion (1)
- Relational model transformation (1)
- Repräsentationslernen (1)
- Requisit (1)
- Resource Allocation (1)
- Resource Management (1)
- Resource constrained smart micro-grids (1)
- Response strategies (1)
- Reverse Engineering (1)
- Ripple (1)
- Roadmap (1)
- Ruby (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SAFE (1)
- SCP (1)
- SET effects (1)
- SHA (1)
- SPV (1)
- SWIRL (1)
- Satz von Chernoff-Hoeffding (1)
- Savanne (1)
- Scalability (1)
- Schelling segregation (1)
- Schelling's segregation model (1)
- Schema discovery (1)
- Schema-Entdeckung (1)
- Schlafentzugsangriffe (1)
- School (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schule (1)
- Schwachstelle (1)
- Schwierigkeitsgrad (1)
- Scrollytelling (1)
- Secondary Education (1)
- Security analytics (1)
- Selbst-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Enrichment (1)
- Semantic Web (1)
- Semantic enrichment (1)
- Semantische Anreicherung (1)
- Sensor Analytics (1)
- Sensor networks (1)
- Sensor-Analytik (1)
- Sequenzeigenschaften (1)
- Serialisierung (1)
- Servers (1)
- Service-Oriented Architecture (1)
- Service-Oriented Systems (1)
- Service-Orientierte Systeme (1)
- Service-oriented (1)
- Serviceorientierte Architektur (SOA) (1)
- Servicification (1)
- Siamesische Neuronale Netzwerke (1)
- Sicherheitsanalyse (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Simulation study (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skriptsprachen (1)
- Slock.it (1)
- Smart Contracts (1)
- Smart Home Education (1)
- Smart cities (1)
- Social (1)
- Social Bots erkennen (1)
- Soft Fork (1)
- Software (1)
- Software engineering (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalytik (1)
- Softwarevisualisierung (1)
- Solution Space (1)
- Source Code Readability (1)
- Soziale Medien (1)
- Spatial data handling systems (1)
- Spatio-Temporal Data (1)
- Spatio-temporal data analysis (1)
- Spatio-temporal visualization (1)
- Specification (1)
- Spectre (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieltheorie (1)
- Spin system (1)
- Sprachlernen im Limes (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- StackOverflow (1)
- Standard (1)
- Standardisierung (1)
- Stapelverarbeitung (1)
- Static analysis (1)
- Statistical process mining (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stochastic differential games (1)
- Storj (1)
- Storytelling (1)
- Strategic cognition (1)
- Style transfer (1)
- Subject-oriented learning (1)
- Suchtberatung und -therapie (1)
- Supervised Learning (1)
- Supervised deep neural (1)
- Survey (1)
- System design (1)
- Systemmedizin (1)
- Systems Medicine (1)
- TCGA (1)
- Team Assessment (1)
- Team based assignment (1)
- Team-based Learning (1)
- Teamarbeit (1)
- Technology mapping (1)
- Teile und Herrsche (1)
- Telemedizin (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierungs (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Bitfury Group (1)
- The DAO (1)
- Theorie (1)
- Theory (1)
- Threat Models (1)
- Tiefes Lernen (1)
- Time series analysis (1)
- Time series data (1)
- Timed Automata (1)
- Tools (1)
- Topic modeling (1)
- Tragfähigkeit (1)
- Training (1)
- Trajectory Data Management (1)
- Trajectory visualization (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktion (1)
- Transferlernen (1)
- Transportability (1)
- Transversal hypergraph (1)
- Transversal-Hypergraph (1)
- Tree maintenance (1)
- Tripel-Graph-Grammatiken (1)
- Triple graph grammars (1)
- Two-Way-Peg (1)
- U-Förmiges Lernen (1)
- U-Shaped-Learning (1)
- Ubiquitous (1)
- Ubiquitous business process (1)
- Unbiasedness (1)
- Unified logging system (1)
- Unique column combination (1)
- Unspent Transaction Output (1)
- Unternehmensdateien synchronisieren und teilen (1)
- Unterricht mit digitalen Medien (1)
- User Experience (1)
- Utility-Funktionen (1)
- V2X (1)
- VUCA-World (1)
- Validation (1)
- Verarbeitung natürlicher Sprache (1)
- Verbundwerte (1)
- Verhaltensforschung (1)
- Verhaltensänderung (1)
- Verifikation induktiver Invarianten (1)
- Verlässlichkeit (1)
- Verteilte Systeme (1)
- Vertrauen (1)
- Verträge (1)
- Veränderungsanalyse (1)
- Video annotations (1)
- Virtual Laboratory (1)
- Virtual Machine (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtuelle Maschinen (1)
- Virtuelles Labor (1)
- Visual Analytics (1)
- Visual analytics (1)
- Visualisierungskonzept-Exploration (1)
- Vorhersage (1)
- Vorhersagemodelle (1)
- Vorhersagemodellierung (1)
- Vulnerability analysis (1)
- WALA (1)
- W[3]-Completeness (1)
- Walking (1)
- Water Science and Technology (1)
- Watson IoT (1)
- Wearable (1)
- Web-basiertes Rendering (1)
- Weighted clustering coefficient (1)
- Werkzeugbau (1)
- Wicked Problems (1)
- Wireless sensor networks (1)
- Wirtschaftsinformatik Projekte (1)
- Wissensbasis (1)
- Wissensgraph (1)
- Wissensgraphen (1)
- Wissensgraphen Verfeinerung (1)
- Wissensmanagement (1)
- Wissenstransfer (1)
- Wissensvalidierung (1)
- Wolke (1)
- Word embedding (1)
- Wüstenbildung (1)
- XIC extraction (1)
- YANG (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zufallsgraphen (1)
- Zugriffskontrolle (1)
- Zustandsverwaltung (1)
- Zählen (1)
- accelerator architectures (1)
- acceptability (1)
- accuracy (1)
- action problems (1)
- active layers (1)
- active touch (1)
- acyclic preferences (1)
- addiction care (1)
- address normalization (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- adversarial network (1)
- agil (1)
- agile (1)
- algorithms (1)
- allocation problem (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analysis (1)
- application conditions (1)
- approximate counting (1)
- apt (1)
- architectural adaptation (1)
- architecture-based software adaptation (1)
- architectures (1)
- architekturbasierte Softwareanpassung (1)
- archive analysis (1)
- art analysis (1)
- artistic image stylization (1)
- aspect-oriented programming (1)
- asset management (1)
- atomic swap (1)
- attribute assurance (1)
- auction (1)
- automation (1)
- autonomous (1)
- availability (1)
- average-case analysis (1)
- bachelor project (1)
- batch activity (1)
- batch processing (1)
- behavior psychotherapy (1)
- behavioral sciences (1)
- behaviourally correct learning (1)
- benchmarking (1)
- benutzergenerierte Inhalte (1)
- bestärkendes Lernen (1)
- bidirectional payment channels (1)
- bildbasierte Repräsentation (1)
- bildbasiertes Rendering (1)
- bioinformatics (1)
- bioinformatics tool (1)
- biologisches Vorwissen (1)
- biomarker detection (1)
- bitcoins (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- Bluemix platform (1)
- bounded backward model checking (1)
- brand personality (1)
- breast-cancer (1)
- business process (1)
- business process architectures (1)
- business process choreographies (1)
- business process managament (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- chain (1)
- change detection (1)
- classification zone (1)
- classification zone violations (1)
- clinical exome (1)
- cliquy tree (1)
- cloud monitoring (1)
- code generation (1)
- cognitive load (1)
- cognitive patterns (1)
- cognitive science (1)
- collaboration (1)
- collaborative learning (1)
- collective intelligence (1)
- columnar databases (1)
- combinational logic (1)
- communication (1)
- comparison (1)
- competence (1)
- complex event processing (1)
- complex networks (1)
- complexity (1)
- complexity theory (1)
- compliance (1)
- compositional analysis (1)
- computational design (1)
- computational hardness (1)
- computational mass spectrometry (1)
- computational models (1)
- computational photography (1)
- computer graphics (1)
- computer science (1)
- computer science education (1)
- computer-aided design (1)
- computergestützte Gestaltung (1)
- computervermittelte Therapie (1)
- computing (1)
- confirmation period (1)
- confluence (1)
- conformance checking (1)
- connectivity (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constrained optimization (1)
- content gamification (1)
- contest period (1)
- context (1)
- context groups (1)
- contextual-variability modeling (1)
- continuous integration (1)
- contracts (1)
- convolutional neural networks (1)
- coping ability (1)
- corpus exploration (1)
- corticospinal tract (1)
- cost (1)
- cost-effectiveness (1)
- cost-utility analysis (1)
- courts of justice (1)
- crochet (1)
- cross-chain (1)
- cross-platform (1)
- cultural heritage (1)
- culture (1)
- cumulative culture (1)
- curex (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- data (1)
- data analytics (1)
- data cleaning (1)
- data dependencies (1)
- data management (1)
- data matching (1)
- data mining (1)
- data models (1)
- data privacy (1)
- data processing (1)
- data science (1)
- data set (1)
- data synthesis (1)
- data transfer (1)
- data visualisation (1)
- data visualization (1)
- data wrangling (1)
- data-driven (1)
- database optimization (1)
- database replication (1)
- database tuning (1)
- datengetrieben (1)
- de-anonymisation (1)
- debugging (1)
- decentral identities (1)
- decentralized applications (1)
- decentralized autonomous organization (1)
- decision making (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision soundness (1)
- decision-aware process models (1)
- decoupling cells (1)
- decubitus (1)
- deep Gaussian processes (1)
- deep kernel learning (1)
- degree-of-interest techniques (1)
- demand learning (1)
- demografische Informationen (1)
- demographic information (1)
- denial of sleep (1)
- dependability (1)
- dependency (1)
- desertification (1)
- design (1)
- design behaviour (1)
- design cognition (1)
- design research (1)
- developing countries (1)
- development artifacts (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- dialect (1)
- die arabische Welt (1)
- difficulty (1)
- difficulty target (1)
- diffusion MRI; (1)
- digital fabrication (1)
- digital health app (1)
- digital innovation (1)
- digital innovation units (1)
- digital learning (1)
- digital picture archive (1)
- digital product innovation (1)
- digital therapy (1)
- digital unterstützter Unterricht (1)
- digital whiteboard (1)
- digital world (1)
- digitale Fabrikation (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitale Innovation (1)
- digitale Innovationseinheit (1)
- digitale Produktinnovation (1)
- digitales Bildarchiv (1)
- digitales Lernen (1)
- digitales Whiteboard (1)
- digitalisation (1)
- digitalización (1)
- dimensionality reduction (1)
- direct manipulation (1)
- disability-adjusted life years (1)
- discovery (1)
- discrete-event model (1)
- diseño (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed performance monitoring (1)
- divide-and-conquer (1)
- doctor-patient relationship (1)
- document analysis (1)
- domain-specific knowledge graphs (1)
- domain-specific modeling (1)
- dominant matching (1)
- domänenspezifisches Wissensgraphen (1)
- doppelter Hashwert (1)
- double hashing (1)
- drahtloses Netzwerk (1)
- drift theory (1)
- dry fasting (1)
- dsps (1)
- dynamic AOP (1)
- dynamic causal modeling (1)
- dynamic service adaptation (1)
- dynamic systems (1)
- dynamics (1)
- dynamische Systeme (1)
- e-Commerce (1)
- e-commerce (1)
- eLearning (1)
- eating behaviour (1)
- economic (1)
- edge-weighted networks (1)
- education (1)
- electrical muscle stimulation (1)
- electrodermal activity (1)
- elektrische Muskelstimulation (1)
- embeddings (1)
- emergence (1)
- emotion classification (1)
- emotion measurement (1)
- emotional cognitive dynamics (1)
- emotional kognitive Dynamiken (1)
- endpoint security (1)
- enforceability (1)
- entity-component-system (1)
- entscheidungsbewusste Prozessmodelle (1)
- enumeration algorithms (1)
- erzeugende gegnerische Netzwerke (1)
- escalation of commitment (1)
- eskalierendes Commitment (1)
- espbench (1)
- estimation-of-distribution algorithms (1)
- estudios de organización (1)
- event normalization (1)
- event subscription (1)
- evolution of digital innovation units (1)
- evolutionary computation (1)
- exact algorithms (1)
- expectation maximisation algorithm (1)
- experience (1)
- exploration (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- extend (1)
- extension problems (1)
- eye-tracking (1)
- facial mimicry (1)
- fault tolerance (1)
- feature selection (1)
- federated voting (1)
- field study (1)
- file structure (1)
- final report (1)
- font engineering (1)
- font rendering (1)
- force (1)
- forest number (1)
- formal testing (1)
- fortschrittliche Angriffe (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- game theory (1)
- gameful learning (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression (1)
- gene selection (1)
- generalized discrimination networks (1)
- generative adversarial networks (1)
- genomics (1)
- geographical distribution (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- getypte Attributierte Graphen (1)
- gigantische Netzwerke (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph (1)
- graph analysis (1)
- graph conditions (1)
- graph constraints (1)
- graph inference (1)
- graph mining (1)
- graph neural networks (1)
- graph repair (1)
- graph theory (1)
- graph transformations (1)
- graph-transformations (1)
- graphical query language (1)
- graphische neuronale Netze (1)
- grobe Protokolle (1)
- group-based behavior adaptation (1)
- haptic feedback (1)
- haptisches Feedback (1)
- hardware (1)
- hashrate (1)
- health app (1)
- health behaviour (1)
- healthcare (1)
- hepatitis (1)
- heuristics (1)
- hierarchical data (1)
- hierarchische Daten (1)
- higher education (1)
- history-aware runtime models (1)
- hitting sets (1)
- homomorphisms (1)
- human computer interaction (1)
- human-centered (1)
- human-centered design (1)
- human-computer interaction (1)
- human-robot interaction (1)
- human-scale (1)
- hybrid (1)
- hybrid systems (1)
- hyperbolic geometry (1)
- hyperbolic random graphs (1)
- hyperbolische Geometrie (1)
- hyperbolische Zufallsgraphen (1)
- identity (1)
- image abstraction (1)
- image-based rendering (1)
- image-based representation (1)
- imbalanced learning (1)
- immediacy (1)
- implementation (1)
- implied methods (1)
- in-memory (1)
- in-memory data management (1)
- inclusion dependency (1)
- incremental graph query evaluation (1)
- independency tree (1)
- inductive invariant checking (1)
- industry (1)
- industry 4.0 (1)
- information extraction (1)
- information flows (1)
- information quality (1)
- information systems projects (1)
- inkrementelle Ausführung von Graphanfragen (1)
- innovación (1)
- innovation laboratories (1)
- insula (1)
- integer linear programming (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interactive visualization (1)
- interaktive Medien (1)
- interaktive Visualisierung (1)
- interdisziplinäre Teams (1)
- intermittent food restriction (1)
- internet of things (1)
- internet topology (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intervention (1)
- intransitivity (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invention (1)
- invention mechanism (1)
- inventory management (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant (1)
- k-inductive invariant checking (1)
- k-induktive Invariante (1)
- k-induktive Invariantenprüfung (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- key establishment (1)
- key management (1)
- key revocation (1)
- knowledge base (1)
- knowledge base construction (1)
- knowledge graph (1)
- knowledge graph construction (1)
- knowledge graph refinement (1)
- knowledge graphs (1)
- knowledge transfer (1)
- knowledge validation (1)
- kollaboratives Arbeiten (1)
- kollaboratives Lernen (1)
- komplexe Ereignisverarbeitung (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliche Integration (1)
- kulturelles Erbe (1)
- label-free quantification (1)
- language learning in the limit (1)
- large scale mechanism (1)
- large-scale mechanism (1)
- laser cutting (1)
- laserscanning (1)
- law (1)
- learner engagement (1)
- learning (1)
- learning factories (1)
- learning platform (1)
- learning styles (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- level-replacement systems (1)
- liability (1)
- life-centered (1)
- linear programming (1)
- link layer security (1)
- literature review (1)
- live migration (1)
- live programming (1)
- lively groups (1)
- load balancing (1)
- load-bearing (1)
- logic rules (1)
- logische Regeln (1)
- low back pain (1)
- low-duty-cycling (1)
- mHealth (1)
- machine (1)
- machines (1)
- male infertility (1)
- management (1)
- manufacturing (1)
- manufacturing companies (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- massive networks (1)
- massive open online courses (1)
- matching dependencies (1)
- measurement (1)
- media (1)
- media bias (1)
- medical image analysis (1)
- medium access control (1)
- medizinische Bildanalyse (1)
- medizinische Dokumentation (1)
- mehrsprachige Ausführungsumgebungen (1)
- mehrstufiger Angriff (1)
- menschenzentriert (1)
- menschenzentriertes Design (1)
- mental models (1)
- merged mining (1)
- merkle root (1)
- meta self-adaptation (1)
- metaanalysis (1)
- metabolomics (1)
- metacognition (1)
- metacrate (1)
- metadata detection (1)
- metamaterials (1)
- metanome (1)
- methods (1)
- metric temporal graph logic (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- micropayment (1)
- micropayment channels (1)
- microstructures (1)
- mindfulness (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mixed data (1)
- mixed methods (1)
- mobile app (1)
- mobile applications (1)
- mobile health (1)
- model (1)
- model repair (1)
- model-driven software engineering (1)
- modellgesteuerte Entwicklung (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modification stoichiometry (1)
- motion and force (1)
- mpmUCC (1)
- multi-objective (1)
- multi-step attack (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multimodal wireless sensor network (1)
- mutations (1)
- named entity mining (1)
- narrative (1)
- natürliche Sprachverarbeitung (1)
- network optimization (1)
- network protocols (1)
- network security (1)
- networks (1)
- news (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nicht-uniforme Verteilung (1)
- non-cooperative games (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- non-uniform distribution (1)
- nonce (1)
- note-taking (1)
- novelty detection (1)
- null results (1)
- nutzergenerierte Inhalte (1)
- object-oriented languages (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- oligopoly competition (1)
- omics (1)
- oneM2M (1)
- online courts (1)
- online personality (1)
- open innovation (1)
- operating systems (1)
- optical character recognition (1)
- optimization (1)
- order dependencies (1)
- organization studies (1)
- orthopedic (1)
- oxytocin (1)
- packrat parsing (1)
- pain (1)
- parallel and sequential independence (1)
- parallel processing (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- parietal operculum (1)
- parsing expression grammars (1)
- partial order resolution (1)
- partial replication (1)
- partielle Replikation (1)
- partition functions (1)
- patent (1)
- patent analysis (1)
- patient empowerment (1)
- peer evaluation (1)
- peer feedback (1)
- peer review (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance (1)
- performance models of virtual machines (1)
- persistent memory (1)
- persistenter Speicher (1)
- personality prediction (1)
- phosphoproteomics (1)
- photoplethysmography (1)
- physiological signals (1)
- pmem (1)
- point-based rendering (1)
- politics (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- polynomials (1)
- polystore (1)
- popular matching (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- post-translational modification (1)
- power law (1)
- predicated generic functions (1)
- predicted spectra (1)
- prediction (1)
- prediction models (1)
- primary healthcare (1)
- prior knowledge (1)
- probabilistic machine learning (1)
- probabilistic routing (1)
- probabilistic sensitivity analysis (1)
- probabilistisches maschinelles Lernen (1)
- process execution (1)
- process modeling (1)
- process models (1)
- production networks (1)
- programmierbare Materie (1)
- programming experience (1)
- programming tools (1)
- programs (1)
- progressive rendering (1)
- progressives Rendering (1)
- project based learning (1)
- props (1)
- proteomics graph networks (1)
- prototyping (1)
- pseudo-Boolean optimization (1)
- pseudoboolesche Optimierung (1)
- psychopy experiments (1)
- psychotherapy (1)
- qualitative model (1)
- qualitatives Modell (1)
- quality-adjusted life years (1)
- quantification (1)
- quantum computing (1)
- quasi-identifier discovery (1)
- quorum slices (1)
- radiation hardening (1)
- railways (1)
- rainbow connection (1)
- random graphs (1)
- random k-SAT (1)
- reactive object queries (1)
- rechnerunterstütztes Konstruieren (1)
- recommendations (1)
- reconfigurable systems (1)
- record linkage (1)
- regression testing (1)
- rekeying (1)
- relational structures (1)
- relationale Strukturen (1)
- reliability (1)
- religiously motivated (1)
- remote sensing (1)
- reported outcome measures (1)
- representation learning (1)
- residential segregation (1)
- reverse engineering (1)
- reward (1)
- risk (1)
- robot voice (1)
- rootstock (1)
- rough logs (1)
- run time analysis (1)
- runtime models (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- satisfiability threshold (1)
- savanna (1)
- scalability (1)
- scalability of blockchain (1)
- scalable (1)
- scarce tokens (1)
- schwach überwachtes maschinelles Lernen (1)
- screening tools (1)
- scripting languages (1)
- scrollytelling (1)
- secure data flow diagrams (DFDsec) (1)
- secure multi-execution (1)
- selbst-souveräne Identitäten (1)
- selbstanpassende Systeme (1)
- selbstbestimmte Identitäten (1)
- selbstheilende Systeme (1)
- selbstüberwachtes Lernen (1)
- self-adaptive software (1)
- self-adaptive systems (1)
- self-driving (1)
- self-efficacy (1)
- self-government (1)
- self-healing (1)
- semantic classification (1)
- semantic representations (1)
- semantische Klassifizierung (1)
- semantische Repräsentationen (1)
- sensors (1)
- sequence properties (1)
- serialization (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architecture (SOA) (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- shared leadership (1)
- siamese neural networks (1)
- sichere Datenflussdiagramme (DFDsec) (1)
- sidechain (1)
- situational awareness (1)
- skalierbar (1)
- small talk (1)
- smalltalk (1)
- soccer analytics (1)
- social bot detection (1)
- social interaction (1)
- social media analysis (1)
- social modulation (1)
- social networking (1)
- social robot (1)
- software analytics (1)
- software evolution (1)
- software visualization (1)
- software/hardware co-design (1)
- somatosensation (1)
- soundness (1)
- sozialen Medien (1)
- soziales Netzwerk (1)
- spaltenorientierte Datenbanken (1)
- spatial aggregation (1)
- spatio-temporal data management (1)
- specification of timed graph transformations (1)
- spectrum clustering (1)
- squeak (1)
- standard (1)
- standardization (1)
- stark verhaltenskorrekt sperrend (1)
- state management (1)
- state space modelling (1)
- static source-code analysis (1)
- statische Quellcodeanalyse (1)
- statistics (1)
- stochastic process (1)
- storytelling (1)
- stream processing (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- super stable matching (1)
- support vector machine (1)
- susceptibility (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- tabellarische Dateien (1)
- tabular data (1)
- task realization strategies (1)
- taxonomy (1)
- teamwork (1)
- technology (1)
- telemedicine (1)
- telework (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- test case prioritization (1)
- test results (1)
- text classification (1)
- the Arab world (1)
- theory (1)
- threat analysis (1)
- threat detection (1)
- threat model (1)
- tiefe Gauß-Prozesse (1)
- time horizon (1)
- timed automata (1)
- timed graph (1)
- tissue-awareness (1)
- tool building (1)
- tools (1)
- toxic comment classification (1)
- tractography (1)
- trajectories (1)
- transaction (1)
- transcriptomics (1)
- transfer learning (1)
- transformation (1)
- transversal hypergraph (1)
- tribunales de justicia (1)
- tribunales en línea (1)
- triple graph grammars (1)
- typed attributed symbolic graphs (1)
- typisierte attributierte Graphen (1)
- uBPMN (1)
- ubiquitous business process model and notation (uBPMN) (1)
- ubiquitous business process modeling (1)
- ubiquitous computing (ubicomp) (1)
- ubiquitous decision-aware business process (1)
- ubiquitous decisions (1)
- unbalancierter Datensatz (1)
- uncertainty (1)
- unique column combination (1)
- univariat (1)
- univariate (1)
- unsupervised (1)
- unsupervised learning (1)
- upper bound (1)
- usability (1)
- user engagement (1)
- user research framework (1)
- user-centered design (1)
- utility functions (1)
- variants (1)
- variational inference (1)
- variationelle Inferenz (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- verstärkendes Lernen (1)
- verteilte Berechnung (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- veteran (1)
- virtual (1)
- virtual machines (1)
- virtual reality (1)
- virtuell (1)
- virtuelle Maschinen (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vocational training (1)
- vulnerability (1)
- wake-up radio (1)
- weak supervision (1)
- weakly (1)
- wearable EEG (muse and neurosity crown) (1)
- wearables (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- well-being (1)
- wireless networks (1)
- zero-day (1)
- zufälliges k-SAT (1)
- Überwachtes Lernen (1)
- überprüfbare Nachweise (1)
- social network analysis (1)
- team creativity (1)
- intrapreneurship (1)
This vision article outlines the main building blocks of what we term AI Compliance, an effort to bridge two complementary research areas: computer science and the law.
Such research aims to model, measure, and affect the quality of AI artifacts, such as data, models, and applications, and thereby to facilitate adherence to legal standards.
DrDimont: explainable drug response prediction from differential analysis of multi-omics networks
(2022)
Motivation:
While it has been well established that drugs affect and help patients differently, personalized drug response predictions remain challenging.
Solutions based on single omics measurements have been proposed, and networks provide a means to incorporate molecular interactions into reasoning.
However, how to integrate the wealth of information contained in multiple omics layers still poses a complex problem.
Results:
We present DrDimont, Drug response prediction from Differential analysis of multi-omics networks.
It allows for comparative conclusions between two conditions and translates them into differential drug response predictions.
DrDimont focuses on molecular interactions.
It establishes condition-specific networks from correlation within an omics layer that are then reduced and combined into heterogeneous, multi-omics molecular networks. A novel semi-local, path-based integration step ensures integrative conclusions. Differential predictions are derived from comparing the condition-specific integrated networks.
DrDimont's predictions are explainable, i.e., the molecular differences that give rise to high differential drug scores can be retrieved. We predict differential drug response in breast cancer using transcriptomics, proteomics, phosphosite, and metabolomics measurements, contrasting estrogen receptor-positive and receptor-negative patients. DrDimont performs better than drug prediction based on differential protein expression or PageRank when evaluated on ground-truth data from cancer cell lines. We find the proteomics and phosphosite layers to carry the most information for distinguishing drug response.
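The overall idea of the pipeline (build condition-specific correlation networks, compare them, and score molecules by how much their interaction profile changes) can be sketched with a deliberately tiny NumPy toy. This is not the DrDimont implementation, and the naive edge-difference score below is a simplified stand-in for its semi-local, path-based integration step:

```python
import numpy as np

def correlation_network(samples: np.ndarray) -> np.ndarray:
    """Adjacency matrix of absolute Pearson correlations between features
    (rows are samples, columns are molecular features)."""
    corr = np.corrcoef(samples, rowvar=False)
    np.fill_diagonal(corr, 0.0)          # ignore self-correlations
    return np.abs(corr)

def differential_scores(cond_a: np.ndarray, cond_b: np.ndarray) -> np.ndarray:
    """Per-feature score: total change of the feature's interaction
    profile between the two condition-specific networks."""
    return np.abs(correlation_network(cond_a)
                  - correlation_network(cond_b)).sum(axis=0)

rng = np.random.default_rng(0)
a = rng.normal(size=(50, 4))                        # condition A: 50 samples, 4 features
b = a.copy()
b[:, 1] = b[:, 0] + rng.normal(scale=0.1, size=50)  # in B, feature 1 couples to feature 0
scores = differential_scores(a, b)                  # features 0 and 1 score highest
```

The highest-scoring features are exactly those whose interactions differ between the conditions, which is the sense in which such differential scores are explainable.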
Residential segregation is a widespread phenomenon that can be observed in almost every major city.
In these urban areas residents with different racial or socioeconomic background tend to form homogeneous clusters.
Schelling's famous agent-based model for residential segregation explains how such clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods.
For segregation to occur, all that is needed is a slight bias towards preferring similar neighbors.
Very recently, Schelling's model has been investigated from a game-theoretic point of view with selfish agents that strategically select their residential location.
In these games, agents can improve on their current location by performing a location swap with another agent who is willing to swap.
We significantly deepen these investigations by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy and on the dynamic properties of the resulting strategic multi-agent system. Moreover, as a new conceptual contribution, we also consider the influence of locality, i.e., if the location swaps are restricted to swaps of neighboring agents.
We give improved almost tight bounds on the Price of Anarchy for arbitrary underlying graphs and we present (almost) tight bounds for regular graphs, paths and cycles. Moreover, we give almost tight bounds for grids, which are commonly used in empirical studies.
For grids we also show that locality has a severe impact on the game dynamics.
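The swap dynamics studied above can be sketched in a few lines. This toy assumes a cycle topology and defines an agent's utility as the fraction of same-type neighbors (an illustrative choice, not necessarily the paper's exact model); improving-response dynamics repeatedly perform a swap from which both agents strictly benefit, until a swap equilibrium is reached:

```python
def neighbors(i, n):
    """Neighbors of position i on a cycle of length n."""
    return [(i - 1) % n, (i + 1) % n]

def utility(pos, types):
    """Fraction of same-type neighbors: the agent's utility."""
    nbrs = neighbors(pos, len(types))
    return sum(types[j] == types[pos] for j in nbrs) / len(nbrs)

def improving_swap(types):
    """First pair of differently-typed agents who both strictly gain by swapping."""
    n = len(types)
    for u in range(n):
        for v in range(n):
            if types[u] == types[v]:
                continue
            swapped = types[:]
            swapped[u], swapped[v] = swapped[v], swapped[u]
            # the agent formerly at u now sits at v, and vice versa
            if (utility(v, swapped) > utility(u, types)
                    and utility(u, swapped) > utility(v, types)):
                return u, v
    return None

types = [0, 1, 0, 1, 0, 1]   # alternating types: every agent has utility 0
for _ in range(100):          # improving-response dynamics
    swap = improving_swap(types)
    if swap is None:
        break                 # swap equilibrium reached
    u, v = swap
    types[u], types[v] = types[v], types[u]
print(types)  # two monochrome blocks: [0, 0, 0, 1, 1, 1]
```

Even from a fully mixed start, two swaps suffice here to reach a segregated equilibrium, illustrating how weak preferences drive clustering.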
"Bad" data has a direct impact on 88% of companies, with the average company losing 12% of its revenue due to it.
Duplicates, i.e., multiple but different representations of the same real-world entities, are among the main reasons for poor data quality, so finding and configuring the right deduplication solution is essential.
Existing data matching benchmarks focus on the quality of matching results and neglect other important factors, such as business requirements. Additionally, they often do not support the exploration of data matching results.
To address this gap between merely counting record pairs and comprehensively evaluating data matching solutions, we present the Frost platform.
It combines existing benchmarks, established quality metrics, cost and effort metrics, and exploration techniques, making it the first platform to support the systematic exploration and understanding of matching results.
Frost is implemented and published in the open-source application Snowman, which includes the visual exploration of matching results, as shown in Figure 1.
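The difference between pure pair counting and a business-aware evaluation can be illustrated with a small sketch that combines standard pair-level quality metrics with a hypothetical review-cost model. The cost model and all numbers are invented for illustration; they are not Frost's or Snowman's actual metric set:

```python
def matching_metrics(predicted: set, gold: set, cost_per_review: float = 0.1):
    """Pair-level quality metrics plus a toy business-cost estimate
    (assumption: every predicted duplicate pair is manually reviewed)."""
    tp = len(predicted & gold)                       # correctly found duplicate pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1,
            "review_cost": len(predicted) * cost_per_review}

# gold standard and a matcher's output, as sets of record-ID pairs
gold = {("a1", "a2"), ("b1", "b2"), ("c1", "c2")}
pred = {("a1", "a2"), ("b1", "b2"), ("b1", "c2")}
m = matching_metrics(pred, gold)
```

Two matchers with identical F1 can still differ sharply in review cost, which is why quality metrics alone do not determine the right deduplication solution for a given business requirement.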
HPI Future SOC Lab
(2024)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents results of research projects executed in 2020. Selected projects have presented their results on April 21st and November 10th 2020 at the Future SOC Lab Day events.
Detecting DNA of novel fungal pathogens using ResNets and a curated fungi-hosts data collection
(2022)
Background:
Emerging pathogens are a growing threat, but large data collections and approaches for predicting the risk associated with novel agents are limited to bacteria and viruses. Pathogenic fungi, which also pose a constant threat to public health, remain understudied.
Relevant data remain comparatively scarce and scattered among many different sources, hindering the development of sequencing-based detection workflows for novel fungal pathogens.
No prediction method working for agents across all three groups is available, even though the cause of an infection is often difficult to identify from symptoms alone.
Results:
We present a curated collection of fungal host range data, comprising records on human, animal and plant pathogens, as well as other plant-associated fungi, linked to publicly available genomes.
We show that it can be used to predict the pathogenic potential of novel fungal species directly from DNA sequences with either sequence homology or deep learning.
We develop learned, numerical representations of the collected genomes and visualize the landscape of fungal pathogenicity.
Finally, we train multi-class models predicting if next-generation sequencing reads originate from novel fungal, bacterial or viral threats.
Conclusions:
The neural networks trained using our data collection enable accurate detection of novel fungal pathogens.
A curated set of over 1400 genomes with host and pathogenicity metadata supports training of machine-learning models and sequence comparison, not limited to the pathogen detection task.
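As a rough illustration of the sequence-homology route mentioned above (not the ResNet models trained in the paper), a read can be assigned the label of the reference genome with the most similar k-mer profile. The sequences and labels below are made up for demonstration:

```python
from collections import Counter
import math

def kmer_profile(seq: str, k: int = 3) -> Counter:
    """Counts of all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two k-mer count vectors."""
    dot = sum(p[kmer] * q[kmer] for kmer in p)   # Counter returns 0 for missing keys
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def predict(read: str, references: dict) -> str:
    """Label a read with the class of the most similar reference profile."""
    rp = kmer_profile(read)
    return max(references, key=lambda label: cosine(rp, references[label]))

# hypothetical reference genomes with (invented) pathogenicity labels
refs = {
    "pathogenic": kmer_profile("ATATATATATGCGCATATAT"),
    "non-pathogenic": kmer_profile("GGGCCCGGGCCCGGGCCC"),
}
print(predict("ATATATGG", refs))
```

Real homology-based workflows use far more robust tools and much longer k-mers; the sketch only shows why a curated, labeled genome collection is the prerequisite for any such reference-based prediction.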
To convey an understanding of computational processes early on at school, the new computer science subject "Digitale Welt" (Digital World) was designed for grade 5, featuring a combination, unique across Germany, of computer science with application-oriented and socially relevant connections to ecology and economics. This technical report provides guidance for introducing the new subject.
When I started my PhD, I wanted to do something related to systems, but I wasn't sure exactly what. I didn't consider data management systems initially, because I was unaware of the richness of the systems work that data management systems were built on. I thought the field was mainly about SQL. Luckily, that view changed quickly.
The Hasso Plattner Institute (HPI), academically structured as the independent Faculty of Digital Engineering at the University of Potsdam, unites computer science research and teaching with the advantages of a privately financed institute and a tuition-free study program. Founder and namesake of the institute is the SAP co-founder Hasso Plattner, who also heads the Enterprise Platform and Integration Concepts (EPIC) research center which focuses on the technical aspects of business software with a vision to provide the fastest way to get insights out of enterprise data. Founded in 2006, the EPIC combines three research groups comprising autonomous data management, enterprise software engineering, and data-driven decision support.
Schelling's classical segregation model gives a coherent explanation for the wide-spread phenomenon of residential segregation. We introduce an agent-based saturated open-city variant, the Flip Schelling Process (FSP), in which agents, placed on a graph, have one out of two types and, based on the predominant type in their neighborhood, decide whether to change their types; similar to a new agent arriving as soon as another agent leaves the vertex. We investigate the probability that an edge {u,v} is monochrome, i.e., that both vertices u and v have the same type in the FSP, and we provide a general framework for analyzing the influence of the underlying graph topology on residential segregation. In particular, for two adjacent vertices, we show that a highly decisive common neighborhood, i.e., a common neighborhood where the absolute value of the difference between the number of vertices with different types is high, supports segregation and, moreover, that large common neighborhoods are more decisive. As an application, we study the expected behavior of the FSP on two common random graph models with and without geometry: (1) For random geometric graphs, we show that the existence of an edge {u,v} makes a highly decisive common neighborhood for u and v more likely. Based on this, we prove the existence of a constant c>0 such that the expected fraction of monochrome edges after the FSP is at least 1/2+c. (2) For Erdős–Rényi graphs we show that large common neighborhoods are unlikely and that the expected fraction of monochrome edges after the FSP is at most 1/2+o(1). Our results indicate that the cluster structure of the underlying graph has a significant impact on the obtained segregation strength.
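A minimal sketch of one FSP-like round can make the monochrome-edge quantity concrete. The code approximates the flip rule by a synchronous majority update on a circulant graph; this is an illustrative simplification of the process analyzed above, not its exact definition:

```python
def flip_step(types, adj):
    """One synchronous round: every vertex adopts the predominant type
    in its neighborhood; ties keep the current type."""
    new = types[:]
    for v, nbrs in enumerate(adj):
        ones = sum(types[u] for u in nbrs)
        zeros = len(nbrs) - ones
        if ones > zeros:
            new[v] = 1
        elif zeros > ones:
            new[v] = 0
    return new

def monochrome_fraction(types, edges):
    """Fraction of edges whose endpoints share a type."""
    return sum(types[u] == types[v] for u, v in edges) / len(edges)

# circulant graph on 200 vertices with offsets 1 and 2 (degree 4)
n = 200
edges = ([(i, (i + 1) % n) for i in range(n)]
         + [(i, (i + 2) % n) for i in range(n)])
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

types = [0] * n
types[50] = types[120] = 1   # two isolated minority vertices
before = monochrome_fraction(types, edges)                  # 0.98
after = monochrome_fraction(flip_step(types, adj), edges)   # 1.0: minorities absorbed
```

Isolated minorities vanish in a single round, pushing the monochrome-edge fraction up; the abstract's analysis quantifies exactly this segregation-strengthening effect for different graph topologies.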
Massive Open Online Courses (MOOCs) have attracted remarkable global media attention, but the spotlight has been concentrated on a handful of English-language providers. While Coursera, edX, Udacity, and FutureLearn received most of the attention and scrutiny, an entirely new ecosystem of local MOOC providers was growing in parallel. This ecosystem is harder to study than the major players: they are spread around the world, have less staff devoted to maintaining research data, and operate in multiple languages with university and corporate regional partners. To better understand how online learning opportunities are expanding through this regional MOOC ecosystem, we created a research partnership among 15 different MOOC providers from nine countries. We gathered data from over eight million learners in six thousand MOOCs, and we conducted a large-scale survey with more than ten thousand participants. From our analysis, we argue that these regional providers may be better positioned to meet the goals of expanding access to higher education in their regions than the better-known global providers. To make this claim we highlight three trends: first, regional providers attract a larger local population with more inclusive demographic profiles; second, students predominantly choose their courses based on topical interest, and regional providers do a better job at catering to those needs; and third, many students feel more at ease learning from institutions they already know and have references from. Our work underscores the importance of local education in the global MOOC ecosystem, while calling for additional research and conversations across the diversity of MOOC providers.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends lead to new emerging players that threaten existing industrial-aged companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-aged automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these are the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that justifies this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features a complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This generalizes the conjecture of Faben and Jerrum for modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. We show a dichotomy for this problem, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. Thanks to the generality of the findings of part one, this reduction does not affect the accessibility of the technical results. In order to prove the conjecture, it suffices to show that #_{p}Hom^{bip}[H] is #_{p}P-hard for every connected bipartite graph H that is not complete. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
In particular, this overcomes the substantial hurdle imposed by squares and leads us to explore the global structure of H and to prove the existence of explicit structures that imply hardness.
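As a concrete illustration of the problem family studied here, the following minimal Python sketch counts homomorphisms from G to a fixed H modulo a prime p by brute-force enumeration. The graph encoding and names are our own, and the exponential enumeration is purely illustrative; it does not reflect the algebraic techniques of the thesis.

```python
from itertools import product

def count_homs_mod_p(G, H, p):
    """Count graph homomorphisms from G to H, modulo the prime p.

    G and H are (vertex list, edge set) pairs; edges of H are stored as
    frozensets of endpoints, so loops are representable. A map phi is a
    homomorphism if every edge of G is mapped to an edge of H.
    Brute force, exponential in |V(G)|: for illustration only.
    """
    VG, EG = G
    VH, EH = H
    total = 0
    for assignment in product(VH, repeat=len(VG)):
        phi = dict(zip(VG, assignment))
        if all(frozenset((phi[u], phi[v])) in EH for (u, v) in EG):
            total += 1
    return total % p

# Example: homomorphisms from the path P_3 to K_2 are exactly the
# 2 proper 2-colourings of the path.
P3 = ([0, 1, 2], [(0, 1), (1, 2)])
K2 = ([0, 1], {frozenset((0, 1))})
```

Running `count_homs_mod_p(P3, K2, p)` gives 2 mod p, so the count vanishes modulo 2, which is the kind of cancellation the modular-counting analysis must control.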
Health app policy
(2022)
An abundant and growing supply of digital health applications (apps) exists in the commercial tech sector, which can be bewildering for clinicians, patients, and payers. A growing challenge for the health care system is therefore to facilitate the identification of safe and effective apps for health care practitioners and patients to generate the most health benefit, as well as to guide payer coverage decisions. Nearly all developed countries are attempting to define policy frameworks to improve decision-making, patient care, and health outcomes in this context. This study compares the national policy approaches currently in development or use for health apps in nine countries. We used secondary data, combined with a detailed review of policy and regulatory documents, and interviews with key individuals and experts in the field of digital health policy to collect data about implemented and planned policies and initiatives. We found that most approaches aim for centralized pipelines for health app approvals, although some countries are adding decentralized elements. While the countries studied are taking diverse paths, there is nevertheless broad international convergence in terms of requirements in the areas of transparency, health content, interoperability, and privacy and security. The sheer number of apps on the market in most countries represents a challenge for clinicians and patients. Our analyses of the relevant policies identified challenges in areas such as reimbursement, safety, and privacy, and suggest that more regulatory work is needed in the areas of operationalization, implementation, and international transferability of approvals. Cross-national efforts are needed around regulation and for countries to realize the benefits of these technologies.
“Ick bin een Berlina”
(2024)
Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.
Methods: Our study examined the impact of the Berlin dialect on perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (mean age = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either in the Berlin dialect or standard German and assessed its trustworthiness and competence.
Results: We found a positive relationship between participants' self-reported Berlin dialect proficiency and the trustworthiness of the dialect-speaking robot. Only when demographic factors were controlled for was there a positive association between participants' dialect proficiency and dialect performance and their assessment of the competence of the standard German-speaking robot. Participants' age, gender, length of residency in Berlin, and device used to respond also influenced assessments. Finally, the robot's competence positively predicted its trustworthiness.
Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.
V-Edge
(2022)
As we move from 5G to 6G, edge computing is one of the concepts that needs revisiting. Its core idea is still intriguing: instead of sending all data and tasks from an end user's device to the cloud, possibly covering thousands of kilometers and introducing delays lower-bounded by propagation speed, edge servers deployed in close proximity to the user (e.g., at some base station) serve as a proxy for the cloud. This is particularly interesting for upcoming machine-learning-based intelligent services, which require substantial computational and networking performance for continuous model training. However, this promising idea is hampered by the limited number of such edge servers. In this article, we discuss a way forward, namely the V-Edge concept. V-Edge helps bridge the gap between cloud, edge, and fog by virtualizing all available resources, including the end users' devices, and making these resources widely available. Thus, V-Edge acts as an enabler for novel microservices as well as cooperative computing solutions in next-generation networks. We introduce the general V-Edge architecture, and we characterize some of the key research challenges to overcome in order to enable widespread and intelligent edge services.
Fetal alcohol spectrum disorder (FASD) is underdiagnosed and often misdiagnosed as attention-deficit/hyperactivity disorder (ADHD). Here, we develop a screening tool for FASD in youth with ADHD symptoms. To develop the prediction model, medical record data from a German university outpatient unit are assessed, including 275 patients with FASD (with or without ADHD) and 170 patients with ADHD without FASD, all aged 0-19 years. We train 6 machine learning models based on 13 selected variables and evaluate their performance. Random forest models yield the best prediction models, with a cross-validated AUC of 0.92 (95% confidence interval [0.84, 0.99]). Follow-up analyses indicate that a random forest model with 6 variables (body length and head circumference at birth, IQ, socially intrusive behaviour, poor memory, and sleep disturbance) yields equivalent predictive accuracy. We implement the prediction model in a web-based app called FASDetect - a user-friendly, clinically scalable FASD risk calculator that is freely available at https://fasdetect.dhc-lab.hpi.de.
Purpose
Due to the increasing application of genome analysis and interpretation in medical disciplines, professionals require adequate education. Here, we present the implementation of personal genotyping as an educational tool in two genomics courses targeting Digital Health students at the Hasso Plattner Institute (HPI) and medical students at the Technical University of Munich (TUM).
Methods
We compared and evaluated the courses and the students' perceptions of the course setup using questionnaires.
Results
During the course, students changed their attitudes towards genotyping (HPI: 79% [15 of 19], TUM: 47% [25 of 53]). Predominantly, students became more critical of personal genotyping (HPI: 73% [11 of 15], TUM: 72% [18 of 25]) and most students stated that genetic analyses should not be allowed without genetic counseling (HPI: 79% [15 of 19], TUM: 70% [37 of 53]). Students found the personal genotyping component useful (HPI: 89% [17 of 19], TUM: 92% [49 of 53]) and recommended its inclusion in future courses (HPI: 95% [18 of 19], TUM: 98% [52 of 53]).
Conclusion
Students perceived the personal genotyping component as valuable in the described genomics courses. The implementation described here can serve as an example for future courses in Europe.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as medical imaging. Annotation costs form a barrier to extending deep learning to clinically relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, which subsequently enables annotation-efficient solving of downstream tasks.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities, e.g. MRI and CT, are heterogeneous in nature and imbalanced in quantities; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans, e.g. dental X-rays, usually exhibit more uniform color density distributions than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
Shams et al. report that glioma patients' motor status is predicted accurately by diffusion MRI metrics along the corticospinal tract using a support vector machine, reaching an overall accuracy of 77%. They show that these metrics are more effective than demographic and clinical variables.
Along-tract statistics enable white matter characterization using various diffusion MRI metrics. These diffusion models reveal detailed insights into white matter microstructural changes with development, pathology and function. Here, we aim at assessing the clinical utility of diffusion MRI metrics along the corticospinal tract, investigating whether motor glioma patients can be classified with respect to their motor status. We retrospectively included 116 brain tumour patients suffering from either left or right supratentorial, unilateral World Health Organization Grades II, III and IV gliomas with a mean age of 53.51 ± 16.32 years. Around 37% of patients presented with preoperative motor function deficits according to the Medical Research Council scale. At group-level comparison, the highest non-overlapping diffusion MRI differences were detected in the superior portion of the tracts' profiles: fractional anisotropy and fibre density decrease, while apparent diffusion coefficient, axial diffusivity, and radial diffusivity increase. To predict motor deficits, we developed a method based on a support vector machine using histogram-based features of diffusion MRI tract profiles (e.g. mean, standard deviation, kurtosis and skewness), following a recursive feature elimination method. Our model achieved high performance (74% sensitivity, 75% specificity, 74% overall accuracy and 77% area under the curve). We found that apparent diffusion coefficient, fractional anisotropy and radial diffusivity contributed more than other features to the model. Incorporating patient demographics and clinical features such as age, tumour World Health Organization grade, tumour location, gender, and resting motor threshold did not affect the model's performance, revealing that these features were not as effective as microstructural measures. These results shed light on the potential patterns of tumour-related microstructural white matter changes in the prediction of functional deficits.
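The histogram-based features named above (mean, standard deviation, skewness, kurtosis of a tract profile) can be sketched in plain Python. This is an illustrative stand-alone re-implementation, not the authors' code; the standardised-moment definitions of skewness and kurtosis are our assumption.

```python
import statistics as st

def tract_profile_features(profile):
    """Summarise a 1-D diffusion-metric profile (e.g. FA sampled along
    the corticospinal tract) by histogram-style moments. Skewness and
    kurtosis are computed as the standardised third and fourth central
    moments over the population standard deviation."""
    n = len(profile)
    mean = st.fmean(profile)
    sd = st.pstdev(profile)
    if sd == 0:
        skew = kurt = 0.0  # constant profile: moments are degenerate
    else:
        skew = sum((x - mean) ** 3 for x in profile) / (n * sd ** 3)
        kurt = sum((x - mean) ** 4 for x in profile) / (n * sd ** 4)
    return {"mean": mean, "sd": sd, "skewness": skew, "kurtosis": kurt}
```

Such per-metric feature vectors would then feed a classifier with recursive feature elimination, as described in the abstract.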
Background/Objective: Historically, fasting has been practiced not only for medical but also for religious reasons. Baha'is follow an annual religious intermittent dry fast of 19 days. We inquired into the motivation behind, and the subjective health impacts of, Baha'i fasting. Methods: A convergent parallel mixed-methods design was embedded in a clinical single-arm observational study. Semi-structured individual interviews were conducted before (n = 7), during (n = 8), and after fasting (n = 8). Three months after the fasting period, two focus group interviews were conducted (n = 5/n = 3). A total of 146 Baha'i volunteers answered an online survey at five time points before, during, and after fasting. Results: Fasting was found to play a central role in the religiosity of interviewees, implying changes in daily structures, spending time alone, engaging in religious practices, and experiencing social belonging. Results show an increase in mindfulness and well-being, which were accompanied by behavioural changes and experiences of self-efficacy and inner freedom. Survey scores point to an increase in mindfulness and well-being during fasting, while stress, anxiety, and fatigue decreased. Mindfulness remained elevated even three months after the fast. Conclusion: Baha'i fasting seems to enhance participants' mindfulness and well-being, lowering stress levels and reducing fatigue. Some of these effects lasted more than three months after fasting.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents results of research projects executed in 2019. Selected projects have presented their results on April 9th and November 12th 2019 at the Future SOC Lab Day events.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, the selection of cost- and performance-balancing configurations is challenging, due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes the (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose different models which address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budget, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we use the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase the performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
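To give a feel for the combinatorial selection problem that the LP models in contribution (1) solve at scale, here is a hypothetical brute-force sketch: one tuning option is picked per data chunk to minimise workload cost under a memory budget. The option names and cost numbers are invented; a real system would solve an LP rather than enumerate.

```python
from itertools import product

# Hypothetical per-chunk tuning options: (name, memory cost, scan cost).
OPTIONS = [
    ("uncompressed",       1.0, 1.0),
    ("dictionary-encoded", 0.4, 1.3),
    ("dict + index",       0.6, 0.5),
]

def best_configuration(chunk_accesses, memory_budget):
    """Exhaustively pick one tuning option per chunk, minimising total
    workload cost (access frequency x scan cost) subject to a memory
    budget. Exponential in the number of chunks: illustration only."""
    best = None
    for combo in product(OPTIONS, repeat=len(chunk_accesses)):
        mem = sum(o[1] for o in combo)
        if mem > memory_budget:
            continue  # infeasible under the memory budget
        cost = sum(a * o[2] for a, o in zip(chunk_accesses, combo))
        if best is None or cost < best[0]:
            best = (cost, [o[0] for o in combo])
    return best
```

With two chunks accessed 10 and 1 times and a budget of 1.2, the sketch indexes both chunks, mirroring how hot data justifies extra memory spend.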
Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset.
Any system at play in a data-driven project has a fundamental requirement: the ability to load data. The de-facto standard format to distribute and consume raw data is CSV. Yet, the plain-text and flexible nature of this format makes such files often difficult to parse and their content hard to load correctly, requiring cumbersome data preparation steps. We propose a benchmark to assess the robustness of systems in loading data from non-standard CSV formats and with structural inconsistencies. First, we formalize a model to describe the issues that affect real-world files and use it to derive a systematic "pollution" process to generate dialects for any given grammar. Our benchmark leverages the pollution framework for the CSV format. To guide pollution, we have surveyed thousands of real-world, publicly available CSV files, recording the problems we encountered. We demonstrate the applicability of our benchmark by testing and scoring 16 different systems: popular CSV parsing frameworks, relational database tools, spreadsheet systems, and a data visualization tool.
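One pollution step can be sketched with Python's csv module: the same rows are serialised in a non-standard dialect, and a loader only recovers them if it handles that dialect. The function names are ours, and this is a toy version of the benchmark's dialect generation, not its actual implementation.

```python
import csv
import io

def pollute(rows, delimiter=";", quotechar="'"):
    """Serialise rows into a non-standard CSV dialect: same content,
    different delimiter and quote character, everything quoted."""
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter, quotechar=quotechar,
               quoting=csv.QUOTE_ALL).writerows(rows)
    return buf.getvalue()

def load(text, **dialect):
    """A loader is robust if, given the dialect, it recovers the rows."""
    return list(csv.reader(io.StringIO(text), **dialect))

rows = [["id", "note"], ["1", "a;b"]]
polluted = pollute(rows)
```

A dialect-aware `load(polluted, delimiter=";", quotechar="'")` round-trips the data, while a naive default-dialect parse does not, which is exactly the robustness gap the benchmark scores.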
Background and aims: Accurate and user-friendly assessment tools quantifying alcohol consumption are a prerequisite to effective prevention and treatment programmes, including Screening and Brief Intervention. Digital tools offer new potential in this field. We developed the 'Animated Alcohol Assessment Tool' (AAA-Tool), a mobile app providing an interactive version of the World Health Organization's Alcohol Use Disorders Identification Test (AUDIT) that facilitates the description of individual alcohol consumption via culturally informed animation features. This pilot study evaluated the Russia-specific version of the Animated Alcohol Assessment Tool with regard to (1) its usability and acceptability in a primary healthcare setting, (2) the plausibility of its alcohol consumption assessment results and (3) the adequacy of its Russia-specific vessel and beverage selection. Methods: Convenience samples of 55 patients (47% female) and 15 healthcare practitioners (80% female) in 2 Russian primary healthcare facilities self-administered the Animated Alcohol Assessment Tool and rated their experience on the Mobile Application Rating Scale - User Version. Usage data were collected automatically during app use, and additional feedback on regional content was elicited in semi-structured interviews. Results: On average, patients completed the Animated Alcohol Assessment Tool in 6:38 min (SD = 2.49, range = 3.00-17.16). User satisfaction was good, with all subscale Mobile Application Rating Scale - User Version scores averaging >3 out of 5 points. A majority of patients (53%) and practitioners (93%) would recommend the tool to 'many people' or 'everyone'. Assessed alcohol consumption was plausible, with a low number (14%) of logically impossible entries. Most patients reported the Animated Alcohol Assessment Tool to reflect all vessels (78%) and all beverages (71%) they typically used.
Conclusion: High acceptability ratings by patients and healthcare practitioners, acceptable completion time, plausible alcohol usage assessment results and perceived adequacy of region-specific content underline the Animated Alcohol Assessment Tool's potential to provide a novel approach to alcohol assessment in primary healthcare. After its validation, the Animated Alcohol Assessment Tool might contribute to reducing alcohol-related harm by facilitating Screening and Brief Intervention implementation in Russia and beyond.
Background
Machine learning models promise to support diagnostic predictions, but may not perform well in new settings. Selecting the best model for a new setting without available data is challenging. We aimed to investigate the transportability by calibration and discrimination of prediction models for cognitive impairment in simulated external settings with different distributions of demographic and clinical characteristics.
Methods
We mapped and quantified relationships between variables associated with cognitive impairment using causal graphs, structural equation models, and data from the ADNI study. These estimates were then used to generate datasets and evaluate prediction models with different sets of predictors. We measured transportability to external settings under guided interventions on age, APOE ε4, and tau-protein, using performance differences between internal and external settings measured by calibration metrics and the area under the receiver operating characteristic curve (AUC).
Results
Calibration differences indicated that models predicting with causes of the outcome were more transportable than those predicting with consequences. AUC differences indicated inconsistent trends of transportability between the different external settings. Models predicting with consequences tended to show higher AUC in the external settings compared to internal settings, while models predicting with parents or all variables showed similar AUC.
Conclusions
We demonstrated with a practical prediction task example that predicting with causes of the outcome results in better transportability compared to anti-causal predictions when considering calibration differences. We conclude that calibration performance is crucial when assessing model transportability to external settings.
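The generative setup described in the Methods can be sketched as a toy linear structural equation model in which an intervention shifts a cause of the outcome, emulating an external setting. The variable names, coefficients, and noise levels below are invented for illustration and do not reproduce the study's ADNI-derived estimates.

```python
import random

def simulate(n, age_shift=0.0, seed=0):
    """Generate data from a toy linear SEM: age -> impairment -> marker.
    Age is a cause of the outcome (impairment), the marker a consequence.
    A guided intervention on age (age_shift) emulates an external
    setting with a different age distribution."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        age = rng.gauss(70 + age_shift, 5)
        impairment = 0.1 * age + rng.gauss(0, 1)     # outcome
        marker = 0.8 * impairment + rng.gauss(0, 1)  # consequence
        data.append((age, impairment, marker))
    return data
```

Fitting a model on `simulate(n)` and evaluating it on `simulate(n, age_shift=10)` lets one compare how a cause-based predictor (age) and an anti-causal predictor (marker) transport, in the spirit of the calibration comparison above.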
The detection of communities in graph datasets provides insight about a graph's underlying structure and is an important tool for various domains such as social sciences, marketing, traffic forecast, and drug discovery. While most existing algorithms provide fast approaches for community detection, their results usually contain strictly separated communities. However, most datasets would semantically allow for or even require overlapping communities that can only be determined at much higher computational cost. We build on an efficient algorithm, FOX, that detects such overlapping communities. FOX measures the closeness of a node to a community by approximating the count of triangles which that node forms with that community. We propose LAZYFOX, a multi-threaded adaptation of the FOX algorithm, which provides even faster detection without an impact on community quality. This allows for the analyses of significantly larger and more complex datasets. LAZYFOX enables overlapping community detection on complex graph datasets with millions of nodes and billions of edges in days instead of weeks. As part of this work, LAZYFOX's implementation was published and is available as a tool under an MIT licence at https://github.com/TimGarrels/LazyFox.
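The closeness measure described above, the number of triangles a node forms with a community, can be sketched exactly (without FOX's approximation) in a few lines. The encoding as an adjacency dictionary and the function name are ours.

```python
def triangle_closeness(node, community, adj):
    """Count triangles that `node` forms with a community: pairs of
    adjacent community members that are both neighbours of `node`.
    FOX/LAZYFOX approximate this count; here it is computed exactly."""
    members = [m for m in community if m in adj[node]]
    return sum(1 for i, u in enumerate(members)
               for v in members[i + 1:] if v in adj[u])

# Small undirected graph as an adjacency dictionary.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
```

Here node "a" forms two triangles with the community {"b", "c", "d"} (a-b-c and a-c-d), so its closeness score would be 2; LAZYFOX parallelises many such evaluations across threads.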
Psychology and nutritional science research has highlighted the impact of negative emotions and cognitive load on calorie consumption behaviour using subjective questionnaires. Isolated studies in other domains objectively assess cognitive load without considering its effects on eating behaviour. This study aims to explore the potential for developing an integrated eating behaviour assistant system that incorporates cognitive load factors. Two experimental sessions were conducted using custom-developed experimentation software to induce different stimuli. During these sessions, we collected 30 hours of physiological data, food consumption records, and affective-state questionnaires to automatically detect cognitive load and analyse its effect on food choice. Utilising grid search optimisation and leave-one-subject-out cross-validation, a support vector machine model achieved a mean classification accuracy of 85.12% for the two cognitive load tasks using eight relevant features. Statistical analysis was performed on calorie consumption and questionnaire data. Furthermore, 75% of the subjects with higher negative affect significantly increased consumption of specific foods after high-cognitive-load tasks. These findings offer insights into the intricate relationship between cognitive load, affective states, and food choice, paving the way for an eating behaviour assistant system to manage food choices during cognitive load. Future research should enhance system capabilities and explore real-world applications.
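Leave-one-subject-out cross-validation, as used in the evaluation above, can be sketched as a plain index splitter: every fold holds out all samples belonging to one subject so the classifier is never tested on a person it trained on. This is a generic illustration, not the study's code.

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_indices, test_indices) pairs where each fold holds
    out every sample of exactly one subject. `subject_ids` gives the
    subject label of each sample, in sample order."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield train, test
```

Averaging a model's accuracy over these folds yields the subject-independent estimate reported in the abstract, which is stricter than a random sample-level split.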
The active global SARS-CoV-2 pandemic has caused more than 426 million cases and 5.8 million deaths worldwide. The development of completely new drugs for such a novel disease is a challenging, time-intensive process. Despite researchers around the world working on this task, no effective treatments have been developed yet. This emphasizes the importance of drug repurposing, where treatments are found among existing drugs that are meant for different diseases. A common approach to this is based on knowledge graphs, which condense relationships between entities like drugs, diseases, and genes. Graph neural networks (GNNs) can then be used for the task at hand by predicting links in such knowledge graphs. Expanding on state-of-the-art GNN research, Doshi et al. recently developed the Dr-COVID model. We further extend their work using additional output interpretation strategies. The best aggregation strategy derives a top-100 ranking of 8,070 candidate drugs, 32 of which are currently being tested in COVID-19-related clinical trials. Moreover, we present an alternative application for the model: the generation of additional candidates based on a given pre-selection of drug candidates using collaborative filtering. In addition, we improved the implementation of the Dr-COVID model by significantly shortening the inference and pre-processing time through data parallelism. As drug repurposing is a task that requires high computation and memory resources, we further accelerate the post-processing phase using new emerging hardware: we propose a new approach to leverage high-capacity Non-Volatile Memory for aggregate drug ranking.
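A simple mean-rank aggregation is one plausible reading of the ranking aggregation mentioned above and can be sketched as follows; the exact strategy used for Dr-COVID may differ, and the penalty for drugs missing from a ranking is our assumption.

```python
def aggregate_rankings(rankings, top_k=3):
    """Combine several candidate-drug rankings by mean rank. A drug
    absent from a ranking is penalised with rank len(ranking) + 1.
    Returns the top_k drugs with the lowest mean rank."""
    drugs = {d for r in rankings for d in r}

    def mean_rank(drug):
        return sum(r.index(drug) + 1 if drug in r else len(r) + 1
                   for r in rankings) / len(rankings)

    return sorted(drugs, key=mean_rank)[:top_k]

# Three hypothetical per-strategy rankings of the same candidates.
rankings = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
```

Drug "A" has the lowest mean rank here and tops the aggregate list; in the paper's setting the same idea would be applied to thousands of candidates produced by different interpretation strategies.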
Background:
Contamination detection is an important step that should be carefully considered in the early stages of designing and performing microbiome studies to avoid biased outcomes. Detecting and removing true contaminants is challenging, especially in low-biomass samples or in studies lacking proper controls. Interactive visualization and analysis platforms are crucial to better guide this step and to help identify noisy patterns that could potentially be contamination. Additionally, external evidence, such as the aggregation of several contamination detection methods and the use of common contaminants reported in the literature, could help to discover and mitigate contamination.
Results:
We propose GRIMER, a tool that performs automated analyses and generates a portable and interactive dashboard integrating annotation, taxonomy, and metadata. It unifies several sources of evidence to help detect contamination. GRIMER is independent of quantification methods and directly analyzes contingency tables to create an interactive and offline report. Reports can be created in seconds and are accessible to nonspecialists, providing an intuitive set of charts to explore data distribution among observations and samples and their connections with external sources. Further, we compiled and used an extensive list of possible external contaminant taxa and common contaminants, comprising 210 genera and 627 species reported in 22 published articles.
Conclusion:
GRIMER enables visual data exploration and analysis, supporting contamination detection in microbiome studies. The tool and data presented are open source and available at https://gitlab.com/dacs-hpi/grimer.
Background
The aggregation of a series of N-of-1 trials presents an innovative and efficient study design, as an alternative to traditional randomized clinical trials. Challenges for the statistical analysis arise when there is carry-over or complex dependencies of the treatment effect of interest.
Methods
In this study, we evaluate and compare methods for the analysis of aggregated N-of-1 trials in different scenarios with carry-over and complex dependencies of treatment effects on covariates. For this, we simulate data of a series of N-of-1 trials for Chronic Nonspecific Low Back Pain based on assumed causal relationships parameterized by directed acyclic graphs. In addition to existing statistical methods such as regression models, Bayesian Networks, and G-estimation, we introduce a carry-over adjusted parametric model (COAPM).
Results
The results show that all evaluated existing models perform well when there is no carry-over and no treatment dependence. When there is carry-over, COAPM yields unbiased and more efficient estimates, while all other methods show some bias in the estimation. When there is known treatment dependence, all approaches that are capable of modeling it yield unbiased estimates. Finally, the efficiency of all methods decreases slightly when there are missing values, and the bias in the estimates can also increase.
Conclusions
This study presents a systematic evaluation of existing and novel approaches for the statistical analysis of a series of N-of-1 trials. We derive practical recommendations which methods may be best in which scenarios.
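The carry-over issue discussed in this abstract can be made concrete with a small simulation: a single simulated N-of-1 series in which the treatment keeps acting after switch-off, so a naive estimate is biased while a regression that includes the carry-over term as a covariate recovers the true effect. The exponential-decay model and all parameters below are invented for illustration; this is not the paper's COAPM.

```python
# Minimal sketch of an N-of-1 series with carry-over, assuming a simple
# exponential-decay carry-over process (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_days, effect, carry_decay = 200, -2.0, 0.5
treat = np.tile([1] * 10 + [0] * 10, n_days // 20)  # ABAB block design

# carry[t] = decay * carry[t-1] + treat[t-1]: treatment keeps acting after switch-off
carry = np.zeros(n_days)
for t in range(1, n_days):
    carry[t] = carry_decay * carry[t - 1] + treat[t - 1]
pain = 5.0 + effect * treat + 0.5 * effect * carry + rng.normal(0, 0.5, n_days)

# The naive estimate ignores carry-over; the adjusted model includes it as a covariate.
X_naive = np.column_stack([np.ones(n_days), treat])
X_adj = np.column_stack([np.ones(n_days), treat, carry])
naive = np.linalg.lstsq(X_naive, pain, rcond=None)[0][1]
adj = np.linalg.lstsq(X_adj, pain, rcond=None)[0][1]
print(f"naive: {naive:.2f}, carry-over adjusted: {adj:.2f} (true effect {effect})")
```

Because the carry-over term is positively correlated with the treatment indicator, the naive coefficient overstates the (negative) treatment effect; the adjusted coefficient stays close to the true value.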
Economic evaluation of digital therapeutic care apps for unsupervised treatment of low back pain
(2023)
Background:
Digital therapeutic care (DTC) programs are unsupervised app-based treatments that provide video exercises and educational material to patients with nonspecific low back pain during episodes of pain and functional disability. German statutory health insurance has been able to reimburse DTC programs since 2019, but evidence on efficacy and reasonable pricing remains scarce. This paper presents a probabilistic sensitivity analysis (PSA) to evaluate the efficacy and cost-utility of a DTC app against treatment as usual (TAU) in Germany.
Objective:
The aim of this study was to perform a PSA in the form of a Monte Carlo simulation based on the deterministic base case analysis to account for model assumptions and parameter uncertainty. We also intend to explore to what extent the results in this probabilistic analysis differ from the results in the base case analysis and to what extent a shortage of outcome data concerning quality-of-life (QoL) metrics impacts the overall results.
Methods:
The PSA builds upon a state-transition Markov chain with a 4-week cycle length over a model time horizon of 3 years from a recently published deterministic cost-utility analysis. A Monte Carlo simulation with 10,000 iterations and a cohort size of 10,000 was employed to evaluate the cost-utility from a societal perspective. Quality-adjusted life years (QALYs) were derived from Veterans RAND 6-Dimension (VR-6D) and Short-Form 6-Dimension (SF-6D) single utility scores. Finally, we also simulated reducing the price for a 3-month app prescription to analyze at which price threshold DTC would result in being the dominant strategy over TAU in Germany.
Results:
The Monte Carlo simulation yielded on average €135.97 (a currency exchange rate of €1 = US $1.069 is applicable) incremental cost and 0.004 incremental QALYs per person and year for the unsupervised DTC app strategy compared to in-person physiotherapy in Germany. The corresponding incremental cost-utility ratio (ICUR) amounts to an additional €34,315.19 per additional QALY. DTC yielded more QALYs in 54.96% of the iterations. DTC dominates TAU in 24.04% of the iterations for QALYs. Reducing the app price in the simulation from currently €239.96 to €164.61 for a 3-month prescription could yield a negative ICUR and thus make DTC the dominant strategy, even though the estimated probability of DTC being more effective than TAU is only 54.96%.
Conclusions:
Decision-makers should be cautious when considering the reimbursement of DTC apps since no significant treatment effect was found, and the probability of cost-effectiveness remains below 60% even for an infinite willingness-to-pay threshold. More app-based studies involving the utilization of QoL outcome parameters are urgently needed to account for the low and limited precision of the available QoL input parameters, which are crucial to making profound recommendations concerning the cost-utility of novel apps.
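A probabilistic sensitivity analysis of the kind described in this abstract can be sketched as a Monte Carlo loop over a small Markov cohort model. Every number below (transition probabilities, costs, utilities, effect sizes) is an invented placeholder, not an input or result of the study.

```python
# Illustrative PSA: a two-state ("in pain" / "recovered") Markov cohort model
# with parameters drawn fresh in each Monte Carlo iteration. All inputs are
# made up for demonstration purposes.
import numpy as np

rng = np.random.default_rng(2)
n_iter, n_cycles = 2000, 39          # ~3 years of 4-week cycles
cycle_years = 28 / 365.25

def run_arm(p_recover, cost_per_cycle, utility_ill, utility_well):
    ill = 1.0                        # fraction of the cohort still in pain
    cost = qaly = 0.0
    for _ in range(n_cycles):
        cost += ill * cost_per_cycle
        qaly += (ill * utility_ill + (1 - ill) * utility_well) * cycle_years
        ill *= 1 - p_recover
    return cost, qaly

inc_cost, inc_qaly = [], []
for _ in range(n_iter):
    p_tau = rng.beta(20, 80)                    # recovery probability, usual care
    p_dtc = p_tau + rng.normal(0.01, 0.005)     # small, uncertain app effect
    u_ill, u_well = rng.beta(60, 40), rng.beta(90, 10)
    tau = run_arm(p_tau, 50.0, u_ill, u_well)
    dtc = run_arm(max(p_dtc, 0.0), 55.0, u_ill, u_well)
    inc_cost.append(dtc[0] - tau[0])
    inc_qaly.append(dtc[1] - tau[1])

prob_effective = np.mean(np.array(inc_qaly) > 0)
print(f"mean incremental cost {np.mean(inc_cost):.2f}, "
      f"P(app arm more effective) = {prob_effective:.2f}")
```

Summarizing the joint distribution of incremental costs and QALYs over the iterations (rather than a single deterministic run) is what lets a PSA report statements like "DTC yielded more QALYs in X% of the iterations."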
Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirect measurement of other physiological responses initiated by the brain. Therefore, we used non-invasive and portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains different binary classifiers for Valence and Arousal dimensions from an incoming EEG data stream, achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-Score on the state-of-the-art AMIGOS dataset than previous work. Afterward, the pipeline was applied to the curated dataset from 15 participants using two consumer-grade EEG devices while watching 16 short emotional videos in a controlled environment. Mean F1-Scores of 87% (Arousal) and 82% (Valence) were achieved for an immediate label setting. Additionally, the pipeline proved to be fast enough to achieve predictions in real-time in a live scenario with delayed labels while continuously being updated. The significant discrepancy in classification scores between the immediate and delayed label settings motivates future work that includes more data. Thereafter, the pipeline is ready to be used for real-time applications of emotion classification.
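The delayed-label live scenario can be sketched as a predict-then-update loop over a data stream. The features and labels below are synthetic stand-ins for EEG features and valence/arousal labels, and the classifier is a generic incrementally trainable model, not the paper's pipeline.

```python
# Sketch of a continuously updated binary classifier in a streaming setting:
# predict on the current batch first, then update once its labels "arrive".
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])

def make_batch(n=32):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 8)) + y[:, None]   # class-shifted toy features
    return X, y

X0, y0 = make_batch()
clf.partial_fit(X0, y0, classes=classes)       # initial warm-up batch
accs = []
for _ in range(50):
    X, y = make_batch()
    accs.append((clf.predict(X) == y).mean())  # predict before labels are available
    clf.partial_fit(X, y)                      # delayed-label model update
print(f"streaming accuracy: {np.mean(accs):.2f}")
```

The key property mirrored here is that each batch is scored before its labels are used for training, so the reported accuracy reflects genuinely out-of-sample, real-time predictions.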
transferGWAS
(2022)
Motivation:
Medical images can provide rich information about diseases and their biology. However, investigating their association with genetic variation requires non-standard methods. We propose transferGWAS, a novel approach to perform genome-wide association studies directly on full medical images. First, we learn semantically meaningful representations of the images based on a transfer learning task, during which a deep neural network is trained on independent but similar data. Then, we perform genetic association tests with these representations.
Results:
We validate the type I error rates and power of transferGWAS in simulation studies of synthetic images. Then we apply transferGWAS in a genome-wide association study of retinal fundus images from the UK Biobank. This first-of-its-kind GWAS of full imaging data yielded 60 genomic regions associated with retinal fundus images, of which 7 are novel candidate loci for eye-related traits and diseases.
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changes operational paradigms, and requires an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU’s evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
An instance of the marriage problem is given by a graph G = (A ∪ B, E), together with, for each vertex of G, a strict preference order over its neighbors. A matching M of G is popular in the marriage instance if M does not lose a head-to-head election against any matching where vertices are voters. Every stable matching is a min-size popular matching; another subclass of popular matchings that always exists and can be easily computed is the set of dominant matchings. A popular matching M is dominant if M wins the head-to-head election against any larger matching. Thus, every dominant matching is a max-size popular matching, and it is known that the set of dominant matchings is the linear image of the set of stable matchings in an auxiliary graph. Results from the literature seem to suggest that stable and dominant matchings behave, from a complexity theory point of view, in a very similar manner within the class of popular matchings. The goal of this paper is to show that there are instead differences in the tractability of stable and dominant matchings and to further investigate their importance for popular matchings. First, we show that it is easy to check whether all popular matchings are also stable; however, it is co-NP-hard to check whether all popular matchings are also dominant. Second, we show how some new and recent hardness results on popular matching problems can be deduced from the NP-hardness of certain problems on stable matchings, also studied in this paper, thus showing that stable matchings can be employed to show not only positive results on popular matchings (as is known) but also most negative ones. Problems for which we show new hardness results include finding a min-size (resp., max-size) popular matching that is not stable (resp., dominant). A known result for which we give a new and simple proof is the NP-hardness of finding a popular matching when G is nonbipartite.
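The head-to-head election that defines popularity can be made concrete in a few lines: each vertex compares its partners under two matchings and votes for the one it prefers. The instance below is a toy example, not taken from the paper.

```python
# Sketch of the popularity vote: each vertex votes for the matching that gives
# it a more preferred partner; an unmatched vertex ranks worst of all.
def votes(pref, M1, M2):
    """Return M1's margin over M2 (positive means M1 wins the election)."""
    margin = 0
    for v, ranking in pref.items():
        a, b = M1.get(v), M2.get(v)
        if a == b:
            continue
        ra = ranking.index(a) if a in ranking else len(ranking)
        rb = ranking.index(b) if b in ranking else len(ranking)
        margin += 1 if ra < rb else -1
    return margin

# Toy bipartite instance where the stable matching gives everyone their
# first choice, so it beats the alternative matching 4 votes to 0.
pref = {
    "a1": ["b1", "b2"], "a2": ["b2", "b1"],
    "b1": ["a1", "a2"], "b2": ["a2", "a1"],
}
stable = {"a1": "b1", "b1": "a1", "a2": "b2", "b2": "a2"}
other = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}
print("stable beats other by", votes(pref, stable, other))
```

A matching is popular when no rival matching achieves a positive margin against it in this election; checking one pair of matchings, as above, is the elementary step underlying that definition.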
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest-neighbor methods and prove its statistical validity and power in mixed discrete-continuous data, as well as its asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation on synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
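To illustrate the constraint-based causal discovery setting the thesis builds on, the following toy sketch removes edges from a complete skeleton whenever a conditional independence test accepts. For brevity it uses a simple Gaussian partial-correlation test on synthetic linear data, not the thesis's k-nearest-neighbor CI test for mixed data.

```python
# Toy skeleton-discovery step: start from a complete graph over x, y, z and
# delete an edge if its endpoints look conditionally independent given some
# subset of the remaining variables. Data follow the chain x -> y -> z.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)   # x -> y
z = y + rng.normal(size=n)       # y -> z; x and z are independent given y
data = {"x": x, "y": y, "z": z}

def partial_corr(a, b, cond):
    """Correlation of a and b after regressing out the conditioning set."""
    def residual(v):
        if not cond:
            return v - v.mean()
        C = np.column_stack([np.ones(len(v))] + [data[c] for c in cond])
        beta, *_ = np.linalg.lstsq(C, v, rcond=None)
        return v - C @ beta
    return np.corrcoef(residual(data[a]), residual(data[b]))[0, 1]

edges = set(itertools.combinations(data, 2))
for a, b in sorted(edges):
    others = [v for v in data if v not in (a, b)]
    for k in range(len(others) + 1):
        for cond in itertools.combinations(others, k):
            if abs(partial_corr(a, b, cond)) < 0.05:  # crude CI threshold
                edges.discard((a, b))
print(sorted(edges))
```

On this data the spurious x–z edge is removed (conditioning on y renders them independent), while the true x–y and y–z edges survive; the accuracy of exactly these CI decisions is what the thesis's proposed test improves in mixed-type data.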
The question of whether a given partial solution to a problem can reasonably be extended occurs in many algorithmic approaches to optimization problems.
For instance, when enumerating minimal vertex covers of a graph G = (V, E), one usually arrives at the problem of deciding, for a vertex set U ⊆ V (pre-solution), whether there exists a minimal vertex cover S (i.e., a vertex cover S ⊆ V such that no proper subset of S is a vertex cover) with U ⊆ S (minimal extension of U).
We propose a general, partial-order-based formulation of such extension problems which allows us to model parameterization and approximation aspects of extension, and which also highlights relationships between extension tasks for different specific problems.
As examples, we study a number of specific problems which can be expressed and related in this framework. In particular, we discuss extension variants of the problems dominating set and feedback vertex/edge set.
All these problems are shown to be NP-complete even when restricted to bipartite graphs of bounded degree, with the exception of our extension version of feedback edge set on undirected graphs which is shown to be solvable in polynomial time.
For the extension variants of dominating and feedback vertex set, we also show NP-completeness for the restriction to planar graphs of bounded degree.
As a non-graph problem, we also study an extension version of the bin packing problem. We further consider the parameterized complexity of all these extension variants, where the parameter is a measure of the pre-solution as defined by our framework.
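The extension problem for minimal vertex covers can be stated in a few lines of brute-force code, which makes the definitions above concrete; given the NP-completeness results, exponential enumeration is the obvious naive approach for tiny instances.

```python
# Brute-force sketch of the extension problem: given a pre-solution U, does
# some MINIMAL vertex cover S contain U? Exponential enumeration, intended
# only to illustrate the definitions on a tiny example.
from itertools import combinations

def is_cover(S, edges):
    return all(u in S or v in S for u, v in edges)

def is_minimal_cover(S, edges):
    return is_cover(S, edges) and not any(is_cover(S - {v}, edges) for v in S)

def extends_to_minimal_cover(U, vertices, edges):
    for k in range(len(vertices) + 1):
        for S in map(set, combinations(sorted(vertices), k)):
            if U <= S and is_minimal_cover(S, edges):
                return True
    return False

# Path a-b-c: the minimal vertex covers are {b} and {a, c}.
V, E = {"a", "b", "c"}, [("a", "b"), ("b", "c")]
print(extends_to_minimal_cover({"a"}, V, E))        # {a, c} is a minimal extension
print(extends_to_minimal_cover({"a", "b"}, V, E))   # {a, b} covers, but only non-minimally
```

The second query illustrates the subtlety driving the hardness results: a pre-solution can be a vertex cover itself and still fail to extend, because every cover containing it has a removable vertex.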
Law smells
(2022)
Building on the computer science concept of code smells, we initiate the study of law smells, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples (namely, duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession), we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. We introduce text-based and graph-based methods to identify instances of law smells, confirming their utility in practice using the United States Code as a test case. Our work demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of legal code, thus drawing attention to an understudied area in the intersection of law and computer science and highlighting the potential of computational legal drafting.
We study the classical, two-sided stable marriage problem under pairwise preferences. In the most general setting, agents are allowed to express their preferences as comparisons of any two of their edges, and they also have the right to declare a draw or even withdraw from such a comparison. This freedom is then gradually restricted as we specify six stages of orderedness in the preferences, ending with the classical case of strictly ordered lists. We study all cases occurring when combining the three known notions of stability (weak, strong, and super-stability) under the assumption that each side of the bipartite market obtains one of the six degrees of orderedness. By designing three polynomial algorithms and two NP-completeness proofs, we determine the complexity of all cases not yet known and thus give an exact boundary in terms of preference structure between tractable and intractable cases.
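At the strictly ordered end of the spectrum, the three stability notions coincide and the classical deferred-acceptance (Gale-Shapley) algorithm applies. A minimal sketch with toy preferences:

```python
# Standard deferred-acceptance sketch for strictly ordered preference lists.
# Preferences are toy data; "men" propose, "women" tentatively accept.
def gale_shapley(men_pref, women_pref):
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    free = list(men_pref)
    next_idx = {m: 0 for m in men_pref}
    engaged = {}                        # woman -> man
    while free:
        m = free.pop()
        w = men_pref[m][next_idx[m]]    # m proposes to his best untried woman
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])     # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)              # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))
```

The resulting matching is stable: m1 would prefer w1, but w1 prefers her assigned partner m2, so no blocking pair exists. The paper's contribution lies in the intermediate preference structures where such a clean algorithm is no longer guaranteed to exist.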
Our input is a complete graph G on n vertices where each vertex has a strict ranking of all other vertices in G. The goal is to construct a matching in G that is popular. A matching M is popular if M does not lose a head-to-head election against any matching M′: here each vertex casts a vote for the matching in {M, M′} in which it gets a better assignment. Popular matchings need not exist in the given instance G, and the popular matching problem is to decide whether one exists or not. The popular matching problem in G is easy to solve for odd n. Surprisingly, the problem becomes NP-complete for even n, as we show here. This is one of the few graph theoretic problems efficiently solvable when n has one parity and NP-complete when n has the other parity.
Human observer net
(2022)
Background:
Current software applications for human observer studies of images lack flexibility in study design, platform independence, multicenter use, and assessment methods and are not open source, limiting accessibility and expandability.
Purpose:
To develop a user-friendly software platform that enables efficient human observer studies in medical imaging with flexibility of study design.
Materials and Methods:
Software for human observer imaging studies was designed as an open-source web application to facilitate access, platform-independent usability, and multicenter studies. Different interfaces for study creation, participation, and management of results were implemented. The software was evaluated in human observer experiments between May 2019 and March 2021, in which duration of observer responses was tracked. Fourteen radiologists evaluated and graded software usability using the 100-point system usability scale. The application was tested in Chrome, Firefox, Safari, and Edge browsers.
Results:
Software function was designed to allow visual grading analysis (VGA), multiple-alternative forced-choice (m-AFC), receiver operating characteristic (ROC), localization ROC, free-response ROC, and customized designs. The mean duration of reader responses per image or per image set was 6.2 seconds ± 4.8 (standard deviation), 5.8 seconds ± 4.7, 8.7 seconds ± 5.7, and 6.0 seconds ± 4.5 in four-AFC with 160 image quartets per reader, four-AFC with 640 image quartets per reader, localization ROC, and experimental studies, respectively. The mean system usability scale score was 83 ± 11 (out of 100). The documented code and a demonstration of the application are available online (https://github.com/genskeu/HON, https://hondemo.pythonanywhere.com/).
Conclusion:
A user-friendly and efficient open-source application was developed for human reader experiments that enables study design versatility, as well as platform-independent and multicenter usability.
Local laws on urban policy, i.e., ordinances, directly affect our daily life in various ways (health, business, etc.), yet in practice they remain opaque and complex for many citizens. This article focuses on an approach to make urban policy more accessible and comprehensible to the general public and to government officials, while also addressing pertinent social media postings. Due to the intricacies of natural language, ranging from complex legalese in ordinances to informal lingo in tweets, it is practical to harness human judgment here. To this end, we mine ordinances and tweets via reasoning based on commonsense knowledge so as to better account for pragmatics and semantics in the text. Ours is pioneering work in ordinance mining, and thus there is no prior labeled training data available for learning. This gap is filled by commonsense knowledge, a prudent choice in situations involving a lack of adequate training data. Ordinance mining can be beneficial to the public in fathoming policies and to officials in assessing policy effectiveness based on public reactions. This work contributes to smart governance, leveraging transparency in governing processes via public involvement. We focus significantly on ordinances contributing to smart cities; hence, an important goal is to assess how well an urban region is heading towards becoming a smart city, based on the mapping of its policies to smart city characteristics and the corresponding public satisfaction.
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
About 15 years ago, the first Massive Open Online Courses (MOOCs) appeared and revolutionized online education with more interactive and engaging course designs. Yet, keeping learners motivated and ensuring high satisfaction is one of the challenges today's course designers face. Therefore, many MOOC providers have employed gamification elements, which only briefly boost extrinsic motivation and depend on platform support. In this article, we introduce and evaluate a gameful learning design we used in several iterations of computer science education courses. For each of the courses on the fundamentals of the Java programming language, we developed a self-contained, continuous story that accompanies learners through their learning journey and helps visualize key concepts. Furthermore, we share our approach to creating the surrounding story in our MOOCs and provide a guideline for educators to develop their own stories. Our data and the long-term evaluation spanning four Java courses between 2017 and 2021 indicate the openness of learners toward storified programming courses in general and highlight those elements that had the highest impact. While only a few learners did not like the story at all, most learners consumed the additional story elements we provided. However, learners' interest in influencing the story through majority voting was negligible and did not show a considerable positive impact, so we continued with a fixed story instead. We did not find evidence that learners just participated in the narrative because they worked on all materials. Instead, for 10-16% of learners, the story was their main course motivation. We also investigated differences in the presentation format and concluded that several longer audio-book-style videos were preferred most by learners in comparison to animated videos or different textual formats.
Surprisingly, the availability of a coherent story embedding examples and providing a context for the practical programming exercises also led to a slightly higher ranking in the perceived quality of the learning material (by 4%). With our research in the context of storified MOOCs, we advance gameful learning designs, foster learner engagement and satisfaction in online courses, and help educators ease knowledge transfer for their learners.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP is not scalable, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we can flexibly extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
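The assignment problem behind these allocations can be illustrated in miniature: assign each query to one replica node subject to a load-balance constraint while minimizing the total data the nodes must store. The exhaustive search below stands in for the ILP formulation, and all fragment sizes and workload shares are invented:

```python
# Toy partial-replication allocation: each query goes to one of two replica
# nodes; a node must store the union of fragments its queries access.
# Exhaustive search over assignments replaces the thesis's ILP models.
from itertools import product

fragment_size = {"A": 4, "B": 3, "C": 2, "D": 1}
queries = {  # query -> (fragments accessed, workload share)
    "q1": ({"A", "B"}, 0.3), "q2": ({"B", "C"}, 0.3),
    "q3": ({"C", "D"}, 0.2), "q4": ({"A", "D"}, 0.2),
}
n_nodes, max_load = 2, 0.6

best = None
for assign in product(range(n_nodes), repeat=len(queries)):
    loads = [0.0] * n_nodes
    stored = [set() for _ in range(n_nodes)]
    for (frags, share), node in zip(queries.values(), assign):
        loads[node] += share
        stored[node] |= frags
    if max(loads) > max_load:
        continue  # violates the load-balance constraint
    total = sum(fragment_size[f] for s in stored for f in s)
    if best is None or total < best[0]:
        best = (total, assign)
print("min total stored data:", best[0])
```

With these toy numbers, the best feasible partial allocation stores 14 size units in total, whereas naive full replication on two nodes would store 20, mirroring the storage savings the thesis targets at scale.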
The transversal hypergraph problem asks to enumerate the minimal hitting sets of a hypergraph. If the solutions have bounded size, Eiter and Gottlob [SICOMP'95] gave an algorithm running in output-polynomial time, but whose space requirement also scales with the output. We improve this to polynomial delay and space. Central to our approach is the extension problem, deciding for a set X of vertices whether it is contained in any minimal hitting set. We show that this is one of the first natural problems to be W[3]-complete. We give an algorithm for the extension problem running in time O(m^(|X|+1) n) and prove a SETH lower bound showing that this is close to optimal. We apply our enumeration method to the discovery problem of minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
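The interplay between enumeration and the extension problem can be sketched as follows: grow a partial hitting set and recurse only while it still extends to some minimal hitting set. The extension check below is brute force, unlike the paper's dedicated algorithm, and everything is illustrative only.

```python
# Sketch: enumerate minimal hitting sets, pruning branches via the extension
# problem. The extension check is brute force; the paper gives an efficient
# algorithm and shows the problem is W[3]-complete in general.
from itertools import combinations

def hits(S, sets):
    return all(S & e for e in sets)

def is_minimal(S, sets):
    return hits(S, sets) and all(not hits(S - {v}, sets) for v in S)

def extends(X, universe, sets):
    return any(
        X <= S and is_minimal(S, sets)
        for k in range(len(universe) + 1)
        for S in map(set, combinations(sorted(universe), k))
    )

def minimal_hitting_sets(universe, sets):
    out = []
    def grow(X, rest):
        if is_minimal(X, sets):
            out.append(X)
            return
        for i, v in enumerate(rest):
            if extends(X | {v}, universe, sets):  # prune dead branches
                grow(X | {v}, rest[i + 1:])
    grow(set(), sorted(universe))
    return out

# Triangle hypergraph: every pair of vertices is a minimal hitting set.
H = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
print(minimal_hitting_sets({1, 2, 3}, H))
```

Pruning with the extension oracle is what turns enumeration into a polynomial-delay procedure in the paper: every branch the search keeps open is guaranteed to lead to at least one further solution.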
Circular economy
(2021)
In a circular economy, the use of recycled resources in production is a key performance indicator for management. Yet, academic studies are still unable to inform managers on appropriate recycling and pricing policies. We develop an optimal control model integrating the recycling rate of a firm that can use both virgin and recycled resources in the production process. Our model accounts for the influence of recycling on both the supply and demand sides. The positive effect of a firm's use of recycled resources diminishes over time but may increase through investments. Using general formulations for demand and cost, we analytically examine joint dynamic pricing and recycling investment policies in order to determine their optimal interplay over time. We provide numerical experiments to assess the existence of a steady state and to conduct sensitivity analyses with respect to various model parameters. The analysis shows how to dynamically adapt jointly optimized controls to reach sustainability in the production process. Our results pave the way to sounder sustainable practices for firms operating within a circular economy.