Hasso-Plattner-Institut für Digital Engineering GmbH
Document Type
- Article (194)
- Doctoral Thesis (100)
- Other (84)
- Monograph/Edited Volume (42)
- Postprint (22)
- Conference Proceeding (4)
- Part of a Book (1)
- Habilitation Thesis (1)
- Report (1)
Keywords
- MOOC (42)
- digital education (37)
- e-learning (36)
- Digitale Bildung (34)
- online course creation (34)
- online course design (34)
- Kursdesign (33)
- Micro Degree (33)
- Online-Lehre (33)
- Onlinekurs (33)
- Onlinekurs-Produktion (33)
- micro degree (33)
- micro-credential (33)
- online teaching (33)
- machine learning (21)
- maschinelles Lernen (10)
- Cloud Computing (7)
- E-Learning (6)
- cloud computing (6)
- deep learning (6)
- evaluation (6)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- In-Memory Technologie (5)
- Multicore Architekturen (5)
- Smalltalk (5)
- artificial intelligence (5)
- künstliche Intelligenz (5)
- multicore architectures (5)
- research projects (5)
- 3D printing (4)
- BPMN (4)
- Blockchain (4)
- Digitalisierung (4)
- Duplikaterkennung (4)
- Künstliche Intelligenz (4)
- MOOCs (4)
- Machine Learning (4)
- Scrum (4)
- blockchain (4)
- business process management (4)
- cyber-physical systems (4)
- duplicate detection (4)
- fabrication (4)
- image processing (4)
- inertial measurement unit (4)
- innovation (4)
- natural language processing (4)
- openHPI (4)
- probabilistic timed systems (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- quantitative analysis (4)
- smart contracts (4)
- 3D visualization (3)
- 3D-Visualisierung (3)
- Business process models (3)
- DMN (3)
- Data profiling (3)
- Datenaufbereitung (3)
- Datenqualität (3)
- Design Thinking (3)
- Dynamic pricing (3)
- Geschäftsprozessmanagement (3)
- HPI Schul-Cloud (3)
- Hasso Plattner Institute (3)
- Hasso-Plattner-Institut (3)
- Innovation (3)
- Maschinelles Lernen (3)
- Natural language processing (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- Teamwork (3)
- cloud (3)
- computer vision (3)
- course design (3)
- creativity (3)
- data preparation (3)
- data profiling (3)
- digital health (3)
- digital transformation (3)
- digitale Bildung (3)
- digitalization (3)
- entity resolution (3)
- graph transformation systems (3)
- in-memory technology (3)
- intrusion detection (3)
- real-time rendering (3)
- reinforcement learning (3)
- tiefes Lernen (3)
- trajectory data (3)
- transparency (3)
- user experience (3)
- user-generated content (3)
- 3D point clouds (2)
- Agile (2)
- Android (2)
- Angriffserkennung (2)
- Anomalieerkennung (2)
- Answer set programming (2)
- Artificial Intelligence (2)
- Behavioral economics (2)
- Big Data (2)
- Bounded Model Checking (2)
- Clinical decision support (2)
- Cloud-Security (2)
- Data Profiling (2)
- Datenbank (2)
- Datenbanksysteme (2)
- Datenvisualisierung (2)
- Debugging (2)
- Decision models (2)
- Deep Learning (2)
- Dynamic programming (2)
- Economic evaluation (2)
- Electronic health record (2)
- Energy (2)
- Entitätsauflösung (2)
- Entitätsverknüpfung (2)
- Entscheidungsfindung (2)
- European Union (2)
- Europäische Union (2)
- Fabrikation (2)
- Feature selection (2)
- Forschungskolleg (2)
- Gene expression (2)
- Graphentransformationssysteme (2)
- IDS (2)
- Identitätsmanagement (2)
- In-Memory (2)
- In-Memory technology (2)
- Information flow control (2)
- Internet der Dinge (2)
- Internet of Things (2)
- Java (2)
- Kanban (2)
- Klausurtagung (2)
- Learning Analytics (2)
- Lecture Video Archive (2)
- MAC security (2)
- MERLOT (2)
- Massive Open Online Course (MOOC) (2)
- Metadaten (2)
- Metanome (2)
- Modellprüfung (2)
- Oligopoly competition (2)
- OptoGait (2)
- P2P (2)
- Peer Assessment (2)
- Peer-to-Peer ridesharing (2)
- Ph.D. retreat (2)
- Prior knowledge (2)
- Programmieren (2)
- Python (2)
- RGB-D cameras (2)
- Reproducible benchmarking (2)
- SIEM (2)
- Secure Configuration (2)
- Security (2)
- Service-oriented Systems Engineering (2)
- Sicherheit (2)
- Smart micro-grids (2)
- Social Media Analysis (2)
- Squeak (2)
- Taxonomy (2)
- Treemaps (2)
- Versionsverwaltung (2)
- Visualisierung (2)
- Visualization (2)
- Werkzeuge (2)
- X-ray (2)
- Zebris (2)
- acute kidney injury (2)
- anomaly detection (2)
- artificial intelligence (2)
- artificial intelligence for health (2)
- assignments (2)
- benchmark (2)
- bounded model checking (2)
- business processes (2)
- capstone course (2)
- categories (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- clustering (2)
- collaborative work (2)
- comparison of devices (2)
- computer-mediated therapy (2)
- cyber-physische Systeme (2)
- data integration (2)
- data pipeline (2)
- data quality (2)
- database (2)
- database systems (2)
- deduplication (2)
- deferred choice (2)
- design thinking (2)
- digital enlightenment (2)
- digital learning platform (2)
- digital sovereignty (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- digitale Transformation (2)
- disorder recognition (2)
- distributed systems (2)
- eindeutige Spaltenkombination (2)
- end-stage kidney disease (2)
- entity linking (2)
- everyday life (2)
- federated learning (2)
- flexibility (2)
- formal semantics (2)
- formal verification (2)
- formale Verifikation (2)
- framework (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gait analysis (2)
- gait analysis algorithm (2)
- game dynamics (2)
- genome-wide association (2)
- geospatial data (2)
- hate speech detection (2)
- human activity recognition (2)
- human motion (2)
- identity management (2)
- image stylization (2)
- inclusion dependencies (2)
- index selection (2)
- interactive media (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- labeling (2)
- learning path (2)
- lebenslanges Lernen (2)
- lifelong learning (2)
- liveness (2)
- location prediction algorithm (2)
- maschinelles Sehen (2)
- medical documentation (2)
- memory (2)
- metadata (2)
- mobile mapping (2)
- model checking (2)
- model-driven engineering (2)
- modularization (2)
- motion capture (2)
- multiple modalities (2)
- named entity recognition (2)
- nested graph conditions (2)
- neurological disorders (2)
- non-photorealistic rendering (2)
- online learning (2)
- oracles (2)
- outlier detection (2)
- parameterized complexity (2)
- peer assessment (2)
- pervasive healthcare (2)
- price of anarchy (2)
- privacy (2)
- privacy and security (2)
- privacy attack (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- process mining (2)
- programmable matter (2)
- proteomics (2)
- public dataset (2)
- quality assessment (2)
- query optimization (2)
- rapid eGFRcrea decline (2)
- real-time (2)
- research school (2)
- retrospective (2)
- risk-aware dispatching (2)
- security (2)
- security analytics (2)
- self-paced learning (2)
- self-sovereign identity (2)
- sensor data (2)
- service-oriented systems engineering (2)
- simulation (2)
- social media (2)
- software development (2)
- software engineering (2)
- software process improvement (2)
- speech (2)
- stable matching (2)
- study (2)
- style transfer (2)
- text mining (2)
- thinking styles (2)
- transport network companies (2)
- treemaps (2)
- trust (2)
- trustworthiness (2)
- typed attributed graphs (2)
- unique column combinations (2)
- user interaction (2)
- variable geometry truss (2)
- version control (2)
- virtuelle Realität (2)
- visualization (2)
- voice (2)
- web-based rendering (2)
- workflow patterns (2)
- (modular) counting (1)
- 0-day (1)
- 3D Druck (1)
- 3D Point Clouds (1)
- 3D Visualization (1)
- 3D city model (1)
- 3D geovirtual environment (1)
- 3D geovisualization (1)
- 3D geovisualization system (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D-Druck (1)
- 3D-Einbettung (1)
- 3D-Geovisualisierung (1)
- 3D-Geovisualisierungssystem (1)
- 3D-Punktwolke (1)
- 3D-Punktwolken (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3D-embedding (1)
- 3D-geovirtuelle Umgebung (1)
- ACINQ (1)
- AI Act (1)
- AI Lab (1)
- AMIGOS dataset (1)
- APT (1)
- ASIC (1)
- Abgleich von Abhängigkeiten (1)
- Abhängigkeit (1)
- Abschlussbericht (1)
- Access Control (1)
- Activity-oriented Optimization (1)
- Adoption effects (1)
- Adressnormalisierung (1)
- Advanced Persistent Threats (1)
- Advertising (1)
- Agent-based model (1)
- Agile methods (1)
- Agilität (1)
- Aktivitäten (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithms (1)
- Alzheimer's Disease (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Anfrageoptimierung (1)
- Annahme (1)
- Anomaly detection (1)
- Anwendungsbedingungen (1)
- Application Container Security (1)
- Approximation algorithms (1)
- Architecture synthesis (1)
- Architectures (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Artistic Image Stylization (1)
- Arzt-Patient-Beziehung (1)
- Attributsicherung (1)
- Audit (1)
- Aufzählungsalgorithmen (1)
- Auktion (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Auswirkungen (1)
- Autoimmune (1)
- Automated parsing (1)
- Automatic domain term extraction (1)
- Automatisierung (1)
- Average-Case Analyse (1)
- BCCC (1)
- BIM (1)
- BRCA1 (1)
- BTC (1)
- Back Pain (1)
- Bahnwesen (1)
- Bandwidth (1)
- Batch activity (1)
- Batch processing (1)
- Batch-Aktivität (1)
- Bayesian Networks (1)
- Bedrohungsanalyse (1)
- Bedrohungserkennung (1)
- Bedrohungsmodell (1)
- Behavior change (1)
- Behavioral equivalence and refinement (1)
- Benchmark (1)
- Benutzerinteraktion (1)
- Beschriftung (1)
- Betriebssysteme (1)
- Bewegung (1)
- Big Five Model (1)
- Big Five model (1)
- Bildungstechnologien (1)
- Bildverarbeitung (1)
- Binary Classification (1)
- Binäre Klassifikation (1)
- Biomarker-Erkennung (1)
- Bisimulation and simulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain Auth (1)
- Blockchain Governance (1)
- Blockchain-Konsortium R3 (1)
- Blockchain-enabled Governance (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bluemix-Plattform (1)
- Blöcke (1)
- Boolean Networks (1)
- Boolean satisfiability (1)
- Boolesche Erfüllbarkeit (1)
- Bounded Backward Model Checking (1)
- Brand Personality (1)
- Building Management (1)
- Business Process Management (1)
- Business process choreographies (1)
- Business process improvement (1)
- Business process modeling (1)
- Business process simulation (1)
- Byzantine Agreement (1)
- C++ tool (1)
- CMOS technology (1)
- COVID-19 (1)
- Car safety management (1)
- Cardinality estimation (1)
- Case Management (1)
- Causal inference (1)
- Causal structure learning (1)
- Causality (1)
- Cheating attacks (1)
- Chernoff-Hoeffding theorem (1)
- Chile (1)
- Chronic Nonspecific Low (1)
- Circular economy (1)
- CityGML (1)
- Clinical Data (1)
- Clinical predictive modeling (1)
- Clinical risk prediction (1)
- Cloud (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Cloud computing (1)
- CloudRAID (1)
- CloudRAID for Business (1)
- Clusteranalyse (1)
- Clustering (1)
- Co-production (1)
- Collaborative learning (1)
- Colored Coins (1)
- Colored Petri Net (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Complexity (1)
- Compound Values (1)
- Computational Photography (1)
- Computer architecture (1)
- Computergrafik (1)
- Computervision (1)
- Computing (1)
- Conceptual Fit (1)
- Conceptual modeling (1)
- Contamination (1)
- ContextErlang (1)
- Creative (1)
- Critical pair analysis (CPA) (1)
- Cross-platform (1)
- Crowd Resourcing (1)
- Cryptography (1)
- Cultural factors (1)
- Curex (1)
- Custom Writable Class (1)
- Cyber-Sicherheit (1)
- Cyber-physikalische Systeme (1)
- Cybersecurity (1)
- Cybersecurity e-Learning (1)
- Cybersicherheit (1)
- Cybersicherheit E-Learning (1)
- DAG (1)
- DALY (1)
- DAO (1)
- DBMS (1)
- DPoS (1)
- DSA (1)
- Data Analytics (1)
- Data Structure Optimization (1)
- Data breach (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data modeling (1)
- Data partitioning (1)
- Data processing (1)
- Data profiling application (1)
- Data quality (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-driven price anticipation (1)
- Data-driven strategies (1)
- Database (1)
- Dateistruktur (1)
- Daten-Analytik (1)
- Datenabgleich (1)
- Datenanalyse (1)
- Datenbankoptimierung (1)
- Datenbereinigung (1)
- Datenintegration (1)
- Datenmodelle (1)
- Datensatz (1)
- Datensatzverknüpfung (1)
- Datenschutz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datenstromverarbeitung (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenverwaltung (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Deanonymisierung (1)
- Decision support (1)
- Declarative modelling (1)
- Deduplikation (1)
- Deep Kernel Learning (1)
- Deep learning (1)
- Dekubitus (1)
- Delays (1)
- Delegated Proof-of-Stake (1)
- Denial of sleep (1)
- Denkstile (1)
- Denkweise (1)
- Derecho (1)
- Design (1)
- Design-Forschung (1)
- Dezentrale Applikationen (1)
- Differential Expression Analysis (1)
- Digital Engineering (1)
- Digital Twin (1)
- Digital World (1)
- Digital education (1)
- Digitaler Zwilling (1)
- Digitization (1)
- Dimensionsreduktion (1)
- Direkte Manipulation (1)
- Disadvantaged communities (1)
- Dissertation (1)
- Distance Learning (1)
- Distance learning (1)
- Distributed Proof-of-Research (1)
- Distributed snapshot algorithm (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diverse solution enumeration (1)
- Document classification (1)
- Dokument Analyse (1)
- Domain Objects (1)
- Domänenspezifische Modellierung (1)
- Drift (1)
- Dubletten (1)
- Durchsetzbarkeit (1)
- Dynamic pricing competition (1)
- Dynamik (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- E-Wallet (1)
- E-commerce (1)
- E-health (1)
- ECDSA (1)
- EMG (1)
- Echtzeit (1)
- Echtzeit-Rendering (1)
- Education technologies (1)
- Educational Data Mining (1)
- Educational Technology (1)
- Educational data mining (1)
- Effect measurement (1)
- Einbettungen (1)
- Einbruchserkennung (1)
- Electrical products (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Endpunktsicherheit (1)
- Energy efficiency (1)
- Entdeckung (1)
- Enterprise File Synchronization and Share (1)
- Entropy (1)
- Entscheidungskorrektheit (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entstehung (1)
- Entwicklung digitaler Innovationseinheiten (1)
- Enumeration algorithm (1)
- Equity (1)
- Ereignisabonnement (1)
- Ereignisnormalisierung (1)
- Erfüllbarkeitsschwellwert (1)
- Eris (1)
- Erkennung von Metadaten (1)
- Erkennung von Quasi-Identifikatoren (1)
- Estimation-of-Distribution-Algorithmen (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- European reference networks (1)
- Evaluation (1)
- Expert knowledge (1)
- Exploration (1)
- Expressive rendering (1)
- Extensibility (1)
- Eye-tracking (1)
- Fabrication (1)
- Facility Management (1)
- Federated Byzantine Agreement (1)
- Feedback control loop (1)
- Fehlende Werte (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fertigungsunternehmen (1)
- Field study (1)
- First-hitting time (1)
- Flash (1)
- FollowMyVote (1)
- Forecasting (1)
- Foreign key (1)
- Fork (1)
- Formal modelling (1)
- Formal verification of behavior preservation (1)
- Functional dependencies (1)
- Funktionale Abhängigkeiten (1)
- G-estimation (1)
- GMDH (1)
- GPU (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- GTEx (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudeinformationsmodellierung (1)
- Gebäudemanagement (1)
- Gen-Expression (1)
- Gene Expression Data Analysis (1)
- General Earth and Planetary Sciences (1)
- General demand function (1)
- Generalized Discrimination Networks (1)
- Genomics education (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geospatial intelligence (1)
- Gerichtsbarkeit (1)
- German schools (1)
- Geschäftsprozess (1)
- Geschäftsprozess-Choreografien (1)
- Geschäftsprozessarchitekturen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- GraalVM (1)
- Graph Algorithms (1)
- Graph Theory (1)
- Graph algorithm (1)
- Graph logic (1)
- Graph logics (1)
- Graph transformation (1)
- Graph transformation (double pushout approach) (1)
- Graph transformations (1)
- Graph-Mining (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphentheorie (1)
- Graphreparatur (1)
- Graphtransformationen (1)
- Graphtransformationssysteme (1)
- Grid stability (1)
- Gridcoin (1)
- Großformat (1)
- HENSHIN (1)
- HLS (1)
- HTML5 (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Hauptspeicher Datenmanagement (1)
- Heart Valve Diseases (1)
- Herzklappenerkrankungen (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- Hitting Sets (1)
- Holant problems (1)
- Home appliances (1)
- Homomorphismen (1)
- Human Computer Interaction (1)
- Hybrid (1)
- Hyperbolic Geometry (1)
- Hyrise (1)
- Häkeln (1)
- ICT (1)
- IEEE 802.15.4 (1)
- IT Softwareentwicklung (1)
- IT project (1)
- IT systems engineering (1)
- IT-Infrastruktur (1)
- IT-Sicherheit (1)
- IT-infrastructure (1)
- IT-security (1)
- Ideation (1)
- Ideenfindung (1)
- Identity leak (1)
- Identität (1)
- Image Abstraction (1)
- Image Processing (1)
- Imbalanced medical image semantic segmentation (1)
- Immobilien 4.0 (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung (1)
- Implementierung in Organisationen (1)
- Indexauswahl (1)
- Indoor Point Clouds (1)
- Indoor environments (1)
- Indoor-Punktwolken (1)
- Industry 4.0 (1)
- Information Retrieval (1)
- Information system (1)
- Informationsextraktion (1)
- Informationsflüsse (1)
- Informationsraum (1)
- Informationsraumverletzungen (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Institutions (1)
- Integrative Gene Selection (1)
- Intent analysis (1)
- Interacting processes (1)
- Interactive Media (1)
- Interactive control (1)
- Interdisciplinary Teams (1)
- Interessengrad-Techniken (1)
- Internet (1)
- Internet of things (1)
- Interoperability (1)
- Interpretability (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Invariant checking (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- JavaScript (1)
- K-12 (1)
- KI-Labor (1)
- Kalman filtering (1)
- Kardinalitätsschätzung (1)
- Karten (1)
- Kategorien (1)
- Kausalität (1)
- Kernelization (1)
- Kette (1)
- Klassifikation (1)
- Klassifizierung (1)
- Klinische Daten (1)
- Knowledge Bases (1)
- Kognitionswissenschaft (1)
- Kollaboration (1)
- Komplexitätstheorie (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Konstruktion von Wissensbasen (1)
- Konstruktion von Wissensgraphen (1)
- Korpusexploration (1)
- Korrektheit (1)
- Kraft (1)
- Kreativität (1)
- Kryptografie (1)
- Kultur (1)
- Kunstanalyse (1)
- LC-MS (1)
- Large networks (1)
- Large scale analytics (1)
- Laserscanning (1)
- Laserschneiden (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Law (1)
- Learning Factory (1)
- Learning analytics (1)
- Learning experience (1)
- Lebendigkeit (1)
- Lecture Recording (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lernerlebnis (1)
- Level-of-detail visualization (1)
- LiDAR (1)
- Lightning Network (1)
- Linear model (1)
- Link layer security (1)
- Literature review (1)
- Live-Migration (1)
- Live-Programmierung (1)
- Lively Kernel (1)
- Liveness (1)
- Load modeling (1)
- Lock-Time-Parameter (1)
- Log data (1)
- Lossy networks (1)
- Low-processing capable devices (1)
- Lösungsraum (1)
- MOOC Remote Lab (1)
- MQTT (1)
- MS (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Manufacturing (1)
- Markov model (1)
- Maschinen (1)
- Maschinenlernen (1)
- Massive Open Online Courses (1)
- Massive open online courses (1)
- Measurement (1)
- Medical research (1)
- Medien Bias (1)
- Mediumzugriffskontrolle (1)
- Meltdown (1)
- Memory Dumping (1)
- Mensch Computer Interaktion (1)
- Mensch-Maschine Interaktion (1)
- Merkmalsauswahl (1)
- Messung (1)
- Meta-Selbstanpassung (1)
- Metacrate (1)
- Metamaterialien (1)
- Metamaterials (1)
- Micro-grid networks (1)
- Microbiome (1)
- Micropayment-Kanäle (1)
- Microservice (1)
- Microservices Security (1)
- Microsoft Azure (1)
- Mindset (1)
- Minimal hitting set (1)
- Minimum spanning tree (1)
- Missing values (1)
- Mixed Reality (1)
- Mobile Learning (1)
- Mobile Mapping (1)
- Mobile applications (1)
- Mobile devices (1)
- Mobile-Mapping (1)
- Mobiles (1)
- Model checking (1)
- Model extraction (1)
- Modelle mit mehreren Versionen (1)
- Modellreparatur (1)
- Monitoring (1)
- Monte Carlo simulation (1)
- Motion Mapping (1)
- Moving Target Defense (1)
- Multi-objective optimization (1)
- Multidisciplinary Teams (1)
- Multiview classification (1)
- Multiziel (1)
- N-of-1 trials (1)
- NASDAQ (1)
- NETCONF (1)
- NLP (1)
- NP-hardness (1)
- Nachrichten (1)
- NameID (1)
- Namecoin (1)
- Named-Entity-Erkennung (1)
- Nash equilibrium (1)
- Natural Language Processing (1)
- Natural language analysis (1)
- Navigational logics (1)
- Nephrology (1)
- Network Science (1)
- Network analysis (1)
- Network creation games (1)
- Netzoptimierung (1)
- Netzwerkprotokolle (1)
- Netzwerksicherheit (1)
- Neural Networks (1)
- Neural networks (1)
- New Public Governance (1)
- Non-photorealistic Rendering (1)
- Non-photorealistic rendering (1)
- Nutzer-Engagement (1)
- Nutzerinteraktion (1)
- Objects (1)
- Objekte (1)
- Off-Chain-Transaktionen (1)
- Offline-Enabled (1)
- Onename (1)
- Online Learning Environments (1)
- Online survey (1)
- Online-Gerichte (1)
- Online-Lernen (1)
- Online-Persönlichkeit (1)
- Ontology (1)
- Open source (1)
- OpenBazaar (1)
- Opinion mining (1)
- Optimal control (1)
- Optimierung (1)
- Oracles (1)
- Ordinances (1)
- Organisationsstudien (1)
- Orphan Block (1)
- Overlapping community detection (1)
- PRISM model checker (1)
- PTCTL (1)
- PTM (1)
- Parallel independence (1)
- Parallel processing (1)
- Parallelized algorithm (1)
- Pareto-Verteilung (1)
- Parking search (1)
- Patents (1)
- Patient (1)
- Patientenermündigung (1)
- Pattern (1)
- Pattern Recognition (1)
- Peer assessment (1)
- Peer-feedback (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Performance evaluation (1)
- Performanz (1)
- Personal genotyping (1)
- Personality Prediction (1)
- Personalized medicine (1)
- PhD thesis (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Point Clouds (1)
- Politik (1)
- Polystore (1)
- Popular matching (1)
- Posenabschätzung (1)
- Power auctioning (1)
- Power consumption characterization (1)
- Power demand (1)
- Predictive Modeling (1)
- Predictive models (1)
- Pricing (1)
- Primary biliary cholangitis (1)
- Primary key (1)
- Primary sclerosing cholangitis (1)
- Prior Knowledge (1)
- Privacy (1)
- Privatsphäre (1)
- Probabilistic timed automata (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Execution (1)
- Process architecture (1)
- Process discovery (1)
- Process landscape (1)
- Process map (1)
- Process mining (1)
- Process model (1)
- Process-related data (1)
- Profilerstellung für Daten (1)
- Programmiererlebnis (1)
- Programmierung (1)
- Programmierwerkzeuge (1)
- Programming course (1)
- Project-based learning (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Proteom (1)
- Proteomics (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozessmodelle (1)
- Prozessmodellierung (1)
- Prädiktive Modellierung (1)
- Psychiatric disorders (1)
- Psychological Emotions (1)
- Psychotherapie (1)
- Punktwolken (1)
- QUALY (1)
- Quanten-Computing (1)
- Query optimization (1)
- Query-Optimierung (1)
- REST-Interaktionen (1)
- RESTful choreographies (1)
- RESTful interactions (1)
- RL (1)
- RNAseq (1)
- Random process (1)
- Randomized clinical trials (1)
- Real Estate 4.0 (1)
- Real Walking (1)
- Real-time rendering (1)
- Recht (1)
- Reconfigurable architecture (1)
- Recurrent generative (1)
- Recursion (1)
- Recycling investments (1)
- Refactoring (1)
- Regressionstests (1)
- Rekursion (1)
- Relational model transformation (1)
- Repräsentationslernen (1)
- Requisit (1)
- Resource Allocation (1)
- Resource Management (1)
- Resource constrained smart micro-grids (1)
- Response strategies (1)
- Reverse Engineering (1)
- Ripple (1)
- Roadmap (1)
- Ruby (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SAFE (1)
- SCP (1)
- SET effects (1)
- SHA (1)
- SPV (1)
- SWIRL (1)
- Satz von Chernoff-Hoeffding (1)
- Savanne (1)
- Scalability (1)
- Schelling segregation (1)
- Schelling's segregation model (1)
- Schema discovery (1)
- Schema-Entdeckung (1)
- Schlafentzugsangriffe (1)
- School (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schule (1)
- Schwachstelle (1)
- Schwierigkeitsgrad (1)
- Scrollytelling (1)
- Secondary Education (1)
- Security analytics (1)
- Selbst-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Enrichment (1)
- Semantic Web (1)
- Semantic enrichment (1)
- Semantische Anreicherung (1)
- Sensor Analytics (1)
- Sensor networks (1)
- Sensor-Analytik (1)
- Sequenzeigenschaften (1)
- Serialisierung (1)
- Servers (1)
- Service-Oriented Architecture (1)
- Service-Oriented Systems (1)
- Service-Orientierte Systeme (1)
- Service-oriented (1)
- Serviceorientierte Architektur (SOA) (1)
- Servicification (1)
- Siamesische Neuronale Netzwerke (1)
- Sicherheitsanalyse (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Simulation study (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skriptsprachen (1)
- Slock.it (1)
- Smart Contracts (1)
- Smart Home Education (1)
- Smart cities (1)
- Social (1)
- Social Bots erkennen (1)
- Soft Fork (1)
- Software (1)
- Software engineering (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalytik (1)
- Softwarevisualisierung (1)
- Solution Space (1)
- Source Code Readability (1)
- Soziale Medien (1)
- Spatial data handling systems (1)
- Spatio-Temporal Data (1)
- Spatio-temporal data analysis (1)
- Spatio-temporal visualization (1)
- Specification (1)
- Spectre (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieltheorie (1)
- Spin system (1)
- Sprachlernen im Limes (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- StackOverflow (1)
- Standard (1)
- Standardisierung (1)
- Stapelverarbeitung (1)
- Static analysis (1)
- Statistical process mining (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stochastic differential games (1)
- Storj (1)
- Storytelling (1)
- Strategic cognition (1)
- Style transfer (1)
- Subject-oriented learning (1)
- Suchtberatung und -therapie (1)
- Supervised Learning (1)
- Supervised deep neural (1)
- Survey (1)
- System design (1)
- Systemmedizin (1)
- Systems Medicine (1)
- TCGA (1)
- Team Assessment (1)
- Team based assignment (1)
- Team-based Learning (1)
- Teamarbeit (1)
- Technology mapping (1)
- Teile und Herrsche (1)
- Telemedizin (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierungs (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Bitfury Group (1)
- The DAO (1)
- Theorie (1)
- Theory (1)
- Threat Models (1)
- Tiefes Lernen (1)
- Time series analysis (1)
- Time series data (1)
- Timed Automata (1)
- Tools (1)
- Topic modeling (1)
- Tragfähigkeit (1)
- Training (1)
- Trajectory Data Management (1)
- Trajectory visualization (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktion (1)
- Transferlernen (1)
- Transportability (1)
- Transversal hypergraph (1)
- Transversal-Hypergraph (1)
- Tree maintenance (1)
- Tripel-Graph-Grammatiken (1)
- Triple graph grammars (1)
- Two-Way-Peg (1)
- U-Förmiges Lernen (1)
- U-Shaped-Learning (1)
- Ubiquitous (1)
- Ubiquitous business process (1)
- Unbiasedness (1)
- Unified logging system (1)
- Unique column combination (1)
- Unspent Transaction Output (1)
- Unternehmensdateien synchronisieren und teilen (1)
- Unterricht mit digitalen Medien (1)
- User Experience (1)
- Utility-Funktionen (1)
- V2X (1)
- VUCA-World (1)
- Validation (1)
- Verarbeitung natürlicher Sprache (1)
- Verbundwerte (1)
- Verhaltensforschung (1)
- Verhaltensänderung (1)
- Verifikation induktiver Invarianten (1)
- Verlässlichkeit (1)
- Verteilte Systeme (1)
- Vertrauen (1)
- Verträge (1)
- Veränderungsanalyse (1)
- Video annotations (1)
- Virtual Laboratory (1)
- Virtual Machine (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtuelle Maschinen (1)
- Virtuelles Labor (1)
- Visual Analytics (1)
- Visual analytics (1)
- Visualisierungskonzept-Exploration (1)
- Vorhersage (1)
- Vorhersagemodelle (1)
- Vorhersagemodellierung (1)
- Vulnerability analysis (1)
- WALA (1)
- W[3]-Completeness (1)
- Walking (1)
- Water Science and Technology (1)
- Watson IoT (1)
- Wearable (1)
- Web-basiertes Rendering (1)
- Weighted clustering coefficient (1)
- Werkzeugbau (1)
- Wicked Problems (1)
- Wireless sensor networks (1)
- Wirtschaftsinformatik Projekte (1)
- Wissensbasis (1)
- Wissensgraph (1)
- Wissensgraphen (1)
- Wissensgraphen Verfeinerung (1)
- Wissensmanagement (1)
- Wissenstransfer (1)
- Wissensvalidierung (1)
- Wolke (1)
- Word embedding (1)
- Wüstenbildung (1)
- XIC extraction (1)
- YANG (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zooko's triangle (1)
- Zufallsgraphen (1)
- Zugriffskontrolle (1)
- Zustandsverwaltung (1)
- Zählen (1)
- accelerator architectures (1)
- acceptability (1)
- accuracy (1)
- action problems (1)
- active layers (1)
- active touch (1)
- acyclic preferences (1)
- addiction care (1)
- address normalization (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- adversarial network (1)
- agil (1)
- agile (1)
- algorithms (1)
- allocation problem (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analysis (1)
- application conditions (1)
- approximate counting (1)
- apt (1)
- architectural adaptation (1)
- architecture-based software adaptation (1)
- architectures (1)
- architekturbasierte Softwareanpassung (1)
- archive analysis (1)
- art analysis (1)
- artistic image stylization (1)
- aspect-oriented programming (1)
- asset management (1)
- atomic swap (1)
- attribute assurance (1)
- auction (1)
- automation (1)
- autonomous (1)
- availability (1)
- average-case analysis (1)
- bachelor project (1)
- batch activity (1)
- batch processing (1)
- behavior psychotherapy (1)
- behavioral sciences (1)
- behaviourally correct learning (1)
- benchmarking (1)
- benutzergenerierte Inhalte (1)
- bestärkendes Lernen (1)
- bidirectional payment channels (1)
- bildbasierte Repräsentation (1)
- bildbasiertes Rendering (1)
- bioinformatics (1)
- bioinformatics tool (1)
- biologisches Vorwissen (1)
- biomarker detection (1)
- bitcoins (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- Bluemix platform (1)
- bounded backward model checking (1)
- brand personality (1)
- breast-cancer (1)
- business process (1)
- business process architectures (1)
- business process choreographies (1)
- business process managament (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- chain (1)
- change detection (1)
- classification zone (1)
- classification zone violations (1)
- clinical exome (1)
- cliquy tree (1)
- cloud monitoring (1)
- code generation (1)
- cognitive load (1)
- cognitive patterns (1)
- cognitive science (1)
- collaboration (1)
- collaborative learning (1)
- collective intelligence (1)
- columnar databases (1)
- combinational logic (1)
- communication (1)
- comparison (1)
- competence (1)
- complex event processing (1)
- complex networks (1)
- complexity (1)
- complexity theory (1)
- compliance (1)
- compositional analysis (1)
- computational design (1)
- computational hardness (1)
- computational mass spectrometry (1)
- computational models (1)
- computational photography (1)
- computer graphics (1)
- computer science (1)
- computer science education (1)
- computer-aided design (1)
- computergestützte Gestaltung (1)
- computervermittelte Therapie (1)
- computing (1)
- confirmation period (1)
- confluence (1)
- conformance checking (1)
- connectivity (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constrained optimization (1)
- content gamification (1)
- contest period (1)
- context (1)
- context groups (1)
- contextual-variability modeling (1)
- continuous integration (1)
- contracts (1)
- convolutional neural networks (1)
- coping ability (1)
- corpus exploration (1)
- corticospinal tract (1)
- cost (1)
- cost-effectiveness (1)
- cost-utility analysis (1)
- courts of justice (1)
- crochet (1)
- cross-chain (1)
- cross-platform (1)
- cultural heritage (1)
- culture (1)
- cumulative culture (1)
- curex (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- data (1)
- data analytics (1)
- data cleaning (1)
- data dependencies (1)
- data management (1)
- data matching (1)
- data mining (1)
- data models (1)
- data privacy (1)
- data processing (1)
- data science (1)
- data set (1)
- data synthesis (1)
- data transfer (1)
- data visualisation (1)
- data visualization (1)
- data wrangling (1)
- data-driven (1)
- database optimization (1)
- database replication (1)
- database tuning (1)
- datengetrieben (1)
- de-anonymisation (1)
- debugging (1)
- decentral identities (1)
- decentralized applications (1)
- decentralized autonomous organization (1)
- decision making (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision soundness (1)
- decision-aware process models (1)
- decoupling cells (1)
- decubitus (1)
- deep Gaussian processes (1)
- deep kernel learning (1)
- degree-of-interest techniques (1)
- demand learning (1)
- demografische Informationen (1)
- demographic information (1)
- denial of sleep (1)
- dependability (1)
- dependency (1)
- desertification (1)
- design (1)
- design behaviour (1)
- design cognition (1)
- design research (1)
- developing countries (1)
- development artifacts (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- dialect (1)
- die arabische Welt (1)
- difficulty (1)
- difficulty target (1)
- diffusion MRI; (1)
- digital fabrication (1)
- digital health app (1)
- digital innovation (1)
- digital innovation units (1)
- digital learning (1)
- digital picture archive (1)
- digital product innovation (1)
- digital therapy (1)
- digital unterstützter Unterricht (1)
- digital whiteboard (1)
- digital world (1)
- digitale Fabrikation (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitale Innovation (1)
- digitale Innovationseinheit (1)
- digitale Produktinnovation (1)
- digitales Bildarchiv (1)
- digitales Lernen (1)
- digitales Whiteboard (1)
- digitalisation (1)
- digitalización (1)
- dimensionality reduction (1)
- direct manipulation (1)
- disability-adjusted life years (1)
- discovery (1)
- discrete-event model (1)
- diseño (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed performance monitoring (1)
- divide-and-conquer (1)
- doctor-patient relationship (1)
- document analysis (1)
- domain-specific knowledge graphs (1)
- domain-specific modeling (1)
- dominant matching (1)
- domänenspezifisches Wissensgraphen (1)
- doppelter Hashwert (1)
- double hashing (1)
- drahtloses Netzwerk (1)
- drift theory (1)
- dry fasting (1)
- dsps (1)
- dynamic AOP (1)
- dynamic causal modeling (1)
- dynamic service adaptation (1)
- dynamic systems (1)
- dynamics (1)
- dynamische Systeme (1)
- e-Commerce (1)
- e-commerce (1)
- eLearning (1)
- eating behaviour (1)
- economic (1)
- edge-weighted networks (1)
- education (1)
- electrical muscle stimulation (1)
- electrodermal activity (1)
- elektrische Muskelstimulation (1)
- embeddings (1)
- emergence (1)
- emotion classification (1)
- emotion measurement (1)
- emotional cognitive dynamics (1)
- emotional kognitive Dynamiken (1)
- endpoint security (1)
- enforceability (1)
- entity-component-system (1)
- entscheidungsbewusste Prozessmodelle (1)
- enumeration algorithms (1)
- erzeugende gegnerische Netzwerke (1)
- escalation of commitment (1)
- eskalierendes Commitment (1)
- espbench (1)
- estimation-of-distribution algorithms (1)
- estudios de organización (1)
- event normalization (1)
- event subscription (1)
- evolution of digital innovation units (1)
- evolutionary computation (1)
- exact algorithms (1)
- expectation maximisation algorithm (1)
- experience (1)
- exploration (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- extend (1)
- extension problems (1)
- eye-tracking (1)
- facial mimicry (1)
- fault tolerance (1)
- feature selection (1)
- federated voting (1)
- field study (1)
- file structure (1)
- final report (1)
- font engineering (1)
- font rendering (1)
- force (1)
- forest number (1)
- formal testing (1)
- fortschrittliche Angriffe (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- game theory (1)
- gameful learning (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression (1)
- gene selection (1)
- generalized discrimination networks (1)
- generative adversarial networks (1)
- genomics (1)
- geographical distribution (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- getypte Attributierte Graphen (1)
- gigantische Netzwerke (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph (1)
- graph analysis (1)
- graph conditions (1)
- graph constraints (1)
- graph inference (1)
- graph mining (1)
- graph neural networks (1)
- graph repair (1)
- graph theory (1)
- graph transformations (1)
- graph-transformations (1)
- graphical query language (1)
- graphische neuronale Netze (1)
- grobe Protokolle (1)
- group-based behavior adaptation (1)
- haptic feedback (1)
- haptisches Feedback (1)
- hardware (1)
- hashrate (1)
- health app (1)
- health behaviour (1)
- healthcare (1)
- hepatitis (1)
- heuristics (1)
- hierarchical data (1)
- hierarchische Daten (1)
- higher education (1)
- history-aware runtime models (1)
- hitting sets (1)
- homomorphisms (1)
- human computer interaction (1)
- human-centered (1)
- human-centered design (1)
- human-computer interaction (1)
- human-robot interaction (1)
- human-scale (1)
- hybrid (1)
- hybrid systems (1)
- hyperbolic geometry (1)
- hyperbolic random graphs (1)
- hyperbolische Geometrie (1)
- hyperbolische Zufallsgraphen (1)
- identity (1)
- image abstraction (1)
- image-based rendering (1)
- image-based representation (1)
- imbalanced learning (1)
- immediacy (1)
- implementation (1)
- implied methods (1)
- in-memory (1)
- in-memory data management (1)
- inclusion dependency (1)
- incremental graph query evaluation (1)
- independency tree (1)
- inductive invariant checking (1)
- industry (1)
- industry 4.0 (1)
- information extraction (1)
- information flows (1)
- information quality (1)
- information systems projects (1)
- inkrementelle Ausführung von Graphanfragen (1)
- innovación (1)
- innovation laboratories (1)
- insula (1)
- integer linear programming (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interactive visualization (1)
- interaktive Medien (1)
- interaktive Visualisierung (1)
- interdisziplinäre Teams (1)
- intermittent food restriction (1)
- internet of things (1)
- internet topology (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intervention (1)
- intransitivity (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invention (1)
- invention mechanism (1)
- inventory management (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant (1)
- k-inductive invariant checking (1)
- k-induktive Invariante (1)
- k-induktive Invariantenprüfung (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- key establishment (1)
- key management (1)
- key revocation (1)
- knowledge base (1)
- knowledge base construction (1)
- knowledge graph (1)
- knowledge graph construction (1)
- knowledge graph refinement (1)
- knowledge graphs (1)
- knowledge transfer (1)
- knowledge validation (1)
- kollaboratives Arbeiten (1)
- kollaboratives Lernen (1)
- komplexe Ereignisverarbeitung (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliche Integration (1)
- kulturelles Erbe (1)
- label-free quantification (1)
- language learning in the limit (1)
- large scale mechanism (1)
- large-scale mechanism (1)
- laser cutting (1)
- laserscanning (1)
- law (1)
- learner engagement (1)
- learning (1)
- learning factories (1)
- learning platform (1)
- learning styles (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- level-replacement systems (1)
- liability (1)
- life-centered (1)
- linear programming (1)
- link layer security (1)
- literature review (1)
- live migration (1)
- live programming (1)
- lively groups (1)
- load balancing (1)
- load-bearing (1)
- logic rules (1)
- logische Regeln (1)
- low back pain (1)
- low-duty-cycling (1)
- mHealth (1)
- machine (1)
- machines (1)
- male infertility (1)
- management (1)
- manufacturing (1)
- manufacturing companies (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- massive networks (1)
- massive open online courses (1)
- matching dependencies (1)
- measurement (1)
- media (1)
- media bias (1)
- medical image analysis (1)
- medium access control (1)
- medizinische Bildanalyse (1)
- medizinische Dokumentation (1)
- mehrsprachige Ausführungsumgebungen (1)
- mehrstufiger Angriff (1)
- menschenzentriert (1)
- menschenzentriertes Design (1)
- mental models (1)
- merged mining (1)
- merkle root (1)
- meta self-adaptation (1)
- metaanalysis (1)
- metabolomics (1)
- metacognition (1)
- metacrate (1)
- metadata detection (1)
- metamaterials (1)
- metanome (1)
- methods (1)
- metric temporal graph logic (1)
- metric temporal logic (1)
- metric termporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- micropayment (1)
- micropayment channels (1)
- microstructures (1)
- mindfulness (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mixed data (1)
- mixed methods (1)
- mobile app (1)
- mobile applications (1)
- mobile health (1)
- model (1)
- model repair (1)
- model-driven software engineering (1)
- modellgesteuerte Entwicklung (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modification stoichiometry (1)
- motion and force (1)
- mpmUCC (1)
- multi-objective (1)
- multi-step attack (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multimodal wireless sensor network (1)
- mutations (1)
- named entity mining (1)
- narrative (1)
- natürliche Sprachverarbeitung (1)
- network optimization (1)
- network protocols (1)
- network security (1)
- networks (1)
- news (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nicht-uniforme Verteilung (1)
- non-cooperative games (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- non-uniform distribution (1)
- nonce (1)
- note-taking (1)
- novelty detection (1)
- null results (1)
- nutzergenerierte Inhalte (1)
- object-oriented languages (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- oligopoly competition (1)
- omics (1)
- oneM2M (1)
- online courts (1)
- online personality (1)
- open innovation (1)
- operating systems (1)
- optical character recognition (1)
- optimization (1)
- order dependencies (1)
- organization studies (1)
- orthopedic (1)
- oxytocin (1)
- packrat parsing (1)
- pain (1)
- parallel and sequential independence (1)
- parallel processing (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- parietal operculum (1)
- parsing expression grammars (1)
- partial order resolution (1)
- partial replication (1)
- partielle Replikation (1)
- partition functions (1)
- patent (1)
- patent analysis (1)
- patient empowerment (1)
- peer evaluation (1)
- peer feedback (1)
- peer review (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance (1)
- performance models of virtual machines (1)
- persistent memory (1)
- persistenter Speicher (1)
- personality prediction (1)
- phosphoproteomics (1)
- photoplethysmography (1)
- physiological signals (1)
- pmem (1)
- point-based rendering (1)
- politics (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- polynomials (1)
- polystore (1)
- popular matching (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- post-translational modification (1)
- power law (1)
- predicated generic functions (1)
- predicted spectra (1)
- prediction (1)
- prediction models (1)
- primary healthcare (1)
- prior knowledge (1)
- probabilistic machine learning (1)
- probabilistic routing (1)
- probabilistic sensitivity analysis (1)
- probabilistisches maschinelles Lernen (1)
- process execution (1)
- process modeling (1)
- process models (1)
- production networks (1)
- programmierbare Materie (1)
- programming experience (1)
- programming tools (1)
- programs (1)
- progressive rendering (1)
- progressives Rendering (1)
- project based learning (1)
- props (1)
- proteomics graph networks (1)
- prototyping (1)
- pseudo-Boolean optimization (1)
- pseudoboolesche Optimierung (1)
- psychopy experiments (1)
- psychotherapy (1)
- qualitative model (1)
- qualitatives Modell (1)
- quality-adjusted life years (1)
- quantification (1)
- quantum computing (1)
- quasi-identifier discovery (1)
- quorum slices (1)
- radiation hardening (1)
- railways (1)
- rainbow connection (1)
- random graphs (1)
- random k-SAT (1)
- reactive object queries (1)
- rechnerunterstütztes Konstruieren (1)
- recommendations (1)
- reconfigurable systems (1)
- record linkage (1)
- regression testing (1)
- rekeying (1)
- relational structures (1)
- relationale Strukturen (1)
- reliability (1)
- religiously motivated (1)
- remote sensing (1)
- reported out-come measures (1)
- representation learning (1)
- residential segregation (1)
- reverse engineering (1)
- reward (1)
- risk (1)
- robot voice (1)
- rootstock (1)
- rough logs (1)
- run time analysis (1)
- runtime models (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- satisfiability threshold (1)
- savanna (1)
- scalability (1)
- scalability of blockchain (1)
- scalable (1)
- scarce tokens (1)
- schwach überwachtes maschinelles Lernen (1)
- screening tools (1)
- scripting languages (1)
- scrollytelling (1)
- secure data flow diagrams (DFDsec) (1)
- secure multi-execution (1)
- selbst-souveräne Identitäten (1)
- selbstanpassende Systeme (1)
- selbstbestimmte Identitäten (1)
- selbstheilende Systeme (1)
- selbstüberwachtes Lernen (1)
- self-adaptive software (1)
- self-adaptive systems (1)
- self-driving (1)
- self-efficacy (1)
- self-government (1)
- self-healing (1)
- semantic classification (1)
- semantic representations (1)
- semantische Klassifizierung (1)
- semantische Repräsentationen (1)
- sensors (1)
- sequence properties (1)
- serialization (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architecture (SOA) (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- shared leadership (1)
- siamese neural networks (1)
- sichere Datenflussdiagramme (DFDsec) (1)
- sidechain (1)
- situational awareness (1)
- skalierbar (1)
- small talk (1)
- smalltalk (1)
- soccer analytics (1)
- social bot detection (1)
- social interaction (1)
- social media analysis (1)
- social modulation (1)
- social networking (1)
- social robot (1)
- software analytics (1)
- software evolution (1)
- software visualization (1)
- software/hardware co-design (1)
- somatosensation (1)
- soundness (1)
- sozialen Medien (1)
- soziales Netzwerk (1)
- spaltenorientierte Datenbanken (1)
- spatial aggregation (1)
- spatio-temporal data management (1)
- specification of timed graph transformations (1)
- spectrum clustering (1)
- squeak (1)
- standard (1)
- standardization (1)
- stark verhaltenskorrekt sperrend (1)
- state management (1)
- state space modelling (1)
- static source-code analysis (1)
- statische Quellcodeanalyse (1)
- statistics (1)
- stochastic process (1)
- storytelling (1)
- stream processing (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- super stable matching (1)
- support vector machine (1)
- susceptibility (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- tabellarische Dateien (1)
- tabular data (1)
- task realization strategies (1)
- taxonomy (1)
- teamwork (1)
- technology (1)
- telemedicine (1)
- telework (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- test case prioritization (1)
- test results (1)
- text classification (1)
- the Arab world (1)
- theory (1)
- threat analysis (1)
- threat detection (1)
- threat model (1)
- tiefe Gauß-Prozesse (1)
- time horizon (1)
- timed automata (1)
- timed graph (1)
- tissue-awareness (1)
- tool building (1)
- tools (1)
- toxic comment classification (1)
- tractography (1)
- trajectories (1)
- transaction (1)
- transcriptomics (1)
- transfer learning (1)
- transformation (1)
- transversal hypergraph (1)
- tribunales de justicia (1)
- tribunales en línea (1)
- triple graph grammars (1)
- typed attributed symbolic graphs (1)
- typisierte attributierte Graphen (1)
- uBPMN (1)
- ubiquitous business process model and notation (uBPMN) (1)
- ubiquitous business process modeling (1)
- ubiquitous computing (ubicomp) (1)
- ubiquitous decision-aware business process (1)
- ubiquitous decisions (1)
- unbalancierter Datensatz (1)
- uncertainty (1)
- unique column combination (1)
- univariat (1)
- univariate (1)
- unsupervised (1)
- unsupervised learning (1)
- upper bound (1)
- usability (1)
- user engagement (1)
- user research framework (1)
- user-centered design (1)
- utility functions (1)
- variants, (1)
- variational inference (1)
- variationelle Inferenz (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- verstärkendes Lernen (1)
- verteilte Berechnung (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- veteran (1)
- virtual (1)
- virtual machines (1)
- virtual reality (1)
- virtuell (1)
- virtuelle Maschinen (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vocational training (1)
- vulnerability (1)
- wake-up radio (1)
- weak supervision (1)
- weakly (1)
- wearable EEG (muse and neurosity crown) (1)
- wearables (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- well-being (1)
- wireless networks (1)
- zero-day (1)
- zufälliges k-SAT (1)
- Überwachtes Lernen (1)
- überprüfbare Nachweise (1)
- social network analysis (1)
- team creativity (1)
- intrapreneurship (1)
Design thinking is a well-established practical and educational approach to fostering high-level creativity and innovation, which has been refined since the 1950s with the participation of experts like Joy Paul Guilford and Abraham Maslow. Through real-world projects, trainees learn to optimize their creative outcomes by developing and practicing creative cognition and metacognition. This paper provides a holistic perspective on creativity, enabling the formulation of a comprehensive theoretical framework of creative metacognition. It focuses on the design thinking approach to creativity and explores the role of metacognition in four areas of creativity expertise: Products, Processes, People, and Places. The analysis includes task-outcome relationships (product metacognition), the monitoring of strategy effectiveness (process metacognition), an understanding of individual or group strengths and weaknesses (people metacognition), and an examination of the mutual impact between environments and creativity (place metacognition). It also reviews measures taken in design thinking education, including a distribution of cognition and metacognition, to support students in their development of creative mastery. On these grounds, we propose extended methods for measuring creative metacognition with the goal of enhancing comprehensive assessments of the phenomenon. Proposed methodological advancements include accuracy sub-scales, experimental tasks where examinees explore problem and solution spaces, combinations of naturalistic observations with capability testing, as well as physiological assessments as indirect measures of creative metacognition.
CrashNet
(2021)
Destructive car crash tests are an elaborate, time-consuming, and expensive necessity of the automotive development process. Today, finite element method (FEM) simulations are used to reduce costs by simulating car crashes computationally. We propose CrashNet, an encoder-decoder deep neural network architecture that reduces costs further and models specific outcomes of car crashes very accurately. We achieve this by formulating car crash events as time series prediction enriched with a set of scalar features. Traditional sequence-to-sequence models are usually composed of convolutional neural network (CNN) and CNN transpose layers. We propose to concatenate those with an MLP capable of learning how to inject the given scalars into the output time series. In addition, we replace the CNN transpose with 2D CNN transpose layers in order to force the model to process the hidden state of the set of scalars as one time series. The proposed CrashNet model can be trained efficiently and is able to process scalars and time series as input in order to infer the results of crash tests. CrashNet produces results faster and at a lower cost compared to destructive tests and FEM simulations. Moreover, it represents a novel approach in the car safety management domain.
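The scalar-injection idea can be sketched with plain NumPy; the dimensions, kernel size, and two-layer MLP below are illustrative assumptions, not the actual CrashNet configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU, used to embed the scalar features.
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

# Hypothetical dimensions: a crash described by 3 scalar features
# (e.g. impact speed, mass, angle) and a 64-step sensor time series.
T, C, S, H = 64, 1, 3, 16

series = rng.normal(size=(T, C))      # input time series
scalars = rng.normal(size=(S,))       # scalar crash parameters

# Encoder: a 1D convolution (kernel 4, stride 2) over the time axis.
kernel = rng.normal(size=(4, C, H)) * 0.1
encoded = np.stack([
    np.tensordot(series[t:t + 4], kernel, axes=([0, 1], [0, 1]))
    for t in range(0, T - 4 + 1, 2)
])                                     # shape (31, H)

# Scalar branch: an MLP maps the scalars to an H-dimensional embedding,
# which is broadcast along time and concatenated with the encoded series.
w1, b1 = rng.normal(size=(S, H)) * 0.1, np.zeros(H)
w2, b2 = rng.normal(size=(H, H)) * 0.1, np.zeros(H)
scalar_emb = mlp(scalars, w1, b1, w2, b2)          # shape (H,)
injected = np.concatenate(
    [encoded, np.tile(scalar_emb, (encoded.shape[0], 1))], axis=1
)                                      # shape (31, 2H)

print(injected.shape)  # a decoder would upsample this back toward (T, C)
```

The decoder (transposed-convolution) half is omitted; the sketch only shows how a learned scalar embedding can be fed into every time step of the hidden sequence.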
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
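The division of labor in such a hybrid design — a volatile hash index over an append-only storage segment — can be sketched as a toy KVS; the file-backed log merely stands in for PMem, and all names below are hypothetical:

```python
import os
import struct
import tempfile

class MiniKVS:
    """Toy hybrid KVS: a volatile dict plays the DRAM hash index,
    an append-only file stands in for the PMem storage segment."""

    def __init__(self, path):
        self.log = open(path, "a+b")   # sequential writes, as on PMem
        self.index = {}                # DRAM: key -> (offset, length)

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        # Length-prefixed record; real PMem code would flush cache
        # lines and order writes instead of calling flush().
        self.log.write(struct.pack("<I", len(value)) + value)
        self.log.flush()
        self.index[key] = (offset, len(value))

    def get(self, key: bytes):
        offset, length = self.index[key]
        self.log.seek(offset + 4)
        return self.log.read(length)

path = os.path.join(tempfile.mkdtemp(), "kv.log")
kv = MiniKVS(path)
kv.put(b"k1", b"hello")
kv.put(b"k1", b"hello, again")   # updates append; the index moves on
print(kv.get(b"k1"))
```

Recovery after a crash would rebuild the volatile index by scanning the persistent log, which is why only the storage layout, not the index, must be persistent.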
RHEEMix in the data jungle
(2020)
Data analytics are moving beyond the limits of a single platform. In this paper, we present the cost-based optimizer of Rheem, an open-source cross-platform system that copes with these new requirements. The optimizer allocates the subtasks of data analytic tasks to the most suitable platforms. Our main contributions are: (i) a mechanism based on graph transformations to explore alternative execution strategies; (ii) a novel graph-based approach to determine efficient data movement plans among subtasks and platforms; and (iii) an efficient plan enumeration algorithm, based on a novel enumeration algebra. We extensively evaluate our optimizer under diverse real tasks. We show that our optimizer can perform tasks more than one order of magnitude faster when using multiple platforms than when using a single platform.
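The core trade-off the optimizer navigates — per-platform execution costs versus data-movement costs between subtasks — can be sketched as an exhaustive plan enumeration over a toy pipeline; the platforms, costs, and task names are invented for illustration and are far simpler than Rheem's algebra-based enumeration:

```python
from itertools import product

# Hypothetical per-subtask execution costs on two platforms, plus a fixed
# cost for moving intermediate data between different platforms.
exec_cost = {
    "parse":  {"spark": 9, "java": 3},
    "join":   {"spark": 4, "java": 12},
    "reduce": {"spark": 2, "java": 8},
}
MOVE = 5
tasks = ["parse", "join", "reduce"]
platforms = ["spark", "java"]

def plan_cost(plan):
    # Total cost: execution on the chosen platform per subtask,
    # plus one data-movement charge per platform switch.
    cost = sum(exec_cost[t][p] for t, p in zip(tasks, plan))
    cost += sum(MOVE for a, b in zip(plan, plan[1:]) if a != b)
    return cost

best = min(product(platforms, repeat=len(tasks)), key=plan_cost)
print(best, plan_cost(best))
```

Even in this tiny example the cheapest plan mixes platforms, paying one movement cost to exploit the cheaper platform for each subtask — the situation a cross-platform optimizer is built to detect.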
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
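As background, the basic layout computation underlying all treemap variants can be sketched with the classic slice-and-dice algorithm (a standard technique, not one specific to this thesis); the module hierarchy and weights are hypothetical:

```python
def slice_and_dice(node, x, y, w, h, horizontal=True):
    """Recursively split the rectangle (x, y, w, h) among a node's
    children proportionally to their weights, alternating the axis."""
    if "children" not in node:
        return [(node["name"], x, y, w, h)]
    total = sum(c["weight"] for c in node["children"])
    rects, pos = [], 0.0
    for child in node["children"]:
        frac = child["weight"] / total
        if horizontal:
            rects += slice_and_dice(child, x + pos * w, y, frac * w, h, False)
        else:
            rects += slice_and_dice(child, x, y + pos * h, w, frac * h, True)
        pos += frac
    return rects

# Hypothetical module hierarchy weighted e.g. by lines of code;
# inner-node weights size the node against its siblings only.
tree = {"name": "src", "children": [
    {"name": "core", "weight": 6},
    {"name": "ui", "weight": 2, "children": [
        {"name": "widgets", "weight": 1},
        {"name": "themes", "weight": 1}]},
]}
layout = slice_and_dice(tree, 0, 0, 1, 1)
print(layout)
```

A 3D-embedded treemap starts from exactly such a 2D layout and then maps further attributes to the third dimension (e.g. block height) and to the visual variables discussed below.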
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from both discovery and innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An “explosion of creativity” has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This cultural development emerges essentially through the reuse of previous inventions, their revision, amendment and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation and licensing methods. Finally, three case studies of great inventors are considered: Edison, Marconi, and Montessori, alongside a discussion of human invention processes as collaborative endeavors.
multiFLEX-LF
In liquid-chromatography-tandem-mass-spectrometry-based proteomics, information about the presence and stoichiometry of protein modifications is not readily available. To overcome this problem, we developed multiFLEX-LF, a computational tool that builds upon FLEXIQuant, which detects modified peptide precursors and quantifies their modification extent by monitoring the differences between observed and expected intensities of the unmodified precursors. multiFLEX-LF relies on robust linear regression to calculate the modification extent of a given precursor relative to a within-study reference. multiFLEX-LF can analyze entire label-free discovery proteomics data sets in a precursor-centric manner without preselecting a protein of interest. To analyze modification dynamics and coregulated modifications, we hierarchically clustered the precursors of all proteins based on their computed relative modification scores. We applied multiFLEX-LF to a data-independent-acquisition-based data set acquired using the anaphase-promoting complex/cyclosome (APC/C) isolated at various time points during mitosis. The clustering of the precursors allows for identifying varying modification dynamics and ordering the modification events. Overall, multiFLEX-LF enables the fast identification of potentially differentially modified peptide precursors and the quantification of their differential modification extent in large data sets using a personal computer. Additionally, multiFLEX-LF can drive the large-scale investigation of the modification dynamics of peptide precursors in time-series and case-control studies. multiFLEX-LF is available at https://gitlab.com/SteenOmicsLab/multiflex-lf.
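The robust-regression idea behind such a modification score can be sketched as follows: estimate a robust sample-to-reference slope, then score each precursor by observed over expected intensity. The Theil–Sen-style median-of-ratios estimator and the intensity values below are illustrative assumptions, not multiFLEX-LF's exact procedure:

```python
from statistics import median

def robust_slope_origin(ref, obs):
    """Robust slope through the origin: the median of per-precursor
    ratios. Outliers (modified precursors with depleted unmodified
    intensity) barely influence this estimate, mimicking the effect
    of robust linear regression."""
    return median(o / r for r, o in zip(ref, obs))

def modification_scores(ref, obs):
    slope = robust_slope_origin(ref, obs)
    expected = [slope * r for r in ref]
    # Relative modification score: observed / expected; values well
    # below 1 flag precursors whose unmodified form is depleted,
    # i.e. precursors that are likely modified in this sample.
    return [o / e for o, e in zip(obs, expected)]

# Hypothetical intensities: reference run vs. a later time point where
# the third precursor lost ~60% of its unmodified signal to modification.
ref = [100.0, 200.0, 150.0, 80.0, 120.0]
obs = [90.0, 180.0, 54.0, 72.0, 108.0]
print(modification_scores(ref, obs))
```

All unmodified precursors score near 1 (the global intensity shift is absorbed by the slope), while the depleted precursor scores well below 1 and would be flagged for inspection.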
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization and intuitive filtering of the latest viral sequences are essential for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and a consensus sequence, and finally presents the results in an interactive app, making access and reporting simple, flexible and fast.
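The consensus-and-variant step of such a pipeline can be illustrated on a toy alignment; the per-column majority call below is a deliberate simplification of what a real surveillance pipeline does:

```python
from collections import Counter

def consensus_and_variants(alignment):
    """Per-column consensus base and variant frequencies for a multiple
    sequence alignment given as equal-length strings (a toy stand-in;
    real inputs would be aligned Spike sequences)."""
    consensus, variants = [], []
    for column in zip(*alignment):
        counts = Counter(column)
        base, _ = counts.most_common(1)[0]
        consensus.append(base)
        n = len(column)
        # Frequencies of all non-consensus bases at this position.
        variants.append({b: c / n for b, c in counts.items() if b != base})
    return "".join(consensus), variants

msa = ["ATGCT", "ATGAT", "ATGCT", "TTGCT"]
cons, var = consensus_and_variants(msa)
print(cons)
print(var[3])
```

Position 4 carries a C→A variant at 25% frequency; scaled up, exactly such per-position frequency tables drive the interactive reports.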
Omics and male infertility
(2022)
Male infertility is a multifaceted disorder affecting approximately 50% of male partners in infertile couples.
Over the years, male infertility has been diagnosed mainly through semen analysis, hormone evaluations, medical records and physical examinations, which are fundamental but insufficient, because 30% of male infertility cases remain idiopathic. This diagnostic gap needs to be addressed with more sophisticated, result-driven technologies and techniques.
Genetic alterations have been linked with male infertility, thereby unveiling the practicality of investigating this disorder from the "omics" perspective.
Omics aims at analyzing the structure and functions of the whole set of constituents of a given biological system at different levels, including the molecular gene level (genomics), transcript level (transcriptomics), protein level (proteomics) and metabolite level (metabolomics). In the current study, an overview of the four branches of omics and their roles in male infertility is briefly discussed; the potential usefulness of assessing transcriptomic data to understand this pathology is also elucidated.
After assessing the publicly available transcriptomic datasets on male infertility, a total of 1385 datasets were retrieved, of which 10 met the inclusion criteria and were used for further analysis.
These datasets were classified into groups according to the disease or cause of male infertility.
The groups include non-obstructive azoospermia (NOA), obstructive azoospermia (OA), non-obstructive and obstructive azoospermia (NOA and OA), spermatogenic dysfunction, sperm dysfunction, and Y chromosome microdeletion.
Findings revealed that 8 genes (LDHC, PDHA2, TNP1, TNP2, ODF1, ODF2, SPINK2, PCDHB3) were commonly differentially expressed between all disease groups.
Likewise, 56 genes were common between NOA versus NOA and OA (ADAD1, BANF2, BCL2L14, C12orf50, C20orf173, C22orf23, C6orf99, C9orf131, C9orf24, CABS1, CAPZA3, CCDC187, CCDC54, CDKN3, CEP170, CFAP206, CRISP2, CT83, CXorf65, FAM209A, FAM71F1, FAM81B, GALNTL5, GTSF1, H1FNT, HEMGN, HMGB4, KIF2B, LDHC, LOC441601, LYZL2, ODF1, ODF2, PCDHB3, PDHA2, PGK2, PIH1D2, PLCZ1, PROCA1, RIMBP3, ROPN1L, SHCBP1L, SMCP, SPATA16, SPATA19, SPINK2, TEX33, TKTL2, TMCO2, TMCO5A, TNP1, TNP2, TSPAN16, TSSK1B, TTLL2, UBQLN3).
These genes, particularly the above-mentioned 8 genes, are involved in diverse biological processes such as germ cell development, spermatid development, spermatid differentiation, regulation of proteolysis, spermatogenesis and metabolic processes.
Owing to the stage-specific expression of these genes, any mal-expression can ultimately lead to male infertility.
Therefore, currently available data on all branches of omics relating to male fertility can be used to identify biomarkers for diagnosing male infertility, which can potentially help in unravelling some idiopathic cases.
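The cross-group comparison reported above boils down to intersecting per-group sets of differentially expressed genes; the subsets below are illustrative excerpts of the gene lists, not the full study data:

```python
# Illustrative excerpts of the per-group lists of differentially
# expressed genes (the full study lists are much longer)
groups = {
    "NOA": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "CRISP2"},
    "OA": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "PGK2"},
    "sperm_dysfunction": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "TSSK1B"},
}

# Genes differentially expressed in every disease group
shared = set.intersection(*groups.values())
print(sorted(shared))
```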
Peer assessment in MOOCs
(2021)
We report on a systematic review of the landscape of peer assessment in massive open online courses (MOOCs), covering papers from 2014 to 2020 in 20 leading education technology publication venues across four databases of education technology-related papers. The review addresses three research issues: the evolution of peer assessment in MOOCs from 2014 to 2020, the methods used in MOOCs to assess peers, and the challenges of and future directions for MOOC peer assessment. We provide summary statistics and a review of methods across the corpus and highlight three directions for improving the use of peer assessment in MOOCs: the need to focus on scaling learning through peer evaluations, the need to scale and optimize team submissions in team peer assessments, and the need to embed a social process for peer assessment.
A path in an edge-colored graph is rainbow if no two edges of it are colored the same, and the graph is rainbow-connected if there is a rainbow path between each pair of its vertices. The minimum number of colors needed to rainbow-connect a graph G is the rainbow connection number of G, denoted by rc(G). A simple way to rainbow-connect a graph G is to color the edges of a spanning tree with distinct colors and then re-use any of these colors to color the remaining edges of G. This proves that rc(G) <= |V(G)| - 1. We ask whether there is a stronger connection between tree-like structures and rainbow coloring than is implied by this trivial argument. For instance, is it possible to find an upper bound of t(G) - 1 for rc(G), where t(G) is the number of vertices in the largest induced tree of G? The answer turns out to be negative, as there are counter-examples showing that even c·t(G) is not an upper bound on rc(G) for any given constant c. In this work we show that if we instead consider the forest number f(G), the number of vertices in a maximum induced forest of G, then surprisingly we do get an upper bound. More specifically, we prove that rc(G) <= f(G) + 2. Our result indicates a stronger connection between rainbow connection and tree-like structures than was suggested by the simple spanning-tree-based upper bound.
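The spanning-tree argument behind rc(G) <= |V(G)| - 1 can be checked programmatically on a small example (illustrative code, not from the paper):

```python
def rainbow_connected(n, edges, color):
    # A coloring rainbow-connects the graph iff every vertex pair is
    # joined by a path whose edge colors are pairwise distinct.
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def rainbow_path(u, target, used, visited):
        if u == target:
            return True
        for w in adj[u]:
            c = color[frozenset((u, w))]
            if w not in visited and c not in used:
                if rainbow_path(w, target, used | {c}, visited | {w}):
                    return True
        return False

    return all(rainbow_path(s, t, frozenset(), frozenset([s]))
               for s in range(n) for t in range(s + 1, n))

# Cycle on 4 vertices: color a spanning tree (0-1, 1-2, 2-3) with
# distinct colors and re-use one of them on the remaining edge, so
# |V| - 1 = 3 colors suffice (for this cycle, rc is in fact smaller).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
color = {frozenset((0, 1)): 0, frozenset((1, 2)): 1,
         frozenset((2, 3)): 2, frozenset((3, 0)): 0}
ok = rainbow_connected(4, edges, color)
print(ok)
```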
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking to inform the creation of a managerial mental model. We identify 20 behavioural strategies from 45 interviewees with practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies to form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and offering the possibility to integrate problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture, which extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research, in contrast to the traditionally applied rule-based approach.
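The kind of analytical method such a framework can integrate is exemplified by the classic assignment problem; the cost matrix below is hypothetical, and exhaustive search stands in for polynomial-time OR algorithms such as the Hungarian method:

```python
from itertools import permutations

def optimal_allocation(cost):
    # Exhaustively try every resource-to-task assignment and keep the
    # cheapest one (cost[i][j] = e.g. expected processing time of task j
    # by resource i). Real OR methods such as the Hungarian algorithm
    # solve this in polynomial time; brute force is only a sketch.
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Hypothetical 3 couriers x 3 delivery tours
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
assignment, total = optimal_allocation(cost)
print(assignment, total)
```

A rule-based allocation (e.g. "always take the first qualified courier") may be effective but will generally not reach the optimal total cost found here.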
In the field of Business Process Management (BPM), modeling business processes and related data is a critical issue, since process activities need to manage data stored in databases. The connection between processes and data is usually handled at the implementation level, even though modeling both processes and data at the conceptual level should help designers improve business process models and identify requirements for implementation. Especially in data- and decision-intensive contexts, business process activities need to access data stored both in databases and data warehouses. In this paper, we complete our approach for defining a novel conceptual view that bridges process activities and data. The proposed approach allows the designer to model the connection between business processes and database models and to define the operations to perform, providing interesting insights on the overall connected perspective and hints for identifying activities that are crucial for decision support.
Background:
More patient data are needed to improve research on rare liver diseases. Mobile health apps enable an exhaustive data collection. Therefore, the European Reference Network on Hepatological diseases (ERN RARE-LIVER) intends to implement an app for patients with rare liver diseases communicating with a patient registry, but little is known about which features patients and their healthcare providers regard as being useful.
Aims:
This study aimed to investigate how an app for rare liver diseases would be accepted, and to find out which features are considered useful.
Methods:
An anonymous survey was conducted on adult patients with rare liver diseases at a single academic, tertiary care outpatient-service. Additionally, medical experts of the ERN working group on autoimmune hepatitis were invited to participate in an online survey.
Results:
In total, the responses from 100 patients with autoimmune (n = 90) or other rare (n = 10) liver diseases and 32 experts were analyzed. Patients were convinced to use a disease-specific app (80%) and expected some benefit to their health (78%), but responses differed significantly between younger and older patients (93% vs. 62%, p < 0.001; 88% vs. 64%, p < 0.01). Comparing patients' and experts' feedback, patients more often expected a simplified healthcare pathway (e.g. 89% vs. 59% (p < 0.001) wanted access to one's own medical records), while healthcare providers saw the benefit mainly in improving compliance and treatment outcome (e.g. 93% vs. 31% (p < 0.001) and 70% vs. 21% (p < 0.001) expected the app to reduce mistakes in taking medication and improve quality of life, respectively).
Process mining techniques are valuable to gain insights into and help improve (work) processes. Many of these techniques focus on the sequential order in which activities are performed, and few consider the statistical relations within processes. In particular, existing techniques do not allow insights into how responses to an event (action) result in desired or undesired outcomes (effects). We propose and formalize the ARE miner, a novel technique that allows us to analyze and understand these action-response-effect patterns. We take a statistical approach to uncover potential dependency relations in these patterns. The goal of this research is to generate process representations that are (1) appropriate and (2) effectively filtered to show meaningful relations. We evaluate the ARE miner in two ways. First, we use an artificial data set to demonstrate the effectiveness of the ARE miner compared to two traditional process-oriented approaches. Second, we apply the ARE miner to a real-world data set from a Dutch healthcare institution. We show that the ARE miner generates comprehensible representations that lead to informative insights into statistical relations between actions, responses, and effects.
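The statistical core of such dependency analysis can be sketched with a hand-rolled chi-square test of independence between responses and effects (the event pairs below are hypothetical, not from the Dutch data set):

```python
from collections import Counter

def chi_square(pairs):
    # Pearson chi-square statistic for independence between a response
    # and its observed effect; a large value suggests the chosen
    # response and the outcome are statistically related.
    rows = Counter(r for r, _ in pairs)
    cols = Counter(e for _, e in pairs)
    cells = Counter(pairs)
    n = len(pairs)
    stat = 0.0
    for r in rows:
        for e in cols:
            expected = rows[r] * cols[e] / n
            stat += (cells[(r, e)] - expected) ** 2 / expected
    return stat

# Hypothetical (response, effect) observations for one action:
# "call" tends to precede the desired effect, "wait" the undesired one
pairs = [("call", "ok")] * 8 + [("call", "bad")] * 2 + \
        [("wait", "ok")] * 2 + [("wait", "bad")] * 8
stat = chi_square(pairs)
print(round(stat, 2))
```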
Despite advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical contexts. Among other reasons, this is due to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort of an American research hospital. To help account for the performance differences observed, we utilized interpretability methods based on feature importance, which allowed experts to scrutinize model behavior at both the global and local level, making it possible to gain further insights into why it did not behave as expected on the validation cohort. The knowledge gleaned during model derivation can help guide model updates during validation toward more generalizable and simpler models. We argue that practitioners should consider interpretability methods as a further tool to help explain performance differences and inform model updates in validation studies.
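Permutation-based feature importance, one interpretability method of the kind described, can be sketched as follows (model, features, and data are hypothetical):

```python
import random

def permutation_importance(predict, X, y, col, metric, n_repeats=10, seed=0):
    # Shuffle one feature column and measure how much the validation
    # metric drops on average: large drops mark features the model
    # actually relies on.
    rng = random.Random(seed)
    base = metric([predict(row) for row in X], y)
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[col] for row in X]
        rng.shuffle(shuffled)
        Xp = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
        drops.append(base - metric([predict(row) for row in Xp], y))
    return sum(drops) / n_repeats

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Toy classifier: uses only feature 0; feature 1 is noise
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]
imp_signal = permutation_importance(predict, X, y, 0, accuracy)
imp_noise = permutation_importance(predict, X, y, 1, accuracy)
print(imp_signal, imp_noise)
```

Shuffling the noise feature leaves performance unchanged, while shuffling the feature the model depends on degrades it; comparing such profiles between derivation and validation cohorts can localize why a model underperforms externally.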
Escalation of commitment in information systems projects: a cognitive-affective perspective
(2024)
Information systems (IS) projects are central to steering corporate strategy and maintaining competitive advantage, yet they frequently exceed their budgets, overrun their schedules, and exhibit high failure rates. This dissertation examines the psychological foundations of human behavior, in particular cognition and emotion, in the context of a widespread problem in IS project management: the tendency to persist with failing courses of action, known as escalation of commitment (EoC).
Using a mixed-methods research approach (combining qualitative and quantitative methods), my dissertation investigates the emotional and cognitive foundations of decision-making behind escalating commitment to failing IS projects and its development over time. The results of a psychophysiological laboratory experiment provide evidence for the predictions of cognitive dissonance theory over coping theory regarding the role of negative and complex situational emotions, and contribute to a better understanding of how escalation tendencies change during sequential decision-making due to cognitive learning effects. Using psychophysiological measurements, including data triangulation between electrodermal and cardiovascular activity as well as artificial-intelligence-based analysis of facial micro-expressions, this research reveals physiological markers of escalating commitment. Complementing the experiment, a qualitative analysis of text-based reflections during escalation situations shows that decision-makers use various cognitive reasoning patterns to justify escalating behaviors, suggesting a sequence of four distinct cognitive phases.
By integrating qualitative and quantitative findings, this dissertation develops a comprehensive theoretical model of how cognition and emotion influence escalating commitment over time. I propose that escalation of commitment is a cyclical adaptation of mental models, characterized by changes in cognitive reasoning patterns, variations in the temporal mode of cognition, and interactions with situational emotions and their anticipation. The main contribution of this work lies in disentangling the emotional and cognitive mechanisms that drive escalating commitment in the context of IS projects. The findings help improve the quality of decisions under uncertainty and provide a foundation for developing de-escalation strategies. Stakeholders in troubled IS projects should be aware of the tendency to persist with failing courses of action and of the significance of the underlying emotional and cognitive dynamics.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators to increase user engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy value of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
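A single message-passing step of the kind a GNN layer performs can be sketched as follows (a simplified mean-aggregation update on hypothetical user/tweet nodes; learned weight matrices and nonlinearities are omitted):

```python
def gnn_layer(features, edges):
    # One mean-aggregation message-passing step: each node's new
    # representation is its own feature vector concatenated with the
    # average of its neighbours' vectors (GraphSAGE-style, simplified).
    neigh = {v: [] for v in features}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    out = {}
    for v, own in features.items():
        ns = neigh[v] or [v]  # isolated nodes fall back on themselves
        mean = [sum(features[n][i] for n in ns) / len(ns)
                for i in range(len(own))]
        out[v] = own + mean   # concat: [own || neighbourhood summary]
    return out

# Hypothetical user and tweet nodes with 2-d feature vectors;
# edges represent engagements
feats = {"u1": [1.0, 0.0], "u2": [0.0, 1.0], "t1": [0.5, 0.5]}
edges = [("u1", "t1"), ("u2", "t1")]
emb = gnn_layer(feats, edges)
print(emb)
```

Stacking such layers and feeding the resulting embeddings to a classifier is the general recipe by which node features and graph structure are combined for engagement prediction.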
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of both the features of a node and an aggregation of the features of a node's neighborhood to classify social bot accounts. Our final results indicate a 6% improvement in the area-under-the-curve score in the final predictions through the use of GNNs.
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
Anomaly detection in process mining aims to recognize outlying or unexpected behavior in event logs for purposes such as the removal of noise and identification of conformance violations. Existing techniques for this task are primarily frequency-based, arguing that behavior is anomalous because it is uncommon. However, such techniques ignore the semantics of recorded events and, therefore, do not take the meaning of potential anomalies into consideration. In this work, we overcome this limitation and focus on the detection of anomalies from a semantic perspective, arguing that anomalies can be recognized when process behavior does not make sense. To achieve this, we propose an approach that exploits the natural language associated with events. Our key idea is to detect anomalous process behavior by identifying semantically inconsistent execution patterns. To detect such patterns, we first automatically extract business objects and actions from the textual labels of events. We then compare these against a process-independent knowledge base. By populating this knowledge base with patterns from various kinds of resources, our approach can be used in a range of contexts and domains. We demonstrate the capability of our approach to successfully detect semantic execution anomalies through an evaluation based on a set of real-world and synthetic event logs, and we show the complementary nature of semantics-based anomaly detection to existing frequency-based techniques.
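The pattern-checking idea can be sketched as follows; the naive label splitting and the exclusion patterns below are illustrative stand-ins for the NLP-based extraction and knowledge base described above:

```python
def extract(label):
    # Naive action/object split for labels like "approve invoice";
    # the actual approach uses NLP-based extraction.
    action, _, obj = label.partition(" ")
    return action, obj

# Illustrative process-independent knowledge base: action orderings
# that do not make sense on the same business object
EXCLUSION = {("reject", "approve"), ("archive", "send")}

def semantic_anomalies(trace):
    seen = {}          # business object -> actions observed so far
    anomalies = []
    for label in trace:
        action, obj = extract(label)
        for prev in seen.get(obj, []):
            if (prev, action) in EXCLUSION:
                anomalies.append((prev, action, obj))
        seen.setdefault(obj, []).append(action)
    return anomalies

trace = ["create invoice", "reject invoice", "approve invoice"]
anoms = semantic_anomalies(trace)
print(anoms)
```

Approving an invoice that was already rejected is flagged even if this execution pattern were frequent in the log, which is exactly what frequency-based techniques would miss.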
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore its use in such systems. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. By the example of stream processing engines, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations. This allows our prototype to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups by the example of Compute Express Link (CXL).
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operating conditions. Optimization-based solutions perform an exhaustive search of the adaptation space and can thus provide quality guarantees. However, these solutions render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and expressivity of individual rules, which supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures; the architecture is evaluated by assigning utility values to fragments; pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluating the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized based on meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime for balancing the quality-cost trade-off.
The twofold solution of this thesis is integrated in an adaptation engine that leverages state- and event-based principles for incremental execution and therefore scales to large and dynamic software architectures of growing size and complexity. The utility-elicitation challenge is resolved by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space for software self-adaptation.
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, often exploratory or ad-hoc to specific datasets; the large repertoire of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for the suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning, has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the latter stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., to correctly parse tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure, an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3000 real-world csv files and identify multiple structural issues that let files deviate from the csv standard, e.g., by having inconsistent delimiters or containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse csv files that have a non-standard structure, without any previous preparation. We report on our experiments on using Pollock to evaluate the performance of 16 real-world data management systems.
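One structural property a parser must guess, the delimiter, can be detected heuristically with Python's standard library; this is only a narrow stand-in for the grammar-based characterization and the Pollock benchmark described above:

```python
import csv
import io

# A small file whose delimiter deviates from the comma default
sample = "name;city;amount\nalice;berlin;10\nbob;potsdam;20\n"

dialect = csv.Sniffer().sniff(sample)   # frequency-based delimiter guess
rows = list(csv.reader(io.StringIO(sample), dialect))
print(dialect.delimiter, rows[0])
```

Such heuristics break down on exactly the issues the benchmark targets, e.g. inconsistent delimiters or multiple tables in one file, which is why a principled structural description is needed.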
Next, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system to assist users in identifying layout templates in a dataset, i.e., classes of files that have the same structure and can therefore be prepared by applying the same preparation pipeline.
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data preparation efforts estimation.
Our experimental results show that structural metadata, either identified explicitly on parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, is fundamental to automate several tasks, to scale up preparation to large quantities of files, and to provide repeatable preparation pipelines.
Open edX is an incredible platform to deliver MOOCs and SPOCs, designed to be robust and support hundreds of thousands of students at the same time. Nevertheless, it lacks a lot of the fine-grained functionality needed to handle students individually in an on-campus course. This short session will present the ongoing project undertaken by the 6 public universities of the Region of Madrid plus the Universitat Politècnica de València, in the framework of a national initiative called UniDigital, funded by the Ministry of Universities of Spain within the Plan de Recuperación, Transformación y Resiliencia of the European Union. This project, led by three of these Spanish universities (UC3M, UPV, UAM), is investing more than half a million euros with the purpose of bringing the Open edX platform closer to the functionalities required for an LMS to support on-campus teaching. The aim of the project is to coordinate what is going to be developed with the Open edX development community, so these developments are incorporated into the core of the Open edX platform in its next releases. Features like a complete redesign of platform analytics to make them real-time, the creation of dashboards based on these analytics, the integration of a system for customized automatic feedback, improvement of exams and tasks and the extension of grading capabilities, improvements in the graphical interfaces for both students and teachers, the extension of the emailing capabilities, redesign of the file management system, integration of H5P content, the integration of a tool to create mind maps, the creation of a system to detect students at risk, or the integration of an advanced voice assistant and a gamification mobile app, among others, are part of the functionalities to be developed. The idea is to transform a first-class MOOC platform into the next on-campus LMS.
“How can a course structure be redesigned based on empirical data to enhance learning effectiveness through a student-centered approach using objective criteria?” was the research question we asked. “Digital Twins for Virtual Commissioning of Production Machines” is a course using several innovative concepts, including an in-depth practical part with online experiments, called virtual labs. The teaching-learning concept is continuously evaluated. Card sorting is a popular method for designing information architectures (IA), “a practice of effectively organizing, structuring, and labeling the content of a website or application into a structure that enables efficient navigation” [11]. In the presented higher education context, a so-called hybrid card sort was used, in which each participant had to sort 70 cards into seven predefined categories or create new categories themselves. Twelve out of 28 students voluntarily participated in the process, and short interviews were conducted after the activity. The analysis of the category mapping yields a quantitative measure of the (dis-)similarity of the keywords in specific categories using hierarchical cluster analysis (HCA). The learning designer could then interpret the results to make decisions about the number, labeling, and order of sections in the course.
With the growing number of online learning resources, it becomes increasingly difficult and overwhelming to keep track of the latest developments and to find orientation in the plethora of offers. AI-driven services to recommend standalone learning resources or even complete learning paths are discussed as a possible solution for this challenge. To function properly, such services require a well-defined set of metadata provided by the learning resource. During the last few years, the so-called MOOChub metadata format has been established as a de-facto standard by a group of MOOC providers in German-speaking countries. This format, which is based on schema.org, already delivers a quite comprehensive set of metadata. So far, this set has been sufficient to list, display, sort, filter, and search for courses on several MOOC and open educational resources (OER) aggregators. AI recommendation services and further automated integration, beyond a plain listing, have special requirements, however. To optimize the format for proper support of such systems, several extensions and modifications have to be applied. We herein report on a set of suggested changes to prepare the format for this task.
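As an illustration of what such a course metadata record might look like, the sketch below uses schema.org/Course-style fields; the exact MOOChub property set and the fields a recommender would require are assumptions here, not the published format.

```python
# Hedged sketch: a schema.org-style course metadata record of the kind
# the MOOChub format builds on. Field names follow schema.org/Course;
# the actual MOOChub specification may differ.
course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Introduction to Machine Learning",
    "description": "A beginner-friendly online course.",
    "inLanguage": ["en"],
    "provider": {"@type": "Organization", "name": "Example University"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# An AI recommendation service needs a minimum set of fields to work with.
# This required set is an illustrative assumption, not the MOOChub spec.
REQUIRED = {"@type", "name", "description", "inLanguage", "provider"}

def is_recommendable(record: dict) -> bool:
    """Check whether a record carries the fields a recommender needs."""
    return REQUIRED.issubset(record)

print(is_recommendable(course))  # True for this record
```

Extensions of the kind the paper suggests would add further machine-actionable fields (e.g. prerequisites or skill taxonomies) on top of such a base record.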
The integration of MOOCs into Moroccan Higher Education (MHE) began in 2013 through different partnerships and projects at national and international levels. As elsewhere, the Covid-19 crisis played an important role in accelerating distance education in MHE. However, based on our experience as both university professors and specialists in educational engineering, the digital transition has not yet been effectively implemented. Thus, in this article, we present retrospective feedback on MOOCs in Morocco, focusing on the policies taken by the government to better support the digital transition in general and MOOCs in particular. We therefore seek to establish an optimal scenario for the promotion of MOOCs, one that emphasizes the policies to be considered and recalls the importance of a delicate articulation across four levels, namely environmental, institutional, organizational, and individual. We conclude with recommendations inspired by the Moroccan academic context that focus on the major role that MOOCs play for university students and on maintaining lifelong learning.
“Financial Analysis” is an online course designed for professionals, consisting of three MOOCs and offering a professionally and institutionally recognized certificate in finance. The course is open but not free of charge and attracts mostly professionals from the banking industry. The primary objective of this study is to identify indicators that can predict learners at high risk of failure. To achieve this, we analyzed data from a previous run of the course, which had 875 learners enrolled and active during Fall 2021. We utilized correspondence analysis to examine demographic and behavioral variables.
The initial results indicate that demographic factors have a minor impact on the risk of failure in comparison to learners’ behaviors on the course platform. Two primary profiles were identified: (1) successful learners who utilized all the documents offered and spent between one to two hours per week, and (2) unsuccessful learners who used less than half of the proposed documents and spent less than one hour per week. Between these groups, at-risk students were identified as those who used more than half of the proposed documents and spent more than two hours per week. The goal is to identify those in group 1 who may be at risk of failing and those in group 2 who may succeed in the current MOOC, and to implement strategies to assist all learners in achieving success.
This paper presents a new design for MOOCs for the professional development of skills needed to meet the UN Sustainable Development Goals – the CoMOOC, or Co-designed Massive Open Online Collaboration. The CoMOOC model is based on co-design with multiple stakeholders, including end-users within the professional communities the CoMOOC aims to reach. This paper shows how the CoMOOC model could help the tertiary sector deliver on the UN Sustainable Development Goals (UNSDGs) – including but not limited to SDG 4 Education – by providing a more effective vehicle for professional development at the scale that the UNSDGs require. Interviews with professionals using MOOCs and design-based research with professionals have informed the development of the CoMOOC model. This research shows that open, online, collaborative learning experiences are highly effective for building professional community knowledge. Moreover, this research shows that the collaborative learning design at the heart of the CoMOOC model is feasible cross-platform. Research with teachers working in crisis contexts in Lebanon, many of whom were refugees, will be presented to show how this form of large-scale, co-designed, online learning can support professionals even in the most challenging contexts, such as mass displacement, where expertise is urgently required.
xMOOCs
(2023)
The World Health Organization designed OpenWHO.org to provide an inclusive and accessible online environment that equips learners across the globe with critical, up-to-date information so that they can effectively protect themselves in health emergencies. The platform thus focuses on the eXtended Massive Open Online Course (xMOOC) modality – content-focused and expert-driven, one-to-many in its model, and self-paced for scalable learning. In this paper, we describe how OpenWHO utilized xMOOCs to reach mass audiences during the COVID-19 pandemic; the paper specifically examines the accessibility, language inclusivity and adaptability of the hosted xMOOCs. As of February 2023, OpenWHO had 7.5 million enrolments across 200 xMOOCs on health emergency, epidemic, pandemic and other public health topics available across 65 languages, including 46 courses targeted at the COVID-19 pandemic. Our results suggest that the xMOOC modality allowed OpenWHO to expand learning during the pandemic to previously underrepresented groups, including women, participants ages 70 and older, and learners younger than age 20. The OpenWHO use case shows that xMOOCs should be considered when there is a need for massive knowledge transfer in health emergency situations, yet the approach should be context-specific according to the type of health emergency, targeted population and region. Our evidence also supports previous calls to put intervention elements that contribute to removing barriers to access at the core of learning and health information dissemination. Equity must be the fundamental principle and organizing criterion for public health work.
How to reuse inclusive STEM MOOCs in blended settings to engage young girls in scientific careers
(2023)
The FOSTWOM project (2019–2022), funded by ERASMUS+, gave METID (Politecnico di Milano) and the MOOC Técnico (Instituto Superior Técnico, University of Lisbon), together with other partners, the opportunity to support the design and creation of gender-inclusive MOOCs. Among other project outputs, we designed a toolkit and a framework that enabled the production of two MOOCs for undergraduate and graduate students in Science, Technology, Engineering and Maths (STEM), with academic content free of gender stereotypes about intellectual ability. In this short paper, the authors aim to 1) briefly share the main outputs of the project and 2) tell the story of how the FOSTWOM approach, together with a motivational strategy, the Heroine’s Learning Journey, proved effective in the context of rural and marginal areas in Brazil, with young girls as a specific target audience.
Challenges and proposals for introducing digital certificates in higher education infrastructures
(2023)
Questions about the recognition of MOOCs within and outside higher education were already being raised in the early 2010s. Today, recognition decisions are still made more or less on a case-by-case basis. However, digital certification approaches are now emerging that could automate recognition processes. The technical development of the required machine-readable documents and infrastructures is already well advanced in some cases. The DigiCerts consortium has developed a solution based on a collective blockchain. There are ongoing and open discussions regarding the particular technology, but the institutional implementation of digital certificates raises further questions. A number of workshops held at the Institute for Interactive Systems at Technische Hochschule Lübeck have identified the need for new responsibilities for issuing certificates. It has also become clear that all members of higher education institutions need to develop skills in the use of digital certificates.
To implement OERs at higher education institutions sustainably, not only technical infrastructure is required but also well-trained staff. The University of Graz is in charge of an OER training program for university staff as part of the collaborative project Open Education Austria Advanced (OEAA), with the aim of ensuring long-term competence growth in the use and creation of OERs. The program consists of a MOOC and a guided blended learning format, which was evaluated to find out which accompanying teaching and learning concepts can best facilitate targeted competence development. The evaluation of the program shows that learning videos, self-study assignments, and synchronous sessions are most useful for the learning process. The results indicate that the creation of OERs is a complex process that can be undergone more effectively in the guided program.
Loss of expertise in the fields of Nuclear and Radiochemistry (NRC) is problematic at a scientific and social level. This has been addressed by developing a MOOC, in order to let students in scientific subjects discover all the benefits of NRC to society and to improve their awareness of this discipline. The MOOC “Essential Radiochemistry for Society” covers current societal challenges related to health, clean and sustainable energy, and the safety and quality of food and agriculture.
NRC teachers belonging to the CINCH network were invited to use the MOOC in their teaching according to various usage models. On the basis of these different experiences, some usage patterns were designed, describing context characteristics (number and age of students, course), the scheduling and organization of activities, results, and students’ feedback, with the aim of encouraging the use of MOOCs in university teaching as an opportunity for both lecturers and students. These models were the basis of a “toolkit for teachers”. By experiencing digital teaching resources created by different lecturers, CINCH teachers took a first meaningful step towards understanding the worth of Open Educational Resources (OER) and the importance of their creation, adoption, and sharing for the progress of knowledge. In this paper, the entire path from the MOOC concept to the different MOOC usage models to awareness-raising regarding OER is traced in conceptual stages.
InnovaT MOOC
(2023)
The COVID-19 pandemic has revealed the importance for university teachers to have adequate pedagogical and technological competences to cope with the various possible educational scenarios (face-to-face, online, hybrid, etc.), making use of appropriate active learning methodologies and supporting technologies to foster a more effective learning environment. In this context, the InnovaT project has been an important initiative to support the development of pedagogical and technological competences of university teachers in Latin America through several trainings aiming to promote teacher innovation. These trainings combined synchronous online training through webinars and workshops with asynchronous online training through the MOOC “Innovative Teaching in Higher Education.” This MOOC was released twice. The first run took place right during the lockdown of 2020, when Latin American teachers needed urgent training to move to emergency remote teaching overnight. The second run took place in 2022 with the return to face-to-face teaching and the implementation of hybrid educational models. This article shares the results of the design of the MOOC considering the constraints derived from the lockdowns applied in each country, the lessons learned from the delivery of such a MOOC to Latin American university teachers, and the results of the two runs of the MOOC.
As Thailand moves towards becoming an innovation-driven economy, the need for human capital development has become crucial. Work-based skill MOOCs, offered on Thai MOOC, a national digital learning platform launched by the Thailand Cyber University Project, Ministry of Higher Education, Science, Research and Innovation, provide an effective way to overcome this challenge. This paper discusses the challenges faced in designing instruction for work-based skill MOOCs that can serve as a foundation model for many more to come. The instructional design of work-based skill courses in Thai MOOC involves four simple steps: course selection, learning from accredited providers, completion of course requirements, and certification of acquired skills. The development of such courses is ongoing at the higher education, vocational, and pre-university levels, serving as a foundation model for many more work-based skill MOOCs that will be offered on Thai MOOC soon. The instructional design of work-based skill courses should focus on developing currently demanded professional competencies and skills, increasing the efficiency of work in the organization, creativity, and happiness in life, meeting the human resource needs of industries in the 4.0 economy era in Thailand. This paper aims to present the challenges of designing instruction for work-based skill MOOCs and suggests effective ways to design instruction to enhance workforce development in Thailand.
ReadBouncer
(2022)
Motivation:
Nanopore sequencers allow targeted sequencing of interesting nucleotide sequences by rejecting other sequences from individual pores. This feature facilitates the enrichment of low-abundance sequences by depleting overrepresented ones in silico. Existing tools for adaptive sampling either apply signal alignment, which cannot handle human-sized reference sequences, or apply read mapping in sequence space, relying on fast graphics processing unit (GPU) base callers for real-time read rejection. Using nanopore long-read mapping tools is also not optimal when mapping shorter reads, such as those usually analyzed in adaptive sampling applications.
Results:
Here, we present a new approach for nanopore adaptive sampling that combines fast CPU and GPU base calling with read classification based on Interleaved Bloom Filters. ReadBouncer improves the potential enrichment of low-abundance sequences through its high read classification sensitivity and specificity, outperforming existing tools in the field. It robustly removes even reads belonging to large reference sequences while running on commodity hardware without GPUs, making adaptive sampling accessible for in-field researchers. ReadBouncer also provides a user-friendly interface and installer files for end-users without a bioinformatics background.
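To illustrate the general idea behind Bloom-filter-based read classification (not ReadBouncer's actual Interleaved Bloom Filter implementation), here is a minimal single-filter sketch: reference k-mers are inserted into the filter, and a read is accepted if a sufficient fraction of its k-mers match. All sequences, sizes, and thresholds are invented for illustration.

```python
# Simplified, single-filter sketch of Bloom-filter read classification.
# ReadBouncer uses Interleaved Bloom Filters; this only shows the principle.
import hashlib

class BloomFilter:
    def __init__(self, size: int = 1 << 16, hashes: int = 3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)  # one byte per bit position, for clarity

    def _positions(self, item: str):
        # Derive several hash positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

def kmers(seq: str, k: int = 5):
    return (seq[i : i + k] for i in range(len(seq) - k + 1))

# Index a (toy) reference; classify reads by their fraction of matching k-mers.
bf = BloomFilter()
reference = "ACGTACGTGGCA"
for km in kmers(reference):
    bf.add(km)

def matches_reference(read: str, threshold: float = 0.5) -> bool:
    kms = list(kmers(read))
    hits = sum(km in bf for km in kms)
    return hits / len(kms) >= threshold

print(matches_reference("ACGTACGT"))  # k-mers drawn from the reference
print(matches_reference("TTTTTTTT"))  # unlikely to match
```

In adaptive sampling, such a classification decision would trigger the sequencer to reject or keep the read in real time; the filter's false-positive rate (here governed by size and hash count) is what the sensitivity/specificity trade-off hinges on.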
This research paper introduces a novel practitioner-oriented and research-based taxonomy of video genres. This taxonomy can serve as a scaffolding strategy to support educators throughout the entire educational system in creating videos for pedagogical purposes. A taxonomy of video genres is essential, as videos are highly valued resources among learners. Although the use of videos in education has been extensively researched and well documented in systematic research reviews, gaps remain in the literature. Predominantly, researchers employ sophisticated quantitative methods and similar approaches to measure the performance of videos. This trend has led to the emergence of a strong learning analytics research tradition with its embedded literature. This body of research includes analyses of the performance of videos in online courses such as Massive Open Online Courses (MOOCs). Surprisingly, this same literature is limited in terms of research outlining approaches to designing and creating educational videos, which applies to both video-based learning and online courses. This results in a knowledge gap, highlighting the need to develop pedagogical tools and strategies for video making. These can be found in frameworks, guidelines, and taxonomies, which can serve as scaffolding strategies. Yet very few frameworks appear to be available for designing and creating videos for pedagogical purposes, apart from a few well-known ones. In this regard, this research paper proposes a novel taxonomy of video genres that educators can utilize when creating videos intended for use in either video-based learning environments or online courses. To create this taxonomy, a large number of videos from online courses were collected and analyzed using a mixed-methods research design.
From MOOC to “2M-POC”
(2023)
IFP School has developed and produced MOOCs since 2014. After the COVID-19 crisis, the demand from our industrial and international partners for continuous training of their employees increased drastically, in an energy transition and sustainable mobility environment that is in constant and rapid evolution. It is therefore time for a new format of digital learning tools to efficiently and rapidly train large numbers of employees. To address this new demand, in an increasingly digital learning environment, we have completely changed our initial MOOC model and propose an innovative SPOC business model mixing synchronous and asynchronous modules. This paper describes the work that has been done to transform our MOOCs into a hybrid SPOC model. We changed the format itself from a standard MOOC model of several weeks to small modules of one week on average, better adapted to our clients’ demand. We carefully engineered the exchanges between learners and the social aspect throughout the SPOC duration. We propose a multimodal approach combining asynchronous activities, such as online modules and exercises, with synchronous activities, such as webinars with experts and after-work sessions. Additionally, this new format increases the reuse of the MOOC resources by our professors in our own master programs.
With all these actions, we were able to reach a completion rate between 80 and 96% – of total enrolled – compared to the completion rate of 15 to 28% – of total enrolled – recorded in our original MOOC format. This holds for small groups (50–100 learners) as a SPOC but also for large groups (more than 2500 learners) as a Massive and Multimodal Private Online Course (“2M-POC”). Today a MOOC is not a simple assembly of videos, texts, discussion forums and validation exercises but a complete multimodal learning path including social learning, personal follow-up, and synchronous and asynchronous modules. We conclude that the original MOOC format is not at all suitable for offering efficient training to companies, and we must re-engineer the learning path into a hybrid, multimodal SPOC training compatible with a cost-effective business model.
In 2020, the project “iMooX – The MOOC Platform as a Service for all Austrian Universities” was launched, co-financed by the Austrian Ministry of Education, Science and Research. Halfway through the funding period, the project management wants to assess and share results and outcomes, but also to address (potential) additional “impacts” of the MOOC platform. Building upon work on OER impact assessment, this contribution describes in detail how the specific iMooX.at approach to impact measurement was developed. A literature review, a stakeholder analysis, and problem-based interviews were the basis for developing a questionnaire addressing the defined key stakeholders, the “MOOC creators”. The article also presents the survey results in English for the first time but focuses more on the development, strengths, and weaknesses of the selected methods. The article is seen as a contribution to the further development of impact assessment for MOOC platforms.
Thai MOOC academy
(2023)
Thai MOOC Academy is a national digital learning platform that has served as a mechanism for promoting lifelong learning in Thailand since 2017. It has recently undergone significant improvements and upgrades, including the implementation of a credit bank system and a learner’s e-portfolio system interconnected with the platform. Thai MOOC Academy is introducing a national credit bank system for accreditation and management, which allows for the transfer of expected learning outcomes and educational qualifications between formal, non-formal, and informal education. The credit bank system has five distinct features: issuing forgery-proof certificates, recording learning results, transferring external credits within the same wallet, accumulating learning results, and creating a QR code for verification purposes. The paper discusses the features and future potential of Thai MOOC Academy as it is extended towards a sandbox for the national credit bank system in Thailand.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms that lists well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. The paper at hand presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an open-licensed de-facto standard based on existing, commonly used standards and, second, a first prototypical platform that uses this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, the upcoming developments to further optimize the catalog and the metadata standard are reported.
Digital technologies have enabled a variety of learning offers that opened new challenges in terms of recognition of formal, informal and non-formal learning, such as MOOCs.
This paper focuses on how providing relevant data to describe a MOOC helps increase the transparency of information and, ultimately, the flexibility of European higher education.
The EU-funded project ECCOE took up these challenges and developed a solution by identifying the most relevant descriptors of a learning opportunity with a view to supporting a European system for micro-credentials. Descriptors indicate the specific properties of a learning opportunity according to European standards. They can provide a recognition framework also for small volumes of learning (micro-credentials) to support the integration of non-formal learning (MOOCs) into formal learning (e.g. institutional university courses) and to tackle skills shortage, upskilling and reskilling by acquiring relevant competencies. The focus on learning outcomes can facilitate the recognition of skills and competences of students and enhance both virtual and physical mobility and employability.
This paper presents two contexts where ECCOE descriptors have been adopted: the Politecnico di Milano MOOC platform (Polimi Open Knowledge – POK), which is using these descriptors as the standard information to document the features of its learning opportunities, and the EU-funded Uforest project on urban forestry, which developed a blended training program for students of partner universities whose MOOCs used the ECCOE descriptors.
Practice with ECCOE descriptors shows how they can be used not only to detail MOOC features, but also as a compass to design the learning offer. In addition, some rules of thumb can be derived and applied when using specific descriptors.
Founded in 2013, OpenClassrooms is a French online learning company that offers both paid courses and free MOOCs on a wide range of topics, including computer science and education. In 2021, in partnership with the EDA research unit, OpenClassrooms shared a database to address the problem of how to increase persistence in their paid courses, which consist of a series of MOOCs and human mentoring. Our statistical analysis aims to identify reasons for dropouts that are due to the course design rather than to demographic predictors or external factors. We aim to identify at-risk students, i.e. those who are on the verge of dropping out at a specific moment. To achieve this, we use learning analytics to characterize student behavior. We conducted data analysis on a sample of data related to the “Web Designers” and “Instructional Design” courses. By visualizing the student flow and constructing speed and acceleration predictors, we can identify which parts of the course need to be calibrated and when particular attention should be paid to these at-risk students.
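Speed and acceleration predictors of the kind mentioned can be sketched as simple finite differences of a student's cumulative progress over time; the progress series and the at-risk threshold below are invented for illustration and are not the study's actual definitions.

```python
# Hedged sketch: speed/acceleration predictors from weekly progress data.
# The progress values and the at-risk rule are illustrative assumptions.
import numpy as np

# Cumulative fraction of the course completed, sampled weekly (invented).
progress = np.array([0.05, 0.15, 0.30, 0.38, 0.40, 0.41])

speed = np.diff(progress)  # progress gained each week ("speed")
accel = np.diff(speed)     # week-to-week change in speed ("acceleration")

# A simple flag: low and still-decreasing weekly progress suggests
# a student on the verge of dropping out (threshold is an assumption).
at_risk = bool(speed[-1] < 0.02 and accel[-1] < 0)
print(speed, accel, at_risk)
```

In practice such predictors would be computed per student and per course section, so that slowdowns can be attributed either to the student or to a part of the course that needs recalibration.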
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen learned skills.
Modularization describes the transformation of MOOCs from a comprehensive academic course format into smaller, more manageable learning offerings. It can be seen as one of the prerequisites for the successful implementation of MOOC-based micro-credentials in professional education and training. This short paper reports on the development and application of a modularization framework for Open Online Courses. Using the example of eGov-Campus, a German MOOC provider for the public sector linked to both academia and formal professional development, the structural specifications for modularized MOOC offerings and a methodology for course transformation as well as associated challenges in technology, organization and educational design are outlined. Following on from this, future prospects are discussed under the headings of individualization, certification and integration.
This work explores the use of different generative AI tools in the design of MOOC courses. The authors employed a variety of AI-based tools, including natural language processing tools (e.g. ChatGPT) and multimedia content authoring tools (e.g. DALL-E 2, Midjourney, Tome.ai), to assist in the course design process. The aim was to address the unique challenges of MOOC course design, which include creating engaging and effective content, designing interactive learning activities, and assessing student learning outcomes. The authors identified positive results with the incorporation of AI-based tools, which significantly improved the quality and effectiveness of MOOC course design. The tools proved particularly effective in analyzing and categorizing course content, identifying key learning objectives, and designing interactive learning activities that engaged students and facilitated learning. Moreover, the use of AI-based tools streamlined the course design process, significantly reducing the time required to design and prepare the courses. In conclusion, the integration of generative AI tools into the MOOC course design process holds great potential for improving the quality and efficiency of these courses. Researchers and course designers should consider the advantages of incorporating generative AI tools into their design process to enhance their course offerings and facilitate student learning outcomes while also reducing the time and effort required for course development.
The massive growth of MOOCs in 2011 laid the groundwork for the achievement of SDG 4. With the various benefits of MOOCs, there is also anticipation that online education should focus on more interactivity and global collaboration. In this context, the Global MOOC and Online Education Alliance (GMA) established a diverse group of 17 world-leading universities and three online education platforms from across 14 countries on all six continents in 2020. Through nearly three years of exploration, GMA has gained experience and achieved progress in fostering global cooperation in higher education. First, in joint teaching, GMA has promoted in-depth cooperation between members inside and outside the alliance. Examples include promoting the exchange of high-quality MOOCs, encouraging the creation of Global Hybrid Classroom, and launching Global Hybrid Classroom Certificate Programs. Second, in capacity building and knowledge sharing, GMA has launched Online Education Dialogues and the Global MOOC and Online Education Conference, inviting global experts to share best practices and attracting more than 10 million viewers around the world. Moreover, GMA is collaborating with international organizations to support teachers’ professional growth, create an online learning community, and serve as a resource for further development. Third, in public advocacy, GMA has launched the SDG Hackathon and Global Massive Open Online Challenge (GMOOC) and attracted global learners to acquire knowledge and incubate their innovative ideas within a cross-cultural community to solve real-world problems that all humans face and jointly create a better future. Based on past experiences and challenges, GMA will explore more diverse cooperation models with more partners utilizing advanced technology, provide more support for digital transformation in higher education, and further promote global cooperation towards building a human community with a shared future.