004 Data processing; Computer science
Document Type
- Article (342)
- Monograph/Edited Volume (168)
- Doctoral Thesis (160)
- Conference Proceeding (61)
- Postprint (50)
- Master's Thesis (10)
- Other (9)
- Preprint (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (613)
- German (193)
- Multiple languages (2)
Keywords
- Informatik (21)
- machine learning (20)
- Didaktik (15)
- Hochschuldidaktik (14)
- Ausbildung (13)
- Cloud Computing (13)
- answer set programming (13)
- cloud computing (13)
- maschinelles Lernen (11)
- Forschungsprojekte (10)
- Future SOC Lab (10)
- Hasso-Plattner-Institut (10)
- In-Memory Technologie (10)
- Multicore Architekturen (10)
- Hasso Plattner Institute (9)
- E-Learning (8)
- Forschungskolleg (8)
- Klausurtagung (8)
- Machine Learning (8)
- Maschinelles Lernen (8)
- Service-oriented Systems Engineering (8)
- multicore architectures (8)
- research projects (8)
- Modellierung (7)
- openHPI (7)
- social media (7)
- Datenintegration (6)
- Geschäftsprozessmanagement (6)
- Prozessmodellierung (6)
- Smalltalk (6)
- Visualisierung (6)
- business process management (6)
- cloud (6)
- visualization (6)
- Antwortmengenprogrammierung (5)
- Big Data (5)
- Computer Science Education (5)
- Datenschutz (5)
- Design Thinking (5)
- Identitätsmanagement (5)
- MOOCs (5)
- Ph.D. retreat (5)
- Verifikation (5)
- artificial intelligence (5)
- blockchain (5)
- cyber-physical systems (5)
- data integration (5)
- digital education (5)
- digitale Bildung (5)
- higher education (5)
- identity management (5)
- in-memory technology (5)
- künstliche Intelligenz (5)
- privacy (5)
- programming (5)
- quantitative analysis (5)
- security (5)
- service-oriented systems engineering (5)
- verification (5)
- virtual machines (5)
- Betriebssysteme (4)
- Digitalisierung (4)
- In-Memory technology (4)
- Informatics Education (4)
- Informatikdidaktik (4)
- Infrastruktur (4)
- Künstliche Intelligenz (4)
- Privacy (4)
- Research School (4)
- Semantic Web (4)
- Sicherheit (4)
- Virtualisierung (4)
- Virtuelle Maschinen (4)
- Vorhersage (4)
- digitalization (4)
- education (4)
- evaluation (4)
- graph transformation (4)
- image processing (4)
- innovation (4)
- middleware (4)
- nested graph conditions (4)
- operating systems (4)
- probabilistic timed systems (4)
- process mining (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- self-sovereign identity (4)
- smart contracts (4)
- software engineering (4)
- 3D visualization (3)
- Algorithmen (3)
- Answer Set Programming (3)
- BPMN (3)
- Bildverarbeitung (3)
- Blockchains (3)
- CityGML (3)
- Cloud (3)
- Competence Measurement (3)
- Computer Networks (3)
- Computernetzwerke (3)
- DPLL (3)
- Data profiling (3)
- Datenanalyse (3)
- Datenbank (3)
- Datenbanken (3)
- Graphtransformationen (3)
- HCI (3)
- IPv4 (3)
- IPv6 (3)
- Identität (3)
- Informatics (3)
- Informatikstudium (3)
- Informationsextraktion (3)
- Infrastructure (3)
- Innovation (3)
- Internet Protocol (3)
- Internet of Things (3)
- JSP (3)
- Klassifikation (3)
- Kompetenzen (3)
- Laufzeitmodelle (3)
- Lively Kernel (3)
- MOOC (3)
- Mensch-Computer-Interaktion (3)
- Middleware (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Navigation (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Optimization (3)
- Ph.D. Retreat (3)
- Process Mining (3)
- Process Modeling (3)
- SAT (3)
- Secondary Education (3)
- Softwareentwicklung (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Twitter (3)
- Werkzeuge (3)
- abstraction (3)
- algorithms (3)
- bibliometric analysis (3)
- business processes (3)
- citation analysis (3)
- clustering (3)
- collaboration (3)
- computational thinking (3)
- computer science (3)
- computer vision (3)
- conference (3)
- data preparation (3)
- data profiling (3)
- database systems (3)
- debugging (3)
- didactics (3)
- digital transformation (3)
- distributed systems (3)
- duplicate detection (3)
- geospatial data (3)
- graph transformation systems (3)
- informatics (3)
- model (3)
- model-driven engineering (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- outlier detection (3)
- prediction (3)
- real-time (3)
- simulation (3)
- social network analysis (3)
- systems biology (3)
- tele-TASK (3)
- tools (3)
- trust (3)
- user experience (3)
- virtual reality (3)
- virtualization (3)
- virtuelle Maschinen (3)
- 3D point clouds (2)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- ACINQ (2)
- ASIC (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Analyse (2)
- Anomalieerkennung (2)
- Artificial Intelligence (2)
- Aspektorientierte Softwareentwicklung (2)
- Assessment (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Augmented reality (2)
- Australian securities exchange (2)
- Auswirkungen (2)
- Authentifizierung (2)
- Automatisches Beweisen (2)
- BCCC (2)
- BPM (2)
- BTC (2)
- Bayesian networks (2)
- Bibliometrics (2)
- BitShares (2)
- Bitcoin (2)
- Bitcoin Core (2)
- Blended Learning (2)
- Blockchain (2)
- Blockchain Auth (2)
- Blockchain-Konsortium R3 (2)
- Blockkette (2)
- Blockstack (2)
- Blockstack ID (2)
- Blumix-Plattform (2)
- Blöcke (2)
- Bounded Model Checking (2)
- Business Process Management (2)
- Byzantine Agreement (2)
- COVID-19 (2)
- CSC (2)
- CSCW (2)
- Classification (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Clusteranalyse (2)
- Code (2)
- Colored Coins (2)
- Competence Modelling (2)
- Computational thinking (2)
- Computer Science (2)
- Computergrafik (2)
- Computersicherheit (2)
- Computing (2)
- Constraint Solving (2)
- DAO (2)
- DPoS (2)
- Data Integration (2)
- Data Modeling (2)
- Data Privacy (2)
- Data Profiling (2)
- Databases (2)
- Datenaufbereitung (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deduction (2)
- Deep learning (2)
- Delegated Proof-of-Stake (2)
- Delphi study (2)
- Distributed Proof-of-Research (2)
- Diversity (2)
- Duplikaterkennung (2)
- E-Wallet (2)
- ECDSA (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- Economics (2)
- Electronic and spintronic devices (2)
- Eris (2)
- Ether (2)
- Ethereum (2)
- European Bioinformatics Institute (2)
- European Union (2)
- Europäische Union (2)
- Evolution (2)
- Exploration (2)
- FMC (2)
- FPGA (2)
- Federated Byzantine Agreement (2)
- Feedback (2)
- Fehlende Daten (2)
- Fehlertoleranz (2)
- FollowMyVote (2)
- Fork (2)
- Formale Verifikation (2)
- GPU (2)
- Game Dynamics (2)
- General Earth and Planetary Sciences (2)
- Geodaten (2)
- Geography, Planning and Development (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Gridcoin (2)
- HPI Schul-Cloud (2)
- Hard Fork (2)
- Hashed Timelock Contracts (2)
- Hauptspeicherdatenbank (2)
- Hochschullehre (2)
- ICA (2)
- ICT (2)
- ICT competencies (2)
- ISSEP (2)
- IT-Infrastruktur (2)
- IT-infrastructure (2)
- Impact (2)
- In-Memory (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Internet (2)
- Internet Service Provider (2)
- Internet der Dinge (2)
- IoT (2)
- Japanese Blockchain Consortium (2)
- Japanisches Blockchain-Konsortium (2)
- Java (2)
- Kette (2)
- Key Competencies (2)
- Klausellernen (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Kompetenz (2)
- Konferenz (2)
- Konsensalgorithmus (2)
- Konsensprotokoll (2)
- Learning Analytics (2)
- Lightning Network (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lock-Time-Parameter (2)
- Logic Programming (2)
- Logics (2)
- MERLOT (2)
- Machine learning (2)
- Measurement (2)
- Megamodell (2)
- Metaverse (2)
- Micropayment-Kanäle (2)
- Microsoft Azure (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Mustererkennung (2)
- NASDAQ (2)
- NameID (2)
- Namecoin (2)
- Off-Chain-Transaktionen (2)
- Onename (2)
- Online Course (2)
- Online-Learning (2)
- Ontologie (2)
- OpenBazaar (2)
- Oracles (2)
- Orphan Block (2)
- P2P (2)
- Patterns (2)
- Peer-to-Peer Netz (2)
- Peercoin (2)
- Planning (2)
- PoB (2)
- PoS (2)
- PoW (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privatsphäre (2)
- Problem Solving (2)
- Process (2)
- Programmieren (2)
- Programmierung (2)
- Proof-of-Burn (2)
- Proof-of-Stake (2)
- Proof-of-Work (2)
- Prozess (2)
- Python (2)
- RDF (2)
- Relevanz (2)
- Ressourcenoptimierung (2)
- Ripple (2)
- Runtime analysis (2)
- SCP (2)
- SHA (2)
- SPV (2)
- SQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Satisfiability (2)
- Schule (2)
- Schwierigkeitsgrad (2)
- Second Life (2)
- Semiconductors (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- Simplified Payment Verification (2)
- Skalierbarkeit der Blockchain (2)
- Slock.it (2)
- Social (2)
- Soft Fork (2)
- Software Engineering (2)
- Softwarearchitektur (2)
- Steemit (2)
- Stellar Consensus Protocol (2)
- Storj (2)
- Studie (2)
- SysML (2)
- Systematics (2)
- Systemsoftware (2)
- Systemstruktur (2)
- Taxonomy (2)
- Teamarbeit (2)
- Temporallogik (2)
- Texturen (2)
- The Bitfury Group (2)
- The DAO (2)
- Theorembeweisen (2)
- Theoretische Informatik (2)
- Theory (2)
- Timed Automata (2)
- Transaktion (2)
- Two-Way-Peg (2)
- UX (2)
- Unifikation (2)
- Unspent Transaction Output (2)
- User Experience (2)
- VM (2)
- Verhalten (2)
- Verlässlichkeit (2)
- Versionsverwaltung (2)
- Verträge (2)
- Virtual Reality (2)
- Visualization (2)
- Water Science and Technology (2)
- Watson IoT (2)
- YouTube (2)
- Zielvorgabe (2)
- Zookos Dreieck (2)
- Zooko's triangle (2)
- adaptive (2)
- adaptive Systeme (2)
- adaptive systems (2)
- altchain (2)
- alternative chain (2)
- anomaly detection (2)
- anxiety (2)
- architecture (2)
- atomic swap (2)
- attribute assurance (2)
- authentication (2)
- authorship attribution (2)
- batch processing (2)
- bidirectional payment channels (2)
- big data (2)
- big data services (2)
- bitcoins (2)
- blockchain consortium (2)
- blockchain-übergreifend (2)
- blocks (2)
- blumix platform (2)
- bounded model checking (2)
- causal discovery (2)
- causal structure learning (2)
- chain (2)
- classification (2)
- cloud security (2)
- co-citation analysis (2)
- co-occurrence analysis (2)
- code (2)
- competence (2)
- competencies (2)
- complexity (2)
- comprehension (2)
- computer graphics (2)
- computer science education (2)
- computer security (2)
- confirmation period (2)
- consensus algorithm (2)
- consensus protocol (2)
- consistency (2)
- contest period (2)
- continuous integration (2)
- contracts (2)
- conversational agents (2)
- cross-chain (2)
- cyber-physische Systeme (2)
- cyberbullying (2)
- data (2)
- data analytics (2)
- data mining (2)
- data quality (2)
- data wrangling (2)
- decentralized autonomous organization (2)
- deep learning (2)
- deferred choice (2)
- dependability (2)
- depression (2)
- design thinking (2)
- dezentrale autonome Organisation (2)
- difficulty (2)
- difficulty target (2)
- digital enlightenment (2)
- digital health (2)
- digital identity (2)
- digital learning platform (2)
- digital sovereignty (2)
- digital whiteboard (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- direct manipulation (2)
- doppelter Hashwert (2)
- double hashing (2)
- dynamic reconfiguration (2)
- e-Learning (2)
- e-learning (2)
- empathy (2)
- engagement (2)
- exploratory programming (2)
- fault tolerance (2)
- federated voting (2)
- formal semantics (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gender (2)
- geovisualization (2)
- graph constraints (2)
- hashrate (2)
- human computer interaction (2)
- identity (2)
- identity theory (2)
- image stylization (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- individual effects (2)
- informatics education (2)
- information extraction (2)
- intelligente Verträge (2)
- inter-chain (2)
- interactive technologies (2)
- intrusion detection (2)
- job shop scheduling (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- key competencies (2)
- knowledge management (2)
- knowledge representation and nonmonotonic reasoning (2)
- kontinuierliche Integration (2)
- law (2)
- learning (2)
- lebenslanges Lernen (2)
- ledger assets (2)
- lifelong learning (2)
- live programming (2)
- liveness (2)
- logic programming (2)
- longitudinal (2)
- maschinelles Sehen (2)
- memory (2)
- merged mining (2)
- merkle root (2)
- method comparison (2)
- micropayment (2)
- micropayment channels (2)
- miner (2)
- mining (2)
- mining hardware (2)
- minting (2)
- missing data (2)
- mobile (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- modeling (2)
- modelling (2)
- monitoring (2)
- navigation (2)
- networks (2)
- neural networks (2)
- news media (2)
- nonce (2)
- off-chain transaction (2)
- oracles (2)
- parallel processing (2)
- peer-to-peer network (2)
- pegged sidechains (2)
- perception of robots (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- production planning and control (2)
- propositional satisfiability (2)
- quorum slices (2)
- real-time systems (2)
- relevance (2)
- representation learning (2)
- rootstock (2)
- runtime models (2)
- scalability of blockchain (2)
- scarce tokens (2)
- schema discovery (2)
- search (2)
- selection (2)
- self-driving (2)
- service-oriented systems (2)
- sidechain (2)
- smalltalk (2)
- societal effects (2)
- software development (2)
- solver (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- synchronization (2)
- systematic literature review (2)
- systems of systems (2)
- systems software (2)
- taxonomy (2)
- teacher training (2)
- teamwork (2)
- technology (2)
- testing (2)
- text based classification methods (2)
- textures (2)
- theorem (2)
- tiefes Lernen (2)
- timed automata (2)
- topics (2)
- transaction (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- usability (2)
- user interaction (2)
- user-generated content (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtual 3D city models (2)
- virtuelle 3D-Stadtmodelle (2)
- vocational training (2)
- wearables (2)
- workflow patterns (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- (FPGA) (1)
- 0-day (1)
- 21st century skills (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D computer graphics (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3DCityDB (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ABRACADABRA (1)
- ADFS (1)
- AMNET (1)
- APT (1)
- APX-hardness (1)
- ARCS Modell (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Achievement (1)
- Ackerschmalwand (1)
- Active Directory Federation Services (1)
- Active Evaluation (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Adaption (1)
- Adaptive hypermedia (1)
- Advanced Persistent Threats (1)
- Advanced Video Codec (AVC) (1)
- Adversarial Learning (1)
- Agency (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Andere Fachrichtungen (1)
- Android Security (1)
- Anerkennung (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Anfragesprache (1)
- Angewandte Spieltheorie (1)
- Angriffe (1)
- Angriffserkennung (1)
- Animal building (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Anrechnung (1)
- Antwortmengen Programmierung (1)
- Anwendungsvirtualisierung (1)
- Application (1)
- Application Server (1)
- Applied Game Theory (1)
- Apriori (1)
- Arabidopsis thaliana (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Arduino (1)
- Argument Mining (1)
- Artem Erkomaishvili (1)
- Artificial neural networks (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributsicherung (1)
- Audience Response Systeme (1)
- Aufzählung (1)
- Augmented Reality (1)
- Augmented and virtual reality (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Austria (1)
- Authentication (1)
- Authorization (1)
- Autismus (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- Autorisierung (1)
- BCH (1)
- BCI (1)
- BSS (1)
- Bachelor (1)
- Bachelorstudierende der Informatik (1)
- Bachelorstudium (1)
- Bahnwesen (1)
- Bank (1)
- Barrierefreiheit (1)
- Basic Service (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bean (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Benutzerinteraktion (1)
- Benutzeroberfläche (1)
- Berufsausbildung (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Betrachtungsebenen (1)
- Beweisaufgaben (1)
- Beweistheorie (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioacoustics (1)
- Bioakustik (1)
- Biocomputing (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Biometrie (1)
- Bisimulation (1)
- Blended learning (1)
- Blockheizkraftwerke (1)
- Bloom’s Taxonomy (1)
- Body schema (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Brownian motion with discontinuous drift (1)
- Business Process Models (1)
- Business process modeling (1)
- Bystander (1)
- C++ tool (1)
- C-Test (1)
- CCS Concepts (1)
- CEP (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Cactus (1)
- Calibration (1)
- Canvas (1)
- Capability approach (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Case management (1)
- Challenges (1)
- Change Management (1)
- Choreographien (1)
- Citymodel (1)
- Clause Learning (1)
- Clinical predictive modeling (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clustering (1)
- Coccinelle (1)
- Codeverständnis (1)
- Codierung (1)
- Cognitive Skills (1)
- Cographs (1)
- Coherent partition (1)
- Common Spatial Pattern (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Complementary Circuits (1)
- Complexity (1)
- Compliance (1)
- Compliance checking (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational Thinking (1)
- Computational photography (1)
- Computer Science in Context (1)
- Computer crime (1)
- Computergestützes Training (1)
- Conceptual (1)
- Conceptual modeling (1)
- Condition number (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Constraint (1)
- Constraint-Programmierung (1)
- Constraints (1)
- Constructive solid geometry (1)
- Contest (1)
- Context-oriented Programming (1)
- Contextualisation (1)
- Contracts (1)
- Contradictions (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Convolution (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covariate Shift (1)
- Covid (1)
- Creative (1)
- Crime mapping (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Customer ownership (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DBMS (1)
- DDoS (1)
- DNA (1)
- DNA computing (1)
- DNS (1)
- Data Analysis (1)
- Data Dependency (1)
- Data Literacy (1)
- Data Management (1)
- Data Quality (1)
- Data Science (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data integration (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-centric (1)
- Database (1)
- Database Cost Model (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenbankoptimierung (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datensicht (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decision support (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Degenerationsprozesse (1)
- Dekubitus (1)
- Delphine (1)
- Delta preservation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Dependency discovery (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Developmental robotics (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Didaktische Konzepte (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Engineering (1)
- Digital Game Based Learning (1)
- Digital Revolution (1)
- Digital World (1)
- Digital image analysis (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Discrimination Networks (1)
- Diskussionskultur (1)
- Dispositional learning analytics (1)
- Distanzlehre (1)
- Distributed Computing (1)
- Distributed computing (1)
- Distributed programming (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diversität (1)
- Dolphins (1)
- Domänenspezifische Modellierung (1)
- Dreidimensionale Computergraphik (1)
- Dubletten (1)
- Duplicate Detection (1)
- Durchlässigkeit (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamic assessment (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- EHR (1)
- EPA (1)
- Early Literacy (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Educational Standards (1)
- Educational software (1)
- Effizienz (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Eisenbahnnetz (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Embedded Systems (1)
- Emerging Topics in Digital Government (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Empirische Untersuchung (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Energiesparen (1)
- Enterprise Search (1)
- Entity resolution (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurf (1)
- Entwurfsmuster (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Enumeration algorithm (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfolgsmessung (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Error-Detection Circuits (1)
- Erweiterte Realität (1)
- Escherichia-coli (1)
- Estimation-of-distribution algorithm (1)
- Ethics (1)
- Euclid’s algorithm (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Execution Semantics (1)
- Explorative Datenanalyse (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FIDO (1)
- FMC-QE (1)
- FOSS (1)
- FRP (1)
- Facebook (1)
- Fachinformatik (1)
- Fachinformatiker (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlerbeseitigung (1)
- Fehlererkennung (1)
- Fehlerinjektion (1)
- Fehlerkorrektur (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlvorstellung (1)
- Fernerkundung (1)
- Fertigkeiten (1)
- Fertigung (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fintech (1)
- Fitness-distance correlation (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formal modelling (1)
- Formale Sprachen und Automaten (1)
- Formative assessment (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Forschung (1)
- Fredholm complexes (1)
- Function (1)
- Functional Lenses (1)
- Functional dependencies (1)
- Fundamental Ideas (1)
- Fundamental Modeling Concepts (1)
- Fußgängernavigation (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Game-based learning (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gender (1)
- Gene expression (1)
- General subject “Information” (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- German schools (1)
- Geschäftsanwendungen (1)
- Geschäftsmodell (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatikalische Inferenz (1)
- Grammatiksysteme (1)
- Graph algorithm (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graph-basiertes Ranking (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphensuche (1)
- Graphfärbung (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Grounded Theory (1)
- Gruppierung von Prozessinstanzen (1)
- H.264 (1)
- HDI (1)
- HEI (1)
- HENSHIN (1)
- HPI Forschung (1)
- HPI research (1)
- Hardware accelerator (1)
- Hardware-Software-Co-Design (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Datenmanagement (1)
- Hauptspeicher Technologie (1)
- Helmholtz problem (1)
- Herodotos (1)
- Heterogenität (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- HiGHmed (1)
- High-Level Synthesis (1)
- Histograms (1)
- History of pattern occurrences (1)
- Hochschule (1)
- Hochschulkurse (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Human (1)
- Human-robot interaction (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- IBM 360 (1)
- ICT Competence (1)
- ICT curriculum (1)
- ICT skills (1)
- IDS (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identity Management (1)
- Identity management systems (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Imperative calculi (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- Improving classroom (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Industries (1)
- Industry 4.0 (1)
- Inference (1)
- Infinite State (1)
- Informatik B. Sc. (1)
- Informatik für alle (1)
- Informatik-Studiengänge (1)
- Informatiksystem (1)
- Informatikunterricht (1)
- Informatikvoraussetzungen (1)
- Information Ethics (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationskompetenz (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Informatische Kompetenzen (1)
- Inhalte (1)
- Inhaltsanalyse (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Inquiry-based Learning (1)
- Instagram (1)
- Insurance industry (1)
- Integration (1)
- Interactive Rendering (1)
- Interactive system (1)
- Interaktionsmodel (1)
- Interaktionsmodellierung (1)
- Interaktionstechniken (1)
- Interaktives Rendering (1)
- Interaktives System (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet applications (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Interpretability (1)
- Interpreter (1)
- Intersectionality (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Intuition (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Inverted Classroom (1)
- JCop (1)
- Java 2 Enterprise Edition (1)
- Java Security Framework (1)
- Java Virtual Machine (1)
- KI (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernel (1)
- Kernmethoden (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kollaboration (1)
- Kommunikation (1)
- Kompetenzentwicklung (1)
- Kompetenzerwerb (1)
- Kompetenzmessung (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontext (1)
- Konzeptionell (1)
- Kreativität (1)
- Kryptographie (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LMS (1)
- LOD (1)
- LSTM (1)
- Landmarken (1)
- Large networks (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learners (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lebendigkeit (1)
- Lebenslanges Lernen (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehr- und Lernformate (1)
- Lehramtsstudium (1)
- Lehre (1)
- Lehrer (1)
- Lehrer*innenbildung (1)
- Lehrevaluation (1)
- Leistungsfähigkeit (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- Lern-App (1)
- Lernerfolg (1)
- Lernmotivation (1)
- Lernsoftware (1)
- Lernzentrum (1)
- LiDAR (1)
- Licenses (1)
- Liguistisch (1)
- Lindenmayer systems (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Liveness (1)
- Logarithm (1)
- Logikkalkül (1)
- Logiksynthese (1)
- Loss (1)
- Low Latency (1)
- Lower Bounds (1)
- Lower Secondary Level (1)
- Lösungsraum (1)
- Lückentext (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Marketing (1)
- Marktübersicht (1)
- Maschinen (1)
- Massive Open Online Courses (1)
- Matrizen-Eigenwertaufgabe (1)
- Matroids (1)
- Media in education (1)
- Megamodel (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamodell (1)
- Migration (1)
- Mindset (1)
- Minimal hitting set (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile learning (1)
- Mobile-Mapping (1)
- Mobiles Lernen (1)
- Mobilgeräte (1)
- Model Based Engineering (1)
- Model Checking (1)
- Model Consistency (1)
- Model Driven Architecture (1)
- Model Execution (1)
- Model Management (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellbasiert (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Architektur (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Monitoring (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multimodal behavior (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Music Technology (1)
- Muster (1)
- Musterabgleich (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NUI (1)
- Nash Equilibrium (1)
- Natural Science Education (1)
- Natural ventilation (1)
- Nebenläufigkeit (1)
- Nephrology (1)
- Nested Graph Conditions (1)
- Nested graph conditions (1)
- Network Creation Game (1)
- Network clustering (1)
- Netzneutralität (1)
- Netzwerk (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- New On-Line Error-Detection Method (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- NoSQL (1)
- Non-photorealistic Rendering (1)
- Norway (1)
- Novice programmers (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- O (1)
- OAuth (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Omega (1)
- Online Learning Environments (1)
- Onlinekurse (1)
- Onlinelehre (1)
- Ontologies (1)
- Ontology (1)
- Open Source (1)
- Open source (1)
- OpenID Connect (1)
- OpenOLAT (1)
- Opinion mining (1)
- Optimierungen (1)
- Optimierungsproblem (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Organisationsveränderung (1)
- Overlapping community detection (1)
- Owner-Retained Access Control (ORAC) (1)
- PAVM (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Parallele Datenverarbeitung (1)
- Paralleles Rechnen (1)
- Parallelization (1)
- Parallelized algorithm (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Pattern Recognition (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Peer-Review (1)
- Peer-to-Peer-Netz ; GRID computing ; Zuverlässigkeit ; Web Services ; Betriebsmittelverwaltung ; Migration (1)
- Performance (1)
- Performance Prediction (1)
- Peripersonal space (1)
- Personal Data (1)
- Personas (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Physical Science (1)
- Plant identification (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- Point-based rendering (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polymerase Chain Reaction Experiment (1)
- Popular matching (1)
- Posenabschätzung (1)
- PostGIS (1)
- Pre-RS Traceability (1)
- Prediction Game (1)
- Predictive Models (1)
- Preprocessing (1)
- Primary informatics (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy Protection (1)
- Probabilistische Modelle (1)
- Problem solving (1)
- Problem solving strategies (1)
- Probleme in der Studie (1)
- Problemlösen (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Execution (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Prognosen (1)
- Programmierabstraktionen (1)
- Programmierausbildung (1)
- Programmiererlebnis (1)
- Programmierkonzepte (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Programming environments for children (1)
- Programming learning (1)
- Projekte (1)
- Prolog (1)
- Proof Theory (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Protocols (1)
- Prototyping (1)
- Prozess Verbesserung (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodell (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Python (1)
- Quanten-Computing (1)
- Quantenkryptographie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- RT_PREEMPT patch (1)
- RT_PREEMPT-Patch (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Re-Engineering (1)
- Real-Time Rendering (1)
- Realzeitsysteme (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Reconfigurable (1)
- Region of Interest (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Relational data (1)
- Rendering (1)
- Reparatur (1)
- Representationlernen (1)
- Reproducible benchmarking (1)
- Research Projects (1)
- Resource Allocation (1)
- Resource Management (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Reversibility (1)
- Robot learning (1)
- Robot personality (1)
- Ruby (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SCED (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security (1)
- SOA Security Pattern (1)
- SOA Sicherheit (1)
- SPARQL (1)
- SSO (1)
- STEM (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Savanne (1)
- Scalability (1)
- Scale-invariant feature transform (SIFT) (1)
- Scene graph systems (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schlüsselkompetenzen (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Scientific understanding of Information (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektion (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Checking Circuits (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantik Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Seminarkonzept (1)
- Sensors (1)
- Sequential anomaly (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service Provider (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systeme (1)
- Shader (1)
- Sharing (1)
- Sichere Digitale Identitäten (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signal processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simulation (1)
- Simulations (1)
- Simultane Diagonalisierung (1)
- Single Sign On (1)
- Single Trial Analysis (1)
- Single event upsets (1)
- Single-Sign-On (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skelettberechnung (1)
- Skript-Entwicklungsumgebungen (1)
- Skriptsprachen (1)
- Small Private Online Courses (1)
- Smart cities (1)
- SoaML (1)
- Social impact (1)
- Sociotechnical Design (1)
- Software (1)
- Software architecture (1)
- Software-Evolution (1)
- Software-Testen (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Sozialen Medien (1)
- Spaltenlayout (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Specification (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- Stadtmodell (1)
- Stance Detection (1)
- Standardisierung (1)
- Standards (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistikprogramm R (1)
- Statistische Tests (1)
- Stilisierung (1)
- Strategie (1)
- Structuring (1)
- Strukturierung (1)
- Strukturverbesserung (1)
- Student Engagement (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studienabbruch (1)
- Studienanfänger*innen (1)
- Studiendauer (1)
- Studieneingangsphase (1)
- Studiengestaltung (1)
- Studiengänge (1)
- Studienverläufe (1)
- Studierendenperformance (1)
- Studium (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suche (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System of Systems (1)
- System structure (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systementwurf (1)
- Systems of Systems (1)
- Systems of parallel communicating (1)
- Szenengraph (1)
- TPTP (1)
- Tableaumethode (1)
- Tasks (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Terminology (1)
- Test (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Tests (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Sharing Economy (1)
- Theoretical Foundations (1)
- Theoretische Vorlesungen (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajectories (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktionen (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transversal hypergraph (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Type and effect systems (1)
- Umfrage (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Untere Schranken (1)
- Unterricht mit digitalen Medien (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- VGG16 (1)
- VIL (1)
- VM Integration (1)
- VR (1)
- VUCA-World (1)
- Validation (1)
- Value network (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machines (1)
- Virtual machines (1)
- Virtuelle Maschine (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Vocabulary (1)
- Vocational Education (1)
- Vorhersagemodelle (1)
- Vorkenntnisse (1)
- Vorwissen (1)
- W[3]-Completeness (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Wartung von Lehrveranstaltungen (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Weighted clustering coefficient (1)
- Weiterbildung (1)
- Well-structuredness (1)
- Werbung (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- WhatsApp (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissenschaftliches Arbeiten (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Women and IT (1)
- Workflow (1)
- Wüstenbildung (1)
- X-ray imaging (1)
- XM (1)
- Young People (1)
- ZQSA (1)
- ZQSAT (1)
- Zebris (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zoom fatigue (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- acceptability (1)
- access control (1)
- action and change (1)
- action language (1)
- activity instance state propagation (1)
- acyclic preferences (1)
- ad hoc learning (1)
- ad hoc messaging network (1)
- adaptiv (1)
- addiction care (1)
- adolescent (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- agile government (1)
- agility (1)
- airbnb (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- analog-to-digital conversion (1)
- analogical thinking (1)
- analysis (1)
- animated PCA (1)
- animierte PCA (1)
- anisotropic Kuwahara filter (1)
- annotation (1)
- anomalies (1)
- answer (1)
- answer set (1)
- app (1)
- application virtualization (1)
- approximate joint diagonalization (1)
- approximation (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture recovery (1)
- archive analysis (1)
- argument mining (1)
- argumentation research (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- asset management (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- attacks (1)
- augmented reality (1)
- ausführbare Semantiken (1)
- automata (1)
- automated planning (1)
- automated theorem proving (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- automotive electronics (1)
- autonomous (1)
- back-in-time (1)
- balance analysis (1)
- bank (1)
- basic cloud storage services (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benchmark (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- betriebliche Weiterbildungspraxis (1)
- bidirectional optimality theory (1)
- bild (1)
- bildbasiertes Rendering (1)
- binary representation (1)
- binary search (1)
- bio-computing (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- biomarker detection (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- blind source separation (1)
- bottom-up (1)
- bounded backward model checking (1)
- bpm (1)
- brand ambassadors (1)
- brand personality (1)
- bug tracking (1)
- building models (1)
- built-in predicates (1)
- business informatics (1)
- business models (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- business process modeling (1)
- bystander (1)
- cancer therapy (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- center dot Computing (1)
- change detection (1)
- change management (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- classes of logic programs (1)
- classifier calibration (1)
- classroom language (1)
- clause elimination (1)
- clause learning (1)
- cleansing (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster-analysis (1)
- code generation (1)
- coding and information theory (1)
- cogeneration units (1)
- cognition (1)
- cognitive load (1)
- cognitive load theory (1)
- cognitive modifiability (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- columnar databases (1)
- combined task and motion planning (1)
- communication (1)
- community (1)
- competence development (1)
- competency (1)
- complex optimization (1)
- complexity dichotomy (1)
- composite service (1)
- compositional analysis (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computed tomography (1)
- computer science education (CSE) (1)
- computer science teachers (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- computing science education (1)
- concept of algorithm (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditions (1)
- confidentiality (1)
- conflicts and dependencies in (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constraint (1)
- constraint programming (1)
- constraints (1)
- constructionism (1)
- consumer behavior (1)
- context awareness (1)
- continuous testing (1)
- contract (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- coronavirus (1)
- corporate nomadism (1)
- corporate takeovers (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- creativity (1)
- crochet (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cryptography (1)
- cryptology (1)
- cs4fn (1)
- cscw (1)
- cultural heritage (1)
- cumulative culture (1)
- curriculum theory (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- cyberwar (1)
- data assimilation (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data in business processes (1)
- data migration (1)
- data modeling (1)
- data models (1)
- data objects (1)
- data pipeline (1)
- data requirements (1)
- data science (1)
- data security (1)
- data set (1)
- data sharing (1)
- data states (1)
- data structures and information theory (1)
- data synthesis (1)
- data transformation (1)
- data view (1)
- data visualization (1)
- data-driven (1)
- data-driven artifacts (1)
- database (1)
- database optimization (1)
- database technology (1)
- database tuning (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deductive databases (1)
- deduplication (1)
- deep Gaussian processes (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dental caries classification (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depressive symptoms (1)
- desertification (1)
- design (1)
- design research (1)
- design space exploration (1)
- design-science research (1)
- determinism (1)
- deterministic properties (1)
- deurema modeling language (1)
- development tools (1)
- developmental systems (1)
- deviant behaviors (1)
- dezentrale Identitäten (1)
- diagnosis (1)
- didaktisches Konzept (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- diffusion (1)
- diffusion of innovations (1)
- digital activism (1)
- digital interventions (1)
- digital nomadism (1)
- digital nudging (1)
- digital picture archive (1)
- digital platform openness (1)
- digital strategy (1)
- digital unterstützter Unterricht (1)
- digital workplace transformation (1)
- digital world (1)
- digitale Hochschullehre (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed ledger technology (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- divide and conquer (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- drift theory (1)
- dropout (1)
- duale IT-Ausbildung (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Assessment (1)
- e-learning platform (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- educational timetabling (1)
- edutainment (1)
- efficiency (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- elections (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- embedded-systems (1)
- emote work (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- emotional design (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- energy savings (1)
- engaged computing (1)
- engine (1)
- engineering (1)
- enterprise search (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- enumeration (1)
- environments (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- error correction (1)
- error detection (1)
- erzeugende gegnerische Netzwerke (1)
- ethics (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolutionary computation (1)
- evolving systems (1)
- exact simulation methods (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- explicit knowledge (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- exponentiation (1)
- expression (1)
- extend (1)
- extensions of logic programs (1)
- external knowledge bases (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- failure model (1)
- fashion (1)
- fatty acid amide hydrolase (1)
- fault injection (1)
- federated industrial platform ecosystems (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- field-programmable gate array (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- forecasts (1)
- forensics (1)
- formal framework (1)
- formal languages (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- forschendes Lernen (1)
- forschungsorientiertes Lernen (1)
- fortschrittliche Angriffe (1)
- forward / backward chaining (1)
- freie Daten (1)
- freie Software (1)
- fsQCA (1)
- fun (1)
- function symbols (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- functions (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- fächerverbindend (1)
- gait analysis algorithm (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gender-specific medicine (1)
- gene (1)
- gene expression matrix (1)
- gene selection (1)
- general (1)
- general education in computer science (1)
- general secondary education (1)
- generalization (1)
- generalized discrimination networks (1)
- generalized logic programs (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- global constraints (1)
- global model management (1)
- globale Constraints (1)
- globales Modellmanagement (1)
- grammar inference (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph languages (1)
- graph mining (1)
- graph pattern matching (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-based ranking (1)
- graph-search (1)
- graph-transformations (1)
- hardware accelerator (1)
- hardware architecture (1)
- hardware-software-codesign (1)
- hate speech detection (1)
- health care (1)
- health data (1)
- healthcare (1)
- heterogeneity (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- high school (1)
- higher (1)
- history-aware runtime models (1)
- holistic (1)
- home office (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human-centered (1)
- human–computer interaction (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity broker (1)
- image (1)
- image captioning (1)
- image data analysis (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immersion (1)
- immutable values (1)
- in-memory (1)
- in-memory data management (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individuals (1)
- individuelle Lernwege (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inertial measurement unit (1)
- inference (1)
- influence mapping (1)
- informal and formal learning (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information diffusion (1)
- informatische Allgemeinbildung (1)
- informatische Grundkompetenzen (1)
- infrastructure (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- instruction (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- interaction (1)
- interaction modeling (1)
- interaction techniques (1)
- interactive course (1)
- interactive media (1)
- interactive simulation (1)
- interactive workshop (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international comparison (1)
- international human rights (1)
- international humanitarian law (1)
- international study (1)
- interpretable machine learning (1)
- interpretative research (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intransitivity (1)
- intuition (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invasive aspects (1)
- invention (1)
- invention mechanism (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iteration method (1)
- iterative regularization (1)
- job-shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key discovery (1)
- kinaesthetic teaching (1)
- klinisch-praktischer Unterricht (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge engineering (1)
- knowledge management system (1)
- knowledge representation (1)
- knowledge transfer (1)
- knowledge work (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- labour union education (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law and technology (1)
- leadership (1)
- leanCoP (1)
- learner characteristics (1)
- learning factory (1)
- lebenszentriert (1)
- left recursion (1)
- lesson (1)
- level-replacement systems (1)
- life-centered (1)
- linear code (1)
- linear programming problem (1)
- linearer Code (1)
- linguistic (1)
- link discovery (1)
- linked data (1)
- literature review (1)
- live migration (1)
- lively kernel (1)
- load balancing (1)
- localization (1)
- location-based (1)
- logic (1)
- logic programming methodology and applications (1)
- logic synthesis (1)
- logical calculus (1)
- logical signaling networks (1)
- logische Ergänzung (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- long-term interaction (1)
- loop formulas (1)
- machine (1)
- machine learning algorithms (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- mandatory computer science foundations (1)
- manipulation planning (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- market study (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated conversation (1)
- mediated learning experience (1)
- medical (1)
- medical documentation (1)
- medical malpractice (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- meta model (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- metaverse (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- migration (1)
- misconception (1)
- misconceptions (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile application (1)
- mobile applications (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- mobiles Lernen (1)
- model generation (1)
- model repair (1)
- model-based (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven architecture (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modular counting (1)
- modularity (1)
- molecular network (1)
- molecular networks (1)
- molecular tumor board (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- mood (1)
- morphic (1)
- morphological analysis (1)
- multi core data processing (1)
- multi factor authentication (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-valued logic (1)
- multi-version models (1)
- multi-family residential buildings (1)
- multidisziplinäre Teams (1)
- multimedia learning (1)
- multimodal representations (1)
- multiuser (1)
- musical scales (1)
- musikalische Tonleitern (1)
- mutli-task learning (1)
- mutual gaze (1)
- mutual information (1)
- named entity mining (1)
- narratives (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network (1)
- network protocols (1)
- networks-on-chip (1)
- neue Online-Fehlererkennungsmethode (1)
- neural (1)
- new technologies (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- nichtlineare Projektionen (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- nonlinear projections (1)
- notation (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- omega (1)
- on-chip (1)
- online assistance (1)
- online course (1)
- online course creation (1)
- online course design (1)
- online learning (1)
- online photographs (1)
- online-learning (1)
- open innovation (1)
- open learning (1)
- open science (1)
- open science practices in information systems research (1)
- open source (1)
- open source software (1)
- operating system (1)
- optical character recognition (1)
- optimal transport (1)
- optimizations (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- paraconsistency (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel rewriting (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parameter (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- patent (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedagogy (1)
- pedestrian navigation (1)
- perception (1)
- perception differences (1)
- performance (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal (1)
- personal response systems (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- perspective (1)
- persönliche Informationen (1)
- pervasive learning (1)
- petri net (1)
- philosophical foundation of informatics pedagogy (1)
- phone (1)
- physical computing tools (1)
- placement (1)
- planning (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- power relations (1)
- power-law (1)
- pre-primary level (1)
- predictive models (1)
- preference handling (1)
- preferences (1)
- prefetching (1)
- preprocessing (1)
- presentation (1)
- primary education (1)
- primary healthcare (1)
- primary level (1)
- primary school (1)
- prime pair (1)
- primer pair design (1)
- prior knowledge (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- problem-solving (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process improvement (1)
- process instance (1)
- process instance grouping (1)
- process model (1)
- process model search (1)
- process modeling languages (1)
- process modelling (1)
- process models (1)
- process refinement (1)
- process scheduling (1)
- processes (1)
- processing (1)
- processor hardware (1)
- professional development (1)
- professors (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming abstraction (1)
- programming experience (1)
- programming in context (1)
- programming language (1)
- programming skills (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- proving (1)
- psychotherapy (1)
- public administration (1)
- public cloud storage services (1)
- public dataset (1)
- public sector organizations (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantified logics (1)
- quantile normalization (1)
- quantum computing (1)
- quantum cryptography (1)
- query matching (1)
- query optimization (1)
- querying (1)
- railway network (1)
- railways (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- record linkage (1)
- recursive tuning (1)
- reflection (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- relationship quality (1)
- reliability (1)
- reliability assessment (1)
- remodularization (1)
- remote collaboration (1)
- remote sensing (1)
- remote-first (1)
- repair (1)
- reputation management (1)
- requirements engineering (1)
- research data management (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- restoration (1)
- restricted parallelism (1)
- reusable aspects (1)
- reverse engineering (1)
- reversible reaction (1)
- review (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- robustness (1)
- romantic (1)
- romantic relationship (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiabilitiy solving (1)
- savanna (1)
- scheduling (1)
- school (1)
- schwach überwachtes maschinelles Lernen (1)
- science (1)
- scm (1)
- screening tools (1)
- scripting environments (1)
- scripting languages (1)
- scrollytelling (1)
- search plan generation (1)
- secondary computer science education (1)
- secondary education (1)
- security analytics (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- selbstprüfende Schaltungen (1)
- selbstüberwachtes Lernen (1)
- self-adaptive multiprocessing system (1)
- self-adaptive software (1)
- self-awareness (1)
- self-disclosure (1)
- self-efficacy (1)
- self-healing (1)
- self-supervised learning (1)
- self-view (1)
- semantic analysis (1)
- semantic classification (1)
- semantic web services (1)
- semantics (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- semantisches Netz (1)
- sentiment (1)
- sentiment analysis (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sharing economy (1)
- sign language (1)
- signal processing (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- similarity learning (1)
- similarity measures (1)
- single event upset (1)
- single-case experimental design (1)
- situated learning (1)
- situational awareness (1)
- skeletonization (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- small talk (1)
- smartphone (1)
- smoother (1)
- social attraction (1)
- social media analysis (1)
- social networking (1)
- social networking sites (1)
- sociotechnical (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software selection (1)
- software testing (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- solar particle event (1)
- sorting (1)
- space missions (1)
- spaltenorientierte Datenbanken (1)
- spatio-temporal (1)
- spatio-temporal data management (1)
- spatio-temporal sensor data (1)
- specific prime pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spread correction (1)
- spreadsheets (1)
- squeak (1)
- stable matching (1)
- stable model semantics (1)
- stakeholder analysis (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- statistics program R (1)
- stochastic process (1)
- stratification (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- studentische Forschung (1)
- students’ conceptions (1)
- students’ knowledge (1)
- study (1)
- study problems (1)
- style transfer (1)
- stylization (1)
- super stable matching (1)
- survey mode (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tacit knowledge (1)
- tangible media (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teachers (1)
- teaching (1)
- teaching informatics in general education (1)
- teaching material (1)
- technical notes and rapid communications (1)
- technische Rahmenbedingungen (1)
- technologies (1)
- technostress (1)
- tele-lab (1)
- tele-teaching (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terminology (1)
- terrain models (1)
- test (1)
- test case prioritization (1)
- test items (1)
- test results (1)
- test-driven fault navigation (1)
- text classification (1)
- text mining (1)
- the bright and dark side of social media in the marginalized contexts (1)
- theoretische Grundlagen (1)
- theory (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- tool building (1)
- top-down (1)
- tort law (1)
- touch input (1)
- tptp (1)
- tracing (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- training (1)
- trajectories (1)
- trajectory data (1)
- transduction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust model (1)
- tuple spaces (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- uncanny valley (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unification (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- unsupervised learning (1)
- unsupervised methods (1)
- user interfaces (1)
- user-centred (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- various applications (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwednungsbedingungen (1)
- verschachtelte Anwendungsbedingungen (1)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- video analysis (1)
- video metadata (1)
- videoconferencing (1)
- view maintenance (1)
- views (1)
- virtual (1)
- virtual 3D city model (1)
- virtual collaboration (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtual learning environments (1)
- virtual machine (1)
- virtual mobility (1)
- virtual teams (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle Realität (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vulnerabilities (1)
- weak supervision (1)
- weakly (1)
- web application (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- weight (1)
- well-being (1)
- wissenschaftliches Arbeiten (1)
- wissenschaftliches Schreiben (1)
- word order freezing (1)
- word sense disambiguation (1)
- workload prediction (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
- ‘unplugged’ computing (1)
Institute
- Institut für Informatik und Computational Science (271)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (214)
- Hasso-Plattner-Institut für Digital Engineering GmbH (136)
- Extern (65)
- Fachgruppe Betriebswirtschaftslehre (40)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (19)
- Institut für Mathematik (16)
- Bürgerliches Recht (12)
- Digital Engineering Fakultät (8)
A core operator of evolutionary algorithms (EAs) is mutation. Recently, much attention has been devoted to the study of mutation operators with dynamic and non-uniform mutation rates. Following up on this area of work, we propose a new mutation operator and analyze its performance on the (1 + 1) Evolutionary Algorithm (EA). Our analyses show that this mutation operator competes with pre-existing ones when used by the (1 + 1) EA on classes of problems for which results on the other mutation operators are available. We show that the (1 + 1) EA using our mutation operator finds a (1/3)-approximation ratio on any non-negative submodular function in polynomial time. We also consider the problem of maximizing a symmetric submodular function under a single matroid constraint and show that the (1 + 1) EA using our operator finds a (1/3)-approximation within polynomial time. This performance matches that of combinatorial local search algorithms specifically designed to solve these problems and outperforms them with constant probability. Finally, we evaluate the performance of the (1 + 1) EA using our operator experimentally by considering two applications: (a) the maximum directed cut problem on real-world graphs of different origins, with up to 6.6 million vertices and 56 million edges, and (b) the symmetric mutual information problem using a four-month air pollution data set. In comparison with uniform mutation and a recently proposed dynamic scheme, our operator comes out on top on these instances.
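The dynamic, non-uniform mutation rates discussed in this abstract can be illustrated with a minimal (1 + 1) EA sketch. The heavy-tailed rate below follows the general "fast GA" idea of sampling a mutation strength from a power-law distribution each iteration; it is only a stand-in for the operator proposed in the paper, and all function names, parameters, and the toy OneMax objective are illustrative.

```python
import random

def onemax(x):
    # Toy fitness: number of one-bits (stands in for any objective).
    return sum(x)

def heavy_tailed_rate(n, beta=1.5):
    # Sample a mutation strength k from a power-law distribution and
    # mutate with rate k/n; an example of a non-uniform mutation rate,
    # not the operator analyzed in the paper.
    ks = list(range(1, n // 2 + 1))
    weights = [k ** (-beta) for k in ks]
    k = random.choices(ks, weights=weights)[0]
    return k / n

def one_plus_one_ea(n, fitness, max_iters=10000, seed=0):
    # (1+1) EA: keep a single parent, mutate, accept if not worse.
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        p = heavy_tailed_rate(n)
        y = [bit ^ (random.random() < p) for bit in x]  # flip each bit w.p. p
        fy = fitness(y)
        if fy >= fx:  # elitist selection
            x, fx = y, fy
    return x, fx
```

On small OneMax instances this sketch reliably reaches the optimum; swapping in a different `fitness` shows how the same loop applies to other pseudo-Boolean objectives.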
The reconstruction of cone-beam computed tomography data using filtered back-projection algorithms unavoidably results in severe artefacts. We describe how the Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT) algorithm can be combined with a model of the artefacts for the reconstruction of such data. The implementation of DIRECTT results in reconstructed volumes of superior quality compared to the conventional algorithms.
I can see it in your eyes
(2021)
Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results disclose that aversion of gaze in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
Bitcoin is gaining traction as an alternative store of value. Its market capitalization transcends all other cryptocurrencies in the market. But its high monetary value also makes it an attractive target for cybercriminal actors. Hacking campaigns usually target an ecosystem's weakest points. In Bitcoin, the exchange platforms are one of them. Each exchange breach is a threat not only to direct victims, but to the credibility of Bitcoin's entire ecosystem. Based on an extensive analysis of 36 breaches of Bitcoin exchanges, we show the attack patterns used to exploit Bitcoin exchange platforms, using an industry standard for reporting intelligence on cyber security breaches. On this basis, we provide an overview of the most common attack vectors, showing that all except three hacks were possible due to relatively lax security. We show that while the security regimen of Bitcoin exchanges is subpar compared to other financial service providers, the use of stolen credentials, which does not require any hacking, is decreasing. We also show that the amount of BTC taken during a breach is decreasing, as is the number of exchanges that terminate after being breached. Furthermore, we show that the overall security posture has improved, but still has major flaws. To discover adversarial methods post-breach, we have analyzed two cases of BTC laundering. Through this analysis, we provide insight into how exchange platforms with lax cyber security further increase the intermediary risk they introduce into the Bitcoin ecosystem.
According to the personalization principle, addressing learners by means of a personalized compared to a nonpersonalized message can foster learning. Interestingly, though, a recent study found that the personalization principle can invert for aversive contents. The present study investigated whether the negative effect of a personalized message for an aversive content can be compensated when learners are in a happy mood. It was hypothesized that the negative effect of a personalized compared to a nonpersonalized message would only be observable for participants in a sad mood, while for participants in a happy mood a personalized message should be beneficial. A 2 x 2 between-subject design with mood (happy vs. sad) and personalization (personalized vs. nonpersonalized message) was used (N = 125 university students). Mood was experimentally varied prior to learning. Learning outcomes were measured by a retention and a transfer test. Results were essentially in line with the assumption: for participants in the sad mood condition, a negative effect of a personalized message was observable for retention and transfer. For participants in the happy mood condition, a positive effect of a personalized message was observable for retention, but no effect for transfer. Note that the manipulation check measure for the mood induction procedure did not detect differences between conditions; this may be due to a shortcoming of the measure used (as indicated by an additional evaluation study). The study emphasizes the importance of considering the inherent emotional content of a topic, such as its aversive nature, since the emotional content of a topic can be a boundary condition for design principles in multimedia learning. The study also highlights the complex interplay of externally induced and inherently arising emotions.
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a graph repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows restricting the generation of graph repairs to delta-preserving graph repairs, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes if and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach using a case study from the graph database domain.
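The flavor of state-based, least-changing repair can be conveyed with a toy example (this is a drastically simplified stand-in, not the satisfaction-tree machinery of the abstract): the hypothetical condition "every A-labelled node has an edge to some B-labelled node" is checked, and each violation is repaired by the minimal addition of a single edge. All labels and names are invented for illustration.

```python
def violations(nodes, edges):
    # nodes: name -> label; edges: set of (src, dst) pairs.
    # Return the A-labelled nodes lacking an edge to a B-labelled node.
    bad = []
    for n, label in nodes.items():
        if label == "A":
            if not any(s == n and nodes[d] == "B" for s, d in edges):
                bad.append(n)
    return bad

def repair(nodes, edges):
    # Least-changing state-based repair for this one condition:
    # add exactly one edge per violating node, touching nothing else.
    edges = set(edges)
    b_nodes = [n for n, label in nodes.items() if label == "B"]
    for n in violations(nodes, edges):
        if b_nodes:
            edges.add((n, b_nodes[0]))
    return edges
```

An incremental variant in the spirit of the paper would additionally track, per condition, which parts of the graph witness its satisfaction, so that only the delta of an update needs re-checking.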
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
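For readers unfamiliar with the algorithm analyzed above, the following is a minimal UMDA sketch on LeadingOnes with the usual 1/n frequency margins that limit genetic drift. The population sizes and iteration count are arbitrary illustrative choices, not the quasilinear parameters from the run time guarantee.

```python
import random

def leading_ones(x):
    # LeadingOnes benchmark: number of leading one-bits.
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def umda(n, fitness, lam=100, mu=50, iters=300, seed=1):
    # UMDA: maintain a frequency vector p, sample lam individuals,
    # select the mu best, and re-estimate p from the selected ones,
    # clamped to [1/n, 1 - 1/n] (the standard margins).
    random.seed(seed)
    p = [0.5] * n
    best = 0
    for _ in range(iters):
        pop = [[int(random.random() < p[i]) for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        best = max(best, fitness(pop[0]))
        selected = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return best
```

The ratio mu/lam plays the role of the selection rate in the bound above: lowering it sharpens selection and, per the result, reduces the number of iterations needed.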
Business processes are often specified in descriptive or normative models. Both types of models should adhere to internal and external regulations, such as company guidelines or laws. Employing compliance checking techniques, it is possible to verify process models against rules. While compliance checking traditionally focuses on well-structured processes, we address case management scenarios. In case management, knowledge workers drive multi-variant and adaptive processes. Our contribution is based on the fragment-based case management approach, which splits a process into a set of fragments. The fragments are synchronized through shared data but can otherwise be dynamically instantiated and executed. We formalize case models using Petri nets. We demonstrate the formalization for design-time and run-time compliance checking and present a proof-of-concept implementation. The application of the implemented compliance checking approach to a use case exemplifies its effectiveness while designing a case model. The empirical evaluation on a set of case models for measuring the performance of the approach shows that rules can often be checked in less than a second.
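The Petri-net formalization mentioned above can be illustrated with a toy net in which two fragments are synchronized through a shared data place, so that a "read" step is only enabled after a "write" step has produced a token. The class and net below are an illustrative sketch of the standard firing rule, not the authors' case-model encoding; all place and transition names are made up.

```python
class PetriNet:
    # Minimal place/transition net: marking maps places to token
    # counts; each transition is (input places, output places).
    def __init__(self, marking, transitions):
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, t):
        # A transition is enabled if every input place holds a token.
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t):
        # Firing consumes one token per input place and produces one
        # per output place (the standard firing rule).
        if not self.enabled(t):
            raise ValueError(f"{t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two fragments synchronized via a shared "data" place:
# 'write' must fire before 'read' becomes enabled.
net = PetriNet(
    marking={"start": 1},
    transitions={
        "write": (["start"], ["data"]),
        "read": (["data"], ["done"]),
    },
)
```

A compliance rule such as "B never happens before A" then reduces to a reachability question over the markings of the formalized case model.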
The MOOC-CEDIA Observatory
(2021)
In the last few years, a large number of Massive Open Online Courses (MOOCs) have been made available to the worldwide community, mainly by European and North American (i.e., United States) universities. Since their emergence, the adoption of these educational resources has been widely studied by several research groups and universities with the aim of understanding their evolution and impact on educational models over time. In the case of Latin America, data from the MOOC-UC Observatory (updated until 2018) shows that the adoption of these courses by universities in the region has been slow and heterogeneous. In the specific case of Ecuador, although some data is available, there is a lack of information regarding the construction, publication and/or adoption of such courses by universities in the country. Moreover, there are no up-to-date studies designed to identify and analyze the barriers and factors affecting the adoption of MOOCs in the country. The aim of this work is to present the MOOC-CEDIA Observatory, a web platform that offers interactive visualizations on the adoption of MOOCs in Ecuador. The main results of the study show that: (1) until 2020 there have been 99 MOOCs in Ecuador, (2) the domains of the MOOCs are mostly related to the applied, social, and natural sciences, with the humanities being the least covered, and (3) Open edX and Moodle are the most widely used platforms to deploy such courses. It is expected that the conclusions drawn from this analysis will allow the design of recommendations aimed at promoting the creation and use of quality MOOCs in Ecuador and help institutions chart the route for their adoption, both for internal use by their communities and by society in general.
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We will provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has left its infant state and its specialization is advancing, with a clear focus on patient usability.
Student teachers often struggle to keep track of everything that is happening in the classroom, and particularly to notice and respond when students cause disruptions. The complexity of the classroom environment is a potential contributing factor that has not been empirically tested. In this experimental study, we utilized a virtual reality (VR) classroom to examine whether classroom complexity affects the likelihood of student teachers noticing disruptions and how they react after noticing. Classroom complexity was operationalized as the number of disruptions and the existence of overlapping disruptions (multidimensionality) as well as the existence of parallel teaching tasks (simultaneity). Results showed that student teachers (n = 50) were less likely to notice the scripted disruptions, and also less likely to respond to the disruptions in a comprehensive and effortful manner, when facing greater complexity. These results may have implications for both teacher training and the design of VR for training or research purposes. This study contributes to the field in two ways: 1) it reveals how features of the classroom environment can affect student teachers' noticing of and reaction to disruptions; and 2) it extends the functionality of the VR environment, from a teacher training tool to a testbed of fundamental classroom processes that are difficult to manipulate in real life.
Intrinsic decomposition refers to the problem of estimating scene characteristics, such as albedo and shading, when one view or multiple views of a scene are provided. The inverse problem setting, where multiple unknowns are solved given a single known pixel-value, is highly under-constrained. When provided with correlating image and depth data, intrinsic scene decomposition can be facilitated using depth-based priors; such data is nowadays easy to acquire with high-end smartphones by utilizing their depth sensors. In this work, we present a system for intrinsic decomposition of RGB-D images on smartphones and the algorithmic as well as design choices therein. Unlike state-of-the-art methods that assume only diffuse reflectance, we consider both diffuse and specular pixels. For this purpose, we present a novel specularity extraction algorithm based on a multi-scale intensity decomposition and chroma inpainting. Subsequently, the diffuse component is further decomposed into albedo and shading components. We use an inertial proximal algorithm for non-convex optimization (iPiano) to ensure albedo sparsity. Our GPU-based visual processing is implemented on iOS via the Metal API and enables interactive performance on an iPhone 11 Pro. Further, a qualitative evaluation shows that we are able to obtain high-quality outputs. Furthermore, our proposed approach for specularity removal outperforms state-of-the-art approaches for real-world images, while our albedo and shading layer decomposition is faster than the prior work at a comparable output quality. A variety of applications, such as recoloring, retexturing, relighting, appearance editing, and stylization, are shown, each using the intrinsic layers obtained with our method and/or the corresponding depth data.
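The basic idea behind specular/diffuse separation can be sketched per pixel using the classic dichromatic observation that specular highlights are roughly achromatic. Under the crude extra assumption that the diffuse color is fully saturated (its minimum channel is zero), the channel minimum estimates the specular intensity. This is a simplified stand-in for intuition only, far from the multi-scale decomposition and chroma inpainting the paper proposes.

```python
def split_specular(pixel):
    # pixel: (r, g, b) floats in [0, 1].
    # Assumption (illustrative): the specular lobe adds the same
    # amount to every channel, and the underlying diffuse color has
    # a zero minimum channel. Then min(pixel) estimates the specular
    # intensity, and subtracting it per channel yields the diffuse part.
    s = min(pixel)
    diffuse = tuple(c - s for c in pixel)
    return diffuse, s
```

Real images violate the saturation assumption, which is precisely why robust methods operate at multiple scales and inpaint the chroma of highlight regions instead of relying on a per-pixel rule.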
Helping overcome distance, the use of videoconferencing tools has surged during the pandemic. To shed light on the consequences of videoconferencing at work, this study takes a granular look at the implications of the self-view feature for meeting outcomes. Building on self-awareness research and self-regulation theory, we argue that by heightening the state of self-awareness, self-view engagement depletes participants’ mental resources and thereby can undermine online meeting outcomes. Evaluation of our theoretical model on a sample of 179 employees reveals a nuanced picture. Self-view engagement while speaking and while listening is positively associated with self-awareness, which, in turn, is negatively associated with satisfaction with the meeting process, perceived productivity, and meeting enjoyment. The communication role proves critical: looking at self while listening to other attendees has a negative direct and indirect effect on meeting outcomes; however, looking at self while speaking produces equivocal effects.
Data encoding has been applied to database systems for decades as it mitigates bandwidth bottlenecks and reduces storage requirements. But even in the presence of these advantages, most in-memory database systems use data encoding only conservatively as the negative impact on runtime performance can be severe. Real-world systems with large parts being infrequently accessed and cost efficiency constraints in cloud environments require solutions that automatically and efficiently select encoding techniques, including heavy-weight compression. In this paper, we introduce workload-driven approaches to automatically determine memory budget-constrained encoding configurations using greedy heuristics and linear programming. We show for TPC-H, TPC-DS, and the Join Order Benchmark that optimized encoding configurations can reduce the main memory footprint significantly without a loss in runtime performance over state-of-the-art dictionary encoding. To yield robust selections, we extend the linear programming-based approach to incorporate query runtime constraints and mitigate unexpected performance regressions.
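The greedy selection idea can be sketched as follows; this is only an illustration of budget-constrained encoding selection, not the paper's actual heuristic, and the column names, sizes, and runtime costs are invented placeholders.

```python
def select_encodings(columns, budget):
    """Pick, per column, an encoding that fits the remaining memory budget,
    preferring the lowest estimated runtime cost.
    `columns` maps a column name to a list of (encoding, size, cost)."""
    config, used = {}, 0
    for name, options in columns.items():
        # Try candidate encodings ordered by runtime cost, then by size.
        for enc, size, cost in sorted(options, key=lambda o: (o[2], o[1])):
            if used + size <= budget:
                config[name] = enc
                used += size
                break
    return config, used

# Hypothetical per-column encoding catalog: (encoding, size in MB, cost factor).
catalog = {
    "l_orderkey": [("dictionary", 40, 1.0), ("frame-of-reference", 25, 1.1)],
    "l_comment":  [("dictionary", 90, 1.0), ("lz4", 30, 1.6)],
}
config, used = select_encodings(catalog, budget=70)
```

A linear-programming formulation, as used in the paper, would instead optimize over all columns jointly rather than committing greedily column by column.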
Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
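The region-identification phase can be illustrated with a simplified sketch: non-empty cells are grouped by a flood fill that tolerates a small gap of empty cells between members of the same region. The actual Mondrian pipeline renders files as images and applies a clustering algorithm; the cell coordinates below are invented.

```python
def find_regions(cells, gap=1):
    """Group non-empty cell coordinates (row, col) into bounding boxes,
    merging cells whose Chebyshev distance is at most gap + 1."""
    cells = set(cells)
    seen, regions = set(), []
    for start in cells:
        if start in seen:
            continue
        stack, component = [start], []
        seen.add(start)
        while stack:
            r, c = stack.pop()
            component.append((r, c))
            # Visit neighbors within the allowed empty-cell gap.
            for dr in range(-gap - 1, gap + 2):
                for dc in range(-gap - 1, gap + 2):
                    n = (r + dr, c + dc)
                    if n in cells and n not in seen:
                        seen.add(n)
                        stack.append(n)
        rows = [r for r, _ in component]
        cols = [c for _, c in component]
        regions.append((min(rows), min(cols), max(rows), max(cols)))
    return sorted(regions)

# Two 2x2 tables in one sheet, separated by two empty rows.
sheet = [(0, 0), (0, 1), (1, 0), (1, 1), (4, 0), (4, 1), (5, 0), (5, 1)]
```

With `gap=1`, the two-row gap keeps the tables apart, yielding two separate region bounding boxes.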
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
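The hybrid design can be sketched as a volatile hash index over an append-only persistent log; this only illustrates the index/storage split, not Viper's actual PMem-specific layout or access patterns, and all names are illustrative.

```python
class HybridKV:
    """Toy model of a hybrid KVS: a DRAM hash index maps keys to offsets
    in an append-only log standing in for PMem."""

    def __init__(self):
        self.log = []     # "persistent": written sequentially, append-only
        self.index = {}   # "volatile": random-access hash index

    def put(self, key, value):
        self.index[key] = len(self.log)  # remember the newest offset
        self.log.append((key, value))    # sequential append to storage

    def get(self, key):
        offset = self.index.get(key)
        return None if offset is None else self.log[offset][1]

    def recover(self):
        # After a crash the volatile index is lost; rebuild it by
        # scanning the persistent log (later entries win for updates).
        self.index = {k: i for i, (k, _) in enumerate(self.log)}

kv = HybridKV()
kv.put("a", 1)
kv.put("a", 2)       # an update appends; the index points at the newest copy
kv.index.clear()     # simulate losing DRAM state on a crash
kv.recover()
```

The split lets point lookups stay in fast random-access memory while all writes to the persistent medium remain sequential.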
Self-directed learning with online courses is gaining increasing acceptance in our society. Online courses allow learners to decide for themselves what they learn and when, and courses can be adapted and individualized to the users' learning progress through a variety of adaptations. On the one hand, a large target group exists for these learning offerings. On the other hand, creating, providing, maintaining, and supporting online courses is cost-intensive, so high-quality offerings often have to be fee-based for providers to at least break even. In this paper, we discuss an open, sustainable, data-driven, two-sided business model for monetizing certified online courses while providing them free of charge to every learner. At the core of the business model is the use of the behavioral data generated in the process, the possible derivation of personality traits and interests from these data, and their use in a commercial context. This method is already widely accepted in web search and is now transferred to the learning context. To examine which opportunities and challenges arise and which barriers must be overcome for the business model to work sustainably and ethically, two independent but synergistically connected business models are presented and discussed. In addition, the target group's acceptance of and expectations for the presented business model were examined in order to derive the key resources required in practice. The results of the study show that users fundamentally accept the business model. 10% of respondents would prefer to learn with virtual assistants instead of with tutors. Moreover, the majority of users are not aware that personality traits can be derived from user behavior.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would highly benefit from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
Crochet is a popular handcraft all over the world. While other techniques such as knitting or weaving have received technical support over the years through machines, crochet is still a purely manual craft. Not only the act of crochet itself is manual but also the process of creating instructions for new crochet patterns, which is barely supported by domain-specific digital solutions. This leads to unstructured and often also ambiguous and erroneous pattern instructions. In this report, we propose a concept to digitally represent crochet patterns. This format incorporates crochet techniques, which allows domain-specific support for crochet pattern designers during the pattern creation and instruction writing process. As contributions, we present a thorough domain analysis, the concept of a graph structure used as a domain-specific language to specify crochet patterns, and a prototype of a projectional editor using the graph as the representation format of patterns together with a diagramming system to visualize them in 2D and 3D. By analyzing the domain, we learned about crochet techniques and pain points of designers in their pattern creation workflow. These insights are the basis on which we defined the pattern representation. In order to evaluate our concept, we built a prototype that demonstrates the feasibility of the concept, and we tested the software with professional crochet designers, who approved of the concept.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient amount of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
Formal modeling and analysis are of crucial importance for software development processes following the model-based approach. We present the formalism of Interval Probabilistic Timed Graph Transformation Systems (IPTGTSs) as a high-level modeling language. This language supports structure dynamics (based on graph transformation), timed behavior (based on clocks, guards, resets, and invariants as in Timed Automata (TA)), and interval probabilistic behavior (based on Discrete Interval Probability Distributions). That is, for the probabilistic behavior, the modeler using IPTGTSs does not need to provide precise probabilities, which are often impossible to obtain, but instead provides a probability range from which a precise probability is chosen nondeterministically. In fact, this feature for capturing probabilistic behavior distinguishes IPTGTSs from Probabilistic Timed Graph Transformation Systems (PTGTSs) presented earlier.
Following earlier work on Interval Probabilistic Timed Automata (IPTA) and PTGTSs, we also provide an analysis tool chain for IPTGTSs based on inter-formalism transformations. In particular, we provide in our tool AutoGraph a translation of IPTGTSs to IPTA and rely on a mapping of IPTA to Probabilistic Timed Automata (PTA) to allow for the usage of the Prism model checker. The tool Prism can then be used to analyze the resulting PTA w.r.t. probabilistic real-time queries asking for worst-case and best-case probabilities to reach a certain set of target states in a given amount of time.
Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. The analysis whether such systems with probabilistic timed behavior adhere to a given specification is essential. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can be used to suitably capture structure dynamics as well as probabilistic and timed behavior of the system. The model checking support for PTGTSs w.r.t. properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time. In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
Despite a successful vaccination campaign, a fourth wave of COVID-19 infections looms after the summer. Whether it materializes depends largely on how many people decide to get vaccinated against COVID-19. Vaccine doses are no longer in short supply, but willingness to be vaccinated is. Many employers are therefore asking what they can do to increase the vaccination rate in their companies.
The digitalization of our lives is increasingly dissolving the boundaries between private and professional life. A well-known example is working from home. However, employers also face numerous other trends in this context. These include "workation", the combination of work and vacation, as well as "bleisure", i.e. the combination of business trips and leisure. This article examines the legal framework for these arrangements.
Learning analytics at scale
(2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. Thereby, thousands of learners can enroll in courses mostly without admission restrictions, but this also raises challenges. Individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education, it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes have been iteratively developed for the HPI MOOC Platform. Hence, they were tested and evaluated in an authentic real-world learning environment. Most of the results can be applied on a conceptual level to other MOOC platforms as well. The research contribution of this thesis thus provides practical insights beyond what is theoretically possible. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives to offer guidance and align the provided data-driven insights about their learning progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated improved certification rates of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part for teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed—which altogether provide a solid foundation for practical implications and future research.
Generative adversarial networks (GANs) have been broadly applied to a wide range of application domains since their proposal. In this thesis, we propose several methods that aim to tackle different existing problems in GANs. Particularly, even though GANs are generally able to generate high-quality samples, the diversity of the generated set is often sub-optimal. Moreover, the common increase of the number of models in the original GANs framework, as well as their architectural sizes, introduces additional costs. Additionally, even though challenging, the proper evaluation of a generated set is an important direction to ultimately improve the generation process in GANs. We start by introducing two diversification methods that extend the original GANs framework to multiple adversaries to stimulate sample diversity in a generated set. Then, we introduce a new post-training compression method based on Monte Carlo methods and importance sampling to quantize and prune the weights and activations of pre-trained neural networks without any additional training. The previous method may be used to reduce the memory and computational costs introduced by increasing the number of models in the original GANs framework. Moreover, we use a similar procedure to quantize and prune gradients during training, which also reduces the communication costs between different workers in a distributed training setting. We introduce several topology-based evaluation methods to assess data generation in different settings, namely image generation and language generation. Our methods retrieve both single-valued and double-valued metrics, which, given a real set, may be used to broadly assess a generated set or separately evaluate sample quality and sample diversity, respectively. Moreover, two of our metrics use locality-sensitive hashing to accurately assess the generated sets of highly compressed GANs. 
The analysis of the compression effects in GANs paves the way for their efficient employment in real-world applications. Given their general applicability, the methods proposed in this thesis may be extended beyond the context of GANs. Hence, they may be generally applied to enhance existing neural networks and, in particular, generative frameworks.
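The prune-then-quantize idea behind such post-training compression can be illustrated on a flat weight list. Note that the thesis uses a Monte Carlo / importance-sampling scheme; the magnitude-based toy below only conveys the general concept, and the weights are invented.

```python
def compress(weights, keep_ratio=0.5, levels=4):
    """Zero out all but the largest-magnitude weights, then snap the
    survivors to a uniform grid with `levels` quantization steps."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    pruned = [w if abs(w) >= threshold else 0.0 for w in weights]
    scale = max(abs(w) for w in pruned) or 1.0
    step = scale / (levels - 1)
    return [round(w / step) * step for w in pruned]

w = [0.9, -0.1, 0.05, -0.8, 0.3, 0.02]
compressed = compress(w, keep_ratio=0.5, levels=4)
```

Pruning reduces the number of stored values while quantization reduces the bits per value; together they shrink memory and communication costs, at the price of approximation error.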
EMOOCs 2021
(2021)
From June 22 to June 24, 2021, Hasso Plattner Institute, Potsdam, hosted the seventh European MOOC Stakeholder Summit (EMOOCs 2021) together with the eighth ACM Learning@Scale Conference.
Due to the COVID-19 situation, the conference was held fully online.
The boost in digital education worldwide as a result of the pandemic was also one of the main topics of this year’s EMOOCs. All institutions of learning have been forced to transform and redesign their educational methods, moving from traditional models to hybrid or completely online models at scale. The lessons learned, derived from practical experience and research, were explored at EMOOCs 2021 in six tracks and additional workshops, covering various aspects of this field. In this publication, we present papers from the conference’s Experience Track, the Policy Track, the Business Track, the International Track, and the Workshops.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or hearing impairments. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool, which closes this gap between MOOC platforms and transcription and translation tools and offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
ATIB
(2021)
Identity management is a principal component of securing online services. Throughout the evolution of traditional identity management patterns, the identity provider has remained a Trusted Third Party (TTP). The service provider and the user need to trust a particular identity provider to supply correct attributes, among other demands. This paradigm changed with the invention of blockchain-based Self-Sovereign Identity (SSI) solutions that primarily focus on the users. SSI reduces the functional scope of the identity provider to an attribute provider while enabling attribute aggregation. Besides that, the development of new protocols that disregard established ones, together with a significantly fragmented landscape of SSI solutions, poses considerable challenges for adoption by service providers. We propose an Attribute Trust-enhancing Identity Broker (ATIB) to leverage the potential of SSI for trust-enhancing attribute aggregation. Furthermore, ATIB abstracts from a dedicated SSI solution and offers standard protocols. Therefore, it facilitates the adoption by service providers. Despite the brokered integration approach, we show that ATIB provides a high security posture. Additionally, ATIB does not compromise the ten foundational SSI principles for the users.
Gene expression data provide the expression levels of tens of thousands of genes from several hundred samples. These data are analyzed to detect biomarkers that can be of prognostic or diagnostic use. Traditionally, biomarker detection for gene expression data is the task of gene selection. The vast number of genes is reduced to a few relevant ones that achieve the best performance for the respective use case. Traditional approaches select genes based on their statistical significance in the data set. This results in issues of robustness, redundancy and true biological relevance of the selected genes. Integrative analyses typically address these shortcomings by integrating multiple data artifacts from the same objects, e.g. gene expression and methylation data. When only gene expression data are available, integrative analyses instead use curated information on biological processes from public knowledge bases. With knowledge bases providing an ever-increasing amount of curated biological knowledge, such prior knowledge approaches become more powerful. This paper provides a thorough overview on the status quo of biomarker detection on gene expression data with prior biological knowledge. We discuss current shortcomings of traditional approaches, review recent external knowledge bases, provide a classification and qualitative comparison of existing prior knowledge approaches and discuss open challenges for this kind of gene selection.
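Traditional significance-based gene selection, whose shortcomings the survey discusses, can be sketched as ranking genes by a two-sample statistic between case and control samples and keeping the top k. The gene names are real symbols but the expression values are synthetic; prior-knowledge approaches would instead re-rank candidates using curated pathway information.

```python
import statistics

def t_stat(a, b):
    """Welch-style two-sample t-statistic for two lists of values."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def select_genes(expr, cases, controls, k=1):
    """Rank genes by absolute t-statistic between the two groups."""
    scores = {g: abs(t_stat([v[i] for i in cases], [v[i] for i in controls]))
              for g, v in expr.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

expr = {  # gene -> expression per sample (samples 0-1 cases, 2-3 controls)
    "TP53": [5.1, 5.3, 1.0, 1.2],
    "ACTB": [3.0, 3.1, 3.0, 2.9],
}
top = select_genes(expr, cases=[0, 1], controls=[2, 3], k=1)
```

Because the ranking depends only on this data set, small perturbations can change the selected genes, which is exactly the robustness issue the survey attributes to purely statistical selection.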
Comprior
(2021)
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied on gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and offers a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e. uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated amongst each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is both time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network, capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. With the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluated our method on multiple datasets and compare our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible, but in our experiments led to an improvement in F-measure of up to +4.7 percent.
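The handcrafted similarity measures such a network replaces are often simple string metrics combined with a manually tuned threshold. A minimal example using character-trigram Jaccard similarity on invented records (not the paper's learned measure):

```python
def trigrams(s):
    """Set of overlapping character trigrams of a lowercased string."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def is_duplicate(a, b, threshold=0.4):
    # The threshold is exactly the kind of manually tuned parameter
    # that a learned similarity measure avoids.
    return jaccard(a, b) >= threshold

dup = is_duplicate("Jon Smith, Berlin", "John Smith, Berlin")
```

A Siamese network instead learns an embedding of each record and compares the embeddings, so the notion of similarity adapts to the dataset rather than being fixed by the chosen string metric.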
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
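A navigational property in the sense above, e.g. "node b is reachable from node a", can be checked operationally by graph traversal; the edge relation below is an illustrative placeholder, and this sketch is independent of the paper's tableau-based deduction method.

```python
def reachable(edges, src, dst):
    """Depth-first search over a directed edge list: is there a path
    from src to dst?"""
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

g = [("a", "b"), ("b", "c"), ("d", "a")]
```

A logic with path expressions allows stating such properties declaratively, while the tableau method decides them without enumerating paths explicitly.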
Discriminative Models for Biometric Identification using Micro- and Macro-Movements of the Eyes
(2021)
Human visual perception is an active process. Eye movements either alternate between fixations and saccades or follow a smooth pursuit movement in case of moving targets. Besides these macroscopic gaze patterns, the eyes perform involuntary micro-movements during fixations which are commonly categorized into micro-saccades, drift and tremor. Eye movements are frequently studied in cognitive psychology, because they reflect a complex interplay of perception, attention and oculomotor control.
A common insight of psychological research is that macro-movements are highly individual. Inspired by this finding, there has been a considerable amount of prior research on oculomotoric biometric identification. However, the accuracy of known approaches is too low and the time needed for identification is too long for any practical application. This thesis explores discriminative models for the task of biometric identification.
Discriminative models optimize a quality measure of the predictions and are usually superior to generative approaches in discriminative tasks. However, using discriminative models requires selecting a suitable representation of sequential eye gaze data, e.g., by engineering features or constructing a sequence kernel; the performance of the classification model strongly depends on this representation. We study two fundamentally different ways of representing eye gaze within a discriminative framework. In the first part of this thesis, we explore the integration of data and psychological background knowledge in the form of generative models to construct representations. To this end, we first develop generative statistical models of gaze behavior during reading and scene viewing that account for viewer-specific distributional properties of gaze patterns. In a second step, we develop a discriminative identification model by deriving Fisher kernel functions from these and several baseline models. We find that an SVM with Fisher kernel is able to reliably identify users based on their eye gaze during reading and scene viewing. However, since the generative models are constrained to use low-frequency macro-movements, they discard a significant amount of information contained in the raw eye tracking signal at a high cost: identification requires about one minute of input recording, which makes it inapplicable for real world biometric systems. In the second part of this thesis, we study a purely data-driven modeling approach. Here, we aim at automatically discovering the individual pattern hidden in the raw eye tracking signal. To this end, we develop a deep convolutional neural network DeepEyedentification that processes yaw and pitch gaze velocities and learns a representation end-to-end. Compared to prior work, this model increases the identification accuracy by one order of magnitude and the time to identification decreases to only seconds.
The DeepEyedentificationLive model further improves identification performance by processing binocular input, and it also detects presentation attacks.
We find that by learning a representation, the performance of oculomotoric identification and presentation-attack detection can be driven close to practical relevance for biometric applications. Eye-tracking devices with high sampling frequency and precision are expensive, and the applicability of eye movements as a biometric feature depends heavily on the cost of the recording devices.
In the last part of this thesis, we therefore study the requirements on data quality by evaluating the performance of the DeepEyedentificationLive network under reduced spatial and temporal resolution. We find that the method still attains a high identification accuracy at a temporal resolution of only 250 Hz and a spatial precision of 0.03 degrees. Reducing both does not have an additive deteriorating effect.
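As an illustration of the Fisher-kernel construction described in the first part of the thesis, here is a minimal sketch. It substitutes a one-dimensional i.i.d. Gaussian for the thesis' generative gaze models; the parameterization and the identity matrix in place of the Fisher information are simplifying assumptions for brevity.

```python
def fisher_score_gaussian(xs, mu, sigma2):
    """Fisher score of an i.i.d. Gaussian model: the gradient of the
    log-likelihood of the sequence xs w.r.t. the parameters (mu, sigma^2).
    Stacking such scores per sequence yields the feature map whose inner
    product is the Fisher kernel."""
    d_mu = sum(x - mu for x in xs) / sigma2
    d_sigma2 = sum((x - mu) ** 2 - sigma2 for x in xs) / (2 * sigma2 ** 2)
    return (d_mu, d_sigma2)

def fisher_kernel(xs1, xs2, mu, sigma2):
    """Inner product of Fisher scores (identity used in place of the
    Fisher information matrix, for brevity)."""
    s1 = fisher_score_gaussian(xs1, mu, sigma2)
    s2 = fisher_score_gaussian(xs2, mu, sigma2)
    return s1[0] * s2[0] + s1[1] * s2[1]
```

An SVM equipped with such a kernel then operates on sequences through the gradients of the generative model, which is the general recipe the thesis applies to its reading and scene-viewing models.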
Machine learning for improvement of thermal conditions inside a hybrid ventilated animal building
(2021)
In buildings with hybrid ventilation, natural ventilation opening positions (windows), mechanical ventilation rates, heating, and cooling are manipulated to maintain the desired thermal conditions. The indoor temperature is regulated solely by ventilation (natural and mechanical) when the external conditions are favorable, in order to save heating and cooling energy. The ventilation parameters are determined by a rule-based control scheme, which is not optimal. This study proposes a methodology to enable real-time optimal control of ventilation parameters. We developed offline prediction models that estimate future thermal conditions from data collected from the building in operation. The developed offline model is then used to find the optimal controllable ventilation parameters in real time so as to minimize the setpoint deviation in the building. With the proposed methodology, the experimental building's setpoint deviation improved for 87% of the time, on average by 0.53 degrees C compared to the current deviations.
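The two-stage idea, an offline surrogate model plus a real-time search over controllable parameters, can be sketched as follows. The linear surrogate, its coefficients, and the parameter grid are illustrative stand-ins, not the study's trained model.

```python
def predict_indoor_temp(current_temp, outdoor_temp, window_opening, fan_rate):
    """Toy surrogate: higher airflow (windows, fan) pulls the indoor
    temperature toward the outdoor temperature. Coefficients are made up."""
    exchange = 0.05 + 0.3 * window_opening + 0.2 * fan_rate  # air-exchange strength
    return current_temp + exchange * (outdoor_temp - current_temp)

def best_ventilation(current_temp, outdoor_temp, setpoint):
    """Real-time step: grid search over discretized window openings and
    fan rates, minimizing the predicted setpoint deviation."""
    candidates = [(w / 4, f / 4) for w in range(5) for f in range(5)]
    return min(candidates,
               key=lambda c: abs(predict_indoor_temp(current_temp, outdoor_temp,
                                                     c[0], c[1]) - setpoint))
```

In the study, the surrogate would be a model trained on operational building data, and the search would run at each control interval.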
Correction to: Knowledge bases and software support for variant interpretation in precision oncology
(2021)
In the field of Business Process Management (BPM), modeling business processes and related data is a critical issue, since process activities need to manage data stored in databases. The connection between processes and data is usually handled at the implementation level, even though modeling both processes and data at the conceptual level should help designers improve business process models and identify requirements for implementation. Especially in data- and decision-intensive contexts, business process activities need to access data stored both in databases and in data warehouses. In this paper, we complete our approach for defining a novel conceptual view that bridges process activities and data. The proposed approach allows the designer to model the connection between business processes and database models and to define the operations to perform, providing interesting insights into the overall connected perspective and hints for identifying activities that are crucial for decision support.
To compete in the digitalized economy, companies must develop a detailed understanding of the market and, above all, their customers. Next to the "big players" from Silicon Valley, German small and medium-sized enterprises, which to a large extent still operate on historically grown IT infrastructures and processes, often look outdated. To avoid being left behind entirely in the coming years, a transformation is necessary: both value-creation processes and the range of offerings must be made transparent and aligned on a data-driven basis. Only then can business transactions, market events, and the actions of the various actors be assessed in an integrated fashion and well-founded decisions be made. This article introduces the concept of the data-driven organization and shows how companies can determine their own analytics maturity level and raise it in an iterative transformation process.
The increasing demand for software engineers cannot be fully met by university education and conventional training approaches due to limited capacities. Accordingly, an alternative approach is necessary, in which potential software engineers are educated in software engineering skills using new methods. We suggest micro tasks combined with theoretical lessons to overcome existing skill deficits and to build quickly trainable capabilities. This paper addresses the gap between demand and supply of software engineers by introducing an action-oriented and scenario-based didactical approach that enables non-computer scientists to code. The learning content is provided in small tasks and embedded in learning factory scenarios. To this end, different requirements for software engineers from the market side and from an academic viewpoint are analyzed and synthesized into an integrated, yet condensed skills catalogue. This enables the development of training and education units that focus on the most important skills demanded by the market. To achieve this objective, individual learning scenarios are developed. Of course, proper basic coding skills cannot be learned overnight, but neither is software programming sorcery.
Information technology and digital solutions as enablers in the tourism sector require continuous development of skills, as digital transformation is characterized by fast change, complexity, and uncertainty. This research investigates how a cMOOC concept could support the tourism industry. A consortium of three universities, a tourism association, and a tourist attraction investigates the online learning needs and habits of tourism industry stakeholders in the field of digitalization in a cross-border study in the Baltic Sea region. The multi-national survey (n = 244) reveals a high interest in participating in an online learning community, with two-thirds of respondents seeing opportunities to contribute to such a community beyond merely consuming knowledge. The paper demonstrates preferred ways of learning, motivating and hampering aspects, as well as types of possible contributions.
Despite the phenomenal growth of Big Data Analytics in the last few years, little research has been done to explicate the relationship between Big Data Analytics Capability (BDAC) and the indirect strategic value derived from such digital capabilities. We attempt to address this gap by proposing a conceptual model of the BDAC-innovation relationship using dynamic capability theory. The work expands on BDAC business value research and extends the nominal research done on BDAC and innovation. We focus on BDAC's relationship with different innovation objects, namely product, business process, and business model innovation, impacting all value chain activities. The insights gained will stimulate academic and practitioner interest in explicating the strategic value generated from BDAC and serve as a framework for future research on the subject.
We study the classical, two-sided stable marriage problem under pairwise preferences. In the most general setting, agents are allowed to express their preferences as comparisons of any two of their edges, and they also have the right to declare a draw or even withdraw from such a comparison. This freedom is then gradually restricted as we specify six stages of orderedness in the preferences, ending with the classical case of strictly ordered lists. We study all cases occurring when combining the three known notions of stability (weak, strong, and super-stability) under the assumption that each side of the bipartite market obtains one of the six degrees of orderedness. By designing three polynomial algorithms and two NP-completeness proofs, we determine the complexity of all cases not yet known and thus give an exact boundary, in terms of preference structure, between tractable and intractable cases.
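The endpoint of the spectrum of orderedness, strictly ordered lists, is the classical case solved by deferred acceptance. A minimal sketch of that baseline (the paper's general pairwise-preference setting requires substantially more machinery):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance for the strictly ordered case. Preferences are
    dicts mapping each agent to a ranked list of agents on the other side."""
    free = list(men_prefs)                   # unmatched proposers
    next_choice = {m: 0 for m in men_prefs}  # index of next proposal target
    engaged = {}                             # woman -> man
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w strictly prefers m
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged.items()}
```

With draws or withdrawn comparisons, the notion of "strictly prefers" in the acceptance test is exactly what the six stages of orderedness vary.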
Our input is a complete graph G on n vertices, where each vertex has a strict ranking of all other vertices in G. The goal is to construct a matching in G that is popular. A matching M is popular if M does not lose a head-to-head election against any matching M': here, each vertex casts a vote for the matching in {M, M'} in which it gets a better assignment. Popular matchings need not exist in a given instance G, and the popular matching problem is to decide whether one exists. The popular matching problem in G is easy to solve for odd n. Surprisingly, the problem becomes NP-complete for even n, as we show here. This is one of the few graph-theoretic problems that are efficiently solvable when n has one parity and NP-complete when n has the other parity.
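The head-to-head election between two matchings can be made concrete with a small voting primitive. This sketch only checks one pairwise election; certifying popularity of M means M must not lose against any matching, which is where the hardness lies.

```python
def prefers(rank, vertex, a, b):
    """True if `vertex` strictly prefers partner a over partner b.
    Being matched beats being unmatched (None)."""
    if a is None:
        return False
    if b is None:
        return True
    return rank[vertex][a] < rank[vertex][b]

def wins_head_to_head(M, M2, rank, vertices):
    """Run the election between matchings M and M2 (dicts vertex -> partner).
    Returns True iff M does not lose, i.e. gathers at least as many votes."""
    votes_m = sum(prefers(rank, v, M.get(v), M2.get(v)) for v in vertices)
    votes_m2 = sum(prefers(rank, v, M2.get(v), M.get(v)) for v in vertices)
    return votes_m >= votes_m2
```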
In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.
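For orientation, here is textbook backward induction for the deterministic base case, the setting the monadic framework generalizes (the monad and measure function are collapsed away; the function names are illustrative):

```python
def backward_induction(horizon, states, actions, step, reward):
    """Bellman backward induction for a deterministic finite-horizon SDP.

    actions(t, s) -> iterable of admissible actions
    step(t, s, a) -> next state
    reward(t, s, a) -> immediate reward
    Returns (policy, value): policy[t][s] is an optimal action at stage t,
    value[s] the optimal total reward from stage 0 when starting in s.
    """
    value = {s: 0.0 for s in states}          # value at the horizon
    policy = []
    for t in reversed(range(horizon)):
        new_value, decision = {}, {}
        for s in states:
            best = max(actions(t, s),
                       key=lambda a: reward(t, s, a) + value[step(t, s, a)])
            decision[s] = best
            new_value[s] = reward(t, s, best) + value[step(t, s, best)]
        policy.insert(0, decision)
        value = new_value
    return policy, value
```

In the monadic setting, `step` returns a monadic value (e.g. a distribution) and the plain sum is replaced by a measure function; the paper's three conditions govern exactly when this generalized recursion remains correct.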
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance to and from the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
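The hybrid design, a volatile index over a persistent sequential log, can be caricatured in a few lines. This toy uses a Python dict for the DRAM index and an append-only file for the persistent store; real PMem programming uses memory-mapped persistent heaps and cache-line flushes, none of which this sketch attempts.

```python
import os

class ToyHybridKVS:
    """Toy hybrid store: volatile hash index -> (offset, length) into an
    append-only record file. Values must not contain tab or newline."""

    def __init__(self, path):
        self.path = path
        self.index = {}                 # kept in DRAM, rebuilt on recovery
        open(path, "ab").close()        # ensure the log file exists

    def put(self, key, value):
        record = f"{key}\t{value}\n".encode()
        with open(self.path, "ab") as f:
            f.write(record)             # sequential append, PMem-friendly
            f.flush()
            os.fsync(f.fileno())        # stand-in for a persistence barrier
            offset = f.tell() - len(record)
        self.index[key] = (offset, len(record))

    def get(self, key):
        if key not in self.index:
            return None
        offset, length = self.index[key]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length).decode().split("\t", 1)[1].rstrip("\n")
```

Updates simply append a new record and repoint the index entry, mirroring the idea of exploiting sequential-write performance on the persistent side while keeping random-access lookups in DRAM.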
The devil in disguise
(2021)
Envy constitutes a serious issue on Social Networking Sites (SNSs), as this painful emotion can severely diminish individuals' well-being. With prior research mainly focusing on the affective consequences of envy in the SNS context, its behavioral consequences remain puzzling. While negative interactions among SNS users are an alarming issue, it remains unclear to what extent the harmful emotion of malicious envy contributes to these toxic dynamics. This study constitutes a first step in understanding malicious envy's causal impact on negative interactions within the SNS sphere. In an online experiment, we experimentally induce malicious envy and measure its immediate impact on users' negative behavior towards other users. Our findings show that malicious envy seems to be an essential factor fueling negativity among SNS users and further illustrate that this effect is especially pronounced when users are provided an objective factor to mask their envy and justify their norm-violating negative behavior.
Motivation:
Constraint-based modeling approaches allow the estimation of maximal in vivo enzyme catalytic rates that can serve as proxies for enzyme turnover numbers. Yet, genome-scale flux profiling remains a challenge in deploying these approaches to catalogue proxies for enzyme catalytic rates across organisms.
Results:
Here, we formulate a constraint-based approach, termed NIDLE-flux, to estimate fluxes at the genome-scale level by using the principle of efficient usage of expressed enzymes. Using proteomics data from Escherichia coli, we show that the fluxes estimated by NIDLE-flux and the existing approaches are in excellent qualitative agreement (Pearson correlation > 0.9). We also find that the maximal in vivo catalytic rates estimated by NIDLE-flux exhibit a Pearson correlation of 0.74 with in vitro enzyme turnover numbers. However, NIDLE-flux results in a 1.4-fold increase in the size of the set of estimated maximal in vivo catalytic rates in comparison to the contenders. Integration of the maximal in vivo catalytic rates with publicly available proteomics and metabolomics data provides a better match to fluxes estimated by NIDLE-flux. Therefore, NIDLE-flux facilitates more effective usage of proteomics data to estimate proxies for kcatomes.
Coherent network partitions
(2021)
We continue to study coherent partitions of graphs, whereby the vertex set is partitioned into subsets that induce biclique-spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique-spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O(log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique-spanned subgraphs. In addition, we study coherent partitions of prime graphs and show that finding a coherent partition of a graph reduces to finding coherent partitions of prime graphs. Therefore, these results provide future directions for approximation algorithms for the coherence number of a given graph.
As a central functionality of SNSs, the newsfeed is responsible for how content is presented. This paper investigates the implications of the current content presentation on Facebook, which has become a subject of users' criticism. Drawing on communication theory, we conceptualize clutter on a newsfeed as noise that hinders the receiver's adequate message decoding (i.e., sensemaking). We further operationalize newsfeed clutter via perceived disorder, information overload, and system feature overload. Our participants browsed their Facebook newsfeed for at least 5 minutes. The follow-up survey results provide partial support for our hypotheses, with only perceived disorder being significantly associated with lower sensemaking. These findings shed new light on user experience and underpin the importance of SNSs as communication systems, adding to the existing literature on the dark sides of social media.
Power relations within the area of blockchain governance are complex by definition, and a comprehensive analysis that links technological and institutional elements is missing to date. The research presented in this article focuses on the visualization of the shifting power relations that come with the introduction of blockchain. For this purpose, the analysis leverages an adjusted version of the multi-stakeholder influence mapping tool. The analysis considers the various stakeholders within the multi-layered blockchain technology stack and compares three fundamental blockchain scenarios, including public and private blockchain settings. The findings show that public administrations indeed lose power with the introduction of blockchain, while new stakeholders come into play who wield influence in a rather uncontrolled manner. Nonetheless, public administrations are not powerless overall and remain influential stakeholders. This paper concludes that blockchain governance is not as democratic as blockchain enthusiasts tend to argue and derives corresponding opportunities for further research.
In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons why these solvers are so fast is that they exploit structural properties of instances in their internals. This thesis deals with the well-studied structural property treewidth, which measures the closeness of an instance to being a tree. In fact, many problems are solvable in polynomial time in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower-bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
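The flavor of "dynamic programming along a decomposition" that treewidth-based algorithms exploit can be shown in its simplest form, a DP over a tree (the treewidth-1 case), here for maximum independent set. The per-vertex table with/without is the idea that tree-decomposition algorithms generalize to width-k bags.

```python
def max_independent_set_tree(adj, root=0):
    """Size of a maximum independent set of a tree given as an adjacency
    dict. For each vertex we keep the best solution size with and without
    that vertex, combining children bottom-up."""
    def solve(v, parent):
        with_v, without_v = 1, 0
        for u in adj[v]:
            if u == parent:
                continue
            w, wo = solve(u, v)
            with_v += wo              # if v is chosen, children must be out
            without_v += max(w, wo)   # otherwise children choose freely
        return with_v, without_v
    return max(solve(root, None))
```

On a graph of treewidth k the same scheme runs over the bags of a tree decomposition with tables of size exponential only in k, which is what makes treewidth-aware solvers, like the one described above, feasible.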
The automated detection of sequential anomalies in time series is an essential task for many applications, such as the monitoring of technical systems, fraud detection in high-frequency trading, or the early detection of disease symptoms. All these applications require the detection to find all sequential anomalies as fast as possible on potentially very large time series. In other words, the detection needs to be effective, efficient, and scalable with respect to the input size. Series2Graph is an effective solution based on graph embeddings that is robust against re-occurring anomalies, can discover sequential anomalies of arbitrary length, and works without training data. Yet, Series2Graph is not scalable due to its single-threaded approach; in particular, it cannot process arbitrarily large sequences due to the memory constraints of a single machine. In this paper, we propose our distributed anomaly detection system, DADS for short, which is an efficient and scalable adaptation of Series2Graph. Based on the actor programming model, DADS distributes the input time sequence, intermediate state, and computation to all processors of a cluster in a way that minimizes communication costs and synchronization barriers. Our evaluation shows that DADS is orders of magnitude faster than Series2Graph, scales almost linearly with the number of processors in the cluster, and can process much larger input sequences due to its scale-out property.
That technologies such as machine-learning applications or big- and smart-data methods absolutely require data in sufficient quantity and quality has by now become a truism. Against this background, the EU legislator in particular has recently discovered a new field of activity, attempting in various ways to create incentives for data sharing in order to generate innovation. This includes a constellation labeled, rather euphoniously, "data altruism". The article presents the corresponding regulatory considerations at the supranational level and offers a first analysis.
We present a general approach to planning with incomplete information in Answer Set Programming (ASP). More precisely, we consider the problems of conformant and conditional planning with sensing actions and assumptions. We represent planning problems using a simple formalism where logic programs describe the transition function between states, the initial states and the goal states. For solving planning problems, we use Quantified Answer Set Programming (QASP), an extension of ASP with existential and universal quantifiers over atoms that is analogous to Quantified Boolean Formulas (QBFs). We define the language of quantified logic programs and use it to represent the solutions of different variants of conformant and conditional planning. On the practical side, we present a translation-based QASP solver that converts quantified logic programs into QBFs and then executes a QBF solver, and we evaluate the approach experimentally on conformant and conditional planning benchmarks.
Data privacy is a very important issue. Especially in fields like medicine, it is paramount to abide by the existing privacy regulations to preserve patients' anonymity. However, data is required for research and training machine learning models that could help gain insight into complex correlations or personalised treatments that may otherwise stay undiscovered. Those models generally scale with the amount of data available, but the current situation often prohibits building large databases across sites. So it would be beneficial to be able to combine similar or related data from different sites all over the world while still preserving data privacy. Federated learning has been proposed as a solution for this, because it relies on the sharing of machine learning models, instead of the raw data itself. That means private data never leaves the site or device it was collected on. Federated learning is an emerging research area, and many domains have been identified for the application of those methods. This systematic literature review provides an extensive look at the concept of and research into federated learning and its applicability for confidential healthcare datasets.
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
Phe2vec
(2021)
Robust phenotyping of patients from electronic health records (EHRs) at scale is a challenge in clinical informatics. Here, we introduce Phe2vec, an automated framework for disease phenotyping from EHRs based on unsupervised learning, and assess its effectiveness against standard rule-based algorithms from the Phenotype KnowledgeBase (PheKB). Phe2vec is based on pre-computing embeddings of medical concepts and patients' clinical history. Disease phenotypes are then derived from a seed concept and its neighbors in the embedding space. Patients are linked to a disease if their embedded representation is close to the disease phenotype. Comparing Phe2vec and PheKB cohorts head-to-head using chart review, Phe2vec performed on par or better in nine out of ten diseases. Unlike other approaches, it can scale to any condition and was validated against widely adopted expert-based standards. Phe2vec aims to optimize clinical informatics research by augmenting current frameworks to characterize patients by condition and derive reliable disease cohorts.
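The seed-and-neighbors pipeline described above can be sketched with toy embeddings. The vectors, the neighborhood size, and the similarity threshold below are illustrative placeholders, not Phe2vec's actual parameters.

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def phenotype_cohort(seed, concept_vecs, patient_vecs, k=2, threshold=0.8):
    """Toy version of the pipeline: take the seed concept's k nearest
    concepts as the disease phenotype, average them into a phenotype
    vector, and link every patient whose embedded history is close to it."""
    neighbors = sorted(concept_vecs,
                       key=lambda c: -cosine(concept_vecs[c],
                                             concept_vecs[seed]))[:k]
    dim = len(concept_vecs[seed])
    centroid = [sum(concept_vecs[c][i] for c in neighbors) / k
                for i in range(dim)]
    return {p for p, vec in patient_vecs.items()
            if cosine(vec, centroid) >= threshold}
```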
Despite advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical contexts. Among other reasons, this is due to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort of an American research hospital. To help account for the performance differences observed, we utilized interpretability methods based on feature importance, which allowed experts to scrutinize model behavior both at the global and the local level, making it possible to gain further insights into why it did not behave as expected on the validation cohort. The knowledge gained during derivation can be useful for guiding model updates during validation toward more generalizable and simpler models. We argue that practitioners should consider interpretability methods as a further tool to help explain performance differences and inform model updates in validation studies.
Precision oncology is a rapidly evolving interdisciplinary medical specialty. Comprehensive cancer panels are becoming increasingly available at pathology departments worldwide, creating the urgent need for scalable cancer variant annotation and molecularly informed treatment recommendations. A wealth of mainly academia-driven knowledge bases calls for software tools supporting the multi-step diagnostic process. We derive a comprehensive list of knowledge bases relevant for variant interpretation by a review of existing literature followed by a survey among medical experts from university hospitals in Germany. In addition, we review cancer variant interpretation tools, which integrate multiple knowledge bases. We categorize the knowledge bases along the diagnostic process in precision oncology and analyze programmatic access options as well as the integration of knowledge bases into software tools. The most commonly used knowledge bases provide good programmatic access options and have been integrated into a range of software tools. For the wider set of knowledge bases, access options vary across different parts of the diagnostic process. Programmatic access is limited for information regarding clinical classifications of variants and for therapy recommendations. The main issue for databases used for biological classification of pathogenic variants and pathway context information is the lack of standardized interfaces. There is no single cancer variant interpretation tool that integrates all identified knowledge bases. Specialized tools are available and need to be further developed for different steps in the diagnostic process.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on a Gaussian process regression we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.

Author summary: Amoeboid motion is a crawling-like cell migration that plays an important role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points, representing the cell membrane, for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question is raised of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane providing a fully automated way of extracting properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
In recent years, many efforts have been made to apply image processing techniques to plant leaf identification. However, categorizing leaf images at the cultivar/variety level is still a challenging task because of the very low inter-class variability. In this research, we propose an automatic discriminative method based on convolutional neural networks (CNNs) for classifying 12 different cultivars of common beans that belong to three different species. We show that employing advanced loss functions, such as Additive Angular Margin Loss and Large Margin Cosine Loss, instead of the standard softmax loss function for the classification can yield better discrimination between classes and thereby mitigate the problem of low inter-class variability. The method was evaluated by classifying species (level I), cultivars from the same species (level II), and cultivars from different species (level III), based on images of the leaf foreside and backside. The results indicate that the performance of the classification algorithm on the leaf backside image dataset is superior. The maximum mean classification accuracies of 95.86, 91.37 and 86.87% were obtained at levels I, II and III, respectively. The proposed method outperforms the previous relevant works and provides a reliable approach for plant cultivar identification.
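The core idea of an additive angular margin loss can be shown in its logit-adjustment step: for the target class, a margin is added to the angle between feature and class weight before re-taking the cosine, which tightens the decision boundary between visually similar classes. The margin and scale below are typical defaults from the ArcFace literature, not the paper's settings.

```python
import math

def additive_angular_margin_logit(cos_theta, is_target, margin=0.5, scale=30.0):
    """ArcFace-style logit: theta = arccos(cos_theta); for the target class
    the angular margin m is added before taking cos(theta + m); the result
    is rescaled and then fed to a softmax cross-entropy as usual."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for safety
    if is_target:
        theta += margin
    return scale * math.cos(theta)
```

Because the penalty is applied in angle space, the target logit is always reduced relative to the plain cosine, forcing the network to place same-class features within a tighter angular cone.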
Epistemic logic programs constitute an extension of the stable model semantics to deal with new constructs called subjective literals. Informally speaking, a subjective literal allows checking whether some objective literal is true in all or some stable models. As can be imagined, the associated semantics has proved to be non-trivial, since the truth of subjective literals may interfere with the set of stable models it is supposed to query. As a consequence, no clear agreement has been reached and different semantic proposals have been made in the literature. Unfortunately, comparison among these proposals has been limited to a study of their effect on individual examples, rather than identifying general properties to be checked. In this paper, we propose an extension of the well-known splitting property for logic programs to the epistemic case. We formally define when an arbitrary semantics satisfies the epistemic splitting property and examine some of the consequences that can be derived from that, including its relation to conformant planning and to epistemic constraints. Interestingly, we prove (through counterexamples) that most of the existing approaches fail to fulfill the epistemic splitting property, except the original semantics proposed by Gelfond in 1991 and a recent proposal by the authors, called Founded Autoepistemic Equilibrium Logic.
Many important graph-theoretic notions can be encoded as counting graph homomorphism problems, such as partition functions in statistical physics, in particular independent sets and colourings. In this article, we study the complexity of #pHOMSTOH, the problem of counting graph homomorphisms from an input graph to a graph H modulo a prime number p. Dyer and Greenhill proved a dichotomy stating that the tractability of non-modular counting graph homomorphisms depends on the structure of the target graph. Many intractable cases in non-modular counting become tractable in modular counting due to the common phenomenon of cancellation. In subsequent studies on counting modulo 2, however, the influence of the structure of H on the tractability was shown to persist, which yields similar dichotomies.
Our main result states that for every tree H and every prime p, the problem #pHOMSTOH is either polynomial-time computable or #pP-complete. This relates to the conjecture of Faben and Jerrum stating that this dichotomy holds for every graph H when counting modulo 2. In contrast to previous results on modular counting, the tractable cases of #pHOMSTOH are essentially the same for all values of the modulo when H is a tree. To prove this result, we study the structural properties of a homomorphism. As an important interim result, our study yields a dichotomy for the problem of counting weighted independent sets in a bipartite graph modulo some prime p. These results are the first suggesting that such dichotomies hold not only for the modulo 2 case but also for the modular counting functions of all primes p.
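For readers unfamiliar with the problem, #pHOMSTOH can be stated as a brute-force enumeration; the dichotomy above concerns when this exponential enumeration is avoidable for a fixed target H. The graphs below are illustrative examples, not instances from the article.

```python
from itertools import product

def count_homs_mod(G_edges, G_nodes, H_edges, H_nodes, p):
    """Count homomorphisms from G to H, modulo a prime p (brute force).

    A homomorphism is a vertex map that sends every edge of G to an edge
    of H. This enumeration takes |V(H)|^|V(G)| steps and is only meant to
    define the counting problem, not to solve it efficiently.
    """
    H_adj = set()
    for u, v in H_edges:
        H_adj.add((u, v))
        H_adj.add((v, u))
    count = 0
    for assignment in product(H_nodes, repeat=len(G_nodes)):
        phi = dict(zip(G_nodes, assignment))
        if all((phi[u], phi[v]) in H_adj for u, v in G_edges):
            count += 1
    return count % p

# homomorphisms from a single edge into H equal 2*|E(H)| (ordered pairs),
# so mapping one edge into a triangle gives 6; modulo 5 that is 1
triangle = [(0, 1), (1, 2), (0, 2)]
n_mod = count_homs_mod([("a", "b")], ["a", "b"], triangle, [0, 1, 2], 5)
```

Cancellation, the phenomenon mentioned in the abstract, is visible here: different non-zero counts can collapse to the same residue class modulo p.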
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of the mechanisms underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
PC2P
(2021)
Motivation:
Prediction of protein complexes from protein-protein interaction (PPI) networks is an important problem in systems biology, as complexes control different cellular functions. The existing solutions employ algorithms for network community detection that identify dense subgraphs in PPI networks. However, gold standards in yeast and human indicate that protein complexes can also induce sparse subgraphs, introducing further challenges in protein complex prediction.
Results:
To address this issue, we formalize protein complexes as biclique spanned subgraphs, which include both sparse and dense subgraphs. We then cast the problem of protein complex prediction as a network partitioning into biclique spanned subgraphs with removal of a minimum number of edges, called a coherent partition. Since finding a coherent partition is a computationally intractable problem, we devise a parameter-free greedy approximation algorithm, termed Protein Complexes from Coherent Partition (PC2P), based on key properties of biclique spanned subgraphs. Through comparison with nine contenders, we demonstrate that PC2P: (i) successfully identifies modular structure in networks, as a prerequisite for protein complex prediction, (ii) outperforms the existing solutions with respect to a composite score of five performance measures on 75% and 100% of the analyzed PPI networks and gold standards in yeast and human, respectively, and (iii) does not compromise the GO semantic similarity and (iv) the enrichment score of the predicted protein complexes. Therefore, our study demonstrates that clustering of networks in terms of biclique spanned subgraphs is a promising framework for detection of complexes in PPI networks.
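A minimal sketch of the biclique spanned notion, assuming the standard characterization that a graph contains a spanning biclique exactly when its complement graph is disconnected (all edges between the two sides must exist in the graph, so none may exist in the complement). This check is one ingredient of the formalization only; it is not the PC2P algorithm itself.

```python
def is_biclique_spanned(nodes, edges):
    """Check whether a graph contains a spanning biclique K_{A,B}.

    Assumed characterization: a partition (A, B) with all A-B edges
    present in G exists iff the complement of G is disconnected.
    """
    nodes = list(nodes)
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    if len(nodes) < 2:
        return False
    # DFS in the complement graph starting from an arbitrary node
    start = nodes[0]
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in nodes:
            if v != u and v not in seen and v not in adj[u]:
                seen.add(v)
                stack.append(v)
    return len(seen) < len(nodes)  # complement disconnected

# the 3-node path equals K_{1,2} and is biclique spanned; the 4-node
# path has no spanning biclique (no vertex of degree 3, too few edges)
p3 = is_biclique_spanned(["a", "b", "c"], [("a", "b"), ("b", "c")])
p4 = is_biclique_spanned(["a", "b", "c", "d"],
                         [("a", "b"), ("b", "c"), ("c", "d")])
```

Note how the path example is sparse yet biclique spanned, which is exactly why this class covers complexes that density-based community detection misses.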
TRIPOD
(2021)
Inertial measurement units (IMUs) enable easy to operate and low-cost data recording for gait analysis. When combined with treadmill walking, a large number of steps can be collected in a controlled environment without the need for a dedicated gait analysis laboratory. In order to evaluate existing and novel IMU-based gait analysis algorithms for treadmill walking, a reference dataset that includes IMU data as well as reliable ground truth measurements for multiple participants and walking speeds is needed. This article provides a reference dataset consisting of 15 healthy young adults who walked on a treadmill at three different speeds. Data were acquired using seven IMUs placed on the lower body, two different reference systems (Zebris FDMT-HQ and OptoGait), and two RGB cameras. Additionally, in order to validate an existing IMU-based gait analysis algorithm using the dataset, an adaptable modular data analysis pipeline was built. Our results show agreement between the pressure-sensitive Zebris and the photoelectric OptoGait system (r = 0.99), demonstrating the quality of our reference data. As a use case, the performance of an algorithm originally designed for overground walking was tested on treadmill data using the data pipeline. The accuracy of stride length and stride time estimations was comparable to that reported in other studies with overground data, indicating that the algorithm is equally applicable to treadmill data. The Python source code of the data pipeline is publicly available, and the dataset will be provided by the authors upon request, enabling future evaluations of IMU gait analysis algorithms without the need to record new data.
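A hedged sketch of the kind of computation such a pipeline performs: stride times from per-foot heel-strike timestamps, and the Pearson correlation used to compare the two reference systems. The timestamps below are invented example values, not data from the published dataset.

```python
import math

def stride_times(heel_strikes):
    """Stride time = time between consecutive heel strikes of one foot."""
    return [t2 - t1 for t1, t2 in zip(heel_strikes, heel_strikes[1:])]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical heel-strike timestamps (seconds) from two systems
zebris = [0.00, 1.02, 2.05, 3.04, 4.08]
optogait = [0.01, 1.03, 2.05, 3.05, 4.08]
r = pearson_r(stride_times(zebris), stride_times(optogait))
```

Agreement close to r = 1 between independent measurement systems is what justifies treating either one as ground truth for evaluating the IMU algorithms.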
The usage of data to improve or create business models has become vital for companies in the 21st century. However, to extract value from data it is important to understand the business model. Taxonomies for data-driven business models (DDBM) aim to provide guidance for the development and ideation of new business models relying on data. In IS research, however, different taxonomies have emerged in recent years, partly redundant, partly contradictory. Thus, there is a need to synthesize the common ground of these taxonomies within IS research. Based on 26 IS-related taxonomies and 30 cases, we derive and define 14 generic building blocks of DDBM to develop a consolidated taxonomy that represents the current state-of-the-art. Thus, we integrate existing research on DDBM and provide avenues for further exploration of data-induced potentials for business models as well as for the development and analysis of general or industry-specific DDBM.
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
Design and implementation of service-oriented architectures raise a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
The purpose of this study was to examine the moderating effects of technology use for relationship maintenance on the longitudinal associations among self-isolation during the coronavirus-19 (COVID-19) pandemic and romantic relationship quality among adolescents. Participants were 239 (120 female; M age = 16.69, standard deviation [SD] = 0.61; 60 percent Caucasian) 11th and 12th graders from three midwestern high schools. To qualify for this study, adolescents had to be in the same romantic relationship for the duration of the study, approximately 7 months (M length of relationship = 10.03 months). Data were collected in October of 2019 (Time 1) and again 7 months later in May of 2020 (Time 2). Adolescents completed a romantic relationship questionnaire at Time 1 and again at Time 2, along with questionnaires on frequency of self-isolation during the COVID-19 pandemic and use of technology for romantic relationship maintenance. Findings revealed that increases in self-isolation during the COVID-19 pandemic related positively to the use of technology for romantic relationship maintenance and negatively to Time 2 romantic relationship quality. High use of technology for romantic relationship maintenance buffered against the negative effects of self-isolation during the COVID-19 pandemic on adolescents' romantic relationship quality 7 months later, whereas low use strengthened the negative relationship between self-isolation during the COVID-19 pandemic and romantic relationship quality. These findings suggest the importance of considering the implications of societal crises or pandemics on adolescents' close relationships, particularly their romantic relationships.
In this paper, we examine the conditioning of the discretization of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge, there is no conditioning analysis for it. We aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem to the condition number of the matrix. We carry out an analytical conditioning analysis in 1D and extend our observations to 2D with numerical observations. We examine several discretizations. We find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
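The 1D setting can be made concrete with the classical second-order finite-difference discretization, whose eigenvalues are known in closed form. The Dirichlet boundary conditions and sample wavenumbers below are our own illustrative choices, not necessarily the paper's exact setup.

```python
import math

def helmholtz_condition_number(n, k):
    """Condition number of the 1D finite-difference Helmholtz matrix.

    For -u'' - k^2 u = f on (0, 1) with Dirichlet boundary conditions
    and mesh width h = 1/(n+1), the matrix
    (1/h^2) * tridiag(-1, 2, -1) - k^2 * I has the eigenvalues below
    in closed form, so no matrix assembly is needed.
    """
    h = 1.0 / (n + 1)
    eigs = [(2.0 - 2.0 * math.cos(j * math.pi * h)) / h ** 2 - k ** 2
            for j in range(1, n + 1)]
    mags = sorted(abs(lam) for lam in eigs)
    return mags[-1] / mags[0]

# when k is close to an eigenvalue of the continuous operator (here a
# multiple of pi), a matrix eigenvalue approaches zero and the
# condition number blows up, even though the matrix is the same size
cond_benign = helmholtz_condition_number(100, 1.0)
cond_resonant = helmholtz_condition_number(100, math.pi)
```

This is the near-zero-eigenvalue effect the abstract refers to: the condition number alone does not distinguish the benign indefiniteness of the operator from near-singularity at resonant wavenumbers.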
First-class concepts
(2022)
Ideally, programs are partitioned into independently maintainable and understandable modules. As a system grows, its architecture gradually loses the capability to accommodate new concepts in a modular way. While refactoring is expensive and not always possible, and the programming language might lack dedicated primary language constructs to express certain cross-cutting concerns, programmers are still able to explain and delineate convoluted concepts through secondary means: code comments, use of whitespace and arrangement of code, documentation, or communicating tacit knowledge.
Secondary constructs are easy to change and provide high flexibility in communicating cross-cutting concerns and other concepts among programmers. However, such secondary constructs usually have no reified representation that can be explored and manipulated as first-class entities through the programming environment.
In this exploratory work, we discuss novel ways to express a wide range of concepts, including cross-cutting concerns, patterns, and lifecycle artifacts independently of the dominant decomposition imposed by an existing architecture. We propose the representation of concepts as first-class objects inside the programming environment that retain the capability to change as easily as code comments. We explore new tools that allow programmers to view, navigate, and change programs based on conceptual perspectives. In a small case study, we demonstrate how such views can be created and how the programming experience changes from draining programmers' attention by stretching it across multiple modules toward focusing it on cohesively presented concepts. Our designs are geared toward facilitating multiple secondary perspectives on a system to co-exist in symbiosis with the original architecture, hence making it easier to explore, understand, and explain complex contexts and narratives that are hard or impossible to express using primary modularity constructs.
Bidirectional order dependencies (bODs) capture order relationships between lists of attributes in a relational table. They can express that, for example, sorting books by publication date in ascending order also sorts them by age in descending order. The knowledge about order relationships is useful for many data management tasks, such as query optimization, data cleaning, or consistency checking. Because the bODs of a specific dataset are usually not explicitly given, they need to be discovered. The discovery of all minimal bODs (in set-based canonical form) is a task with exponential complexity in the number of attributes, though, which is why existing bOD discovery algorithms cannot process datasets of practically relevant size in a reasonable time. In this paper, we propose the distributed bOD discovery algorithm DISTOD, whose execution time scales with the available hardware. DISTOD is a scalable, robust, and elastic bOD discovery approach that combines efficient pruning techniques for bOD candidates in set-based canonical form with a novel, reactive, and distributed search strategy. Our evaluation on various datasets shows that DISTOD outperforms both single-threaded and distributed state-of-the-art bOD discovery algorithms by up to orders of magnitude; it can, in particular, process much larger datasets.
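A minimal validation sketch of what a bOD asserts for numeric attributes: ordering the rows by one attribute list also orders them by another. This only checks a given candidate on a given table; it is separate from the discovery problem DISTOD solves, and the table contents and attribute names are invented.

```python
def holds_bod(rows, lhs, rhs):
    """Check a candidate bidirectional order dependency on a table.

    lhs and rhs are lists of (attribute, direction) pairs over numeric
    attributes; the bOD holds if sorting the rows by lhs also sorts
    them by rhs (e.g. publication year ascending implies age
    descending). Ties and non-numeric attributes are not handled here.
    """
    def key(spec):
        attrs = [(a, 1 if d == "asc" else -1) for a, d in spec]
        return lambda row: tuple(s * row[a] for a, s in attrs)

    ordered = sorted(rows, key=key(lhs))
    rhs_keys = [key(rhs)(row) for row in ordered]
    return rhs_keys == sorted(rhs_keys)

# the abstract's example: sorting books by year ascending also sorts
# them by age descending (values are illustrative)
books = [
    {"year": 2001, "age": 23},
    {"year": 1995, "age": 29},
    {"year": 2010, "age": 14},
]
ok = holds_bod(books, [("year", "asc")], [("age", "desc")])
```

Discovery is the hard part: every combination of attribute lists and directions is a candidate, which is why the search space is exponential and a distributed strategy like DISTOD's pays off.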
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
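To make the tiering idea in contribution (4) concrete, here is a deliberately simplified greedy placement sketch: the hottest data per byte stays in DRAM until a budget is exhausted, the rest spills to NVM. The chunk names, access counts, and the greedy heuristic are illustrative assumptions, not Hyrise's actual algorithm.

```python
def place_data(chunks, dram_budget):
    """Greedy tiering sketch: keep the hottest data in DRAM.

    chunks maps a chunk name to (size, access_count). Chunks are
    considered in order of accesses per byte, so the limited DRAM
    capacity buys the largest share of accesses.
    """
    placement = {}
    used = 0
    by_heat = sorted(chunks.items(),
                     key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for name, (size, accesses) in by_heat:
        if used + size <= dram_budget:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement

# toy workload with strong access skew: recent data is hot, old is cold
chunks = {
    "orders_recent": (4, 900),
    "orders_2015": (8, 40),
    "lineitem_cold": (16, 10),
}
placement = place_data(chunks, dram_budget=6)
```

Such a placement is exactly where the access counters from contribution (3) matter: without accurate per-chunk access statistics, the greedy ordering would be arbitrary and the hot data could land on the slower tier.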
Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, separating two classes only, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification that generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and, in particular, corrects wrong statements that appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets.
Building on this, classifier calibration is applied as part of decomposition-based classification that aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the involved fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows the analysis of decomposition-based classification against a strictly formal background and to prove closed-form equations for the overall combinations. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information.
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
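A minimal reading of the dynamic classification setting described above, assuming calibrated posterior estimates are already available: restricting the target set to a non-empty subset M drops the probabilities outside M and renormalizes the rest. The thesis' evidence-theoretic combination scheme is more involved; this sketch only illustrates why passing dynamic class information can change (and improve) the prediction.

```python
def restrict_prediction(posterior, allowed):
    """Predict under a dynamic restriction of the class set.

    posterior maps each class in Y to a calibrated probability
    estimate; allowed is the non-empty subset M of Y valid for this
    prediction. Probabilities outside M are discarded and the
    remainder renormalized before taking the argmax.
    """
    restricted = {c: p for c, p in posterior.items() if c in allowed}
    total = sum(restricted.values())
    restricted = {c: p / total for c, p in restricted.items()}
    return max(restricted, key=restricted.get), restricted

# hypothetical calibrated posterior over three classes
posterior = {"A": 0.5, "B": 0.3, "C": 0.2}
label_full, _ = restrict_prediction(posterior, {"A", "B", "C"})
# external information rules out class A for this prediction
label_dyn, probs_dyn = restrict_prediction(posterior, {"B", "C"})
```

With the full class set the prediction is "A"; once "A" is ruled out, the renormalized posterior makes "B" the prediction with probability 0.6, which is the accuracy gain dynamic class information provides.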
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized control trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture postings on Instagram, affects different dimensions of well-being. The results depicted a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs.
Lay Summary: Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among conducted studies on the hypothesis and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results of this study depicted a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being.
The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs on users.
The heterogeneity of today's state-of-the-art computer architectures confronts application developers with an immense degree of complexity, which results from two major challenges. First, developers need to acquire profound knowledge about the programming models or the interaction models associated with each type of heterogeneous system resource to make efficient use thereof. Second, developers must take into account that heterogeneous system resources always need to exchange data with each other in order to work on a problem together. However, this data exchange is always associated with a certain amount of overhead, which is why the amounts of data exchanged should be kept as low as possible.
This thesis proposes three programming abstractions to lessen the burdens imposed by these major challenges with the goal of making heterogeneous system resources accessible to a wider range of application developers. The lib842 compression library provides the first method for accessing the compression and decompression facilities of the NX-842 on-chip compression accelerator available in IBM Power CPUs from user space applications running on Linux. Addressing application development of scale-out GPU workloads, the CloudCL framework makes the resources of GPU clusters more accessible by hiding many aspects of distributed computing while enabling application developers to focus on the aspects of the data parallel programming model associated with GPUs. Furthermore, CloudCL is augmented with transparent data compression facilities based on the lib842 library in order to improve the efficiency of data transfers among cluster nodes. The improved data transfer efficiency provided by the integration of transparent data compression yields performance improvements ranging between 1.11x and 2.07x across four data-intensive scale-out GPU workloads. To investigate the impact of programming abstractions for data placement in NUMA systems, a comprehensive evaluation of the PGASUS framework for NUMA-aware C++ application development is conducted. On a wide range of test systems, the evaluation demonstrates that PGASUS not only improves the developer experience across all workloads but is also capable of outperforming NUMA-agnostic implementations with average performance improvements of 1.56x.
Based on these programming abstractions, this thesis demonstrates that by providing a sufficient degree of abstraction, the accessibility of heterogeneous system resources can be improved for application developers without occluding performance-critical properties of the underlying hardware.
Teaching and learning as well as administrative processes are still undergoing intensive changes with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in the context of higher education. With this, scientific interest in the topic in general, but also in specific focal points, has risen as well. However, there is no structured overview of AI in teaching and administration processes in higher education institutions that allows researchers to identify major research topics and trends, concretize peculiarities, and develop recommendations for further action. To close this gap, this study seeks to systematize the current scientific discourse on AI in teaching and administration in higher education institutions. This study identified (1) an imbalance in research on AI in educational and administrative contexts, (2) an imbalance in disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, as well as (4) neglected research topics and paths. In this way, the study contributes a comparative analysis of AI usage in administration and in teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ also has to be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
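A sketch of the Sorted Neighborhood idea with a DCS-flavored adaptive window: records are sorted by a key and only records inside a sliding window are compared, and the window grows while duplicates keep being found. The growth rule and the toy similarity function below are simplified illustrations, not the exact DCS/DCS++ policy proved in the thesis.

```python
def sorted_neighborhood(records, key, similar, window=3):
    """Candidate pair selection with an adaptively growing window.

    Sorting brings likely duplicates close together, so comparing only
    nearby records avoids the quadratic all-pairs comparison. Finding a
    duplicate widens the window slightly, following the Duplicate Count
    Strategy's intuition that duplicates cluster.
    """
    recs = sorted(records, key=key)
    pairs = []
    for i, r in enumerate(recs):
        w = window
        j = i + 1
        while j < len(recs) and j < i + w:
            if similar(r, recs[j]):
                pairs.append((r, recs[j]))
                w += 1  # found a duplicate: look a little further
            j += 1
    return pairs

# toy records; sort key and similarity are crude illustrations
people = ["Jon Smith", "John Smith", "John Smyth", "Mary Major", "M. Major"]
dups = sorted_neighborhood(
    people,
    key=lambda s: s.split()[-1],
    similar=lambda a, b: (a.split()[-1][:3] == b.split()[-1][:3]
                          and a[0] == b[0]))
```

The quality of the sort key is decisive: duplicates that sort far apart never enter a common window, which is the effectiveness risk the thesis' threshold analysis addresses.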
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure is used for pairwise classifications to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially for the precision of results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best performing clustering approach for duplicate detection, although its runtime is longer than that of Markov Clustering due to its subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations; second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
Complex networks like the Internet or social networks are fundamental parts of our everyday lives. It is essential to understand their structural properties and how these networks are formed. A game-theoretic approach to network design problems has attracted great interest in recent decades, because many real-world networks are the outcomes of decentralized strategic behavior of independent agents without central coordination. Fabrikant, Luthra, Maneva, Papadimitriou, and Shenker proposed a game-theoretic model aiming to explain the formation of Internet-like networks. In this model, called the Network Creation Game, agents are associated with the nodes of a network. Each agent seeks to maximize her centrality by establishing costly connections to other agents. The model is relatively simple but shows high potential for modeling complex real-world networks. In this thesis, we contribute to the line of research on variants of Network Creation Games. Inspired by real-world networks, we propose and analyze several novel network creation models. We aim to understand the impact of certain realistic modeling assumptions on the structure of the created networks and the behavior of the involved agents.
The first natural additional objective that we consider is the network’s robustness. We consider a game where the agents seek to maximize their centrality and, at the same time, the stability of the created network against random edge failure.
Our second point of interest is a model that incorporates an underlying geometry. We consider a network creation model where the agents correspond to points in some underlying space and where edge lengths are equal to the distances between the endpoints in that space. The geometric setting captures many physical real-world networks like transport networks and fiber-optic communication networks.
We focus on the formation of social networks and consider two models that incorporate particular realistic behavior observed in real-world networks. In the first model, we embed anti-preferential-attachment link formation. Namely, we assume that the cost of a connection is proportional to the popularity of the targeted agent. Our second model is based on the observation that the probability that two persons connect is inversely proportional to the length of their shortest chain of mutual acquaintances.
For each of the four models above, we provide a complete game-theoretic analysis. In particular, we focus on distinctive structural properties of the equilibria, the hardness of computing a best response, and the quality of equilibria in comparison to centrally designed, socially optimal networks. We also study the game dynamics, i.e., the process of sequential strategic improvements by the agents, and analyze the convergence to an equilibrium state and its properties.
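For readers unfamiliar with the underlying model, here is a minimal sketch (our own illustration with hypothetical names, not code from the thesis) of an agent's cost in the classic Network Creation Game of Fabrikant et al.: a price of alpha per edge the agent buys, plus the sum of her shortest-path distances to all other agents (the SUM version of the game).

```python
from collections import deque


def bfs_distances(adj, source):
    """Hop distances from source in an unweighted, undirected network."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


def agent_cost(adj, bought, agent, alpha):
    """Cost of one agent: alpha per bought edge plus summed distances."""
    dist = bfs_distances(adj, agent)
    return alpha * len(bought[agent]) + sum(dist[v] for v in adj if v != agent)


# Path network 0-1-2: agent 0 bought the edge to 1, agent 1 the edge to 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
bought = {0: [1], 1: [2], 2: []}
print(agent_cost(adj, bought, 0, alpha=2))  # 2·1 + (1 + 2) = 5
```

A strategy change (buying or deleting edges) is an improvement for an agent exactly when it lowers this cost; the equilibria analyzed in the thesis are the states in which no agent has such an improvement.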
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: these languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers in executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurance, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing the respective processes: flexibility during design and execution, ad-hoc adaptation to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited for this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending Fragment-Based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics ensures that a case adheres to the constraints in both the process fragments and the data models; among other things, multiplicity constraints must not be violated. Furthermore, the semantics is extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
Teaching and learning as well as administrative processes are still undergoing intensive changes with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in the context of higher education. Accordingly, scientific interest in the topic in general, and in specific focal points, has risen as well. However, there is no structured overview of AI in teaching and administration processes in higher education institutions that allows researchers to identify major research topics and trends, concretize peculiarities, and develop recommendations for further action. To close this gap, this study systematizes the current scientific discourse on AI in teaching and administration in higher education institutions. It identifies (1) an imbalance in research on AI in educational and administrative contexts, (2) an imbalance in disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, as well as (4) neglected research topics and paths. In this way, the study contributes a comparative analysis between AI usage in administration and in teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.
Integriert statt isoliert
(2022)
Many companies have recognized that data and analytics are drivers of innovation and no longer a mere hygiene factor. To unlock this potential, data must be integrated in a purposeful way. Complex system landscapes and isolated data silos make this difficult. Technologies for the successful implementation of data-driven management must be deployed correctly.
The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose, which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since the corresponding training was also carried out on image data of similarly high quality. However, if the image data contains image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts caused by image and video compression. Typical application scenarios for video compression are narrowband transmission channels, for which video coding is required but a subsequent classification is to be carried out on the receiver side. In this paper we present a special H.264/Advanced Video Codec (AVC) based video codec that allows certain regions of a picture to be coded with near-constant picture quality, in order to enable reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We combined this feature with very-low-latency operation, which is usually also required in remote-control application scenarios. The codec has been implemented as a fully hardwired hardware architecture capable of High Definition video, which is suitable for Field Programmable Gate Arrays.
How inclusive are we?
(2022)
ACM SIGMOD, VLDB and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide gender, ethnicity, and other properties of the authors. We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and compare women's authorship percentages in single-blind and double-blind venues over the years. We also cross-compare the results obtained for data management with those of other relevant areas in Computer Science.
Advances in Web 2.0 technologies have led to the widespread assimilation of electronic commerce platforms as an innovative shopping method and an alternative to traditional shopping. However, due to pro-technology bias, scholars focus more on adopting technology, and slightly less attention has been given to the impact of electronic word of mouth (eWOM) on customers’ intention to use social commerce. This study addresses the gap by examining the intention through exploring the effect of eWOM on males’ and females’ intentions and identifying the mediation of perceived crowding. To this end, we adopted a dual-stage multi-group structural equation modeling and artificial neural network (SEM-ANN) approach. We successfully extended the eWOM concept by integrating negative and positive factors and perceived crowding. The results reveal the causal and non-compensatory relationships between the constructs. The variables supported by the SEM analysis are adopted as the ANN model’s input neurons. According to the natural significance obtained from the ANN approach, males’ intentions to accept social commerce are related mainly to helping the company, followed by core functionalities. In contrast, females are highly influenced by technical aspects and mishandling. The ANN model predicts customers’ intentions to use social commerce with an accuracy of 97%. We discuss the theoretical and practical implications of increasing customers’ intention toward social commerce channels among consumers based on our findings.
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. Also, it should be possible to parse the developed languages quickly to handle extensive sets of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it is unable to handle left recursion properly. Existing solutions either rewrite left-recursive rules only partially and forbid the rest, or use complex extensions to packrat parsing that are hard to understand and costly. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and enhance it to work for general left-recursive grammars. The second approach is to design a grammar compilation process that finds left recursion before parsing, thereby reducing computational costs wherever possible, and that generates ready-to-use parser classes.
Rewriting parsing expression grammars in full generality unveils such a large number of cases that any rewriting algorithm exceeds the complexity of other left-recursion-capable parsing algorithms; lookahead operators introduce this complexity. However, most languages have only small left-recursive portions and, in virtually all cases, no indirect or hidden left recursion. This means that distinguishing the left-recursive parts of a grammar from the non-left-recursive ones holds great improvement potential for existing parsers.
In this report, we list all the steps required for grammar rewriting to handle left recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. We also describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and the possible interactions with the already existing parser Ohm/S. We quantitatively benchmarked this framework, focusing on parsing time and on its usability in a live programming context. Compared with Ohm, we achieved massive parsing time improvements while preserving the ability to use our parser as a live programming tool.
This work matters for two reasons: we outline the difficulties and complexity that come with grammar rewriting, and we remove the existing limitations of left recursion by eliminating it before parsing.
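To illustrate the kind of rewriting discussed above, here is a deliberately simplified sketch (our own illustration, not the report's actual algorithm, and without lookahead operators) of eliminating direct left recursion from a single rule before parsing: a rule A <- A alpha / beta becomes the iterative form A <- beta A_tail with A_tail <- alpha A_tail / empty.

```python
def rewrite_direct_left_recursion(name, alternatives):
    """alternatives: list of symbol tuples, e.g. [("A", "+", "T"), ("T",)].

    Splits the rule into left-recursive alternatives (starting with the rule's
    own name) and base alternatives, then emits an equivalent pair of rules
    without direct left recursion. The fresh rule name is hypothetical.
    """
    recursive = [alt[1:] for alt in alternatives if alt and alt[0] == name]
    base = [alt for alt in alternatives if not alt or alt[0] != name]
    if not recursive:
        return {name: alternatives}  # nothing to rewrite

    tail = name + "_tail"  # hypothetical fresh, non-colliding rule name
    return {
        name: [alt + (tail,) for alt in base],
        tail: [alpha + (tail,) for alpha in recursive] + [()],  # () = empty
    }


# Classic arithmetic example: E <- E "+" T / T
print(rewrite_direct_left_recursion("E", [("E", "+", "T"), ("T",)]))
# → {'E': [('T', 'E_tail')], 'E_tail': [('+', 'T', 'E_tail'), ()]}
```

Note that a real implementation, as the report describes, must additionally restructure the syntax tree afterwards, because the rewritten grammar naturally produces right-leaning trees for what should be left-associative operators.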
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
No existing software solution covers the resulting requirements in their entirety without sacrificing either the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
The analysis of behavioral models such as Graph Transformation Systems (GTSs) is of central importance in model-driven engineering. However, GTSs often result in intractably large or even infinite state spaces and may be equipped with multiple or even infinitely many start graphs. To mitigate these problems, static analysis techniques based on finite symbolic representations of sets of states or paths thereof have been devised. We focus on the technique of k-induction for establishing invariants specified using graph conditions. To this end, k-induction generates symbolic paths backwards from a symbolic state representing a violation of a candidate invariant, in order to gather information on how that violation could have been reached, possibly obtaining contradictions to assumed invariants. However, GTSs in which multiple agents regularly perform actions independently of each other cannot currently be analyzed using this technique, as the independence among backward steps may prevent the gathering of relevant knowledge altogether.
In this paper, we extend k-induction to GTSs with multiple agents, thereby supporting a wide range of additional GTSs. As a running example, we consider an unbounded number of shuttles driving on a large-scale track topology, which adjust their velocity to speed limits to avoid derailing. As a central contribution, we develop pruning techniques based on causality and independence among backward steps, and we verify that k-induction remains sound under this adaptation and terminates in cases where it did not terminate before.
Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. It is essential to analyze whether such systems with probabilistic timed behavior adhere to a given specification. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can suitably capture the structure dynamics as well as the probabilistic and timed behavior of the system. Model checking support for PTGTSs with respect to properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time.
In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult developers to implement their vision, which can result in misunderstandings. Often, the resulting scrollytelling does not match the designer's vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions we examined for authoring interactive content are not optimally suited for creating highly customized scrollytellings while still being able to manipulate all their elements programmatically. Based on Typeshift's experience and expertise, we developed an editor for authoring scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the scrollytelling creation process on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling.
Because the editor lives inside the lively.next environment, both groups can work with a set of tools familiar to them. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. Developers and content designers thereby benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases such that tests with a high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to which extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of the current test schedules and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
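The APFD metric used to score these schedules can be computed in a few lines. The following is our own illustrative sketch with hypothetical test and fault names, using the standard formula APFD = 1 - sum(TF_i)/(n*m) + 1/(2n), where TF_i is the 1-based position of the first test revealing fault i:

```python
def apfd(schedule, faults_of):
    """Average Percentage of Faults Detected for an ordered test schedule.

    schedule:  ordered list of test names.
    faults_of: mapping from test name to the set of faults it reveals.
    """
    detected = set().union(*(faults_of[t] for t in schedule))
    n, m = len(schedule), len(detected)
    first_pos = {}
    for pos, test in enumerate(schedule, start=1):
        for fault in faults_of[test]:
            first_pos.setdefault(fault, pos)  # keep earliest detection only
    return 1 - sum(first_pos[f] for f in detected) / (n * m) + 1 / (2 * n)


faults_of = {"t1": set(), "t2": {"f1"}, "t3": {"f2"}, "t4": set()}
print(apfd(["t1", "t2", "t3", "t4"], faults_of))  # failing tests late: 0.5
print(apfd(["t2", "t3", "t1", "t4"], faults_of))  # failing tests first: 0.75
```

Reordering the schedule so that the failing tests run first raises the score, which is precisely the optimization potential the study quantifies.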
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
The transversal hypergraph problem asks to enumerate the minimal hitting sets of a hypergraph. If the solutions have bounded size, Eiter and Gottlob [SICOMP'95] gave an algorithm running in output-polynomial time, but whose space requirement also scales with the output. We improve this to polynomial delay and space. Central to our approach is the extension problem, deciding for a set X of vertices whether it is contained in any minimal hitting set. We show that this is one of the first natural problems to be W[3]-complete. We give an algorithm for the extension problem running in time O(m^{|X|+1} n) and prove a SETH lower bound showing that this is close to optimal. We apply our enumeration method to the discovery problem of minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
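For intuition, here is a brute-force sketch (exponential in the number of vertices, unlike the paper's polynomial-delay algorithm; all names are ours) of the two notions discussed above: enumerating the minimal hitting sets of a hypergraph, and the extension problem of deciding whether a vertex set X is contained in some minimal hitting set.

```python
from itertools import combinations


def hits(candidate, hypergraph):
    """A hitting set intersects every hyperedge."""
    return all(candidate & edge for edge in hypergraph)


def minimal_hitting_sets(vertices, hypergraph):
    """Enumerate all inclusion-minimal hitting sets, smallest first.

    Iterating by increasing size guarantees that any previously recorded
    hitting set that is a subset of the candidate witnesses non-minimality.
    """
    minimal = []
    for size in range(len(vertices) + 1):
        for combo in combinations(sorted(vertices), size):
            candidate = set(combo)
            if hits(candidate, hypergraph) and not any(m <= candidate for m in minimal):
                minimal.append(candidate)
    return minimal


def extends(x, vertices, hypergraph):
    """Extension problem: is X contained in some minimal hitting set?"""
    return any(x <= m for m in minimal_hitting_sets(vertices, hypergraph))


h = [{1, 2}, {2, 3}, {1, 3}]  # the triangle hypergraph
print(minimal_hitting_sets({1, 2, 3}, h))  # [{1, 2}, {1, 3}, {2, 3}]
print(extends({1}, {1, 2, 3}, h))          # True
print(extends({1, 2, 3}, {1, 2, 3}, h))    # False: the full set is not minimal
```

The W[3]-completeness result concerns exactly the `extends` decision above; the paper's contribution is answering it without materializing all minimal hitting sets first.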
Upon application to the solely competent Deutsche Rentenversicherung Bund, the status determination procedure (Statusfeststellungsverfahren) provides a binding assessment of the often complicated and consequential distinction between self-employed activity and dependent employment. On 1 April 2022, the status determination procedure was comprehensively reformed. In practice, the amendments introduced have so far proven themselves to varying degrees.
In its judgment of 1 March 2022 (NZA 2022, 780), the BAG once again ruled on the validity of a repayment clause in a continuing-education agreement. The decision joins a series of judgments on this subject that is not easy to survey. We take it as an occasion to give an overview of the case law.
Krieg in Europa
(2022)
On 24 February 2022, Russia's war of aggression against Ukraine began. Since then, numerous Ukrainian citizens have been fleeing to the European Union every day, many of them to Germany. The immediate priority is securing basic needs such as food, accommodation, and medical care. In addition, employers are asking how they can employ Ukrainian citizens as quickly as possible. We give an overview of the options for integrating Ukrainian refugees into the German labor market as quickly as possible.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the applications or running services, which enhances the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without a noticeable impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of hardware failures, including server, switch, and storage failures. This increases the reliability of cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments: cloud load balancing, power saving, and fault tolerance all depend on live migration to optimize the usage of virtual and physical resources. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments, but its cost cannot be ignored. The cost of live migration includes the migration time, downtime, network overhead, increased power consumption, and CPU overhead.
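The migration-time and downtime components listed above can be illustrated with a textbook-style pre-copy model. This is a generic analytic sketch, not the thesis's empirical VMware-based prediction model, and all parameter names are hypothetical: each pre-copy round retransmits the memory dirtied during the previous round, until the remainder is small enough to be sent within the downtime budget.

```python
def precopy_migration_cost(mem_mb, dirty_rate_mbps, bw_mbps,
                           max_downtime_s, max_rounds=30):
    """Return (total_migration_time_s, downtime_s, rounds) of a pre-copy
    live migration under a simple fixed-rate model."""
    assert dirty_rate_mbps < bw_mbps, "pre-copy cannot converge otherwise"
    total, remaining, rounds = 0.0, float(mem_mb), 0
    while remaining / bw_mbps > max_downtime_s and rounds < max_rounds:
        round_time = remaining / bw_mbps          # send the current dirty set
        total += round_time
        remaining = dirty_rate_mbps * round_time  # memory dirtied meanwhile
        rounds += 1
    downtime = remaining / bw_mbps                # final stop-and-copy phase
    return total + downtime, downtime, rounds


# 4 GiB VM, 100 MB/s dirty rate, 1000 MB/s link, 300 ms downtime budget.
total, downtime, rounds = precopy_migration_cost(4096, 100, 1000, 0.3)
print(rounds, round(total, 3), round(downtime, 3))
```

Even this crude model makes the thesis's motivation visible: migration time and downtime depend non-linearly on workload dirtying behavior and available bandwidth, which is why an empirical, learned cost model is needed in practice.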
IT administrators often run live migrations of virtual machines without any idea of the migration cost, so resource bottlenecks, higher migration costs, and migration failures can occur. The first problem we discuss in this thesis is how to model the cost of live migration of virtual machines. Secondly, we investigate how machine learning techniques can help cloud administrators estimate this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that cloud administrators can integrate into cloud administration portals to answer the research questions raised above.
Our research methodology is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to propose a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction and on predicting data-center network utilization. Live migration with persistent memory clusters is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here could be integrated with the VMware vSphere cluster portal, so that IT administrators can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
Testing results show that our proposed approach for predicting the cost of VM live migration achieves acceptable results, with less than 20% prediction error, and can be easily implemented and integrated with VMware vSphere, as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization can help network administrators save migration time while utilizing a higher network rate and achieving a higher probability of success.
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both servers with persistent memory and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs backed by persistent memory.
Digital media have become an integral part of our everyday lives. One of the most central areas of our society, school education, must not fall behind here. Whenever the use of digitally supported tools is pedagogically sensible, it must be possible within a secure framework. The HPI Schul-Cloud has followed this vision, which was initiated by the 2016 National IT Summit and prefaces this report. Over the past five years, it has developed from a pilot project into an indispensable IT infrastructure for numerous schools. During the coronavirus pandemic, it provided important support to many thousands of schools in fulfilling their educational mission. It has thus more than achieved its goal of providing a future-proof and privacy-compliant infrastructure for the digital support of teaching. Currently, around 1.4 million teachers and students across Germany and at German schools abroad use the HPI Schul-Cloud.