004 Data processing; Computer science
Has Fulltext
- yes (411)
Document Type
- Monograph/Edited Volume (121)
- Doctoral Thesis (116)
- Article (86)
- Postprint (50)
- Conference Proceeding (30)
- Master's Thesis (3)
- Preprint (3)
- Bachelor Thesis (1)
- Habilitation Thesis (1)
Language
- English (411)
Keywords
- machine learning (14)
- answer set programming (10)
- Cloud Computing (9)
- Hasso-Plattner-Institut (9)
- cloud computing (9)
- Forschungskolleg (8)
- Forschungsprojekte (8)
- Future SOC Lab (8)
- Hasso Plattner Institute (8)
- In-Memory Technologie (8)
- Klausurtagung (8)
- Maschinelles Lernen (8)
- Multicore Architekturen (8)
- Service-oriented Systems Engineering (8)
- maschinelles Lernen (7)
- Datenintegration (6)
- Geschäftsprozessmanagement (6)
- Machine Learning (6)
- Modellierung (6)
- multicore architectures (6)
- research projects (6)
- Computer Science Education (5)
- Ph.D. retreat (5)
- Prozessmodellierung (5)
- Smalltalk (5)
- Verifikation (5)
- business process management (5)
- cyber-physical systems (5)
- quantitative analysis (5)
- service-oriented systems engineering (5)
- virtual machines (5)
- Antwortmengenprogrammierung (4)
- Betriebssysteme (4)
- In-Memory technology (4)
- Research School (4)
- Sicherheit (4)
- Virtuelle Maschinen (4)
- Visualisierung (4)
- Vorhersage (4)
- middleware (4)
- nested graph conditions (4)
- operating systems (4)
- privacy (4)
- probabilistic timed systems (4)
- process mining (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- security (4)
- verification (4)
- 3D visualization (3)
- Answer Set Programming (3)
- Big Data (3)
- Competence Measurement (3)
- DPLL (3)
- Design Thinking (3)
- Digitalisierung (3)
- E-Learning (3)
- Graphtransformationen (3)
- HCI (3)
- Informatics (3)
- Informatics Education (3)
- Künstliche Intelligenz (3)
- Laufzeitmodelle (3)
- MOOCs (3)
- Middleware (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Ph.D. Retreat (3)
- Process Mining (3)
- Process Modeling (3)
- SAT (3)
- Secondary Education (3)
- Tripel-Graph-Grammatik (3)
- Virtualisierung (3)
- abstraction (3)
- artifical intelligence (3)
- cloud (3)
- computational thinking (3)
- data integration (3)
- data profiling (3)
- debugging (3)
- digitalization (3)
- geospatial data (3)
- graph transformation (3)
- graph transformation systems (3)
- higher education (3)
- in-memory technology (3)
- künstliche Intelligenz (3)
- model (3)
- model-driven engineering (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- openHPI (3)
- outlier detection (3)
- prediction (3)
- programming (3)
- social media (3)
- systems biology (3)
- virtuelle Maschinen (3)
- visualization (3)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmen (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Anomalieerkennung (2)
- Aspektorientierte Softwareentwicklung (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Automatisches Beweisen (2)
- BPMN (2)
- Bayesian networks (2)
- Bildverarbeitung (2)
- Bounded Model Checking (2)
- CSC (2)
- CSCW (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Competence Modelling (2)
- Computational thinking (2)
- Computer Science (2)
- Computersicherheit (2)
- Computing (2)
- Constraint Solving (2)
- Data Integration (2)
- Data Modeling (2)
- Data Privacy (2)
- Data Profiling (2)
- Databases (2)
- Datenanalyse (2)
- Datenaufbereitung (2)
- Datenbanken (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deduction (2)
- Diversity (2)
- Duplikaterkennung (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- European Bioinformatics Institute (2)
- Evolution (2)
- Exploration (2)
- Fehlende Daten (2)
- GPU (2)
- Game Dynamics (2)
- Geodaten (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- ICA (2)
- ICT (2)
- Identitätsmanagement (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Informatik (2)
- Informationsextraktion (2)
- Innovation (2)
- Java (2)
- Key Competencies (2)
- Klausellernen (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lively Kernel (2)
- Logic Programming (2)
- Logics (2)
- MOOC (2)
- Measurement (2)
- Megamodell (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Optimization (2)
- Planing (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privacy (2)
- Privatsphäre (2)
- Problem Solving (2)
- Process (2)
- Programmierung (2)
- Prozess (2)
- RDF (2)
- Relevanz (2)
- Ressourcenoptimierung (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Semantic Web (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- SysML (2)
- Systemsoftware (2)
- Temporallogik (2)
- Theorembeweisen (2)
- Unifikation (2)
- Verhalten (2)
- Versionsverwaltung (2)
- Werkzeuge (2)
- adaptive Systeme (2)
- adaptive systems (2)
- anomaly detection (2)
- big data services (2)
- blockchain (2)
- bounded model checking (2)
- business processes (2)
- causal discovery (2)
- causal structure learning (2)
- clustering (2)
- collaboration (2)
- competence (2)
- complexity (2)
- computer science (2)
- computer science education (2)
- computer security (2)
- computer vision (2)
- consistency (2)
- continuous integration (2)
- cyber-physische Systeme (2)
- data (2)
- data preparation (2)
- database systems (2)
- deep learning (2)
- design thinking (2)
- digital education (2)
- digital whiteboard (2)
- direct manipulation (2)
- distributed systems (2)
- dynamic reconfiguration (2)
- education (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- graph constraints (2)
- identity management (2)
- image processing (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- informatics education (2)
- innovation (2)
- intrusion detection (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- kontinuierliche Integration (2)
- live programming (2)
- logic programming (2)
- maschinelles Sehen (2)
- missing data (2)
- mobile (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- modeling (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- propositional satisfiability (2)
- real-time (2)
- relevance (2)
- runtime models (2)
- schema discovery (2)
- selection (2)
- self-sovereign identity (2)
- service-oriented systems (2)
- simulation (2)
- smalltalk (2)
- smart contracts (2)
- solver (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- synchronization (2)
- systems of systems (2)
- systems software (2)
- teacher training (2)
- testing (2)
- theorem (2)
- tiefes Lernen (2)
- tools (2)
- topics (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- user experience (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtual reality (2)
- virtualization (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- 0-day (1)
- 21st century skills, (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D point clouds (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ABRACADABRA (1)
- ACINQ (1)
- AMNET (1)
- APT (1)
- ASIC (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Achievement (1)
- Ackerschmalwand (1)
- Active Evaluation (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Adaptive hypermedia (1)
- Advanced Persistent Threats (1)
- Adversarial Learning (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Angewandte Spieltheorie (1)
- Angriffserkennung (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Antwortmengen Programmierung (1)
- Applied Game Theory (1)
- Apriori (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Arduino (1)
- Artem Erkomaishvili (1)
- Artificial Intelligence (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Assessment (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attributsicherung (1)
- Augmented reality (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Austria (1)
- Auswirkungen (1)
- Authentifizierung (1)
- Automated Theorem Proving (1)
- BCCC (1)
- BCI (1)
- BPM (1)
- BSS (1)
- BTC (1)
- Bachelor (1)
- Bachelorstudierende der Informatik (1)
- Bahnwesen (1)
- Bank (1)
- Basic Service (1)
- Batchprozesse (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Beweistheorie (1)
- Bibliometrics (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Bisimulation (1)
- BitShares (1)
- Bitcoin (1)
- Bitcoin Core (1)
- Blended learning (1)
- Blockchain (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockchains (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bloom’s Taxonomy (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Business Process Management (1)
- Business Process Models (1)
- Byzantine Agreement (1)
- CEP (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Cactus (1)
- Capability approach (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Challenges (1)
- Change Management (1)
- Choreographien (1)
- CityGML (1)
- Classification (1)
- Clause Learning (1)
- Cloud (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clusteranalyse (1)
- Clustering (1)
- Coccinelle (1)
- Cognitive Skills (1)
- Colored Coins (1)
- Common Spatial Pattern (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Compliance (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational Thinking (1)
- Computer Science in Context (1)
- Computergrafik (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Constraints (1)
- Contest (1)
- Context-oriented Programming (1)
- Contextualisation (1)
- Contracts (1)
- Contradictions (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covariate Shift (1)
- Creative (1)
- Crime mapping (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DAO (1)
- DBMS (1)
- DDoS (1)
- DPoS (1)
- Data Analysis (1)
- Data Dependency (1)
- Data Management (1)
- Data Quality (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Database Cost Model (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank (1)
- Datenbank-Kostenmodell (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Dekubitus (1)
- Delegated Proof-of-Stake (1)
- Delphi study (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Revolution (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Discrimination Networks (1)
- Dispositional learning analytics (1)
- Distributed Computing (1)
- Distributed Proof-of-Research (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Domänenspezifische Modellierung (1)
- Dubletten (1)
- Duplicate Detection (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamic assessment (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- E-Wallet (1)
- ECDSA (1)
- EHR (1)
- EPA (1)
- Early Literacy (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Economics (1)
- Educational Standards (1)
- Educational software (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Electronic and spintronic devices (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Embedded Systems (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Eris (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Escherichia-coli (1)
- Ether (1)
- Ethereum (1)
- Ethics (1)
- Euclid’s algorithm (1)
- European Union (1)
- Europäische Union (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Execution Semantics (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FMC-QE (1)
- FRP (1)
- Facebook (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Federated Byzantine Agreement (1)
- Feedback (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlerbeseitigung (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fibonacci numbers (1)
- Fintech (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- FollowMyVote (1)
- Fork (1)
- Formale Verifikation (1)
- Formative assessment (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Fredholm complexes (1)
- Function (1)
- Functional Lenses (1)
- Fundamental Ideas (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gender (1)
- General subject “Information” (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatiksysteme (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphensuche (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Gridcoin (1)
- HENSHIN (1)
- HPI Schul-Cloud (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Technologie (1)
- Hauptspeicherdatenbank (1)
- Herodotos (1)
- Heuristiken (1)
- High-Level Synthesis (1)
- History of pattern occurrences (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- ICT Competence (1)
- ICT competencies (1)
- ICT curriculum (1)
- ICT skills (1)
- IDS (1)
- ISSEP (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identität (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- In-Memory (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Infinite State (1)
- Informatik-Studiengänge (1)
- Informatikdidaktik (1)
- Informatikvoraussetzungen (1)
- Information Ethics (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Inquiry-based Learning (1)
- Integration (1)
- Interactive Rendering (1)
- Interaktionsmodel (1)
- Interaktionsmodellierung (1)
- Interaktives Rendering (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet applications (1)
- Internet der Dinge (1)
- Internet of Things (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Interpreter (1)
- Intersectionality (1)
- Interval Timed Automata (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- IoT (1)
- JCop (1)
- JSP (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Java Security Framework (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernmethoden (1)
- Kette (1)
- Klassifikation (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kommunikation (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontext (1)
- Kreativität (1)
- Kryptographie (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LOD (1)
- Landmarken (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learners (1)
- Learning Analytics (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lebendigkeit (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehrer (1)
- Leistungsfähigkeit (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- LiDAR (1)
- Lightning Network (1)
- Liguistisch (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Lock-Time-Parameter (1)
- Logarithm (1)
- Logikkalkül (1)
- Logiksynthese (1)
- Lower Bounds (1)
- Lower Secondary Level (1)
- Lösungsraum (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- MERLOT (1)
- Machine-Learning (1)
- Machinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Maschinen (1)
- Massive Open Online Courses (1)
- Matrizen-Eigenwertaufgabe (1)
- Megamodel (1)
- Megamodels (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Mensch-Computer-Interaktion (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metaverse (1)
- Micropayment-Kanäle (1)
- Microsoft Azur (1)
- Migration (1)
- Mindset (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile learning (1)
- Mobile-Mapping (1)
- Mobilgeräte (1)
- Model Consistency (1)
- Model Execution (1)
- Model Management (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Music Technology (1)
- Muster (1)
- Musterabgleich (1)
- Mustererkennung (1)
- N-of-1 trial (1)
- NASDAQ (1)
- NUI (1)
- NameID (1)
- Namecoin (1)
- Nash Equilibrium (1)
- Natural Science Education (1)
- Navigation (1)
- Nested Graph Conditions (1)
- Network Creation Game (1)
- Netzwerk (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- NoSQL (1)
- Non-photorealistic Rendering (1)
- Norway (1)
- Novice programmers (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Off-Chain-Transaktionen (1)
- Omega (1)
- Onename (1)
- Online Course (1)
- Online Learning Environments (1)
- Online-Learning (1)
- Online-Lernen (1)
- Onlinekurs (1)
- Ontologie (1)
- Ontologies (1)
- OpenBazaar (1)
- Optimierungsproblem (1)
- Oracles (1)
- Organisationsveränderung (1)
- Orphan Block (1)
- Owner-Retained Access Control (ORAC) (1)
- P2P (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Paralleles Rechnen (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Patterns (1)
- Pedagogical content knowledge (1)
- Peer-to-Peer Netz (1)
- Peer-to-Peer-Netz ; GRID computing ; Zuverlässigkeit ; Web Services ; Betriebsmittelverwaltung ; Migration (1)
- Peercoin (1)
- Performance (1)
- Performance Prediction (1)
- Personal Data (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Physical Science (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polymerase Chain Reaction Experiment (1)
- Posenabschätzung (1)
- Prediction Game (1)
- Predictive Models (1)
- Preprocessing (1)
- Primary informatics (1)
- Probabilistische Modelle (1)
- Problem solving (1)
- Problem solving strategies (1)
- Probleme in der Studie (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Programmierabstraktionen (1)
- Programmieren (1)
- Programmiererlebnis (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Programming environments for children (1)
- Programming learning (1)
- Prolog (1)
- Proof Theory (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Prototyping (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Pytho n (1)
- Python (1)
- Quanten-Computing (1)
- Quantenkryptographie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- RT_PREEMT patch (1)
- RT_PREEMT-Patch (1)
- Real-Time Rendering (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Reconfigurable (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Reparatur (1)
- Research Projects (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Ripple (1)
- Ruby (1)
- Runtime Binding (1)
- Runtime-monitoring (1)
- SCED (1)
- SCP (1)
- SHA (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security Pattern (1)
- SPARQL (1)
- SPV (1)
- SQL (1)
- STEM (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Satisfiability (1)
- Savanne (1)
- Scalability (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schlüsselkompetenzen (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schwierigkeitsgrad (1)
- Scientific understanding of Information (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Second Life (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektion (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantik Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Semiconductors (1)
- Sensors (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systme (1)
- Shader (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Simultane Diagonalisierung (1)
- Single Trial Analysis (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skelettberechnung (1)
- Skriptsprachen (1)
- Slock.it (1)
- Small Private Online Courses (1)
- SoaML (1)
- Social (1)
- Social impact (1)
- Sociotechnical Design (1)
- Soft Fork (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Sozialen Medien (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Standardisierung (1)
- Standards (1)
- Statistical Tests (1)
- Statistische Tests (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stilisierung (1)
- Storj (1)
- Structuring (1)
- Strukturierung (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studie (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System of Systems (1)
- Systematics (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systems of Systems (1)
- TPTP (1)
- Tableaumethode (1)
- Tasks (1)
- Taxonomy (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Tele-Lab (1)
- Tele-Teaching (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Terminology (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierungs (1)
- Tests (1)
- Texterkennung (1)
- Textklassifikation (1)
- Texturen (1)
- The Bitfury Group (1)
- The DAO (1)
- Theoretischen Vorlesungen (1)
- Theory (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Timed Automata (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajectories (1)
- Trajektorien (1)
- Transaktion (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Two-Way-Peg (1)
- UX (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Unspent Transaction Output (1)
- Untere Schranken (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- User Experience (1)
- VIL (1)
- VM (1)
- VUCA-World (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Verlässlichkeit (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verträge (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtual machines (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Visualization (1)
- Vocational Education (1)
- Vorhersagemodelle (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Watson IoT (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Well-structuredness (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Women and IT (1)
- Workflow (1)
- Wüstenbildung (1)
- XM (1)
- YouTube (1)
- Young People (1)
- ZQSA (1)
- ZQSAT (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- access control (1)
- action language (1)
- activity instance state propagation (1)
- ad hoc learning (1)
- ad hoc messaging network (1)
- adaptiv (1)
- adaptive (1)
- addiction care (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- algorithms (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analogical thinking (1)
- analysis (1)
- anisotropic Kuwahara filter (1)
- anomalies (1)
- answer (1)
- answer set (1)
- anxiety (1)
- app (1)
- approximate joint diagonalization (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture (1)
- archive analysis (1)
- argument mining (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- asset management (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- atomic swap (1)
- attribute assurance (1)
- ausführbare Semantiken (1)
- authentication (1)
- authorship attribution (1)
- automated theorem proving (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- autonomous (1)
- back-in-time (1)
- balance analysis (1)
- bank (1)
- batch processing (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- bibliometric analysis (1)
- bidirectional optimality theory (1)
- bidirectional payment channels (1)
- bild (1)
- bildbasiertes Rendering (1)
- binary representation (1)
- binary search (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- bisimulation (1)
- bitcoin (1)
- bitcoins (1)
- blind source separation (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- Bluemix platform (1)
- bottom–up (1)
- bounded backward model checking (1)
- bpm (1)
- bug tracking (1)
- building models (1)
- built–in predicates (1)
- business informatics (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- bystander (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- chain (1)
- change detection (1)
- change management (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- citation analysis (1)
- classes of logic programs (1)
- classification (1)
- classifier calibration (1)
- classroom language (1)
- clause elimination (1)
- clause learning (1)
- cleansing (1)
- cloud datacenter (1)
- cloud security (1)
- cloud storage (1)
- cluster-analysis (1)
- co-citation analysis (1)
- co-occurrence analysis (1)
- code generation (1)
- cognitive modifiability (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- communication (1)
- community (1)
- competencies (1)
- competency (1)
- complex optimization (1)
- composite service (1)
- compositional analysis (1)
- comprehension (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computer graphics (1)
- computer science teachers (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- computing science education (1)
- concept of algorithm (1)
- concurrency (1)
- confidentiality (1)
- confirmation period (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constraints (1)
- constructionism (1)
- consumer behavior (1)
- contest period (1)
- context awareness (1)
- continuous testing (1)
- contracts (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- crochet (1)
- cross-chain (1)
- crosscutting wrappers (1)
- cryptography (1)
- cs4fn (1)
- cscw (1)
- cultural heritage (1)
- curriculum theory (1)
- cyber-physikalische Systeme (1)
- cyberbullying (1)
- cybersecurity (1)
- data analytics (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data extraction (1)
- data flow correctness (1)
- data mining (1)
- data models (1)
- data objects (1)
- data quality (1)
- data science (1)
- data set (1)
- data states (1)
- data synthesis (1)
- data visualization (1)
- data wrangling (1)
- data-driven (1)
- database (1)
- database optimization (1)
- database technology (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decentralized autonomous organization (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deductive databases (1)
- deduplication (1)
- deep Gaussian processes (1)
- deferred choice (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dependability (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depression (1)
- depressive symptoms (1)
- desertification (1)
- design research (1)
- EUREMA modeling language (1)
- development tools (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- diagnosis (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- difficulty (1)
- difficulty target (1)
- diffusion (1)
- digital enlightenment (1)
- digital health (1)
- digital interventions (1)
- digital learning platform (1)
- digital picture archive (1)
- digital sovereignty (1)
- digital strategy (1)
- digital transformation (1)
- digitale Aufklärung (1)
- digitale Bildung (1)
- digitale Lernplattform (1)
- digitale Souveränität (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- divide and conquer (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- doppelter Hashwert (1)
- double hashing (1)
- dropout (1)
- duplicate detection (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- e-learning (1)
- e-learning platform (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- educational timetabling (1)
- edutainment (1)
- eindeutig (1)
- eingebettete Systeme (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- empathy (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- engaged computing (1)
- entity alignment (1)
- entity resolution (1)
- environments (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- erzeugende gegnerische Netzwerke (1)
- evaluation (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolving systems (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- exponentiation (1)
- extend (1)
- extensions of logic programs (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- fatty acid amide hydrolase (1)
- fault tolerance (1)
- federated voting (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- formal framework (1)
- formal semantics (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- fortschrittliche Angriffe (1)
- forward / backward chaining (1)
- fun (1)
- function symbols (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gender (1)
- gene expression matrix (1)
- general secondary education (1)
- generalization (1)
- generalized discrimination networks (1)
- generalized logic programs (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geovisualization (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph mining (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-search (1)
- graph-transformations (1)
- hashrate (1)
- hate speech detection (1)
- heterogeneity (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- high school (1)
- higher (1)
- history-aware runtime models (1)
- holistic (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human computer interaction (1)
- human-centered (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity (1)
- image (1)
- image data analysis (1)
- image stylization (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immutable values (1)
- in-memory (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individual effects (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inference (1)
- informal and formal learning (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information extraction (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- instruction (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interaction (1)
- interaction modeling (1)
- interactive course (1)
- interactive media (1)
- interactive simulation (1)
- interactive technologies (1)
- interactive workshop (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international comparison (1)
- international study (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invasive aspects (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iterative regularization (1)
- job shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key competencies (1)
- key discovery (1)
- kinaesthetic teaching (1)
- knowledge discovery (1)
- knowledge representation (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law (1)
- leadership (1)
- leanCoP (1)
- learning (1)
- lebenslanges Lernen (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- lesson (1)
- level-replacement systems (1)
- life-centered (1)
- lifelong learning (1)
- linear programming problem (1)
- linguistic (1)
- link discovery (1)
- linked data (1)
- live migration (1)
- liveness (1)
- load balancing (1)
- location-based (1)
- logic (1)
- logic synthesis (1)
- logical calculus (1)
- logical signaling networks (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- longitudinal (1)
- loop formulas (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- mandatory computer science foundations (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- mediated learning experience (1)
- medical (1)
- medical documentation (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory (1)
- memory optimization (1)
- menschenzentriert (1)
- merged mining (1)
- merkle root (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- method comparison (1)
- metric temporal logic (1)
- metric termporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- micropayment (1)
- micropayment channels (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- misconceptions (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile application (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- model generation (1)
- model repair (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modelling (1)
- molecular network (1)
- molecular networks (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- monitoring (1)
- morphic (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-valued logic (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multiuser (1)
- musical scales (1)
- musikalische Tonleitern (1)
- mutual information (1)
- named entity mining (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network (1)
- network protocols (1)
- networks (1)
- networks-on-chip (1)
- neural networks (1)
- news media (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- nonce (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- novelty detection (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- omega (1)
- online assistance (1)
- online course creation (1)
- online course design (1)
- operating system (1)
- optical character recognition (1)
- oracles (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- paraconsistency (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel processing (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parameter (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedagogy (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- perception (1)
- perception differences (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal (1)
- personal response systems (1)
- persönliche Informationen (1)
- pervasive learning (1)
- petri net (1)
- philosophical foundation of informatics pedagogy (1)
- physical computing tools (1)
- placement (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- pose estimation (1)
- pre-primary level (1)
- preference handling (1)
- preferences (1)
- prefetching (1)
- preprocessing (1)
- presentation (1)
- primary education (1)
- primary level (1)
- primary school (1)
- prime pair (1)
- primer pair design (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- problem-solving (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process instance (1)
- process model search (1)
- process models (1)
- process refinement (1)
- processor hardware (1)
- production planning and control (1)
- professional development (1)
- professors (1)
- profiling (1)
- program analysis (1)
- programming abstraction (1)
- programming experience (1)
- programming in context (1)
- programming language (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- proving (1)
- psychotherapy (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantile normalization (1)
- quantum computing (1)
- quantum cryptography (1)
- query matching (1)
- query optimization (1)
- querying (1)
- quorum slices (1)
- railways (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- real-time systems (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- reflection (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability assessment (1)
- remote collaboration (1)
- remote sensing (1)
- repair (1)
- requirements engineering (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- reusable aspects (1)
- reverse engineering (1)
- reversible reaction (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- rootstock (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiabilitiy solving (1)
- savanna (1)
- scalability of blockchain (1)
- scarce tokens (1)
- scheduling (1)
- schwach überwachtes maschinelles Lernen (1)
- science (1)
- scm (1)
- scripting languages (1)
- scrollytelling (1)
- search (1)
- secondary computer science education (1)
- secondary education (1)
- security analytics (1)
- security policies (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- self-adaptive software (1)
- self-driving (1)
- self-efficacy (1)
- semantic analysis (1)
- semantic classification (1)
- semantic web services (1)
- semantics (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sidechain (1)
- sign language (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- single-case experimental design (1)
- situated learning (1)
- situational awareness (1)
- skeletonization (1)
- small talk (1)
- social network analysis (1)
- social networking (1)
- societal effects (1)
- software analysis (1)
- software architecture (1)
- software development (1)
- software development processes (1)
- software engineering (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- sorting (1)
- spatio-temporal (1)
- specific prime pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spreadsheets (1)
- squeak (1)
- stable model semantics (1)
- standardization (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- stratification (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- students’ conceptions (1)
- students’ knowledge (1)
- study (1)
- study problems (1)
- stylization (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systematic literature review (1)
- systems (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tangible media (1)
- taxonomy (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teachers (1)
- teaching informatics in general education (1)
- teaching material (1)
- tele-TASK (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terrain models (1)
- test case prioritization (1)
- test items (1)
- test results (1)
- test-driven fault navigation (1)
- text based classification methods (1)
- text classification (1)
- textures (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- timed automata (1)
- tool building (1)
- top–down (1)
- touch input (1)
- tptp (1)
- tracing (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- trajectories (1)
- transaction (1)
- transduction (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust (1)
- tuple spaces (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unification (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- user interaction (1)
- user interfaces (1)
- user-centred (1)
- user-generated content (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Anwendungsbedingungen (1)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual (1)
- virtual 3D city model (1)
- virtual 3D city models (1)
- virtual machine (1)
- virtual mobility (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle 3D-Stadtmodelle (1)
- virtuelle Realität (1)
- visual language (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- weak supervision (1)
- wearables (1)
- web application (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- word order freezing (1)
- word sense disambiguation (1)
- workflow patterns (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
- ‘unplugged’ computing (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (136)
- Institut für Informatik und Computational Science (108)
- Hasso-Plattner-Institut für Digital Engineering GmbH (98)
- Extern (46)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (8)
- Digital Engineering Fakultät (7)
- Institut für Physik und Astronomie (4)
- Fachgruppe Betriebswirtschaftslehre (2)
- Institut für Umweltwissenschaften und Geographie (2)
Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any imperative programming language, which triggered the proposal of a methodology for mapping imperative programs into equivalent CHR programs. The equivalence of the two programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology can also be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of a rule-based implementation for constraint solvers.
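To illustrate the last point, a global constraint such as all_different can be decomposed into simple rewrite rules that propagate over a constraint store, in the spirit of CHR propagation rules. The Python sketch below is a hypothetical toy encoding of that idea, not the thesis's actual CHR mapping; all names and data are made up:

```python
# A minimal rule-based constraint store in the spirit of CHR:
# constraints live in a store, and propagation rules rewrite domains.

def propagate_neq(store, domains):
    """Propagation rule: neq(X, Y), X fixed to v  ==>  remove v from Y."""
    changed = True
    while changed:
        changed = False
        for (x, y) in store:
            for a, b in ((x, y), (y, x)):
                if len(domains[a]) == 1:
                    v = next(iter(domains[a]))
                    if v in domains[b]:
                        domains[b] = domains[b] - {v}
                        changed = True
    return domains

def all_different(variables):
    """Decompose the all_different global constraint into neq/2 constraints."""
    return [(variables[i], variables[j]) for i in range(len(variables))
            for j in range(i + 1, len(variables))]

domains = {"X": {1}, "Y": {1, 2}, "Z": {1, 2, 3}}
store = all_different(["X", "Y", "Z"])
print(propagate_neq(store, domains))
# X=1 forces Y=2, which in turn forces Z=3
```

The decomposition mirrors how a rule-based solver expresses a global constraint declaratively: the solver's behavior falls out of repeatedly firing small rules until a fixpoint is reached.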
Linked Open Data (LOD) comprises numerous, often large public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique to explore such data. We design an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and Apriori-based algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-Gordian, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
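The discovery problem and its Apriori-style pruning can be sketched very compactly: enumerate column sets bottom-up and skip any superset of a combination already known to be unique, since only minimal combinations are of interest. The sketch below is a simplified stand-in for illustration, not the HCA or GORDIAN algorithm:

```python
from itertools import combinations

def minimal_uccs(rows, columns):
    """Brute-force discovery of minimal unique column combinations."""
    n = len(rows)
    uccs = []
    for size in range(1, len(columns) + 1):
        for combo in combinations(columns, size):
            # Apriori-style pruning: a superset of a UCC cannot be minimal.
            if any(set(u) <= set(combo) for u in uccs):
                continue
            projection = {tuple(row[c] for c in combo) for row in rows}
            if len(projection) == n:  # all projected tuples are distinct
                uccs.append(combo)
    return uccs

rows = [
    {"first": "Ada",  "last": "Lovelace", "city": "London"},
    {"first": "Ada",  "last": "Byron",    "city": "London"},
    {"first": "Alan", "last": "Turing",   "city": "London"},
]
print(minimal_uccs(rows, ["first", "last", "city"]))
# [('last',)] — only "last" uniquely identifies every row
```

Real discovery algorithms add the statistics-based pruning and candidate generation discussed in the paper precisely because this naive enumeration is exponential in the number of columns.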
Technical report
(2019)
Design and implementation of service-oriented architectures raise a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This is manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as collaborations of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
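To make the MST decoding idea concrete: given local scores for attaching argumentative units to each other, the decoder keeps the highest-scoring edges that do not form a cycle. The sketch below is a deliberate simplification with invented scores; it builds an undirected maximum spanning tree with Kruskal's algorithm and union-find, whereas decoders for directed argumentation trees (like the one in the paper) typically use the Chu-Liu/Edmonds algorithm.

```python
# Illustrative MST decoding over hypothetical local attachment scores.
# Simplification: undirected maximum spanning tree via Kruskal + union-find.

def mst_decode(n, scores):
    """n: number of argumentative units; scores: dict (i, j) -> local score."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    edges = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    tree = []
    for (i, j), s in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # adding the edge keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy example: 4 units, pairwise attachment scores from some local model.
scores = {(0, 1): 0.9, (1, 2): 0.7, (0, 2): 0.3, (2, 3): 0.8, (1, 3): 0.2}
print(mst_decode(4, scores))   # -> [(0, 1), (2, 3), (1, 2)]
```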
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without any human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary has been developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in the Italian Sign Language as output. The dictionary contains 3082 signs as a set of avatar animations, each linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN).
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN made it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS has been transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone who is unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability: effectiveness, efficiency and satisfaction. In another study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users and compared the effects of different help modes, including textual, vocal and virtual-character help, on the performance of semi-literate users.
In our newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhance the application's utility by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in the underlying domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations; second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and the proposed benefits. Moreover, we reflect on the methodology applied and finally derive recommendations for future academic bridge policies.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students studying computer science who intend to obtain a bachelor's degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to help reduce the dropout/shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final school grades. This leads to a high percentage of students studying subjects they did not choose. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. According to statistics over previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), both with opposite gender majorities.
The second step of the investigation took place at the department of computer science at the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between the students of these fields, who usually take some joint courses, is important in order to know which changes to teaching methods are needed to address these different student groups. It was a two-stage study using two questionnaires: the main one is based on the Index of Learning Styles Questionnaire of B. A. Solomon and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in learning-style preferences between male and female students of the different study fields, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely into the difficulties, issues, apprehensions and expectations of freshman students studying computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing the field of study, or changing the university. The research followed the same sample of students (with business informatics students being the majority) through more than three semesters. Difficulties and issues during the study were documented, as well as students' attitudes, apprehensions, and expectations. Some professors' and lecturers' opinions and solutions to some students' problems were also documented. Many participants had apprehensions and difficulties, especially towards informatics subjects. Some business informatics participants began to think about changing the university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (the one they had started or the new one they had changed to) without leaving the higher education system.
A survey has been carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female-dominated environment, showing the differences between male and female students in different academic years. We also compare the attitudes of freshman students of two different cultures (the University of Baghdad, Iraq, and the University of Potsdam).
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
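The decryption idea can be illustrated with a toy sketch: mine an abbreviation dictionary from attribute labels that are already mapped in existing workflows, then expand each underscore-separated token of a cryptic label. The dictionary entries below are invented for illustration; the paper's recommender-like approach additionally ranks candidate decryptions.

```python
# Hypothetical sketch of schema decryption: expand abbreviated tokens in
# attribute labels using a dictionary that, in the real approach, would be
# mined from mapped attribute labels in existing ETL workflows.

# Abbreviation -> full word (illustrative entries).
ABBREVIATIONS = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "CUST": "CUSTOMER",
    "NO": "NUMBER",
}

def decrypt_label(label):
    """Expand each underscore-separated token; keep unknown tokens as-is."""
    tokens = label.split("_")
    return "_".join(ABBREVIATIONS.get(t, t) for t in tokens)

print(decrypt_label("UNP_PEN_INT"))   # -> UNPAID_PENALTY_INTEREST
print(decrypt_label("CUST_NO"))       # -> CUSTOMER_NUMBER
```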
In the era of social networks, the internet of things and location-based services, many online services produce huge amounts of data that carry valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, bring the challenge of discovering geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in spatial, temporal and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge: geospatial clustering. The first step of this challenge is to design a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case that we show in the thesis.
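As a drastically simplified, purely illustrative analogue of clustering with a fixed distance and time radius: each point joins the first cluster whose seed lies within both radii, otherwise it seeds a new cluster. The thesis' algorithm is density- and intensity-based and works online; the points and thresholds below are invented.

```python
import math

# Toy clustering with a fixed spatial radius and a fixed time radius.
# Not the thesis' algorithm: a single greedy pass for illustration only.

def cluster(points, dist_radius, time_radius):
    """points: list of (x, y, t). Returns a list of clusters (lists of points)."""
    clusters = []
    for p in points:
        for c in clusters:
            seed = c[0]
            if (math.hypot(p[0] - seed[0], p[1] - seed[1]) <= dist_radius
                    and abs(p[2] - seed[2]) <= time_radius):
                c.append(p)
                break
        else:
            clusters.append([p])        # no cluster in range: start a new one
    return clusters

pts = [(0, 0, 0), (1, 0, 5), (10, 10, 0), (0, 1, 100)]
print(cluster(pts, dist_radius=2.0, time_radius=10))
# (0,0,0) and (1,0,5) group together; (10,10,0) is far in space;
# (0,1,100) is close in space but far in time, so three clusters result.
```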
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of sequential rule mining. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in the geospatiotemporal space. As an applicable use case study, we propose spatiotemporal crime analytics: discovering spatiotemporal patterns of crime in publicly available crime data.
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose use case studies for the analysis of georeferenced data in terms of situational and public safety awareness.
We propose a network structure-based model for heterosis, and investigate it relying on metabolite profiles from Arabidopsis. A simple feed-forward two-layer network model (the Steinbuch matrix) is used in our conceptual approach. It allows for directly relating structural network properties with biological function. Interpreting heterosis as increased adaptability, our model predicts that the biological networks involved show increasing connectivity of regulatory interactions. A detailed analysis of metabolite profile data reveals that the increasing-connectivity prediction is true for graphical Gaussian models in our data from early development. This mirrors properties of observed heterotic Arabidopsis phenotypes. Furthermore, the model predicts a limit for increasing hybrid vigor with increasing heterozygosity—a known phenomenon in the literature.
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, these redesigned implementations are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
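JCop itself extends Java; as a language-neutral illustration of the underlying COP idea, the sketch below mimics layers of behavioral variations that can be activated dynamically for a scope. All names are invented and this is not JCop's actual mechanism or syntax.

```python
# Minimal context-oriented programming sketch: layers hold behavioral
# variations of functions; the innermost active layer wins at call time.

from contextlib import contextmanager

_active_layers = []

@contextmanager
def with_layer(layer):
    """Activate a layer for the dynamic extent of a with-block."""
    _active_layers.append(layer)
    try:
        yield
    finally:
        _active_layers.pop()

def layered(base):
    """Dispatch to the variation of the innermost active layer, if any."""
    def dispatch(*args, **kwargs):
        for layer in reversed(_active_layers):
            if base.__name__ in layer:
                return layer[base.__name__](*args, **kwargs)
        return base(*args, **kwargs)
    return dispatch

@layered
def greeting():
    return "Hello"

# A context-specific behavioral variation, e.g. for a mobile client:
mobile_layer = {"greeting": lambda: "Hi"}

print(greeting())                    # -> Hello
with with_layer(mobile_layer):
    print(greeting())                # -> Hi (variation active in this scope)
print(greeting())                    # -> Hello (base behavior restored)
```

JCop's contribution, beyond this core mechanism, is declarative control over when such adaptations occur, via its domain-specific aspect language.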
This paper describes the proof calculus LD for clausal propositional logic, which is a linearized form of the well-known DPLL calculus extended by clause learning. It is motivated by the demand to model how current SAT solvers built on clause learning are working, while abstracting from decision heuristics and implementation details. The calculus is proved sound and terminating. Further, it is shown that both the original DPLL calculus and the conflict-directed backtracking calculus with clause learning, as it is implemented in many current SAT solvers, are complete and proof-confluent instances of the LD calculus.
Many formal descriptions of DPLL-based SAT algorithms either do not include all essential proof techniques applied by modern SAT solvers or are bound to particular heuristics or data structures. This makes it difficult to analyze proof-theoretic properties or the search complexity of these algorithms. In this paper we try to improve this situation by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus with clause learning. This calculus is independent of implementation details yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms.
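Central to any DPLL-based calculus is unit propagation: whenever all but one literal of a clause are falsified, the remaining literal is forced. A minimal sketch, abstracting (as the calculi above do) from watched literals, heuristics and other implementation details:

```python
# Minimal unit propagation over CNF clauses. Clauses are lists of nonzero
# integers in the usual DIMACS convention (negative = negated variable).

def unit_propagate(clauses, assignment):
    """Extend `assignment` (a set of literals) to fixpoint; None on conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                   # conflict: clause falsified
            if len(unassigned) == 1:          # unit clause forces a literal
                assignment.add(unassigned[0])
                changed = True
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x3 or not x2 or x4), with x1 decided:
cnf = [[1, 2], [-1, 3], [-3, -2, 4]]
print(unit_propagate(cnf, {1}))   # -> {1, 3} (x3 is forced; no further units)
```

In a full solver, a conflict detected here would trigger clause learning and backjumping rather than plain backtracking.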
QuantPrime
(2008)
Background
Medium- to large-scale expression profiling using quantitative polymerase chain reaction (qPCR) assays is becoming increasingly important in genomics research. A major bottleneck in experiment preparation is the design of specific primer pairs, for which researchers have to make several informed choices, often outside their area of expertise. With currently available primer design tools, several interactive decisions have to be made, resulting in lengthy design processes with varying assay quality.
Results
Here we present QuantPrime, an intuitive and user-friendly, fully automated tool for primer pair design in small- to large-scale qPCR analyses. QuantPrime can be used online at http://www.quantprime.de/ or on a local computer after download; it offers design and specificity checking with highly customizable parameters and is ready to use with many publicly available transcriptomes of important higher eukaryotic model organisms and plant crops (currently 295 species in total), while benefiting from exon-intron border and alternative splice variant information in available genome annotations. Experimental results with the model plant Arabidopsis thaliana, the crop Hordeum vulgare and the model green alga Chlamydomonas reinhardtii show success rates of designed primer pairs exceeding 96%.
Conclusion
QuantPrime constitutes a flexible, fully automated web application for reliable primer design for use in larger qPCR experiments, as proven by experimental data. The flexible framework is also open for simple use in other quantification applications, such as hydrolyzation probe design for qPCR and oligonucleotide probe design for quantitative in situ hybridization. Future suggestions made by users can be easily implemented, thus allowing QuantPrime to be developed into a broad-range platform for the design of RNA expression assays.
Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations (e.g., the Sarbanes-Oxley Act of 2002), internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are offered or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. Also, the thesis contributes to automatically checking compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
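The pattern-to-formula mapping can be pictured with a toy compiler: each visual pattern name becomes a template that, instantiated with activity names, yields a temporal logic formula string. The pattern names and templates below are illustrative, not the thesis' actual pattern catalog.

```python
# Hypothetical compliance-pattern catalog mapping pattern names to
# temporal logic formula templates (illustrative, LTL-style syntax).

PATTERNS = {
    # "activity b must eventually follow activity a"
    "response":   lambda a, b: f"G({a} -> F {b})",
    # "activity b may only happen after activity a has happened"
    "precedence": lambda a, b: f"(!{b}) W {a}",
    # "activities a and b never co-occur in one case"
    "exclusive":  lambda a, b: f"!(F {a} & F {b})",
}

def compile_rule(pattern, a, b):
    """Instantiate a pattern template with two activity names."""
    return PATTERNS[pattern](a, b)

print(compile_rule("response", "open_account", "verify_identity"))
# -> G(open_account -> F verify_identity)
```

The compiled formulas would then be handed to a model checker together with the process model, which is where the thesis' consistency checking and violation feedback come in.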
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a preprocessing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
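A minimal sketch of the linguistic side of such matching: score event names against activity labels by Jaccard similarity over lowercased label tokens and keep pairs above a threshold. The thesis' approaches also exploit behavioral aspects and process documentation; the labels and threshold below are invented.

```python
# Toy event-to-activity matching via token-based Jaccard similarity.

def tokens(label):
    return set(label.lower().replace("_", " ").split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def match(events, activities, threshold=0.5):
    """Greedily pair each event with its most similar activity label."""
    pairs = []
    for e in events:
        best = max(activities, key=lambda a: jaccard(e, a))
        if jaccard(e, best) >= threshold:
            pairs.append((e, best))
    return pairs

events = ["create_purchase_order", "approve_order"]
activities = ["Create Purchase Order", "Approve Order", "Ship Goods"]
print(match(events, activities))
# -> [('create_purchase_order', 'Create Purchase Order'),
#     ('approve_order', 'Approve Order')]
```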
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would highly benefit from interactions to explore the displayed data, which standard charts often only limitedly provide.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. A modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head is of the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. We succeeded in either improving the bounds or producing the same bounds for many combinations of problem instances and formulations, compared with the previous best known bounds.
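The ASP encoding itself is out of scope here; as a plain-Python analogue of the hard/soft distinction, the sketch below checks one hard constraint (no two lectures share a room and timeslot) and computes one invented soft penalty (lectures outside a preferred timeslot window), mirroring the penalty(S, V, C) idea of detecting a violation V with cost C.

```python
from collections import Counter

# Illustrative hard/soft constraint checks for a timetable; not ASP,
# and the constraints and costs are invented for this sketch.

def hard_violations(assignment):
    """assignment: list of (lecture, timeslot, room). Return double-bookings."""
    usage = Counter((t, r) for _, t, r in assignment)
    return [slot for slot, n in usage.items() if n > 1]

def soft_penalty(assignment, preferred=range(1, 5), cost=2):
    """Invented soft constraint: `cost` per lecture outside preferred slots."""
    violations = [l for l, t, _ in assignment if t not in preferred]
    return len(violations) * cost

timetable = [("algebra", 1, "r1"), ("logic", 1, "r1"), ("ai", 7, "r2")]
print(hard_violations(timetable))   # -> [(1, 'r1')]  (room double-booked)
print(soft_penalty(timetable))      # -> 2            (one lecture in slot 7)
```

In the ASP encoding, the hard check corresponds to an integrity constraint over an aggregate, and the soft check to a penalty(S, V, C) rule whose costs are summed in the objective.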
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties or develops special techniques for temporal logics, such as modal transition systems or other dual transition systems. By contrast, we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore, we show that this leads directly to an implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
In the last two decades, process mining has developed from a niche discipline to a significant research area with considerable impact on academia and industry. Process mining enables organisations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organization's information systems and are used by process experts to retrieve deep insights into the organization's running processes. Based on the events in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and, thus, not utilized to their full potential.
Within the process mining area, we address in this thesis the research gap of discovering, from event logs, the contextual information that cannot be captured by applying existing process mining techniques. Within this research gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is related to the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of the historical data manipulation at the process model level of abstraction. Last but not least, each process model discovered from an event log is presumed to be independent of other process models, thus ignoring possible data dependencies between processes within an organization.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log only from the transactions performed on the database that are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus, complementing the discovered process model with important domain knowledge information. The third method captures, on the process model level, how the data affects the running process instances. Lastly, the fourth method is about the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly representing such complex interdependencies in a business process architecture.
All the methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
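The first method's core step, grouping redo-log records into per-transaction traces, might be sketched as follows; the record fields and the use of the transaction id as the case identifier are illustrative assumptions, not the thesis's exact algorithm.

```python
def redo_log_to_event_log(redo_records):
    """Group redo-log records (transaction id, timestamp, table, operation)
    into per-case traces, ordered by timestamp, using the transaction id
    as the case identifier."""
    cases = {}
    for tx, ts, table, op in sorted(redo_records, key=lambda r: r[1]):
        cases.setdefault(tx, []).append(f"{op} {table}")
    return cases
```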
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly including the ability to live with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Regardless of what is intended by government curriculum specifications and advised by educational experts, the competencies taught and learned in and out of classrooms can vary considerably. In this paper, we discuss in particular how we can investigate the perceptions that individual teachers have of competencies in ICT, and how these and other factors may influence students’ learning. We report case study research which identifies contradictions within the teaching of ICT competencies as an activity system, highlighting issues concerning the object of the curriculum, the roles of the participants and the school cultures. In a particular case, contradictions in the learning objectives between higher order skills and the use of application tools have been resolved by a change in the teacher’s perceptions which has not led to changes in other aspects of the activity system. We look forward to further investigation of the effects of these contradictions in other case studies and on forthcoming curriculum change.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken using cameras, e.g., smartphone cameras, or using scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for the training of machine learning models, because obtaining annotated training data using manual annotation mechanisms is time-consuming and costly. In this thesis, we address the question of how we can reduce the costs of acquiring ground-truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground-truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks. Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification. Handwriting classification entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as Named Entity Recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network’s training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the well-known StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotations without the need for any annotated data.
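The zero-shot classification idea mentioned above can be illustrated with a minimal nearest-prototype classifier in an embedding space: unseen classes are handled at test time simply by adding their prototype. The embedding function itself (a trained network in the thesis) is assumed given, and all names here are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(embedding, prototypes):
    """Assign the class whose prototype embedding is most similar;
    `prototypes` maps class name -> prototype vector."""
    return max(prototypes, key=lambda c: cosine(embedding, prototypes[c]))
```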
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as the field of cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient amount of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
Data integration aims to combine data of different sources and to provide users with a unified view on these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery to provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs that allow a certain number of values of A not to be included in B, and (iii) prefix and suffix INDs that represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
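The IND property itself is easy to state in code. A naive discovery loop over a single relation, whose pairwise value comparisons the thesis's algorithm avoids, might look like this; the dict-based table format is an illustrative assumption.

```python
def discover_unary_inds(table):
    """Naive unary IND discovery over one relation, given as a dict
    column name -> list of values: "A in B" holds iff set(A) is a
    subset of set(B). Only illustrates the property being discovered."""
    cols = {name: set(vals) for name, vals in table.items()}
    return [(a, b) for a in cols for b in cols
            if a != b and cols[a] <= cols[b]]
```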
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
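A precision-inspired quality measure for a candidate condition could be sketched along the following lines; the predicate-based interface and the name `covering_quality` are illustrative, not the paper's exact definitions.

```python
def covering_quality(rows, condition, ind_holds):
    """Among the tuples selected by `condition`, the fraction for which the
    inclusion dependency holds -- a precision-like quality for a candidate
    CIND condition. Both arguments are predicates over a tuple."""
    selected = [r for r in rows if condition(r)]
    if not selected:
        return 0.0
    return sum(1 for r in selected if ind_holds(r)) / len(selected)
```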
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle, for each pair of attributes in the database, each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain, which is our driving motivation.
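The core pruning idea behind Spider, discarding a candidate "A in B" as soon as some value of A is missing from B while sweeping all values in sorted order, can be sketched in a few lines. This omits the DBMS-side sorting and the cursor-based merging of the actual algorithm; names are illustrative.

```python
def spider_sketch(columns):
    """Sweep all distinct values in sorted order; for each value, the
    candidates of every attribute holding it are intersected with the set
    of attributes holding it, so "A in B" is discarded the moment a value
    of A does not occur in B. Returns the surviving unary INDs."""
    col_vals = {c: set(v) for c, v in columns.items()}
    candidates = {a: set(b for b in columns if b != a) for a in columns}
    for value in sorted(set().union(*col_vals.values())):
        holders = {c for c, vals in col_vals.items() if value in vals}
        for a in holders:
            candidates[a] &= holders  # B must contain every value of A
    return [(a, b) for a in candidates for b in sorted(candidates[a])]
```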
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution. This provides organizations with the flexibility to adapt to the ever increasing rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns, recommended by DMN, prescribes the externalization of the decision logic of process models in one or more separate decision models, but it does not specify how this can be achieved.
The goal of this thesis is to overcome the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from: (1) control flow and data of process models that exist in enterprises; and (2) event logs recorded by enterprise information systems, encapsulating day-to-day operations. Furthermore, we provide an extension of the methodologies to discover decision models from event logs enriched with fuzziness, which deals with partial knowledge of the process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies provide valid and accurate output decision models that can serve as blueprints for executing decisions complementary to process models. Thus, these methodologies are applicable in the real world and can be used, for example, for compliance checks, which could improve the organization's decision making and hence its overall performance.
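A toy version of decision discovery from an event log, recording the most frequent outcome per observed attribute combination, might look as follows. Real decision mining generalizes beyond observed combinations (e.g. via decision-tree learning), and all names here are illustrative.

```python
from collections import Counter, defaultdict

def mine_decision_table(cases, attrs, outcome):
    """Build a DMN-style decision table from logged cases: for each observed
    combination of the chosen case attributes, keep the most frequent
    outcome. `cases` is a list of dicts; `attrs` the condition attributes;
    `outcome` the decision attribute."""
    counts = defaultdict(Counter)
    for case in cases:
        key = tuple(case[a] for a in attrs)
        counts[key][case[outcome]] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}
```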
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we will focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we will understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as at every point in time systems can join and leave the ensemble. This, together with the fact that the cooperation among the constituent systems is not necessarily observable, means that we will consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence, a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and basically relies on collaborations and roles as an abstraction layer above the components. This will allow us to describe SoS at an architectural level. We will also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level (i.e., the adding and removing of components and services) is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques for inductive invariants known from timed graph transformation systems can be extended to also cover the hybrid case, for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, the roles may join and leave the collaboration at runtime, and therefore complex structural dynamics can result, which makes it very hard to ensure their correct and safe operation. We present in this paper our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results for the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme, which enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person, but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. Required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment and an approach to generate code based on a user’s live-edits.
Confidence Counts
(2021)
The increasing reliance on online learning in higher education has been further expedited by the ongoing Covid-19 pandemic. Students need to be supported as they adapt to this new learning environment. Research has established that learners with positive online learning self-efficacy beliefs are more likely to persevere and achieve their higher education goals when learning online. In this paper, we explore how MOOC design can contribute to the four sources of self-efficacy beliefs posited by Bandura [4]. Specifically, we will explore, drawing on learner reflections, whether design elements of the MOOC, The Digital Edge: Essentials for the Online Learner, provided participants with the necessary mastery experiences, vicarious experiences, verbal persuasion, and affective regulation opportunities, to evaluate and develop their online learning self-efficacy beliefs. Findings from a content analysis of discussion forum posts show that learners referenced three of the four information sources when reflecting on their experience of the MOOC. This paper illustrates the potential of MOOCs as a pedagogical tool for enhancing online learning self-efficacy among students.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
A method is presented of acquiring the principles of three sorting algorithms through developing interactive applications in Excel.
Algorithmic management
(2022)
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply using PMem as a drop-in replacement for existing storage does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
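Viper's hybrid design can be caricatured in a few lines: a volatile (DRAM-like) hash index mapping each key to an offset in a persistent, append-only record log standing in for PMem, with the index rebuilt from the log on recovery. This is an illustrative model, not Viper's actual layout or API.

```python
class HybridKVS:
    """Toy model of a hybrid PMem-DRAM store: writes append sequentially
    to the persistent log; reads go through the fast volatile index."""
    def __init__(self):
        self.log = []     # persistent medium: sequential appends only
        self.index = {}   # volatile index: key -> log offset

    def put(self, key, value):
        self.index[key] = len(self.log)
        self.log.append((key, value))

    def get(self, key):
        offset = self.index.get(key)
        return None if offset is None else self.log[offset][1]

    def recover(self):
        """Rebuild the volatile index by scanning the log after a restart;
        the last occurrence of a key wins."""
        self.index = {k: i for i, (k, _v) in enumerate(self.log)}
```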
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes, as they are used in other disciplines such as design, on the other hand, allow designers to validate and iterate on concepts and requirements with stakeholders at low cost. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios.
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers’ Tryscience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, regardless of whether that is due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool which closes this gap between MOOC platforms and transcription and translation tools, and which offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
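One concrete step such a tool must perform is emitting subtitles in a standard format. A minimal sketch converting timed transcript segments to the SRT subtitle format could look like this; the surrounding speech-to-text and translation services are external and omitted, and all names are illustrative.

```python
def to_srt(segments):
    """Turn timed transcript segments (start, end in seconds, text) into
    SRT: numbered blocks with HH:MM:SS,mmm timestamps."""
    def stamp(t):
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        ms = int(round((t - int(t)) * 1000))
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
    blocks = [f"{i}\n{stamp(a)} --> {stamp(b)}\n{text}"
              for i, (a, b, text) in enumerate(segments, 1)]
    return "\n\n".join(blocks)
```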
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
Graph queries have recently gained increased interest due to application areas such as social networks, biological networks, and model queries. For relational databases, relational algebra and generalized discrimination networks have been studied to find appropriate decompositions of queries into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept, inspired by the relational case, for decomposing arbitrarily complex graph queries into simpler subqueries and ordering these subqueries in the form of generalized discrimination networks. The approach employs graph transformation rules for the nodes of the network and can thus build on the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
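The core idea of a discrimination network can be conveyed with a toy example (an illustration of the general concept, not the report's formalism): a complex pattern is decomposed into edge subqueries at the leaves, and inner network nodes join the partial matches of their inputs. The toy graph and pattern below are assumptions for the sake of the example.

```python
# Toy discrimination network: decompose the pattern
#   x -knows-> y -worksAt-> z
# into two leaf subqueries and one join node.

edges = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("carol", "worksAt", "acme"),
}

def match_edge(label):
    """Leaf subquery: all (src, dst) pairs carrying one edge label."""
    return {(s, d) for (s, l, d) in edges if l == label}

def join(left, right):
    """Inner node: join partial matches on the shared middle variable."""
    return {(a, b, c) for (a, b) in left for (b2, c) in right if b == b2}

# Evaluate the network bottom-up.
result = join(match_edge("knows"), match_edge("worksAt"))
print(result)  # {('bob', 'carol', 'acme')}
```

Ordering the subqueries (e.g., evaluating the more selective leaf first) is exactly the degree of freedom the generalized discrimination network captures.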
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as with relational databases, running complex queries can be very time-consuming and ruin the interactivity of the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently issued queries. But to ensure that database views yield results consistent with the data from which they are derived, these views must be updated before queries make use of them. Such view maintenance must be performed efficiently; otherwise the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that can model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored in graph databases. The discrimination network allows us to automatically derive generic maintenance rules, expressed as graph transformations, that maintain the graph views whenever the underlying graph data change. We evaluate our approach in a case study using multiple data sets derived from open source projects.
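The benefit of incremental maintenance over recomputation can be sketched in a few lines (a toy illustration under assumed data, not the report's generated graph-transformation rules): the view caches matches of a two-edge pattern, and an edge insertion joins only the delta against the cached partial matches.

```python
# Toy incremental view maintenance for the pattern x -knows-> y -worksAt-> z.
# Inserting a worksAt edge updates the materialized view via a delta join
# instead of recomputing all matches from scratch.

knows = {("alice", "bob"), ("bob", "carol")}
works_at = {("carol", "acme")}

# Initial (full) materialization of the view.
view = {(a, b, c) for (a, b) in knows for (b2, c) in works_at if b == b2}

def insert_works_at(src, dst):
    """Maintain the view under a single worksAt edge insertion."""
    works_at.add((src, dst))
    # Delta join: only the new edge is matched against cached 'knows' pairs.
    view.update((a, b, dst) for (a, b) in knows if b == src)

insert_works_at("bob", "globex")
print(sorted(view))
```

Handling deletions, negative application conditions, and recursion requires additional bookkeeping (e.g., counting how often a match is derived), which is where the generated maintenance rules come in.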
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data are governed by exactly the same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population; for example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive, while data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating the two distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
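The rescaling-weight idea can be illustrated with a minimal example (made-up data; a simple counting estimator, not the thesis's integrated models): each training example x receives the weight w(x) = p_test(x) / p_train(x), so the reweighted training sample mimics the test distribution.

```python
# Covariate-shift correction by importance weighting over one discrete
# feature, with both densities estimated by counting. Data are invented.

from collections import Counter

train_x = ["young"] * 6 + ["old"] * 2   # survey over-represents the young
test_x  = ["young"] * 2 + ["old"] * 6   # deployment population skews old

p_train = Counter(train_x)
p_test = Counter(test_x)

def weight(x):
    """Rescaling weight w(x) = p_test(x) / p_train(x)."""
    return (p_test[x] / len(test_x)) / (p_train[x] / len(train_x))

weights = [weight(x) for x in train_x]
# Under these weights, the training sample's expected feature mix equals
# the test mix: 6 * w("young") = 2 and 2 * w("old") = 6.
print({x: round(weight(x), 2) for x in set(train_x)})
```

Passing such weights as per-sample weights to a standard learner is the two-step approximation mentioned in the abstract; the integrated models instead fold the weight estimation into the learning objective itself.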
Learning During COVID-19
(2021)
During the COVID-19 pandemic, learning in higher education and beyond shifted en masse to online formats, with the short- and long-term consequences for Massive Open Online Course (MOOC) platforms, learners, and creators still under evaluation. In this paper, we sought to determine whether the COVID-19 pandemic and this shift to online learning led to increased learner engagement and attainment in a single introductory biology MOOC by evaluating enrollment, proportional and individual engagement, and verification and performance data. As this MOOC regularly operates each year, we compared data collected from two course runs during the pandemic to three pre-pandemic runs. During the first pandemic run, the number and rate of learners enrolling in the course doubled compared to prior runs, while the second pandemic run indicated a gradual return to pre-pandemic enrollment. Due to higher enrollment, more learners viewed videos, attempted problems, and posted to the discussion forums during the pandemic. Participants engaged with forums in higher proportions in both pandemic runs, but the proportion of participants who viewed videos decreased in the second pandemic run relative to the prior runs. A higher percentage of learners chose to pursue a certificate via the verified track in each pandemic run, though a smaller proportion earned certification in the second pandemic run. During the pandemic, more enrolled learners did not necessarily correlate with greater engagement by all metrics. While verified-track learner performance varied widely during each run, the effects of the pandemic were not uniform for learners, much like in other aspects of life. As such, individual engagement trends in the first pandemic run largely resemble pre-pandemic metrics but with more learners overall, while engagement trends in the second pandemic run are less like pre-pandemic metrics, hinting at learner “fatigue”.
This study highlights that the life-long learning opportunity MOOCs offer is even more critical when traditional education modes are disrupted and more people are at home or unemployed. This work indicates that the boom in MOOC participation may not remain at a high level in the longer term for any one course, but overall, the number of MOOCs, programs, and learners continues to grow.