004 Data processing; Computer science
Document Type
- Article (233)
- Doctoral Thesis (131)
- Monograph/Edited Volume (124)
- Postprint (50)
- Conference Proceeding (45)
- Other (5)
- Master's Thesis (3)
- Preprint (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (598)
Keywords
- machine learning (19)
- answer set programming (12)
- Cloud Computing (10)
- cloud computing (10)
- Forschungsprojekte (9)
- Future SOC Lab (9)
- Hasso-Plattner-Institut (9)
- In-Memory Technologie (9)
- Multicore Architekturen (9)
- maschinelles Lernen (9)
- Forschungskolleg (8)
- Hasso Plattner Institute (8)
- Klausurtagung (8)
- Machine Learning (8)
- Maschinelles Lernen (8)
- Service-oriented Systems Engineering (8)
- multicore architectures (7)
- research projects (7)
- social media (7)
- Datenintegration (6)
- Geschäftsprozessmanagement (6)
- Modellierung (6)
- business process management (6)
- Computer Science Education (5)
- Ph.D. retreat (5)
- Prozessmodellierung (5)
- Smalltalk (5)
- Verifikation (5)
- cyber-physical systems (5)
- data integration (5)
- privacy (5)
- quantitative analysis (5)
- security (5)
- service-oriented systems engineering (5)
- virtual machines (5)
- Antwortmengenprogrammierung (4)
- Betriebssysteme (4)
- In-Memory technology (4)
- Künstliche Intelligenz (4)
- Research School (4)
- Sicherheit (4)
- Virtuelle Maschinen (4)
- Visualisierung (4)
- Vorhersage (4)
- artificial intelligence (4)
- evaluation (4)
- graph transformation (4)
- in-memory technology (4)
- künstliche Intelligenz (4)
- middleware (4)
- nested graph conditions (4)
- operating systems (4)
- probabilistic timed systems (4)
- process mining (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- self-sovereign identity (4)
- verification (4)
- 3D visualization (3)
- Answer Set Programming (3)
- BPMN (3)
- Big Data (3)
- Blockchains (3)
- Competence Measurement (3)
- DPLL (3)
- Data profiling (3)
- Design Thinking (3)
- Digitalisierung (3)
- E-Learning (3)
- Graphtransformationen (3)
- HCI (3)
- Informatics (3)
- Informatics Education (3)
- JSP (3)
- Laufzeitmodelle (3)
- MOOCs (3)
- Middleware (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Optimization (3)
- Ph.D. Retreat (3)
- Privacy (3)
- Process Mining (3)
- Process Modeling (3)
- SAT (3)
- Secondary Education (3)
- Semantic Web (3)
- Tripel-Graph-Grammatik (3)
- Virtualisierung (3)
- abstraction (3)
- bibliometric analysis (3)
- blockchain (3)
- business processes (3)
- citation analysis (3)
- cloud (3)
- clustering (3)
- collaboration (3)
- computational thinking (3)
- computer vision (3)
- data preparation (3)
- data profiling (3)
- database systems (3)
- debugging (3)
- digital transformation (3)
- digitalization (3)
- distributed systems (3)
- duplicate detection (3)
- geospatial data (3)
- graph transformation systems (3)
- higher education (3)
- identity management (3)
- image processing (3)
- innovation (3)
- model (3)
- model-driven engineering (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- openHPI (3)
- outlier detection (3)
- prediction (3)
- real-time (3)
- simulation (3)
- smart contracts (3)
- software engineering (3)
- systems biology (3)
- trust (3)
- user experience (3)
- virtual reality (3)
- virtuelle Maschinen (3)
- visualization (3)
- 3D point clouds (2)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmen (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Anomalieerkennung (2)
- Artificial Intelligence (2)
- Aspektorientierte Softwareentwicklung (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Augmented reality (2)
- Automatisches Beweisen (2)
- Bayesian networks (2)
- Bibliometrics (2)
- Bildverarbeitung (2)
- Bitcoin (2)
- Blockchain (2)
- Bounded Model Checking (2)
- Business Process Management (2)
- CSC (2)
- CSCW (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Clusteranalyse (2)
- Competence Modelling (2)
- Computational thinking (2)
- Computer Science (2)
- Computersicherheit (2)
- Computing (2)
- Constraint Solving (2)
- Data Integration (2)
- Data Modeling (2)
- Data Privacy (2)
- Data Profiling (2)
- Databases (2)
- Datenanalyse (2)
- Datenaufbereitung (2)
- Datenbanken (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deduction (2)
- Deep learning (2)
- Delphi study (2)
- Diversity (2)
- Duplikaterkennung (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- Economics (2)
- Electronic and spintronic devices (2)
- European Bioinformatics Institute (2)
- Evolution (2)
- Exploration (2)
- Fehlende Daten (2)
- GPU (2)
- Game Dynamics (2)
- General Earth and Planetary Sciences (2)
- Geodaten (2)
- Geography, Planning and Development (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- ICA (2)
- ICT (2)
- Identitätsmanagement (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Informatik (2)
- Informationsextraktion (2)
- Innovation (2)
- Internet of Things (2)
- Java (2)
- Key Competencies (2)
- Klassifikation (2)
- Klausellernen (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lively Kernel (2)
- Logic Programming (2)
- Logics (2)
- MOOC (2)
- Machine learning (2)
- Measurement (2)
- Megamodell (2)
- Metaverse (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Planning (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privatsphäre (2)
- Problem Solving (2)
- Process (2)
- Programmierung (2)
- Prozess (2)
- RDF (2)
- Relevanz (2)
- Ressourcenoptimierung (2)
- Runtime analysis (2)
- SQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Satisfiability (2)
- Second Life (2)
- Semiconductors (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- Social (2)
- SysML (2)
- Systematics (2)
- Systemsoftware (2)
- Taxonomy (2)
- Temporallogik (2)
- Theorembeweisen (2)
- Theory (2)
- Twitter (2)
- UX (2)
- Unifikation (2)
- Verhalten (2)
- Versionsverwaltung (2)
- Visualization (2)
- Water Science and Technology (2)
- Werkzeuge (2)
- YouTube (2)
- adaptive (2)
- adaptive Systeme (2)
- adaptive systems (2)
- algorithms (2)
- anomaly detection (2)
- anxiety (2)
- architecture (2)
- attribute assurance (2)
- authorship attribution (2)
- big data (2)
- big data services (2)
- bounded model checking (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- cloud security (2)
- co-citation analysis (2)
- co-occurrence analysis (2)
- competence (2)
- complexity (2)
- comprehension (2)
- computer science (2)
- computer science education (2)
- computer security (2)
- consistency (2)
- continuous integration (2)
- conversational agents (2)
- cyber-physische Systeme (2)
- cyberbullying (2)
- data (2)
- data mining (2)
- data quality (2)
- data wrangling (2)
- deep learning (2)
- deferred choice (2)
- depression (2)
- design thinking (2)
- digital education (2)
- digital health (2)
- digital identity (2)
- digital whiteboard (2)
- direct manipulation (2)
- dynamic reconfiguration (2)
- education (2)
- empathy (2)
- engagement (2)
- exploratory programming (2)
- formal semantics (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gender (2)
- graph constraints (2)
- identity theory (2)
- image stylization (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- individual effects (2)
- informatics education (2)
- interactive technologies (2)
- intrusion detection (2)
- job shop scheduling (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- knowledge representation and nonmonotonic reasoning (2)
- kontinuierliche Integration (2)
- law (2)
- learning (2)
- live programming (2)
- logic programming (2)
- longitudinal (2)
- maschinelles Sehen (2)
- memory (2)
- method comparison (2)
- missing data (2)
- mobile (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- modeling (2)
- modelling (2)
- monitoring (2)
- networks (2)
- neural networks (2)
- news media (2)
- oracles (2)
- parallel processing (2)
- perception of robots (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- production planning and control (2)
- propositional satisfiability (2)
- relevance (2)
- representation learning (2)
- runtime models (2)
- schema discovery (2)
- selection (2)
- self-driving (2)
- service-oriented systems (2)
- smalltalk (2)
- social network analysis (2)
- societal effects (2)
- solver (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- synchronization (2)
- systematic literature review (2)
- systems of systems (2)
- systems software (2)
- taxonomy (2)
- teacher training (2)
- teamwork (2)
- testing (2)
- text based classification methods (2)
- theorem (2)
- tiefes Lernen (2)
- tools (2)
- topics (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- usability (2)
- user-generated content (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtualization (2)
- workflow patterns (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- (FPGA) (1)
- 0-day (1)
- 21st century skills (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ABRACADABRA (1)
- ACINQ (1)
- AMNET (1)
- APT (1)
- APX-hardness (1)
- ASIC (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Achievement (1)
- Ackerschmalwand (1)
- Active Evaluation (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Adaptive hypermedia (1)
- Advanced Persistent Threats (1)
- Advanced Video Codec (AVC) (1)
- Adversarial Learning (1)
- Agency (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Android Security (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Angewandte Spieltheorie (1)
- Angriffserkennung (1)
- Animal building (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Antwortmengen Programmierung (1)
- Application (1)
- Applied Game Theory (1)
- Apriori (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Arduino (1)
- Argument Mining (1)
- Artem Erkomaishvili (1)
- Artificial neural networks (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Assessment (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributsicherung (1)
- Augmented and virtual reality (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Austria (1)
- Auswirkungen (1)
- Authentication (1)
- Authentifizierung (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- BCCC (1)
- BCI (1)
- BPM (1)
- BSS (1)
- BTC (1)
- Bachelor (1)
- Bachelorstudierende der Informatik (1)
- Bahnwesen (1)
- Bank (1)
- Basic Service (1)
- Batchprozesse (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bean (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Beweistheorie (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Bisimulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blended learning (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Bloom’s Taxonomy (1)
- Bluemix-Plattform (1)
- Blöcke (1)
- Body schema (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Brownian motion with discontinuous drift (1)
- Business Process Models (1)
- Business process modeling (1)
- Bystander (1)
- Byzantine Agreement (1)
- C++ tool (1)
- CCS Concepts (1)
- CEP (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Cactus (1)
- Calibration (1)
- Capability approach (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Case management (1)
- Challenges (1)
- Change Management (1)
- Choreographien (1)
- CityGML (1)
- Classification (1)
- Clause Learning (1)
- Clinical predictive modeling (1)
- Cloud (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clustering (1)
- Coccinelle (1)
- Cognitive Skills (1)
- Cographs (1)
- Coherent partition (1)
- Colored Coins (1)
- Common Spatial Pattern (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Complexity (1)
- Compliance (1)
- Compliance checking (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational Thinking (1)
- Computational photography (1)
- Computer Science in Context (1)
- Computer crime (1)
- Computergrafik (1)
- Conceptual modeling (1)
- Condition number (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Constraints (1)
- Contest (1)
- Context-oriented Programming (1)
- Contextualisation (1)
- Contracts (1)
- Contradictions (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Convolution (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covariate Shift (1)
- Covid (1)
- Creative (1)
- Crime mapping (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Customer ownership (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DAO (1)
- DBMS (1)
- DDoS (1)
- DPoS (1)
- Data Analysis (1)
- Data Dependency (1)
- Data Management (1)
- Data Quality (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data integration (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-centric (1)
- Database Cost Model (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank (1)
- Datenbank-Kostenmodell (1)
- Datenbankoptimierung (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decision support (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Dekubitus (1)
- Delegated Proof-of-Stake (1)
- Delta preservation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Dependency discovery (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Developmental robotics (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Revolution (1)
- Digital image analysis (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalization (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Discrimination Networks (1)
- Dispositional learning analytics (1)
- Distributed Computing (1)
- Distributed Proof-of-Research (1)
- Distributed computing (1)
- Distributed programming (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Domänenspezifische Modellierung (1)
- Dubletten (1)
- Duplicate Detection (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamic assessment (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- E-Wallet (1)
- ECDSA (1)
- EHR (1)
- EPA (1)
- Early Literacy (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Educational Standards (1)
- Educational software (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Embedded Systems (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Entity resolution (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Enumeration algorithm (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Eris (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Escherichia-coli (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- Ethics (1)
- Euclid’s algorithm (1)
- European Union (1)
- Europäische Union (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Execution Semantics (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FMC-QE (1)
- FPGA (1)
- FRP (1)
- Facebook (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Feature extraction (1)
- Feature selection (1)
- Federated Byzantine Agreement (1)
- Federated learning (1)
- Feedback (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlerbeseitigung (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fintech (1)
- Fitness-distance correlation (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- FollowMyVote (1)
- Fork (1)
- Formal modelling (1)
- Formale Verifikation (1)
- Formative assessment (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Fredholm complexes (1)
- Function (1)
- Functional Lenses (1)
- Functional dependencies (1)
- Fundamental Ideas (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gender (1)
- Gene expression (1)
- General subject “Information” (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatiksysteme (1)
- Graph algorithm (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphensuche (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Gridcoin (1)
- H.264 (1)
- HENSHIN (1)
- HPI Schul-Cloud (1)
- Hard Fork (1)
- Hardware accelerator (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Datenmanagement (1)
- Hauptspeicher Technologie (1)
- Hauptspeicherdatenbank (1)
- Helmholtz problem (1)
- Herodotos (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- HiGHmed (1)
- High-Level Synthesis (1)
- Histograms (1)
- History of pattern occurrences (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Human (1)
- Human-robot interaction (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- ICT Competence (1)
- ICT competencies (1)
- ICT curriculum (1)
- ICT skills (1)
- IDS (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- ISSEP (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identity management systems (1)
- Identität (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Impact (1)
- Imperative calculi (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- Improving classroom (1)
- In-Memory (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Industries (1)
- Industry 4.0 (1)
- Inference (1)
- Infinite State (1)
- Informatik-Studiengänge (1)
- Informatikdidaktik (1)
- Informatikvoraussetzungen (1)
- Information Ethics (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Inquiry-based Learning (1)
- Instagram (1)
- Insurance industry (1)
- Integration (1)
- Interactive Rendering (1)
- Interaktionsmodell (1)
- Interaktionsmodellierung (1)
- Interaktives Rendering (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet applications (1)
- Internet der Dinge (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Interpretability (1)
- Interpreter (1)
- Intersectionality (1)
- Interval Timed Automata (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- IoT (1)
- JCop (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Java Security Framework (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernel (1)
- Kernmethoden (1)
- Kette (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kommunikation (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontext (1)
- Kreativität (1)
- Kryptographie (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LOD (1)
- LSTM (1)
- Landmarken (1)
- Large networks (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learners (1)
- Learning Analytics (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lebendigkeit (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehrer (1)
- Leistungsfähigkeit (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- LiDAR (1)
- Licenses (1)
- Lightning Network (1)
- Linguistisch (1)
- Lindenmayer systems (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Lock-Time-Parameter (1)
- Logarithm (1)
- Logikkalkül (1)
- Logiksynthese (1)
- Loss (1)
- Low Latency (1)
- Lower Bounds (1)
- Lower Secondary Level (1)
- Lösungsraum (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- MERLOT (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Maschinen (1)
- Massive Open Online Courses (1)
- Matrizen-Eigenwertaufgabe (1)
- Matroids (1)
- Media in education (1)
- Megamodel (1)
- Megamodels (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Mensch-Computer-Interaktion (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Micropayment-Kanäle (1)
- Microsoft Azure (1)
- Migration (1)
- Mindset (1)
- Minimal hitting set (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile learning (1)
- Mobile-Mapping (1)
- Mobilgeräte (1)
- Model Consistency (1)
- Model Execution (1)
- Model Management (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multimodal behavior (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Music Technology (1)
- Muster (1)
- Musterabgleich (1)
- Mustererkennung (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NASDAQ (1)
- NUI (1)
- NameID (1)
- Namecoin (1)
- Nash Equilibrium (1)
- Natural Science Education (1)
- Natural ventilation (1)
- Navigation (1)
- Nephrology (1)
- Nested Graph Conditions (1)
- Nested graph conditions (1)
- Network Creation Game (1)
- Network clustering (1)
- Netzwerk (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- NoSQL (1)
- Non-photorealistic Rendering (1)
- Norway (1)
- Novice programmers (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- O (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Off-Chain-Transaktionen (1)
- Omega (1)
- Onename (1)
- Online Course (1)
- Online Learning Environments (1)
- Online-Learning (1)
- Online-Lernen (1)
- Onlinekurs (1)
- Ontologie (1)
- Ontologies (1)
- Open source (1)
- OpenBazaar (1)
- Opinion mining (1)
- Optimierungsproblem (1)
- OptoGait (1)
- Oracles (1)
- Order dependencies (1)
- Ordinances (1)
- Organisationsveränderung (1)
- Orphan Block (1)
- Overlapping community detection (1)
- Owner-Retained Access Control (ORAC) (1)
- P2P (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Paralleles Rechnen (1)
- Parallelization (1)
- Parallelized algorithm (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Patterns (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Peer-to-Peer Netz (1)
- Peer-to-Peer-Netz ; GRID computing ; Zuverlässigkeit ; Web Services ; Betriebsmittelverwaltung ; Migration (1)
- Peercoin (1)
- Performance (1)
- Performance Prediction (1)
- Peripersonal space (1)
- Personal Data (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Physical Science (1)
- Plant identification (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Point-based rendering (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polymerase Chain Reaction Experiment (1)
- Popular matching (1)
- Posenabschätzung (1)
- Prediction Game (1)
- Predictive Models (1)
- Preprocessing (1)
- Primary informatics (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy Protection (1)
- Probabilistische Modelle (1)
- Problem solving (1)
- Problem solving strategies (1)
- Probleme in der Studie (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Execution (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Programmierabstraktionen (1)
- Programmieren (1)
- Programmiererlebnis (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Programming environments for children (1)
- Programming learning (1)
- Prolog (1)
- Proof Theory (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Protocols (1)
- Prototyping (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Python (1)
- Python (1)
- Quanten-Computing (1)
- Quantenkryptographie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- RT_PREEMPT patch (1)
- RT_PREEMPT-Patch (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Real-Time Rendering (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Reconfigurable (1)
- Region of Interest (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Relational data (1)
- Reparatur (1)
- Repräsentationslernen (1)
- Reproducible benchmarking (1)
- Research Projects (1)
- Resource Allocation (1)
- Resource Management (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Reversibility (1)
- Ripple (1)
- Robot learning (1)
- Robot personality (1)
- Ruby (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SCED (1)
- SCP (1)
- SHA (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security Pattern (1)
- SPARQL (1)
- SPV (1)
- STEM (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Savanne (1)
- Scalability (1)
- Scale-invariant feature transform (SIFT) (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schlüsselkompetenzen (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schwierigkeitsgrad (1)
- Scientific understanding of Information (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Security (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektion (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantic Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Sensors (1)
- Sequential anomaly (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systeme (1)
- Shader (1)
- Sharing (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signal processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Simulations (1)
- Simultane Diagonalisierung (1)
- Single Trial Analysis (1)
- Single event upsets (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skelettberechnung (1)
- Skriptsprachen (1)
- Slock.it (1)
- Small Private Online Courses (1)
- Smart cities (1)
- SoaML (1)
- Social impact (1)
- Sociotechnical Design (1)
- Soft Fork (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Sozialen Medien (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Specification (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- Stance Detection (1)
- Standardisierung (1)
- Standards (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistische Tests (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stilisierung (1)
- Storj (1)
- Structuring (1)
- Strukturierung (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studie (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System of Systems (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systems of Systems (1)
- Systems of parallel communicating (1)
- TPTP (1)
- Tableaumethode (1)
- Tasks (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Tele-Lab (1)
- Tele-Teaching (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Terminology (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Tests (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- Texturen (1)
- The Bitfury Group (1)
- The DAO (1)
- The Sharing Economy (1)
- Theoretische Vorlesungen (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Timed Automata (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajectories (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktion (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transversal hypergraph (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Two-Way-Peg (1)
- Type and effect systems (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Unspent Transaction Output (1)
- Untere Schranken (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- User Experience (1)
- VGG16 (1)
- VIL (1)
- VM (1)
- VR (1)
- VUCA-World (1)
- Validation (1)
- Value network (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Verlässlichkeit (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verträge (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtual machines (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Vocabulary (1)
- Vocational Education (1)
- Vorhersagemodelle (1)
- W[3]-Completeness (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Watson IoT (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Weighted clustering coefficient (1)
- Well-structuredness (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- WhatsApp (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Women and IT (1)
- Workflow (1)
- Wüstenbildung (1)
- X-ray imaging (1)
- XM (1)
- Young People (1)
- ZQSA (1)
- ZQSAT (1)
- Zebris (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- acceptability (1)
- access control (1)
- action and change (1)
- action language (1)
- activity instance state propagation (1)
- acyclic preferences (1)
- ad hoc learning (1)
- ad hoc messaging network (1)
- adaptiv (1)
- addiction care (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- agile government (1)
- agility (1)
- airbnb (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analogical thinking (1)
- analysis (1)
- anisotropic Kuwahara filter (1)
- annotation (1)
- anomalies (1)
- answer (1)
- answer set (1)
- app (1)
- approximate joint diagonalization (1)
- approximation (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture recovery (1)
- archive analysis (1)
- argument mining (1)
- argumentation research (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- asset management (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- atomic swap (1)
- attacks (1)
- ausführbare Semantiken (1)
- authentication (1)
- automata (1)
- automated planning (1)
- automated theorem proving (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- autonomous (1)
- back-in-time (1)
- balance analysis (1)
- bank (1)
- batch processing (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benchmark (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- bidirectional optimality theory (1)
- bidirectional payment channels (1)
- bild (1)
- bildbasiertes Rendering (1)
- binary representation (1)
- binary search (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- biomarker detection (1)
- bisimulation (1)
- bitcoin (1)
- bitcoins (1)
- blind source separation (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- bluemix platform (1)
- bottom-up (1)
- bounded backward model checking (1)
- bpm (1)
- brand ambassadors (1)
- brand personality (1)
- bug tracking (1)
- building models (1)
- built-in predicates (1)
- business informatics (1)
- business models (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- bystander (1)
- cancer therapy (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- Computing (1)
- chain (1)
- change detection (1)
- change management (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- classes of logic programs (1)
- classifier calibration (1)
- classroom language (1)
- clause elimination (1)
- clause learning (1)
- cleansing (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster-analysis (1)
- code generation (1)
- cognition (1)
- cognitive load (1)
- cognitive load theory (1)
- cognitive modifiability (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- columnar databases (1)
- combined task and motion planning (1)
- communication (1)
- community (1)
- competencies (1)
- competency (1)
- complex optimization (1)
- complexity dichotomy (1)
- composite service (1)
- compositional analysis (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computed tomography (1)
- computer graphics (1)
- computer science teachers (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- computing science education (1)
- concept of algorithm (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditions (1)
- confidentiality (1)
- confirmation period (1)
- conflicts and dependencies in (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constraints (1)
- constructionism (1)
- consumer behavior (1)
- contest period (1)
- context awareness (1)
- continuous testing (1)
- contract (1)
- contracts (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- corporate nomadism (1)
- corporate takeovers (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- creativity (1)
- crochet (1)
- cross-chain (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cryptography (1)
- cs4fn (1)
- cscw (1)
- cultural heritage (1)
- cumulative culture (1)
- curriculum theory (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- cyberwar (1)
- data analytics (1)
- data assimilation (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data migration (1)
- data models (1)
- data objects (1)
- data pipeline (1)
- data science (1)
- data set (1)
- data sharing (1)
- data states (1)
- data synthesis (1)
- data transformation (1)
- data visualization (1)
- data-driven (1)
- data-driven artifacts (1)
- database (1)
- database optimization (1)
- database technology (1)
- database tuning (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decentralized autonomous organization (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deductive databases (1)
- deduplication (1)
- deep Gaussian processes (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dental caries classification (1)
- dependability (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depressive symptoms (1)
- desertification (1)
- design research (1)
- design-science research (1)
- determinism (1)
- deterministic properties (1)
- EUREMA modeling language (1)
- development tools (1)
- developmental systems (1)
- deviant behaviors (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- diagnosis (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- difficulty (1)
- difficulty target (1)
- diffusion (1)
- digital activism (1)
- digital enlightenment (1)
- digital interventions (1)
- digital learning platform (1)
- digital nomadism (1)
- digital nudging (1)
- digital picture archive (1)
- digital platform openness (1)
- digital sovereignty (1)
- digital strategy (1)
- digital workplace transformation (1)
- digitale Aufklärung (1)
- digitale Bildung (1)
- digitale Lernplattform (1)
- digitale Souveränität (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed ledger technology (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- divide and conquer (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- doppelter Hashwert (1)
- double hashing (1)
- drift theory (1)
- dropout (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- e-learning (1)
- e-learning platform (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- educational timetabling (1)
- edutainment (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- elections (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- emotional design (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- engaged computing (1)
- engine (1)
- engineering (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- environments (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- erzeugende gegnerische Netzwerke (1)
- ethics (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolutionary computation (1)
- evolving systems (1)
- exact simulation methods (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- explicit knowledge (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- exponentiation (1)
- expression (1)
- extend (1)
- extensions of logic programs (1)
- external knowledge bases (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- failure model (1)
- fashion (1)
- fatty acid amide hydrolase (1)
- fault tolerance (1)
- federated industrial platform ecosystems (1)
- federated voting (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- field-programmable gate array (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- forensics (1)
- formal framework (1)
- formal languages (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- fortschrittliche Angriffe (1)
- forward / backward chaining (1)
- fsQCA (1)
- fun (1)
- function symbols (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- functions (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- gait analysis algorithm (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression matrix (1)
- gene selection (1)
- general secondary education (1)
- generalization (1)
- generalized discrimination networks (1)
- generalized logic programs (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geovisualization (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph languages (1)
- graph mining (1)
- graph pattern matching (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-search (1)
- graph-transformations (1)
- hardware accelerator (1)
- hardware architecture (1)
- hashrate (1)
- hate speech detection (1)
- health care (1)
- healthcare (1)
- heterogeneity (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- high school (1)
- higher (1)
- history-aware runtime models (1)
- holistic (1)
- home office (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human computer interaction (1)
- human-centered (1)
- human–computer interaction (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity (1)
- identity broker (1)
- image (1)
- image captioning (1)
- image data analysis (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immersion (1)
- immutable values (1)
- in-memory (1)
- in-memory data management (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inertial measurement unit (1)
- inference (1)
- informal and formal learning (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information diffusion (1)
- information extraction (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- instruction (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interaction (1)
- interaction modeling (1)
- interactive course (1)
- interactive media (1)
- interactive simulation (1)
- interactive workshop (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international comparison (1)
- international human rights (1)
- international humanitarian law (1)
- international study (1)
- interpretable machine learning (1)
- interpretative research (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intransitivity (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invasive aspects (1)
- invention (1)
- invention mechanism (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iteration method (1)
- iterative regularization (1)
- job-shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key competencies (1)
- key discovery (1)
- kinaesthetic teaching (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge engineering (1)
- knowledge management system (1)
- knowledge representation (1)
- knowledge transfer (1)
- knowledge work (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law and technology (1)
- leadership (1)
- leanCoP (1)
- learner characteristics (1)
- learning factory (1)
- lebenslanges Lernen (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- lesson (1)
- level-replacement systems (1)
- life-centered (1)
- lifelong learning (1)
- linear programming problem (1)
- linguistic (1)
- link discovery (1)
- linked data (1)
- literature review (1)
- live migration (1)
- liveness (1)
- load balancing (1)
- localization (1)
- location-based (1)
- logic (1)
- logic programming methodology and applications (1)
- logic synthesis (1)
- logical calculus (1)
- logical signaling networks (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- long-term interaction (1)
- loop formulas (1)
- machine (1)
- machine learning algorithms (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- mandatory computer science foundations (1)
- manipulation planning (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated conversation (1)
- mediated learning experience (1)
- medical (1)
- medical documentation (1)
- medical malpractice (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- merged mining (1)
- merkle root (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- metaverse (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- micropayment (1)
- micropayment channels (1)
- migration (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- misconceptions (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile application (1)
- mobile applications (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- model generation (1)
- model repair (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modular counting (1)
- modularity (1)
- molecular network (1)
- molecular networks (1)
- molecular tumor board (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- mood (1)
- morphic (1)
- morphological analysis (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-valued logic (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multimedia learning (1)
- multimodal representations (1)
- multiuser (1)
- musical scales (1)
- musikalische Tonleitern (1)
- multi-task learning (1)
- mutual gaze (1)
- mutual information (1)
- named entity mining (1)
- narratives (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network (1)
- network protocols (1)
- networks-on-chip (1)
- neural (1)
- new technologies (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- nonce (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- notation (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- omega (1)
- online assistance (1)
- online course creation (1)
- online course design (1)
- online learning (1)
- online photographs (1)
- open innovation (1)
- open science (1)
- open science practices in information systems research (1)
- operating system (1)
- optical character recognition (1)
- optimal transport (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- paraconsistency (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel rewriting (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parameter (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- patent (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedagogy (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- perception (1)
- perception differences (1)
- performance (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal (1)
- personal response systems (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- perspective (1)
- persönliche Informationen (1)
- pervasive learning (1)
- petri net (1)
- philosophical foundation of informatics pedagogy (1)
- phone (1)
- physical computing tools (1)
- placement (1)
- planning (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- power-law (1)
- pre-primary level (1)
- predictive models (1)
- preference handling (1)
- preferences (1)
- prefetching (1)
- preprocessing (1)
- presentation (1)
- primary education (1)
- primary healthcare (1)
- primary level (1)
- primary school (1)
- prime pair (1)
- primer pair design (1)
- prior knowledge (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- problem-solving (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process instance (1)
- process model search (1)
- process models (1)
- process refinement (1)
- process scheduling (1)
- processes (1)
- processing (1)
- processor hardware (1)
- professional development (1)
- professors (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming abstraction (1)
- programming experience (1)
- programming in context (1)
- programming language (1)
- programming skills (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- proving (1)
- psychotherapy (1)
- public administration (1)
- public dataset (1)
- public sector organizations (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantified logics (1)
- quantile normalization (1)
- quantum computing (1)
- quantum cryptography (1)
- query matching (1)
- query optimization (1)
- querying (1)
- quorum slices (1)
- railways (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- real-time systems (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- record linkage (1)
- recursive tuning (1)
- reflection (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- reliability assessment (1)
- remodularization (1)
- remote collaboration (1)
- remote sensing (1)
- remote-first (1)
- repair (1)
- reputation management (1)
- requirements engineering (1)
- research data management (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- restoration (1)
- restricted parallelism (1)
- reusable aspects (1)
- reverse engineering (1)
- reversible reaction (1)
- review (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- robustness (1)
- rootstock (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiabilitiy solving (1)
- savanna (1)
- scalability of blockchain (1)
- scarce tokens (1)
- scheduling (1)
- schwach überwachtes maschinelles Lernen (1)
- science (1)
- scm (1)
- screening tools (1)
- scripting languages (1)
- scrollytelling (1)
- search (1)
- search plan generation (1)
- secondary computer science education (1)
- secondary education (1)
- security analytics (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- selbstüberwachtes Lernen (1)
- self-adaptive multiprocessing system (1)
- self-adaptive software (1)
- self-disclosure (1)
- self-efficacy (1)
- self-healing (1)
- self-supervised learning (1)
- semantic analysis (1)
- semantic classification (1)
- semantic web services (1)
- semantics (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- sentiment (1)
- sentiment analysis (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sharing economy (1)
- sidechain (1)
- sign language (1)
- signal processing (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- similarity learning (1)
- similarity measures (1)
- single event upset (1)
- single-case experimental design (1)
- situated learning (1)
- situational awareness (1)
- skeletonization (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- small talk (1)
- smoother (1)
- social attraction (1)
- social media analysis (1)
- social networking (1)
- social networking sites (1)
- sociotechnical (1)
- software analysis (1)
- software architecture (1)
- software development (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- solar particle event (1)
- sorting (1)
- space missions (1)
- spaltenorientierte Datenbanken (1)
- spatio-temporal (1)
- spatio-temporal data management (1)
- specific prime pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spread correction (1)
- spreadsheets (1)
- squeak (1)
- stable matching (1)
- stable model semantics (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- stochastic process (1)
- stratification (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- students’ conceptions (1)
- students’ knowledge (1)
- study (1)
- study problems (1)
- style transfer (1)
- stylization (1)
- super stable matching (1)
- survey mode (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tacit knowledge (1)
- tangible media (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teachers (1)
- teaching (1)
- teaching informatics in general education (1)
- teaching material (1)
- technical notes and rapid communications (1)
- technologies (1)
- technology (1)
- tele-TASK (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terminology (1)
- terrain models (1)
- test case prioritization (1)
- test items (1)
- test results (1)
- test-driven fault navigation (1)
- text classification (1)
- text mining (1)
- textures (1)
- the bright and dark side of social media in the marginalized contexts (1)
- theory (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- timed automata (1)
- tool building (1)
- top-down (1)
- tort law (1)
- touch input (1)
- tptp (1)
- tracing (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- training (1)
- trajectories (1)
- trajectory data (1)
- transaction (1)
- transduction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust model (1)
- tuple spaces (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- uncanny valley (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unification (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- unsupervised learning (1)
- unsupervised methods (1)
- user interaction (1)
- user interfaces (1)
- user-centred (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- various applications (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual (1)
- virtual 3D city model (1)
- virtual 3D city models (1)
- virtual collaboration (1)
- virtual groups (1)
- virtual learning environments (1)
- virtual machine (1)
- virtual mobility (1)
- virtual teams (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle 3D-Stadtmodelle (1)
- virtuelle Realität (1)
- visual analytics (1)
- visual language (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- vocational training (1)
- vulnerabilities (1)
- weak supervision (1)
- weakly (1)
- wearables (1)
- web application (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- weight (1)
- well-being (1)
- word order freezing (1)
- word sense disambiguation (1)
- workload prediction (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
- ‘unplugged’ computing (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (180)
- Institut für Informatik und Computational Science (163)
- Hasso-Plattner-Institut für Digital Engineering GmbH (127)
- Extern (49)
- Fachgruppe Betriebswirtschaftslehre (26)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (18)
- Institut für Physik und Astronomie (8)
- Digital Engineering Fakultät (7)
- Institut für Mathematik (7)
If taking a flipped learning approach, MOOC content can be used for online pre-class instruction, after which students put the knowledge they gained from the MOOC into practice either synchronously or asynchronously. This study examined one such asynchronous course in teacher education. The course ran with 40 students over 13 weeks from February to May 2020. A case study approach was followed, using mixed methods to assess the efficacy of the course. Quantitative data were gathered on achievement of learning outcomes, online engagement, and satisfaction; qualitative data were gathered via student interviews, on which a thematic analysis was undertaken. From a combined analysis of the data, three themes emerged as pertinent to course efficacy: the quality and quantity of communication and collaboration, the suitability of the MOOC, and significance for career development.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, as it is infeasible for humans to capture all this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, and each of them has its own challenges and can be employed in further applications. As a fundamental task in the natural language processing domain, named entity recognition has a long research history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and identify their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task: it extracts semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, and evaluation methodologies. Furthermore, we present two of our approaches, which focus on the named entity linking and relation extraction tasks respectively.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
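The ensemble idea behind BEL can be illustrated with a minimal sketch (the toy knowledge base, the overlap-based scoring rule, and all names here are illustrative assumptions, not the actual BEL implementation): each simple classifier sees only a randomly sampled subset of the relevant context terms, and the final decision aggregates their votes.

```python
import random
from collections import Counter

# Toy knowledge base: entity -> descriptive context terms (hypothetical data).
KB = {
    "Apple_Inc.": {"iphone", "computer", "technology", "cupertino"},
    "Apple_(fruit)": {"tree", "orchard", "juice", "pie"},
}

def simple_classifier(context_subset, kb):
    """Vote for the candidate whose KB terms overlap the sampled context most."""
    scores = {entity: len(terms & context_subset) for entity, terms in kb.items()}
    return max(scores, key=scores.get)

def link_entity(context_terms, kb, n_classifiers=25, sample_size=3, seed=0):
    """Aggregate decisions from an ensemble of simple classifiers, each
    operating on a randomly sampled subset of the relevant context terms."""
    rng = random.Random(seed)
    votes = Counter(
        simple_classifier(set(rng.sample(sorted(context_terms), sample_size)), kb)
        for _ in range(n_classifiers)
    )
    return votes.most_common(1)[0][0]

context = {"technology", "iphone", "launch", "computer", "market"}
print(link_entity(context, KB))  # majority vote resolves the ambiguous mention
```

Sampling different context subsets makes the individual classifiers cheap while the aggregated vote stays robust to locally misleading terms.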
For the task of relation extraction, we focus on extracting a specific group of difficult relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution for the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
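The seed-based bootstrapping idea can be sketched as follows (the corpus, seed pairs, and helper names are hypothetical, and the actual system additionally classifies the generated patterns with the holistic pattern identification method, which this sketch omits). Keeping the argument order of the seed pairs is what fixes the direction of the asymmetric relation.

```python
import re

# Hypothetical mini-corpus; the real system works on large text collections.
corpus = [
    "Alphabet acquired DeepMind in 2014.",
    "Facebook acquired WhatsApp for $19 billion.",
    "Microsoft acquired GitHub to expand its developer tools.",
]
seed_pairs = {("Alphabet", "DeepMind"), ("Facebook", "WhatsApp")}  # user seeds

def learn_patterns(corpus, seeds):
    """Collect the infix between the two seed entities as an extraction pattern.
    The (subject, object) order of the seeds fixes the relation's direction."""
    patterns = set()
    for sent in corpus:
        for a, b in seeds:
            if a in sent and b in sent:
                patterns.add(sent.split(a, 1)[1].split(b, 1)[0])
    return patterns

def extract_pairs(corpus, patterns):
    """Apply the learned patterns to find new entity pairs in the relation."""
    pairs = set()
    for sent in corpus:
        for pattern in patterns:
            match = re.search(r"(\w+)" + re.escape(pattern) + r"(\w+)", sent)
            if match:
                pairs.add((match.group(1), match.group(2)))
    return pairs

patterns = learn_patterns(corpus, seed_pairs)
print(extract_pairs(corpus, patterns))  # includes the unseen Microsoft/GitHub pair
```

The sketch shows why only a handful of seeds can suffice: each seed occurrence generalizes into a pattern that then matches previously unseen pairs.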
Current curricular trends require teachers in Baden-Wuerttemberg (Germany) to integrate Computer Science (CS) into traditional subjects, such as Physical Science. However, concrete guidelines are missing. To fill this gap, we outline an approach where a microcontroller is used to perform and evaluate measurements in the Physical Science classroom. Using the open-source Arduino platform, we expect students to acquire and develop both CS and Physical Science competencies by using a self-programmed microcontroller. In addition to this combined development of competencies in Physical Science and CS, the subject matter will be embedded in suitable contexts and learning environments, such as weather and climate.
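As an illustration of the kind of measurement processing students might program, the following sketch converts a raw analog reading into a temperature. The sensor model (a TMP36-style sensor with 500 mV offset and 10 mV per degree) and the 10-bit, 5 V ADC are assumptions made for illustration; the article does not prescribe a particular sensor.

```python
def adc_to_celsius(reading, v_ref=5.0, adc_max=1023):
    """Convert a 10-bit ADC reading to degrees Celsius, assuming a
    TMP36-style sensor (500 mV offset, 10 mV per degree Celsius)."""
    voltage = reading * v_ref / adc_max   # raw counts -> volts
    return (voltage - 0.5) * 100.0        # volts -> degrees Celsius

# A reading of 153 corresponds to roughly 0.75 V, i.e. about 24.8 degrees.
print(round(adc_to_celsius(153), 1))
```

On the Arduino itself the same arithmetic would run on the value returned by an analog input read; doing the conversion explicitly makes the physics of the sensor visible to students.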
This thesis is concerned with the solution of the blind source separation problem (BSS). The BSS problem occurs frequently in various scientific and technical applications. In essence, it consists in separating meaningful underlying components out of a mixture of a multitude of superimposed signals. In the recent research literature there are two related approaches to the BSS problem: The first is known as Independent Component Analysis (ICA), where the goal is to transform the data such that the components become as independent as possible. The second is based on the notion of diagonality of certain characteristic matrices derived from the data. Here the goal is to transform the matrices such that they become as diagonal as possible. In this thesis we study the latter method of approximate joint diagonalization (AJD) to achieve a solution of the BSS problem. After an introduction to the general setting, the thesis provides an overview on particular choices for the set of target matrices that can be used for BSS by joint diagonalization. As the main contribution of the thesis, new algorithms for approximate joint diagonalization of several matrices with non-orthogonal transformations are developed. These newly developed algorithms will be tested on synthetic benchmark datasets and compared to other previous diagonalization algorithms. Applications of the BSS methods to biomedical signal processing are discussed and exemplified with real-life data sets of multi-channel biomagnetic recordings.
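The joint-diagonality criterion can be made concrete with a small NumPy sketch (a toy example, not one of the thesis's algorithms): for two exactly jointly diagonalizable symmetric matrices, a non-orthogonal joint diagonalizer can be recovered in closed form, driving the off-diagonal energy of both transformed matrices to zero.

```python
import numpy as np

def off_diag_energy(M):
    """Off-diagonality criterion: sum of squared off-diagonal entries."""
    return np.sum(M**2) - np.sum(np.diag(M)**2)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))                                  # unknown mixing matrix
C = [A @ np.diag(rng.uniform(1.0, 2.0, 3)) @ A.T for _ in range(2)]

# For two exactly jointly diagonalizable matrices, the eigenvectors of
# C[0] @ inv(C[1]) recover the mixing matrix up to scaling and permutation;
# its inverse is then a (generally non-orthogonal) joint diagonalizer.
_, A_hat = np.linalg.eig(C[0] @ np.linalg.inv(C[1]))
W = np.linalg.inv(A_hat)
for Ck in C:
    print(off_diag_energy(W @ Ck @ W.T))                     # near zero for both
```

With more than two target matrices, or with estimation noise, no exact solution exists, which is where the iterative approximate joint diagonalization algorithms developed in the thesis come in.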
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
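As a toy illustration of one of the mentioned techniques (not taken from the talk, and far simpler than a CLP(FD) system), the following sketch implements a first-fail labeling strategy, branching on the unassigned variable with the smallest remaining domain, over a decomposed all-different constraint.

```python
def consistent(assign, constraints):
    """Check all binary constraints whose variables are both assigned."""
    return all(pred(assign[x], assign[y])
               for x, y, pred in constraints
               if x in assign and y in assign)

def solve(domains, constraints, assign=None):
    """Backtracking search with a first-fail labeling strategy:
    always branch on the unassigned variable with the smallest domain."""
    assign = assign or {}
    if len(assign) == len(domains):
        return assign
    var = min((v for v in domains if v not in assign),
              key=lambda v: len(domains[v]))        # first-fail heuristic
    for val in domains[var]:
        trial = {**assign, var: val}
        if consistent(trial, constraints):          # prune inconsistent branches
            result = solve(domains, constraints, trial)
            if result:
                return result
    return None

# Toy model: three variables, pairwise different (a decomposed all-different).
domains = {"x": [1, 2, 3], "y": [1, 2], "z": [2, 3]}
ne = lambda a, b: a != b
constraints = [("x", "y", ne), ("x", "z", ne), ("y", "z", ne)]
print(solve(domains, constraints))
```

Real CLP(FD) systems gain much more from propagating global constraints than this decomposition does, which is exactly the kind of difference the talk's benchmark results highlight.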
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents results of research projects executed in 2017. Selected projects presented their results on April 25 and November 15, 2017, at the Future SOC Lab Day events.
Motivation:
Constraint-based modeling approaches allow the estimation of maximal in vivo enzyme catalytic rates that can serve as proxies for enzyme turnover numbers. Yet, genome-scale flux profiling remains a challenge in deploying these approaches to catalogue proxies for enzyme catalytic rates across organisms.
Results:
Here, we formulate a constraint-based approach, termed NIDLE-flux, to estimate fluxes at a genome-scale level by using the principle of efficient usage of expressed enzymes. Using proteomics data from Escherichia coli, we show that the fluxes estimated by NIDLE-flux and the existing approaches are in excellent qualitative agreement (Pearson correlation > 0.9). We also find that the maximal in vivo catalytic rates estimated by NIDLE-flux exhibit a Pearson correlation of 0.74 with in vitro enzyme turnover numbers. However, NIDLE-flux results in a 1.4-fold increase in the size of the estimated maximal in vivo catalytic rates in comparison to the contenders. Integration of the maximum in vivo catalytic rates with publicly available proteomics and metabolomics data provides a better match to fluxes estimated by NIDLE-flux. Therefore, NIDLE-flux facilitates more effective usage of proteomics data to estimate proxies for kcatomes.
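The general flavor of constraint-based flux estimation can be sketched with a toy flux balance analysis problem (a generic illustration, not the NIDLE-flux formulation): maximize a target flux subject to steady-state mass balance and capacity bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
S = np.array([[1, -1,  0],    # mass balance for metabolite A
              [0,  1, -1]])   # mass balance for metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 flux units

# Maximize the biomass flux v3 (linprog minimizes, hence the sign flip)
# subject to the steady-state constraint S v = 0 and the flux bounds.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)   # steady state forces v1 = v2 = v3 = 10
```

In genome-scale models the same linear program has thousands of reactions; approaches like NIDLE-flux add further constraints derived from enzyme expression data to narrow down the flux solution space.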
The dark side of metaverse: a multi-perspective of deviant behaviors from PLS-SEM and fsQCA findings
(2024)
The metaverse has created a huge buzz of interest as the phenomenon emerges. The behavioral aspect of the metaverse includes user engagement and deviant behaviors in the metaverse. Such technology has brought various dangers to individuals and society. There are growing reported cases of sexual abuse, racism, harassment, hate speech, and bullying, because online disinhibition makes users feel less restrained. This study responded to the literature call by investigating the effect of technical and social features, through the mediating roles of security and privacy, on deviant behaviors in the metaverse. The data were collected from 1121 virtual network users. Partial least squares structural equation modeling (PLS-SEM) and fuzzy-set qualitative comparative analysis (fsQCA) were used. The PLS-SEM results revealed that social features such as user-to-user interaction, homophily, social ties, and social identity, as well as technical design features such as immersive experience and invisibility, significantly affect users' deviant behavior in the metaverse. The fsQCA results provided insights into multiple causal solutions and configurations. This study is distinctive in that it provides decisive results by understanding users' deviant behavior through both symmetrical and asymmetrical approaches to virtual networks.
An increasing demand for functionality and flexibility leads to the integration of previously isolated system solutions into a so-called System of Systems (SoS). Furthermore, the overall SoS should be adaptive, reacting to changing requirements and environmental conditions. Because an SoS is composed of different independent systems that may join or leave the overall SoS at arbitrary points in time, the SoS structure varies during the system's lifetime, and the overall SoS behavior emerges from the capabilities of the contained subsystems. In such complex system ensembles, new demands arise for understanding the interaction among subsystems, the coupling of shared system knowledge, and the influence of local adaptation strategies on the overall resulting system behavior. In this report, we formulate research questions focusing on modeling interactions between system parts inside an SoS. Furthermore, we define our notion of important system types and terms by reviewing the current state of the art in the literature. Having established a common understanding of SoS, we discuss a set of typical SoS characteristics and derive general requirements for a collaboration modeling language. Additionally, we survey a broad spectrum of real scenarios and frameworks from the literature and discuss how these scenarios cope with different characteristics of SoS. Finally, we discuss the state of the art of existing modeling languages that cope with collaborations for different system types such as SoS.
Recently, due to increasing demands on functionality and flexibility, previously isolated systems have become interconnected, yielding powerful adaptive System of Systems (SoS) solutions with overall robust, flexible, and emergent behavior. An adaptive SoS comprises a variety of different system types, ranging from small embedded systems to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy, and optimizes its behavior to reach its goals. On the other hand, systems must cooperate with each other to enrich the overall functionality and jointly perform on the SoS level, reaching global goals that cannot be satisfied by any one system alone. Because local and global behavior optimizations are difficult to reconcile, conflicts may arise between systems that have to be resolved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by considering the adaptation capabilities in form of feedback loops as first class entities. Moreover, this thesis adopts the Models@runtime approach to integrate the available knowledge in the systems as runtime models into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS to reason about individual system functionality and how it emerges via collaborations to an overall joint SoS behavior. Therefore, the modeling language approach enables the specification of local adaptive system behavior, the integration of knowledge in form of runtime models and the joint interactions via collaboration to place the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and point to possible system threats. Moreover, a simulation framework is presented, which allows the direct execution of the modeled SoS architecture. Therefore, the analysis rules and the simulation framework can be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling up-to-date research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
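The feedback loops that the modeling language treats as first-class entities follow the familiar monitor-analyze-plan-execute pattern. A minimal sketch of such a loop (class, attribute names, and the scaling policy are illustrative assumptions, not the thesis's metamodel), with a dictionary standing in for a runtime model:

```python
class FeedbackLoop:
    """Minimal monitor-analyze-plan-execute loop over a runtime model.
    Names and the scaling policy are illustrative, not the thesis's metamodel."""

    def __init__(self, system, target_load):
        self.system = system            # dictionary standing in for a runtime model
        self.target_load = target_load  # adaptation goal

    def run_once(self):
        load = self.system["load"]              # monitor: read the sensor value
        if load > self.target_load:             # analyze: detect a goal violation
            plan = {"workers": self.system["workers"] + 1}   # plan: scale up
            self.system.update(plan)            # execute: adapt the runtime model
        return self.system

loop = FeedbackLoop({"load": 0.9, "workers": 2}, target_load=0.7)
print(loop.run_once())   # {'load': 0.9, 'workers': 3}
```

In an SoS setting, several such loops run in independent systems and must be coordinated via collaborations, which is precisely where the local-versus-global conflicts discussed above arise.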
While the role and consequences of being a bystander to face-to-face bullying have received some attention in the literature, to date little is known about the effects of being a bystander to cyberbullying. It is also unknown how empathy might affect the negative consequences associated with being a bystander of cyberbullying. The present study examined the longitudinal association between being a bystander of cyberbullying and depression and anxiety, and the moderating role of empathy in the relationship between being a bystander of cyberbullying and subsequent depression and anxiety. At Time 1, 1,090 adolescents (M-age = 12.19; 50% female) from the United States completed questionnaires on empathy, cyberbullying roles (bystander, perpetrator, victim), depression, and anxiety. One year later, at Time 2, 1,067 adolescents (M-age = 13.76; 51% female) completed questionnaires on depression and anxiety. Results revealed a positive association between being a bystander of cyberbullying and depression and anxiety. Further, empathy moderated the positive relationship between being a bystander of cyberbullying and depression, but not anxiety. Implications for intervention and prevention programs are discussed.
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The range of methods applied in this study comprises spatio-temporal GIS analysis as well as 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance, estimations of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel density estimation (KDE) techniques, LISA-based verification of KDE hotspots, as well as geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for efficient communication of complex findings of geospatial crime scene analysis.
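The KDE-based hotspot identification step can be illustrated with a small sketch on synthetic incident coordinates (illustrative only; the thesis works on real crime scene data and adds LISA-based verification of the detected hotspots):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Synthetic incident coordinates: a dense cluster plus uniform background noise.
cluster = rng.normal(loc=[2.0, 2.0], scale=0.2, size=(80, 2))
noise = rng.uniform(0, 5, size=(20, 2))
incidents = np.vstack([cluster, noise]).T        # shape (2, n) for gaussian_kde

# Kernel density estimate over the study area; hotspots are regions where
# the estimated density clearly exceeds the background level.
kde = gaussian_kde(incidents)
density_at_cluster = kde([[2.0], [2.0]])[0]
density_far_away = kde([[4.5], [0.5]])[0]
print(density_at_cluster > density_far_away)     # the hotspot stands out
```

In the thesis workflow, such density surfaces are draped onto the 3D city model, turning the statistical result into a geovisual one.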
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
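The eigenvalue-based metrics mentioned above can be sketched as follows (feature definitions vary in the literature; this uses one common convention, with planarity (λ2 - λ3)/λ1 from the descending eigenvalues of the local covariance and verticality as the absolute z component of the estimated normal):

```python
import numpy as np

def local_features(neighborhood):
    """Eigenvalue-based local geometry of a k-neighborhood (n x 3 array):
    planarity in [0, 1] and the verticality of the estimated normal."""
    cov = np.cov(neighborhood.T)
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    l1, l2, l3 = eigval[::-1]                 # descending: l1 >= l2 >= l3
    planarity = (l2 - l3) / l1
    normal = eigvec[:, 0]                     # eigenvector of smallest eigenvalue
    verticality = abs(normal[2])              # |z| component of the normal
    return planarity, verticality

# A noisy horizontal patch should be planar, with a near-vertical normal.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0, 1, 200),
                         rng.uniform(0, 1, 200),
                         rng.normal(0, 0.005, 200)])
planarity, verticality = local_features(patch)
print(planarity > 0.8, verticality > 0.95)
```

Computed per point over local neighborhoods, such features feed both the geometric analysis and, as input channels, the machine learning classifiers.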
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established image vision methods to analyze images rendered from mobile mapping data efficiently. The two presented semantic classification methods working directly on 3D point clouds are use case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization, and intuitive filtering of the latest viral sequences are powerful tools for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and a consensus, and finally presents the results in an interactive app, making access and reporting simple, flexible, and fast.
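The consensus and variant inference steps of such a pipeline can be illustrated with a toy sketch (the sequences are hypothetical; CovRadar's actual pipeline operates on full spike-gene alignments):

```python
from collections import Counter

# Toy multiple sequence alignment of short gene fragments (hypothetical data).
alignment = [
    "ATGTTC",
    "ATGTTT",
    "ATGATC",
]

def consensus(alignment):
    """Column-wise majority consensus over a multiple sequence alignment."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*alignment))

def variants(seq, ref):
    """Report 1-based positions where a sequence deviates from the reference."""
    return [(i + 1, r, s) for i, (r, s) in enumerate(zip(ref, seq)) if r != s]

ref = consensus(alignment)          # "ATGTTC"
print(variants("ATGATC", ref))      # [(4, 'T', 'A')]
```

Scaling this idea to hundreds of thousands of sequences is largely an engineering problem of alignment and aggregation, which is what the CovRadar pipeline addresses.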
STG decomposition is a promising approach to tackle the complexity problems arising in logic synthesis of speed independent circuits, a robust asynchronous (i.e. clockless) circuit type. Unfortunately, STG decomposition can result in components that in isolation have irreducible CSC conflicts. Generalising earlier work, it is shown how to resolve such conflicts by introducing internal communication between the components via structural techniques only.
Most of the microelectronic circuits fabricated today are synchronous, i.e. they are driven by one or several clock signals. Synchronous circuit design faces several fundamental challenges such as high-speed clock distribution, integration of multiple cores operating at different clock rates, reduction of power consumption, and dealing with voltage, temperature, manufacturing and runtime variations. Asynchronous or clockless design plays a key role in alleviating these challenges; however, the design and test of asynchronous circuits is much more difficult in comparison to their synchronous counterparts. A driving force for a widespread use of asynchronous technology is the availability of mature EDA (Electronic Design Automation) tools which provide an entire automated design flow, starting from an HDL (Hardware Description Language) specification and yielding the final circuit layout. Even though there has been much progress in developing such EDA tools for asynchronous circuit design during the last two decades, their maturity level as well as their acceptance is still not comparable with tools for synchronous circuit design. In particular, logic synthesis (which implies the application of Boolean minimisation techniques) for the entire system's control path can significantly improve the efficiency of the resulting asynchronous implementation, e.g. in terms of chip area and performance. However, logic synthesis, in particular for asynchronous circuits, suffers from complexity problems. Signal Transition Graphs (STGs) are labelled Petri nets which are widely used to specify the interface behaviour of speed independent (SI) circuits - a robust subclass of asynchronous circuits. STG decomposition is a promising approach to tackle complexity problems like state space explosion in logic synthesis of SI circuits.
The (structural) decomposition of STGs is guided by a partition of the output signals and generates a usually much smaller component STG for each partition member, i.e. a component STG with a much smaller state space than the initial specification. However, decomposition can result in component STGs that in isolation have so-called irreducible CSC conflicts (i.e. these components are not SI synthesisable anymore) even if the specification has none of them. A new approach is presented to avoid such conflicts by introducing internal communication between the components. So far, STG decompositions are guided by the finest output partitions, i.e. one output per component. However, this might not yield optimal circuit implementations. Efficient heuristics are presented to determine coarser partitions leading to improved circuits in terms of chip area. For the new algorithms correctness proofs are given and their implementations are incorporated into the decomposition tool DESIJ. The presented techniques are successfully applied to some benchmarks - including 'real-life' specifications arising in the context of control resynthesis - which delivered promising results.
Background and aims: Accurate and user-friendly assessment tools quantifying alcohol consumption are a prerequisite to effective prevention and treatment programmes, including Screening and Brief Intervention. Digital tools offer new potential in this field. We developed the ‘Animated Alcohol Assessment Tool’ (AAA-Tool), a mobile app providing an interactive version of the World Health Organization's Alcohol Use Disorders Identification Test (AUDIT) that facilitates the description of individual alcohol consumption via culturally informed animation features. This pilot study evaluated the Russia-specific version of the Animated Alcohol Assessment Tool with regard to (1) its usability and acceptability in a primary healthcare setting, (2) the plausibility of its alcohol consumption assessment results and (3) the adequacy of its Russia-specific vessel and beverage selection. Methods: Convenience samples of 55 patients (47% female) and 15 healthcare practitioners (80% female) in 2 Russian primary healthcare facilities self-administered the Animated Alcohol Assessment Tool and rated their experience on the Mobile Application Rating Scale – User Version. Usage data was automatically collected during app usage, and additional feedback on regional content was elicited in semi-structured interviews. Results: On average, patients completed the Animated Alcohol Assessment Tool in 6:38 min (SD = 2.49, range = 3.00–17.16). User satisfaction was good, with all subscale Mobile Application Rating Scale – User Version scores averaging >3 out of 5 points. A majority of patients (53%) and practitioners (93%) would recommend the tool to ‘many people’ or ‘everyone’. Assessed alcohol consumption was plausible, with a low number (14%) of logically impossible entries. Most patients reported the Animated Alcohol Assessment Tool to reflect all vessels (78%) and all beverages (71%) they typically used. 
Conclusion: High acceptability ratings by patients and healthcare practitioners, acceptable completion time, plausible alcohol usage assessment results and perceived adequacy of region-specific content underline the Animated Alcohol Assessment Tool's potential to provide a novel approach to alcohol assessment in primary healthcare. After its validation, the Animated Alcohol Assessment Tool might contribute to reducing alcohol-related harm by facilitating Screening and Brief Intervention implementation in Russia and beyond.
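The WHO's AUDIT underlying the app has a fixed, well-documented scoring scheme (10 items, total 0-40). The following is a minimal illustrative sketch of that scheme, not code from the AAA-Tool; the function names and the risk-band labels are our own, and exact cutoffs vary by guideline:

```python
def audit_score(responses):
    """Score a completed AUDIT questionnaire.

    `responses` is a list of 10 integers: items 1-8 are scored 0-4,
    items 9 and 10 are scored 0, 2, or 4 (WHO scoring scheme).
    Returns the total score (0-40).
    """
    if len(responses) != 10:
        raise ValueError("AUDIT has exactly 10 items")
    for i, r in enumerate(responses):
        allowed = {0, 1, 2, 3, 4} if i < 8 else {0, 2, 4}
        if r not in allowed:
            raise ValueError(f"invalid score {r} for item {i + 1}")
    return sum(responses)

def risk_band(total):
    # Commonly cited WHO risk zones; cutoffs differ across guidelines.
    if total >= 20:
        return "possible dependence"
    if total >= 16:
        return "high risk"
    if total >= 8:
        return "hazardous or harmful use"
    return "low risk"
```

A screening app such as the AAA-Tool would feed its animated assessment results into exactly this kind of deterministic scoring step.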
Developing large software projects is a complicated task and can be demanding for developers. Continuous integration is common practice for reducing complexity. By integrating and testing changes often, changesets are kept small and therefore easily comprehensible. Travis CI is a service that offers continuous integration and continuous deployment in the cloud. Software projects are built, tested, and deployed using the Travis CI infrastructure without interrupting the development process. This report describes how Travis CI works, presents how time-driven, periodic building and CI data visualization can be implemented, and proposes a way of dealing with dependency problems.
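One way to realize time-driven, periodic building is to let an external scheduler (e.g. cron) trigger builds through the Travis CI API. The sketch below only assembles the HTTP request for the public v3 "trigger build" endpoint; the repository slug and token are placeholders, nothing is actually sent, and this is an illustration rather than the report's implementation:

```python
def build_trigger_request(repo_slug, branch, api_token,
                          host="https://api.travis-ci.com"):
    """Assemble the HTTP request that triggers a Travis CI build.

    Follows the Travis CI API v3 convention:
    POST /repo/{slug}/requests, with the slug URL-encoded
    ('/' becomes '%2F') and a Travis-API-Version header.
    """
    encoded = repo_slug.replace("/", "%2F")
    return {
        "method": "POST",
        "url": f"{host}/repo/{encoded}/requests",
        "headers": {
            "Travis-API-Version": "3",
            "Authorization": f"token {api_token}",
            "Content-Type": "application/json",
        },
        "body": {"request": {"branch": branch}},
    }
```

A nightly cron entry sending this request would rebuild the project even when no commit was pushed, which is exactly the time-driven scenario the report addresses.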
COMMIT
(2022)
Composition and functions of microbial communities affect important traits in diverse hosts, from crops to humans. Yet, mechanistic understanding of how the metabolism of individual microbes is affected by the community composition and metabolite leakage is lacking. Here, we first show that the consensus of automatically generated metabolic reconstructions improves the quality of the draft reconstructions, measured by comparison to reference models. We then devise an approach for gap filling, termed COMMIT, that considers metabolites for secretion based on their permeability and the composition of the community. By applying COMMIT with two soil communities from the Arabidopsis thaliana culture collection, we could significantly reduce the gap-filling solution in comparison to filling gaps in individual reconstructions without affecting the genomic support. Inspection of the metabolic interactions in the soil communities allows us to identify microbes with community roles of helpers and beneficiaries. Therefore, COMMIT offers a versatile, fully automated solution for large-scale modelling of microbial communities for diverse biotechnological applications.
Author summary: Microbial communities are important in ecology, human health, and crop productivity. However, detailed information on the interactions within natural microbial communities is hampered by the community size, lack of detailed information on the biochemistry of single organisms, and the complexity of interactions between community members. Metabolic models comprise biochemical reaction networks based on the genome annotation, and can provide mechanistic insights into community functions. Previous analyses of microbial community models have been performed with high-quality reference models or models generated using a single reconstruction pipeline.
However, these models do not contain information on the composition of the community that determines the metabolites exchanged between the community members. In addition, the quality of metabolic models is affected by the reconstruction approach used, with direct consequences on the inferred interactions between community members. Here, we use fully automated consensus reconstructions from four approaches to arrive at functional models with improved genomic support while considering the community composition. We applied our pipeline to two soil communities from the Arabidopsis thaliana culture collection, providing only genome sequences. Finally, we show that the obtained models have 90% genomic support and demonstrate that the derived interactions are corroborated by independent computational predictions.
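At its core, forming a consensus from several draft reconstructions can be thought of as voting over reaction sets. The sketch below is a deliberately simplified majority vote for illustration only; it is not the COMMIT algorithm, and the pipeline names and threshold are assumptions:

```python
from collections import Counter

def consensus_reactions(draft_models, min_votes=None):
    """Majority-vote consensus over draft reconstruction reaction sets.

    `draft_models` maps a pipeline name to the set of reaction IDs it
    reconstructed. A reaction enters the consensus if at least
    `min_votes` pipelines agree (default: strict majority).
    """
    if min_votes is None:
        min_votes = len(draft_models) // 2 + 1
    votes = Counter()
    for reactions in draft_models.values():
        votes.update(reactions)
    return {rxn for rxn, n in votes.items() if n >= min_votes}
```

Requiring agreement between pipelines is one simple way to raise genomic support, since reactions predicted by only one tool are more likely to be annotation artifacts.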
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
How Things Work
(2015)
Recognizing and defining functionality is a key competence adopted in all kinds of programming projects. This study investigates how far students without specific informatics training are able to identify and verbalize functions and parameters. It presents observations from classroom activities on functional modeling in high school chemistry lessons with altogether 154 students. Finally it discusses the potential of functional modelling to improve the comprehension of scientific content.
Business Process Management (BPM) emerged as a means to control, analyse, and optimise business operations. Conceptual models are of central importance for BPM. Most prominently, process models define the behaviour that is performed to achieve a business value. In essence, a process model is a mapping of properties of the original business process to the model, created for a purpose. Different modelling purposes, therefore, result in different models of a business process. Against this background, the misalignment of process models often observed in the field of BPM is no surprise. Even if the same business scenario is considered, models created for strategic decision making differ in content significantly from models created for process automation. Despite their differences, process models that refer to the same business process should be consistent, i.e., free of contradictions. Apparently, there is a trade-off between strictness of a notion of consistency and appropriateness of process models serving different purposes. Existing work on consistency analysis builds upon behaviour equivalences and hierarchical refinements between process models. Hence, these approaches are computationally hard and do not offer the flexibility to gradually relax consistency requirements towards a certain setting. This thesis presents a framework for the analysis of behaviour consistency that takes a fundamentally different approach. As a first step, an alignment between corresponding elements of related process models is constructed. Then, this thesis conducts behavioural analysis grounded on a relational abstraction of the behaviour of a process model, its behavioural profile. Different variants of these profiles are proposed, along with efficient computation techniques for a broad class of process models. Using behavioural profiles, consistency of an alignment between process models is judged by different notions and measures. 
The consistency measures are also adjusted to assess conformance of process logs that capture the observed execution of a process. Further, this thesis proposes various complementary techniques to support consistency management. It elaborates on how to implement consistent change propagation between process models, addresses the exploration of behavioural commonalities and differences, and proposes a model synthesis for behavioural profiles.
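To give a flavour of what a behavioural profile is, the sketch below derives the profile relations (strict order, exclusiveness, interleaving) from the weak order observed in a set of example traces. This is an illustration of the relational abstraction only: actual profiles are computed from the complete behaviour of a process model, not from a finite trace sample.

```python
def weak_order(traces):
    """(x, y) is in the weak order if x occurs before y in some trace."""
    pairs = set()
    for trace in traces:
        for i, x in enumerate(trace):
            for y in trace[i + 1:]:
                pairs.add((x, y))
    return pairs

def profile_relation(x, y, weak):
    """Derive the behavioural-profile relation of a pair of activities."""
    xy, yx = (x, y) in weak, (y, x) in weak
    if xy and yx:
        return "interleaving"
    if xy:
        return "strict order"
    if yx:
        return "inverse strict order"
    return "exclusive"
```

Because the relations partition all activity pairs, two aligned models can then be compared pair by pair, which is what makes gradually relaxable consistency notions and measures possible.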
ProtoSense
(2015)
The development of new and better optimization and approximation methods for Job Shop Scheduling Problems (JSP) uses simulations to compare their performance. The test data required for this has an uncertain influence on the simulation results, because the feasible search space can be changed drastically by small variations of the initial problem model. Methods could benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires a systematic describability of the test data in order to be able to compile problem-adequate data sets. This article reviews the test data used in the literature for comparing methods. It also shows how and why the differences in test data have to be taken into account. From this, corresponding challenges are derived which the management of test data must face in the context of JSP research.
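Systematic describability of test data presupposes a canonical, comparable representation of a problem instance. As a minimal illustration (our own, not a proposal from the article), a JSP instance can be serialized canonically and fingerprinted, so that data sets can be identified, deduplicated, and reused reliably:

```python
import hashlib
import json

def jsp_fingerprint(jobs):
    """Fingerprint a JSP instance.

    `jobs` is a list of jobs; each job is a sequence of
    (machine, duration) operations in technological order.
    Canonical JSON serialization makes the fingerprint independent
    of container types and formatting.
    """
    canonical = json.dumps(
        [[[int(m), int(d)] for m, d in job] for job in jobs],
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Even a one-unit change in a single operation duration yields a different fingerprint, which mirrors the article's point that small variations of the problem model produce effectively different test instances.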
Decubitus is one of the most relevant diseases in nursing and the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, methods of Deep Learning are employed. Two approaches are examined, a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates and a U-Net for the derivation of probability distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
Additionally, we give a brief overview of the medical data annotation tool annoto we developed in the bachelor project and furthermore conclude that the Scrum framework and agile practices enhanced our development workflow.
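The heatmap-based approach reduces joint localization to finding the most probable cell per heatmap; errors are then aggregated as a mean per-joint position error (MPJPE). A minimal, dependency-free sketch of these two steps, in grid coordinates rather than centimetres:

```python
import math

def argmax2d(heatmap):
    """Return the (row, col) position of the maximum of a 2D list."""
    best, pos = float("-inf"), (0, 0)
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def mpjpe(pred_heatmaps, true_joints):
    """Mean Euclidean distance between heatmap peaks and ground truth."""
    dists = [
        math.dist(argmax2d(h), t)
        for h, t in zip(pred_heatmaps, true_joints)
    ]
    return sum(dists) / len(dists)
```

In the pressure-mattress setting, each grid cell corresponds to a known physical sensor spacing, so the grid-space error converts directly into the centimetre figures reported above.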
This paper originated from discussions about the need for important changes in the curriculum for Computing, including two focus group meetings at IFIP conferences over the last two years. The paper examines how recent developments in curriculum, together with insights from curriculum thinking in other subject areas, especially mathematics and science, can inform curriculum design for Computing. The analysis presented in the paper provides insights into the complexity of curriculum design as well as identifying important constraints and considerations for the ongoing development of a vision and framework for a Computing curriculum.
When realizing a programming language as a VM, implementing behavior as part of the VM, as a primitive, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance on the one hand and ease of implementation, reuse, and changeability on the other might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM, a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that due to the tracing just-in-time compiler, the performance gap has narrowed by one order of magnitude, to within one order of magnitude.
The exponential growth in the number of web sites and Internet users makes the WWW the most important global information resource. From information publishing and electronic commerce to entertainment and social networking, the Web allows inexpensive and efficient access to the services provided by individuals and institutions. The basic units for distributing these services are the web sites scattered throughout the world. However, the extreme fragility of web services and content, the strong competition between similar services supplied by different sites, and the wide geographic distribution of web users create an urgent need for web managers to track and understand the usage interest of their web customers. This thesis, "X-tracking the Usage Interest on Web Sites", aims to fulfill this requirement. "X" stands for two meanings: one is that the usage interest differs between web sites, and the other is that usage interest is depicted from multiple aspects: internal and external, structural and conceptual, objective and subjective. "Tracking" indicates that our focus is on locating and measuring the differences and changes among usage patterns. This thesis presents methodologies for discovering usage interest on three kinds of web sites: a public information portal site, an e-learning site that provides various kinds of streaming lectures, and a social site that hosts public discussions on IT issues. On the different sites, we concentrate on different issues related to mining usage interest. Educational information portal sites were the first implementation scenario for discovering usage patterns and optimizing the organization of web services. In such cases, the usage patterns are modeled as frequent page sets, navigation paths, navigation structures or graphs. However, a necessary requirement is to rebuild individual behaviors from the usage history. We give a systematic study of how to rebuild individual behaviors.
Besides, this thesis shows a new strategy for building content clusters based on pair browsing retrieved from usage logs. The difference between such clusters and the original web structure reveals the distance between the destinations on the usage side and the expectations on the design side. Moreover, we study the problem of tracking the changes of usage patterns over their life cycles. The changes are described from the internal side, integrating conceptual and structural features, and from the external side for the physical features; and from the local side, measuring the difference between two time spans, and the global side, showing the change tendency along the life cycle. A platform, Web-Cares, is developed to discover the usage interest, to measure the difference between usage interest and site expectation, and to track the changes of usage patterns. An e-learning site provides teaching materials such as slides, recorded lecture videos and exercise sheets. We focus on discovering the learning interest in streaming lectures, such as RealMedia, MP4 and Flash clips. Compared to the information portal site, the usage of streaming lectures encapsulates variables such as viewing time and actions during the learning process. The learning interest is discovered in the form of answering six questions, which cover finding the relations between pieces of lectures and the preference among different forms of lectures. We focus in particular on detecting the changes of learning interest in the same course across different semesters. Differences in content and structure between two courses drive the changes in learning interest. We give an algorithm for measuring the difference in learning interest, integrated with a similarity comparison between courses. A search engine, TASK-Moniminer, is created to help teachers query the learning interest in their streaming lectures on the tele-TASK site.
A social site acts as an online community attracting web users to discuss common topics and share interesting information. Compared to the public information portal site and the e-learning site, the rich interactions among users and web content lead to a wider range of content quality but, on the other hand, provide more possibilities to express and model usage interest. We propose a framework for finding and recommending high-reputation articles on a social site. We observed that reputation falls into global and local categories, and that the quality of articles with high reputation is related to their content features. Based on these observations, our framework first finds the articles having global or local reputation, then clusters articles based on their content relations, and finally selects and recommends articles from each cluster based on their reputation ranks.
Generating a novel and descriptive caption of an image is drawing increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM (Long Short-Term Memory)) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information at a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirror are proposed to prevent over-fitting in training deep models. To understand how our models "translate" an image to a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial to increase model generality and gain performance. We also demonstrate that the performance of transfer learning of the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.
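The augmentation techniques mentioned (multi-crop and mirroring) are simple index operations on the image grid. A dependency-free sketch on nested lists, purely to illustrate the idea; the five-crop layout is a common convention, not necessarily the exact scheme of the paper:

```python
def crop(img, top, left, height, width):
    """Extract a rectangular crop from an image given as a 2D list."""
    return [row[left:left + width] for row in img[top:top + height]]

def mirror_horizontal(img):
    """Flip left-right (the classic mirror augmentation)."""
    return [row[::-1] for row in img]

def mirror_vertical(img):
    """Flip top-bottom, as in the proposed vertical-mirror augmentation."""
    return img[::-1]

def multi_crop(img, size):
    """Four corner crops plus the centre crop of a square image."""
    n = len(img)
    c = (n - size) // 2
    offsets = [(0, 0), (0, n - size), (n - size, 0),
               (n - size, n - size), (c, c)]
    return [crop(img, t, l, size, size) for t, l in offsets]
```

Each augmented copy is a label-preserving variant of the original image, which is why these operations enlarge the effective training set and reduce over-fitting.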
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime and are directly executed by interpretation to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software by a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify and an interpreter to execute feedback loops. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in the adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
This thesis discusses in detail EUREMA by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA as an integrated and open MDE approach for engineering self-adaptive software seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
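The core idea of executing an explicit feedback-loop model can be illustrated by a toy interpreter that walks a declared sequence of MAPE activities over a shared runtime model. This is of course a drastic simplification of EUREMA's megamodel language; the self-repair activities below are invented for illustration:

```python
def run_feedback_loop(megamodel, runtime_model, iterations=1):
    """Execute a feedback loop given as data.

    `megamodel` is an ordered list of (name, activity) pairs; each
    activity is a function that reads and updates the shared
    `runtime_model` dictionary (the knowledge).
    """
    for _ in range(iterations):
        for _name, activity in megamodel:
            activity(runtime_model)
    return runtime_model

# A self-repair loop expressed as an interpretable model (illustrative).
def monitor(m):  m["failures"] = m.get("observed_failures", 0)
def analyze(m):  m["needs_repair"] = m["failures"] > 0
def plan(m):     m["plan"] = ["restart"] if m["needs_repair"] else []
def execute(m):
    if "restart" in m["plan"]:
        m["observed_failures"] = 0  # repair applied

self_repair = [("M", monitor), ("A", analyze), ("P", plan), ("E", execute)]
```

Because the loop is plain data, it can be inspected and modified between iterations, which hints at why interpretation gives EUREMA the flexibility to adapt feedback loops at runtime.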
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
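The region-forming step can be approximated by connected-component search over non-empty cells: groups of cells separated by empty rows and columns end up in different components, whose bounding boxes become candidate regions. This is a simplified stand-in for illustration; Mondrian itself works on a rendered image and uses a clustering algorithm:

```python
def find_regions(grid):
    """Return bounding boxes (top, left, bottom, right) of connected
    groups of non-empty cells, using 8-neighbour connectivity.

    `grid` is a 2D list where empty cells are None.
    """
    rows, cols = len(grid), len(grid[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None or (r, c) in seen:
                continue
            # Flood-fill the component starting at (r, c).
            stack, cells = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                cells.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] is not None
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
            ys = [y for y, _ in cells]
            xs = [x for _, x in cells]
            regions.append((min(ys), min(xs), max(ys), max(xs)))
    return regions
```

Running this over every file of a collection yields one set of boxes per file, and files whose box arrangements match are exactly the candidates for a shared layout template.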
Any system at play in a data-driven project has a fundamental requirement: the ability to load data. The de-facto standard format to distribute and consume raw data is CSV. Yet, the plain-text and flexible nature of this format makes such files often difficult to parse and correctly load their content, requiring cumbersome data preparation steps. We propose a benchmark to assess the robustness of systems in loading data from non-standard CSV formats and with structural inconsistencies. First, we formalize a model to describe the issues that affect real-world files and use it to derive a systematic "pollution" process to generate dialects for any given grammar. Our benchmark leverages the pollution framework for the CSV format. To guide pollution, we have surveyed thousands of real-world, publicly available CSV files, recording the problems we encountered. We demonstrate the applicability of our benchmark by testing and scoring 16 different systems: popular CSV parsing frameworks, relational database tools, spreadsheet systems, and a data visualization tool.
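A single pollution step can be as simple as re-serializing a well-formed table in a non-standard dialect. The sketch below (using Python's csv module; the particular delimiter and quote character are illustrative choices, not the benchmark's actual pollution catalogue) produces output that standard-compliant loaders may mishandle:

```python
import csv
import io

def pollute_dialect(csv_text, delimiter=";", quotechar="'"):
    """Re-serialize RFC 4180-style CSV text in a non-standard dialect."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    out = io.StringIO()
    writer = csv.writer(out, delimiter=delimiter, quotechar=quotechar,
                        quoting=csv.QUOTE_ALL, lineterminator="\n")
    writer.writerows(rows)
    return out.getvalue()
```

Scoring a system then amounts to loading the polluted file and comparing the recovered table against the clean original, one pollution at a time.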
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, often exploratory or ad-hoc to specific datasets; the large repertoire of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the latter stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., to correctly parse tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3000 real-world CSV files and identify multiple structural issues that let files deviate from the CSV standard, e.g., by having inconsistent delimiters or containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse CSV files that have a non-standard structure, without any previous preparation. We report on our experiments on using Pollock to evaluate the performance of 16 real-world data management systems.
Following, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, which is based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system to assist users in identifying layout templates in a dataset, classes of files that have the same structure, and therefore can be prepared by applying the same preparation pipeline.
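A crude stand-in for file-wise structural similarity (the dissertation's actual measure is graph-based) abstracts every line to a pattern of character classes and compares the resulting pattern sets; files with the same similarity profile would then share a layout template:

```python
def row_pattern(line):
    """Abstract a line to character classes: letters -> A, digits -> D.

    Delimiters and other punctuation are kept verbatim, so they
    dominate the pattern, as befits a structural (not content) view.
    """
    out = []
    for ch in line:
        if ch.isdigit():
            ch = "D"
        elif ch.isalpha():
            ch = "A"
        out.append(ch)
    return "".join(out)

def structural_similarity(text_a, text_b):
    """Jaccard similarity of the sets of row patterns of two files."""
    pa = {row_pattern(l) for l in text_a.splitlines()}
    pb = {row_pattern(l) for l in text_b.splitlines()}
    if not pa and not pb:
        return 1.0
    return len(pa & pb) / len(pa | pb)
```

Note that two files with entirely different values but identical delimiters and field types score 1.0, which is precisely the behaviour wanted from a structure-only measure.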
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data preparation efforts estimation.
Our experimental results show that structural metadata, whether captured explicitly through parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, are fundamental to automating several tasks, to scaling preparation up to large quantities of files, and to providing repeatable preparation pipelines.
Exploratory Data Analysis
(2014)
In bioinformatics, the term exploratory data analysis refers to a range of methods for getting an overview of large biological data sets; it thus helps to create a framework for further analysis and hypothesis testing. The workflow presented here supports this important first step in analyzing data produced by high-throughput technologies. Its results are various plots showing the structure of the measurements. The goal of the workflow is to automate exploratory data analysis while still guaranteeing flexibility. The basic tool is the free software R.
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for understanding the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is threefold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present a software tool providing a complete pipeline for automated reasoning on the response of logical signaling networks.
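A Boolean logical signaling network of the kind this thesis reasons about can be sketched in a few lines. The node names and rules below are hypothetical, and the thesis encodes such networks declaratively in ASP rather than in Python; this sketch only illustrates the synchronous update semantics and the notion of a logical steady state.

```python
# Each node's next state is a Boolean function of the current state;
# input nodes (egf, inh) have no rule and keep their clamped values.
rules = {
    "raf": lambda s: s["egf"],                   # EGF activates Raf
    "mek": lambda s: s["raf"],                   # Raf activates Mek
    "erk": lambda s: s["mek"] and not s["inh"],  # an inhibitor blocks Erk
}

def step(state):
    """One synchronous update of all non-input nodes."""
    new = dict(state)
    for node, f in rules.items():
        new[node] = f(state)
    return new

def fixpoint(state, max_steps=10):
    """Iterate until the state stops changing (a logical steady state)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

# Stimulate EGF without the inhibitor: the signal propagates down to Erk.
print(fixpoint({"egf": True, "inh": False,
                "raf": False, "mek": False, "erk": False}))
```

An intervention strategy in this toy setting would be a clamping of nodes (e.g., setting "inh" to True) that forces a desired steady-state response, which is the kind of question ASP solvers can answer exhaustively over all candidate interventions.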
The process of introducing compulsory ICT education at primary school level in the Czech Republic should be completed next year. Programming and Information, two topics from the basics of computer science, have been included in a new textbook. The question is whether the new chapters of the textbook are comprehensible to primary school teachers who have undergone no training in computer science. The paper reports on a pilot verification project in which pre-service primary school teachers were trained to teach these informatics topics.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic Operating the Cloud. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI’s Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry related discussions. Operating the Cloud aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
In these proceedings, the results of the fifth HPI cloud symposium Operating the Cloud 2017 are published. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2018.
ReadBouncer
(2022)
Motivation:
Nanopore sequencers allow targeted sequencing of nucleotide sequences of interest by rejecting other sequences from individual pores. This feature facilitates the enrichment of low-abundance sequences by depleting overrepresented ones in silico. Existing tools for adaptive sampling either apply signal alignment, which cannot handle human-sized reference sequences, or apply read mapping in sequence space, relying on fast graphics processing unit (GPU) base callers for real-time read rejection. Using nanopore long-read mapping tools is also not optimal when mapping shorter reads, such as those usually analyzed in adaptive sampling applications.
Results:
Here, we present a new approach for nanopore adaptive sampling that combines fast CPU and GPU base calling with read classification based on Interleaved Bloom Filters. ReadBouncer improves the potential enrichment of low-abundance sequences through its high read classification sensitivity and specificity, outperforming existing tools in the field. It robustly removes even reads belonging to large reference sequences while running on commodity hardware without GPUs, making adaptive sampling accessible to in-field researchers. ReadBouncer also provides a user-friendly interface and installer files for end users without a bioinformatics background.
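The core classification idea, testing whether a read's k-mers occur in an indexed reference, can be illustrated with a toy Bloom filter. ReadBouncer's actual data structure is an Interleaved Bloom Filter over many reference bins; the filter size, hash construction, and sequences below are illustrative assumptions only.

```python
import hashlib

class BloomFilter:
    """A toy Bloom filter: false positives possible, false negatives never."""
    def __init__(self, size=1024, n_hashes=3):
        self.size, self.n_hashes = size, n_hashes
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Index a tiny, made-up reference that should be depleted, then classify
# an incoming read by the fraction of its k-mers found in the filter.
bf = BloomFilter()
for km in kmers("ACGTACGTGGCCTTAA"):
    bf.add(km)

read = "ACGTACGTGG"
hits = sum(km in bf for km in kmers(read))
reject = hits / len(kmers(read)) > 0.5  # deplete reads matching the reference
print(reject)  # True: the read matches the indexed reference
```

Because membership queries are constant-time bit lookups rather than alignments, this style of classification can keep up with real-time read rejection on commodity hardware, which is the property the abstract highlights.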
Teaching and learning as well as administrative processes are undergoing intensive changes with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in higher education. Accordingly, scientific interest in the topic in general, but also in specific focal points, has grown. However, there is no structured overview of AI in teaching and administration processes at higher education institutions that identifies major research topics and trends, concretizes peculiarities, and develops recommendations for further action. To close this gap, this study systematizes the current scientific discourse on AI in teaching and administration at higher education institutions. The study identified (1) an imbalance in research on AI in educational and administrative contexts, (2) an imbalance in disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, and (4) neglected research topics and paths. In this way, it contributes a comparative analysis of AI usage in administration and in teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.
The increasing demand for software engineers cannot be fully met by university education and conventional training approaches due to limited capacities. An alternative approach is therefore necessary, in which potential software engineers are educated in software engineering skills using new methods. We suggest micro tasks combined with theoretical lessons to overcome existing skill deficits and to build quickly trainable capabilities. This paper addresses the gap between the demand for and supply of software engineers by introducing an action-oriented and scenario-based didactical approach that enables non-computer scientists to code. The learning content is provided in small tasks and embedded in learning factory scenarios. To this end, different requirements for software engineers from the market and from an academic viewpoint are analyzed and synthesized into an integrated, yet condensed skills catalogue. This enables the development of training and education units that focus on the most important skills demanded by the market. To achieve this objective, individual learning scenarios are developed. Of course, proper basic coding skills cannot be learned overnight, but software programming is no sorcery either.