004 Data processing; Computer science
Document Type
- Article (331)
- Monograph/Edited Volume (165)
- Doctoral Thesis (157)
- Conference Proceeding (52)
- Postprint (50)
- Master's Thesis (10)
- Other (7)
- Preprint (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (586)
- German (192)
- Multiple languages (2)
Keywords
- Informatik (21)
- machine learning (17)
- Didaktik (15)
- Hochschuldidaktik (14)
- Ausbildung (13)
- answer set programming (13)
- Cloud Computing (11)
- cloud computing (11)
- Hasso-Plattner-Institut (10)
- Hasso Plattner Institute (9)
- E-Learning (8)
- Forschungskolleg (8)
- Forschungsprojekte (8)
- Future SOC Lab (8)
- In-Memory Technologie (8)
- Klausurtagung (8)
- Machine Learning (8)
- Maschinelles Lernen (8)
- Multicore Architekturen (8)
- Service-oriented Systems Engineering (8)
- maschinelles Lernen (8)
- Modellierung (7)
- openHPI (7)
- Datenintegration (6)
- Geschäftsprozessmanagement (6)
- Prozessmodellierung (6)
- Smalltalk (6)
- Visualisierung (6)
- business process management (6)
- cloud (6)
- multicore architectures (6)
- research projects (6)
- social media (6)
- Antwortmengenprogrammierung (5)
- Big Data (5)
- Computer Science Education (5)
- Datenschutz (5)
- Identitätsmanagement (5)
- MOOCs (5)
- Ph.D. retreat (5)
- Verifikation (5)
- cyber-physical systems (5)
- data integration (5)
- digital education (5)
- digitale Bildung (5)
- higher education (5)
- identity management (5)
- programming (5)
- quantitative analysis (5)
- security (5)
- service-oriented systems engineering (5)
- verification (5)
- virtual machines (5)
- visualization (5)
- Betriebssysteme (4)
- Design Thinking (4)
- Digitalisierung (4)
- In-Memory technology (4)
- Informatics Education (4)
- Informatikdidaktik (4)
- Infrastruktur (4)
- Privacy (4)
- Research School (4)
- Semantic Web (4)
- Sicherheit (4)
- Virtualisierung (4)
- Virtuelle Maschinen (4)
- Vorhersage (4)
- blockchain (4)
- digitalization (4)
- education (4)
- graph transformation (4)
- image processing (4)
- innovation (4)
- middleware (4)
- nested graph conditions (4)
- operating systems (4)
- privacy (4)
- probabilistic timed systems (4)
- process mining (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- self-sovereign identity (4)
- smart contracts (4)
- software engineering (4)
- 3D visualization (3)
- Algorithmen (3)
- Answer Set Programming (3)
- BPMN (3)
- Bildverarbeitung (3)
- Blockchains (3)
- CityGML (3)
- Cloud (3)
- Competence Measurement (3)
- Computer Networks (3)
- Computernetzwerke (3)
- DPLL (3)
- Data profiling (3)
- Datenanalyse (3)
- Datenbank (3)
- Datenbanken (3)
- Graphtransformationen (3)
- HCI (3)
- IPv4 (3)
- IPv6 (3)
- Identität (3)
- Informatics (3)
- Informatikstudium (3)
- Informationsextraktion (3)
- Infrastructure (3)
- Innovation (3)
- Internet Protocol (3)
- Internet of Things (3)
- JSP (3)
- Klassifikation (3)
- Kompetenzen (3)
- Künstliche Intelligenz (3)
- Laufzeitmodelle (3)
- Lively Kernel (3)
- MOOC (3)
- Mensch-Computer-Interaktion (3)
- Middleware (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Navigation (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Optimization (3)
- Ph.D. Retreat (3)
- Process Mining (3)
- Process Modeling (3)
- SAT (3)
- Secondary Education (3)
- Softwareentwicklung (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Werkzeuge (3)
- abstraction (3)
- algorithms (3)
- artificial intelligence (3)
- bibliometric analysis (3)
- business processes (3)
- citation analysis (3)
- clustering (3)
- collaboration (3)
- computational thinking (3)
- computer science (3)
- computer vision (3)
- conference (3)
- data preparation (3)
- data profiling (3)
- database systems (3)
- debugging (3)
- didactics (3)
- distributed systems (3)
- duplicate detection (3)
- evaluation (3)
- geospatial data (3)
- graph transformation systems (3)
- in-memory technology (3)
- informatics (3)
- künstliche Intelligenz (3)
- model (3)
- model-driven engineering (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- outlier detection (3)
- prediction (3)
- real-time (3)
- simulation (3)
- systems biology (3)
- tele-TASK (3)
- tools (3)
- trust (3)
- user experience (3)
- virtual reality (3)
- virtualization (3)
- virtuelle Maschinen (3)
- 3D point clouds (2)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- ACINQ (2)
- ASIC (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Analyse (2)
- Anomalieerkennung (2)
- Aspektorientierte Softwareentwicklung (2)
- Assessment (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Augmented reality (2)
- Australian securities exchange (2)
- Authentifizierung (2)
- Automatisches Beweisen (2)
- BCCC (2)
- BPM (2)
- BTC (2)
- Bayesian networks (2)
- Bibliometrics (2)
- BitShares (2)
- Bitcoin (2)
- Bitcoin Core (2)
- Blended Learning (2)
- Blockchain (2)
- Blockchain Auth (2)
- Blockchain-Konsortium R3 (2)
- Blockkette (2)
- Blockstack (2)
- Blockstack ID (2)
- Bluemix-Plattform (2)
- Blöcke (2)
- Bounded Model Checking (2)
- Business Process Management (2)
- Byzantine Agreement (2)
- CSC (2)
- CSCW (2)
- Classification (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Clusteranalyse (2)
- Code (2)
- Colored Coins (2)
- Competence Modelling (2)
- Computational thinking (2)
- Computer Science (2)
- Computergrafik (2)
- Computersicherheit (2)
- Computing (2)
- Constraint Solving (2)
- DAO (2)
- DPoS (2)
- Data Integration (2)
- Data Modeling (2)
- Data Privacy (2)
- Data Profiling (2)
- Databases (2)
- Datenaufbereitung (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deduction (2)
- Deep learning (2)
- Delegated Proof-of-Stake (2)
- Delphi study (2)
- Distributed Proof-of-Research (2)
- Diversity (2)
- Duplikaterkennung (2)
- E-Wallet (2)
- ECDSA (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- Economics (2)
- Electronic and spintronic devices (2)
- Eris (2)
- Ether (2)
- Ethereum (2)
- European Bioinformatics Institute (2)
- European Union (2)
- Europäische Union (2)
- Evolution (2)
- Exploration (2)
- FMC (2)
- FPGA (2)
- Federated Byzantine Agreement (2)
- Feedback (2)
- Fehlende Daten (2)
- Fehlertoleranz (2)
- FollowMyVote (2)
- Fork (2)
- Formale Verifikation (2)
- GPU (2)
- Game Dynamics (2)
- General Earth and Planetary Sciences (2)
- Geodaten (2)
- Geography, Planning and Development (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Gridcoin (2)
- HPI Schul-Cloud (2)
- Hard Fork (2)
- Hashed Timelock Contracts (2)
- Hauptspeicherdatenbank (2)
- Hochschullehre (2)
- ICA (2)
- ICT (2)
- ICT competencies (2)
- ISSEP (2)
- IT-Infrastruktur (2)
- IT-infrastructure (2)
- In-Memory (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Internet (2)
- Internet Service Provider (2)
- Internet der Dinge (2)
- IoT (2)
- Japanese Blockchain Consortium (2)
- Japanisches Blockchain-Konsortium (2)
- Java (2)
- Kette (2)
- Key Competencies (2)
- Klausellernen (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Kompetenz (2)
- Konferenz (2)
- Konsensalgorithmus (2)
- Konsensprotokoll (2)
- Learning Analytics (2)
- Lightning Network (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lock-Time-Parameter (2)
- Logic Programming (2)
- Logics (2)
- MERLOT (2)
- Machine learning (2)
- Measurement (2)
- Megamodell (2)
- Metaverse (2)
- Micropayment-Kanäle (2)
- Microsoft Azure (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Mustererkennung (2)
- N-of-1 trial (2)
- NASDAQ (2)
- NameID (2)
- Namecoin (2)
- Off-Chain-Transaktionen (2)
- Onename (2)
- Online Course (2)
- Online-Learning (2)
- Ontologie (2)
- OpenBazaar (2)
- Oracles (2)
- Orphan Block (2)
- P2P (2)
- Patterns (2)
- Peer-to-Peer Netz (2)
- Peercoin (2)
- Planning (2)
- PoB (2)
- PoS (2)
- PoW (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privatsphäre (2)
- Problem Solving (2)
- Process (2)
- Programmieren (2)
- Programmierung (2)
- Proof-of-Burn (2)
- Proof-of-Stake (2)
- Proof-of-Work (2)
- Prozess (2)
- Python (2)
- RDF (2)
- Relevanz (2)
- Ressourcenoptimierung (2)
- Ripple (2)
- Runtime analysis (2)
- SCED (2)
- SCP (2)
- SHA (2)
- SPV (2)
- SQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Satisfiability (2)
- Schule (2)
- Schwierigkeitsgrad (2)
- Second Life (2)
- Semiconductors (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- Simplified Payment Verification (2)
- Skalierbarkeit der Blockchain (2)
- Slock.it (2)
- Social (2)
- Soft Fork (2)
- Software Engineering (2)
- Softwarearchitektur (2)
- Steemit (2)
- Stellar Consensus Protocol (2)
- Storj (2)
- Studie (2)
- SysML (2)
- Systematics (2)
- Systemsoftware (2)
- Systemstruktur (2)
- Taxonomy (2)
- Teamarbeit (2)
- Temporallogik (2)
- Texturen (2)
- The Bitfury Group (2)
- The DAO (2)
- Theorembeweisen (2)
- Theoretische Informatik (2)
- Theory (2)
- Timed Automata (2)
- Transaktion (2)
- Twitter (2)
- Two-Way-Peg (2)
- UX (2)
- Unifikation (2)
- Unspent Transaction Output (2)
- User Experience (2)
- VM (2)
- Verhalten (2)
- Verlässlichkeit (2)
- Versionsverwaltung (2)
- Verträge (2)
- Virtual Reality (2)
- Visualization (2)
- Water Science and Technology (2)
- Watson IoT (2)
- YouTube (2)
- Zielvorgabe (2)
- Zookos Dreieck (2)
- Zooko's triangle (2)
- adaptive (2)
- adaptive Systeme (2)
- adaptive systems (2)
- altchain (2)
- alternative chain (2)
- anomaly detection (2)
- anxiety (2)
- app (2)
- architecture (2)
- atomic swap (2)
- attribute assurance (2)
- authentication (2)
- authorship attribution (2)
- batch processing (2)
- bidirectional payment channels (2)
- big data (2)
- big data services (2)
- bitcoins (2)
- blockchain consortium (2)
- blockchain-übergreifend (2)
- blocks (2)
- bluemix platform (2)
- bounded model checking (2)
- causal discovery (2)
- causal structure learning (2)
- chain (2)
- classification (2)
- cloud security (2)
- co-citation analysis (2)
- co-occurrence analysis (2)
- code (2)
- competence (2)
- competencies (2)
- complexity (2)
- comprehension (2)
- computer graphics (2)
- computer science education (2)
- computer security (2)
- confirmation period (2)
- consensus algorithm (2)
- consensus protocol (2)
- consistency (2)
- contest period (2)
- continuous integration (2)
- contracts (2)
- cross-chain (2)
- cyber-physische Systeme (2)
- cyberbullying (2)
- data (2)
- data analytics (2)
- data mining (2)
- data quality (2)
- data wrangling (2)
- decentralized autonomous organization (2)
- deep learning (2)
- deferred choice (2)
- dependability (2)
- depression (2)
- design thinking (2)
- dezentrale autonome Organisation (2)
- difficulty (2)
- difficulty target (2)
- digital enlightenment (2)
- digital health (2)
- digital identity (2)
- digital interventions (2)
- digital learning platform (2)
- digital sovereignty (2)
- digital transformation (2)
- digital whiteboard (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- direct manipulation (2)
- doppelter Hashwert (2)
- double hashing (2)
- dynamic reconfiguration (2)
- e-Learning (2)
- e-learning (2)
- empathy (2)
- engagement (2)
- exploratory programming (2)
- fault tolerance (2)
- federated voting (2)
- formal semantics (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gender (2)
- geovisualization (2)
- graph constraints (2)
- hashrate (2)
- human computer interaction (2)
- identity (2)
- identity theory (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- individual effects (2)
- informatics education (2)
- information extraction (2)
- intelligente Verträge (2)
- inter-chain (2)
- interactive technologies (2)
- intrusion detection (2)
- job shop scheduling (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- key competencies (2)
- knowledge management (2)
- kontinuierliche Integration (2)
- law (2)
- learning (2)
- lebenslanges Lernen (2)
- ledger assets (2)
- lifelong learning (2)
- live programming (2)
- liveness (2)
- logic programming (2)
- longitudinal (2)
- maschinelles Sehen (2)
- memory (2)
- merged mining (2)
- merkle root (2)
- method comparison (2)
- micropayment (2)
- micropayment channels (2)
- miner (2)
- mining (2)
- mining hardware (2)
- minting (2)
- missing data (2)
- mobile (2)
- mobile application (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- modeling (2)
- modelling (2)
- monitoring (2)
- navigation (2)
- networks (2)
- neural networks (2)
- news media (2)
- nonce (2)
- off-chain transaction (2)
- oracles (2)
- parallel processing (2)
- peer-to-peer network (2)
- pegged sidechains (2)
- perception of robots (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- production planning and control (2)
- propositional satisfiability (2)
- quorum slices (2)
- real-time systems (2)
- relevance (2)
- rootstock (2)
- runtime models (2)
- scalability of blockchain (2)
- scarce tokens (2)
- schema discovery (2)
- search (2)
- selection (2)
- self-driving (2)
- service-oriented systems (2)
- sidechain (2)
- single-case experimental design (2)
- smalltalk (2)
- social network analysis (2)
- societal effects (2)
- software development (2)
- solver (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- synchronization (2)
- systematic literature review (2)
- systems of systems (2)
- systems software (2)
- taxonomy (2)
- teacher training (2)
- teamwork (2)
- testing (2)
- text based classification methods (2)
- textures (2)
- theorem (2)
- tiefes Lernen (2)
- timed automata (2)
- topics (2)
- transaction (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- user interaction (2)
- user-generated content (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtual 3D city models (2)
- virtuelle 3D-Stadtmodelle (2)
- vocational training (2)
- wearables (2)
- web application (2)
- workflow patterns (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- (FPGA) (1)
- 0-day (1)
- 21st century skills (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D computer graphics (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3DCityDB (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ABRACADABRA (1)
- ADFS (1)
- AMNET (1)
- APT (1)
- APX-hardness (1)
- ARCS Modell (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Achievement (1)
- Ackerschmalwand (1)
- Active Directory Federation Services (1)
- Active Evaluation (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Adaption (1)
- Adaptive hypermedia (1)
- Advanced Persistent Threats (1)
- Advanced Video Codec (AVC) (1)
- Adversarial Learning (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Algebraic methods (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Andere Fachrichtungen (1)
- Android Security (1)
- Anerkennung (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Anfragesprache (1)
- Angewandte Spieltheorie (1)
- Angriffe (1)
- Angriffserkennung (1)
- Animal building (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Anrechnung (1)
- Antwortmengen Programmierung (1)
- Anwendungsvirtualisierung (1)
- Application (1)
- Application Server (1)
- Applied Game Theory (1)
- Apriori (1)
- Arabidopsis thaliana (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Arduino (1)
- Argument Mining (1)
- Artem Erkomaishvili (1)
- Artificial Intelligence (1)
- Artificial neural networks (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributsicherung (1)
- Audience Response Systeme (1)
- Aufzählung (1)
- Augmented Reality (1)
- Augmented and virtual reality (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Austria (1)
- Auswirkungen (1)
- Authentication (1)
- Authorization (1)
- Autismus (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- Autorisierung (1)
- BCH (1)
- BCI (1)
- BSS (1)
- Bachelor (1)
- Bachelorstudierende der Informatik (1)
- Bachelorstudium (1)
- Bahnwesen (1)
- Bank (1)
- Barrierefreiheit (1)
- Basic Service (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bean (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Benutzerinteraktion (1)
- Benutzeroberfläche (1)
- Berufsausbildung (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Betrachtungsebenen (1)
- Beweisaufgaben (1)
- Beweistheorie (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioacoustics (1)
- Bioakustik (1)
- Biocomputing (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Biometrie (1)
- Bisimulation (1)
- Blended learning (1)
- Blockheizkraftwerke (1)
- Bloom’s Taxonomy (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Brownian motion with discontinuous drift (1)
- Business Process Models (1)
- Business process modeling (1)
- Bystander (1)
- C-Test (1)
- CCS Concepts (1)
- CEP (1)
- COVID-19 (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Cactus (1)
- Calibration (1)
- Canvas (1)
- Capability approach (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Case management (1)
- Challenges (1)
- Change Management (1)
- Choreographien (1)
- Citymodel (1)
- Clause Learning (1)
- Clinical predictive modeling (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clustering (1)
- Coccinelle (1)
- Codeverständnis (1)
- Codierung (1)
- Cognitive Skills (1)
- Cographs (1)
- Coherent partition (1)
- Common Spatial Pattern (1)
- Commonsense reasoning (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Complementary Circuits (1)
- Complexity (1)
- Compliance (1)
- Compliance checking (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational Thinking (1)
- Computational photography (1)
- Computer Science in Context (1)
- Computer crime (1)
- Computergestütztes Training (1)
- Conceptual (1)
- Conceptual modeling (1)
- Condition number (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Constraint (1)
- Constraint-Programmierung (1)
- Constraints (1)
- Constructive solid geometry (1)
- Contest (1)
- Context-oriented Programming (1)
- Contextualisation (1)
- Contracts (1)
- Contradictions (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Convolution (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covariate Shift (1)
- Covid (1)
- Creative (1)
- Crime mapping (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Customer ownership (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DBMS (1)
- DDoS (1)
- DNA (1)
- DNA computing (1)
- DNS (1)
- Data Analysis (1)
- Data Dependency (1)
- Data Literacy (1)
- Data Management (1)
- Data Quality (1)
- Data Science (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data integration (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-centric (1)
- Database (1)
- Database Cost Model (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datensicht (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decision support (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Degenerationsprozesse (1)
- Dekubitus (1)
- Delphine (1)
- Delta preservation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Dependency discovery (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Didaktische Konzepte (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Engineering (1)
- Digital Game Based Learning (1)
- Digital Revolution (1)
- Digital image analysis (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Discrimination Networks (1)
- Diskussionskultur (1)
- Dispositional learning analytics (1)
- Distanzlehre (1)
- Distributed Computing (1)
- Distributed computing (1)
- Distributed programming (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diversität (1)
- Dolphins (1)
- Domänenspezifische Modellierung (1)
- Dreidimensionale Computergraphik (1)
- Dubletten (1)
- Duplicate Detection (1)
- Durchlässigkeit (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamic assessment (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- EHR (1)
- EPA (1)
- Early Literacy (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Educational Standards (1)
- Educational software (1)
- Effizienz (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Eisenbahnnetz (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Embedded Systems (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Empirische Untersuchung (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Energiesparen (1)
- Enterprise Search (1)
- Entity resolution (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurf (1)
- Entwurfsmuster (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Enumeration algorithm (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfolgsmessung (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Error-Detection Circuits (1)
- Erweiterte Realität (1)
- Escherichia coli (1)
- Estimation-of-distribution algorithm (1)
- Ethics (1)
- Euclid’s algorithm (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Execution Semantics (1)
- Explorative Datenanalyse (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FIDO (1)
- FMC-QE (1)
- FOSS (1)
- FRP (1)
- Facebook (1)
- Fachinformatik (1)
- Fachinformatiker (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlerbeseitigung (1)
- Fehlererkennung (1)
- Fehlerinjektion (1)
- Fehlerkorrektur (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlvorstellung (1)
- Fernerkundung (1)
- Fertigkeiten (1)
- Fertigung (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fintech (1)
- Fitness-distance correlation (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formal modelling (1)
- Formale Sprachen und Automaten (1)
- Formative assessment (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Forschung (1)
- Fredholm complexes (1)
- Function (1)
- Functional Lenses (1)
- Functional dependencies (1)
- Fundamental Ideas (1)
- Fundamental Modeling Concepts (1)
- Fußgängernavigation (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Game-based learning (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gender (1)
- Gene expression (1)
- General subject “Information” (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- German schools (1)
- Geschäftsanwendungen (1)
- Geschäftsmodell (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatikalische Inferenz (1)
- Grammatiksysteme (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graph-basiertes Ranking (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphensuche (1)
- Graphfärbung (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Gruppierung von Prozessinstanzen (1)
- H.264 (1)
- HDI (1)
- HEI (1)
- HENSHIN (1)
- HPI Forschung (1)
- HPI research (1)
- Hardware accelerator (1)
- Hardware-Software-Co-Design (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Technologie (1)
- Helmholtz problem (1)
- Herodotos (1)
- Heterogenität (1)
- Heuristiken (1)
- HiGHmed (1)
- High-Level Synthesis (1)
- Histograms (1)
- History of pattern occurrences (1)
- Hochschule (1)
- Hochschulkurse (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Human (1)
- Human-robot interaction (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- IBM 360 (1)
- ICT Competence (1)
- ICT curriculum (1)
- ICT skills (1)
- IDS (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identity Management (1)
- Identity management systems (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Impact (1)
- Imperative calculi (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- Improving classroom (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Industries (1)
- Industry 4.0 (1)
- Inference (1)
- Infinite State (1)
- Informatik B. Sc. (1)
- Informatik für alle (1)
- Informatik-Studiengänge (1)
- Informatiksystem (1)
- Informatikunterricht (1)
- Informatikvoraussetzungen (1)
- Information Ethics (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationskompetenz (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Informatische Kompetenzen (1)
- Inhalte (1)
- Inhaltsanalyse (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Inquiry-based Learning (1)
- Instagram (1)
- Insurance industry (1)
- Integration (1)
- Interactive Rendering (1)
- Interactive system (1)
- Interaktionsmodel (1)
- Interaktionsmodellierung (1)
- Interaktionstechniken (1)
- Interaktives Rendering (1)
- Interaktives System (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet applications (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Interpretability (1)
- Interpreter (1)
- Intersectionality (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Intuition (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Inverted Classroom (1)
- JCop (1)
- Java 2 Enterprise Edition (1)
- Java Security Framework (1)
- Java Virtual Machine (1)
- KI (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernel (1)
- Kernmethoden (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kollaboration (1)
- Kommunikation (1)
- Kompetenzentwicklung (1)
- Kompetenzerwerb (1)
- Kompetenzmessung (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontext (1)
- Konzeptionell (1)
- Kreativität (1)
- Kryptographie (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LMS (1)
- LOD (1)
- LSTM (1)
- Landmarken (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learners (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lebendigkeit (1)
- Lebenslanges Lernen (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehr- und Lernformate (1)
- Lehramtsstudium (1)
- Lehre (1)
- Lehrer (1)
- Lehrer*innenbildung (1)
- Lehrevaluation (1)
- Leistungsfähigkeit (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- Lern-App (1)
- Lernerfolg (1)
- Lernmotivation (1)
- Lernsoftware (1)
- Lernzentrum (1)
- LiDAR (1)
- Licenses (1)
- Linguistisch (1)
- Lindenmayer systems (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Liveness (1)
- Logarithm (1)
- Logikkalkül (1)
- Logiksynthese (1)
- Loss (1)
- Low Latency (1)
- Lower Bounds (1)
- Lower Secondary Level (1)
- Lösungsraum (1)
- Lückentext (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Marketing (1)
- Marktübersicht (1)
- Maschinen (1)
- Massive Open Online Courses (1)
- Matrizen-Eigenwertaufgabe (1)
- Matroids (1)
- Media in education (1)
- Megamodel (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamodell (1)
- Migration (1)
- Mindset (1)
- Minimal hitting set (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile learning (1)
- Mobile-Mapping (1)
- Mobiles Lernen (1)
- Mobilgeräte (1)
- Model Based Engineering (1)
- Model Checking (1)
- Model Consistency (1)
- Model Driven Architecture (1)
- Model Execution (1)
- Model Management (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell Management (1)
- Model-driven Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellbasiert (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Architektur (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Monitoring (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multimodal behavior (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Music Technology (1)
- Muster (1)
- Musterabgleich (1)
- Mutation operators (1)
- NUI (1)
- Nash Equilibrium (1)
- Natural Science Education (1)
- Natural ventilation (1)
- Nebenläufigkeit (1)
- Nephrology (1)
- Nested Graph Conditions (1)
- Nested graph conditions (1)
- Network Creation Game (1)
- Network clustering (1)
- Netzneutralität (1)
- Netzwerk (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- New On-Line Error-Detection Method (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- NoSQL (1)
- Non-photorealistic Rendering (1)
- Norway (1)
- Novice programmers (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- O (1)
- OAuth (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Omega (1)
- Online Learning Environments (1)
- Onlinekurse (1)
- Onlinelehre (1)
- Ontologies (1)
- Ontology (1)
- Open Source (1)
- OpenID Connect (1)
- OpenOLAT (1)
- Opinion mining (1)
- Optimierungen (1)
- Optimierungsproblem (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Organisationsveränderung (1)
- Owner-Retained Access Control (ORAC) (1)
- PAVM (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Parallele Datenverarbeitung (1)
- Paralleles Rechnen (1)
- Parallelization (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Pattern Recognition (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Peer-Review (1)
- Peer-to-Peer-Netz; GRID computing; Zuverlässigkeit; Web Services; Betriebsmittelverwaltung; Migration (1)
- Performance (1)
- Performance Prediction (1)
- Personal Data (1)
- Personas (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Physical Science (1)
- Plant identification (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- Point-based rendering (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polymerase Chain Reaction Experiment (1)
- Popular matching (1)
- Posenabschätzung (1)
- PostGIS (1)
- Pre-RS Traceability (1)
- Prediction Game (1)
- Predictive Models (1)
- Preprocessing (1)
- Primary informatics (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy Protection (1)
- Probabilistische Modelle (1)
- Problem solving (1)
- Problem solving strategies (1)
- Probleme in der Studie (1)
- Problemlösen (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Execution (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Prognosen (1)
- Programmierabstraktionen (1)
- Programmierausbildung (1)
- Programmiererlebnis (1)
- Programmierkonzepte (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Programming environments for children (1)
- Programming learning (1)
- Projekte (1)
- Prolog (1)
- Proof Theory (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Protocols (1)
- Prototyping (1)
- Prozess Verbesserung (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodell (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Python (1)
- Quanten-Computing (1)
- Quantenkryptographie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- RT_PREEMPT patch (1)
- RT_PREEMPT-Patch (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Re-Engineering (1)
- Real-Time Rendering (1)
- Realzeitsysteme (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Reconfigurable (1)
- Region of Interest (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Relational data (1)
- Rendering (1)
- Reparatur (1)
- Reproducible benchmarking (1)
- Research Projects (1)
- Resource Allocation (1)
- Resource Management (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Reversibility (1)
- Robot personality (1)
- Ruby (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime-monitoring (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security (1)
- SOA Security Pattern (1)
- SOA Sicherheit (1)
- SPARQL (1)
- SSO (1)
- STEM (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Savanne (1)
- Scalability (1)
- Scale-invariant feature transform (SIFT) (1)
- Scene graph systems (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schlüsselkompetenzen (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Scientific understanding of Information (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektion (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Checking Circuits (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantik Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Seminarkonzept (1)
- Sensors (1)
- Sequential anomaly (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service Provider (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (2)
- Shader (1)
- Sharing (1)
- Sichere Digitale Identitäten (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signal processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simulation (1)
- Simulations (1)
- Simultane Diagonalisierung (1)
- Single Sign On (1)
- Single Trial Analysis (1)
- Single event upsets (1)
- Single-Sign-On (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skelettberechnung (1)
- Skript-Entwicklungsumgebungen (1)
- Skriptsprachen (1)
- Small Private Online Courses (1)
- Smart cities (1)
- SoaML (1)
- Social impact (1)
- Sociotechnical Design (1)
- Software (1)
- Software architecture (1)
- Software-Evolution (1)
- Software-Testen (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Sozialen Medien (1)
- Spaltenlayout (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Specification (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- Stadtmodell (1)
- Stance Detection (1)
- Standardisierung (1)
- Standards (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistikprogramm R (1)
- Statistische Tests (1)
- Stilisierung (1)
- Strategie (1)
- Structuring (1)
- Strukturierung (1)
- Strukturverbesserung (1)
- Student Engagement (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studienabbruch (1)
- Studienanfänger*innen (1)
- Studiendauer (1)
- Studieneingangsphase (1)
- Studiengestaltung (1)
- Studiengänge (1)
- Studienverläufe (1)
- Studierendenperformance (1)
- Studium (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suche (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System of Systems (1)
- System structure (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systementwurf (1)
- Systems of Systems (1)
- Systems of parallel communicating (1)
- Szenengraph (1)
- TPTP (1)
- Tableaumethode (1)
- Tasks (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Terminology (1)
- Test (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierungs (1)
- Tests (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Sharing Economy (1)
- Theoretischen Vorlesungen (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajectories (1)
- Trajektorien (1)
- Transaktionen (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transversal hypergraph (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Type and effect systems (1)
- Umfrage (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Untere Schranken (1)
- Unterricht mit digitalen Medien (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- VGG16 (1)
- VIL (1)
- VM Integration (1)
- VR (1)
- VUCA-World (1)
- Validation (1)
- Value network (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machines (1)
- Virtual machines (1)
- Virtuelle Maschine (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Vocabulary (1)
- Vocational Education (1)
- Vorhersagemodelle (1)
- Vorkenntnisse (1)
- Vorwissen (1)
- W[3]-Completeness (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Wartung von Lehrveranstaltungen (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Weiterbildung (1)
- Well-structuredness (1)
- Werbung (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- WhatsApp (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissenschaftliches Arbeiten (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Women and IT (1)
- Workflow (1)
- Wüstenbildung (1)
- X-ray imaging (1)
- XM (1)
- Young People (1)
- ZQSA (1)
- ZQSAT (1)
- Zebris (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- access control (1)
- action and change (1)
- action language (1)
- activity instance state propagation (1)
- acyclic preferences (1)
- ad hoc learning (1)
- ad hoc messaging network (1)
- adaptiv (1)
- addiction care (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- airbnb (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- analog-to-digital conversion (1)
- analogical thinking (1)
- analysis (1)
- animated PCA (1)
- animierte PCA (1)
- anisotropic Kuwahara filter (1)
- annotation (1)
- anomalies (1)
- answer (1)
- answer set (1)
- application virtualization (1)
- approximate joint diagonalization (1)
- approximation (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture recovery (1)
- archive analysis (1)
- argument mining (1)
- argumentation research (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- asset management (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- attacks (1)
- augmented reality (1)
- ausführbare Semantiken (1)
- automata (1)
- automated planning (1)
- automated theorem proving (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- automotive electronics (1)
- autonomous (1)
- back-in-time (1)
- balance analysis (1)
- bank (1)
- basic cloud storage services (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalenc (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- betriebliche Weiterbildungspraxis (1)
- bidirectional optimality theory (1)
- bild (1)
- bildbasiertes Rendering (1)
- binary representation (1)
- binary search (1)
- bio-computing (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- biomarker detection (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- blind source separation (1)
- bottom–up (1)
- bounded backward model checking (1)
- bpm (1)
- brand personality (1)
- bug tracking (1)
- building models (1)
- built-in predicates (1)
- business informatics (1)
- business models (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- business process modeling (1)
- bystander (1)
- cancer therapy (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- · Computing (1)
- change detection (1)
- change management (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- classes of logic programs (1)
- classifier calibration (1)
- classroom language (1)
- clause elimination (1)
- clause learning (1)
- cleansing (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster-analysis (1)
- code generation (1)
- coding and information theory (1)
- cogeneration units (1)
- cognition (1)
- cognitive load (1)
- cognitive load theory (1)
- cognitive modifiability (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- combined task and motion planning (1)
- communication (1)
- community (1)
- competence development (1)
- competency (1)
- complex optimization (1)
- complexity dichotomy (1)
- composite service (1)
- compositional analysis (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computed tomography (1)
- computer science education (CSE) (1)
- computer science teachers (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- computing science education (1)
- concept of algorithm (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditions (1)
- confidentiality (1)
- conflicts and dependencies in (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constraint (1)
- constraint programming (1)
- constraints (1)
- constructionism (1)
- consumer behavior (1)
- context awareness (1)
- continuous testing (1)
- contract (1)
- control resynthesis (1)
- controlled experiment (1)
- conversational agents (1)
- convolutional neural networks (1)
- corporate nomadism (1)
- corporate takeovers (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- creativity (1)
- crochet (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cryptography (1)
- cryptology (1)
- cs4fn (1)
- cscw (1)
- cultural heritage (1)
- cumulative culture (1)
- curriculum theory (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- cyberwar (1)
- data assimilation (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data in business processes (1)
- data migration (1)
- data modeling (1)
- data models (1)
- data objects (1)
- data pipeline (1)
- data requirements (1)
- data science (1)
- data security (1)
- data set (1)
- data states (1)
- data structures and information theory (1)
- data synthesis (1)
- data transformation (1)
- data view (1)
- data visualization (1)
- data-driven (1)
- data-driven artifacts (1)
- database (1)
- database optimization (1)
- database technology (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deductive databases (1)
- deduplication (1)
- deep Gaussian processes (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dental caries classification (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depressive symptoms (1)
- desertification (1)
- design (1)
- design research (1)
- design space exploration (1)
- design-science research (1)
- determinism (1)
- deterministic properties (1)
- deurema modeling language (1)
- development tools (1)
- developmental systems (1)
- dezentrale Identitäten (1)
- diagnosis (1)
- didaktisches Konzept (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- diffusion (1)
- digital nomadism (1)
- digital picture archive (1)
- digital platform openness (1)
- digital strategy (1)
- digital unterstützter Unterricht (1)
- digital workplace transformation (1)
- digitale Hochschullehre (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed ledger technology (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- divide and conquer (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- drift theory (1)
- dropout (1)
- duale IT-Ausbildung (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Assessment (1)
- e-learning platform (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- educational timetabling (1)
- edutainment (1)
- efficiency (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- elections (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- embedded-systems (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- emotional design (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- energy savings (1)
- engaged computing (1)
- engine (1)
- engineering (1)
- enterprise search (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- enumeration (1)
- environments (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- error correction (1)
- error detection (1)
- erzeugende gegnerische Netzwerke (1)
- ethics (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolutionary computation (1)
- evolving systems (1)
- exact simulation methods (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- explicit knowledge (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- exponentiation (1)
- expression (1)
- extend (1)
- extensions of logic programs (1)
- external knowledge bases (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- failure model (1)
- fatty acid amide hydrolase (1)
- fault injection (1)
- federated industrial platform ecosystems (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- field-programmable gate array (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- forecasts (1)
- forensics (1)
- formal framework (1)
- formal languages (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- forschendes Lernen (1)
- forschungsorientiertes Lernen (1)
- fortschrittliche Angriffe (1)
- forward / backward chaining (1)
- freie Daten (1)
- freie Software (1)
- fun (1)
- function symbols (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- functions (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- fächerverbindend (1)
- gait analysis algorithm (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression matrix (1)
- gene selection (1)
- general (1)
- general education in computer science (1)
- general secondary education (1)
- generalization (1)
- generalized discrimination networks (1)
- generalized logic programs (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- global constraints (1)
- global model management (1)
- globale Constraints (1)
- globales Modellmanagement (1)
- grammar inference (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph languages (1)
- graph mining (1)
- graph pattern matching (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-based ranking (1)
- graph-search (1)
- graph-transformations (1)
- hardware accelerator (1)
- hardware architecture (1)
- hardware-software-codesign (1)
- hate speech detection (1)
- health care (1)
- healthcare (1)
- heterogeneity (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- high school (1)
- higher (1)
- history-aware runtime models (1)
- holistic (1)
- home office (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human-centered (1)
- human–computer interaction (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity broker (1)
- image (1)
- image captioning (1)
- image data analysis (1)
- image stylization (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immersion (1)
- immutable values (1)
- in-memory (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individuals (1)
- individuelle Lernwege (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inertial measurement unit (1)
- inference (1)
- informal and formal learning (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information diffusion (1)
- informatische Allgemeinbildung (1)
- informatische Grundkompetenzen (1)
- infrastructure (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- instruction (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- interaction (1)
- interaction modeling (1)
- interaction techniques (1)
- interactive course (1)
- interactive media (1)
- interactive simulation (1)
- interactive workshop (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international comparison (1)
- international human rights (1)
- international humanitarian law (1)
- international study (1)
- interpretable machine learning (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intransitivity (1)
- intuition (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invasive aspects (1)
- invention (1)
- invention mechanism (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iteration method (1)
- iterative regularization (1)
- job-shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key discovery (1)
- kinaesthetic teaching (1)
- klinisch-praktischer Unterricht (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge engineering (1)
- knowledge management system (1)
- knowledge representation (1)
- knowledge representation and nonmonotonic reasoning (1)
- knowledge transfer (1)
- knowledge work (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- labour union education (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law and technology (1)
- leadership (1)
- leanCoP (1)
- learner characteristics (1)
- learning factory (1)
- lebenszentriert (1)
- left recursion (1)
- lesson (1)
- level-replacement systems (1)
- life-centered (1)
- linear code (1)
- linear programming problem (1)
- linearer Code (1)
- linguistic (1)
- link discovery (1)
- linked data (1)
- literature review (1)
- live migration (1)
- lively kernel (1)
- load balancing (1)
- localization (1)
- location-based (1)
- logic (1)
- logic synthesis (1)
- logical calculus (1)
- logical signaling networks (1)
- logische Ergänzung (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- long-term interaction (1)
- loop formulas (1)
- machine (1)
- machine learning algorithms (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- mandatory computer science foundations (1)
- manipulation planning (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- market study (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated conversation (1)
- mediated learning experience (1)
- medical (1)
- medical documentation (1)
- medical malpractice (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- meta model (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- migration (1)
- misconception (1)
- misconceptions (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- mobiles Lernen (1)
- model generation (1)
- model repair (1)
- model-based (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven architecture (1)
- model-driven software engineering (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modular counting (1)
- modularity (1)
- molecular network (1)
- molecular networks (1)
- molecular tumor board (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- mood (1)
- morphic (1)
- morphological analysis (1)
- multi core data processing (1)
- multi factor authentication (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-valued logic (1)
- multi-version models (1)
- multi-family residential buildings (1)
- multidisziplinäre Teams (1)
- multimedia learning (1)
- multimodal representations (1)
- multiuser (1)
- musical scales (1)
- musikalische Tonleitern (1)
- multi-task learning (1)
- mutual gaze (1)
- mutual information (1)
- named entity mining (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network (1)
- network protocols (1)
- networks-on-chip (1)
- neue Online-Fehlererkennungsmethode (1)
- neural (1)
- new technologies (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- nichtlineare Projektionen (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- nonlinear projections (1)
- notation (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- omega (1)
- on-chip (1)
- online assistance (1)
- online course (1)
- online course creation (1)
- online course design (1)
- online learning (1)
- online photographs (1)
- online-learning (1)
- open innovation (1)
- open learning (1)
- open source (1)
- open source software (1)
- operating system (1)
- optical character recognition (1)
- optimal transport (1)
- optimizations (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- paraconsistency (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel rewriting (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parameter (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- patent (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedagogy (1)
- pedestrian navigation (1)
- perception (1)
- perception differences (1)
- performance (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal (1)
- personal response systems (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- persönliche Informationen (1)
- pervasive learning (1)
- petri net (1)
- philosophical foundation of informatics pedagogy (1)
- phone (1)
- physical computing tools (1)
- placement (1)
- planning (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- pose estimation (1)
- poset (1)
- power-law (1)
- pre-primary level (1)
- predictive models (1)
- preference handling (1)
- preferences (1)
- prefetching (1)
- preprocessing (1)
- presentation (1)
- primary education (1)
- primary level (1)
- primary school (1)
- prime pair (1)
- primer pair design (1)
- prior knowledge (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- problem-solving (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process improvement (1)
- process instance (1)
- process instance grouping (1)
- process model (1)
- process model search (1)
- process modeling languages (1)
- process modelling (1)
- process models (1)
- process refinement (1)
- process scheduling (1)
- processes (1)
- processing (1)
- processor hardware (1)
- professional development (1)
- professors (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming abstraction (1)
- programming experience (1)
- programming in context (1)
- programming language (1)
- programming skills (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- proving (1)
- psychotherapy (1)
- public cloud storage services (1)
- public dataset (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantified logics (1)
- quantile normalization (1)
- quantum computing (1)
- quantum cryptography (1)
- query matching (1)
- query optimization (1)
- querying (1)
- railway network (1)
- railways (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- record linkage (1)
- recursive tuning (1)
- reflection (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- reliability assessment (1)
- remodularization (1)
- remote collaboration (1)
- remote sensing (1)
- remote-first (1)
- repair (1)
- representation learning (1)
- requirements engineering (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- restoration (1)
- restricted parallelism (1)
- reusable aspects (1)
- reverse engineering (1)
- reversible reaction (1)
- review (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- robustness (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiabilitiy solving (1)
- savanna (1)
- scheduling (1)
- school (1)
- schwach überwachtes maschinelles Lernen (1)
- science (1)
- scm (1)
- scripting environments (1)
- scripting languages (1)
- scrollytelling (1)
- search plan generation (1)
- secondary computer science education (1)
- secondary education (1)
- security analytics (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- selbstprüfende Schaltungen (1)
- self-adaptive multiprocessing system (1)
- self-adaptive software (1)
- self-disclosure (1)
- self-efficacy (1)
- self-healing (1)
- self-supervised learning (1)
- semantic analysis (1)
- semantic classification (1)
- semantic web services (1)
- semantics (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- semantisches Netz (1)
- sentiment (1)
- sentiment analysis (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sharing economy (1)
- sign language (1)
- signal processing (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- similarity learning (1)
- similarity measures (1)
- single event upset (1)
- situated learning (1)
- situational awareness (1)
- skeletonization (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- small talk (1)
- smartphone (1)
- smoother (1)
- social attraction (1)
- social media analysis (1)
- social networking (1)
- social networking sites (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software selection (1)
- software testing (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- solar particle event (1)
- sorting (1)
- space missions (1)
- spatio-temporal (1)
- spatio-temporal sensor data (1)
- specific prime pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spread correction (1)
- spreadsheets (1)
- squeak (1)
- stable matching (1)
- stable model semantics (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- statistics program R (1)
- stochastic process (1)
- stratification (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- studentische Forschung (1)
- students’ conceptions (1)
- students’ knowledge (1)
- study (1)
- study problems (1)
- stylization (1)
- super stable matching (1)
- survey mode (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tacit knowledge (1)
- tangible media (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teachers (1)
- teaching (1)
- teaching informatics in general education (1)
- teaching material (1)
- technical notes and rapid communications (1)
- technische Rahmenbedingungen (1)
- technologies (1)
- technology (1)
- tele-lab (1)
- tele-teaching (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terminology (1)
- terrain models (1)
- test (1)
- test case prioritization (1)
- test items (1)
- test results (1)
- test-driven fault navigation (1)
- text classification (1)
- text mining (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- tool building (1)
- top-down (1)
- tort law (1)
- touch input (1)
- tptp (1)
- tracing (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- training (1)
- trajectories (1)
- transduction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust model (1)
- tuple spaces (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- uncanny valley (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unification (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- unsupervised methods (1)
- usability (1)
- user interfaces (1)
- user-centred (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- various applications (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (2)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual (1)
- virtual 3D city model (1)
- virtual collaboration (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtual learning environments (1)
- virtual machine (1)
- virtual mobility (1)
- virtual teams (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle Realität (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vulnerabilities (1)
- weak supervision (1)
- weakly (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- weight (1)
- well-being (1)
- wissenschaftliches Arbeiten (1)
- wissenschaftliches Schreiben (1)
- word order freezing (1)
- word sense disambiguation (1)
- workload prediction (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (2)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
- ‘unplugged’ computing (1)
Institute
- Institut für Informatik und Computational Science (270)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (215)
- Hasso-Plattner-Institut für Digital Engineering GmbH (127)
- Extern (65)
- Fachgruppe Betriebswirtschaftslehre (27)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (19)
- Institut für Mathematik (16)
- Bürgerliches Recht (12)
- Digital Engineering Fakultät (8)
Various properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving such properties is considerably simpler in CHR than in imperative programming languages, which motivated a methodology for mapping imperative programs to equivalent CHR programs. The equivalence of the two programs implies that any property satisfied by one is satisfied by the other. The mapping methodology can be put to further beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of a rule-based implementation of constraint solvers.
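As a flavour of why rule-based programs are comparatively easy to reason about, here is a minimal Python sketch (not CHR itself, and not the paper's mapping methodology) of CHR-style multiset rewriting, using the classical two-rule gcd solver shown in the comments:

```python
# CHR's classical gcd solver consists of two rules over a store of
# gcd/1 constraints:
#   gcd(0)          <=> true.                 (drop a zero)
#   gcd(N) \ gcd(M) <=> M >= N | gcd(M - N).  (replace M by M - N)
# Below, the constraint store is a plain Python list of integers.

def chr_gcd(store):
    """Exhaustively apply the two rules to a multiset of integers."""
    store = list(store)
    changed = True
    while changed:
        changed = False
        # Rule 1: gcd(0) <=> true  (keep at least one value as the result)
        if 0 in store and len(store) > 1:
            store.remove(0)
            changed = True
            continue
        # Rule 2: gcd(N) \ gcd(M) <=> M >= N | gcd(M - N)
        for i, n in enumerate(store):
            for j, m in enumerate(store):
                if i != j and n > 0 and m >= n:
                    store[j] = m - n
                    changed = True
                    break
            if changed:
                break
    return store

print(chr_gcd([36, 24, 60]))  # [12], the gcd of all inputs
```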
Linked Open Data (LOD) comprises numerous, often large, public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied to sales analysis on transactional databases, is a promising and novel technique for exploring such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage by providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another way to improve the usage of RDF data is to improve existing ontologies. Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
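As an illustration of the auto-completion idea, here is a minimal sketch of one plausible mining configuration: treating each subject's predicate set as a transaction and ranking unused predicates by rule confidence. The triples below are made-up examples, not the dissertation's implementation:

```python
# Mine predicate -> predicate association rules from subject-grouped
# triples and suggest predicates to an editing user (auto-completion).
from collections import defaultdict

triples = [
    ("alice", "rdf:type", "Person"), ("alice", "foaf:name", "Alice"),
    ("alice", "foaf:mbox", "a@x.org"),
    ("bob", "rdf:type", "Person"), ("bob", "foaf:name", "Bob"),
]

# One transaction per subject: the set of predicates it uses.
transactions = defaultdict(set)
for s, p, o in triples:
    transactions[s].add(p)

def confidence(p, q):
    """conf(p -> q) = support({p, q}) / support({p})."""
    with_p = [t for t in transactions.values() if p in t]
    if not with_p:
        return 0.0
    return sum(q in t for t in with_p) / len(with_p)

def suggest(used_predicates, top_k=3):
    """Rank unused predicates by their best confidence from a used one."""
    all_preds = {p for t in transactions.values() for p in t}
    candidates = all_preds - set(used_predicates)
    scored = [(max(confidence(p, q) for p in used_predicates), q)
              for q in candidates]
    return [q for _, q in sorted(scored, reverse=True)[:top_k]]

# An editor who has typed rdf:type and foaf:name gets foaf:mbox suggested.
print(suggest({"rdf:type", "foaf:name"}))  # ['foaf:mbox']
```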
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem with many applications in data management and knowledge discovery. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and Apriori-based algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-Gordian, combines the advantages of GORDIAN and our new algorithm HCA, and significantly outperforms all previous work in many situations.
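The lattice-traversal idea behind Apriori-style UCC discovery can be sketched as follows; this in-memory toy omits GORDIAN's row-based technique and the paper's statistics-based pruning optimizations:

```python
# Level-wise discovery of minimal unique column combinations (UCCs):
# test singletons first, then pairs, etc., and prune every superset of
# an already found UCC (supersets of UCCs are never minimal).

def is_unique(rows, cols):
    """A column combination is unique iff its value tuples never repeat."""
    projection = [tuple(row[c] for c in cols) for row in rows]
    return len(set(projection)) == len(projection)

def minimal_uccs(rows, num_cols):
    found = []
    level = [(c,) for c in range(num_cols)]
    while level:
        next_level = set()
        for combo in level:
            if any(set(u) <= set(combo) for u in found):
                continue  # superset of a known UCC: cannot be minimal
            if is_unique(rows, combo):
                found.append(combo)
            else:
                # only non-unique combos are extended, Apriori-style
                for c in range(combo[-1] + 1, num_cols):
                    next_level.add(combo + (c,))
        level = sorted(next_level)
    return found

rows = [("a", 1, "x"), ("a", 2, "x"), ("b", 1, "y")]
print(minimal_uccs(rows, 3))  # [(0, 1), (1, 2)]
```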
This article presents the teaching and learning concept for fostering software engineering competencies in the Mechatronics degree program at Hochschule Aschaffenburg. The concept is multi-stage, comprising lecture, seminar, and project sequences. Challenges and potential improvements are identified and described. Finally, an overview is given of how teaching and learning concepts can be developed further within a recently started research project.
Does a smile open all doors?
(2020)
Online photographs govern an individual’s choices across a variety of contexts. In sharing arrangements, facial appearance has been shown to affect the desire to collaborate, interest in exploring a listing, and even willingness to pay for a stay. Because of the ubiquity of online images and their influence on social attitudes, it seems crucial to be able to control these aspects. The present study examines the effect of different photographic self-disclosures on the provider’s perceptions and willingness to accept a potential co-sharer. The findings from our experiment in the accommodation-sharing context suggest social attraction mediates the effect of photographic self-disclosures on willingness to host. Implications of the results for IS research and practitioners are discussed.
Helping overcome distance, the use of videoconferencing tools has surged during the pandemic. To shed light on the consequences of videoconferencing at work, this study takes a granular look at the implications of the self-view feature for meeting outcomes. Building on self-awareness research and self-regulation theory, we argue that by heightening the state of self-awareness, self-view engagement depletes participants’ mental resources and can thereby undermine online meeting outcomes. Evaluation of our theoretical model on a sample of 179 employees reveals a nuanced picture. Self-view engagement while speaking and while listening is positively associated with self-awareness, which, in turn, is negatively associated with satisfaction with the meeting process, perceived productivity, and meeting enjoyment. The findings highlight the criticality of the communication role: looking at oneself while listening to other attendees has negative direct and indirect effects on meeting outcomes, whereas looking at oneself while speaking produces equivocal effects.
Despite the phenomenal growth of Big Data Analytics in the last few years, little research has been done to explicate the relationship between Big Data Analytics Capability (BDAC) and the indirect strategic value derived from such digital capabilities. We attempt to address this gap by proposing a conceptual model of the BDAC-innovation relationship using dynamic capability theory. The work expands on BDAC business value research and extends the limited research on BDAC and innovation. We focus on BDAC's relationship with different innovation objects, namely product, business process, and business model innovation, impacting all value chain activities. The insights gained will stimulate academic and practitioner interest in explicating the strategic value generated from BDAC and serve as a framework for future research on the subject.
Table of contents:
1 Introduction
2 Aspect-Oriented Programming
2.1 A System as a Set of Properties
2.2 Aspects
2.3 Aspect Weavers
2.4 Advantages of Aspect-Oriented Programming
2.5 Categorization of Techniques and Tools for Aspect-Oriented Programming
3 Techniques and Tools for the Analysis of Aspect-Oriented Software Programs
3.1 Virtual Source File
3.2 FEAT
3.3 JQuery
3.4 Aspect Mining Tool
4 Techniques and Tools for the Design of Aspect-Oriented Software Programs
4.1 Concern Space Modeling Schema
4.2 Modeling Aspects with UML
4.3 CoCompose
4.4 Codagen Architect
5 Techniques and Tools for the Implementation of Aspect-Oriented Software Programs
5.1 Static Aspect Weavers
5.2 Dynamic Aspect Weavers
6 Summary
Technical report
(2019)
The design and implementation of service-oriented architectures raises numerous research questions in the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches to the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This is manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as the collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of their prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
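To make the global decoding task concrete, the following is a minimal brute-force reference sketch with a made-up 3-node score matrix; the MST and ILP decoders compared in the paper solve the same maximization far more efficiently:

```python
# Brute-force global tree decoding over local edge scores: enumerate all
# parent assignments for the non-root nodes and keep the highest-scoring
# one that forms a tree (i.e., contains no cycle).
from itertools import product

def best_tree(score, root=0):
    n = len(score)
    nodes = [v for v in range(n) if v != root]
    best, best_parents = float("-inf"), None
    for parents in product(range(n), repeat=len(nodes)):
        assignment = dict(zip(nodes, parents))
        if any(v == p for v, p in assignment.items()):
            continue  # no self-loops
        # following parent pointers from inside a cycle never reaches root
        if any(not reaches_root(v, assignment, root, n) for v in nodes):
            continue
        total = sum(score[p][v] for v, p in assignment.items())
        if total > best:
            best, best_parents = total, assignment
    return best, best_parents

def reaches_root(v, assignment, root, n):
    for _ in range(n):
        if v == root:
            return True
        v = assignment[v]
    return v == root

# score[p][v]: local score for attaching node v under parent p
score = [[0.0, 0.9, 0.1], [0.0, 0.0, 0.8], [0.0, 0.3, 0.0]]
print(best_tree(score))  # parents {1: 0, 2: 1}: the chain 0 -> 1 -> 2
```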
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong, or classical negation. This explicit negation is normally used in front of atoms, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, such as those defined by Lifschitz, Tang, and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
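For readers unfamiliar with the reduct construction the paper generalizes, here is a minimal sketch of the standard Gelfond-Lifschitz reduct and stable model test for plain normal programs; the paper's extension to nested expressions and explicit negation is not reproduced here:

```python
# A rule is (head, positive_body, default_negated_body).

def reduct(program, candidate):
    """Delete rules whose negated body intersects the candidate set;
    strip the negated atoms from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & candidate)]

def least_model(positive_program):
    """Fixpoint of the immediate-consequence operator T_P."""
    model = set()
    while True:
        derived = {h for (h, pos) in positive_program
                   if set(pos) <= model}
        if derived <= model:
            return model
        model |= derived

def is_stable(program, candidate):
    return least_model(reduct(program, candidate)) == candidate

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
program = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_stable(program, {"p"}), is_stable(program, {"q"}),
      is_stable(program, {"p", "q"}))  # True True False
```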
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without any human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements, and usability barriers. In this research, I investigated existing tools in voice, web, and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary has been developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in the Italian Sign Language as output. The dictionary contains 3082 signs as a set of avatar animations, in which each sign is linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN). LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users’ understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN made it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS has been transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, the traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability: effectiveness, efficiency, and satisfaction.
In the second study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users, and compared the effect of different help modes, including textual, vocal, and virtual-character help, on the performance of semi-literate users. In our newly designed interfaces, the instructions were automatically translated into Swahili. All the interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhance the utility of applications by offering a variety of services, such as translating content into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in this domain. Context sensitivity, literacy, and mental abilities are the key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes, and objectives.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations; second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied, and finally derive recommendations for future academic bridge policies.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students pursuing a bachelor's degree in computer science. The research also investigates the students’ learning styles according to the Felder-Silverman model. These investigations are part of an attempt to reduce the dropout/shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final school grades. This leads to a high percentage of students studying subjects they do not want. Our analysis shows that 75% of the female students do not regret studying computer science, although it was not their first choice. And according to statistics over previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between the freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), both with opposite gender majorities.
The second step of the investigation took place at the department of computer science at the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Since students of these fields usually take some joint courses, investigating the differences in their learning styles is important for identifying which changes to the teaching methods are necessary to address these different groups of students. It was a two-stage study using two questionnaires: the main one is based on the Index of Learning Styles questionnaire by B. A. Solomon and R. M. Felder, and the second one investigated the students’ attitudes towards the findings of their personal results from the first questionnaire. Our analysis shows differences in learning style preferences between male and female students of the different study fields, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely into the difficulties, issues, apprehensions, and expectations of freshman students studying computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing the field of study, or changing the university. The research continued with the same sample of students (with business informatics students being the majority) over more than three semesters. Difficulties and issues during the studies were documented, as well as students’ attitudes, apprehensions, and expectations. Some professors’ and lecturers’ opinions on, and solutions to, some of the students’ problems were also documented. Many participants had apprehensions and difficulties, especially regarding informatics subjects. Some business informatics participants began to think about changing the university, in particular when they reached their third semester; others thought about changing their field of study. Until the end of this research, most of the participants continued their studies (either the one they had started or the new one they had changed to) without leaving the higher education system.
A survey was carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female-dominated environment, showing the differences between male and female students in different academic years. We also compare the attitudes of freshman students from two different cultures (the University of Baghdad, Iraq, and the University of Potsdam).
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems require much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels lead to frustration and to time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
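The core decryption step can be sketched as a token-wise expansion against a mined abbreviation dictionary; the hypothetical dictionary below and the simple lookup stand in for the paper's recommender-like ranking of competing expansions:

```python
# Expand underscore-separated label tokens via an abbreviation
# dictionary that is assumed to have been mined from mapped attribute
# labels in existing workflows (hypothetical entries for illustration).
ABBREVIATIONS = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "CUST": "CUSTOMER",
    "NO": "NUMBER",
}

def decrypt_label(label):
    """Expand each token if a mapping is known; keep unknown tokens."""
    tokens = label.split("_")
    return "_".join(ABBREVIATIONS.get(t, t) for t in tokens)

print(decrypt_label("UNP_PEN_INT"))  # UNPAID_PENALTY_INTEREST
print(decrypt_label("CUST_NO"))      # CUSTOMER_NUMBER
```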
Advances in Web 2.0 technologies have led to the widespread assimilation of electronic commerce platforms as an innovative shopping method and an alternative to traditional shopping. However, due to a pro-technology bias, scholars focus more on the adoption of technology, and less attention has been given to the impact of electronic word of mouth (eWOM) on customers’ intention to use social commerce. This study addresses the gap by exploring the effect of eWOM on males’ and females’ intentions and by identifying the mediating role of perceived crowding. To this end, we adopted a dual-stage multi-group structural equation modeling and artificial neural network (SEM-ANN) approach. We successfully extended the eWOM concept by integrating negative and positive factors and perceived crowding. The results reveal the causal and non-compensatory relationships between the constructs. The variables supported by the SEM analysis are adopted as the ANN model’s input neurons. According to the normalized importance obtained from the ANN approach, males’ intentions to accept social commerce are related mainly to helping the company, followed by core functionalities. In contrast, females are highly influenced by technical aspects and mishandling. The ANN model predicts customers’ intentions to use social commerce with an accuracy of 97%. Based on our findings, we discuss the theoretical and practical implications of increasing customers’ intention toward social commerce channels.
In the era of social networks, the internet of things, and location-based services, many online services produce huge amounts of data containing valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, pose a challenge for the discovery of geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in spatial, temporal, and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge -- the geospatial clustering challenge. The first step of this challenge is to design a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case, which we present in the thesis.
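A minimal sketch of the fixed-radius idea is shown below, assuming incremental assignment of each incoming point to the first cluster centre within both the distance and the time radius; the density and intensity weighting of the actual algorithm is omitted:

```python
# Online geospatiotemporal clustering with fixed distance/time radii.
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def cluster(points, dist_km=1.0, time_h=24.0):
    """points: iterable of (lat, lon, hours_since_epoch) tuples."""
    clusters = []  # each: centre coordinates, reference time, members
    for lat, lon, t in points:
        target = None
        for c in clusters:
            if (haversine_km((lat, lon), c["centre"]) <= dist_km
                    and abs(t - c["time"]) <= time_h):
                target = c
                break
        if target is None:
            clusters.append({"centre": (lat, lon), "time": t,
                             "members": [(lat, lon, t)]})
        else:
            target["members"].append((lat, lon, t))
    return clusters

events = [(52.39, 13.06, 10.0), (52.391, 13.061, 12.0),
          (52.52, 13.40, 11.0)]
print(len(cluster(events)))  # 2: two events close in space and time, one apart
```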
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of the sequential rule mining challenge. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time, and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in geospatiotemporal space. As an applicable use case study, we propose spatiotemporal crime analytics: discovering spatiotemporal patterns of crimes in publicly available crime data.
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose use case studies for the analysis of georeferenced data in terms of situational and public safety awareness.
We propose a network structure-based model for heterosis, and investigate it relying on metabolite profiles from Arabidopsis. A simple feed-forward two-layer network model (the Steinbuch matrix) is used in our conceptual approach. It allows for directly relating structural network properties with biological function. Interpreting heterosis as increased adaptability, our model predicts that the biological networks involved show increasing connectivity of regulatory interactions. A detailed analysis of metabolite profile data reveals that the increasing-connectivity prediction is true for graphical Gaussian models in our data from early development. This mirrors properties of observed heterotic Arabidopsis phenotypes. Furthermore, the model predicts a limit for increasing hybrid vigor with increasing heterozygosity—a known phenomenon in the literature.
Coherent network partitions
(2021)
We continue to study coherent partitions of graphs, whereby the vertex set is partitioned into subsets that induce biclique-spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique-spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O(log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique-spanned subgraphs. In addition, we study coherent partitions of prime graphs, and show that finding coherent partitions reduces to the problem of finding coherent partitions in a prime graph. These results thus provide directions for future approximation algorithms for the coherence number of a given graph.
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations redesigned with JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications; however, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
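The core COP mechanism that JCop provides for Java can be illustrated with a small Python sketch of dynamically scoped layer activation; this toy dispatcher is only an analogy and omits JCop's declarative adaptation language entirely:

```python
# A minimal layer mechanism: partial method variations are activated for
# the dynamic extent of a with-block, the innermost active layer wins.
from contextlib import contextmanager

_active_layers = []

@contextmanager
def layer(name):
    """Activate a layer for the dynamic extent of a with-block."""
    _active_layers.append(name)
    try:
        yield
    finally:
        _active_layers.remove(name)

class Navigation:
    variations = {}  # layer name -> partial method

    def route(self):
        for l in reversed(_active_layers):
            if l in self.variations:
                return self.variations[l](self)
        return "shortest route"  # base behavior

Navigation.variations["low_battery"] = lambda self: "battery-saving route"
Navigation.variations["offline"] = lambda self: "cached route"

nav = Navigation()
print(nav.route())             # shortest route
with layer("low_battery"):
    print(nav.route())         # battery-saving route
    with layer("offline"):
        print(nav.route())     # cached route (innermost layer wins)
```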
This paper describes the proof calculus LD for clausal propositional logic, which is a linearized form of the well-known DPLL calculus extended by clause learning. It is motivated by the demand to model how current SAT solvers built on clause learning work, while abstracting from decision heuristics and implementation details. The calculus is proved sound and terminating. Further, it is shown that both the original DPLL calculus and the conflict-directed backtracking calculus with clause learning, as implemented in many current SAT solvers, are complete and proof-confluent instances of the LD calculus.
Many formal descriptions of DPLL-based SAT algorithms either do not include all essential proof techniques applied by modern SAT solvers or are bound to particular heuristics or data structures. This makes it difficult to analyze proof-theoretic properties or the search complexity of these algorithms. In this paper we try to improve this situation by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus with clause learning. This calculus is independent of implementation details yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms.
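For reference, a minimal sketch of the plain DPLL procedure with unit propagation is shown below; clause learning and conflict-directed backtracking, which the calculi above are designed to capture, are deliberately omitted:

```python
# Clauses are lists of non-zero integers; a negative literal -v means
# "not x_v". An assignment is a set of literals taken to be true.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            if not unassigned:
                return None  # conflict: clause falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}  # unit clause
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = ({abs(l) for c in clauses for l in c}
            - {abs(l) for l in assignment})
    if not free:
        return assignment  # all variables decided: satisfying assignment
    v = min(free)  # decision (real solvers use heuristics here)
    return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(sorted(dpll([[1, 2], [-1, 2], [-2, 3]])))  # [1, 2, 3]
```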
QuantPrime
(2008)
Background
Medium- to large-scale expression profiling using quantitative polymerase chain reaction (qPCR) assays is becoming increasingly important in genomics research. A major bottleneck in experiment preparation is the design of specific primer pairs, where researchers have to make several informed choices, often outside their area of expertise. Using currently available primer design tools, several interactive decisions have to be made, resulting in lengthy design processes and assays of varying quality.
Results
Here we present QuantPrime, an intuitive and user-friendly, fully automated tool for primer pair design in small- to large-scale qPCR analyses. QuantPrime can be used online at http://www.quantprime.de/ or on a local computer after download; it offers design and specificity checking with highly customizable parameters and is ready to use with many publicly available transcriptomes of important higher eukaryotic model organisms and plant crops (currently 295 species in total), while benefiting from exon-intron border and alternative splice variant information in available genome annotations. Experimental results with the model plant Arabidopsis thaliana, the crop Hordeum vulgare, and the model green alga Chlamydomonas reinhardtii show success rates of designed primer pairs exceeding 96%.
Conclusion
QuantPrime constitutes a flexible, fully automated web application for reliable primer design for use in larger qPCR experiments, as proven by experimental data. The flexible framework is also open for simple use in other quantification applications, such as hydrolyzation probe design for qPCR and oligonucleotide probe design for quantitative in situ hybridization. Future suggestions made by users can be easily implemented, thus allowing QuantPrime to be developed into a broad-range platform for the design of RNA expression assays.
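The flavour of automated, rule-based primer filtering can be sketched with two textbook checks, GC content and the Wallace melting-temperature rule; QuantPrime's actual specificity checking against whole transcriptomes and its use of genome annotations go far beyond this:

```python
# Filter candidate primers by GC content and a rule-of-thumb melting
# temperature; the sequences and thresholds are illustrative only.

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rule of thumb Tm = 2(A+T) + 4(G+C), valid for short oligos only."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def acceptable(primer, gc_range=(0.4, 0.6), tm_range=(55, 65)):
    return (gc_range[0] <= gc_content(primer) <= gc_range[1]
            and tm_range[0] <= wallace_tm(primer) <= tm_range[1])

candidates = ["ATGCGTACGTTAGCATGCAT", "AAAAAATTTTTTAAAATTTT"]
for p in candidates:
    print(p, acceptable(p))  # the AT-only candidate fails both checks
```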
For the present study, "Qualitative study on the acceptance of the new German identity card and development of proposals for improving the usability of the AusweisApp software", an innovation team used the design thinking method to work on the question "How can we make the AusweisApp intuitive and understandable for users?". First, the acceptance of the new identity card was tested: citizens were asked about their level of knowledge and their expectations regarding the new identity card, as well as about their general use of the new identity card, their use of the online identification function, and the usability of the AusweisApp. Furthermore, users were observed while using the current AusweisApp and interviewed afterwards, which allowed deep insights into their needs. The results of the qualitative study were used to develop proposals for improving the AusweisApp that correspond to citizens' needs. The proposals for optimizing the AusweisApp were implemented as prototypes and tested with potential users. The tests showed that the developed innovations considerably simplify citizens' access to the online identification function. Overall, it was found that the degree of acceptance of the new identity card diverges strongly: respondents' attitudes ranged from scepticism to approval. The new identity card is a topic that polarizes citizens. The user tests revealed numerous potential improvements of the existing service design, both around the new identity card itself and in connection with the software used. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate and verify its proposals. The elaborated proposals relate to the AusweisApp. The new functions essentially comprise:
- direct access to the service providers,
- extensive help features (tooltips, FAQ, wizard, video),
- a history function,
- an example service that makes the online identification function tangible.
In particular, the new version of the AusweisApp should offer users fields of application for their new identity card and thus added value. The development of further AusweisApp functions can help the new identity card realize its full potential.
Companies develop process models to explicitly describe their business operations. At the same time, these business operations must adhere to various types of compliance requirements. Regulations such as the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment; in other cases, it leads to a loss of competitive advantage and thus of market share. Unlike the classical, domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules; each pattern is mapped onto a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an automated way to check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy of the violation.
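To make the pattern-to-formula idea above concrete, the following Python sketch encodes the classic "response" compliance pattern (every occurrence of activity A must eventually be followed by activity B) as an LTL-style formula and checks it over finite execution traces; the pattern name, trace format, and activity labels are illustrative assumptions, not the thesis's actual pattern catalog or model checker.

    # Sketch: a "response" compliance pattern as a temporal logic formula
    # plus a naive finite-trace check (illustrative, not the thesis's tooling).
    def response_formula(a: str, b: str) -> str:
        # G(a -> F b): globally, a implies eventually b
        return f"G({a} -> F {b})"

    def check_response(trace: list[str], a: str, b: str) -> bool:
        """True iff every occurrence of a is followed by a later b."""
        return all(b in trace[i + 1:]
                   for i, event in enumerate(trace) if event == a)

    print(response_formula("claim_received", "claim_archived"))
    print(check_response(["claim_received", "assess", "claim_archived"],
                         "claim_received", "claim_archived"))  # True
    print(check_response(["claim_received", "assess"],
                         "claim_received", "claim_archived"))  # False

A real model checker evaluates such formulas against all possible executions of the process model rather than single traces, which is what enables feedback on the model parts causing a violation.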
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures that have only recently been introduced to the field of hydrology: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU). We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation, and two years served as warm-up periods (one year each preceding the calibration and validation periods); from the remaining 14 years, we sampled increasing amounts of data for model calibration and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters; however, LSTM and GRU exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
Teachers of all subjects need computing competencies to do justice to the growing everyday relevance of computer science and to current curricula. In Saxony, for example, the upper secondary school curriculum for the subject of civics, legal education and economics refers, in the topic "Digitalisation and social change" planned for grade 11, to Artificial Intelligence (AI) and explicitly to the importance of computer science education. To convey the necessary computing foundations, a workshop was developed for student teachers of politics that teaches the basics of how AI works, using supervised machine learning with neural networks as an example. The workshop aims to enable an informed discourse on political topics with reference to societal implications such as data protection for training data and algorithmic bias. The goals of the workshop are: (1) building computing competencies related to AI, (2) strengthening the students' discussion skills through suitable computing competencies, and (3) encouraging the students to transfer what they have learned to suitable topics in civics lessons. The evaluation concept comprises a pre-post survey on confidence in one's own ability to teach machine learning with neural networks in class, as well as the analysis of a concluding discussion. The pre-post survey showed an increase in this confidence. The analysis of the discussion showed that participants were aware of the everyday relevance of AI, but did not yet apply the computing content of the workshop to support their arguments in the discussion.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. This event data can be used to analyze processes with process mining techniques, aiming at a better understanding and at improvement of the processes. Process models can be automatically discovered, and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation, and understanding of process discovery results. Current approaches try to abstract from events in an automated way, which does not capture the domain knowledge required to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery, yet they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level needed by the business, which is typically defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a preprocessing step for every process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in industry case studies, using a large industrial process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness to nonconforming execution logs.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, which want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized, but accomplishing this in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. A modeling language for course timetabling must be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible, for capturing new constraints and for switching them between hard and soft, and flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head is of the form penalty(S,V,C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best known bounds, we succeeded in either improving or matching the bounds for many combinations of problem instances and formulations.
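As a hedged illustration of the penalty(S,V,C) soft-constraint style described above, the following toy program, run via clingo's Python API (assuming the clingo package is installed), assigns lectures to timeslots and penalizes shared slots; the predicates and the "spread" constraint are invented for illustration and are not the paper's ITC-2007 encoding.

    # Toy ASP encoding in the penalty(S,V,C) style, solved with clingo.
    from clingo import Control

    PROGRAM = """
    lecture(l1;l2). slot(1..3).
    % hard constraint: each lecture is assigned exactly one timeslot
    { assigned(L,S) : slot(S) } = 1 :- lecture(L).
    % soft constraint "spread": violation V and cost C appear in the head
    penalty(spread, shared(S), 2) :- assigned(L1,S), assigned(L2,S), L1 < L2.
    #minimize { C,S,V : penalty(S,V,C) }.
    #show assigned/2.
    """

    ctl = Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))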
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties or develops special techniques for temporal logics, such as modal transition systems or other dual transition systems. By contrast, we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore, we show that this leads directly to an implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
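For orientation, the concrete CTL semantic function that such an abstraction starts from can be written as simple fixpoint computations over a finite transition system; the following Python sketch shows EX as a pre-image and E[p U q] as a least fixpoint. It illustrates only the standard concrete semantics, not the paper's constraint-based abstract domains or SMT integration.

    # Standard concrete CTL semantics over a finite transition system.
    def ex(states, trans, phi):
        """EX phi: states with at least one successor in phi (pre-image)."""
        return {s for s in states if any(t in phi for (u, t) in trans if u == s)}

    def eu(states, trans, p, q):
        """E[p U q] as the least fixpoint of X = q ∪ (p ∩ EX X)."""
        result = set(q)
        while True:
            new = result | (set(p) & ex(states, trans, result))
            if new == result:
                return result
            result = new

    states, trans = {0, 1, 2}, {(0, 1), (1, 2), (2, 2)}
    print(eu(states, trans, p={0, 1}, q={2}))  # {0, 1, 2}

Abstract model checking replaces these sets and set operations with abstract-domain counterparts, which for infinite-state systems makes the fixpoints computable at the price of approximation.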
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organisations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organization's information systems and are used by process experts to retrieve deep insights into the organization's running processes. From the events in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and thus not utilized to their full potential.
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by applying existing process mining techniques. Within this research gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of historical data manipulation at the process model level of abstraction. Last but not least, each process model discovered from an event log is presumed to be independent of other process models, thus ignoring possible data dependencies between processes within an organization.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log solely from the database transactions stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus complementing the discovered process model with important domain knowledge. The third method captures, at the process model level, how the data affect the running process instances. Lastly, the fourth method concerns the discovery of relations between business processes (i.e., how they exchange data) from a set of event logs, and explicitly represents such complex interdependencies in a business process architecture.
All the methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
In recent years, the increased interest in application areas such as social networks has resulted in a rising popularity of graph-based approaches for storing and processing large amounts of interconnected data. To extract useful information from the growing network structures, efficient querying techniques are required.
In this paper, we propose an approach for graph pattern matching that allows a uniform handling of arbitrary constraints over the query vertices. Our technique builds on a previously introduced matching algorithm, which takes concrete host graph information into account to dynamically adapt the employed search plan during query execution. The dynamic algorithm is combined with an existing static approach for search plan generation, resulting in a hybrid technique which we further extend by a more sophisticated handling of filtering effects caused by constraint checks. We evaluate the presented concepts empirically based on an implementation for our graph pattern matching tool, the Story Diagram Interpreter, with queries and data provided by the LDBC Social Network Benchmark. Our results suggest that the hybrid technique may improve search efficiency in several cases, and rarely reduces efficiency.
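The dynamic ingredient of such a hybrid technique can be sketched compactly: during backtracking pattern matching, the next query vertex is chosen at runtime as the one with the fewest remaining host candidates. The following Python sketch illustrates this most-constrained-first principle on undirected adjacency sets; it is an illustrative toy, not the Story Diagram Interpreter's implementation.

    # Backtracking graph pattern matching with a dynamically adapted plan:
    # always extend the match at the most constrained query vertex.
    def match(query_adj, host_adj, partial=None):
        partial = partial or {}
        if len(partial) == len(query_adj):
            yield dict(partial)
            return
        def candidates(q):  # hosts adjacent to all already-matched neighbours
            cands = set(host_adj)
            for n in query_adj[q]:
                if n in partial:
                    cands &= host_adj[partial[n]]
            return cands - set(partial.values())
        q = min((v for v in query_adj if v not in partial),
                key=lambda v: len(candidates(v)))
        for h in candidates(q):
            partial[q] = h
            yield from match(query_adj, host_adj, partial)
            del partial[q]

    query = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}  # triangle
    host = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
    print(sum(1 for _ in match(query, host)))  # 6 matches of the triangle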
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which importantly includes allowing developers to live with temporary inconsistencies. In the case of model-driven software engineering, the versioning approaches employed also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Regardless of what is intended by government curriculum specifications and advised by educational experts, the competencies taught and learned in and out of classrooms can vary considerably. In this paper, we discuss in particular how we can investigate the perceptions that individual teachers have of competencies in ICT, and how these and other factors may influence students' learning. We report case study research which identifies contradictions within the teaching of ICT competencies as an activity system, highlighting issues concerning the object of the curriculum, the roles of the participants and the school cultures. In a particular case, contradictions in the learning objectives between higher order skills and the use of application tools have been resolved by a change in the teacher's perceptions which has not led to changes in other aspects of the activity system. We look forward to further investigation of the effects of these contradictions in other case studies and on forthcoming curriculum change.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken using cameras, e.g., smartphone cameras, or using scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for the training of machine learning models, because obtaining annotated training data using manual annotation mechanisms is time-consuming and costly. In this thesis, we address the question of how we can reduce the cost of acquiring ground truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks. Both networks learn together to perform text localization and text recognition at the same time, while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and to pure scene text recognition, without any changes in the network architecture.
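The core mechanism of combining two networks with a spatial transformer can be sketched in a few lines of PyTorch: a localization network predicts affine parameters, and a differentiable grid sampler crops the predicted region for a downstream recognizer, so both parts can be trained from recognition labels alone. All layer sizes and shapes below are illustrative assumptions, not the thesis's actual architecture.

    # Minimal spatial-transformer sketch: localization + differentiable crop
    # + recognition, trainable end-to-end from recognition labels only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TextSpotter(nn.Module):
        def __init__(self, num_chars: int):
            super().__init__()
            self.localizer = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(8 * 4 * 4, 6))          # 2x3 affine matrix
            self.recognizer = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 32, num_chars))

        def forward(self, x):
            theta = self.localizer(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, (x.size(0), 1, 32, 32),
                                 align_corners=False)
            crop = F.grid_sample(x, grid, align_corners=False)
            return self.recognizer(crop)

    logits = TextSpotter(num_chars=37)(torch.randn(2, 1, 64, 64))
    print(logits.shape)  # torch.Size([2, 37])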
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification, which entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best-fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as named entity recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network's training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the well-known StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotation, without the need for any manually annotated data.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images of natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision", in which students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the scarcity of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfactory to very good results when using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide the necessary information for data integration. We focus on inclusion dependencies (INDs) in general and on a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND "A in B" simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as "AB in CD", (ii) approximate INDs, which allow a certain number of values of A to not be included in B, and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes; only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge of this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In the last years conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
Data obtained from foreign data sources often comes with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle each pair of data values of each pair of attributes in the database must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain, our driving motivation.
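For contrast with Spider's sorted, database-assisted approach, the brute-force baseline it improves upon fits in a few lines: test set inclusion for every attribute pair, optionally tolerating a bounded number of violating values to obtain partial INDs. The column data below is invented for illustration.

    # Naive unary IND discovery by pairwise set inclusion (the quadratic
    # baseline; Spider avoids materializing all pairwise comparisons).
    def discover_inds(columns: dict, max_violations: int = 0):
        """Yield (A, B, violations) whenever A is in B up to max_violations."""
        for a, a_vals in columns.items():
            for b, b_vals in columns.items():
                if a != b and len(a_vals - b_vals) <= max_violations:
                    yield a, b, len(a_vals - b_vals)

    columns = {
        "orders.customer_id": {1, 2, 3, 99},   # 99 is a dirty value
        "customers.id": {1, 2, 3, 4, 5},
        "archived.id": {1, 2, 3},
    }
    print(list(discover_inds(columns)))                    # exact INDs
    print(list(discover_inds(columns, max_violations=1)))  # partial INDs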
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, the Decision Model and Notation (DMN) standard was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution and provides organizations with the flexibility to adapt to the increasingly rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes the externalization of the decision logic of process models into one or more separate decision models, but does not specify how this can be achieved.
The goal of this thesis is to close the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from (1) the control flow and data of process models that exist in enterprises, and (2) event logs recorded by enterprise information systems, which encapsulate day-to-day operations. Furthermore, we extend the methodologies to discover decision models from event logs enriched with fuzziness, a means of dealing with partial knowledge of process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies provide valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. These methodologies are thus applicable in the real world and can be used, for example, for compliance checks, which could improve an organization's decision making and hence its overall performance.
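One basic ingredient of such decision discovery can be sketched as classic decision mining: at an exclusive split in a process, an interpretable classifier is fitted on the data attributes recorded in the event log to recover the routing rule. The log excerpt and attribute names below are invented for illustration, and the thesis's methodologies go well beyond this single step.

    # Recovering the routing rule at an XOR split from event-log attributes.
    from sklearn.tree import DecisionTreeClassifier, export_text

    log = [  # case attributes at the split + the branch that was taken
        {"amount": 120, "vip": 0, "branch": "manual_check"},
        {"amount": 80,  "vip": 1, "branch": "auto_approve"},
        {"amount": 30,  "vip": 0, "branch": "auto_approve"},
        {"amount": 300, "vip": 1, "branch": "manual_check"},
    ]
    X = [[e["amount"], e["vip"]] for e in log]
    y = [e["branch"] for e in log]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["amount", "vip"]))

The learned thresholds can then be rewritten as rows of a decision table that complements the process model.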
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis, we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, together with the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often SoS are combinations of software systems and physical systems, so a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and relies essentially on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e., the adding and removing of components and services, is inadequate. Therefore, our approach directly supports the modelling and verification of evolution.
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach, and differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques known for inductive invariants of timed graph transformation systems can be extended to also cover the hybrid case, for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, the roles may join and leave the collaboration at runtime, and therefore complex structural dynamics can result, which makes it very hard to ensure their correct and safe operation. In this paper we present our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results on the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme that enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person, but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. Required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment and an approach to generate code based on a user’s live-edits.
SandBlocks
(2020)
Visual programming languages are hardly used today in favour of textual programming languages, although visual languages offer several advantages, ranging from the avoidance of syntax errors, through the use of concrete domain-specific notation, to better readability and maintainability of programs. Nevertheless, professional software developers rely almost exclusively on textual programming languages.
To let developers exploit these advantages of visual programming languages without having to give up the textual languages they are familiar with, the idea is to make textual and visual program elements usable together within one programming language. It is then up to the developers when and how they use visual elements in their program code.
This work presents the SandBlocks framework, which enables this joint use of visual and textual program elements. Besides a survey of visual programming languages, it describes the technical integration of visual program elements into the Squeak/Smalltalk system, gives insights into their implementation and use in live programming systems, and discusses their use in different domains.
Confidence Counts
(2021)
The increasing reliance on online learning in higher education has been further expedited by the ongoing Covid-19 pandemic. Students need to be supported as they adapt to this new learning environment. Research has established that learners with positive online learning self-efficacy beliefs are more likely to persevere and achieve their higher education goals when learning online. In this paper, we explore how MOOC design can contribute to the four sources of self-efficacy beliefs posited by Bandura [4]. Specifically, drawing on learner reflections, we explore whether design elements of the MOOC The Digital Edge: Essentials for the Online Learner provided participants with the necessary mastery experiences, vicarious experiences, verbal persuasion, and affective regulation opportunities to evaluate and develop their online learning self-efficacy beliefs. Findings from a content analysis of discussion forum posts show that learners referenced three of the four information sources when reflecting on their experience of the MOOC. This paper illustrates the potential of MOOCs as a pedagogical tool for enhancing online learning self-efficacy among students.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
A method is presented for conveying the principles of three sorting algorithms through the development of interactive applications in Excel.
Integrated instead of isolated
(2022)
Many companies have recognized that data and analytics are drivers of innovation and no longer a mere hygiene factor. To unlock this potential, data must be integrated purposefully; complex system landscapes and isolated data silos make this difficult. Technologies for the successful implementation of data-driven management must be deployed correctly.
Many university entrance and aptitude tests aim to identify students who are suited to the respective degree programme and able to complete it successfully. In computer science, however, it is often unclear which characteristics suitable students should have; presumably, lecturers also do not all agree in their expectations of first-year students, but studies on this have so far been lacking. To analyse the expectations that lecturers at German universities have of first-year computer science students, the MINTFIT project conducted a Germany-wide online survey in summer 2019, in which 588 university lecturers from all federal states participated. The survey showed that mostly general abilities, such as motivation and logical reasoning, are expected, and only little prior subject knowledge, such as programming or formal languages. According to the lecturers' assessment, the problematic areas lie mainly in theoretical computer science and in formal aspects (e.g., formal languages). Although tendencies are discernible, the survey shows that, under strict acceptance criteria, no abilities or knowledge are explicitly presupposed, which suggests that there is not yet a Germany-wide consensus among lecturers.
An action-oriented didactic training for tutors in the bachelor's programme in computer science
(2009)
This paper is concerned with the didactic and pedagogical preparation of student tutors for their work in the bachelor's programme in computer science. To convey the theoretical content from social and learning psychology in an action-oriented and efficient manner, a training is chosen as the form of teaching. Group work, the central method in a tutorial, is taught both explicitly and implicitly. The tutors gain their first practical experience with their future role in role plays, which simulate standard situations as well as subject-specific and pedagogically problematic situations. While this content and the role plays are covered in a block course before the semester starts, classroom observations take place during the semester, in which the tutors' skills are assessed using a standardized evaluation form.
Algorithmic management
(2022)
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
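The division of labour between the volatile index and the persistent log can be illustrated with a deliberately simplified toy: a DRAM-like hash map points into an append-only record log that survives restarts, here simulated with an ordinary file. This sketches the hybrid idea only; it is not Viper's PMem layout, access patterns, or performance behaviour.

    # Toy hybrid KVS: volatile dict index over an append-only persistent log.
    import os, struct

    class ToyHybridKVS:
        def __init__(self, path):
            self.log = open(path, "ab+")       # persistent record log
            self.index = {}                    # volatile: key -> log offset
            self._rebuild_index()              # recovery after restart

        def put(self, key: bytes, value: bytes):
            offset = self.log.seek(0, os.SEEK_END)
            self.log.write(struct.pack("II", len(key), len(value)) + key + value)
            self.log.flush()                   # stand-in for a PMem flush/fence
            self.index[key] = offset

        def get(self, key: bytes):
            if key not in self.index:
                return None
            self.log.seek(self.index[key])
            klen, vlen = struct.unpack("II", self.log.read(8))
            return self.log.read(klen + vlen)[klen:]

        def _rebuild_index(self):
            self.log.seek(0)
            while header := self.log.read(8):
                klen, vlen = struct.unpack("II", header)
                self.index[self.log.read(klen)] = self.log.tell() - 8 - klen
                self.log.read(vlen)

    kvs = ToyHybridKVS("/tmp/toy_kvs.log")
    kvs.put(b"answer", b"42")
    print(kvs.get(b"answer"))  # b'42'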
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
First-year computer science students in Germany have very different levels of prior programming knowledge, which repeatedly causes difficulties in pitching introductory courses. Since the winter semester 2008/2009, TU München has been offering a new kind of preparatory course: in only 2.5 days, the participants build a small object-oriented program, working largely on their own, supported by a student tutor. This paper presents the concept of these so-called "Vorprojekte" (preliminary projects) as well as first research approaches.
We present the design and first results of a novel introductory computer science course for students of geodesy. The concept combines three didactic ideas: context orientation, peer tutoring, and practical relevance (Course). Over two semesters, the students are to understand and learn to apply important foundations of computer science. By tightly interlocking the assignments with a context relevant to students from outside computer science, and through a very high proportion of independent student work, motivation for subject-external topics is to be increased. The results show that the course was very successful.
We describe a computer science competition for upper secondary school students that presents the working world of a computer scientist as realistically as possible over several weeks. In the competition, student teams develop an Android app and organize its development using project management methods modelled on professional agile processes. The paper presents the theoretical background on competitions, the organizational and didactic decisions, a first evaluation, as well as a reflection and outlook.
The complexity of today's business operations and the amount of data to be managed place high demands on the development and maintenance of business applications. Their size results, among other things, from the large number of model entities and the associated user interfaces for editing and analysing the data. This report presents novel concepts, and their implementation, for simplifying the development of such large business applications. First, we propose to unify the database and the runtime environment of a dynamic object-oriented programming language. To this end, we organize the memory layout of objects in the manner of a column-oriented in-memory database and, building on this, integrate transactions and a declarative query language seamlessly into the same runtime environment. Transactional and analytical queries can thus be implemented in the same object-oriented high-level language and still be executed close to the data. Second, we describe programming language constructs that allow user interfaces and user interactions to be described generically and independently of concrete model entities. To use this abstract description, the domain models are enriched with previously implicit information; new models only need to be extended by a small amount of information in order to reuse existing user interfaces and interactions. Adaptations that should apply to a single model can be defined incrementally, independently of the default behaviour. Third, with a further programming language construct we enable the coherent description of application workflows, such as order processes. Our programming concept encapsulates user interactions in synchronous function calls and thus makes processes representable as a coherent sequence of computations and interactions. Fourth, we demonstrate a concept that lets end users formulate complex analytical queries more intuitively. It is based on the idea that end users see queries as the configuration of a chart: accordingly, a user describes a query by describing what the chart should display. Charts described in this way contain enough information to generate a query from them, and with respect to execution time the generated queries are equivalent to queries formulated in conventional query languages. We implement this query model in a prototype that builds on the previously introduced concepts.
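The first concept, a column-oriented object layout inside the language runtime, can be hinted at with a small Python sketch in which instances of a class are stored as rows across class-level attribute columns, so analytical scans run directly over the columns; this is an illustration of the idea only, not the report's actual runtime or query language.

    # Objects stored column-wise: attribute access reads from column arrays,
    # while analytical scans iterate a single column directly.
    class ColumnarBase:
        columns = {}

        def __init__(self, **attrs):
            cols = type(self).columns
            self.row = len(next(iter(cols.values())))  # next free row index
            for name, col in cols.items():
                col.append(attrs.get(name))

        def __getattr__(self, name):
            cols = type(self).columns
            if name in cols:
                return cols[name][self.row]
            raise AttributeError(name)

    class Order(ColumnarBase):
        columns = {"customer": [], "amount": []}

    o1, o2 = Order(customer="a", amount=10), Order(customer="b", amount=32)
    print(o1.amount, o2.customer)        # transactional, object-style access
    print(sum(Order.columns["amount"]))  # analytical column scan: 42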
The challenge is to provide teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers' Tryscience is a non-commercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards, and IBM Citizenship, that provides teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer's language, whether due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue with a dedicated tool that closes the gap between MOOC platforms and transcription and translation tools, and that offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, or model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, so we can make use of the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as for relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently posed queries. To ensure that database views yield query results consistent with the data from which they are derived, the views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise, the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers of complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored by graph databases. The discrimination network enables the automatic derivation of generic maintenance rules, expressed as graph transformations, for maintaining graph views when the graph data from which they are derived changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
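As a toy illustration of the graph pattern matching mentioned above (our sketch, not the report's formalism or implementation), the following Python snippet enumerates all matches of a small edge pattern in a directed, edge-labeled graph stored as a set of triples:

```python
# Our illustrative sketch of graph pattern matching over triples.
graph = {("alice", "knows", "bob"), ("bob", "knows", "carol"),
         ("alice", "knows", "carol")}

def match(pattern, graph):
    """pattern: list of (var_or_const, label, var_or_const) triples;
    variables start with '?'. Returns all consistent bindings."""
    bindings = [dict()]
    for (s, lbl, t) in pattern:
        new_bindings = []
        for b in bindings:
            for (gs, glbl, gt) in graph:
                if glbl != lbl:
                    continue
                nb, ok = dict(b), True
                for var, val in ((s, gs), (t, gt)):
                    if var.startswith("?"):
                        if nb.get(var, val) != val:
                            ok = False
                        nb[var] = val
                    elif var != val:
                        ok = False
                if ok:
                    new_bindings.append(nb)
        bindings = new_bindings
    return bindings

# All x, y, z with "x knows y" and "y knows z" (a two-hop pattern):
print(match([("?x", "knows", "?y"), ("?y", "knows", "?z")], graph))
```

A view would store the resulting bindings; the challenge addressed by the report is updating them incrementally instead of re-running the match when the graph changes.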
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data is governed by the exact same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population. For example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive. Data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating training and test distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
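Such rescaling weights can indeed be obtained without estimating the two densities separately, for example by training a classifier that discriminates training from test inputs and using its odds as weights. A minimal sketch in Python, assuming scikit-learn and synthetic data (the setup and numbers below are ours, not from the thesis):

```python
# Covariate-shift correction via importance weighting: the odds of a
# train-vs-test discriminator approximate w(x) ~ p_test(x) / p_train(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 1))           # training inputs
X_test = rng.normal(1.0, 1.0, size=(500, 1))            # shifted test inputs
y_train = (X_train[:, 0] + rng.normal(0, 0.3, 500) > 0).astype(int)

# Discriminate training from test examples in one step.
X_all = np.vstack([X_train, X_test])
domain = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = train, 1 = test
disc = LogisticRegression().fit(X_all, domain)
p_test_given_x = disc.predict_proba(X_train)[:, 1]
weights = p_test_given_x / (1.0 - p_test_given_x)       # rescaling weights

# Fit the predictive model on the reweighted training sample.
model = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
```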
Successfully teaching a lecture course "Informatik I – Introduction to Programming" is difficult, despite a variety of existing materials and proven didactic methods. Precisely because of this wide range of choices, no robust concept has yet established itself that guarantees a high success rate independently of who teaches the course. At the universities of Tübingen and Freiburg, Informatik I was taught from the same teaching materials and under similar conditions in order to test the robustness of the concept used. The lecture is based on a systematic approach to learning programming developed by the PLT group in the USA. It is complemented by new approaches to student support, in particular supervised programming ("Betreutes Programmieren"), in which students develop a solid foundation for their programming skills. This report describes the experience gathered, explains how the teaching methodology and the choice of content evolved compared to previous iterations of the lecture, and presents data on the success of the lecture.
Social networking sites (SNS) are a rich source of latent information about individual characteristics. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, commercial brands have made a gradual appearance on social media platforms for advertising, customer support, and public relations purposes, and by now such a presence has become a necessity across all industries. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploit recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. Predictive evaluation on brands' accounts reveals that the Facebook platform has a slight advantage over Twitter in offering more self-disclosure, allowing users to express their emotions and, in particular, their demographic and psychological traits. The results also support the wider view that the same social media account carries quite similar and comparable personality scores across different social media platforms. To evaluate our prediction results on actual brands' accounts, we crawled the Facebook and Twitter APIs for 100k posts from the pages of the most valuable brands in the USA; we visualize exemplary comparison results and present suggestions for future directions.
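A highly simplified sketch of the underlying idea, assuming scikit-learn (the word categories, posts, and scores below are invented for illustration; the paper uses full LIWC and Five-Factor features): map word-category frequencies in posts to a personality score with a linear model.

```python
# Toy personality prediction from LIWC-like category frequencies (ours).
import numpy as np
from sklearn.linear_model import Ridge

CATEGORIES = {"posemo": {"love", "nice", "great"},
              "negemo": {"hate", "ugly", "bad"},
              "social": {"we", "friend", "talk"}}

def features(post):
    words = post.lower().split()
    return [sum(w in vocab for w in words) / max(len(words), 1)
            for vocab in CATEGORIES.values()]

posts = ["we love our great friends", "bad service and ugly design"]
extraversion = [0.8, 0.3]                      # hypothetical labeled scores
model = Ridge().fit([features(p) for p in posts], extraversion)
print(model.predict([features("talk to us, we love feedback")]))
```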
Learning During COVID-19
(2021)
During the COVID-19 pandemic, learning in higher education and beyond shifted en masse to online formats, with the short- and long-term consequences for Massive Open Online Course (MOOC) platforms, learners, and creators still under evaluation. In this paper, we sought to determine whether the COVID-19 pandemic and this shift to online learning led to increased learner engagement and attainment in a single introductory biology MOOC through evaluating enrollment, proportional and individual engagement, and verification and performance data. As this MOOC regularly operates each year, we compared these data collected from two course runs during the pandemic to three pre-pandemic runs. During the first pandemic run, the number and rate of learners enrolling in the course doubled when compared to prior runs, while the second pandemic run indicated a gradual return to pre-pandemic enrollment. Due to higher enrollment, more learners viewed videos, attempted problems, and posted to the discussion forums during the pandemic. Participants engaged with forums in higher proportions in both pandemic runs, but the proportion of participants who viewed videos decreased in the second pandemic run relative to the prior runs. A higher percentage of learners chose to pursue a certificate via the verified track in each pandemic run, though a smaller proportion earned certification in the second pandemic run. During the pandemic, more enrolled learners did not necessarily correlate to greater engagement by all metrics. While verified-track learner performance varied widely during each run, the effects of the pandemic were not uniform for learners, much like in other aspects of life. As such, individual engagement trends in the first pandemic run largely resemble pre-pandemic metrics but with more learners overall, while engagement trends in the second pandemic run are less like pre-pandemic metrics, hinting at learner “fatigue”. This study serves to highlight that the life-long learning opportunity MOOCs offer is even more critical when traditional education modes are disrupted and more people are at home or unemployed. This work indicates that this boom in MOOC participation may not remain at a high level for the longer term in any one course, but overall, the number of MOOCs, programs, and learners continues to grow.
Through the use of next-generation sequencing (NGS) technology, many newly sequenced organisms are now available. Annotating their genes is one of the most challenging tasks in sequence biology. Here, we present an automated workflow to find homologous proteins, annotate sequences according to function, and create a three-dimensional model.
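A rough sketch of the first step, homology search, assuming Biopython's remote BLAST client is available (the workflow's actual tools and parameters are not specified in the abstract, so everything below is illustrative):

```python
# Illustrative homology search with Biopython's remote BLAST client.
from Bio.Blast import NCBIWWW, NCBIXML

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy protein fragment
handle = NCBIWWW.qblast("blastp", "nr", sequence)  # remote protein BLAST
record = NCBIXML.read(handle)

for alignment in record.alignments[:5]:            # top candidate homologues
    best_hsp = alignment.hsps[0]
    print(alignment.title, best_hsp.expect)        # hit description, e-value
```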
Reading traces
(2020)
Through a design study, we develop an approach to data exploration that utilizes elastic visualizations designed to support varying degrees of detail and abstraction. Examining the notions of scalability and elasticity in interactive visualizations, we introduce a visualization of personal reading traces, such as marginalia or markings, inside the reference library of German realist author Theodor Fontane. To explore such a rich and extensive collection, meaningful visual forms of abstraction and detail are as important as the transitions between those states. Following a growing research interest in the role of fluid interactivity and animations between views, we are particularly interested in the potential of carefully designed transitions and consistent representations across scales. The resulting prototype addresses humanistic research questions about the interplay of distant and close reading with visualization research on continuous navigation along several granularity levels, using scrolling as one of the main interaction mechanisms. In addition to the design process and resulting prototype, we report findings from a qualitative evaluation of the tool, which suggest that bridging between distant and close views can enhance exploration, but that transitions between views need to be crafted very carefully to facilitate comprehension.
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS) and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA is becoming the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources. The contribution of this research is threefold: first, we analyze and classify principles and technologies from Information Technologies (IT) and telecommunications to identify and discuss issues allowing cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model-checking such compositions. Finally, we propose a Service Broker architecture converging Internet and Telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
The transversal hypergraph problem asks to enumerate the minimal hitting sets of a hypergraph. If the solutions have bounded size, Eiter and Gottlob [SICOMP'95] gave an algorithm running in output-polynomial time, but whose space requirement also scales with the output. We improve this to polynomial delay and space. Central to our approach is the extension problem: deciding for a set X of vertices whether it is contained in any minimal hitting set. We show that this is one of the first natural problems to be W[3]-complete. We give an algorithm for the extension problem running in time O(m^{|X|+1} · n) and prove a SETH lower bound showing that this is close to optimal. We apply our enumeration method to the discovery problem of minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
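To fix terminology, here is a small Python illustration (ours, not the paper's enumeration algorithm) of what it means for a vertex set to be a minimal hitting set: it meets every hyperedge, and no proper subset does.

```python
# Minimal hitting sets of a hypergraph, represented as a list of vertex sets.
def is_hitting_set(H, X):
    return all(edge & X for edge in H)       # X intersects every edge

def is_minimal_hitting_set(H, X):
    return (is_hitting_set(H, X) and
            all(not is_hitting_set(H, X - {v}) for v in X))

H = [{1, 2}, {2, 3}, {1, 3}]
print(is_minimal_hitting_set(H, {1, 2}))     # True
print(is_minimal_hitting_set(H, {1, 2, 3}))  # False: {1, 2} already hits all
```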
Data encoding has been applied to database systems for decades, as it mitigates bandwidth bottlenecks and reduces storage requirements. But even in the presence of these advantages, most in-memory database systems use data encoding only conservatively, as the negative impact on runtime performance can be severe. Real-world systems with large parts being infrequently accessed, and cost-efficiency constraints in cloud environments, require solutions that automatically and efficiently select encoding techniques, including heavy-weight compression. In this paper, we introduce workload-driven approaches to automatically determine memory-budget-constrained encoding configurations using greedy heuristics and linear programming. We show for TPC-H, TPC-DS, and the Join Order Benchmark that optimized encoding configurations can reduce the main memory footprint significantly without a loss in runtime performance over state-of-the-art dictionary encoding. To yield robust selections, we extend the linear programming-based approach to incorporate query runtime constraints and mitigate unexpected performance regressions.
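A hypothetical sketch of the greedy flavor of such a selection (the column names, cost model, and numbers are invented; the paper's heuristics and linear programs are more involved): repeatedly switch the column whose encoding change has the best memory-saved-per-slowdown ratio until the memory budget is met.

```python
# Budget-constrained encoding selection, greedy sketch (illustrative only).
candidates = {
    # column: list of (encoding, memory_bytes, runtime_cost)
    "l_orderkey": [("dictionary", 80, 1.00), ("frame_of_reference", 50, 1.05)],
    "l_comment":  [("dictionary", 400, 1.00), ("lz4", 120, 1.60)],
}

def select_encodings(candidates, memory_budget):
    # Start from the fastest encoding for every column.
    choice = {c: min(opts, key=lambda o: o[2]) for c, opts in candidates.items()}
    while sum(enc[1] for enc in choice.values()) > memory_budget:
        best_col, best_opt, best_ratio = None, None, 0.0
        for col, opts in candidates.items():
            cur = choice[col]
            for opt in opts:
                saved = cur[1] - opt[1]          # memory saved by switching
                slowdown = opt[2] - cur[2]       # runtime cost added
                if saved > 0 and saved / max(slowdown, 1e-9) > best_ratio:
                    best_col, best_opt = col, opt
                    best_ratio = saved / max(slowdown, 1e-9)
        if best_col is None:
            raise ValueError("budget not reachable")
        choice[best_col] = best_opt
    return choice

print(select_encodings(candidates, memory_budget=200))
```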
There are many ideas concerning the integration of highly qualified migrants and the demand for them on the German labor market, but as yet no adequate solutions. This article describes a practical solution based on implementing a concept for qualifying academic migrants, using a study program in computer science at the Universität Oldenburg as an example.
How inclusive are we?
(2022)
ACM SIGMOD, VLDB and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide the gender, ethnicity, and other properties of the authors. We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and compare women's authorship percentages in single-blind and double-blind venues over the years. We also cross-compare the results obtained in data management with those of other relevant areas in Computer Science.
VLDB 2021
(2021)
The 47th International Conference on Very Large Databases (VLDB'21) was held on August 16-20, 2021 as a hybrid conference. It attracted 180 in-person attendees in Copenhagen and 840 remote attendees. In this paper, we describe our key decisions as general chairs and program committee chairs and share the lessons we learned.
Research-based learning is a form of teaching and learning in which students go through a complete research process of their own. In computer science degree programs, and in bachelor's programs in particular, the research orientation is only weakly developed: research-based learning is rarely used, although doing so is both possible and worthwhile. This article presents a concept for a software engineering seminar in the bachelor's program and describes how it was carried out. Finally, the concept is discussed and evaluated positively from both the students' and the teachers' perspectives.
Precision oncology is a rapidly evolving interdisciplinary medical specialty. Comprehensive cancer panels are becoming increasingly available at pathology departments worldwide, creating the urgent need for scalable cancer variant annotation and molecularly informed treatment recommendations. A wealth of mainly academia-driven knowledge bases calls for software tools supporting the multi-step diagnostic process. We derive a comprehensive list of knowledge bases relevant for variant interpretation by a review of existing literature followed by a survey among medical experts from university hospitals in Germany. In addition, we review cancer variant interpretation tools, which integrate multiple knowledge bases. We categorize the knowledge bases along the diagnostic process in precision oncology and analyze programmatic access options as well as the integration of knowledge bases into software tools. The most commonly used knowledge bases provide good programmatic access options and have been integrated into a range of software tools. For the wider set of knowledge bases, access options vary across different parts of the diagnostic process. Programmatic access is limited for information regarding clinical classifications of variants and for therapy recommendations. The main issue for databases used for biological classification of pathogenic variants and pathway context information is the lack of standardized interfaces. There is no single cancer variant interpretation tool that integrates all identified knowledge bases. Specialized tools are available and need to be further developed for different steps in the diagnostic process.
Correction to: Knowledge bases and software support for variant interpretation in precision oncology
(2021)
Parsability approaches for several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. Formal properties of these mechanisms are investigated when they are used as language acceptors. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
We introduce a new measure of descriptional complexity on finite automata, called the number of active states. Roughly speaking, the number of active states of an automaton A on input w counts the number of different states visited during the most economic computation of the automaton A for the word w. This concept generalizes to finite automata and regular languages in a straightforward way. We show that the number of active states of both finite automata and regular languages is computable, even with respect to nondeterministic finite automata. We further compare the number of active states to related measures for regular languages. In particular, we show incomparability to the radius of regular languages and that the difference between the number of active states and the total number of states needed in finite automata for a regular language can be of exponential order.
M-rate 0L systems are interactionless Lindenmayer systems together with a function assigning to every string a set of multisets of productions that may be applied simultaneously to the string. Some questions that were left open in the forerunner papers are examined, and the computational power of deterministic M-rate 0L systems is investigated, where tabled and extended variants are also taken into consideration.
We study the concept of reversibility in connection with parallel communicating systems of finite automata (PCFA for short). We define the notion of reversibility in the case of PCFA (also covering the non-deterministic case) and discuss the relationship between the reversibility of the systems and the reversibility of their components. We show that a system can be reversible with non-reversible components and that, the other way around, the reversibility of the components does not necessarily imply the reversibility of the system as a whole. We also investigate the computational power of deterministic centralized reversible PCFA. We show that these very simple types of PCFA (returning or non-returning) can recognize regular languages that cannot be accepted by reversible (deterministic) finite automata, and that they can even accept languages that are not context-free. We also separate the deterministic and non-deterministic variants in the case of systems with non-returning communication. We show that there are languages accepted by non-deterministic centralized PCFA that cannot be recognized by any deterministic variant of the same type.
Computational Thinking
(2015)
Digital technology has radically changed the way people work in industry, finance, services, media and commerce. Informatics has contributed to the scientific and technological development of our society in general and to the digital revolution in particular. Computational thinking is the term indicating the key ideas of this discipline that might be included in the key competencies underlying the curriculum of compulsory education. The educational potential of informatics has a history dating back to the sixties. In this article, we briefly revisit this history looking for lessons learned. In particular, we focus on experiences of teaching and learning programming. However, computational thinking is more than coding. It is a way of thinking and practicing interactive dynamic modeling with computers. We advocate that learners can practice computational thinking in playful contexts where they can develop personal projects (for example, building videogames and/or robots) and share and discuss their constructions with others. In our view, this approach allows an integration of computational thinking in the K-12 curriculum across disciplines.
Dutch allows for variation as to whether the first position in the sentence is occupied by the subject or by some other constituent, such as the direct object. In particular situations, however, this commonly observed variation in word order is ‘frozen’ and only the subject appears in first position. We hypothesize that this partial freezing of word order in Dutch can be explained from the dependence of the speaker’s choice of word order on the hearer’s interpretation of this word order. A formal model of this interaction between the speaker’s perspective and the hearer’s perspective is presented in terms of bidirectional Optimality Theory. Empirical predictions of this model regarding the interaction between word order and definiteness are confirmed by a quantitative corpus study.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the T_P operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
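As a small illustration of the binding issue (our example, not taken from the paper): a built-in comparison can only be evaluated once its argument is bound by a positive database literal.

```latex
% Our illustrative example of (un)safe rules.
% Unsafe: X is not bound by any database literal, so the built-in >
% cannot be evaluated and the answer set would be infinite.
p(X) \leftarrow X > 7.
% Range-restricted: q(X) binds X before the comparison is checked.
p(X) \leftarrow q(X) \wedge X > 7.
```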
Automatic information extraction (IE) from unstructured texts enables entirely new ways of accessing relevant information and analyzing its content, going far beyond previous approaches to keyword-based document search. However, developing programs that extract machine-readable data from text still requires building domain-specific extraction programs. Particularly in the area of enterprise search, where a large number of heterogeneous document types exists, it is often necessary to develop ad-hoc program modules for extracting business-relevant entities, which are combined with generic modules in monolithic IE systems. This is especially critical because, potentially, an entirely new IE system has to be developed for every single use case. This dissertation investigates the efficient development and execution of IE systems in the context of enterprise search, and effective methods for exploiting known structured data in the enterprise context for extracting and identifying business-relevant entities in documents. The foundation of this work is a novel platform for composing IE systems based on a description of the data flow between generic and application-specific IE modules. The platform particularly supports the development and reuse of generic IE modules and offers greater flexibility and expressiveness than previous methods. A document-processing technique developed in this dissertation interprets the data exchange between IE modules as data streams and thereby enables extensive parallelization of individual modules. The autonomous execution of the modules leads to a substantial speed-up in the processing of individual documents and to improved response times, e.g., for extraction services. Previous approaches only investigate increasing the average document throughput by distributed execution of instances of an IE system. Information extraction in the context of enterprise search differs from extraction from the World Wide Web in that structured reference data, e.g., in the form of company databases or terminologies, are usually available and often also describe the relationships between entities. Entities in the enterprise context further have particular characteristics: one class of relevant entities follows certain formation rules, which are not always known but can be inferred from known example entities, so that unknown entities can be extracted. The names of the other class of entities are of a more descriptive nature. The corresponding descriptions in texts can vary, which often complicates the identification of such entities. To develop IE systems more efficiently, the dissertation investigates a method that learns effective regular expressions for extracting unknown entities from example entities alone, thus minimizing the manual effort in such use cases.
Various generalization and specialization heuristics detect patterns at different levels of abstraction and thereby balance precision and recall during extraction. Known rule-learning methods in information extraction do not support the described problem settings but require an (annotated) document corpus. As a third focus, a method is investigated for identifying entities that are predefined by graph-structured reference data. Techniques are devised that go beyond an exact string comparison between text and reference data set, drawing on partial matches and relationships between entities for identification and disambiguation. The technique presented in this work is superior to previous approaches in terms of precision and recall of identification.
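A much-simplified sketch of the regular-expression learning idea (ours; the dissertation's generalization and specialization heuristics are more elaborate): generalize each character of the example entities to a character class and collapse runs into quantified groups.

```python
# Toy regex induction from example entities alone (illustrative only).
import re

def generalize(examples):
    def char_class(c):
        if c.isdigit():
            return r"\d"
        if c.isalpha():
            return r"[A-Za-z]"
        return re.escape(c)

    def compress(classes):
        # Collapse runs of the same class into a quantified group.
        out, i = [], 0
        while i < len(classes):
            j = i
            while j < len(classes) and classes[j] == classes[i]:
                j += 1
            out.append(classes[i] + (f"{{{j - i}}}" if j - i > 1 else ""))
            i = j
        return "".join(out)

    return "|".join(sorted({compress([char_class(c) for c in e])
                            for e in examples}))

# Learn a pattern for hypothetical order numbers.
print(generalize(["AB-1234", "XY-9876"]))  # [A-Za-z]{2}\-\d{4}
```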
Based on experience with the audience response system (ARS) "Auditorium Mobile Classroom Service", this contribution reports on success factors for the use of ARS in university teaching. It considers both the technical framework conditions and challenges of such applications and the different didactic concepts and goals of the actors involved (students, teachers, and the institution). The aim is to identify factors that influence successful use, both for practice and for the scientific study and further development of these systems, and to offer a heuristic framework for the opportunities and challenges involved in using ARS.
In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g., Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework, and the sources are available as supplementary material.
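For the deterministic special case, ordinary backward induction can be sketched in a few lines of Python (our illustration; the paper's framework is monadic and formalised in Idris):

```python
# Finite-horizon backward induction for a deterministic SDP.
def backward_induction(horizon, states, actions, step, reward):
    """step(t, s, a) -> next state; reward(t, s, a) -> float."""
    value = {s: 0.0 for s in states}           # value after the last step
    policy = []
    for t in reversed(range(horizon)):
        rule, new_value = {}, {}
        for s in states:
            best = max(actions(t, s),
                       key=lambda a: reward(t, s, a) + value[step(t, s, a)])
            rule[s] = best
            new_value[s] = reward(t, s, best) + value[step(t, s, best)]
        policy.append(rule)
        value = new_value
    policy.reverse()                            # rules for t = 0..horizon-1
    return policy, value

# Toy SDP: walk on states 0..4, move left or right, reward = next state.
states = range(5)
actions = lambda t, s: [-1, +1]
step = lambda t, s, a: min(max(s + a, 0), 4)
reward = lambda t, s, a: float(step(t, s, a))
policy, value = backward_induction(3, states, actions, step, reward)
```

The correctness question studied in the paper is, informally, when this recursion provably yields optimal decision rules once the deterministic `step` is replaced by a monadic transition and rewards are aggregated by a measure function.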
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, this paper provides the required deeper knowledge of the concepts, their historical background, and an overview of service-oriented architectures.