004 Data processing; Computer science
Document Type
- Article (163)
- Monograph/Edited Volume (157)
- Doctoral Thesis (148)
- Postprint (49)
- Conference Proceeding (29)
- Master's Thesis (8)
- Other (7)
- Part of a Book (2)
- Bachelor Thesis (1)
- Habilitation Thesis (1)
Language
- English (461)
- German (104)
- Multiple languages (2)
Is part of the Bibliography
- yes (567)
Keywords
- machine learning (16)
- answer set programming (13)
- Hasso-Plattner-Institut (10)
- cloud computing (10)
- Cloud Computing (9)
- Hasso Plattner Institute (9)
- Forschungskolleg (8)
- Klausurtagung (8)
- Machine Learning (8)
- Maschinelles Lernen (8)
- Service-oriented Systems Engineering (8)
- Forschungsprojekte (7)
- Future SOC Lab (7)
- In-Memory Technologie (7)
- Modellierung (7)
- Multicore Architekturen (7)
- maschinelles Lernen (7)
- openHPI (7)
- Datenintegration (6)
- E-Learning (6)
- Prozessmodellierung (6)
- Smalltalk (6)
- Visualisierung (6)
- cloud (6)
- Antwortmengenprogrammierung (5)
- Geschäftsprozessmanagement (5)
- Identitätsmanagement (5)
- Informatik (5)
- Ph.D. retreat (5)
- Verifikation (5)
- business process management (5)
- cyber-physical systems (5)
- data integration (5)
- digital education (5)
- identity management (5)
- multicore architectures (5)
- quantitative analysis (5)
- research projects (5)
- security (5)
- service-oriented systems engineering (5)
- social media (5)
- verification (5)
- visualization (5)
- Digitalisierung (4)
- In-Memory technology (4)
- Informatikdidaktik (4)
- Infrastruktur (4)
- MOOCs (4)
- Privacy (4)
- Research School (4)
- Semantic Web (4)
- Sicherheit (4)
- Virtualisierung (4)
- blockchain (4)
- digitale Bildung (4)
- digitalization (4)
- graph transformation (4)
- higher education (4)
- image processing (4)
- nested graph conditions (4)
- privacy (4)
- probabilistic timed systems (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- self-sovereign identity (4)
- smart contracts (4)
- software engineering (4)
- virtual machines (4)
- 3D visualization (3)
- Algorithmen (3)
- Answer Set Programming (3)
- BPMN (3)
- Betriebssysteme (3)
- Bildverarbeitung (3)
- Blockchains (3)
- CityGML (3)
- Cloud (3)
- Computer Networks (3)
- Computernetzwerke (3)
- Data profiling (3)
- Datenbank (3)
- Datenschutz (3)
- Design Thinking (3)
- Graphtransformationen (3)
- IPv4 (3)
- IPv6 (3)
- Informationsextraktion (3)
- Infrastructure (3)
- Innovation (3)
- Internet Protocol (3)
- Internet of Things (3)
- JSP (3)
- Klassifikation (3)
- Künstliche Intelligenz (3)
- Laufzeitmodelle (3)
- Lively Kernel (3)
- MOOC (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Ph.D. Retreat (3)
- Process Modeling (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Virtuelle Maschinen (3)
- Vorhersage (3)
- Werkzeuge (3)
- algorithms (3)
- artificial intelligence (3)
- bibliometric analysis (3)
- business processes (3)
- citation analysis (3)
- clustering (3)
- computer vision (3)
- conference (3)
- data preparation (3)
- data profiling (3)
- database systems (3)
- duplicate detection (3)
- evaluation (3)
- geospatial data (3)
- graph transformation systems (3)
- innovation (3)
- künstliche Intelligenz (3)
- model (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- operating systems (3)
- outlier detection (3)
- process mining (3)
- real-time (3)
- simulation (3)
- systems biology (3)
- tele-TASK (3)
- trust (3)
- user experience (3)
- virtualization (3)
- virtuelle Maschinen (3)
- 3D point clouds (2)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- ACINQ (2)
- ASIC (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Anomalieerkennung (2)
- Assessment (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Augmented reality (2)
- Australian Securities Exchange (2)
- BCCC (2)
- BPM (2)
- BTC (2)
- Bibliometrics (2)
- Big Data (2)
- BitShares (2)
- Bitcoin (2)
- Bitcoin Core (2)
- Blockchain (2)
- Blockchain Auth (2)
- Blockchain-Konsortium R3 (2)
- Blockkette (2)
- Blockstack (2)
- Blockstack ID (2)
- Bluemix-Plattform (2)
- Blöcke (2)
- Bounded Model Checking (2)
- Business Process Management (2)
- Byzantine Agreement (2)
- CSC (2)
- Classification (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Clusteranalyse (2)
- Code (2)
- Colored Coins (2)
- Computergrafik (2)
- Computersicherheit (2)
- DAO (2)
- DPoS (2)
- Data Integration (2)
- Data Modeling (2)
- Data Profiling (2)
- Datenanalyse (2)
- Datenaufbereitung (2)
- Datenbanken (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deep learning (2)
- Delegated Proof-of-Stake (2)
- Delphi study (2)
- Didaktik (2)
- Distributed Proof-of-Research (2)
- Duplikaterkennung (2)
- E-Wallet (2)
- ECDSA (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- Economics (2)
- Electronic and spintronic devices (2)
- Eris (2)
- Ether (2)
- Ethereum (2)
- European Bioinformatics Institute (2)
- European Union (2)
- Europäische Union (2)
- Evolution (2)
- Exploration (2)
- FMC (2)
- FPGA (2)
- Federated Byzantine Agreement (2)
- Fehlertoleranz (2)
- FollowMyVote (2)
- Fork (2)
- Formale Verifikation (2)
- GPU (2)
- Game Dynamics (2)
- General Earth and Planetary Sciences (2)
- Geodaten (2)
- Geography, Planning and Development (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Gridcoin (2)
- HCI (2)
- HPI Schul-Cloud (2)
- Hard Fork (2)
- Hashed Timelock Contracts (2)
- Hauptspeicherdatenbank (2)
- Hochschuldidaktik (2)
- ICA (2)
- IT-Infrastruktur (2)
- IT-infrastructure (2)
- Identität (2)
- In-Memory (2)
- Informatikstudium (2)
- Internet (2)
- Internet Service Provider (2)
- Internet der Dinge (2)
- IoT (2)
- Japanese Blockchain Consortium (2)
- Japanisches Blockchain-Konsortium (2)
- Java (2)
- Kette (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Konferenz (2)
- Konsensalgorithmus (2)
- Konsensprotokoll (2)
- Lightning Network (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lock-Time-Parameter (2)
- MERLOT (2)
- Machine learning (2)
- Megamodell (2)
- Metaverse (2)
- Micropayment-Kanäle (2)
- Microsoft Azure (2)
- Middleware (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Mustererkennung (2)
- N-of-1 trial (2)
- NASDAQ (2)
- NameID (2)
- Namecoin (2)
- Navigation (2)
- Off-Chain-Transaktionen (2)
- Onename (2)
- Online Course (2)
- Online-Learning (2)
- Ontologie (2)
- OpenBazaar (2)
- Optimization (2)
- Oracles (2)
- Orphan Block (2)
- P2P (2)
- Patterns (2)
- Peer-to-Peer Netz (2)
- Peercoin (2)
- PoB (2)
- PoS (2)
- PoW (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privatsphäre (2)
- Process (2)
- Process Mining (2)
- Programmieren (2)
- Programmierung (2)
- Proof-of-Burn (2)
- Proof-of-Stake (2)
- Proof-of-Work (2)
- Prozess (2)
- Python (2)
- Ressourcenoptimierung (2)
- Ripple (2)
- Runtime analysis (2)
- SCED (2)
- SCP (2)
- SHA (2)
- SPV (2)
- SQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Satisfiability (2)
- Schule (2)
- Schwierigkeitsgrad (2)
- Second Life (2)
- Semiconductors (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- Simplified Payment Verification (2)
- Skalierbarkeit der Blockchain (2)
- Slock.it (2)
- Soft Fork (2)
- Softwarearchitektur (2)
- Steemit (2)
- Stellar Consensus Protocol (2)
- Storj (2)
- Studie (2)
- SysML (2)
- Systematics (2)
- Systemstruktur (2)
- Taxonomy (2)
- Temporallogik (2)
- Texturen (2)
- The Bitfury Group (2)
- The DAO (2)
- Timed Automata (2)
- Transaktion (2)
- Twitter (2)
- Two-Way-Peg (2)
- UX (2)
- Unspent Transaction Output (2)
- User Experience (2)
- VM (2)
- Verhalten (2)
- Verlässlichkeit (2)
- Versionsverwaltung (2)
- Verträge (2)
- Virtual Reality (2)
- Visualization (2)
- Water Science and Technology (2)
- Watson IoT (2)
- YouTube (2)
- Zielvorgabe (2)
- Zookos Dreieck (2)
- Zooko's triangle (2)
- abstraction (2)
- adaptive (2)
- adaptive Systeme (2)
- adaptive systems (2)
- altchain (2)
- alternative chain (2)
- anomaly detection (2)
- anxiety (2)
- app (2)
- architecture (2)
- atomic swap (2)
- attribute assurance (2)
- authorship attribution (2)
- batch processing (2)
- bidirectional payment channels (2)
- big data (2)
- big data services (2)
- bitcoins (2)
- blockchain consortium (2)
- blockchain-übergreifend (2)
- blocks (2)
- bluemix platform (2)
- bounded model checking (2)
- causal discovery (2)
- causal structure learning (2)
- chain (2)
- classification (2)
- cloud security (2)
- co-citation analysis (2)
- co-occurrence analysis (2)
- code (2)
- collaboration (2)
- computer graphics (2)
- computer science (2)
- confirmation period (2)
- consensus algorithm (2)
- consensus protocol (2)
- contest period (2)
- continuous integration (2)
- contracts (2)
- cross-chain (2)
- cyber-physische Systeme (2)
- cyberbullying (2)
- data (2)
- data analytics (2)
- data mining (2)
- data quality (2)
- data wrangling (2)
- debugging (2)
- decentralized autonomous organization (2)
- deep learning (2)
- deferred choice (2)
- dependability (2)
- depression (2)
- dezentrale autonome Organisation (2)
- didactics (2)
- difficulty (2)
- difficulty target (2)
- digital enlightenment (2)
- digital health (2)
- digital identity (2)
- digital interventions (2)
- digital learning platform (2)
- digital sovereignty (2)
- digital transformation (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- direct manipulation (2)
- distributed systems (2)
- doppelter Hashwert (2)
- double hashing (2)
- e-learning (2)
- education (2)
- empathy (2)
- engagement (2)
- exploratory programming (2)
- fault tolerance (2)
- federated voting (2)
- formal semantics (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gender (2)
- geovisualization (2)
- graph constraints (2)
- hashrate (2)
- human computer interaction (2)
- identity (2)
- identity theory (2)
- in-memory technology (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- individual effects (2)
- informatics (2)
- information extraction (2)
- intelligente Verträge (2)
- inter-chain (2)
- interactive technologies (2)
- intrusion detection (2)
- job shop scheduling (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- kontinuierliche Integration (2)
- law (2)
- lebenslanges Lernen (2)
- ledger assets (2)
- lifelong learning (2)
- live programming (2)
- liveness (2)
- longitudinal (2)
- maschinelles Sehen (2)
- memory (2)
- merged mining (2)
- merkle root (2)
- method comparison (2)
- micropayment (2)
- micropayment channels (2)
- miner (2)
- mining (2)
- mining hardware (2)
- minting (2)
- mobile (2)
- mobile application (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- model-driven engineering (2)
- modeling (2)
- modelling (2)
- monitoring (2)
- navigation (2)
- neural networks (2)
- news media (2)
- nonce (2)
- off-chain transaction (2)
- oracles (2)
- parallel processing (2)
- peer-to-peer network (2)
- pegged sidechains (2)
- perception of robots (2)
- prediction (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- production planning and control (2)
- propositional satisfiability (2)
- quorum slices (2)
- real-time systems (2)
- rootstock (2)
- runtime models (2)
- scalability of blockchain (2)
- scarce tokens (2)
- schema discovery (2)
- search (2)
- self-driving (2)
- service-oriented systems (2)
- sidechain (2)
- single-case experimental design (2)
- smalltalk (2)
- social network analysis (2)
- societal effects (2)
- solver (2)
- standardization (2)
- synchronization (2)
- systematic literature review (2)
- systems of systems (2)
- taxonomy (2)
- teamwork (2)
- text based classification methods (2)
- textures (2)
- tiefes Lernen (2)
- timed automata (2)
- tools (2)
- transaction (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- user interaction (2)
- user-generated content (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtual 3D city models (2)
- virtual reality (2)
- virtuelle 3D-Stadtmodelle (2)
- vocational training (2)
- wearables (2)
- web application (2)
- workflow patterns (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- (FPGA) (1)
- 0-day (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D computer graphics (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3DCityDB (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ADFS (1)
- APT (1)
- APX-hardness (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Ackerschmalwand (1)
- Active Directory Federation Services (1)
- Active Evaluation (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Advanced Persistent Threats (1)
- Advanced Video Codec (AVC) (1)
- Adversarial Learning (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Algebraic methods (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Android Security (1)
- Anfrageoptimierung (1)
- Anfragesprache (1)
- Angewandte Spieltheorie (1)
- Angriffe (1)
- Angriffserkennung (1)
- Animal building (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Antwortmengen Programmierung (1)
- Anwendungsvirtualisierung (1)
- Application (1)
- Application Server (1)
- Applied Game Theory (1)
- Apriori (1)
- Arabidopsis thaliana (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Argument Mining (1)
- Artem Erkomaishvili (1)
- Artificial Intelligence (1)
- Artificial neural networks (1)
- Arzt-Patient-Beziehung (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Softwareentwicklung (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributsicherung (1)
- Aufzählung (1)
- Augmented Reality (1)
- Augmented and virtual reality (1)
- Ausbildung (1)
- Ausführung von Modellen (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Auswirkungen (1)
- Authentication (1)
- Authentifizierung (1)
- Authorization (1)
- Autismus (1)
- Automatically controlled windows (1)
- Autorisierung (1)
- BCH (1)
- BCI (1)
- BSS (1)
- Bachelorstudierende der Informatik (1)
- Bahnwesen (1)
- Bank (1)
- Basic Service (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Bean (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Benutzerinteraktion (1)
- Benutzeroberfläche (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Betrachtungsebenen (1)
- Beweistheorie (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioacoustics (1)
- Bioakustik (1)
- Biocomputing (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Biometrie (1)
- Bisimulation (1)
- Blockheizkraftwerke (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Brownian motion with discontinuous drift (1)
- Business Process Models (1)
- Business process modeling (1)
- Bystander (1)
- CCS Concepts (1)
- CEP (1)
- CSCW (1)
- Cactus (1)
- Calibration (1)
- Canvas (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Case management (1)
- Change Management (1)
- Choreographien (1)
- Citymodel (1)
- Clinical predictive modeling (1)
- Cloud Datenzentren (1)
- Clustering (1)
- Codierung (1)
- Cographs (1)
- Coherent partition (1)
- Common Spatial Pattern (1)
- Commonsense reasoning (1)
- Complementary Circuits (1)
- Complexity (1)
- Compliance (1)
- Compliance checking (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational photography (1)
- Computer Science (1)
- Computer Science Education (1)
- Computer crime (1)
- Computergestütztes Training (1)
- Computing (1)
- Conceptual (1)
- Conceptual modeling (1)
- Condition number (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Constraint (1)
- Constraint Solving (1)
- Constraint-Programmierung (1)
- Constraints (1)
- Constructive solid geometry (1)
- Context-oriented Programming (1)
- Contracts (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Convolution (1)
- Covariate Shift (1)
- Covid (1)
- Creative (1)
- Crime mapping (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Customer ownership (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DBMS (1)
- DDoS (1)
- DNA (1)
- DNA computing (1)
- DNS (1)
- DPLL (1)
- Data Dependency (1)
- Data Quality (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data integration (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-centric (1)
- Database (1)
- Database Cost Model (1)
- Databases (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datensicht (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decision support (1)
- Deduction (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Dekubitus (1)
- Delphine (1)
- Delta preservation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Dependency discovery (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Didaktische Konzepte (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Engineering (1)
- Digital Game Based Learning (1)
- Digital image analysis (1)
- Digitale Transformation (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Discrimination Networks (1)
- Distributed Computing (1)
- Distributed computing (1)
- Distributed programming (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diversität (1)
- Dolphins (1)
- Domänenspezifische Modellierung (1)
- Dreidimensionale Computergraphik (1)
- Dubletten (1)
- Duplicate Detection (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- EHR (1)
- EPA (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Effizienz (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Eisenbahnnetz (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Energiesparen (1)
- Enterprise Search (1)
- Entity resolution (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurf (1)
- Entwurfsmuster (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Enumeration algorithm (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Error-Detection Circuits (1)
- Erweiterte Realität (1)
- Escherichia coli (1)
- Estimation-of-distribution algorithm (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Execution Semantics (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FIDO (1)
- FMC-QE (1)
- FOSS (1)
- FRP (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlende Daten (1)
- Fehlererkennung (1)
- Fehlerinjektion (1)
- Fehlerkorrektur (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlvorstellung (1)
- Fernerkundung (1)
- Fertigung (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fintech (1)
- Fitness-distance correlation (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formal modelling (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Fredholm complexes (1)
- Functional Lenses (1)
- Functional dependencies (1)
- Fundamental Modeling Concepts (1)
- Fußgängernavigation (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Game-based learning (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gene expression (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- German schools (1)
- Geschäftsanwendungen (1)
- Geschäftsmodell (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatikalische Inferenz (1)
- Grammatiksysteme (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graph-basiertes Ranking (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphfärbung (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Gruppierung von Prozessinstanzen (1)
- H.264 (1)
- HDI (1)
- HENSHIN (1)
- HPI Forschung (1)
- HPI research (1)
- Hardware accelerator (1)
- Hardware-Software-Co-Design (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Technologie (1)
- Helmholtz problem (1)
- Heterogenität (1)
- Heuristiken (1)
- HiGHmed (1)
- High-Level Synthesis (1)
- Histograms (1)
- Hochschullehre (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Human (1)
- Human-robot interaction (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- IBM 360 (1)
- ICT (1)
- ICT competencies (1)
- IDS (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- ISSEP (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identity Management (1)
- Identity management systems (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Impact (1)
- Imperative calculi (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- Improving classroom (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Industries (1)
- Industry 4.0 (1)
- Inference (1)
- Infinite State (1)
- Informatics (1)
- Informatics Education (1)
- Informatik-Studiengänge (1)
- Informatiksystem (1)
- Informatikunterricht (1)
- Informatikvoraussetzungen (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationssysteme (1)
- Informatische Kompetenzen (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Instagram (1)
- Insurance industry (1)
- Integration (1)
- Interactive Rendering (1)
- Interactive system (1)
- Interaktionsmodell (1)
- Interaktionsmodellierung (1)
- Interaktionstechniken (1)
- Interaktives Rendering (1)
- Interaktives System (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet-Sicherheit (1)
- Interpretability (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Intuition (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- JCop (1)
- Java 2 Enterprise Edition (1)
- Java Virtual Machine (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernel (1)
- Kernmethoden (1)
- Key Competencies (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kommunikation (1)
- Kompetenz (1)
- Kompetenzen (1)
- Kompetenzentwicklung (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Konzeptionell (1)
- Kreativität (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LOD (1)
- LSTM (1)
- Landmarken (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learning Analytics (1)
- Lebendigkeit (1)
- Lebenslanges Lernen (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehrer (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- Lernsoftware (1)
- LiDAR (1)
- Licenses (1)
- Lindenmayer systems (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Liveness (1)
- Logic Programming (1)
- Logics (1)
- Logiksynthese (1)
- Loss (1)
- Low Latency (1)
- Lower Bounds (1)
- Lösungsraum (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- Machine-Learning (1)
- Machinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Marketing (1)
- Marktübersicht (1)
- Maschinen (1)
- Matrizen-Eigenwertaufgabe (1)
- Matroids (1)
- Measurement (1)
- Media in education (1)
- Megamodel (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Mensch-Computer-Interaktion (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamodell (1)
- Migration (1)
- Mindset (1)
- Minimal hitting set (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile-Mapping (1)
- Mobiles Lernen (1)
- Mobilgeräte (1)
- Model Based Engineering (1)
- Model Checking (1)
- Model Consistency (1)
- Model Driven Architecture (1)
- Model Execution (1)
- Model Management (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellbasiert (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Architektur (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Monitoring (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multimodal behavior (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Muster (1)
- Musterabgleich (1)
- Mutation operators (1)
- NUI (1)
- Nash Equilibrium (1)
- Natural ventilation (1)
- Nebenläufigkeit (1)
- Nephrology (1)
- Nested Graph Conditions (1)
- Nested graph conditions (1)
- Network Creation Game (1)
- Network clustering (1)
- Netzneutralität (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- New On-Line Error-Detection Method (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- Non-photorealistic Rendering (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- O (1)
- OAuth (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Omega (1)
- Online Learning Environments (1)
- Onlinekurse (1)
- Onlinelehre (1)
- Ontologies (1)
- Ontology (1)
- Open Source (1)
- OpenID Connect (1)
- Opinion mining (1)
- Optimierungen (1)
- Optimierungsproblem (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Organisationsveränderung (1)
- PAVM (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Parallele Datenverarbeitung (1)
- Paralleles Rechnen (1)
- Parallelization (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Pattern Recognition (1)
- Pedagogical issues (1)
- Peer-to-Peer-Netz ; GRID computing ; Zuverlässigkeit ; Web Services ; Betriebsmittelverwaltung ; Migration (1)
- Performance Prediction (1)
- Personal Data (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Planing (1)
- Plant identification (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- Point-based rendering (1)
- Policy Enforcement (1)
- Polymerase Chain Reaction Experiment (1)
- Popular matching (1)
- Posenabschätzung (1)
- PostGIS (1)
- Pre-RS Traceability (1)
- Prediction Game (1)
- Predictive Models (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy Protection (1)
- Problem Solving (1)
- Probleme in der Studie (1)
- Problemlösen (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Execution (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Prognosen (1)
- Programmierabstraktionen (1)
- Programmiererlebnis (1)
- Programmierkonzepte (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Proof Theory (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Protocols (1)
- Prototyping (1)
- Prozess Verbesserung (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodell (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Quanten-Computing (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RDF (1)
- RL (1)
- RT_PREEMPT patch (1)
- RT_PREEMPT-Patch (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Real-Time Rendering (1)
- Realzeitsysteme (1)
- Reconfigurable (1)
- Region of Interest (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Relational data (1)
- Rendering (1)
- Reparatur (1)
- Reproducible benchmarking (1)
- Research Projects (1)
- Resource Allocation (1)
- Resource Management (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Reversibility (1)
- Robot personality (1)
- Ruby (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime-monitoring (1)
- SAT (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security (1)
- SOA Security Pattern (1)
- SOA Sicherheit (1)
- SSO (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Savanne (1)
- Scale-invariant feature transform (SIFT) (1)
- Scene graph systems (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselkompetenzen (1)
- Schlüsselentdeckung (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Checking Circuits (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantik Web (1)
- Semantische Suche (1)
- Sequential anomaly (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service Provider (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systeme (1)
- Shader (1)
- Sharing (1)
- Sichere Digitale Identitäten (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signal processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simulation (1)
- Simulations (1)
- Simultane Diagonalisierung (1)
- Single Sign On (1)
- Single Trial Analysis (1)
- Single event upsets (1)
- Single-Sign-On (1)
- Situationsbewusstsein (1)
- Skelettberechnung (1)
- Skript-Entwicklungsumgebungen (1)
- Skriptsprachen (1)
- Smart cities (1)
- SoaML (1)
- Social (1)
- Software (1)
- Software Engineering (1)
- Software architecture (1)
- Software-Evolution (1)
- Software-Testen (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwareentwicklung (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Soziale Medien (1)
- Spaltenlayout (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Specification (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- Stadtmodell (1)
- Stance Detection (1)
- Standardisierung (1)
- Standards (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistikprogramm R (1)
- Statistische Tests (1)
- Stilisierung (1)
- Structuring (1)
- Strukturierung (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studiendauer (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suche (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- Systembiologie (1)
- System of Systems (1)
- System structure (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systementwurf (1)
- Systems of Systems (1)
- Systems of parallel communicating (1)
- Systemsoftware (1)
- Szenengraph (1)
- TPTP (1)
- Tableaumethode (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Test (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Sharing Economy (1)
- Theoretische Informatik (1)
- Theoretische Vorlesungen (1)
- Theory (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajektorien (1)
- Transaktionen (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transversal hypergraph (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Type and effect systems (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Untere Schranken (1)
- Unterricht mit digitalen Medien (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- VGG16 (1)
- VM Integration (1)
- VR (1)
- VUCA-World (1)
- Validation (1)
- Value network (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzungsauflösung (1)
- Verletzungserklärung (1)
- Versionierung (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machines (1)
- Virtual machines (1)
- Virtuelle Maschine (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Vocabulary (1)
- Vorhersagemodelle (1)
- W[3]-Completeness (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Well-structuredness (1)
- Werbung (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- WhatsApp (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Workflow (1)
- Wüstenbildung (1)
- X-ray imaging (1)
- ZQSA (1)
- ZQSAT (1)
- Zebris (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- access control (1)
- action and change (1)
- action language (1)
- activity instance state propagation (1)
- acyclic preferences (1)
- adaptiv (1)
- addiction care (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- airbnb (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- analog-to-digital conversion (1)
- analysis (1)
- animated PCA (1)
- animierte PCA (1)
- anisotropic Kuwahara filter (1)
- annotation (1)
- answer (1)
- answer set (1)
- application virtualization (1)
- approximate joint diagonalization (1)
- approximation (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture recovery (1)
- archive analysis (1)
- argument mining (1)
- argumentation research (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- asset management (1)
- association rule mining (1)
- asynchronous circuit (1)
- attacks (1)
- augmented reality (1)
- ausführbare Semantiken (1)
- authentication (1)
- automata (1)
- automated planning (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- automotive electronics (1)
- autonomous (1)
- balance analysis (1)
- bank (1)
- basic cloud storage services (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- betriebliche Weiterbildungspraxis (1)
- bidirectional optimality theory (1)
- bild (1)
- bildbasiertes Rendering (1)
- bio-computing (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- biomarker detection (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- blind source separation (1)
- bounded backward model checking (1)
- bpm (1)
- brand personality (1)
- building models (1)
- business informatics (1)
- business models (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- business process modeling (1)
- bystander (1)
- cancer therapy (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- center dot Computing (1)
- change detection (1)
- change management (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- classes of logic programs (1)
- classifier calibration (1)
- clause elimination (1)
- cleansing (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster-analysis (1)
- code generation (1)
- coding and information theory (1)
- cogeneration units (1)
- cognition (1)
- cognitive load (1)
- cognitive load theory (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- combined task and motion planning (1)
- communication (1)
- competence development (1)
- competencies (1)
- complex optimization (1)
- complexity (1)
- complexity dichotomy (1)
- composite service (1)
- compositional analysis (1)
- comprehension (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computational thinking (1)
- computed tomography (1)
- computer science education (1)
- computer science education (CSE) (1)
- computer security (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- concurrent graph rewriting (1)
- conditions (1)
- confidentiality (1)
- conflicts and dependencies in (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus protocols (1)
- consistency (1)
- consistency restoration (1)
- consistent learning (1)
- constraint (1)
- constraint programming (1)
- constraints (1)
- consumer behavior (1)
- continuous testing (1)
- contract (1)
- control resynthesis (1)
- controlled experiment (1)
- conversational agents (1)
- convolutional neural networks (1)
- corporate nomadism (1)
- corporate takeovers (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- creativity (1)
- crochet (1)
- cryptocurrency exchanges (1)
- cryptology (1)
- cultural heritage (1)
- cumulative culture (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- cyberwar (1)
- data assimilation (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data in business processes (1)
- data migration (1)
- data modeling (1)
- data models (1)
- data objects (1)
- data pipeline (1)
- data requirements (1)
- data science (1)
- data security (1)
- data set (1)
- data states (1)
- data structures and information theory (1)
- data synthesis (1)
- data transformation (1)
- data view (1)
- data visualization (1)
- data-driven (1)
- data-driven artifacts (1)
- database (1)
- database optimization (1)
- database technology (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deduplication (1)
- deep Gaussian processes (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dental caries classification (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depressive symptoms (1)
- desertification (1)
- design (1)
- design research (1)
- design space exploration (1)
- design thinking (1)
- design-science research (1)
- determinism (1)
- deterministic properties (1)
- deurema modeling language (1)
- development tools (1)
- developmental systems (1)
- dezentrale Identitäten (1)
- diagnosis (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- diffusion (1)
- digital nomadism (1)
- digital picture archive (1)
- digital platform openness (1)
- digital strategy (1)
- digital unterstützter Unterricht (1)
- digital whiteboard (1)
- digital workplace transformation (1)
- digitale Hochschullehre (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed ledger technology (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- drift theory (1)
- dropout (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic reconfiguration (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- educational systems (1)
- educational timetabling (1)
- efficiency (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- elections (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- embedded-systems (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- emotional design (1)
- endpoint security (1)
- energy efficiency (1)
- energy savings (1)
- engine (1)
- engineering (1)
- enterprise search (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- enumeration (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- error correction (1)
- error detection (1)
- erzeugende gegnerische Netzwerke (1)
- ethics (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolutionary computation (1)
- evolving systems (1)
- exact simulation methods (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- explicit knowledge (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- expression (1)
- extend (1)
- external knowledge bases (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- failure model (1)
- fatty acid amide hydrolase (1)
- fault injection (1)
- federated industrial platform ecosystems (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- field-programmable gate array (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- forecasts (1)
- forensics (1)
- formal framework (1)
- formal languages (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- fortschrittliche Angriffe (1)
- freie Daten (1)
- freie Software (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- functions (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- gait analysis algorithm (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression matrix (1)
- gene selection (1)
- general (1)
- general education in computer science (1)
- generalization (1)
- generalized discrimination networks (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- global constraints (1)
- global model management (1)
- globale Constraints (1)
- globales Modellmanagement (1)
- grammar inference (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph languages (1)
- graph mining (1)
- graph pattern matching (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-based ranking (1)
- graph-transformations (1)
- hardware accelerator (1)
- hardware architecture (1)
- hardware-software-codesign (1)
- hate speech detection (1)
- health care (1)
- healthcare (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- history-aware runtime models (1)
- holistic (1)
- home office (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human-centered (1)
- human–computer interaction (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity broker (1)
- image (1)
- image captioning (1)
- image data analysis (1)
- image stylization (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immersion (1)
- immutable values (1)
- in-memory (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inertial measurement unit (1)
- inference (1)
- informal and formal learning (1)
- information diffusion (1)
- informatische Allgemeinbildung (1)
- infrastructure (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- interaction (1)
- interaction modeling (1)
- interaction techniques (1)
- interactive media (1)
- interactive simulation (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international human rights (1)
- international humanitarian law (1)
- interpretable machine learning (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intransitivity (1)
- intuition (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invention (1)
- invention mechanism (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iteration method (1)
- iterative regularization (1)
- job-shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key competencies (1)
- key discovery (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge engineering (1)
- knowledge management system (1)
- knowledge representation and nonmonotonic reasoning (1)
- knowledge transfer (1)
- knowledge work (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- labour union education (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law and technology (1)
- leadership (1)
- leanCoP (1)
- learner characteristics (1)
- learning (1)
- learning factory (1)
- lebenszentriert (1)
- left recursion (1)
- level-replacement systems (1)
- life-centered (1)
- linear code (1)
- linear programming problem (1)
- linearer Code (1)
- link discovery (1)
- literature review (1)
- live migration (1)
- lively kernel (1)
- load balancing (1)
- localization (1)
- location-based (1)
- logic (1)
- logic programming (1)
- logic synthesis (1)
- logical signaling networks (1)
- logische Ergänzung (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- long-term interaction (1)
- loop formulas (1)
- machine (1)
- machine learning algorithms (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- manipulation planning (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- market study (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated conversation (1)
- medical (1)
- medical documentation (1)
- medical malpractice (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- meta model (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- middleware (1)
- migration (1)
- misconception (1)
- missing data (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- model generation (1)
- model repair (1)
- model-based (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven architecture (1)
- model-driven software engineering (1)
- modellgetriebene Softwaretechnik (1)
- modular counting (1)
- modularity (1)
- molecular network (1)
- molecular networks (1)
- molecular tumor board (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- mood (1)
- morphic (1)
- morphological analysis (1)
- multi core data processing (1)
- multi factor authentication (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-version models (1)
- multi-family residential buildings (1)
- multidisziplinäre Teams (1)
- multimedia learning (1)
- multimodal representations (1)
- musical scales (1)
- musikalische Tonleitern (1)
- multi-task learning (1)
- mutual gaze (1)
- mutual information (1)
- named entity mining (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network protocols (1)
- networks (1)
- networks-on-chip (1)
- neue Online-Fehlererkennungsmethode (1)
- neural (1)
- new technologies (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- nichtlineare Projektionen (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- nonlinear projections (1)
- notation (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- omega (1)
- on-chip (1)
- online course (1)
- online course creation (1)
- online course design (1)
- online learning (1)
- online photographs (1)
- online-learning (1)
- open innovation (1)
- open source (1)
- open source software (1)
- optical character recognition (1)
- optimal transport (1)
- optimizations (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel rewriting (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- patent (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedestrian navigation (1)
- perception (1)
- perception differences (1)
- performance (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- persönliche Informationen (1)
- petri net (1)
- phone (1)
- physical computing tools (1)
- placement (1)
- planning (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- pose estimation (1)
- poset (1)
- power-law (1)
- predictive models (1)
- preference handling (1)
- preferences (1)
- presentation (1)
- primer pair (1)
- primer pair design (1)
- prior knowledge (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process improvement (1)
- process instance (1)
- process instance grouping (1)
- process model (1)
- process model search (1)
- process modeling languages (1)
- process modelling (1)
- process models (1)
- process refinement (1)
- process scheduling (1)
- processes (1)
- processing (1)
- processor hardware (1)
- professors (1)
- profiling (1)
- program (1)
- programming abstraction (1)
- programming experience (1)
- programming skills (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- psychotherapy (1)
- public cloud storage services (1)
- public dataset (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantified logics (1)
- quantile normalization (1)
- quantum computing (1)
- query optimization (1)
- querying (1)
- railway network (1)
- railways (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- record linkage (1)
- recursive tuning (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- reliability assessment (1)
- remodularization (1)
- remote sensing (1)
- remote-first (1)
- repair (1)
- representation learning (1)
- requirements engineering (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- restoration (1)
- restricted parallelism (1)
- reverse engineering (1)
- reversible reaction (1)
- review (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- robustness (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiabilitiy solving (1)
- savanna (1)
- scheduling (1)
- school (1)
- schwach überwachtes maschinelles Lernen (1)
- scm (1)
- scripting environments (1)
- scripting languages (1)
- scrollytelling (1)
- search plan generation (1)
- security analytics (1)
- security chaos engineering (1)
- security risk assessment (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- selbstprüfende Schaltungen (1)
- selection (1)
- self-adaptive multiprocessing system (1)
- self-adaptive software (1)
- self-disclosure (1)
- self-healing (1)
- self-supervised learning (1)
- semantic classification (1)
- semantic web services (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- semantisches Netz (1)
- sentiment (1)
- sentiment analysis (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sharing economy (1)
- signal processing (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- similarity learning (1)
- similarity measures (1)
- single event upset (1)
- situational awareness (1)
- skeletonization (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- small talk (1)
- smartphone (1)
- smoother (1)
- social attraction (1)
- social media analysis (1)
- social networking sites (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software selection (1)
- software testing (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- solar particle event (1)
- space missions (1)
- spatio-temporal (1)
- spatio-temporal sensor data (1)
- specific primer pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spread correction (1)
- squeak (1)
- stable matching (1)
- stable model semantics (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- statistics program R (1)
- stochastic Petri nets (1)
- stochastic process (1)
- stochastische Petri Netze (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- study (1)
- study problems (1)
- stylization (1)
- super stable matching (1)
- survey mode (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- systems software (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tacit knowledge (1)
- tangible media (1)
- teacher training (1)
- teachers (1)
- teaching (1)
- technical notes and rapid communications (1)
- technologies (1)
- technology (1)
- tele-lab (1)
- tele-teaching (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terminology (1)
- terrain models (1)
- test (1)
- test case prioritization (1)
- test results (1)
- testing (1)
- text classification (1)
- text mining (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- tool building (1)
- topics (1)
- tort law (1)
- touch input (1)
- tptp (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- training (1)
- trajectories (1)
- transduction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust model (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- uncanny valley (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- unsupervised methods (1)
- usability (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- various applications (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwednungsbedingungen (1)
- verschachtelte Anwendungsbedingungen (1)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- view maintenance (1)
- virtual (1)
- virtual 3D city model (1)
- virtual collaboration (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtual learning environments (1)
- virtual machine (1)
- virtual mobility (1)
- virtual teams (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle Realität (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vulnerabilities (1)
- weak supervision (1)
- weakly (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- weight (1)
- well-being (1)
- word order freezing (1)
- workload prediction (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (194)
- Institut für Informatik und Computational Science (159)
- Hasso-Plattner-Institut für Digital Engineering GmbH (102)
- Fachgruppe Betriebswirtschaftslehre (27)
- Mathematisch-Naturwissenschaftliche Fakultät (23)
- Wirtschaftswissenschaften (18)
- Extern (17)
- Bürgerliches Recht (12)
- Digital Engineering Fakultät (8)
- Institut für Physik und Astronomie (8)
As is well known, working from home and mobile work became established at many companies as a result of the Covid-19 pandemic. However, the instruction to work from home, or its mere toleration, has usually rested on factual practice rather than on a legal basis. Such a basis could, however, arise from established company practice (betriebliche Übung). This article examines the legal framework for this.
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. 
Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, it leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped to a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an automatic way of checking compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user. 
The feedback is in the form of parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing automated remedy of the violation.
There is an increasing interest in fusing data from heterogeneous sources. Combining data sources increases the utility of existing datasets, generating new information and creating services of higher quality. A central issue in working with heterogeneous sources is data migration: in order to share and process data in different engines, resource-intensive and complex movements and transformations between computing engines, services, and stores are necessary.
Muses is a distributed, high-performance data migration engine that is able to interconnect distributed data stores by forwarding, transforming, repartitioning, or broadcasting data among distributed engines' instances in a resource-, cost-, and performance-adaptive manner. As such, it performs seamless information sharing across all participating resources in a standard, modular manner. We show an overall improvement of 30 % for pipelining jobs across multiple engines, even when we count the overhead of Muses in the execution time. This performance gain implies that Muses can be used to optimise large pipelines that leverage multiple engines.
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and integrating problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture, which extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research compared to the traditionally applied rule-based approach.
Image feature detection is a key task in computer vision. The Scale Invariant Feature Transform (SIFT) is a prevalent and well-known algorithm for robust feature detection. However, it is computationally demanding, and software implementations cannot achieve real-time performance. In this paper, a versatile, pipelined hardware implementation is proposed that is capable of computing keypoints and rotation-invariant descriptors on-chip. All computations are performed in single-precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. Full high-definition images can be processed at 84 fps, and ultra-high-definition images at 21 fps.
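As a sanity check on the reported frame rates: a fully pipelined design processes a roughly constant number of pixels per second, so the ultra-high-definition rate should be about a quarter of the full-HD rate (UHD has four times as many pixels). A few lines of Python confirm that the two reported figures are consistent:

```python
# The abstract reports 84 fps for full HD and 21 fps for UHD; a pipelined
# hardware design has a fixed pixel throughput, so the two products below
# should match.
fhd_pixels = 1920 * 1080           # full high definition
uhd_pixels = 3840 * 2160           # ultra high definition (4x the pixels)
fhd_throughput = fhd_pixels * 84   # pixels per second at 84 fps
uhd_throughput = uhd_pixels * 21   # pixels per second at 21 fps
# Both come to about 174 million pixels per second.
```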
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well-defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases—managing the storage and manipulation of graph data—updates may leave a given database violating some integrity constraints, again requiring graph repair. We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: State-based repair restores consistency independent of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction (ST) trees for encoding if and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a graph repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows restricting the generation of graph repairs to delta-preserving graph repairs, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes if and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach using a case study from the graph database domain.
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e. high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including physicochemical properties of the drug molecules, patient disease state, and inhalation devices. To predict the impact of these factors on drug exposure, and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows combining knowledge from different sources appropriately, and it was able to adequately predict different sets of clinical data. Finally, we compare the impact of the different factors and find that the most important ones are the size of the inhaled particles, the affinity of the drug to the lung tissue, and the rate of drug dissolution in the lung. Contrary to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important for understanding how inhaled drugs should be designed to achieve the best treatment results in patients.
The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each single process has been systematically investigated, a quantitative understanding of the interaction of these processes remains limited, and identifying optimal drug and formulation characteristics for orally inhaled drugs is therefore still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data. 
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
Advanced mechatronic systems have to integrate existing technologies from mechanical, electrical, and software engineering. They must be able to adapt their structure and behavior at runtime by reconfiguration in order to react flexibly to changes in the environment. Therefore, a tight integration of structural and behavioral models of the different domains is required. This integration results in complex reconfigurable hybrid systems, the execution logic of which cannot be addressed directly with existing standard modeling, simulation, and code-generation techniques. We present in this paper how our component-based approach for reconfigurable mechatronic systems, MechatronicUML, efficiently handles the complex interplay of discrete and continuous behavior in a modular manner. In addition, its extension to even more flexible reconfiguration cases is presented.
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
Quantified Boolean formulas (QBFs) play an important role in theoretical computer science. QBF extends propositional logic in such a way that many advanced forms of reasoning can be easily formulated and evaluated. In this dissertation we present ZQSAT, an algorithm for evaluating quantified Boolean formulas. ZQSAT is based on ZBDDs (Zero-Suppressed Binary Decision Diagrams), a variant of BDDs, and an adapted version of the DPLL algorithm. It has been implemented in C using the CUDD (Colorado University Decision Diagram) package. The capability of ZBDDs to store sets of subsets efficiently enabled us to store the clauses of a QBF very compactly and to embed memoization into the DPLL algorithm. This let us implement the search algorithm in such a way that we could store and reuse the results of all previously solved subformulas with little overhead. ZQSAT can solve some sets of standard QBF benchmark problems (known to be hard for DPLL-based algorithms) faster than the best existing solvers. In addition to prenex-CNF, ZQSAT accepts prenex-NNF formulas. We show and prove how this capability can be exponentially beneficial.
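The core idea of combining DPLL-style splitting with memoization of solved subformulas can be sketched as follows. This is an illustrative Python rendering under simplifying assumptions (a closed prenex-CNF input, clauses as tuples of signed integers), not ZQSAT's ZBDD-based C implementation:

```python
# Sketch of QBF evaluation by DPLL splitting with memoization.
# Assumes a closed prenex QBF: prefix is a list of ('E'|'A', var) pairs
# covering all variables, clauses are tuples of signed integers.

def simplify(clauses, v, val):
    # Remove clauses satisfied by v := val and delete falsified literals.
    lit_true, lit_false = (v, -v) if val else (-v, v)
    return [tuple(l for l in c if l != lit_false)
            for c in clauses if lit_true not in c]

def evaluate(prefix, clauses, memo=None):
    if memo is None:
        memo = {}
    key = (tuple(prefix), frozenset(clauses))
    if key in memo:                          # reuse a solved subformula
        return memo[key]
    if any(len(c) == 0 for c in clauses):    # empty clause: matrix false
        result = False
    elif not clauses:                        # no clauses left: matrix true
        result = True
    else:
        (q, v), rest = prefix[0], prefix[1:]
        branches = [evaluate(rest, simplify(clauses, v, val), memo)
                    for val in (True, False)]
        result = any(branches) if q == 'E' else all(branches)
    memo[key] = result
    return result
```

For example, `evaluate([('E', 1), ('A', 2)], [(1, 2), (1, -2)])` evaluates the formula ∃x ∀y (x ∨ y) ∧ (x ∨ ¬y).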
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters to material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley-equation commonly used to describe current-voltage (JV)-curves, as it assumes a high electrical conductivity of the charge transporting material. Here, an analytical expression for the JV-curves of organic solar cells is derived based on a previously published analytical model. This expression, bearing a similar functional dependence as the Shockley-equation, delivers a new figure of merit α to express the balance between free charge recombination and extraction in low mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
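For reference, the classical Shockley equation referred to above, in the textbook form commonly used for illuminated diodes (standard notation; the paper's own analytical expression and its figure of merit α are not reproduced here):

```latex
% Shockley diode equation with a photogenerated current density J_ph:
%   J_0 : dark saturation current density
%   n   : diode ideality factor
%   q, k_B, T : elementary charge, Boltzmann constant, temperature
J(V) = J_0\left[\exp\!\left(\frac{qV}{n k_B T}\right) - 1\right] - J_{\mathrm{ph}}
```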
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
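As an illustration of the algorithm being analyzed, a minimal UMDA sketch is given below. Parameter choices are illustrative and do not reflect the quasilinear population sizes of the analysis:

```python
import random

def leading_ones(x):
    """Number of leading one-bits in a bit list (the LeadingOnes fitness)."""
    n = 0
    for b in x:
        if b != 1:
            break
        n += 1
    return n

def umda(n, pop_size=100, mu=50, max_iters=500, seed=1):
    # Minimal UMDA: sample a population from a product distribution,
    # select the mu fittest, set each marginal to the frequency of ones
    # among the selected, and clamp marginals to [1/n, 1 - 1/n]
    # (the usual margins, which limit genetic drift).
    rng = random.Random(seed)
    margin = 1.0 / n
    p = [0.5] * n
    pop = []
    for _ in range(max_iters):
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=leading_ones, reverse=True)
        if leading_ones(pop[0]) == n:      # optimum sampled
            return pop[0]
        selected = pop[:mu]
        p = [min(1 - margin, max(margin, sum(x[i] for x in selected) / mu))
             for i in range(n)]
    return pop[0]
```

The selection rate here is mu/pop_size = 1/2; the run time guarantee in the paper improves as this rate shrinks.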
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework, which provides the necessary monitoring and administration services to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions that prevent misuse of laboratory resources through security isolation at the system and network levels. 
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not to substitute conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility to implement hands-on security laboratories on the Internet reliably, securely, and economically.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill this gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by a virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover from critical failures of virtual machines at run time. 
Considering the risk that virtual machines can be misused for compromising production networks, we present a security management solution to prevent the misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not to substitute conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility to implement hands-on security laboratories on the Internet reliably, securely, and economically.
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. By implementing a re-executable model the manual effort of a multi-criteria site analysis can be reduced. The aim is to determine the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the “Center for Spatial Information Science and Systems” (CSISS). This paper discusses effort, benefits and problems associated with the use of the web services.
Abstract gringo
(2015)
This paper defines the syntax and semantics of the input language of the ASP grounder gringo. The definition covers several constructs that were not discussed in earlier work on the semantics of that language, including intervals, pools, division of integers, aggregates with non-numeric values, and lparse-style aggregate expressions. The definition is abstract in the sense that it disregards some details related to representing programs by strings of ASCII characters. It serves as a specification for gringo from Version 4.5 on.
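A few of the constructs covered by the definition can be illustrated with a small, hypothetical program in gringo's input language (the specific rules below are made up for illustration):

```
cell(1..3).                   % interval: expands to cell(1), cell(2), cell(3)
color(red; green; blue).      % pool: expands to three facts
1 { colored(X, C) : color(C) } 1 :- cell(X).   % pick one color per cell
:- #count { X : colored(X, red) } > 1.         % aggregate: at most one red cell
```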
Business process management is experiencing a large uptake in industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff become involved in actual modeling practice, it is crucial to assure model quality and homogeneity along with providing suitable aids for creating models. In this paper we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns: chunks of actions that often appear together in business processes. In particular, we specify action patterns and demonstrate how they can be identified from existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. Our approach is challenged by applying it to the collection of process models from the SAP Reference Model.
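As a toy illustration of the mining step (not the paper's actual technique or data), frequent co-occurrence of actions can be computed with plain support counting; all action names and the threshold below are made up:

```python
from itertools import combinations

def frequent_pairs(processes, min_support=0.5):
    # support(pair) = fraction of processes containing both actions;
    # pairs at or above min_support are candidate action patterns.
    n = len(processes)
    counts = {}
    for actions in processes:
        for pair in combinations(sorted(set(actions)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Hypothetical "transactions": the sets of actions in three process models.
logs = [
    {"check invoice", "approve invoice", "archive"},
    {"check invoice", "approve invoice", "pay"},
    {"check invoice", "reject invoice"},
]
patterns = frequent_pairs(logs)
```

Here "check invoice" and "approve invoice" co-occur in two of three models and qualify as a pattern; a modeling tool could then suggest "approve invoice" whenever "check invoice" is added.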
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
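The general mechanism behind such an active evaluation, sampling from an instrumental distribution and reweighting, can be sketched as a self-normalized importance-sampling estimator. This is a generic illustration under simplifying assumptions (finite instance space, explicitly known distributions), not the thesis's derived optimal sampling distributions.

```python
import random

# Generic sketch of active evaluation via importance sampling (simplifying
# assumptions: finite instance space, known test distribution p and
# instrumental distribution q). Illustrates only the reweighting principle.

def active_error_estimate(instances, p, q, loss, n, rng):
    """Self-normalized importance-sampling estimate of E_p[loss(x)]:
    draw n instances from q, weight each labeled loss by p(x)/q(x)."""
    drawn = rng.choices(instances, weights=[q[x] for x in instances], k=n)
    weights = [p[x] / q[x] for x in drawn]
    losses = [loss(x) for x in drawn]
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)
```

Even when q oversamples the instances that are cheap to label, the weighted estimate still converges to the expected loss under the test distribution p.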
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
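For reference, Discounted Cumulative Gain, one of the measures mentioned above, can be computed as follows. The sketch uses one common gain/discount convention, which may differ in detail from the exact variant evaluated in the thesis.

```python
import math

# Standard computation of Discounted Cumulative Gain (DCG) and its normalized
# form, using the common 2^rel - 1 gain and log2(rank + 1) discount.

def dcg(relevances):
    """DCG of a ranked list of graded relevance labels (rank 1 is index 0)."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A ranking that already lists the most relevant items first has nDCG 1; any misordering lowers it.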
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized control trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture postings on Instagram, affects different dimensions of well-being. The results depicted a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs.

Lay Summary: Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among conducted studies on the hypothesis and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results of this study depicted a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being.
The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs on users.
Duplicate detection is the task of identifying all groups of records within a data set that each represent the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records and (ii) data sets might have a high volume making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition of such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaption strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
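The contrast between the classic fixed window and an adaptive one can be sketched as follows. The adaptive variant shown (extend the window whenever a duplicate is found near its end) is only one plausible strategy under hypothetical names; the paper's adaption strategies may differ in detail.

```python
# Sketch of the classic Sorted Neighborhood Method and one simple adaptive
# variant (illustrative only, not the paper's exact strategies): records are
# sorted by a key and only records inside the sliding window are compared.

def sorted_neighborhood(records, key, window, is_duplicate):
    """Classic SNM: sort by key, compare each record to its window-1 successors."""
    srt = sorted(records, key=key)
    pairs = set()
    for i in range(len(srt)):
        for j in range(i + 1, min(i + window, len(srt))):
            if is_duplicate(srt[i], srt[j]):
                pairs.add((srt[i], srt[j]))
    return pairs

def adaptive_snm(records, key, window, is_duplicate):
    """Adaptive variant: whenever a duplicate is found, the window is extended
    past the hit, so chains of similar records are not cut off."""
    srt = sorted(records, key=key)
    pairs = set()
    n = len(srt)
    for i in range(n):
        j, limit = i + 1, min(i + window, n)
        while j < limit:
            if is_duplicate(srt[i], srt[j]):
                pairs.add((srt[i], srt[j]))
                limit = min(j + window, n)  # extend the window on a hit
            j += 1
    return pairs
```

On a sorted run of similar records, the adaptive variant finds pairs the fixed window misses, at the price of extra comparisons only in high-similarity regions.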
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data’s full potential. In this paper, we present a demo which allows experiencing this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
In recent years, the ever-growing amount of documents on the Web as well as in closed systems for private or business contexts led to a considerable increase in valuable textual information about topics, events, and entities. It is a truism that the majority of information (i.e., business-relevant data) is only available in unstructured textual form. The text mining research field comprises various practice areas that have the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is to share relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services is growing continuously and the inherent knowledge is available to be utilized. We show that this knowledge can be used for three different tasks.
Initially, we demonstrate that when searching persons with ambiguous names, the information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means to handle persons missing in the UGC source. We show that the proposed approaches outperform traditional algorithms for search result clustering. Secondly, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users to identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm to tackle the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is to link mentions within the documents to their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation shows the added value that the usage of these sources provides and confirms the appropriateness of leveraging user-generated content to serve different information needs.
Process mining (PM) has established itself in recent years as a main method for visualizing and analyzing processes. However, the identification of knowledge has not been addressed adequately because PM aims solely at data-driven discovering, monitoring, and improving real-world processes from event logs available in various information systems. The following paper, therefore, outlines a novel systematic analysis view on tools for data-driven and machine learning (ML)-based identification of knowledge-intensive target processes. To support the effectiveness of the identification process, the main contributions of this study are (1) to design a procedure for a systematic review and analysis for the selection of relevant dimensions, (2) to identify different categories of dimensions as evaluation metrics to select source systems, algorithms, and tools for PM and ML as well as include them in a multi-dimensional grid box model, (3) to select and assess the most relevant dimensions of the model, (4) to identify and assess source systems, algorithms, and tools in order to find evidence for the selected dimensions, and (5) to assess the relevance and applicability of the conceptualization and design procedure for tool selection in data-driven and ML-based process mining research.
In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons why these solvers are so fast are structural properties of instances that are exploited by the solvers' internals. This thesis deals with the well-studied structural property treewidth, which measures the closeness of an instance to being a tree. In fact, many problems become solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing one problem to another. This new reduction type is the basis for a long-open lower-bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
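The level-wise ("Apriori-based") search the paper builds on can be sketched as follows. This is a didactic illustration of the general scheme (unique sets are reported and pruned, only non-unique sets spawn larger candidates), not the HCA algorithm itself; all names are hypothetical.

```python
# Illustrative Apriori-style sketch of unique column combination (UCC)
# discovery, not the paper's HCA algorithm. A column set is unique iff its
# projected tuples contain no duplicates; supersets of unique sets are pruned.

def is_unique(rows, cols):
    """A column combination is unique iff its projection has no duplicate tuples."""
    projected = [tuple(row[c] for c in cols) for row in rows]
    return len(set(projected)) == len(projected)

def minimal_uccs(rows, num_cols):
    """Level-wise search for minimal unique column combinations."""
    found = []
    candidates = [(c,) for c in range(num_cols)]
    while candidates:
        non_unique = []
        for cols in candidates:
            (found if is_unique(rows, cols) else non_unique).append(cols)
        pruned = set(non_unique)
        nxt = set()
        for a in non_unique:
            for b in non_unique:
                u = tuple(sorted(set(a) | set(b)))
                # Apriori pruning: keep u only if every one-smaller subset is a
                # known non-unique set (so u contains no already-found UCC)
                if len(u) == len(a) + 1 and all(
                        tuple(x for x in u if x != y) in pruned for y in u):
                    nxt.add(u)
        candidates = sorted(nxt)
    return found
```

Because unique sets never enter the next level, the output contains only minimal UCCs.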
For the present study, "Qualitative Untersuchung zur Akzeptanz des neuen Personalausweises und Erarbeitung von Vorschlägen zur Verbesserung der Usability der Software AusweisApp" (a qualitative study on the acceptance of the new German ID card and the development of proposals to improve the usability of the AusweisApp software), an innovation team used the design thinking method to work on the challenge "How can we make the AusweisApp intuitive and understandable for users?" First, the acceptance of the new ID card was tested. Citizens were asked about their knowledge and expectations regarding the new ID card, as well as about their general use of the new ID card, their use of the online identification function, and the usability of the AusweisApp. In addition, users were observed while using the current AusweisApp and interviewed afterwards, which allowed deep insight into their needs. The results of the qualitative study were used to develop improvement proposals for the AusweisApp that match citizens' needs. The proposals for optimizing the AusweisApp were implemented as prototypes and tested with potential users. The tests showed that the newly developed features make it considerably easier for citizens to use the online identification function. Overall, the degree of acceptance of the new ID card diverges strongly: the respondents' attitudes ranged from scepticism to approval; the new ID card is a topic that polarizes citizens. The user tests revealed numerous potential improvements to the existing service design, both around the new ID card itself and in connection with the software used. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate and validate its proposals. The elaborated proposals relate to the AusweisApp.
The new features essentially comprise: · direct access to the service providers, · extensive help (tooltips, FAQ, wizard, video), · a history function, · an example service that makes the online identification function tangible. In particular, the new version of the AusweisApp is meant to offer users concrete fields of application for their new ID card and thus added value. Developing further features of the AusweisApp can help the new ID card realize its full potential.
Boolean constraint solving technology has made tremendous progress over the last decade, leading to industrial-strength solvers, for example, in the areas of answer set programming (ASP), the constraint satisfaction problem (CSP), propositional satisfiability (SAT) and satisfiability of quantified Boolean formulas (QBF). However, in all these areas, there exist multiple solving strategies that work well on different applications; no strategy dominates all other strategies. Therefore, no individual solver shows robust state-of-the-art performance in all kinds of applications. Additionally, the question arises how to choose a well-performing solving strategy for a given application; this is a challenging question even for solver and domain experts. One way to address this issue is the use of portfolio solvers, that is, a set of different solvers or solver configurations. We present three new automatic portfolio methods: (i) automatic construction of parallel portfolio solvers (ACPP) via algorithm configuration, (ii) solving the NP-hard problem of finding effective algorithm schedules with answer set programming (aspeed), and (iii) a flexible algorithm selection framework (claspfolio2) allowing for fair comparison of different selection approaches. All three methods show improved performance and robustness in comparison to individual solvers on heterogeneous instance sets from many different applications. Since parallel solvers are important to effectively solve hard problems on parallel computation systems (e.g., multi-core processors), we extend all three approaches to be effectively applicable in parallel settings. We conducted extensive experimental studies on different instance sets from ASP, CSP, MAXSAT, operations research (OR), SAT and QBF, which indicate an improvement in the state of the art in solving heterogeneous instance sets. Last but not least, from our experimental studies, we deduce practical advice regarding the question of when to apply which of our methods.
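The scheduling subproblem addressed by aspeed can be illustrated with a brute-force toy version: split a global time budget into per-solver time slices so that as many instances as possible are solved. aspeed tackles this NP-hard problem with ASP; the exhaustive search below is only a didactic sketch with hypothetical names and discretized time slices.

```python
from itertools import product

# Toy sketch of static algorithm scheduling (in the spirit of aspeed, which
# solves this NP-hard problem with ASP; here a brute-force search): allocate a
# time budget among solvers to maximize the number of solved instances.

def best_schedule(runtimes, budget, step=1):
    """runtimes: {solver: {instance: runtime}}. Returns (allocation, #solved),
    where allocation maps each solver to its time slice."""
    solvers = list(runtimes)
    instances = {i for per_solver in runtimes.values() for i in per_solver}
    best = ({}, -1)
    slices = range(0, budget + 1, step)
    for alloc in product(slices, repeat=len(solvers)):
        if sum(alloc) > budget:
            continue
        solved = sum(1 for i in instances
                     if any(runtimes[s].get(i, float("inf")) <= t
                            for s, t in zip(solvers, alloc)))
        if solved > best[1]:
            best = (dict(zip(solvers, alloc)), solved)
    return best
```

Even on tiny inputs the benefit of a schedule is visible: two complementary solvers sharing a budget can solve instances neither could solve alone within the full budget.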
Algorithmic management
(2022)
Version Control Systems (VCS) allow developers to manage changes to software artifacts. Developers interact with VCSs through a variety of client programs, such as graphical front-ends or command line tools. It is desirable to use the same version control client program against different VCSs. Unfortunately, no established abstraction over VCS concepts exists. Instead, VCS client programs implement ad-hoc solutions to support interaction with multiple VCSs. This thesis presents Pur, an abstraction over version control concepts that allows building rich client programs that can interact with multiple VCSs. We provide an implementation of this abstraction and validate it by implementing a client application.
1 Introduction
1.1 Project formulation
1.2 Our contribution
2 Pedagogical Aspect
2.1 Modern teaching
2.2 Our Contribution
2.2.1 Autonomous and exploratory learning
2.2.2 Human machine interaction
2.2.3 Short multimedia clips
3 Ontology Aspect
3.1 Ontology driven expert systems
3.2 Our contribution
3.2.1 Ontology language
3.2.2 Concept Taxonomy
3.2.3 Knowledge base annotation
3.2.4 Description Logics
4 Natural language approach
4.1 Natural language processing in computer science
4.2 Our contribution
4.2.1 Explored strategies
4.2.2 Word equivalence
4.2.3 Semantic interpretation
4.2.4 Various problems
5 Information Retrieval Aspect
5.1 Modern information retrieval
5.2 Our contribution
5.2.1 Semantic query generation
5.2.2 Semantic relatedness
6 Implementation
6.1 Prototypes
6.2 Semantic layer architecture
6.3 Development
7 Experiments
7.1 Description of the experiments
7.2 General characteristics of the three sessions, instructions and procedure
7.3 First Session
7.4 Second Session
7.5 Third Session
7.6 Discussion and conclusion
8 Conclusion and future work
8.1 Conclusion
8.2 Open questions
A Description Logics
B Probabilistic context-free grammars
Although educational content in electronic form is increasing dramatically, its usage in an educational environment is poor, mainly because there is too much unreliable, redundant, and irrelevant information. Finding appropriate answers is a rather difficult task that relies on the user to filter the pertinent information from the noise. Turning knowledge bases like the online tele-TASK archive into useful educational resources requires identifying correct, reliable, and "machine-understandable" information, as well as developing simple but efficient search tools with the ability to reason over this information. Our vision is to create an E-Librarian Service, which is able to retrieve multimedia resources from a knowledge base in a more efficient way than by browsing through an index, or by using a simple keyword search. In our E-Librarian Service, the user can enter his question in a very simple and human way; in natural language (NL). Our premise is that more pertinent results would be retrieved if the search engine understood the sense of the user's query. The returned results are then logical consequences of an inference rather than of keyword matchings. Our E-Librarian Service does not return the answer to the user's question, but it retrieves the most pertinent document(s), in which the user finds the answer to his/her question. Among all the documents that have some common information with the user query, our E-Librarian Service identifies the most pertinent match(es), keeping in mind that the user expects an exhaustive answer while preferring a concise answer with only little or no information overhead. Also, our E-Librarian Service always proposes a solution to the user, even if the system concludes that there is no exhaustive answer. Our E-Librarian Service was implemented prototypically in three different educational tools.
A first prototype is CHESt (Computer History Expert System); it has a knowledge base with 300 multimedia clips that cover the main events in computer history. A second prototype is MatES (Mathematics Expert System); it has a knowledge base with 115 clips that cover the topic of fractions in mathematics for secondary school w.r.t. the official school programme. All clips were recorded mainly by pupils. The third and most advanced prototype is the "Lecture Butler's E-Librarian Service"; it has a Web service interface conforming to a service-oriented architecture (SOA), and was developed in the context of the Web-University project at the Hasso-Plattner-Institute (HPI). Two major experiments in an educational environment - at the Lycée Technique Esch/Alzette in Luxembourg - were conducted to test the pertinence and reliability of our E-Librarian Service as a complement to traditional courses. The first experiment (in 2005) was conducted with CHESt in different classes, and covered a single lesson. The second experiment (in 2006) covered a period of 6 weeks of intensive use of MatES in one class. There was no classical mathematics lesson where the teacher gave explanations, but the students had to learn in an autonomous and exploratory way. They had to ask questions to the E-Librarian Service just as they would with a human teacher.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, which want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would highly benefit from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
Helping overcome distance, the use of videoconferencing tools has surged during the pandemic. To shed light on the consequences of videoconferencing at work, this study takes a granular look at the implications of the self-view feature for meeting outcomes. Building on self-awareness research and self-regulation theory, we argue that by heightening the state of self-awareness, self-view engagement depletes participants’ mental resources and thereby can undermine online meeting outcomes. Evaluation of our theoretical model on a sample of 179 employees reveals a nuanced picture. Self-view engagement while speaking and while listening is positively associated with self-awareness, which, in turn, is negatively associated with satisfaction with meeting process, perceived productivity, and meeting enjoyment. The criticality of the communication role is put forward: looking at self while listening to other attendees has a negative direct and indirect effect on meeting outcomes; however, looking at self while speaking produces equivocal effects.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students studying computer science who are willing to gain a bachelor degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to reduce the "dropout"/shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey that has been made at the computer science department at the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are accepted to university studies via a centrally controlled admission procedure depending mainly on their final score at school. This leads to a high percentage of students studying subjects they do not want. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. And according to statistics over previous years, women manage to succeed in their study and often graduate on top of their class. We finish with a comparison of attitudes between the freshman students of two different cultures and two different university enrolment procedures (University of Baghdad, in Iraq, and the University of Potsdam, in Germany) both with opposite gender majority.
The second step of investigation took place at the department of computer science at the University of Potsdam in Germany and analyzes the learning styles of students studying the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between the students of those study fields, who usually take some joint courses, is important in order to know which changes to the teaching methods are necessary to address these different students. It was a two-stage study using two questionnaires; the main one is based on the Index of Learning Styles Questionnaire of B. A. Solomon and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in the preferences of learning style between male and female students of the different study fields, as well as differences between students with the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely into the difficulties, issues, apprehensions and expectations of freshman students studying computer science. The study took place at the computer science department at the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues that they are facing in their study that may lead them to consider dropping out, changing the field of study, or changing the university. The research continued with the same sample of students (with business informatics students being the majority) through more than three semesters. Difficulties and issues during the study were documented, as well as students' attitudes, apprehensions, and expectations. Some of the professors' and lecturers' opinions and solutions to some students' problems were also documented. Many participants had apprehensions and difficulties, especially towards informatics subjects. Some business informatics participants began to think of changing the university, in particular when they reached their third semester; others thought about changing their field of study. Until the end of this research, most of the participants continued in their studies (the study they had started with or the new study they had changed to) without leaving the higher education system.
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that augment the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system:
• Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate the navigation.
• Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models.
• Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception.
The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
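Per-point metrics such as local density, verticality, and planarity are commonly derived from the eigenvalues of each point's neighborhood covariance matrix. A minimal sketch of this standard eigenvalue-based approach, assuming the widely used feature definitions rather than the thesis' exact formulas (the function and parameter names are hypothetical):

```python
# Sketch: eigenvalue-based geometric metrics for a 3D point cloud.
# Standard definitions assumed: planarity = (l2 - l3) / l1 for eigenvalues
# l1 >= l2 >= l3, verticality = 1 - |n_z| for the estimated surface normal.
import numpy as np
from scipy.spatial import cKDTree

def local_metrics(points, k=16):
    """Return (density, planarity, verticality) per point from its k-neighborhood."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k)
    metrics = []
    for i in range(len(points)):
        nbrs = points[idx[i]]
        cov = np.cov(nbrs.T)
        evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        l3, l2, l1 = evals                      # so l1 >= l2 >= l3
        normal = evecs[:, 0]                    # eigenvector of smallest eigenvalue
        planarity = (l2 - l3) / l1 if l1 > 0 else 0.0
        verticality = 1.0 - abs(normal[2])      # ~1 for walls, ~0 for ground
        radius = dists[i, -1]                   # distance to k-th neighbor
        density = k / ((4.0 / 3.0) * np.pi * radius ** 3)  # points per unit volume
        metrics.append((density, planarity, verticality))
    return np.asarray(metrics)
```

Such per-point features can then serve as input to the geometric analysis or as training features for the machine learning classifiers described above.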
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on Gaussian process regression, we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach. Author summary: Amoeboid motion is a crawling-like cell migration that plays a key role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points representing the cell membrane for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question arises of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane, providing a fully automated way of extracting the properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
Modern biological analysis techniques supply scientists with various forms of data. One category of such data are the so-called "expression data". These data indicate the quantities of biochemical compounds present in tissue samples. Today, expression data can be generated at high speed, which in turn leads to amounts of data no longer analysable by classical statistical techniques. Systems biology is the new field that focuses on modelling this information. At present, various methods are used for this purpose. One superordinate class of these methods is machine learning. Methods of this kind had, until recently, predominantly been used for classification and prediction tasks, which neglected a powerful secondary benefit: the ability to induce interpretable models. Obtaining such models from data has become a key issue within systems biology. Numerous approaches have been proposed and intensively discussed. This thesis focuses on the examination and exploitation of one basic technique: decision trees. The concept of comparing sets of decision trees is developed. This method offers the possibility of identifying significant thresholds in continuous or discrete valued attributes through their corresponding set of decision trees. Finding significant thresholds in attributes is a means of identifying states in living organisms, and knowing about states is an invaluable clue to the understanding of dynamic processes in organisms. Applied to metabolite concentration data, the proposed method was able to identify states which were not found with conventional techniques for threshold extraction. A second approach exploits the structure of sets of decision trees for the discovery of combinatorial dependencies between attributes. Previous work on this issue has focused either on expensive computational methods or on the interpretation of single decision trees, a very limited exploitation of the data. This has led to incomplete or unstable results.
That is why a new method is developed that uses sets of decision trees to overcome these limitations. Both introduced methods are available as software tools that can be applied consecutively or separately. In that way, they form a package of analytical tools that usefully supplements existing methods. By means of these tools, the newly introduced methods were able to confirm existing knowledge and to suggest interesting new relationships between metabolites.
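The core idea of identifying significant attribute thresholds from a set of decision trees can be sketched by training trees on bootstrap samples and collecting the split thresholds they place on a given attribute; thresholds that recur across the set indicate candidate state boundaries. This is a minimal illustration assuming scikit-learn, not the thesis' actual software; the function name and parameters are hypothetical:

```python
# Sketch: collect split thresholds for one feature from a set of
# bootstrap-trained decision trees; recurring thresholds hint at
# significant states of the underlying attribute.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def split_thresholds(X, y, feature, n_trees=25, seed=0):
    rng = np.random.default_rng(seed)
    collected = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))        # bootstrap sample
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
        t = tree.tree_
        mask = t.feature == feature                  # internal nodes on this feature
        collected.extend(t.threshold[mask])
    return np.asarray(collected)
```

Clustering or histogramming the collected values would then separate stable, recurring thresholds from incidental splits.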
Driven by the ever-growing flood of digital information, more and more applications rely on inexpensive cloud storage services. The number of providers offering such services has increased considerably in recent years. To find the right provider for an application, various criteria have to be weighed individually. This study presents and compares a selection of providers of established basic storage services. For the comparison, criteria are extracted that apply to each of the examined providers and thus allow an assessment that is as objective as possible. These include, among others, costs, legal aspects, security, performance, and the provided interfaces. The presented criteria can be used to evaluate cloud storage providers with respect to a concrete use case.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed by using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head has the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. We succeeded either in improving the bounds or in producing the same bounds for many combinations of problem instances and formulations, compared with the previous best-known bounds.
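The described encoding scheme can be illustrated with two ASP rules in the style the abstract outlines: a hard constraint as an integrity constraint and a soft constraint with a penalty(S, V, C) head. This is a sketch only; the predicate names (assigned/4, teaches/2, students/2, capacity/2) are hypothetical and not taken from the paper's encoding:

```
% Sketch, hypothetical predicates: assigned(C,D,P,R) places course C
% in period (D,P) of room R.

% Hard constraint as an integrity constraint: a teacher never gives
% two lectures in the same period.
:- teaches(T,C1), teaches(T,C2), C1 < C2,
   assigned(C1,D,P,_), assigned(C2,D,P,_).

% Soft constraint in the penalty(S,V,C) scheme: the violation V and
% cost C of exceeding a room's capacity are derived in the body.
penalty(room_capacity, v(C,D,P), N - Cap) :-
    assigned(C,D,P,R), students(C,N), capacity(R,Cap), N > Cap.
```

Summing the cost arguments of all derived penalty/3 atoms then yields the objective value to be minimized.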
Answer Set Programming enjoys increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al., Commun ACM 54(12):92–103, 2011] is seeing a rapid proliferation in academia and industry due to its easy and flexible way of modeling and solving knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. These are represented as answer sets, each being a set of atoms that represents a solution. The declarative approach of ASP allows a user to concentrate on a problem's specification rather than on the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques, since complex problems can be expressed in a succinct and elaboration-tolerant way. This is eased by the tuning of ASP's modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al., AI Mag 37(3):53–68, 2016; Falkner et al., Industrial applications of answer set programming, Künstliche Intelligenz, 2018].