004 Data Processing; Computer Science
Institute: Hasso-Plattner-Institut für Digital Engineering GmbH (120 documents)
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks so that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. Operations research, on the other hand, provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and integrating problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture that extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research compared to the traditionally applied rule-based approach.
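The contrast between rule-based and optimized allocation can be illustrated with a toy assignment problem. This is a minimal sketch, not the paper's framework: the task names, driver names, and cost values are all hypothetical, and the exhaustive search stands in for the (more scalable) operations research techniques the paper integrates.

```python
# Hypothetical sketch: optimized task-resource allocation by exhaustive
# search over one-to-one assignments. Costs (e.g., expected minutes per
# task and resource) are invented for illustration.
from itertools import permutations

costs = {
    "pick_up": {"driver_a": 12, "driver_b": 9,  "driver_c": 15},
    "deliver": {"driver_a": 7,  "driver_b": 11, "driver_c": 8},
    "confirm": {"driver_a": 3,  "driver_b": 4,  "driver_c": 2},
}

def optimal_allocation(costs):
    """Minimize total cost over all one-to-one task -> resource assignments."""
    tasks = list(costs)
    resources = list(next(iter(costs.values())))
    best_total, best_assignment = None, None
    for perm in permutations(resources, len(tasks)):
        total = sum(costs[t][r] for t, r in zip(tasks, perm))
        if best_total is None or total < best_total:
            best_total, best_assignment = total, dict(zip(tasks, perm))
    return best_total, best_assignment

total, assignment = optimal_allocation(costs)
# A role-based rule such as "always use driver_a if qualified" would be
# effective but, in general, more costly than this optimum.
```

For realistic problem sizes, the exhaustive search would be replaced by a dedicated assignment algorithm (e.g., the Hungarian method), which is exactly the kind of problem-specific analytical method the framework is designed to plug in.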
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases, which manage the storage and manipulation of graph data, updates may cause a given database to violate some integrity constraints, likewise requiring graph repair. We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: State-based repair restores consistency independent of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction (ST) trees for encoding if and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
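The general idea of state-based repair can be sketched on a deliberately simple constraint. This is not the paper's AutoGraph-backed, logic-based algorithm; it is only an illustration, under the assumed constraint "every edge connects existing nodes", where deleting the offending edges is a least-changing repair.

```python
# Toy state-based graph repair (illustrative assumption: the integrity
# constraint is "both endpoints of every edge exist as nodes").
def violations(graph):
    """Return the edges that violate the constraint."""
    nodes = graph["nodes"]
    return [e for e in graph["edges"] if e[0] not in nodes or e[1] not in nodes]

def repair(graph):
    """Restore consistency by deleting dangling edges -- for this
    constraint, a minimal (least-changing) repair."""
    bad = set(violations(graph))
    graph["edges"] = [e for e in graph["edges"] if e not in bad]
    return graph

g = {"nodes": {"a", "b"}, "edges": [("a", "b"), ("a", "c")]}  # ("a","c") dangles
repaired = repair(g)
```

A delta-based variant would instead inspect only the update that introduced `("a", "c")` rather than re-checking the whole graph, which is the incremental aspect the abstract describes.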
In an attempt to pave the way for more extensive Computer Science Education (CSE) coverage in K-12, this research developed and preliminarily evaluated a blended-learning Introduction to CS program based on an academic MOOC. Using an academic MOOC that is pedagogically effective and engaging, such a program may provide teachers with disciplinary scaffolds and allow them to focus their attention on enhancing students’ learning experience and nurturing critical 21st-century skills such as self-regulated learning. As we demonstrate, this enabled us to introduce an academic-level course to middle-school students. In this research, we developed the principles and initial version of such a program, targeting ninth-graders in science-track classes who learn CS as part of their standard curriculum. We found that the middle-schoolers who participated in the program achieved academic results on par with undergraduate students taking this MOOC for academic credit. Participating students also developed a more accurate perception of the essence of CS as a scientific discipline. The unplanned school closure due to the COVID-19 pandemic outbreak challenged the research but underlined the advantages of such a MOOC-based blended-learning program over classic pedagogy in times of global or local crises that lead to school closure. While most of the science-track classes seemed to stop learning CS almost entirely, and the end-of-year MoE exam was discarded, the program’s classes moved smoothly to remote learning mode, and students continued to study at a pace similar to that experienced before the school shut down.
Advanced mechatronic systems have to integrate existing technologies from mechanical, electrical, and software engineering. They must be able to adapt their structure and behavior at runtime by reconfiguration to react flexibly to changes in the environment. Therefore, a tight integration of structural and behavioral models of the different domains is required. This integration results in complex reconfigurable hybrid systems, the execution logic of which cannot be addressed directly with existing standard modeling, simulation, and code-generation techniques. In this paper, we present how our component-based approach for reconfigurable mechatronic systems, MechatronicUML, efficiently handles the complex interplay of discrete behavior and continuous behavior in a modular manner. In addition, its extension to even more flexible reconfiguration cases is presented.
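The interplay of discrete and continuous behavior that such hybrid systems exhibit can be sketched with a thermostat-like toy component. This is not MechatronicUML itself; all parameters and mode names are invented: a continuous state evolves by Euler integration, and a discrete reconfiguration switches the active dynamics when a guard condition is met.

```python
# Minimal hybrid-system sketch (hypothetical thermostat component):
# continuous dynamics per mode, discrete mode switches on guards.
def simulate(steps=100, dt=0.1, temp=15.0):
    mode = "heating"
    trace = []
    eps = 1e-9  # tolerance against floating-point accumulation
    for _ in range(steps):
        rate = 1.0 if mode == "heating" else -0.5  # continuous dynamics
        temp += rate * dt                          # Euler integration step
        if mode == "heating" and temp >= 20.0 - eps:   # guard: reconfigure
            mode = "cooling"
        elif mode == "cooling" and temp <= 18.0 + eps:
            mode = "heating"
        trace.append((round(temp, 2), mode))
    return trace

trace = simulate()
```

In a real reconfigurable system, the mode switch would not just change a rate constant but swap in a different component structure; the guard-triggered transition above only illustrates the execution logic in miniature.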
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
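A representative navigational property of the kind the logic can express is "there exists a path from node s to node t". The sketch below checks that property with a plain breadth-first search; it is only an illustration of the property, not of the visual logic or its tableau method.

```python
# Checking a navigational graph property ("a path from s to t exists")
# by breadth-first search over a directed edge list.
from collections import deque

def has_path(edges, s, t):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

edges = [("a", "b"), ("b", "c"), ("d", "a")]
```

The tableau method of the paper proves or refutes such properties deductively for all graphs satisfying a description, whereas this BFS only decides the property for one concrete graph.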
In recent years, the ever-growing amount of documents on the Web, as well as in closed systems for private or business contexts, has led to a considerable increase of valuable textual information about topics, events, and entities. It is a truism that the majority of information (i.e., business-relevant data) is only available in unstructured textual form. The text mining research field comprises various practice areas that share the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is to share relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services is growing continuously and the inherent knowledge is available to be utilized. We show that this knowledge can be used for three different tasks.
First, we demonstrate that when searching for persons with ambiguous names, the information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means to handle persons missing in the UGC source. We show that the proposed approaches outperform traditional algorithms for search result clustering. Second, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm to tackle the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is the linkage of mentions within the documents to their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation shows the added value of using these sources and confirms the appropriateness of leveraging user-generated content to serve different information needs.
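The entity linking task mentioned above can be illustrated with a deliberately crude sketch: link a mention to the knowledge-base entry whose context words overlap most with the mention's context. This is a hypothetical stand-in for the thesis's coherence-based algorithm; the knowledge base, contexts, and similarity measure are all invented for illustration.

```python
# Toy entity linking by Jaccard overlap of context word sets
# (illustrative only; not the thesis's algorithm).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical mini knowledge base: entity -> characteristic context terms.
kb = {
    "Paris_(city)":   {"france", "capital", "seine", "city"},
    "Paris_(person)": {"troy", "helen", "mythology", "prince"},
}

def link(mention_context, kb):
    """Pick the entity whose context overlaps the mention's context most."""
    return max(kb, key=lambda entity: jaccard(mention_context, kb[entity]))

entity = link({"helen", "troy", "war"}, kb)
```

A real linker additionally enforces coherence across all mentions in a document (linked entities should be mutually related), which is the efficiency focus described in the abstract.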
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without an intermediate representation.
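The point density reduction mentioned above can be illustrated with a simple voxel-grid scheme that keeps one representative point per cell. This is a generic sketch under assumed parameters, not the thesis's actual implementation:

```python
from collections import defaultdict

def voxel_downsample(points, cell):
    """Reduce point density by keeping one representative (the centroid)
    per cubic cell of edge length `cell`."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        cells[key].append((x, y, z))
    # centroid of all points that fell into the same cell
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in cells.values()]

# two nearby points collapse into one representative; the distant point survives
cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud, cell=1.0)
```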
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Blockchain
(2018)
The term blockchain has recently become a buzzword, but only a few know what exactly lies behind this approach. According to a survey issued in the first quarter of 2017, only 35 percent of representatives of German medium-sized enterprises know the term. Nevertheless, blockchain technology is of great interest to the mass media because of its rapid development and its global capture of different markets.
For example, many see blockchain technology either as an all-purpose weapon, which only a few have access to, or as a hacker technology for secret deals in the darknet. The innovation of blockchain technology lies in its successful combination of already existing approaches, such as decentralized networks, cryptography, and consensus models. This innovative concept makes it possible to exchange values in a decentralized system. At the same time, there is no requirement for trust between its nodes (e.g., users).
With this study the Hasso Plattner Institute would like to help readers form their own opinion about blockchain technology, and to distinguish between truly innovative properties and hype.
The authors of the present study analyze the positive and negative properties of the blockchain architecture and suggest possible solutions that can contribute to the efficient use of the technology. We recommend that every company define a clear target for the intended application, achievable at a reasonable cost-benefit ratio, before deciding on this technology. Both the possibilities and the limitations of blockchain technology need to be considered. The relevant steps that must be taken in this respect are summarized for the reader in this study.
Furthermore, this study elaborates on pressing problems such as the scalability of the blockchain, the choice of an appropriate consensus algorithm, and security, including various types of possible attacks and their countermeasures. New blockchains, for example, run the risk of reduced security, as changes to existing technology can introduce security gaps and failures.
After discussing the innovative properties and problems of blockchain technology, its implementation is discussed. Numerous implementation options are available to companies interested in adopting a blockchain. These applications either build on their own blockchain or use existing and widespread blockchain systems. Various consortia and projects offer "blockchain as a service" and help other companies to develop, test, and deploy their own applications.
This study gives a detailed overview of relevant applications and projects in the field of blockchain technology. As this technology is still relatively young and developing fast, it lacks uniform standards that would allow different systems to cooperate and to which all developers could adhere. Currently, developers orient themselves toward the Bitcoin, Ethereum, and Hyperledger systems, which serve as the basis for many other blockchain applications.
The goal is to give readers a clear and comprehensive overview of blockchain technology and its capabilities.
Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize the scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions. In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options on a fine-grained level. We propose different linear programming (LP) models addressing cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization for potential workload scenarios. Further, we demonstrate on a real-world dataset that our models make it possible to significantly reduce the memory footprint at equal performance, or to increase performance at equal memory size, compared to existing tuning heuristics.
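To convey the flavor of the optimization problem, the following toy sketch picks one tuning option per partition under a memory budget by brute-force enumeration. The paper's actual models are linear programs over compression, sorting, indexing, and tiering decisions; all names and numbers below are invented:

```python
from itertools import product

# Hypothetical per-chunk tuning options: (name, memory footprint, scan cost).
options = {
    "chunk_a": [("uncompressed", 8, 1.0), ("dict-encoded", 3, 1.4)],
    "chunk_b": [("uncompressed", 6, 1.0), ("dict+index", 4, 0.6)],
}

def optimize(options, memory_budget):
    """Enumerate all joint configurations and keep the cheapest one
    that fits the memory budget (an LP solver replaces this at scale)."""
    best = None
    for combo in product(*options.values()):
        mem = sum(o[1] for o in combo)
        cost = sum(o[2] for o in combo)
        if mem <= memory_budget and (best is None or cost < best[0]):
            best = (cost, mem, [o[0] for o in combo])
    return best

cost, mem, config = optimize(options, memory_budget=9)
```

Under a tight budget, compressing one chunk frees memory that pays for an index on another, which is exactly the kind of joint trade-off the LP models capture.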
CoFeeMOOC-v.2
(2021)
Providing adequate support to MOOC participants is often challenging due to the massiveness of the learner population and the asynchronous communication among peers and MOOC practitioners. This workshop aims to discuss common learner problems reported in the literature and to reflect on designing adequate feedback interventions with the use of learning data. Our aim is threefold: a) to pinpoint MOOC aspects that impact the planning of feedback, b) to explore the use of learning data in designing feedback strategies, and c) to propose design guidelines for developing and delivering scaffolding interventions for personalized feedback in MOOCs. To do so, we will carry out hands-on activities that involve participants in interpreting learning data and using them to design adaptive feedback. This workshop appeals to researchers, practitioners, and MOOC stakeholders who aim to provide contextualized scaffolding. We envision that this workshop will provide insights for bridging the gap between pedagogical theory and practice when it comes to feedback interventions in MOOCs.
The analysis of behavioral models is of high importance for cyber-physical systems, as these systems often encompass complex behavior based on, e.g., concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of probabilistic timed graph transformation systems (PTGTSs) is a suitable choice when the models representing states of the system can be understood as graphs and timed and probabilistic behavior is important. However, model checking PTGTSs is limited to systems with rather small state spaces.
We present an approach for the analysis of large-scale systems modeled as probabilistic timed graph transformation systems by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large-scale system, we verify that results obtained for its fragments serve as overapproximations for the corresponding results of the large-scale system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large-scale system under analysis. We consider a running example in which we model shuttles driving on tracks of a large-scale topology and for which we verify that shuttles never collide and are unlikely to execute emergency brakes. In our evaluation, we apply an implementation of our approach to the running example.
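A heavily simplified illustration of the fragment-based idea: check each fragment's state space separately and report any fragment in which a bad state is reachable. This sketch ignores timing and probabilities entirely and uses invented names; it only shows the shape of the decomposition argument:

```python
def reachable(transitions, start):
    """States reachable from `start` under a fragment's transition relation."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        stack.extend(transitions.get(state, []))
    return seen

def check_fragments(fragments, bad):
    """Because each fragment overapproximates the behavior of the full
    system restricted to it, a bad state reachable in some fragment is a
    potential safety violation of the full system."""
    return [name for name, (transitions, start) in fragments.items()
            if reachable(transitions, start) & bad]

fragments = {
    "track_section_1": ({"s0": ["s1"], "s1": ["s0"]}, "s0"),
    "track_section_2": ({"t0": ["t1"], "t1": ["collision"]}, "t0"),
}
violations = check_fragments(fragments, bad={"collision"})
```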
Comprior
(2021)
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied to gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and offers a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e., uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated against each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
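One common ingredient of such a robustness assessment is the stability of the selected feature sets across resampling runs, e.g., via mean pairwise Jaccard similarity. The sketch below is a generic illustration of that measure, not Comprior's actual API, and the gene names are invented:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def selection_stability(runs):
    """Mean pairwise Jaccard similarity of the feature sets selected on
    different subsamples -- a common robustness measure for feature selection."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# feature sets selected in three resampling runs (gene names illustrative)
runs = [{"TP53", "BRCA1", "EGFR"},
        {"TP53", "BRCA1", "MYC"},
        {"TP53", "EGFR", "MYC"}]
stability = selection_stability(runs)
```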
Remote sensing technologies, such as airborne, mobile, or terrestrial laser scanning, and photogrammetric techniques are fundamental approaches for the efficient, automatic creation of digital representations of spatial environments. For example, they allow us to generate 3D point clouds of landscapes, cities, infrastructure networks, and sites. As an essential and universal category of geodata, 3D point clouds are used and processed by a growing number of applications, services, and systems, for example in the domains of urban planning, landscape architecture, environmental monitoring, disaster management, and virtual geographic environments, as well as for spatial analysis and simulation.
While the acquisition processes for 3D point clouds become more and more reliable and widely used, applications and systems are faced with ever larger amounts of 3D point cloud data. In addition, 3D point clouds, by their very nature, are raw data, i.e., they do not contain any structural or semantic information. Many processing strategies common to GIS, such as deriving polygon-based 3D models, generally do not scale to billions of points. GIS typically reduce the data density and precision of 3D point clouds to cope with the sheer amount of data, but this results in a significant loss of valuable information.
This thesis proposes concepts and techniques designed to efficiently store and process massive 3D point clouds. To this end, object-class segmentation approaches are presented to attribute semantics to 3D point clouds, used, for example, to identify building, vegetation, and ground structures and, thus, to enable processing, analyzing, and visualizing 3D point clouds in a more effective and efficient way. Similarly, change detection and updating strategies for 3D point clouds are introduced that allow for reducing storage requirements and incrementally updating 3D point cloud databases. In addition, this thesis presents out-of-core, real-time rendering techniques used to interactively explore 3D point clouds and related analysis results. All techniques have been implemented based on specialized spatial data structures, out-of-core algorithms, and GPU-based processing schemas to cope with massive 3D point clouds having billions of points.
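The change detection idea can be sketched with a simple grid-occupancy comparison between two acquisition epochs, so that only changed regions need to be updated in the database. This is a generic illustration with invented data, not the thesis's actual data structures:

```python
def occupancy(points, cell=1.0):
    """Discretize a 3D point cloud into the set of occupied grid cells."""
    return {(int(x // cell), int(y // cell), int(z // cell))
            for x, y, z in points}

def changed_cells(old_cloud, new_cloud, cell=1.0):
    """Cells occupied in exactly one epoch indicate added or removed
    geometry; only these regions require an incremental database update."""
    old, new = occupancy(old_cloud, cell), occupancy(new_cloud, cell)
    return {"added": new - old, "removed": old - new}

epoch_2019 = [(0.2, 0.2, 0.0), (4.5, 4.5, 0.0)]  # includes a structure near (4, 4)
epoch_2020 = [(0.3, 0.1, 0.0)]                   # that structure is gone
diff = changed_cells(epoch_2019, epoch_2020)
```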
All proposed techniques have been evaluated and demonstrated their applicability to the field of geospatial applications and systems, in particular for tasks such as classification, processing, and visualization. Case studies for 3D point clouds of entire cities with up to 80 billion points show that the presented approaches open up new ways to manage and apply large-scale, dense, and time-variant 3D point clouds as required by a rapidly growing number of applications and systems.
Business processes constantly generate, manipulate, and consume data that are managed by organizational databases. Despite being central to process modeling and execution, the link between processes and data is often handled by developers only when the process is implemented, thus leaving the connection unexplored during conceptual design. In this paper, we introduce, formalize, and evaluate a novel conceptual view that bridges the gap between process and data models, and we show the kinds of interesting insights that can be derived from this novel proposal.
Confidence Counts
(2021)
The increasing reliance on online learning in higher education has been further expedited by the ongoing Covid-19 pandemic. Students need to be supported as they adapt to this new learning environment. Research has established that learners with positive online learning self-efficacy beliefs are more likely to persevere and achieve their higher education goals when learning online. In this paper, we explore how MOOC design can contribute to the four sources of self-efficacy beliefs posited by Bandura [4]. Specifically, we explore, drawing on learner reflections, whether design elements of the MOOC The Digital Edge: Essentials for the Online Learner provided participants with the necessary mastery experiences, vicarious experiences, verbal persuasion, and affective regulation opportunities to evaluate and develop their online learning self-efficacy beliefs. Findings from a content analysis of discussion forum posts show that learners referenced three of the four information sources when reflecting on their experience of the MOOC. This paper illustrates the potential of MOOCs as a pedagogical tool for enhancing online learning self-efficacy among students.
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization, and intuitive filtering of the latest viral sequences are powerful tools for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and a consensus, and finally presents the results in an interactive app, making access and reporting simple, flexible, and fast.
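The consensus and variant inference step can be sketched on alignment columns as follows. This is a minimal illustration of per-column majority voting against a reference, not CovRadar's actual pipeline, and the sequences are invented:

```python
from collections import Counter

def consensus_and_variants(alignment, reference):
    """Per-column consensus over aligned sequences, plus the positions
    where the consensus deviates from the reference sequence."""
    consensus, variants = [], []
    for i, column in enumerate(zip(*alignment)):
        base = Counter(column).most_common(1)[0][0]  # majority base
        consensus.append(base)
        if base != reference[i]:
            variants.append((i + 1, reference[i], base))  # 1-based position
    return "".join(consensus), variants

# three aligned sequences of equal length, compared to a reference
aligned = ["ATGGAT", "ATGGTT", "ATGGTT"]
reference = "ATGGAT"
cons, variants = consensus_and_variants(aligned, reference)
```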
Social networking sites (SNS) are a rich source of latent information about individual characteristics. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, commercial brands have made a gradual appearance on social media platforms for advertisement, customer support, and public relations purposes, and by now this presence has become a necessity across all branches. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. Predictive evaluation on brands' accounts reveals that the Facebook platform provides a slight advantage over Twitter in offering more self-disclosure for users to express their emotions, especially their demographic and psychological traits. The results also confirm the wider perspective that the same social media account carries quite similar and comparable personality scores across different social media platforms. To evaluate our prediction results on actual brands' accounts, we crawled the Facebook API and the Twitter API for 100k posts from the most valuable brands' pages in the USA; we visualize exemplary comparison results and present suggestions for future directions.
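Feature extraction in this style can be sketched with lexicon-based category counts, the kind of input a trait-prediction model is trained on. The word lists below are invented stand-ins; the real LIWC dictionaries are proprietary and far larger:

```python
# Hypothetical word lists in the spirit of LIWC categories (illustrative only).
LEXICON = {
    "positive_emotion": {"love", "great", "happy", "excited"},
    "social": {"friends", "community", "together", "share"},
}

def category_scores(text):
    """Fraction of words in the text that fall into each lexicon category."""
    words = text.lower().split()
    return {cat: sum(w in vocab for w in words) / len(words)
            for cat, vocab in LEXICON.items()}

post = "we love our community and share great moments together"
scores = category_scores(post)
```

A regression or classification model would then map such per-post score vectors to personality trait estimates for the account.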
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic, known from the Self and Squeak/Smalltalk systems.
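A Python analogue may convey the idea of such a declarative, data-driven view specification: each view is a mapping from domain objects to displayable properties, with no imperative glue code. VIVIDE itself builds on Squeak/Smalltalk and Morphic, so every name here is illustrative:

```python
# A view script declares WHAT to show (extract) and WHERE to recurse
# (children); the environment handles rendering and updates.
def view(extract, children=None):
    def render(obj):
        node = extract(obj)
        if children:
            node["children"] = [render(child) for child in children(obj)]
        return node
    return render

# a tiny "code browser" script over dictionary-shaped domain objects
class_browser = view(
    extract=lambda obj: {"label": obj["name"]},
    children=lambda obj: obj.get("methods", []),
)

# re-running the same script on fresh data keeps the tool up to date
model = {"name": "Point", "methods": [{"name": "x"}, {"name": "y"}]}
tree = class_browser(model)
```

Because the script carries no widget-management code, swapping the data source or the rendering target does not require touching the specification.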
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.