Hasso-Plattner-Institut für Digital Engineering GmbH
Document Type
- Article (168)
- Doctoral Thesis (96)
- Other (83)
- Monograph/Edited Volume (39)
- Postprint (22)
- Conference Proceeding (3)
- Part of a Book (1)
- Habilitation Thesis (1)
- Report (1)
Keywords
- MOOC (42)
- digital education (37)
- e-learning (36)
- Digitale Bildung (34)
- online course creation (34)
- online course design (34)
- Kursdesign (33)
- Micro Degree (33)
- Online-Lehre (33)
- Onlinekurs (33)
- Onlinekurs-Produktion (33)
- micro degree (33)
- micro-credential (33)
- online teaching (33)
- machine learning (16)
- maschinelles Lernen (7)
- E-Learning (6)
- deep learning (6)
- Cloud Computing (5)
- Smalltalk (5)
- evaluation (5)
- 3D printing (4)
- BPMN (4)
- Blockchain (4)
- Digitalisierung (4)
- Duplikaterkennung (4)
- MOOCs (4)
- Machine Learning (4)
- Scrum (4)
- blockchain (4)
- business process management (4)
- cloud computing (4)
- cyber-physical systems (4)
- duplicate detection (4)
- fabrication (4)
- image processing (4)
- inertial measurement unit (4)
- innovation (4)
- natural language processing (4)
- openHPI (4)
- probabilistic timed systems (4)
- programming (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- quantitative analysis (4)
- smart contracts (4)
- 3D visualization (3)
- 3D-Visualisierung (3)
- Business process models (3)
- DMN (3)
- Data profiling (3)
- Datenaufbereitung (3)
- Datenqualität (3)
- Design Thinking (3)
- Dynamic pricing (3)
- Forschungsprojekte (3)
- Future SOC Lab (3)
- Geschäftsprozessmanagement (3)
- HPI Schul-Cloud (3)
- Hasso Plattner Institute (3)
- Hasso-Plattner-Institut (3)
- In-Memory Technologie (3)
- Innovation (3)
- Künstliche Intelligenz (3)
- Maschinelles Lernen (3)
- Multicore Architekturen (3)
- Natural language processing (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- Teamwork (3)
- artificial intelligence (3)
- cloud (3)
- computer vision (3)
- course design (3)
- creativity (3)
- data preparation (3)
- data profiling (3)
- digitale Bildung (3)
- digitalization (3)
- entity resolution (3)
- graph transformation systems (3)
- intrusion detection (3)
- künstliche Intelligenz (3)
- multicore architectures (3)
- real-time rendering (3)
- reinforcement learning (3)
- research projects (3)
- tiefes Lernen (3)
- user experience (3)
- user-generated content (3)
- 3D point clouds (2)
- Agile (2)
- Android (2)
- Angriffserkennung (2)
- Anomalieerkennung (2)
- Answer set programming (2)
- Behavioral economics (2)
- Big Data (2)
- Bounded Model Checking (2)
- Clinical decision support (2)
- Cloud-Security (2)
- Data Profiling (2)
- Datenbank (2)
- Datenbanksysteme (2)
- Datenvisualisierung (2)
- Debugging (2)
- Decision models (2)
- Deep Learning (2)
- Dynamic programming (2)
- Economic evaluation (2)
- Electronic health record (2)
- Energy (2)
- Entitätsauflösung (2)
- Entitätsverknüpfung (2)
- Entscheidungsfindung (2)
- European Union (2)
- Europäische Union (2)
- Fabrikation (2)
- Feature selection (2)
- Forschungskolleg (2)
- Gene expression (2)
- Graphentransformationssysteme (2)
- IDS (2)
- Identitätsmanagement (2)
- In-Memory (2)
- In-Memory technology (2)
- Information flow control (2)
- Internet der Dinge (2)
- Internet of Things (2)
- Java (2)
- Kanban (2)
- Klausurtagung (2)
- Learning Analytics (2)
- Lecture Video Archive (2)
- MAC security (2)
- MERLOT (2)
- Massive Open Online Course (MOOC) (2)
- Metadaten (2)
- Metanome (2)
- Modellprüfung (2)
- Oligopoly competition (2)
- OptoGait (2)
- P2P (2)
- Peer Assessment (2)
- Peer-to-Peer ridesharing (2)
- Ph.D. retreat (2)
- Prior knowledge (2)
- Programmieren (2)
- Python (2)
- RGB-D cameras (2)
- Reproducible benchmarking (2)
- SIEM (2)
- Secure Configuration (2)
- Security (2)
- Service-oriented Systems Engineering (2)
- Sicherheit (2)
- Smart micro-grids (2)
- Social Media Analysis (2)
- Squeak (2)
- Treemaps (2)
- Versionsverwaltung (2)
- Visualisierung (2)
- Werkzeuge (2)
- X-ray (2)
- Zebris (2)
- acute kidney injury (2)
- anomaly detection (2)
- artificial intelligence (2)
- artificial intelligence for health (2)
- assignments (2)
- bounded model checking (2)
- business processes (2)
- capstone course (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- clustering (2)
- collaborative work (2)
- comparison of devices (2)
- computer-mediated therapy (2)
- cyber-physische Systeme (2)
- data integration (2)
- data pipeline (2)
- data quality (2)
- database (2)
- database systems (2)
- deduplication (2)
- deferred choice (2)
- design thinking (2)
- digital enlightenment (2)
- digital health (2)
- digital learning platform (2)
- digital sovereignty (2)
- digital transformation (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- disorder recognition (2)
- distributed systems (2)
- eindeutige Spaltenkombination (2)
- end-stage kidney disease (2)
- entity linking (2)
- everyday life (2)
- federated learning (2)
- flexibility (2)
- formal semantics (2)
- formal verification (2)
- formale Verifikation (2)
- framework (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gait analysis (2)
- gait analysis algorithm (2)
- genome-wide association (2)
- geospatial data (2)
- hate speech detection (2)
- human activity recognition (2)
- human motion (2)
- identity management (2)
- inclusion dependencies (2)
- index selection (2)
- interactive media (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- knowledge management (2)
- labeling (2)
- learning path (2)
- lebenslanges Lernen (2)
- lifelong learning (2)
- liveness (2)
- location prediction algorithm (2)
- maschinelles Sehen (2)
- medical documentation (2)
- memory (2)
- metadata (2)
- mobile mapping (2)
- model checking (2)
- model-driven engineering (2)
- modularization (2)
- motion capture (2)
- multiple modalities (2)
- named entity recognition (2)
- nested graph conditions (2)
- neurological disorders (2)
- non-photorealistic rendering (2)
- oracles (2)
- outlier detection (2)
- parameterized complexity (2)
- peer assessment (2)
- pervasive healthcare (2)
- privacy and security (2)
- privacy attack (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- process mining (2)
- programmable matter (2)
- proteomics (2)
- public dataset (2)
- quality assessment (2)
- query optimization (2)
- rapid eGFRcrea decline (2)
- research school (2)
- retrospective (2)
- risk-aware dispatching (2)
- security (2)
- security analytics (2)
- self-paced learning (2)
- self-sovereign identity (2)
- sensor data (2)
- service-oriented systems engineering (2)
- simulation (2)
- social media (2)
- software development (2)
- software engineering (2)
- software process improvement (2)
- speech (2)
- stable matching (2)
- study (2)
- text mining (2)
- thinking styles (2)
- trajectory data (2)
- transparency (2)
- transport network companies (2)
- treemaps (2)
- trustworthiness (2)
- typed attributed graphs (2)
- unique column combinations (2)
- user interaction (2)
- variable geometry truss (2)
- version control (2)
- virtuelle Realität (2)
- visualization (2)
- voice (2)
- web-based rendering (2)
- workflow patterns (2)
- 0-day (1)
- 3D Druck (1)
- 3D Point Clouds (1)
- 3D Visualization (1)
- 3D city model (1)
- 3D geovirtual environment (1)
- 3D geovisualization (1)
- 3D geovisualization system (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D-Druck (1)
- 3D-Einbettung (1)
- 3D-Geovisualisierung (1)
- 3D-Geovisualisierungssystem (1)
- 3D-Punktwolke (1)
- 3D-Punktwolken (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3D-embedding (1)
- 3D-geovirtuelle Umgebung (1)
- ACINQ (1)
- AI Lab (1)
- APT (1)
- ASIC (1)
- Abgleich von Abhängigkeiten (1)
- Abhängigkeit (1)
- Abschlussbericht (1)
- Access Control (1)
- Activity-oriented Optimization (1)
- Adoption effects (1)
- Adressnormalisierung (1)
- Advanced Persistent Threats (1)
- Advertising (1)
- Agile methods (1)
- Agilität (1)
- Aktivitäten (1)
- Algebraic methods (1)
- Algorithms (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Analyse (1)
- Anfrageoptimierung (1)
- Annahme (1)
- Anomaly detection (1)
- Anwendungsbedingungen (1)
- Application Container Security (1)
- Approximation algorithms (1)
- Architecture synthesis (1)
- Architectures (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Artificial Intelligence (1)
- Artistic Image Stylization (1)
- Arzt-Patient-Beziehung (1)
- Attributsicherung (1)
- Audit (1)
- Aufzählungsalgorithmen (1)
- Auktion (1)
- Ausreißererkennung (1)
- Australian securities exchange (1)
- Auswirkungen (1)
- Autoimmune (1)
- Automated parsing (1)
- Automatic domain term extraction (1)
- Automatisierung (1)
- Average-Case Analyse (1)
- BCCC (1)
- BIM (1)
- BRCA1 (1)
- BTC (1)
- Bahnwesen (1)
- Bandwidth (1)
- Batch activity (1)
- Batch processing (1)
- Batch-Aktivität (1)
- Bedrohungsanalyse (1)
- Bedrohungserkennung (1)
- Bedrohungsmodell (1)
- Behavior change (1)
- Behavioral equivalence and refinement (1)
- Benchmark (1)
- Benutzerinteraktion (1)
- Beschriftung (1)
- Betriebssysteme (1)
- Bewegung (1)
- Big Five Model (1)
- Big Five model (1)
- Bildungstechnologien (1)
- Bildverarbeitung (1)
- Binary Classification (1)
- Binäre Klassifikation (1)
- Biomarker-Erkennung (1)
- Bisimulation and simulation (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain Auth (1)
- Blockchain Governance (1)
- Blockchain-Konsortium R3 (1)
- Blockchain-enabled Governance (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Boolean Networks (1)
- Boolean satisfiability (1)
- Boolsche Erfüllbarkeit (1)
- Bounded Backward Model Checking (1)
- Brand Personality (1)
- Building Management (1)
- Business Process Management (1)
- Business process choreographies (1)
- Business process improvement (1)
- Business process modeling (1)
- Business process simulation (1)
- Byzantine Agreement (1)
- CMOS technology (1)
- COVID-19 (1)
- Car safety management (1)
- Cardinality estimation (1)
- Case Management (1)
- Causal inference (1)
- Causal structure learning (1)
- Cheating attacks (1)
- Chernoff-Hoeffding theorem (1)
- Chile (1)
- Circular economy (1)
- CityGML (1)
- Clinical Data (1)
- Clinical predictive modeling (1)
- Cloud (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- CloudRAID (1)
- CloudRAID for Business (1)
- Clusteranalyse (1)
- Clustering (1)
- Co-production (1)
- Collaborative learning (1)
- Colored Coins (1)
- Colored Petri Net (1)
- Commonsense reasoning (1)
- Complexity (1)
- Compound Values (1)
- Computational Photography (1)
- Computergrafik (1)
- Computervision (1)
- Computing (1)
- Conceptual Fit (1)
- Conceptual modeling (1)
- ContextErlang (1)
- Creative (1)
- Critical pair analysis (CPA) (1)
- Cross-platform (1)
- Crowd Resourcing (1)
- Cryptography (1)
- Curex (1)
- Custom Writable Class (1)
- Cyber-Sicherheit (1)
- Cyber-physikalische Systeme (1)
- Cybersecurity (1)
- Cybersecurity e-Learning (1)
- Cybersicherheit (1)
- Cybersicherheit E-Learning (1)
- DAO (1)
- DBMS (1)
- DPoS (1)
- DSA (1)
- Data Analytics (1)
- Data Structure Optimization (1)
- Data breach (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data modeling (1)
- Data partitioning (1)
- Data processing (1)
- Data profiling application (1)
- Data quality (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-driven price anticipation (1)
- Data-driven strategies (1)
- Database (1)
- Dateistruktur (1)
- Daten-Analytik (1)
- Datenabgleich (1)
- Datenanalyse (1)
- Datenbereinigung (1)
- Datenintegration (1)
- Datenmodelle (1)
- Datensatz (1)
- Datensatzverknüpfung (1)
- Datenschutz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datenstromverarbeitung (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenverwaltung (1)
- Deanonymisierung (1)
- Decision support (1)
- Declarative modelling (1)
- Deduplikation (1)
- Deep Kernel Learning (1)
- Deep learning (1)
- Dekubitus (1)
- Delegated Proof-of-Stake (1)
- Denial of sleep (1)
- Denkstile (1)
- Denkweise (1)
- Derecho (1)
- Design (1)
- Design-Forschung (1)
- Dezentrale Applikationen (1)
- Differential Expression Analysis (1)
- Digital Engineering (1)
- Digital Twin (1)
- Digital education (1)
- Digitaler Zwilling (1)
- Digitization (1)
- Dimensionsreduktion (1)
- Direkte Manipulation (1)
- Disadvantaged communities (1)
- Dissertation (1)
- Distance Learning (1)
- Distributed Proof-of-Research (1)
- Distributed snapshot algorithm (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diverse solution enumeration (1)
- Document classification (1)
- Dokument Analyse (1)
- Domain Objects (1)
- Domänenspezifische Modellierung (1)
- Drift (1)
- Dubletten (1)
- Durchsetzbarkeit (1)
- Dynamic pricing competition (1)
- Dynamik (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- E-Wallet (1)
- E-commerce (1)
- E-health (1)
- ECDSA (1)
- EMG (1)
- Echtzeit (1)
- Echtzeit-Rendering (1)
- Education technologies (1)
- Educational Data Mining (1)
- Educational Technology (1)
- Effect measurement (1)
- Einbettungen (1)
- Einbruchserkennung (1)
- Electrical products (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Endpunktsicherheit (1)
- Energy efficiency (1)
- Entdeckung (1)
- Enterprise File Synchronization and Share (1)
- Entropy (1)
- Entscheidungskorrektheit (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entstehung (1)
- Entwicklung digitaler Innovationseinheiten (1)
- Enumeration algorithm (1)
- Ereignisabonnement (1)
- Ereignisnormalisierung (1)
- Erfüllbarkeitsschwellwert (1)
- Eris (1)
- Erkennung von Metadaten (1)
- Erkennung von Quasi-Identifikatoren (1)
- Estimation-of-Distribution-Algorithmen (1)
- Estimation-of-distribution algorithm (1)
- Ether (1)
- Ethereum (1)
- European reference networks (1)
- Evaluation (1)
- Expert knowledge (1)
- Exploration (1)
- Expressive rendering (1)
- Extensibility (1)
- Eye-tracking (1)
- Fabrication (1)
- Facility Management (1)
- Federated Byzantine Agreement (1)
- Feedback control loop (1)
- Fehlende Werte (1)
- Fehlertoleranz (1)
- Fernerkundung (1)
- Fertigung (1)
- Fertigungsunternehmen (1)
- Field study (1)
- First-hitting time (1)
- Flash (1)
- FollowMyVote (1)
- Forecasting (1)
- Foreign key (1)
- Fork (1)
- Formal modelling (1)
- Formal verification of behavior preservation (1)
- Functional dependencies (1)
- Funktionale Abhängigkeiten (1)
- GMDH (1)
- GPU (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- GTEx (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudeinformationsmodellierung (1)
- Gebäudemanagement (1)
- Gen-Expression (1)
- Gene Expression Data Analysis (1)
- General Earth and Planetary Sciences (1)
- General demand function (1)
- Generalized Discrimination Networks (1)
- Geodaten (1)
- Geography, Planning and Development (1)
- Geospatial intelligence (1)
- Gerichtsbarkeit (1)
- German schools (1)
- Geschäftsprozess (1)
- Geschäftsprozess-Choreografien (1)
- Geschäftsprozessarchitekturen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- GraalVM (1)
- Graph Algorithms (1)
- Graph Theory (1)
- Graph logic (1)
- Graph logics (1)
- Graph transformation (1)
- Graph transformation (double pushout approach) (1)
- Graph transformations (1)
- Graph-Mining (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphentheorie (1)
- Graphreparatur (1)
- Graphtransformationen (1)
- Graphtransformationssysteme (1)
- Grid stability (1)
- Gridcoin (1)
- Großformat (1)
- HENSHIN (1)
- HLS (1)
- HTML5 (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasserkennung (1)
- Heart Valve Diseases (1)
- Herzklappenerkrankungen (1)
- Heuristiken (1)
- Hitting Sets (1)
- Holant problems (1)
- Home appliances (1)
- Human Computer Interaction (1)
- Hybrid (1)
- Hyperbolic Geometry (1)
- Hyrise (1)
- Häkeln (1)
- ICT (1)
- IEEE 802.15.4 (1)
- IT Softwareentwicklung (1)
- IT project (1)
- IT systems engineering (1)
- IT-Infrastruktur (1)
- IT-Sicherheit (1)
- IT-infrastructure (1)
- IT-security (1)
- Ideation (1)
- Ideenfindung (1)
- Identity leak (1)
- Identität (1)
- Image Abstraction (1)
- Image Processing (1)
- Imbalanced medical image semantic segmentation (1)
- Immobilien 4.0 (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung (1)
- Implementierung in Organisationen (1)
- Indexauswahl (1)
- Indoor Point Clouds (1)
- Indoor environments (1)
- Indoor-Punktwolken (1)
- Industry 4.0 (1)
- Information Retrieval (1)
- Information system (1)
- Informationsextraktion (1)
- Informationsflüsse (1)
- Informationsraum (1)
- Informationsraumverletzungen (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Institutions (1)
- Integrative Gene Selection (1)
- Intent analysis (1)
- Interacting processes (1)
- Interactive Media (1)
- Interactive control (1)
- Interdisciplinary Teams (1)
- Interessengrad-Techniken (1)
- Internet (1)
- Internet of things (1)
- Interoperability (1)
- Interpretability (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Invariant checking (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- JavaScript (1)
- K-12 (1)
- KI-Labor (1)
- Kalman filtering (1)
- Kardinalitätsschätzung (1)
- Karten (1)
- Kausalität (1)
- Kernelization (1)
- Kette (1)
- Klassifikation (1)
- Klassifizierung (1)
- Klinische Daten (1)
- Knowledge Bases (1)
- Kognitionswissenschaft (1)
- Kollaboration (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Konstruktion von Wissensbasen (1)
- Konstruktion von Wissensgraphen (1)
- Korpusexploration (1)
- Korrektheit (1)
- Kraft (1)
- Kreativität (1)
- Kryptografie (1)
- Kultur (1)
- Kunstanalyse (1)
- LC-MS (1)
- Laserscanning (1)
- Laserschneiden (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Law (1)
- Learning Factory (1)
- Learning experience (1)
- Lebendigkeit (1)
- Lecture Recording (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lernerlebnis (1)
- Level-of-detail visualization (1)
- LiDAR (1)
- Lightning Network (1)
- Link layer security (1)
- Literature review (1)
- Live-Migration (1)
- Live-Programmierung (1)
- Lively Kernel (1)
- Liveness (1)
- Load modeling (1)
- Lock-Time-Parameter (1)
- Log data (1)
- Lossy networks (1)
- Low-processing capable devices (1)
- Lösungsraum (1)
- MOOC Remote Lab (1)
- MQTT (1)
- MS (1)
- Machine-Learning (1)
- Maschinelles Lernen (1)
- Manufacturing (1)
- Maschinen (1)
- Maschinenlernen (1)
- Massive Open Online Courses (1)
- Measurement (1)
- Medien Bias (1)
- Mediumzugriffskontrolle (1)
- Meltdown (1)
- Memory Dumping (1)
- Mensch Computer Interaktion (1)
- Mensch-Maschine Interaktion (1)
- Merkmalsauswahl (1)
- Messung (1)
- Meta-Selbstanpassung (1)
- Metacrate (1)
- Metamaterialien (1)
- Metamaterials (1)
- Micro-grid networks (1)
- Micropayment-Kanäle (1)
- Microservices Security (1)
- Microsoft Azure (1)
- Mindset (1)
- Minimal hitting set (1)
- Minimum spanning tree (1)
- Missing values (1)
- Mixed Reality (1)
- Mobile Learning (1)
- Mobile Mapping (1)
- Mobile applications (1)
- Mobile devices (1)
- Mobile-Mapping (1)
- Mobiles (1)
- Model checking (1)
- Model extraction (1)
- Modelle mit mehreren Versionen (1)
- Modellreparatur (1)
- Monitoring (1)
- Motion Mapping (1)
- Moving Target Defense (1)
- Multi-objective optimization (1)
- Multidisciplinary Teams (1)
- Multiview classification (1)
- Multiziel (1)
- NASDAQ (1)
- NETCONF (1)
- NLP (1)
- NP-hardness (1)
- Nachrichten (1)
- NameID (1)
- Namecoin (1)
- Named-Entity-Erkennung (1)
- Nash equilibrium (1)
- Natural Language Processing (1)
- Natural language analysis (1)
- Navigational logics (1)
- Nephrology (1)
- Network Science (1)
- Network analysis (1)
- Network creation games (1)
- Netzoptimierung (1)
- Netzwerkprotokolle (1)
- Netzwerksicherheit (1)
- Neural Networks (1)
- Neural networks (1)
- New Public Governance (1)
- Non-photorealistic Rendering (1)
- Non-photorealistic rendering (1)
- Nutzer-Engagement (1)
- Nutzerinteraktion (1)
- Objects (1)
- Objekte (1)
- Off-Chain-Transaktionen (1)
- Offline-Enabled (1)
- Onename (1)
- Online Learning Environments (1)
- Online survey (1)
- Online-Gerichte (1)
- Online-Lernen (1)
- Online-Persönlichkeit (1)
- Ontology (1)
- OpenBazaar (1)
- Opinion mining (1)
- Optimal control (1)
- Optimierung (1)
- Oracles (1)
- Ordinances (1)
- Organisationsstudien (1)
- Orphan Block (1)
- PRISM model checker (1)
- PTCTL (1)
- PTM (1)
- Parallel independence (1)
- Parallel processing (1)
- Pareto-Verteilung (1)
- Parking search (1)
- Patents (1)
- Patient (1)
- Patientenermündigung (1)
- Pattern (1)
- Pattern Recognition (1)
- Peer assessment (1)
- Peer-feedback (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Performanz (1)
- Personality Prediction (1)
- PhD thesis (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Point Clouds (1)
- Politik (1)
- Polystore (1)
- Popular matching (1)
- Posenabschätzung (1)
- Power auctioning (1)
- Power consumption characterization (1)
- Power demand (1)
- Predictive Modeling (1)
- Predictive models (1)
- Pricing (1)
- Primary biliary cholangitis (1)
- Primary key (1)
- Primary sclerosing cholangitis (1)
- Prior Knowledge (1)
- Privacy (1)
- Privatsphäre (1)
- Probabilistic timed automata (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Execution (1)
- Process architecture (1)
- Process discovery (1)
- Process landscape (1)
- Process map (1)
- Process mining (1)
- Process model (1)
- Process-related data (1)
- Profilerstellung für Daten (1)
- Programmiererlebnis (1)
- Programmierung (1)
- Programmierwerkzeuge (1)
- Programming course (1)
- Project-based learning (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Proteom (1)
- Proteomics (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozessmodelle (1)
- Prozessmodellierung (1)
- Prädiktive Modellierung (1)
- Psychological Emotions (1)
- Psychotherapie (1)
- Punktwolken (1)
- Quanten-Computing (1)
- Query optimization (1)
- Query-Optimierung (1)
- REST-Interaktionen (1)
- RESTful choreographies (1)
- RESTful interactions (1)
- RL (1)
- RNAseq (1)
- Random process (1)
- Real Estate 4.0 (1)
- Real Walking (1)
- Real-time rendering (1)
- Recht (1)
- Reconfigurable architecture (1)
- Recurrent generative (1)
- Recursion (1)
- Recycling investments (1)
- Refactoring (1)
- Regressionstests (1)
- Rekursion (1)
- Relational model transformation (1)
- Requisit (1)
- Resource Allocation (1)
- Resource Management (1)
- Resource constrained smart micro-grids (1)
- Response strategies (1)
- Reverse Engineering (1)
- Ripple (1)
- Roadmap (1)
- Ruby (1)
- Runtime-monitoring (1)
- SAFE (1)
- SCP (1)
- SET effects (1)
- SHA (1)
- SPV (1)
- SWIRL (1)
- Satz von Chernoff-Hoeffding (1)
- Savanne (1)
- Scalability (1)
- Schema discovery (1)
- Schema-Entdeckung (1)
- Schlafentzugsangriffe (1)
- School (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schule (1)
- Schwachstelle (1)
- Schwierigkeitsgrad (1)
- Scrollytelling (1)
- Secondary Education (1)
- Security analytics (1)
- Selbst-Adaptive Software (1)
- Self-Regulated Learning (1)
- Semantic Enrichment (1)
- Semantic Web (1)
- Semantic enrichment (1)
- Semantische Anreicherung (1)
- Sensor Analytics (1)
- Sensor networks (1)
- Sensor-Analytik (1)
- Sequenzeigenschaften (1)
- Serialisierung (1)
- Service-Oriented Architecture (1)
- Service-Oriented Systems (1)
- Service-Orientierte Systeme (1)
- Service-oriented (1)
- Serviceorientierte Architektur (SOA) (1)
- Servicification (1)
- Siamesische Neuronale Netzwerke (1)
- Sicherheitsanalyse (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skalierbarkeit der Blockchain (1)
- Skriptsprachen (1)
- Slock.it (1)
- Smart Contracts (1)
- Smart Home Education (1)
- Smart cities (1)
- Social (1)
- Social Bots erkennen (1)
- Soft Fork (1)
- Software (1)
- Software engineering (1)
- Software-Evolution (1)
- Software/Hardware Co-Design (1)
- Softwareanalytik (1)
- Softwarevisualisierung (1)
- Solution Space (1)
- Source Code Readability (1)
- Soziale Medien (1)
- Spatial data handling systems (1)
- Spatio-Temporal Data (1)
- Spatio-temporal data analysis (1)
- Spatio-temporal visualization (1)
- Specification (1)
- Spectre (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieltheorie (1)
- Sprachlernen im Limes (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- StackOverflow (1)
- Standard (1)
- Standardisierung (1)
- Stapelverarbeitung (1)
- Static analysis (1)
- Statistical process mining (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stochastic differential games (1)
- Storj (1)
- Storytelling (1)
- Strategic cognition (1)
- Style transfer (1)
- Subject-oriented learning (1)
- Suchtberatung und -therapie (1)
- Supervised Learning (1)
- Supervised deep neural (1)
- Survey (1)
- System design (1)
- Systemmedizin (1)
- Systems Medicine (1)
- TCGA (1)
- Taxonomy (1)
- Team Assessment (1)
- Team based assignment (1)
- Team-based Learning (1)
- Teamarbeit (1)
- Technology mapping (1)
- Teile und Herrsche (1)
- Telemedizin (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Bitfury Group (1)
- The DAO (1)
- Theorie (1)
- Theory (1)
- Threat Models (1)
- Tiefes Lernen (1)
- Time series analysis (1)
- Time series data (1)
- Timed Automata (1)
- Tools (1)
- Topic modeling (1)
- Tragfähigkeit (1)
- Trajectory Data Management (1)
- Trajectory visualization (1)
- Trajektorien (1)
- Transaktion (1)
- Transferlernen (1)
- Transversal hypergraph (1)
- Transversal-Hypergraph (1)
- Tree maintenance (1)
- Tripel-Graph-Grammatiken (1)
- Triple graph grammars (1)
- Two-Way-Peg (1)
- U-Förmiges Lernen (1)
- U-Shaped-Learning (1)
- Ubiquitous (1)
- Ubiquitous business process (1)
- Unbiasedness (1)
- Unified logging system (1)
- Unique column combination (1)
- Unspent Transaction Output (1)
- Unternehmensdateien synchronisieren und teilen (1)
- Unterricht mit digitalen Medien (1)
- User Experience (1)
- Utility-Funktionen (1)
- V2X (1)
- VUCA-World (1)
- Validation (1)
- Verarbeitung natürlicher Sprache (1)
- Verbundwerte (1)
- Verhaltensforschung (1)
- Verhaltensänderung (1)
- Verifikation induktiver Invarianten (1)
- Verlässlichkeit (1)
- Verteilte Systeme (1)
- Vertrauen (1)
- Verträge (1)
- Veränderungsanalyse (1)
- Video annotations (1)
- Virtual Laboratory (1)
- Virtual Machine (1)
- Virtual Machines (1)
- Virtual Reality (1)
- Virtuelle Maschinen (1)
- Virtuelles Labor (1)
- Visual Analytics (1)
- Visual analytics (1)
- Visualisierungskonzept-Exploration (1)
- Visualization (1)
- Vorhersage (1)
- Vorhersagemodelle (1)
- Vorhersagemodellierung (1)
- Vulnerability analysis (1)
- WALA (1)
- W[3]-Completeness (1)
- Walking (1)
- Water Science and Technology (1)
- Watson IoT (1)
- Wearable (1)
- Web-basiertes Rendering (1)
- Werkzeugbau (1)
- Wicked Problems (1)
- Wireless sensor networks (1)
- Wirtschaftsinformatik Projekte (1)
- Wissensbasis (1)
- Wissensgraph (1)
- Wissensgraphen (1)
- Wissensgraphen Verfeinerung (1)
- Wissensmanagement (1)
- Wissenstransfer (1)
- Wissensvalidierung (1)
- Wolke (1)
- Word embedding (1)
- Wüstenbildung (1)
- XIC extraction (1)
- YANG (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zufallsgraphen (1)
- Zugriffskontrolle (1)
- Zustandsverwaltung (1)
- accelerator architectures (1)
- accuracy (1)
- action problems (1)
- active layers (1)
- active touch (1)
- acyclic preferences (1)
- addiction care (1)
- address normalization (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- adversarial network (1)
- agil (1)
- agile (1)
- algorithms (1)
- allocation problem (1)
- altchain (1)
- alternative chain (1)
- analog-to-digital conversion (1)
- analysis (1)
- application conditions (1)
- approximate counting (1)
- apt (1)
- architectural adaptation (1)
- architecture-based software adaptation (1)
- architekturbasierte Softwareanpassung (1)
- archive analysis (1)
- art analysis (1)
- artistic image stylization (1)
- aspect-oriented programming (1)
- asset management (1)
- atomic swap (1)
- attribute assurance (1)
- auction (1)
- automation (1)
- autonomous (1)
- availability (1)
- average-case analysis (1)
- bachelor project (1)
- batch activity (1)
- batch processing (1)
- behavior psychotherapy (1)
- behavioral sciences (1)
- behaviourally correct learning (1)
- benchmark (1)
- benchmarking (1)
- benutzergenerierte Inhalte (1)
- bestärkendes Lernen (1)
- bidirectional payment channels (1)
- bildbasierte Repräsentation (1)
- bildbasiertes Rendering (1)
- bioinformatics (1)
- bioinformatics tool (1)
- biologisches Vorwissen (1)
- biomarker detection (1)
- bitcoins (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- blumix platform (1)
- bounded backward model checking (1)
- brand personality (1)
- breast-cancer (1)
- business process (1)
- business process architectures (1)
- business process choreographies (1)
- business process managament (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- chain (1)
- change detection (1)
- classification zone (1)
- classification zone violations (1)
- clinical exome (1)
- cliquy tree (1)
- cloud monitoring (1)
- code generation (1)
- cognitive patterns (1)
- cognitive science (1)
- collaboration (1)
- collaborative learning (1)
- collective intelligence (1)
- combinational logic (1)
- communication (1)
- comparison (1)
- complex event processing (1)
- complex networks (1)
- complexity (1)
- compositional analysis (1)
- computational design (1)
- computational hardness (1)
- computational mass spectrometry (1)
- computational models (1)
- computational photography (1)
- computer graphics (1)
- computer science (1)
- computer science education (1)
- computer-aided design (1)
- computergestützte Gestaltung (1)
- computervermittelte Therapie (1)
- computing (1)
- confirmation period (1)
- confluence (1)
- conformance checking (1)
- connectivity (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constrained optimization (1)
- content gamification (1)
- contest period (1)
- context (1)
- context groups (1)
- contextual-variability modeling (1)
- continuous integration (1)
- contracts (1)
- convolutional neural networks (1)
- corpus exploration (1)
- courts of justice (1)
- crochet (1)
- cross-chain (1)
- cross-platform (1)
- cultural heritage (1)
- culture (1)
- cumulative culture (1)
- curex (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- data (1)
- data analytics (1)
- data cleaning (1)
- data dependencies (1)
- data management (1)
- data matching (1)
- data mining (1)
- data models (1)
- data privacy (1)
- data processing (1)
- data science (1)
- data set (1)
- data synthesis (1)
- data transfer (1)
- data visualisation (1)
- data visualization (1)
- data wrangling (1)
- data-driven (1)
- database optimization (1)
- database replication (1)
- datengetrieben (1)
- de-anonymisation (1)
- debugging (1)
- decentral identities (1)
- decentralized applications (1)
- decentralized autonomous organization (1)
- decision making (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision soundness (1)
- decision-aware process models (1)
- decoupling cells (1)
- decubitus (1)
- deep Gaussian processes (1)
- deep kernel learning (1)
- degree-of-interest techniques (1)
- demand learning (1)
- demografische Informationen (1)
- demographic information (1)
- denial of sleep (1)
- dependability (1)
- dependency (1)
- desertification (1)
- design (1)
- design behaviour (1)
- design cognition (1)
- design research (1)
- developing countries (1)
- development artifacts (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- die arabische Welt (1)
- difficulty (1)
- difficulty target (1)
- digital fabrication (1)
- digital innovation units (1)
- digital learning (1)
- digital picture archive (1)
- digital unterstützter Unterricht (1)
- digital whiteboard (1)
- digitale Fabrikation (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitale Innovationseinheit (1)
- digitale Transformation (1)
- digitales Bildarchiv (1)
- digitales Lernen (1)
- digitales Whiteboard (1)
- digitalisation (1)
- digitalización (1)
- dimensionality reduction (1)
- direct manipulation (1)
- discovery (1)
- discrete-event model (1)
- diseño (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed performance monitoring (1)
- divide-and-conquer (1)
- doctor-patient relationship (1)
- document analysis (1)
- domain-specific knowledge graphs (1)
- domain-specific modeling (1)
- dominant matching (1)
- domänenspezifisches Wissensgraphen (1)
- doppelter Hashwert (1)
- double hashing (1)
- drahtloses Netzwerk (1)
- drift theory (1)
- dsps (1)
- dynamic AOP (1)
- dynamic causal modeling (1)
- dynamic service adaptation (1)
- dynamic systems (1)
- dynamics (1)
- dynamische Systeme (1)
- e-Commerce (1)
- e-commerce (1)
- eLearning (1)
- edge-weighted networks (1)
- education (1)
- electrical muscle stimulation (1)
- elektrische Muskelstimulation (1)
- embeddings (1)
- emergence (1)
- emotion measurement (1)
- emotional cognitive dynamics (1)
- emotional kognitive Dynamiken (1)
- endpoint security (1)
- enforceability (1)
- entity-component-system (1)
- entscheidungsbewusste Prozessmodelle (1)
- enumeration algorithms (1)
- erzeugende gegnerische Netzwerke (1)
- escalation of commitment (1)
- eskalierendes Commitment (1)
- espbench (1)
- estimation-of-distribution algorithms (1)
- estudios de organización (1)
- event normalization (1)
- event subscription (1)
- evolution of digital innovation units (1)
- evolutionary computation (1)
- exact algorithms (1)
- expectation maximisation algorithm (1)
- experience (1)
- exploration (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- extend (1)
- extension problems (1)
- eye-tracking (1)
- facial mimicry (1)
- fault tolerance (1)
- feature selection (1)
- federated voting (1)
- field study (1)
- file structure (1)
- final report (1)
- font engineering (1)
- font rendering (1)
- force (1)
- forest number (1)
- formal testing (1)
- fortschrittliche Angriffe (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- game dynamics (1)
- game theory (1)
- gameful learning (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression (1)
- gene selection (1)
- generalized discrimination networks (1)
- generative adversarial networks (1)
- genomics (1)
- geographical distribution (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- getypte Attributierte Graphen (1)
- gigantische Netzwerke (1)
- global model management (1)
- globales Modellmanagement (1)
- grammars (1)
- graph (1)
- graph analysis (1)
- graph conditions (1)
- graph constraints (1)
- graph inference (1)
- graph mining (1)
- graph neural networks (1)
- graph repair (1)
- graph theory (1)
- graph transformations (1)
- graph-transformations (1)
- graphical query language (1)
- graphische neuronale Netze (1)
- grobe Protokolle (1)
- group-based behavior adaptation (1)
- haptic feedback (1)
- haptisches Feedback (1)
- hardware (1)
- hashrate (1)
- healthcare (1)
- hepatitis (1)
- heuristics (1)
- hierarchical data (1)
- hierarchische Daten (1)
- higher education (1)
- history-aware runtime models (1)
- hitting sets (1)
- human computer interaction (1)
- human-centered (1)
- human-centered design (1)
- human-computer interaction (1)
- human-scale (1)
- hybrid (1)
- hybrid systems (1)
- hyperbolic geometry (1)
- hyperbolic random graphs (1)
- hyperbolische Geometrie (1)
- hyperbolische Zufallsgraphen (1)
- identity (1)
- image abstraction (1)
- image stylization (1)
- image-based rendering (1)
- image-based representation (1)
- imbalanced learning (1)
- immediacy (1)
- implementation (1)
- implied methods (1)
- in-memory (1)
- in-memory technology (1)
- inclusion dependency (1)
- incremental graph query evaluation (1)
- independency tree (1)
- inductive invariant checking (1)
- industry (1)
- industry 4.0 (1)
- information extraction (1)
- information flows (1)
- information systems projects (1)
- inkrementelle Ausführung von Graphanfragen (1)
- innovación (1)
- insula (1)
- integer linear programming (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interactive visualization (1)
- interaktive Medien (1)
- interaktive Visualisierung (1)
- interdisziplinäre Teams (1)
- internet of things (1)
- internet topology (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intervention (1)
- intransitivity (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invention (1)
- invention mechanism (1)
- inventory management (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant (1)
- k-inductive invariant checking (1)
- k-induktive Invariante (1)
- k-induktive Invariantenprüfung (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- key establishment (1)
- key management (1)
- key revocation (1)
- knowledge base (1)
- knowledge base construction (1)
- knowledge graph (1)
- knowledge graph construction (1)
- knowledge graph refinement (1)
- knowledge graphs (1)
- knowledge transfer (1)
- knowledge validation (1)
- kollaboratives Arbeiten (1)
- kollaboratives Lernen (1)
- komplexe Ereignisverarbeitung (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliche Integration (1)
- kulturelles Erbe (1)
- label-free quantification (1)
- language learning in the limit (1)
- large scale mechanism (1)
- large-scale mechanism (1)
- laser cutting (1)
- laserscanning (1)
- law (1)
- learner engagement (1)
- learning (1)
- learning factories (1)
- learning platform (1)
- learning styles (1)
- lebenszentriert (1)
- ledger assets (1)
- left recursion (1)
- level-replacement systems (1)
- life-centered (1)
- linear programming (1)
- link layer security (1)
- literature review (1)
- live migration (1)
- live programming (1)
- lively groups (1)
- load balancing (1)
- load-bearing (1)
- logic rules (1)
- logische Regeln (1)
- low-duty-cycling (1)
- machine (1)
- machines (1)
- male infertility (1)
- management (1)
- manufacturing (1)
- manufacturing companies (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- massive networks (1)
- massive open online courses (1)
- matching dependencies (1)
- measurement (1)
- media (1)
- media bias (1)
- medical image analysis (1)
- medium access control (1)
- medizinische Bildanalyse (1)
- medizinische Dokumentation (1)
- mehrsprachige Ausführungsumgebungen (1)
- mehrstufiger Angriff (1)
- menschenzentriert (1)
- menschenzentriertes Design (1)
- mental models (1)
- merged mining (1)
- merkle root (1)
- meta self-adaptation (1)
- metaanalysis (1)
- metabolomics (1)
- metacognition (1)
- metacrate (1)
- metadata detection (1)
- metamaterials (1)
- metanome (1)
- methods (1)
- metric temporal graph logic (1)
- metric temporal logic (1)
- metric termporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- micropayment (1)
- micropayment channels (1)
- microstructures (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mixed data (1)
- model (1)
- model repair (1)
- model-driven software engineering (1)
- modellgesteuerte Entwicklung (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modification stoichiometry (1)
- motion and force (1)
- mpmUCC (1)
- multi-objective (1)
- multi-step attack (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multimodal wireless sensor network (1)
- mutations (1)
- named entity mining (1)
- narrative (1)
- natürliche Sprachverarbeitung (1)
- network optimization (1)
- network protocols (1)
- network security (1)
- networks (1)
- news (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nicht-uniforme Verteilung (1)
- non-parametric conditional independence testing (1)
- non-uniform distribution (1)
- nonce (1)
- note-taking (1)
- novelty detection (1)
- null results (1)
- nutzergenerierte Inhalte (1)
- object-oriented languages (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- oligopoly competition (1)
- omics (1)
- oneM2M (1)
- online courts (1)
- online learning (1)
- online personality (1)
- open innovation (1)
- operating systems (1)
- optical character recognition (1)
- optimization (1)
- order dependencies (1)
- organization studies (1)
- oxytocin (1)
- packrat parsing (1)
- parallel and sequential independence (1)
- parallel processing (1)
- parallele Verarbeitung (1)
- parallele und Sequentielle Unabhängigkeit (1)
- parietal operculum (1)
- parsing expression grammars (1)
- partial order resolution (1)
- partial replication (1)
- partielle Replikation (1)
- partition functions (1)
- patent (1)
- patent analysis (1)
- patient empowerment (1)
- peer evaluation (1)
- peer feedback (1)
- peer review (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance (1)
- performance models of virtual machines (1)
- persistent memory (1)
- persistenter Speicher (1)
- personality prediction (1)
- phosphoproteomics (1)
- pmem (1)
- point-based rendering (1)
- politics (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- polynomials (1)
- polystore (1)
- popular matching (1)
- pose estimation (1)
- poset (1)
- post-translational modification (1)
- power law (1)
- predicated generic functions (1)
- predicted spectra (1)
- prediction (1)
- prediction models (1)
- price of anarchy (1)
- prior knowledge (1)
- privacy (1)
- probabilistic machine learning (1)
- probabilistic routing (1)
- probabilistisches maschinelles Lernen (1)
- process execution (1)
- process modeling (1)
- process models (1)
- production networks (1)
- programmierbare Materie (1)
- programming experience (1)
- programming tools (1)
- programs (1)
- progressive rendering (1)
- progressives Rendering (1)
- project based learning (1)
- props (1)
- proteomics graph networks (1)
- prototyping (1)
- pseudo-Boolean optimization (1)
- pseudoboolesche Optimierung (1)
- psychotherapy (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification (1)
- quantum computing (1)
- quasi-identifier discovery (1)
- quorum slices (1)
- radiation hardening (1)
- railways (1)
- rainbow connection (1)
- random graphs (1)
- random k-SAT (1)
- reactive object queries (1)
- real-time (1)
- rechnerunterstütztes Konstruieren (1)
- recommendations (1)
- reconfigurable systems (1)
- record linkage (1)
- regression testing (1)
- rekeying (1)
- reliability (1)
- remote sensing (1)
- reported outcome measures (1)
- reverse engineering (1)
- reward (1)
- risk (1)
- rootstock (1)
- rough logs (1)
- run time analysis (1)
- runtime models (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- satisfiability threshold (1)
- savanna (1)
- scalability (1)
- scalability of blockchain (1)
- scalable (1)
- scarce tokens (1)
- schwach überwachtes maschinelles Lernen (1)
- scripting languages (1)
- scrollytelling (1)
- secure data flow diagrams (DFDsec) (1)
- secure multi-execution (1)
- selbst-souveräne Identitäten (1)
- selbstanpassende Systeme (1)
- selbstbestimmte Identitäten (1)
- selbstheilende Systeme (1)
- self-adaptive software (1)
- self-adaptive systems (1)
- self-driving (1)
- self-government (1)
- self-healing (1)
- semantic classification (1)
- semantic representations (1)
- semantische Klassifizierung (1)
- semantische Repräsentationen (1)
- sequence properties (1)
- serialization (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architecture (SOA) (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- siamese neural networks (1)
- sichere Datenflussdiagramme (DFDsec) (1)
- sidechain (1)
- situational awareness (1)
- skalierbar (1)
- small talk (1)
- smalltalk (1)
- soccer analytics (1)
- social bot detection (1)
- social interaction (1)
- social media analysis (1)
- social modulation (1)
- social networking (1)
- software analytics (1)
- software evolution (1)
- software visualization (1)
- software/hardware co-design (1)
- somatosensation (1)
- soundness (1)
- sozialen Medien (1)
- soziales Netzwerk (1)
- spatial aggregation (1)
- specification of timed graph transformations (1)
- spectrum clustering (1)
- squeak (1)
- standard (1)
- standardization (1)
- stark verhaltenskorrekt sperrend (1)
- state management (1)
- state space modelling (1)
- static source-code analysis (1)
- statische Quellcodeanalyse (1)
- stochastic process (1)
- storytelling (1)
- stream processing (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- style transfer (1)
- super stable matching (1)
- susceptibility (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- tabellarische Dateien (1)
- tabular data (1)
- task realization strategies (1)
- taxonomy (1)
- teamwork (1)
- technology (1)
- telemedicine (1)
- telework (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- test case prioritization (1)
- test results (1)
- text classification (1)
- the Arab world (1)
- theory (1)
- threat analysis (1)
- threat detection (1)
- threat model (1)
- tiefe Gauß-Prozesse (1)
- timed automata (1)
- timed graph (1)
- tissue-awareness (1)
- tool building (1)
- tools (1)
- toxic comment classification (1)
- trajectories (1)
- transaction (1)
- transcriptomics (1)
- transfer learning (1)
- transformation (1)
- transversal hypergraph (1)
- tribunales de justicia (1)
- tribunales en línea (1)
- triple graph grammars (1)
- trust (1)
- typed attributed symbolic graphs (1)
- typisierte attributierte Graphen (1)
- uBPMN (1)
- ubiquitous business process model and notation (uBPMN) (1)
- ubiquitous business process modeling (1)
- ubiquitous computing (ubicomp) (1)
- ubiquitous decision-aware business process (1)
- ubiquitous decisions (1)
- unbalancierter Datensatz (1)
- uncertainty (1)
- unique column combination (1)
- univariat (1)
- univariate (1)
- unsupervised (1)
- upper bound (1)
- user engagement (1)
- user research framework (1)
- user-centered design (1)
- utility functions (1)
- variants (1)
- variational inference (1)
- variationelle Inferenz (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Graphbedingungen (1)
- verstärkendes Lernen (1)
- verteilte Berechnung (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- virtual (1)
- virtual machines (1)
- virtual reality (1)
- virtuell (1)
- virtuelle Maschinen (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vocational training (1)
- vulnerability (1)
- wake-up radio (1)
- weak supervision (1)
- weakly (1)
- wearables (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- wireless networks (1)
- zero-day (1)
- zufälliges k-SAT (1)
- Überwachtes Lernen (1)
- überprüfbare Nachweise (1)
Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Supporting it is extremely valuable in our data-driven world, because more time can then be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To address this, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that existing algorithms cannot process. The key to these improvements lies not only in algorithmic innovations, such as novel pruning rules and traversal strategies, but also in algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
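To make the exponential search space and the role of pruning rules concrete, the following toy sketch enumerates candidate unique column combinations (UCCs) by increasing size and prunes supersets of already-found UCCs. It is purely illustrative and far simpler than the distributed algorithms the thesis contributes; the example data is hypothetical.

```python
from itertools import combinations

def unique_column_combinations(rows, columns, max_size=3):
    """Naively find minimal unique column combinations (UCCs):
    column sets whose projected values are distinct across all rows.
    The candidate lattice grows exponentially in the number of columns,
    so pruning supersets of known UCCs is essential."""
    uccs = []
    for size in range(1, max_size + 1):
        for combo in combinations(columns, size):
            # Prune: a superset of a known UCC is unique but not minimal.
            if any(set(u) <= set(combo) for u in uccs):
                continue
            projection = [tuple(row[c] for c in combo) for row in rows]
            if len(set(projection)) == len(projection):
                uccs.append(combo)
    return uccs

rows = [
    {"first": "Ada", "last": "Lovelace", "city": "London"},
    {"first": "Alan", "last": "Turing", "city": "London"},
    {"first": "Ada", "last": "Byron", "city": "London"},
]
print(unique_column_combinations(rows, ["first", "last", "city"]))
# [('last',)]
```

Here the single column `last` already identifies every row, so all its supersets are skipped without being checked.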
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In this way, we intend to make our research results easily reproducible and to provide them for actual use in real-world data-related projects.
Version control is a widely used practice among software developers. It reduces the risk of changing their software and allows them to manage different configurations and to collaborate with others more efficiently. This is amplified by code sharing platforms such as GitHub or Bitbucket. Most version control systems track files (e.g., Git, Mercurial, and Subversion), but some programming environments do not operate on files but on objects (many Smalltalk implementations do). Users of such environments want to use version control for their objects anyway. Specialized version control systems, such as those available for Smalltalk systems (e.g., ENVY/Developer and Monticello), focus on a small subset of objects that can be versioned. Most of these systems concentrate on tracking methods, classes, and configurations of these. Other user-defined and user-built objects are either not eligible for version control at all, can only be tracked through complicated workarounds, or are stored in a fixed, domain-unspecific serialization format that does not equally suit all kinds of objects. Moreover, version control systems that are specific to a programming environment require their own code sharing platforms; popular, well-established platforms for file-based version control systems cannot be used, or adapter solutions need to be implemented and maintained.
To improve the situation for version control of arbitrary objects, this report presents a framework for tracking, converting, and storing objects. It allows editions of objects to be stored in an exchangeable, existing backend version control system, so that the platforms of the backend version control system can be reused. Users and objects have control over how objects are captured for the purpose of version control, and domain-specific requirements can be implemented. The storage format (i.e., the file format, when file-based backend version control systems are used) can also vary from one object to another. Different editions of objects can be compared, and sets of changes can be applied to graphs of objects. A generic way of capturing and restoring that supports most kinds of objects is described: it models each object as a collection of slots. Thus, users can begin to track their objects without first having to implement version control supplements for their own kinds of objects. The proposed architecture is evaluated using a prototype implementation that tracks objects in Squeak/Smalltalk with Git. The prototype improves the suboptimal standing of user objects with respect to version control described above and also simplifies some version control tasks for classes and methods. It also raises new problems, which are discussed in this report.
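The generic slot-based capture and compare idea can be sketched outside Smalltalk as well. The following Python toy (the report's prototype actually works in Squeak/Smalltalk with Git; names such as `capture` and `diff` are ours) snapshots an object as a mapping of slot names to values and compares two editions slot by slot:

```python
def capture(obj):
    """Capture an object edition as a mapping of slot names to values
    (a generic snapshot; domains may override how capturing works)."""
    return dict(vars(obj))

def diff(old_edition, new_edition):
    """Compare two editions slot by slot and return the changed slots
    as (old value, new value) pairs."""
    changes = {}
    for slot in old_edition.keys() | new_edition.keys():
        if old_edition.get(slot) != new_edition.get(slot):
            changes[slot] = (old_edition.get(slot), new_edition.get(slot))
    return changes

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
v1 = capture(p)   # first edition
p.x = 5
v2 = capture(p)   # second edition
print(diff(v1, v2))  # {'x': (1, 5)}
```

Because capturing works on generic slots, any object can be tracked without writing per-class version control code, which is the point the report makes.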
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, as it is infeasible for humans to capture all of this information manually. Named entities (e.g., persons, organizations, and locations) are crucial components of texts and are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, named entity recognition has a long research history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and to identify their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, the final step of named entity mining, is a highly challenging task: it extracts semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art in the named entity mining domain in detail, including valuable features, techniques, and evaluation methodologies. Furthermore, we present two of our approaches that focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques in terms of both quality and efficiency.
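The ensemble-over-random-subsets idea can be illustrated with a toy sketch. Note this is not BEL itself: the classifier, the candidate profiles, and all names below are hypothetical stand-ins for the point that many simple voters on random term subsets can be aggregated by majority vote.

```python
import random
from collections import Counter

def link_entity(mention_context, candidates, classify,
                n_classifiers=15, subset_size=5):
    """Each simple classifier sees a random subset of the context terms
    and votes for one candidate entity; the majority vote wins."""
    votes = Counter()
    for _ in range(n_classifiers):
        subset = random.sample(mention_context,
                               min(subset_size, len(mention_context)))
        votes[classify(subset, candidates)] += 1
    return votes.most_common(1)[0][0]

# Toy classifier: pick the candidate whose (hypothetical) term profile
# overlaps most with the sampled subset.
profiles = {"Apple Inc.": {"iphone", "company", "stock"},
            "apple (fruit)": {"tree", "pie", "juice"}}

def classify(subset, candidates):
    return max(candidates, key=lambda c: len(profiles[c] & set(subset)))

context = ["stock", "company", "iphone", "ceo", "market", "shares"]
print(link_entity(context, list(profiles), classify))  # Apple Inc.
```

Because each voter only sees part of the context, individual mistakes on uninformative subsets are washed out by the aggregate.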
For the task of relation extraction, we focus on extracting a particularly difficult group of relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation using as few as five labeled seed pairs.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Incorrect pricing causes companies to miss out on revenue or impairs customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A particular data quality problem, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. Because these representations differ from each other, they are hard for a computer to match. Moreover, the number of comparisons required to find duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
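The quadratic growth of comparisons is easy to see in a naive sketch: every record is compared against every other record, so n records require n·(n−1)/2 similarity computations. This minimal example (the threshold and similarity measure are illustrative choices, not the thesis's configuration) uses Python's standard-library `SequenceMatcher`:

```python
from difflib import SequenceMatcher

def find_duplicates(records, threshold=0.8):
    """Naive duplicate detection: compare every record pair,
    i.e., n * (n - 1) / 2 similarity computations in total."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            sim = SequenceMatcher(None, records[i], records[j]).ratio()
            if sim >= threshold:
                pairs.append((records[i], records[j]))
    return pairs

records = ["John Smith, Berlin", "Jon Smith, Berlin", "Mary Jones, Hamburg"]
print(find_duplicates(records))
# [('John Smith, Berlin', 'Jon Smith, Berlin')]
```

For three records this is only three comparisons, but for a million records it would already be about 5 · 10¹¹, which is why the partitioning technique described below matters.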
This thesis addresses the automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, but they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
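Why partitioning cuts the number of pair comparisons can be sketched quickly. The thesis's technique finds suitable partitioning keys automatically; in this toy the key (first three letters of the surname) is a hypothetical, hand-picked stand-in:

```python
from collections import defaultdict

def partition(records, key):
    """Group records by a partitioning key so that only records within
    the same partition are compared with each other."""
    partitions = defaultdict(list)
    for record in records:
        partitions[key(record)].append(record)
    return partitions

def comparisons(partitions):
    """Number of pair comparisons when comparing within partitions only."""
    return sum(len(p) * (len(p) - 1) // 2 for p in partitions.values())

records = ["John Smith", "Jon Smith", "Mary Jones", "Marie Jones", "Peter Lee"]
# Hypothetical partitioning key: first three letters of the surname.
by_surname = partition(records, key=lambda r: r.split()[-1][:3])
print(comparisons(by_surname))                  # 2
print(len(records) * (len(records) - 1) // 2)   # 10 (all pairs)
```

A good key keeps true duplicates in the same partition (effectiveness) while making partitions small (efficiency); choosing such keys automatically is exactly what the described technique contributes.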
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for reaching a consensus with less effort.
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution. This provides organizations with the flexibility to adapt to the increasingly rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes externalizing the decision logic of process models into one or more separate decision models, but it does not specify how this can be achieved.
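To make the separation of concerns tangible, here is a minimal sketch of decision logic kept outside any process model, in the spirit of a DMN decision table with a first-hit policy. The rules, names, and values are invented for illustration:

```python
# A decision table in the spirit of DMN: rules map input conditions to an
# output and live in their own model; a process merely invokes the decision.
DISCOUNT_RULES = [
    # (condition over customer_type and order_total) -> discount
    (lambda c, t: c == "gold" and t >= 100, 0.15),
    (lambda c, t: c == "gold",              0.10),
    (lambda c, t: c == "regular" and t >= 100, 0.05),
]

def decide_discount(customer_type, order_total):
    """Evaluate rules top to bottom ('first hit' policy) and return the
    matching output, defaulting to no discount."""
    for condition, discount in DISCOUNT_RULES:
        if condition(customer_type, order_total):
            return discount
    return 0.0

print(decide_discount("gold", 250))    # 0.15
print(decide_discount("regular", 50))  # 0.0
```

Changing a discount rule then only touches the decision model, not the process model that calls it, which is the agility argument made above.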
The goal of this thesis is to close the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from (1) the control flow and data of process models that exist in enterprises, and (2) event logs recorded by enterprise information systems, which encapsulate day-to-day operations. Furthermore, we extend these methodologies to discover decision models from event logs enriched with fuzziness, a tool for dealing with partial knowledge of process execution information. All proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies produce valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. Thus, these methodologies are applicable in the real world and can be used, for example, for compliance checks, among other uses, which could improve an organization's decision making and hence its overall performance.
The development of self-adaptive software requires engineering an adaptation engine that controls the underlying adaptable software through a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge used. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify and an interpreter to execute feedback loops. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in the adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
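The MAPE activities working on shared knowledge can be sketched in a few lines. This is not EUREMA (which specifies loops as executable megamodels interpreted at runtime) but a plain-Python illustration of one loop iteration; the toy "system", thresholds, and action names are invented:

```python
def mape_loop(knowledge, monitor, analyze, plan, execute):
    """One iteration of a MAPE feedback loop: each activity reads and
    updates the shared knowledge (runtime models, in EUREMA's terms)."""
    knowledge["observations"] = monitor()
    knowledge["issues"] = analyze(knowledge)
    knowledge["actions"] = plan(knowledge)
    execute(knowledge["actions"])
    return knowledge

# Toy adaptable "system": keep its load below a threshold by scaling out.
system = {"load": 0.95, "replicas": 2}

def monitor():
    return dict(system)  # snapshot of the adaptable software

def analyze(k):
    return ["overload"] if k["observations"]["load"] > 0.8 else []

def plan(k):
    return [("add_replica", 1)] if "overload" in k["issues"] else []

def execute(actions):
    for action, count in actions:
        if action == "add_replica":
            system["replicas"] += count

knowledge = mape_loop({}, monitor, analyze, plan, execute)
print(system["replicas"])  # 3
```

Because the activities and the knowledge are explicit values rather than code hidden in a framework, one can reason about, stack, or coordinate such loops, which is the design freedom EUREMA argues for.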
This thesis discusses in detail EUREMA by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA, as an integrated and open MDE approach for engineering self-adaptive software, seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
HPI Future SOC Lab
(2016)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2016. Selected projects presented their results on April 5th and November 3rd, 2016, at the Future SOC Lab Day events.
Business process automation improves organizations’ efficiency to perform work. To this end, a business process is first documented as a process model, which then serves as a blueprint for a number of process instances representing the execution of specific business cases. In existing business process management systems, process instances run independently from each other. In practice, however, instances are also collected in groups at certain process activities for a combined execution to improve the process performance. Currently, this so-called batch processing is executed manually or supported by external software. Only a few research proposals exist to explicitly represent and execute batch processing needs in business process models, and these works lack a comprehensive understanding of the requirements.
This thesis addresses the described issues by providing a basic concept, called batch activity. It allows an explicit representation of batch processing configurations in process models and provides a corresponding execution semantics, thereby easing automation. The batch activity groups different process instances based on their data context and can synchronize their execution over one or multiple process activities. The concept is conceived based on a requirements analysis considering existing literature on batch processing from different domains as well as industry examples. Further, this thesis provides two extensions: First, a flexible batch configuration concept, based on event processing techniques, is introduced to allow runtime adaptations of batch configurations. Second, a concept for collecting and batching activity instances of multiple different process models is given. Thereby, the batch configuration is centrally defined, independently of the process models, which is especially beneficial for organizations with large process model collections.
This thesis also provides a technical evaluation as well as a validation of the presented concepts. A prototypical implementation in an existing open-source BPMS shows that batch processing can be enabled with a few extensions. It further demonstrates that the consolidated view of several work items in one user form can improve work efficiency. The validation, in which the batch activity concept is applied to different use cases in a simulated environment, implies cost savings for business processes when a suitable batch configuration is used. For the validation, an extensible business process simulator was developed. It enables process designers to study the influence of a batch activity on a process with regard to its performance.
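The grouping idea behind a batch activity can be sketched as follows (a simplified toy, not the thesis's concept or execution semantics; the context attribute and the size threshold are hypothetical stand-ins for a real batch configuration, which would typically also include activation rules such as timeouts):

```python
# Hedged sketch: process instances arriving at an activity are grouped
# by a data-context key and released for combined execution once a
# size threshold is reached.
from collections import defaultdict

class BatchActivity:
    def __init__(self, context_key, threshold):
        self.context_key = context_key  # data attribute used for grouping
        self.threshold = threshold      # instances per batch
        self.groups = defaultdict(list)

    def arrive(self, instance):
        """Queue an instance; return a full batch when one forms."""
        key = instance[self.context_key]
        self.groups[key].append(instance)
        if len(self.groups[key]) >= self.threshold:
            return self.groups.pop(key)  # execute these together
        return None

batch = BatchActivity(context_key="customer", threshold=2)
batch.arrive({"id": 1, "customer": "A"})            # queued, no batch yet
full = batch.arrive({"id": 2, "customer": "A"})     # second "A" completes it
print([i["id"] for i in full])  # [1, 2]
```

A combined execution of the returned batch (e.g., one shipment for both orders) is where the cost savings discussed in the validation would come from.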
In this era of high-speed informatization and globalization, online education is no longer an exquisite concept in the ivory tower, but a rapidly developing industry closely relevant to people's daily lives. Numerous lectures are recorded in the form of multimedia data, uploaded to the Internet, and made publicly accessible from anywhere in the world. These lectures are generally referred to as e-lectures. In recent years, a popular new form of e-lecture, the Massive Open Online Course (MOOC), has boosted the growth of the online education industry and turned "learning online" into something of a fashion.
As an e-learning provider, besides continually improving the quality of e-lecture content, providing a better learning environment for online learners is also a highly important task. This task can be approached in various ways, one of which is to enhance and upgrade the learning materials provided: e-lectures could be more than videos. Moreover, this process of enhancement or upgrading should be done automatically, without placing extra burdens on the lecturers or teaching teams, and this is the aim of this thesis.
The first part of this thesis is an integrated framework for multi-lingual subtitle production, which can help online learners overcome the language barrier. The framework consists of Automatic Speech Recognition (ASR), Sentence Boundary Detection (SBD), and Machine Translation (MT), among which the proposed SBD solution is the major technical contribution; it builds on a Deep Neural Network (DNN) and Word Vectors (WVs) and achieves state-of-the-art performance. In addition, a quantitative evaluation with dozens of volunteers is introduced to measure how these auto-generated subtitles actually help in the context of e-lectures.
Secondly, a technical solution called "TOG" (Tree-Structure Outline Generation) is proposed to extract textual content from the slides displayed in the video and re-organize it into a hierarchical lecture outline, which may serve multiple functions, such as preview, navigation, and retrieval. TOG runs adaptively and can be roughly divided into an intra-slide and an inter-slide phase. Table detection and lecture video segmentation can be implemented as sub- or post-processing steps in these two phases, respectively. Evaluation on diverse e-lectures shows that the resulting outlines, tables, and segments are reliably accurate.
Based on the subtitles and outlines previously created, lecture videos can be further split into sentence units and slide-based segment units. A lecture highlighting process is then applied to these units in order to capture and mark the most important parts of the corresponding lecture, just as people do with a pen when reading paper books. Sentence-level highlighting depends on acoustic analysis of the audio track, while segment-level highlighting focuses on exploring clues from the statistical information of related transcripts and slide content. Both objective and subjective evaluations show that the proposed lecture highlighting solution achieves decent precision and is welcomed by users.
All of the enhanced e-lecture materials described above are already in actual use or have been made available for deployment through convenient interfaces.
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumption and/or execution times exceed practical limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. In particular, they are now able to process datasets of real-world, i.e., multi-gigabyte size with reasonable memory and time consumption.
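To make the discovery task concrete, a brute-force validation check can be sketched (this is a naive illustration, not one of the thesis's algorithms): a functional dependency X -> A holds in a relation iff no two tuples agree on X but disagree on A. Discovery must test such candidates over an exponential lattice of attribute combinations, which is why the pruning and search strategies mentioned above matter. The sample relation below is made up.

```python
# Naive FD validation: X -> A holds iff every value combination of the
# left-hand side X maps to exactly one value of the right-hand side A.
def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        if key in seen and seen[key] != row[rhs]:
            return False  # two tuples agree on lhs, disagree on rhs
        seen[key] = row[rhs]
    return True

rows = [
    {"zip": "14482", "city": "Potsdam", "name": "Ada"},
    {"zip": "14482", "city": "Potsdam", "name": "Bob"},
    {"zip": "10115", "city": "Berlin",  "name": "Ada"},
]
print(fd_holds(rows, ("zip",), "city"))   # True: zip -> city
print(fd_holds(rows, ("name",), "zip"))   # False: "Ada" maps to two zips
```

Each single check is linear in the number of tuples, but the number of candidate dependencies grows exponentially with the number of attributes, which is the bottleneck the proposed algorithms address.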
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics and they are also able to validate individual dependencies, but they lack real discovery features even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the dataset itself), and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
Advanced mechatronic systems have to integrate existing technologies from mechanical, electrical and software engineering. They must be able to adapt their structure and behavior at runtime by reconfiguration to react flexibly to changes in the environment. Therefore, a tight integration of structural and behavioral models of the different domains is required. This integration results in complex reconfigurable hybrid systems, the execution logic of which cannot be addressed directly with existing standard modeling, simulation, and code-generation techniques. We present in this paper how our component-based approach for reconfigurable mechatronic systems, MechatronicUML, efficiently handles the complex interplay of discrete behavior and continuous behavior in a modular manner. In addition, its extension to even more flexible reconfiguration cases is presented.
Organizations continue building virtual working teams (teleworkers) to become more dynamic as part of their strategic innovation, with great benefits to individuals, business, and society. However, during such transformations it is important to note that effective knowledge communication is particularly difficult in distributed environments as well as in non-interactive settings, because the interlocutors cannot use gestures or mimicry and have to adapt their expressions without receiving any feedback, which may affect the creation of tacit knowledge. Collective Intelligence appears to be an encouraging alternative for creating knowledge, but in this scenario an important goal remains to be achieved: the ability of two or more individuals to collaborate grows with their capacity to overcome barriers through the aggregation of separately processed information, whereby all actors follow similar conditions to participate in the collective. Geographically distributed organizations face the great challenge of managing people’s knowledge, not only to keep operations running, but also to promote innovation within the organization through the creation of new knowledge. Managing knowledge drawn from Collective Intelligence differs greatly from traditional methods of information allocation, since it poses new requirements: for instance, semantic analysis has to merge information coming both from the content itself and from the social/individual context, and in addition, the social dynamics that emerge online have to be taken into account. This study analyses how knowledge-based organizations working with decentralized staff may need to consider the cognitive styles and social behaviors of individuals participating in their programs to effectively manage knowledge in virtual settings.
It also proposes assessment taxonomies to analyze online behavior at the level of the individual and of the community, in order to identify characteristics that help evaluate communication effectiveness. We aim to model measurement patterns that identify effective ways for individuals to interact, taking into consideration their cognitive and social behaviors.
Complex networks are ubiquitous in nature and society. They appear in vastly different domains, for instance as social networks, biological interactions or communication networks. Yet in spite of their different origins, these networks share many structural characteristics. For instance, their degree distribution typically follows a power law. This means that the fraction of vertices of degree k is proportional to k^(−β) for some constant β; making these networks highly inhomogeneous. Furthermore, they also typically have high clustering, meaning that links between two nodes are more likely to appear if they have a neighbor in common.
To mathematically study the behavior of such networks, they are often modeled as random graphs. Many of the popular models like inhomogeneous random graphs or Preferential Attachment excel at producing a power law degree distribution. Clustering, on the other hand, is in these models either not present or artificially enforced.
Hyperbolic random graphs bridge this gap by assuming an underlying geometry to the graph: Each vertex is assigned coordinates in the hyperbolic plane, and two vertices are connected if they are nearby. Clustering then emerges as a natural consequence: Two nodes joined by an edge are close by and therefore have many neighbors in common. On the other hand, the exponential expansion of space in the hyperbolic plane naturally produces a power-law degree sequence. Due to the hyperbolic geometry, however, rigorous treatment of this model can quickly become mathematically challenging.
In this thesis, we improve the understanding of hyperbolic random graphs by studying their structural and algorithmic properties. Our main contribution is threefold. First, we analyze the emergence of cliques in this model. We find that whenever the power-law exponent β satisfies 2 < β < 3, there exists a clique of polynomial size in n. For β >= 3, on the other hand, the size of the largest clique is logarithmic, which contrasts sharply with previous models, whose largest cliques are of constant size in this case. We also provide efficient algorithms for finding cliques if the hyperbolic node coordinates are known. Second, we analyze the diameter, i.e., the longest shortest path in the graph. We find that it is of order O(polylog(n)) if 2 < β < 3 and O(log n) if β > 3. To complement these findings, we also show that the diameter is of order at least Ω(log n). Third, we provide an algorithm for embedding a real-world graph into the hyperbolic plane using only its graph structure. To ensure good quality of the embedding, we perform extensive computational experiments on generated hyperbolic random graphs. Further, as a proof of concept, we embed the Amazon product recommendation network and observe that products from the same category are mapped close together.