004 Data processing; Computer science
Document Type
- Doctoral Thesis (157)
Keywords
- Maschinelles Lernen (8)
- Machine Learning (7)
- machine learning (7)
- Antwortmengenprogrammierung (5)
- Geschäftsprozessmanagement (5)
- Prozessmodellierung (4)
- Visualisierung (4)
- Vorhersage (4)
- answer set programming (4)
- 3D visualization (3)
- Bildverarbeitung (3)
- Datenintegration (3)
- HCI (3)
- Klassifikation (3)
- Modellierung (3)
- business process management (3)
- clustering (3)
- geospatial data (3)
- image processing (3)
- non-photorealistic rendering (3)
- outlier detection (3)
- prediction (3)
- process mining (3)
- visualization (3)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- Abstraktion (2)
- Algorithmen (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Anomalieerkennung (2)
- Answer Set Programming (2)
- CityGML (2)
- Classification (2)
- Clusteranalyse (2)
- Computergrafik (2)
- Computersicherheit (2)
- Datenaufbereitung (2)
- Datenbanken (2)
- Datenbanksysteme (2)
- E-Learning (2)
- EEG (2)
- Echtzeit-Rendering (2)
- Exploration (2)
- FMC (2)
- Fehlende Daten (2)
- GPU (2)
- Game Dynamics (2)
- Geodaten (2)
- ICA (2)
- Informatik (2)
- Informationsextraktion (2)
- Knowledge Representation and Reasoning (2)
- Künstliche Intelligenz (2)
- Laufzeitmodelle (2)
- Mensch-Computer-Interaktion (2)
- Modell (2)
- Mustererkennung (2)
- Ontologie (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Process Mining (2)
- RDF (2)
- Semantic Web (2)
- Smalltalk (2)
- Softwareentwicklung (2)
- Systemstruktur (2)
- Texturen (2)
- VM (2)
- Verhalten (2)
- Verifikation (2)
- Virtual Reality (2)
- abstraction (2)
- anomaly detection (2)
- causal discovery (2)
- causal structure learning (2)
- classification (2)
- computer graphics (2)
- data (2)
- data integration (2)
- data preparation (2)
- database systems (2)
- digital whiteboard (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- human computer interaction (2)
- inclusion dependencies (2)
- index selection (2)
- information extraction (2)
- intrusion detection (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- maschinelles Lernen (2)
- mobile (2)
- mobile mapping (2)
- model (2)
- model-driven engineering (2)
- runtime models (2)
- software development (2)
- software engineering (2)
- systems biology (2)
- systems of systems (2)
- testing (2)
- textures (2)
- verification (2)
- virtual 3D city models (2)
- virtuelle 3D-Stadtmodelle (2)
- 'Peer To Peer' (1)
- 0-day (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D computer graphics (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D point cloud (1)
- 3D point clouds (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3d city models (1)
- APT (1)
- Abbrecherquote (1)
- Ackerschmalwand (1)
- Active Evaluation (1)
- Advanced Persistent Threats (1)
- Adversarial Learning (1)
- Aktive Evaluierung (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Analyse (1)
- Android Security (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Angewandte Spieltheorie (1)
- Angriffserkennung (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Antwortmengen Programmierung (1)
- Application Server (1)
- Applied Game Theory (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Artificial Intelligence (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Assoziationsregeln (1)
- Asynchrone Schaltung (1)
- Attributsicherung (1)
- Augmented Reality (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- BCH (1)
- BCI (1)
- BSS (1)
- Bachelorstudierende der Informatik (1)
- Bank (1)
- Baumweite (1)
- Bayesian networks (1)
- Bayessche Netze (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behaviour Analysis (1)
- Benutzeroberfläche (1)
- Berührungseingaben (1)
- Betrachtungsebenen (1)
- Beweistheorie (1)
- Big Data (1)
- Bilddatenanalyse (1)
- Binäres Entscheidungsdiagramm (1)
- Bioacoustics (1)
- Bioakustik (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Boolean constraint solver (1)
- Brain Computer Interface (1)
- Business Process Management (1)
- Business Process Models (1)
- CSC (1)
- CSCW (1)
- Cactus (1)
- Case Management (1)
- Choreographien (1)
- Cloud Computing (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clustering (1)
- Code (1)
- Common Spatial Pattern (1)
- Complementary Circuits (1)
- Compliance (1)
- Composition (1)
- Compound Values (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computer Science (1)
- Computing (1)
- Conceptual (1)
- Constructive solid geometry (1)
- Covariate Shift (1)
- Crime mapping (1)
- Cyber-Sicherheit (1)
- DBMS (1)
- DDoS (1)
- DPLL (1)
- Data Modeling (1)
- Data Privacy (1)
- Data Profiling (1)
- Data Structure Optimization (1)
- Data-Mining (1)
- Data-Science (1)
- Databases (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenanalyse (1)
- Datenbank (1)
- Datenextraktion (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenmodellierung (1)
- Datenobjekte (1)
- Datenqualität (1)
- Datenreinigung (1)
- Datenschutz (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenzustände (1)
- Deep Learning (1)
- Delphine (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Description Logics (1)
- Design Thinking (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Diagonalisierung (1)
- Didaktik (1)
- Didaktik der Informatik (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differenz von Gauss Filtern (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalisierung (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Distributed Computing (1)
- Dolphins (1)
- Domänenspezifische Modellierung (1)
- Dreidimensionale Computergraphik (1)
- Dubletten (1)
- Duplikaterkennung (1)
- Dynamic Programming (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Elektroencephalographie (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Enterprise Search (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurf (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Ereignisabstraktion (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsproblem (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Error-Detection Circuits (1)
- Erweiterte Realität (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution (1)
- Execution Semantics (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- FMC-QE (1)
- FPGA (1)
- Fallmanagement (1)
- Feature Combination (1)
- Feedback (1)
- Feedback Loop Modellierung (1)
- Fehlerbeseitigung (1)
- Fehlererkennung (1)
- Fehlerkorrektur (1)
- Fehlerschätzung (1)
- Fehlvorstellung (1)
- Fernerkundung (1)
- Fertigung (1)
- Fintech (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Fundamental Modeling Concepts (1)
- Fußgängernavigation (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Generalisierung (1)
- Geometrieerzeugung (1)
- Geovisualisierung (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozessmodelle (1)
- Gesichtsausdruck (1)
- Gewinnung benannter Entitäten (1)
- Globus (1)
- GraalVM (1)
- Grammatikalische Inferenz (1)
- Graph-Mining (1)
- Graph-basiertes Ranking (1)
- Graphableitung (1)
- Grid (1)
- Grid Computing (1)
- Hardware-Software-Co-Design (1)
- Hasserkennung (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Technologie (1)
- High-Level Synthesis (1)
- Hochschulsystem (1)
- Hyrise (1)
- I/O-effiziente Algorithmen (1)
- IBM 360 (1)
- ICT competencies (1)
- IDS (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Identität (1)
- In-Memory (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Informatik-Studiengänge (1)
- Informatikdidaktik (1)
- Informatikvoraussetzungen (1)
- Information Transfer Rate (1)
- Informationsvorhaltung (1)
- Informatische Kompetenzen (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Interactive Rendering (1)
- Interactive system (1)
- Interaktionsmodel (1)
- Interaktionsmodellierung (1)
- Interaktionstechniken (1)
- Interaktives Rendering (1)
- Interaktives System (1)
- Internet Security (1)
- Internet applications (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Intuition (1)
- Java 2 Enterprise Edition (1)
- Java Security Framework (1)
- Java Virtual Machine (1)
- Karten (1)
- Kartografisches Design (1)
- Kern-PCA (1)
- Kernmethoden (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kollaborationen (1)
- Kommunikation (1)
- Kompetenzen (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Kontext (1)
- Konzeptionell (1)
- Kundenverhalten (1)
- Kybernetik (1)
- LOD (1)
- Landmarken (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitverhalten (1)
- Learning Analytics (1)
- Lebendigkeit (1)
- Lehrer (1)
- Leistungsfähigkeit (1)
- Leistungsvorhersage (1)
- LiDAR (1)
- Link-Entdeckung (1)
- Live-Migration (1)
- Logiksynthese (1)
- Lower Bounds (1)
- MEG (1)
- MOOCs (1)
- Machine-Learning (1)
- Machinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Maschinen (1)
- Matrizen-Eigenwertaufgabe (1)
- Megamodel (1)
- Megamodell (1)
- Mehrklassen-Klassifikation (1)
- Metacrate (1)
- Metadaten (1)
- Metamodell (1)
- Middleware (1)
- Migration (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Mapping (1)
- Mobile-Mapping (1)
- Mobilgeräte (1)
- Model Based Engineering (1)
- Model Checking (1)
- Model Consistency (1)
- Model Driven Architecture (1)
- Model Management (1)
- Model-Driven Engineering (1)
- Modeling (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene Sicherheit (1)
- Modellbasiert (1)
- Modellgetrieben (1)
- Modellgetriebene Architektur (1)
- Modellgetriebene Entwicklung (1)
- Modellkonsistenz (1)
- Modelltransformation (1)
- Molekulare Bioinformatik (1)
- Monitoring (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Task-Lernen (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Nash Equilibrium (1)
- Navigation (1)
- Network Creation Game (1)
- Netzwerke (1)
- Neuronales Netz (1)
- New On-Line Error-Detection Methode (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- Non-photorealistic Rendering (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- Objects (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Online Learning Environments (1)
- Ontologies (1)
- Ontology (1)
- Open Source (1)
- Optimierungsproblem (1)
- Owner-Retained Access Control (ORAC) (1)
- PAVM (1)
- Parallel Programming (1)
- Parallele Datenverarbeitung (1)
- Paralleles Rechnen (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Patientenermündigung (1)
- Pattern Recognition (1)
- Patterns (1)
- Peer-to-Peer-Netz ; GRID computing ; Zuverlässigkeit ; Web Services ; Betriebsmittelverwaltung ; Migration (1)
- Performance (1)
- Performance Prediction (1)
- Personal Data (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Pre-RS Traceability (1)
- Prediction Game (1)
- Predictive Models (1)
- Privacy Protection (1)
- Privatsphäre (1)
- Probabilistische Modelle (1)
- Probleme in der Studie (1)
- Process (1)
- Process Modeling (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Programmierabstraktionen (1)
- Programmieren (1)
- Programmierung (1)
- Programmierwerkzeuge (1)
- Proof Theory (1)
- Prozess (1)
- Prozess Verbesserung (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozessmodell (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- Real-Time Rendering (1)
- Reconfigurable (1)
- Rekonfiguration (1)
- Rendering (1)
- Reparatur (1)
- Ressourcenmanagement (1)
- SIEM (1)
- SOA Security Pattern (1)
- SPARQL (1)
- STG decomposition (1)
- STG-Dekomposition (1)
- SWIRL (1)
- Sample Selection Bias (1)
- Satisfiability (1)
- Scalability (1)
- Scene graph systems (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schlüsselkompetenzen (1)
- Search Algorithms (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektionsbias (1)
- Self-Checking Circuits (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantik Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service convergence (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Sicherheit (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simultane Diagonalisierung (1)
- Single Trial Analysis (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skelettberechnung (1)
- Skriptsprachen (1)
- Software (1)
- Software Engineering (1)
- Software architecture (1)
- Software-Evolution (1)
- Softwareanalyse (1)
- Softwarearchitektur (1)
- Softwareentwicklungsprozesse (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Soziale Medien (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Squeak/Smalltalk (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistische Tests (1)
- Stilisierung (1)
- Structuring (1)
- Strukturierung (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System structure (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systementwurf (1)
- Systems of Systems (1)
- Szenengraph (1)
- Telekommunikation (1)
- Temporal Logic (1)
- Temporallogik (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Test (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Texterkennung (1)
- Textklassifikation (1)
- Theoretischen Vorlesungen (1)
- Time Augmented Petri Nets (1)
- Traceability (1)
- Tracking (1)
- Transformation (1)
- Treewidth (1)
- Unabhängige Komponentenanalyse (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Untere Schranken (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- User Experience (1)
- VM Integration (1)
- Verbundwerte (1)
- Verhaltensanalyse (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Machines (1)
- Virtuelle Maschine (1)
- Virtuelle Maschinen (1)
- Virtuelle Realität (1)
- Visualization (1)
- Vorhersagemodelle (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web of Data (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Well-structuredness (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- Wirtschaftsinformatik (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- ZQSA (1)
- ZQSAT (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zuverlässigkeitsanalyse (1)
- adaptiv (1)
- adaptive (1)
- adaptive Systeme (1)
- adaptive systems (1)
- addiction care (1)
- advanced persistent threat (1)
- advanced threats (1)
- algorithm configuration (1)
- algorithm scheduling (1)
- algorithm selection (1)
- analysis (1)
- anisotropic Kuwahara filter (1)
- anomalies (1)
- approximate joint diagonalization (1)
- apt (1)
- architectural adaptation (1)
- archive analysis (1)
- arousal perception (1)
- artificial intelligence (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- attribute assurance (1)
- augmented reality (1)
- autonomous (1)
- back-in-time (1)
- bank (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- bild (1)
- bildbasiertes Rendering (1)
- blind source separation (1)
- bpm (1)
- building models (1)
- business informatics (1)
- business process architecture (1)
- business process architectures (1)
- cartographic design (1)
- causal AI (1)
- causal reasoning (1)
- change detection (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- classifier calibration (1)
- cleansing (1)
- cloud (1)
- cloud computing (1)
- cloud datacenter (1)
- code (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- communication (1)
- competencies (1)
- complexity (1)
- computational biology (1)
- computational methods (1)
- computational photography (1)
- computer science (1)
- computer science education (1)
- computer science education (CSE) (1)
- computer security (1)
- computer vision (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computervermittelte Therapie (1)
- computing (1)
- conformance analysis (1)
- consistent learning (1)
- consumer behavior (1)
- context awareness (1)
- cscw (1)
- cybersecurity (1)
- data analytics (1)
- data correctness checking (1)
- data dependencies (1)
- data extraction (1)
- data mining (1)
- data models (1)
- data objects (1)
- data profiling (1)
- data quality (1)
- data science (1)
- data states (1)
- data synthesis (1)
- data wrangling (1)
- data-driven (1)
- database (1)
- database optimization (1)
- database technology (1)
- datengetrieben (1)
- dbms (1)
- debugging (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- deduplication (1)
- deep Gaussian processes (1)
- deep learning (1)
- dependency discovery (1)
- design (1)
- design research (1)
- design space exploration (1)
- design thinking (1)
- deurema modeling language (1)
- development tools (1)
- didactics (1)
- difference of Gaussians (1)
- digital strategy (1)
- digital transformation (1)
- digitales Whiteboard (1)
- digitalization (1)
- direct manipulation (1)
- distributed computation (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- dropout (1)
- duplicate detection (1)
- dynamic (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic reconfiguration (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Umsortierung (1)
- e-Learning (1)
- electrical muscle stimulation (1)
- elektrische Muskelstimulation (1)
- email spam detection (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- enterprise search (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- error correction (1)
- error detection (1)
- erzeugende gegnerische Netzwerke (1)
- evaluation (1)
- event abstraction (1)
- evidence theory (1)
- evolution (1)
- exploration (1)
- extend (1)
- external memory algorithms (1)
- face tracking (1)
- facial expression (1)
- feedback loop modeling (1)
- file structure (1)
- flow-based bilateral filter (1)
- formal framework (1)
- formales Framework (1)
- formalism (1)
- fortschrittliche Angriffe (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gemischte Daten (1)
- general education in computer science (1)
- generalization (1)
- generative adversarial networks (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geovisualization (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- grammar inference (1)
- graph clustering (1)
- graph inference (1)
- graph mining (1)
- graph-based ranking (1)
- hardware-software-codesign (1)
- hate speech detection (1)
- heterogeneous computing (1)
- heterogenes Rechnen (1)
- higher education (1)
- history-aware runtime models (1)
- holistic (1)
- hyrise (1)
- identity (1)
- image (1)
- image data analysis (1)
- image stylization (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- in-memory (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- informatics (1)
- informatische Allgemeinbildung (1)
- inkrementelle Ausführung von Graphanfragen (1)
- input accuracy (1)
- integer linear programming (1)
- interaction (1)
- interaction modeling (1)
- interaction techniques (1)
- interactive simulation (1)
- interface (1)
- intuition (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competencies (1)
- knowledge discovery (1)
- konsistentes Lernen (1)
- konvergente Dienste (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- laserscanning (1)
- linear code (1)
- linearer Code (1)
- link discovery (1)
- linked data (1)
- live migration (1)
- liveness (1)
- load balancing (1)
- logic programming (1)
- logic synthesis (1)
- logical signaling networks (1)
- logische Ergänzung (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- manufacturing (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Sehen (1)
- maschninelles Lernen (1)
- medical (1)
- medical documentation (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- meta model (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- misconception (1)
- missing data (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile devices (1)
- model transformation (1)
- model-based (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven architecture (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modellgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modelling (1)
- molecular networks (1)
- molekulare Netzwerke (1)
- multi core data processing (1)
- multi-class classification (1)
- named entity mining (1)
- natural language processing (1)
- navigation (1)
- networks-on-chip (1)
- neue Online-Fehlererkennungsmethode (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- nichtlineare Projektionen (1)
- non-parametric conditional independence testing (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- nonlinear projections (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- objective difficulty (1)
- on-chip (1)
- online assistance (1)
- open source (1)
- optical character recognition (1)
- order dependencies (1)
- overcomplete ICA (1)
- parallel (1)
- parallel processing (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- paralleles Lösen (1)
- partial replication (1)
- partielle Replikation (1)
- patient empowerment (1)
- pattern recognition (1)
- pedestrian navigation (1)
- perception (1)
- perception differences (1)
- persönliche Informationen (1)
- placement (1)
- platform ecosystems (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- preferences (1)
- prefetching (1)
- presentation (1)
- priorities (1)
- privacy (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistisches maschinelles Lernen (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process improvement (1)
- process model (1)
- process model search (1)
- process modelling (1)
- process models (1)
- process refinement (1)
- professors (1)
- programming (1)
- programming abstraction (1)
- programming tools (1)
- psychotherapy (1)
- query matching (1)
- query optimization (1)
- querying (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- real-time rendering (1)
- recommendation (1)
- reconfiguration (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- reliability assessment (1)
- remote collaboration (1)
- remote sensing (1)
- repair (1)
- requirements engineering (1)
- resource management (1)
- robust ICA (1)
- robuste ICA (1)
- runtime behavior (1)
- räumliche Geodaten (1)
- scheduling (1)
- schema discovery (1)
- schwach überwachtes maschinelles Lernen (1)
- scm (1)
- scripting languages (1)
- security (1)
- security analytics (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstprüfende Schaltungen (1)
- self-adaptive software (1)
- self-driving (1)
- self-sovereign identity (1)
- semantic analysis (1)
- semantic classification (1)
- semantische Klassifizierung (1)
- semantisches Netz (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architectures (1)
- service-oriented systems (1)
- serviceorientierte Architekturen (1)
- sign language (1)
- similarity (1)
- situational awareness (1)
- skeletonization (1)
- social media (1)
- software (1)
- software analysis (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software visualization (1)
- spatio-temporal (1)
- speed independence (1)
- stark verhaltenskorrekt sperrend (1)
- stochastic Petri nets (1)
- stochastische Petri Netze (1)
- strongly behaviourally correct locking (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- study problems (1)
- stylization (1)
- synonym discovery (1)
- tabellarische Dateien (1)
- tabular data (1)
- teachers (1)
- temporal graph queries (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terrain models (1)
- test (1)
- test-driven fault navigation (1)
- text classification (1)
- text mining (1)
- threat detection (1)
- tiefe Gauß-Prozesse (1)
- tiefes Lernen (1)
- tiering (1)
- tool building (1)
- topics (1)
- touch input (1)
- traditionelle Unternehmen (1)
- transformation (1)
- trust (1)
- tutorial section (1)
- unique column combinations (1)
- unsupervised (1)
- user experience (1)
- user interaction (1)
- user interfaces (1)
- user-generated content (1)
- value co-creation (1)
- variational inference (1)
- variationelle Inferenz (1)
- verhaltenskorrektes Lernen (1)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- video analysis (1)
- video metadata (1)
- virtual (1)
- virtual machine (1)
- virtual machines (1)
- virtual reality (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle Maschinen (1)
- virtuelle Realität (1)
- weak supervision (1)
- wearables (1)
- word sense disambiguation (1)
- zero-day (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- überbestimmte ICA (1)
Institute
- Institut für Informatik und Computational Science (83)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (43)
- Hasso-Plattner-Institut für Digital Engineering GmbH (35)
- Digital Engineering Fakultät (6)
- Extern (4)
- Institut für Umweltwissenschaften und Geographie (3)
- Wirtschaftswissenschaften (3)
- Department Linguistik (1)
- Interdisziplinäres Zentrum für Kognitive Studien (1)
- Mathematisch-Naturwissenschaftliche Fakultät (1)
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. In practice, however, root cause analysis is usually built upon individual expert knowledge about associative relationships. But "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite this potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
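To make the ground-truth idea concrete, the following minimal sketch (illustrative only; the three-variable linear-Gaussian model and the precision/recall scoring are assumptions, not the framework developed in this thesis) generates synthetic data from a known causal graph and scores a recovered edge set against it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Known ground-truth DAG: A -> B, A -> C, B -> C (linear-Gaussian mechanisms chosen for illustration)
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(scale=0.5, size=n)
C = 0.6 * A - 0.4 * B + rng.normal(scale=0.5, size=n)
data = np.column_stack([A, B, C])

true_edges = {("A", "B"), ("A", "C"), ("B", "C")}
found_edges = {("A", "B"), ("B", "C")}          # pretend output of some causal discovery method

tp = len(true_edges & found_edges)
print(f"precision={tp / len(found_edges):.2f}  recall={tp / len(true_edges):.2f}")
```

Because the generating graph is known by construction, such synthetic data allows measuring how discovery accuracy degrades when, for example, a method's distributional assumptions are violated.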
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging for the mixed discrete-continuous data that are omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test that leverages k-nearest-neighbor methods, and we prove its statistical validity and power on mixed discrete-continuous data, as well as its asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation on synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of both CI testing and causal discovery, particularly in settings with low sample sizes.
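As a point of reference for what a CI decision looks like in the purely continuous, Gaussian case, a textbook Fisher-z test on partial correlations is sketched below. This is exactly the kind of parametric test whose assumptions break down for mixed data, and it is not the kNN-based test contributed here; the significance level and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def partial_corr(x, y, Z):
    """Correlate x and y after linearly regressing out the columns of Z."""
    if Z.shape[1] > 0:
        Z1 = np.column_stack([Z, np.ones(len(x))])
        x = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
        y = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(x, y)[0, 1]

def gaussian_ci(x, y, Z, alpha=0.05):
    """Fisher-z test; True means independence of x and y given Z cannot be rejected."""
    r = np.clip(partial_corr(x, y, Z), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = np.sqrt(len(x) - Z.shape[1] - 3) * abs(z)
    return stat < norm.ppf(1 - alpha / 2)

rng = np.random.default_rng(1)
a = rng.normal(size=2_000)
b = 0.7 * a + rng.normal(scale=0.5, size=2_000)
c = 0.7 * b + rng.normal(scale=0.5, size=2_000)
print(gaussian_ci(a, c, np.empty((2_000, 0))))   # False: a and c are marginally dependent
print(gaussian_ci(a, c, b.reshape(-1, 1)))       # True: a and c are independent given b
```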
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. A common scale-out option is then a database cluster with a single primary node for transaction processing and read-only replicas. With (naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP does not scale well, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
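As a rough illustration of the underlying assignment problem (a deliberately small sketch, not the allocation models of this thesis; the fragment sizes, query costs, and the 10% load-skew bound are made up, and the open-source PuLP package with its bundled CBC solver is used):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

fragments = {"F1": 40, "F2": 25, "F3": 35}                 # fragment -> size
queries = {"Q1": (10, ["F1", "F2"]),                        # query -> (cost, accessed fragments)
           "Q2": (20, ["F2", "F3"]),
           "Q3": (10, ["F1", "F3"])}
nodes = ["N1", "N2"]
max_load = 1.1 * sum(c for c, _ in queries.values()) / len(nodes)

x = LpVariable.dicts("store", (fragments, nodes), cat=LpBinary)   # fragment f stored on node n
y = LpVariable.dicts("route", (queries, nodes), cat=LpBinary)     # query q executed on node n

prob = LpProblem("partial_replication", LpMinimize)
prob += lpSum(fragments[f] * x[f][n] for f in fragments for n in nodes)   # minimize allocated data
for q in queries:
    prob += lpSum(y[q][n] for n in nodes) == 1                  # each query runs on exactly one node
    for n in nodes:
        for f in queries[q][1]:
            prob += y[q][n] <= x[f][n]                           # ...which must store every fragment it reads
for n in nodes:
    prob += lpSum(queries[q][0] * y[q][n] for q in queries) <= max_load   # bounded load skew
prob.solve()

for f in fragments:
    print(f, "->", [n for n in nodes if x[f][n].value() == 1])
```

Already in this toy instance, the optimizer avoids full replication and stores each fragment only where the routed queries need it.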
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. ILP gives us the flexibility to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements across various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, which are often exploratory or ad hoc to specific datasets; the large repertoire of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the latter stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., to correctly parse tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3,000 real-world CSV files and identify multiple structural issues that cause files to deviate from the CSV standard, e.g., by having inconsistent delimiters or by containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse CSV files that have a non-standard structure, without any previous preparation. We report on our experiments using Pollock to evaluate the performance of 16 real-world data management systems.
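For intuition, a minimal structural check in this spirit (not the grammar-based model or the Pollock benchmark; the sniffing and the field-count heuristic below are illustrative) flags rows whose field count deviates from the dominant one, which often points to inconsistent delimiters, preamble lines, or multiple tables in a single file:

```python
import csv
from collections import Counter

def structural_report(path, sample_bytes=64 * 1024):
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        sample = f.read(sample_bytes)
        f.seek(0)
        try:
            dialect = csv.Sniffer().sniff(sample)     # guess delimiter/quoting from a sample
        except csv.Error:
            dialect = csv.excel                       # fall back to the default dialect
        widths = [len(row) for row in csv.reader(f, dialect)]

    counts = Counter(widths)
    dominant = counts.most_common(1)[0][0] if counts else 0
    return {
        "delimiter": getattr(dialect, "delimiter", ","),
        "rows": len(widths),
        "dominant_width": dominant,
        "deviating_rows": [i for i, w in enumerate(widths) if w != dominant][:10],
    }

# print(structural_report("example.csv"))   # path is a placeholder
```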
Next, we characterize the structure of files implicitly by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, which is based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system that assists users in identifying layout templates in a dataset, i.e., classes of files that have the same structure and can therefore be prepared by applying the same preparation pipeline.
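A crude stand-in for such a measure (not the graph-based algorithm behind Mondrian) abstracts every line into a signature of character classes and compares the resulting signature sets, e.g., with a Jaccard score:

```python
def line_signature(line):
    """Abstract a line into its structure: D = digit run, A = letter run, other characters kept verbatim."""
    out = []
    for ch in line.rstrip("\n"):
        cls = "D" if ch.isdigit() else "A" if ch.isalpha() else ch
        if not out or out[-1] != cls:          # run-length collapse keeps delimiters visible
            out.append(cls)
    return "".join(out)

def structural_similarity(lines_a, lines_b):
    sig_a = {line_signature(l) for l in lines_a}
    sig_b = {line_signature(l) for l in lines_b}
    if not sig_a and not sig_b:
        return 1.0
    return len(sig_a & sig_b) / len(sig_a | sig_b)   # Jaccard over line-structure signatures

a = ["id;name;price", "1;apple;0.50", "2;pear;0.40"]
b = ["id;name;price", "7;plum;1.20"]
print(structural_similarity(a, b))    # 1.0: same delimiter and row layout, so the same pipeline applies
```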
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings to several tasks, namely dialect detection, row classification, and data preparation effort estimation.
Our experimental results show that structural metadata, whether captured explicitly with parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, is fundamental for automating several tasks, for scaling preparation up to large quantities of files, and for providing repeatable preparation pipelines.
Advancements in computer vision techniques driven by machine learning have facilitated robust and efficient estimation of attributes such as depth, optical flow, albedo, and shading. To encapsulate all such underlying properties associated with images and videos, we evolve the concept of intrinsic images towards intrinsic attributes. Further, rapid hardware growth in the form of high-quality smartphone cameras, readily available depth sensors, mobile GPUs, and dedicated neural processing units has made image and video processing pervasive. In this thesis, we explore the synergies between these two advancements and propose novel image and video processing techniques and systems based on them.
To begin with, we investigate intrinsic image decomposition approaches and analyze how they can be implemented on mobile devices. We propose an approach that considers not only diffuse reflection but also specular reflection; it allows us to decompose an image into specularity, albedo, and shading on a resource-constrained system (e.g., smartphones or tablets) using the depth data provided by the built-in depth sensors. In addition, we explore how on-device depth data can further be used to add an immersive dimension to 2D photos, e.g., showcasing parallax effects via 3D photography. In this regard, we develop a novel system for interactive 3D photo generation and stylization on mobile devices. Further, we investigate how adaptive manipulation of baseline-albedo (i.e., chromaticity) can be used for efficient visual enhancement under low-lighting conditions. The proposed technique allows for interactive editing of enhancement settings while achieving improved quality and performance.
We analyze the inherent optical flow and temporal noise as intrinsic properties of a video and propose two new techniques for applying these intrinsic attributes for the purpose of consistent video filtering. To this end, we investigate how to remove temporal inconsistencies perceived as flickering artifacts. One of the techniques does not require costly optical flow estimation, while both provide interactive consistency control.
Using intrinsic attributes for image and video processing enables new solutions for mobile devices, a pervasive class of visual computing devices, and will facilitate novel applications for Augmented Reality (AR), 3D photography, and video stylization. The proposed low-light enhancement techniques can also improve the accuracy of high-level computer vision tasks (e.g., face detection) under low-light conditions. Finally, our approach for consistent video filtering can extend a wide range of image-based processing to videos.
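To give a flavour of flow-free temporal filtering (a minimal exponential-smoothing sketch, not the consistency techniques proposed in the thesis; the blending weight and the toy flicker model are assumptions), per-frame outputs can simply be blended with the previous smoothed result:

```python
import numpy as np

def stabilize(frames, alpha=0.8):
    """Blend each per-frame result with the previous output; no optical flow required.

    alpha controls temporal smoothing: higher values suppress flicker more strongly
    but introduce ghosting where the scene actually moves.
    """
    prev = None
    for frame in frames:
        prev = frame if prev is None else alpha * prev + (1.0 - alpha) * frame
        yield prev

# Toy clip: a static scene corrupted by global per-frame brightness flicker
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
noisy = [np.clip(scene + rng.normal(scale=0.05), 0, 1) for _ in range(30)]

smoothed = list(stabilize(noisy))
print(np.std([f.mean() for f in noisy]), np.std([f.mean() for f in smoothed]))  # flicker amplitude drops
```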
BCH codes with combined correction and detection: Based on BCH codes, this thesis investigates how error correction can be combined with the detection of higher numbers of errors. For 1-bit correction with additional detection of higher error counts, an approach was developed that detects these additional errors by solving simple equations of the form s_x = s_1^x in parallel. The number of these equations is linear in the number of higher error counts to be checked.
In addition, this thesis presents a further, general approach for corrections of up to 4-bit errors with additional detection of higher error counts. Here, speculative error corrections are carried out in parallel for all correctable numbers of errors. From the determined error positions, speculative syndrome components are generated, by which the error positions can be confirmed and higher, detectable error counts can be ruled out. The presented approaches differ from the known approach in which the number of error positions is determined by computing determinants in descending order until the first determinant evaluates to 0; there, the determinant computations require a factorial number of calculations relative to the number of errors to be checked. Compared to the known sequential Berlekamp-Massey procedure, the calculations in the presented approach consist of simple equations and can be performed in parallel. In the known method for the parallel correction of 4-bit errors, an equation of degree four over GF(2^m) has to be solved; this is done by solving an auxiliary equation of degree three and four equations of degree two in parallel. This thesis shows that one of the second-degree equations can be saved, which simplifies the hardware of a parallel implementation of the 4-bit correction. The results were verified by extensive software simulations and hardware implementations.
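To make the check s_x = s_1^x concrete, the following sketch (illustrative only, using a small GF(2^4) field and hand-picked error positions; it is not the hardware design of the thesis) evaluates syndromes of an error pattern. For a received word equal to a valid codeword plus an error pattern, the syndromes of the received word equal those of the error pattern alone, so the error pattern suffices for the demonstration:

```python
# GF(2^4) with primitive polynomial x^4 + x + 1 (0b10011)
EXP = [0] * 15
a = 1
for i in range(15):
    EXP[i] = a
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
LOG = {v: i for i, v in enumerate(EXP)}

def gf_pow(v, e):
    return 0 if v == 0 else EXP[(LOG[v] * e) % 15]

def syndrome(error_positions, j):
    """S_j of an error pattern e(x): XOR of alpha^(i*j) over the erroneous bit positions i."""
    s = 0
    for i in error_positions:
        s ^= EXP[(i * j) % 15]
    return s

def classify(error_positions):
    s1, s3 = syndrome(error_positions, 1), syndrome(error_positions, 3)
    if s1 == 0 and s3 == 0:
        return "no error detected"
    if s1 != 0 and s3 == gf_pow(s1, 3):          # single error at i: S_1 = alpha^i, S_3 = alpha^(3i)
        return f"single error at position {LOG[s1]} (correctable)"
    return "more than one error detected"

print(classify([6]))      # single error at position 6 (correctable)
print(classify([2, 9]))   # more than one error detected: S_3 != S_1^3
```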
The Security Operations Center (SOC) represents a specialized unit responsible for managing security within enterprises. To aid in its responsibilities, the SOC relies heavily on a Security Information and Event Management (SIEM) system that functions as a centralized repository for all security-related data, providing a comprehensive view of the organization's security posture. Due to their ability to offer such insights, SIEMs are considered indispensable tools facilitating SOC functions, such as monitoring, threat detection, and incident response.
Despite advancements in big data architectures and analytics, most SIEMs fall short of keeping pace. Architecturally, they function merely as log search engines, lacking support for distributed, large-scale analytics. Analytically, they rely on rule-based correlation and neglect more advanced data science and machine learning techniques.
This thesis first proposes a blueprint for next-generation SIEM systems that emphasize distributed processing and multi-layered storage to enable data mining at a big data scale. Next, with the architectural support, it introduces two data mining approaches for advanced threat detection as part of SOC operations.
First, we propose a novel graph mining technique that formulates threat detection within the SIEM system as a large-scale graph mining and inference problem, built on the principles of guilt-by-association and exempt-by-reputation. The approach entails the construction of a Heterogeneous Information Network (HIN) that models shared characteristics and associations among entities extracted from SIEM-related events and logs. On this network, a novel graph-based inference algorithm is used to infer a node's maliciousness score based on its associations with other entities in the HIN. Second, we propose an innovative outlier detection technique that imitates a SOC analyst's reasoning process to find anomalies and outliers. The approach emphasizes explainability and simplicity, achieved by combining the output of simple context-aware univariate submodels that calculate an outlier score for each entry.
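In the spirit of the second technique, a deliberately simple sketch (not the thesis's model: the submodels here are plain robust z-scores per feature, combined by averaging, and the event features are made up) shows how an explainable per-entry outlier score can be composed from univariate scores:

```python
import numpy as np

def univariate_scores(column):
    """Robust z-score per value: distance from the median in units of the MAD."""
    med = np.median(column)
    mad = np.median(np.abs(column - med)) or 1e-9
    return np.abs(column - med) / (1.4826 * mad)

def outlier_report(events, features):
    """events: feature name -> 1-D array; returns the combined score plus per-feature contributions."""
    per_feature = {f: univariate_scores(events[f]) for f in features}
    combined = np.mean([per_feature[f] for f in features], axis=0)
    return combined, per_feature

rng = np.random.default_rng(0)
events = {
    "bytes_sent": np.append(rng.normal(500, 50, 99), 40_000),   # one exfiltration-like spike
    "login_hour": np.append(rng.normal(14, 2, 99), 3),          # one off-hours login
}
combined, per_feature = outlier_report(events, ["bytes_sent", "login_hour"])
top = int(np.argmax(combined))
print(top, {f: round(float(per_feature[f][top]), 1) for f in per_feature})  # which features drove the score
```

Because the final score is an average of per-feature scores, an analyst can read off directly which attributes of an entry made it anomalous.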
Both approaches were tested in academic and real-world settings, demonstrating high performance when compared to other algorithms as well as practicality alongside a large enterprise's SIEM system.
This thesis establishes the foundation for next-generation SIEM systems that can enhance today's SOCs and facilitate the transition from human-centric to data-driven security operations.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime model in which the expressiveness of the model and of the model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure as well as on the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history, especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
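To give a flavour of the kind of question such temporal queries answer, a heavily simplified sketch follows (not the query language or the incremental algorithm of the thesis: the "pattern" is a single fixed edge and the history is a list of change events). It maintains the intervals during which a structural pattern was present and then asks whether it held for a given duration:

```python
def match_intervals(changes, edge):
    """changes: time-ordered (t, op, edge) events with op in {'add', 'remove'}.
    Returns (start, end_or_None) intervals during which `edge` existed in the model."""
    intervals, start = [], None
    for t, op, e in changes:
        if e != edge:
            continue
        if op == "add" and start is None:
            start = t
        elif op == "remove" and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, None))      # still present: the answer may depend on future changes
    return intervals

def held_for(intervals, duration, now):
    return any((end if end is not None else now) - start >= duration for start, end in intervals)

changes = [(0, "add", ("shop", "db")), (4, "remove", ("shop", "db")), (6, "add", ("shop", "db"))]
ivs = match_intervals(changes, ("shop", "db"))
print(ivs)                        # [(0, 4), (6, None)]
print(held_for(ivs, 5, now=9))    # False so far; becomes True if the connection persists until t = 11
```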
Most machine learning methods provide only point estimates when being queried to predict on new data. This is problematic when the data is corrupted by noise, e.g., from imperfect measurements, or when the queried data point is very different from the data that the machine learning model has been trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
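For reference, exact GP regression with a squared-exponential kernel reduces to a few lines of linear algebra (a textbook sketch; the kernel hyperparameters and noise level are fixed by hand rather than optimised):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, X_star, noise=0.1):
    """Posterior mean and marginal variance of a zero-mean GP at the test inputs X_star."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    K_s = rbf(X_star, X)
    mean = K_s @ np.linalg.solve(K, y)
    cov = rbf(X_star, X_star) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + rng.normal(scale=0.1, size=20)
mean, var = gp_posterior(X, y, np.linspace(-5, 5, 5))
print(np.round(mean, 2), np.round(var, 2))   # predictive variance grows far away from the training data
```

The cubic cost of the linear solves is exactly what restricts exact inference to small data regimes and motivates the sparse and variational approximations discussed in the following.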
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example is deep Gaussian processes, which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study is Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, designing better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnical or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling’s classical segregation model two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors, which have the same type as the agent, is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
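The classical random process is easy to simulate (a minimal sketch of the jump dynamics on a torus grid; grid size, occupancy, tolerance, and the number of steps are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, TAU, EMPTY_FRAC = 40, 0.5, 0.1
grid = rng.choice([0, 1, 2], size=(N, N), p=[EMPTY_FRAC, 0.45, 0.45])   # 0 = empty, 1/2 = agent types

def content(grid, r, c):
    t = grid[r, c]
    neigh = [grid[(r + dr) % N, (c + dc) % N]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [v for v in neigh if v != 0]
    return not occupied or sum(v == t for v in occupied) / len(occupied) >= TAU

for _ in range(100_000):                 # jump dynamics: discontent agents move to random empty cells
    r, c = rng.integers(N, size=2)
    if grid[r, c] == 0 or content(grid, r, c):
        continue
    empties = np.argwhere(grid == 0)
    er, ec = empties[rng.integers(len(empties))]
    grid[er, ec], grid[r, c] = grid[r, c], 0

happy = sum(content(grid, r, c) for r in range(N) for c in range(N) if grid[r, c] != 0)
print("content agents:", happy, "of", int((grid != 0).sum()))   # homogeneous clusters emerge
```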
Although the model is well studied, previous research focused on a random process point of view. However, it is more realistic to assume instead that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As the first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent and that pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that there exist empty vertices in the graph and that agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the movement of agents locally. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model and motivated by sociological surveys, we ask the same core game-theoretic questions for non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
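The quantity studied for the Flip Schelling Process is also easy to estimate empirically (a small Monte-Carlo sketch of one synchronous flip round on a random geometric graph; graph size, radius, and the tie-breaking rule are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
n, radius = 2_000, 0.05

pts = rng.random((n, 2))
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
adj = ((d2 <= radius**2) & ~np.eye(n, dtype=bool)).astype(np.int32)   # random geometric graph
types = rng.integers(0, 2, n)                                         # i.i.d. initial types

def monochrome_fraction(adj, types):
    i, j = np.triu_indices(n, 1)
    edges = adj[i, j] == 1
    return (types[i[edges]] == types[j[edges]]).mean()

print("before flip:", round(monochrome_fraction(adj, types), 3))      # close to 1/2

# One synchronous round: every vertex adopts the predominant type in its neighbourhood (ties keep the own type)
same = adj @ (types == 1).astype(np.int32)     # number of type-1 neighbours
deg = adj.sum(1)
new_types = np.where(2 * same > deg, 1, np.where(2 * same < deg, 0, types))
print("after flip: ", round(monochrome_fraction(adj, new_types), 3))  # noticeably above 1/2
```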
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires to balance hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
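One common strategy to reconcile precision with GPU memory and bandwidth limits is budget-constrained level-of-detail selection over a spatial hierarchy (a generic sketch of that idea, not the system architecture developed in this thesis; the node layout and the screen-space importance heuristic are illustrative):

```python
import heapq
from dataclasses import dataclass, field
from typing import List

@dataclass
class OctreeNode:
    center_dist: float                   # distance from the camera, assumed precomputed per frame
    radius: float                        # bounding-sphere radius
    points: int                          # number of points stored at this node's LOD level
    children: List["OctreeNode"] = field(default_factory=list)

def projected_size(node, fov_scale=1000.0):
    """Rough screen-space footprint: larger when the node is big and close to the camera."""
    return fov_scale * node.radius / max(node.center_dist, 1e-6)

def select_nodes(root, point_budget):
    """Greedily refine the visually most important nodes until the point budget is exhausted."""
    selected, used, counter = [], 0, 0
    heap = [(-projected_size(root), counter, root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        if used + node.points > point_budget:
            continue                                  # would exceed the GPU memory budget: keep coarser LOD
        selected.append(node)
        used += node.points
        for child in node.children:
            counter += 1
            heapq.heappush(heap, (-projected_size(child), counter, child))
    return selected, used

root = OctreeNode(10.0, 8.0, 50_000, [OctreeNode(6.0, 4.0, 50_000), OctreeNode(14.0, 4.0, 50_000)])
nodes, used = select_nodes(root, point_budget=120_000)
print(len(nodes), used)    # nearer, larger nodes are refined first, distant ones stay coarse
```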
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.