004 Data Processing; Computer Science
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organizations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organization's information systems and are used by process experts to gain deep insights into the organization's running processes. From the events contained in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and thus not utilized to their full potential.
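To make the notion of an event log concrete, the following minimal sketch (illustrative only; the attribute names are not taken from the thesis) represents events with the three attributes virtually every process mining technique assumes, a case identifier, an activity name, and a timestamp, and groups them into traces:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative event log: each event carries at least a case identifier,
# an activity name, and a timestamp.
events = [
    {"case": "order-1", "activity": "Create Order",  "time": datetime(2023, 1, 5, 9, 0)},
    {"case": "order-1", "activity": "Approve Order", "time": datetime(2023, 1, 5, 9, 30)},
    {"case": "order-2", "activity": "Create Order",  "time": datetime(2023, 1, 5, 9, 10)},
    {"case": "order-1", "activity": "Ship Order",    "time": datetime(2023, 1, 6, 14, 0)},
]

# A trace is the time-ordered sequence of activities of one case; the event
# log is the collection of all traces.
traces = defaultdict(list)
for event in sorted(events, key=lambda e: e["time"]):
    traces[event["case"]].append(event["activity"])

for case, trace in traces.items():
    print(case, "->", trace)
# order-1 -> ['Create Order', 'Approve Order', 'Ship Order']
# order-2 -> ['Create Order']
```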
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, the contextual information that cannot be captured by applying existing process mining techniques. Within this gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Third, there is a lack of a holistic representation of historical data manipulation at the process model level of abstraction. Finally, each process model is presumed to be independent of other process models when discovered from an event log, thus ignoring possible data dependencies between processes within an organization.
For each of these problems, this thesis proposes a dedicated method. The first method extracts an event log solely from the transactions performed on the database, which are stored in the form of redo logs. The second method discovers the underlying data model that is implicitly embedded in the event log, thus complementing the discovered process model with important domain knowledge. The third method captures, at the process model level, how the data affects the running process instances. Finally, the fourth method discovers the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly represents such complex interdependencies in a business process architecture.
All methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and to conceptually demonstrate its applicability in a typical setting. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
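The following toy sketch illustrates the general idea of incremental execution in a network of operations: each node caches its last result and forwards only deltas to its successors. It is a simplification that assumes each operation distributes over set union, and the class and method names are illustrative rather than the report's API:

```python
# Toy sketch of incremental delta propagation through a network of
# operations; simplifying assumption: every operation distributes over
# set union, so it can be applied to deltas instead of full inputs.
class Node:
    def __init__(self, op):
        self.op = op            # maps a set of elements to a set of results
        self.cache = set()      # last computed result
        self.successors = []

    def update(self, added, removed):
        new_add = self.op(added) - self.cache
        new_rem = self.op(removed) & self.cache
        self.cache |= new_add
        self.cache -= new_rem
        if new_add or new_rem:  # forward only the delta, never the full set
            for succ in self.successors:
                succ.update(new_add, new_rem)

# Example: a query node selecting class elements feeds a renaming node.
select_classes = Node(lambda xs: {x for x in xs if x.startswith("class:")})
rename = Node(lambda xs: {x.upper() for x in xs})
select_classes.successors.append(rename)

select_classes.update({"class:Foo", "attr:x"}, set())
print(rename.cache)  # {'CLASS:FOO'}
```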
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which importantly includes tolerating temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., the lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies for decoupling rendering efforts from data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
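As an illustration of decoupling rendering effort from data size, the following sketch shows one common strategy for budget-limited rendering of hierarchical point clouds (not necessarily the exact approach of the thesis): nodes of a spatial hierarchy are visited in order of decreasing projected screen-space size until a fixed point budget is exhausted. All names and the demo hierarchy are illustrative:

```python
from dataclasses import dataclass, field
from typing import List
import heapq

@dataclass
class Node:
    num_points: int
    size: float                  # projected screen-space size (assumed given)
    children: List["Node"] = field(default_factory=list)

def select_nodes(root: Node, point_budget: int) -> List[Node]:
    """Visit nodes coarse-to-fine, largest on screen first, until the budget is spent."""
    selected, remaining = [], point_budget
    heap = [(-root.size, id(root), root)]        # max-heap via negated size
    while heap and remaining > 0:
        _, _, node = heapq.heappop(heap)
        if node.num_points > remaining:
            continue                             # node no longer fits the budget
        selected.append(node)
        remaining -= node.num_points
        for child in node.children:
            heapq.heappush(heap, (-child.size, id(child), child))
    return selected

# Example: the budget admits the root and only the larger of its two children.
root = Node(100, 1.0, [Node(60, 0.4), Node(80, 0.6)])
print([n.size for n in select_nodes(root, 200)])  # [1.0, 0.6]
```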
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased in several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of point clouds as a universal type of spatial base data that can be used directly for visualization.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner, successively presented with all the information of a target language, conjectures which language it believes to be shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful; fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
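The following toy sketch illustrates the Gold-style setting with a finite, purely illustrative hypothesis space: the learner reads ever longer prefixes of a text for the target language and conjectures the first hypothesis that covers everything seen so far:

```python
# Toy hypothesis space: index -> language (finite sets here, purely for
# illustration; real hypothesis spaces enumerate recursively enumerable
# languages).
hypotheses = [
    {0},
    {0, 2},
    {0, 2, 4},
]

def learner(prefix):
    """Conjecture the first hypothesis that contains everything seen so far."""
    seen = set(prefix)
    for index, language in enumerate(hypotheses):
        if seen <= language:
            return index
    return None

text = [0, 2, 2, 4, 0]  # an enumeration of the target language {0, 2, 4}
conjectures = [learner(text[:n + 1]) for n in range(len(text))]
print(conjectures)  # [0, 1, 1, 2, 2]
```

On this text, the conjectures converge syntactically to index 2, the correct explanation; note that this enumerative learner is also consistent, since each conjecture includes the data it was built on.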
Recently, the literature has shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them. The results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
In this thesis, we transfer the map charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data they are built on. While learners cannot be assumed consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Furthermore, it is known that learners following various restrictions may be assumed consistent. We contribute to the literature by showing that this is the case for all studied restrictions.
We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they allow simulation arguments to be applied when, for example, comparing two learning paradigms to each other. The literature provides rich ground on when learners may be assumed strongly Bc-locking, which we complete for all studied restrictions.
Diversity is a term that is broadly used and that poses challenges for informatics research, development and education. Diversity concerns may relate to unequal participation, knowledge and methodology, curricula, institutional planning, etc. For many of these areas, measures, guidelines and best practices on diversity awareness exist. However, a systemic, sustainable impact of diversity measures on informatics is still largely missing. In this paper, I explore what working with diversity and gender concepts in informatics entails, identify the main challenges and provide thoughts for improvement. The paper includes definitions of diversity and intersectionality, reflections on the disciplinary basis of informatics and practical implications of integrating diversity in informatics research and development. In the final part, two concepts from the social sciences and the humanities, the notion of “third space”/hybridity and the notion of “feminist ethics of care”, serve as a lens to foster more sustainable ways of working with diversity in informatics.
This technical report presents the results of student projects which were prepared during the lecture “Operating Systems II” offered by the “Operating Systems and Middleware” group at HPI in the Summer term of 2020. The lecture covered advanced aspects of operating system implementation and architecture on topics such as Virtualization, File Systems and Input/Output Systems. In addition to attending the lecture, the participating students were encouraged to gather practical experience by completing a project on a closely related topic over the course of the semester. The results of 10 selected exceptional projects are covered in this report.
The students have completed hands-on projects on the topics of Operating System Design Concepts and Implementation, Hardware/Software Co-Design, Reverse Engineering, Quantum Computing, Static Source-Code Analysis, Operating Systems History, Application Binary Formats and more. It should be recognized that over the course of the semester all of these projects have achieved outstanding results which went far beyond the scope and the expectations of the lecture, and we would like to thank all participating students for their commitment and their effort in completing their respective projects, as well as their work on compiling this report.
Ethical issues surrounding modern computing technologies play an increasingly important role in the public debate. Yet ethics still appears either not at all or only to a very small extent in computer science degree programs. This paper provides an argument for the value of ethics beyond a pure responsibility perspective and describes the positive value of ethical debate for future computer scientists. It also provides a systematic analysis of the module handbooks of 67 German universities and shows that there is indeed a lack of ethics in computer science education. Finally, we present a principled design of a compulsory course for undergraduate students.
Learning causal structures from observational data is an omnipresent challenge in data science. The amount of observational data available to Causal Structure Learning (CSL) algorithms is increasing, as data is nowadays collected at high frequency from many data sources. While processing more data generally yields higher accuracy in CSL, the concomitant increase in the runtime of CSL algorithms hinders their widespread adoption in practice. CSL is a parallelizable problem. Existing parallel CSL algorithms address execution on multi-core Central Processing Units (CPUs) with dozens of compute cores. However, modern computing systems are often heterogeneous and equipped with Graphics Processing Units (GPUs) to accelerate computations. Typically, these GPUs provide several thousand compute cores for massively parallel data processing.
To shorten the runtime of CSL algorithms, we design efficient execution strategies that leverage the parallel processing power of GPUs. Particularly, we derive GPU-accelerated variants of a well-known constraint-based CSL method, the PC algorithm, as it allows choosing a statistical Conditional Independence test (CI test) appropriate to the observational data characteristics.
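For orientation, the following CPU-side sketch shows the skeleton phase of the PC algorithm with a pluggable conditional independence test; the many independent tests per level are what make the algorithm amenable to massive (GPU) parallelization. This is a simplified sketch, not the thesis's GPU implementation:

```python
from itertools import combinations

# Simplified PC skeleton phase. `ci_test(i, j, sep)` stands for any
# conditional independence test matching the data characteristics
# (e.g., partial correlation for Gaussian data).
def pc_skeleton(num_vars, ci_test):
    adj = {i: set(range(num_vars)) - {i} for i in range(num_vars)}
    level = 0
    while any(len(adj[i]) - 1 >= level for i in adj):
        for i in range(num_vars):
            for j in list(adj[i]):
                # Test i and j given all size-`level` subsets of i's other
                # neighbours; remove the edge on the first independence found.
                # These tests are independent and can run in parallel.
                for sep in combinations(adj[i] - {j}, level):
                    if ci_test(i, j, set(sep)):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return adj

# Example with a stub test that separates variables 0 and 2 given {1}.
print(pc_skeleton(3, lambda i, j, sep: {i, j} == {0, 2} and sep == {1}))
# {0: {1}, 1: {0, 2}, 2: {1}}
```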
Our two main contributions are: (1) to reflect differences in the CI tests, we design three GPU-based variants of the PC algorithm tailored to CI tests that handle data with the following characteristics. We develop one variant for data assuming the Gaussian distribution model, one for discrete data, and another for mixed discrete-continuous data and data with non-linear relationships. Each variant is optimized for the appropriate CI test leveraging GPU hardware properties, such as shared or thread-local memory. Our GPU-accelerated variants outperform state-of-the-art parallel CPU-based algorithms by factors of up to 93.4× for data assuming the Gaussian distribution model, up to 54.3× for discrete data, up to 240× for continuous data with non-linear relationships and up to 655× for mixed discrete-continuous data. However, the proposed GPU-based variants are limited to datasets that fit into a single GPU’s memory. (2) To overcome this shortcoming, we develop approaches to scale our GPU-based variants beyond a single GPU’s memory capacity. For example, we design an out-of-core GPU variant that employs explicit memory management to process arbitrary-sized datasets. Runtime measurements on a large gene expression dataset reveal that our out-of-core GPU variant is 364 times faster than a parallel CPU-based CSL algorithm. Overall, our proposed GPU-accelerated variants speed up CSL in numerous settings to foster CSL’s adoption in practice and research.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
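To illustrate the flavor of such iterative heuristics, the following hedged sketch performs greedy, cost-based index selection under a storage budget; it is not the published Extend algorithm, and the cost model and names are illustrative. Because the workload cost is re-evaluated on the full set of chosen indexes, the sketch also captures index interaction:

```python
# Greedy, cost-based index selection under a storage budget (illustrative).
def greedy_index_selection(candidates, workload_cost, index_size, budget):
    chosen, used = set(), 0
    current = workload_cost(chosen)
    while True:
        best, best_ratio = None, 0.0
        for cand in candidates - chosen:
            size = index_size(cand)
            if used + size > budget or size <= 0:
                continue
            benefit = current - workload_cost(chosen | {cand})
            if benefit / size > best_ratio:    # best cost reduction per byte
                best, best_ratio = cand, benefit / size
        if best is None:
            return chosen                      # no candidate still helps and fits
        chosen.add(best)
        used += index_size(best)
        current = workload_cost(chosen)        # re-evaluate: index interaction

# Toy usage: the benefits of indexes 'a' and 'b' overlap.
costs = {frozenset(): 100, frozenset({"a"}): 60, frozenset({"b"}): 70,
         frozenset({"a", "b"}): 55}
picked = greedy_index_selection({"a", "b"}, lambda s: costs[frozenset(s)],
                                lambda c: 10, budget=20)
print(sorted(picked))  # ['a', 'b']
```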
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
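As a concrete example of such a dependency-based technique, the following sketch implements one classic rewrite, pruning GROUP BY columns that are functionally determined by other grouping columns; the column names are illustrative, and this is a simplified illustration rather than the Hyrise implementation:

```python
# If a functional dependency X -> y holds and all of X is already grouped on,
# the column y is redundant in the GROUP BY clause and can be pruned.
def prune_group_by(group_cols, fds):
    """fds: list of (determinant_columns, dependent_column) pairs."""
    cols = list(group_cols)
    changed = True
    while changed:
        changed = False
        for determinants, dependent in fds:
            if dependent in cols and set(determinants) <= set(cols) - {dependent}:
                cols.remove(dependent)   # adds no grouping power
                changed = True
    return cols

# Example: customer_id -> customer_name justifies rewriting
# GROUP BY customer_id, customer_name  into  GROUP BY customer_id.
print(prune_group_by(["customer_id", "customer_name"],
                     [(["customer_id"], "customer_name")]))
# ['customer_id']
```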
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.