004 Data processing; Computer science
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board Deutsche Bahn's advanced TrainLab (ICE-TD 605 017).
For the project, a consortium was formed consisting of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity. These partners not only pooled competencies in railway operation, computer science, regulation, and approval, but also combined industrial experience, academic research, and startup enthusiasm.
Distributed ledger technologies (DLTs) define distributed databases and provide a digital protocol for transactions between business partners without the need for a trusted intermediary. Implementing a blockchain with real-time requirements for the local network of a railway system (e.g., an interlocking or a train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to modify standard blockchain protocols.
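The core guarantee of such a juridical recorder, tamper-evident logging, can be illustrated with a hash-chained append-only log. The following sketch is hypothetical (class and field names are illustrative) and deliberately omits the replication and consensus layer that the actual project builds on:

```python
import hashlib
import json
import time

class JuridicalLog:
    """Minimal sketch of a hash-chained, append-only recorder.

    Each entry commits to its predecessor via a SHA-256 hash, so any
    later modification of a logged record breaks the chain. A real DLT
    additionally replicates entries across nodes and runs a consensus
    protocol; both are omitted here.
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, payload, timestamp=None):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {
            "timestamp": timestamp if timestamp is not None else time.time(),
            "payload": payload,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash and check the chain links."""
        prev_hash = self.GENESIS
        for record in self.entries:
            if record["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

log = JuridicalLog()
log.append({"component": "interlocking", "signal": "HP1"}, timestamp=1)
log.append({"component": "train", "speed_kmh": 140}, timestamp=2)
assert log.verify()

log.entries[0]["payload"]["signal"] = "HP0"   # tamper with a logged record
assert not log.verify()                       # tampering is detected
```

The point of the chained hashes is that immutability becomes checkable after the fact, which is exactly the property a legally relevant recorder needs.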
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture, RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved and safety-relevant CCS of railways that have so far been considered neither for trackside nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component of a single manufacturer. All centralized approaches are in question.
The research project RailChain is funded by the mFUND program and provides practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the trackside and on-board architecture given by OCORA and EULYNX.
Logged data can now be analysed sooner, and its trustworthiness is increased. This enables, for example, auditable predictive maintenance, because the data is guaranteed to be authentic and unmodified at any point in time.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner, successively presented with all information about a target language, conjectures which language it believes to be shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful. Fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
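The Gold-style setting can be made concrete with a toy hypothesis space. The sketch below (all names hypothetical) uses the finite languages L_n = {0, ..., n} and a learner that conjectures the index of the largest element seen so far; on any text for L_n, its conjectures converge syntactically to n, and the learner is also consistent, since its conjecture always contains all data seen:

```python
# Hypothesis space: L_n = {0, 1, ..., n}. A text for L_n enumerates its
# elements, repetitions allowed. The learner outputs an index after each datum.
def learner(seen):
    """Conjecture the index of the smallest L_n containing everything seen."""
    return max(seen)

def explanatory_learns(text):
    """Return the learner's sequence of conjectures on a finite text prefix."""
    return [learner(text[:i]) for i in range(1, len(text) + 1)]

# A text for L_3 = {0, 1, 2, 3}:
text = [1, 0, 3, 2, 3, 0, 3]
print(explanatory_learns(text))  # [1, 1, 3, 3, 3, 3, 3] — converges to 3
```

Restrictions of the kind studied in the thesis would constrain this conjecture sequence, e.g. monotonicity requires that later conjectured languages never shrink.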
Recently, the literature shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them. The results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
In this thesis, we transfer the map charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data they are built on. While learners cannot be assumed consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Furthermore, it is known that learners obeying various restrictions may still be assumed consistent. We contribute to the literature by showing that this is the case for all studied restrictions.
We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they make it possible to apply simulation arguments when, for example, comparing two learning paradigms to each other. The literature provides a rich foundation on when learners may be assumed strongly Bc-locking, which we complete for all studied restrictions.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
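The flavor of iterative heuristic index selection can be sketched with a simple greedy budget-constrained picker. This is a generic illustration, not the actual Extend algorithm (which additionally models index interaction and widens existing index candidates step by step); all names and numbers below are made up:

```python
def greedy_index_selection(candidates, budget):
    """Greedily pick indexes by benefit per unit of storage until the
    storage budget is exhausted.

    `candidates` maps an index name to (estimated_benefit, size);
    `budget` is the total storage allowed for indexes.
    """
    chosen, used = [], 0
    remaining = dict(candidates)
    while remaining:
        # best benefit/size ratio among candidates that still fit the budget
        best = max(
            (n for n, (_, s) in remaining.items() if used + s <= budget),
            key=lambda n: remaining[n][0] / remaining[n][1],
            default=None,
        )
        if best is None:
            break  # nothing fits anymore
        chosen.append(best)
        used += remaining.pop(best)[1]
    return chosen, used

candidates = {
    "idx_orders_date": (100, 40),   # (estimated benefit, size in MB)
    "idx_customer_id": (80, 10),
    "idx_lineitem_pk": (60, 60),
}
print(greedy_index_selection(candidates, budget=50))
# (['idx_customer_id', 'idx_orders_date'], 50)
```

Real algorithms must also re-estimate benefits after each pick, because the value of an index depends on which other indexes already exist; that interaction is exactly what makes the problem hard.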
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
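One well-known example of a dependency-based rewrite is group-by reduction: if a discovered functional dependency says one grouping column determines another, the determined column can be dropped from the GROUP BY without changing the result. The sketch below is a simplified, hypothetical illustration of this idea, not code from Hyrise:

```python
def prune_groupby(group_cols, fds):
    """Drop group-by columns that are functionally determined by the
    other group-by columns.

    `fds` is a list of (determinant_set, dependent_column) pairs, e.g.
    ({"city"}, "country") for the FD city -> country.
    """
    cols = list(group_cols)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # rhs is redundant if its determinants are all still grouped on
            if rhs in cols and set(lhs) <= (set(cols) - {rhs}):
                cols.remove(rhs)
                changed = True
    return cols

# Discovered FD: {city} -> country, so grouping by country is redundant:
#   SELECT city, country, SUM(x) ... GROUP BY city, country
#   ==> GROUP BY city (country can be fetched per group afterwards)
print(prune_groupby(["city", "country"], [({"city"}, "country")]))  # ['city']
```

Fewer grouping columns mean cheaper hashing or sorting during aggregation, which is where the reported speedups of such techniques come from.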
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to devote their time to other, less arduous tasks.
Defining the metaverse
(2023)
The term Metaverse is emerging as a result of the recent push by multinational technology conglomerates and a surge of interest in Web 3.0, blockchain, NFTs, and cryptocurrencies. From a scientific point of view, there is no definite consensus on what the Metaverse will be like. This paper collects, analyzes, and synthesizes scientific definitions and the accompanying major characteristics of the Metaverse using the methodology of a Systematic Literature Review (SLR). Two revised definitions for the Metaverse are presented, both condensing the key attributes. The first is rather simplistic and holistic, describing “a three-dimensional online environment in which users represented by avatars interact with each other in virtual spaces decoupled from the real physical world”. In contrast, the second definition is specified in more detail in the paper and discussed further. These comprehensive definitions offer specialized and general scholars an application within and beyond the scientific context of systems science, information systems science, computer science, and business informatics, while also introducing open research challenges. Furthermore, an outlook on the social, economic, and technical implications is given, and the preconditions necessary for a successful implementation are discussed.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly including the ability to live with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organizations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organization's information systems and are used by process experts to retrieve deep insights into the organization's running processes. Considering the events pertaining to such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and, thus, not utilized to their full potential.
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by applying existing process mining techniques. Within this research gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Third, there is a lack of a holistic representation of historical data manipulation at the process model level of abstraction. Last but not least, each process model is presumed to be independent of other process models when discovered from an event log, thus ignoring possible data dependencies between processes within an organization.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log only from the transactions performed on the database that are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus, complementing the discovered process model with important domain knowledge information. The third method captures, on the process model level, how the data affects the running process instances. Lastly, the fourth method is about the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly representing such complex interdependencies in a business process architecture.
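The first method, extracting an event log from redo logs, can be sketched as grouping low-level data operations into per-case traces. The record format below is a hypothetical simplification (real redo logs, e.g. Oracle's, are binary and far richer), and the field names are illustrative:

```python
from collections import defaultdict

# Simplified redo-log records: each captures one data operation with a
# timestamp, an operation type, the affected table, and a row identifier.
redo_log = [
    {"ts": 1, "op": "INSERT", "table": "orders",   "row": "o1"},
    {"ts": 2, "op": "INSERT", "table": "invoices", "row": "o1"},
    {"ts": 3, "op": "UPDATE", "table": "orders",   "row": "o2"},
    {"ts": 4, "op": "UPDATE", "table": "orders",   "row": "o1"},
]

def extract_event_log(redo_log, case_key="row"):
    """Group data operations into per-case event traces, ordered by
    timestamp; the event label combines operation type and table name."""
    traces = defaultdict(list)
    for rec in sorted(redo_log, key=lambda r: r["ts"]):
        traces[rec[case_key]].append(f'{rec["op"]} {rec["table"]}')
    return dict(traces)

print(extract_event_log(redo_log))
# {'o1': ['INSERT orders', 'INSERT invoices', 'UPDATE orders'],
#  'o2': ['UPDATE orders']}
```

The hard part in practice, glossed over here, is choosing the case notion: the same redo records yield very different event logs depending on which attribute identifies a process instance.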
All the methods introduced in this thesis are implemented as a prototype and their feasibility is proven by being applied on real-life event logs.
N-of-1 trials are the gold standard study design to evaluate individual treatment effects and derive personalized treatment strategies. Digital tools have the potential to initiate a new era of N-of-1 trials in terms of scale and scope, but fully functional platforms are not yet available. Here, we present the open source StudyU platform, which includes the StudyU Designer and StudyU app. With the StudyU Designer, scientists are given a collaborative web application to digitally specify, publish, and conduct N-of-1 trials. The StudyU app is a smartphone app with innovative user-centric elements for participants to partake in trials published through the StudyU Designer to assess the effects of different interventions on their health. Thereby, the StudyU platform allows clinicians and researchers worldwide to easily design and conduct digital N-of-1 trials in a safe manner. We envision that StudyU can change the landscape of personalized treatments both for patients and healthy individuals, democratize and personalize evidence generation for self-optimization and medicine, and can be integrated in clinical practice.
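The core analysis of an N-of-1 trial can be illustrated with a toy A-B-A-B crossover: the individual treatment effect is estimated by contrasting outcomes under intervention (B) with baseline (A). This is a deliberately minimal sketch with invented data, not StudyU code; real analyses additionally model carryover, trends, and within-phase correlation:

```python
from statistics import mean

def individual_effect(phases, outcomes):
    """Estimate the individual treatment effect in a single-patient
    crossover trial as mean(B outcomes) - mean(A outcomes).
    A negative value means the intervention lowers the outcome score.
    """
    a = [y for p, y in zip(phases, outcomes) if p == "A"]
    b = [y for p, y in zip(phases, outcomes) if p == "B"]
    return mean(b) - mean(a)

# Hypothetical daily pain scores across an A-B-A-B design:
phases   = ["A", "A", "B", "B", "A", "A", "B", "B"]
outcomes = [6.0, 5.5, 3.0, 2.5, 6.5, 6.0, 2.0, 2.5]
print(individual_effect(phases, outcomes))  # -3.5
```

Repeating the A and B phases, rather than a single switch, is what lets the design separate a genuine treatment effect from a coincidental trend over time.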