Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Supporting it is extremely valuable in our data-driven world, so that more time can be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To this end, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that cannot be processed by existing algorithms. The key to these improvements lies not only in algorithmic innovations, such as novel pruning rules or traversal strategies, but also in algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
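To illustrate why the search space explodes, consider the naive baseline the thesis improves upon: checking every ordered pair of columns for a unary inclusion dependency (IND). The sketch below is a minimal, illustrative Python version of that baseline, not one of the thesis's algorithms; the column names are invented.

```python
# A minimal sketch of naive unary inclusion dependency (IND) discovery:
# column A is included in column B if every value of A also occurs in B.
# Real discovery algorithms prune this quadratic candidate space
# aggressively; all names here are illustrative.
from itertools import permutations

def find_unary_inds(columns: dict) -> list:
    """Return all (dependent, referenced) pairs with dependent ⊆ referenced."""
    return [(a, b) for a, b in permutations(columns, 2)
            if columns[a] <= columns[b]]

# Usage: two toy columns; 'order.customer_id' ⊆ 'customer.id' holds.
cols = {
    "customer.id": {1, 2, 3, 4},
    "order.customer_id": {2, 3},
}
print(find_unary_inds(cols))  # [('order.customer_id', 'customer.id')]
```

Even this unary case requires comparing all column pairs; n-ary dependency discovery enumerates column combinations, which is where the exponential growth comes from.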
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In that way, we intend to allow for easy repeatability of our research results and also provide them for actual usage in real-world data-related projects.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist that constantly generate spatio-temporal data. These include, for example, traffic surveillance systems, which gather data about human or vehicle movements; remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes; and sensor networks in different domains, such as logistics, animal behavior study, or climate research.
For the analysis of spatio-temporal data, exploratory analysis methods based on interactive visualization are employed in addition to automatic statistical and data mining methods. These methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and the users' knowledge to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques for the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques cover data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. The framework includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps (processing, mapping, and rendering) are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process that a user can control interactively (a minimal sketch of the mapping stage follows this list).
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and at different scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
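The sketch announced above illustrates the mapping stage of the first contribution: normalizing a per-node attribute and mapping it to a color via a transfer function. In the described system this logic runs in shader programs on GPU-resident data; the numpy version here is only a CPU-side illustration of the concept, with invented names.

```python
# A minimal CPU-side sketch of the pipeline's mapping stage: normalize a
# per-node attribute and map it to a color via a lookup-table colormap.
import numpy as np

def map_attribute_to_color(values: np.ndarray,
                           colormap: np.ndarray) -> np.ndarray:
    """Map raw attribute values to RGBA colors via a color lookup table."""
    lo, hi = values.min(), values.max()
    t = (values - lo) / (hi - lo + 1e-9)          # normalize to [0, 1]
    idx = (t * (len(colormap) - 1)).astype(int)   # index into the color LUT
    return colormap[idx]

# Usage: map speeds of 5 trajectory nodes onto a 3-entry blue-to-red LUT.
lut = np.array([[0, 0, 255, 255], [255, 255, 0, 255], [255, 0, 0, 255]])
speeds = np.array([3.0, 10.0, 25.0, 40.0, 55.0])
print(map_attribute_to_color(speeds, lut))
```

Because such a mapping is just a per-element function over attribute arrays, it translates naturally to shader code, which is what enables the interactive, dynamic control over the mapping described above.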
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first is the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, recorded over the period of a month. By applying the interactive methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
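The density-map idea used for this focus+context view can be sketched very compactly: accumulate trajectory points into a 2D grid, so that dense cells highlight common paths. The thesis computes such maps interactively on the GPU; this numpy version is illustrative only.

```python
# A minimal sketch of a trajectory density map: count points per grid cell.
import numpy as np

def density_map(points: np.ndarray, bins: int = 256, extent=None) -> np.ndarray:
    """Count trajectory points per grid cell over the given (x, y) extent."""
    x, y = points[:, 0], points[:, 1]
    rng = extent or [[x.min(), x.max()], [y.min(), y.max()]]
    counts, _, _ = np.histogram2d(x, y, bins=bins, range=rng)
    return counts

# Usage: 10,000 random positions; the densest cell marks the busiest area.
pts = np.random.randn(10_000, 2)
print(density_map(pts, bins=64).max())
```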
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and identify, e.g., clusters in the networks or flow patterns.
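Placing such geo-referenced network nodes on a 2D map requires a map projection. The sketch below shows the standard Web Mercator formulas as one projection a tool of this kind might support; the function name is illustrative and not taken from the thesis.

```python
# A minimal sketch of projecting network nodes onto a 2D map: Web Mercator.
import math

def web_mercator(lat_deg: float, lon_deg: float) -> tuple:
    """Project WGS84 lat/lon (degrees) to Web Mercator meters."""
    R = 6378137.0  # Earth radius used by Web Mercator, in meters
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Usage: project two node positions and measure their planar distance.
berlin = web_mercator(52.52, 13.405)
potsdam = web_mercator(52.39, 13.06)
print(math.dist(berlin, potsdam))  # ~45 km; Mercator stretches lengths at 52°N
```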
The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software by a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge used. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify feedback loops and an interpreter to execute them. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in an adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
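To make the structure of such a feedback loop concrete, the sketch below hand-codes a single MAPE loop working on a shared runtime model. This is illustrative only: EUREMA's point is precisely that loops are specified in a modeling language and executed by an interpreter rather than hard-wired like this, and all names here are invented.

```python
# A minimal sketch of one MAPE feedback loop over a shared runtime model.
from types import SimpleNamespace

Rule = SimpleNamespace  # each rule carries a .condition and an .action

def feedback_loop(software, model, rules):
    model.update(software.observe())                      # Monitor
    triggered = [r for r in rules if r.condition(model)]  # Analyze
    plan = [r.action for r in triggered]                  # Plan
    for action in plan:                                   # Execute
        action(software)

# Usage: restart a server whenever the runtime model reports it as failed.
class Server:
    state = "failed"
    def observe(self): return {"state": self.state}
    def restart(self): self.state = "ok"

model = {}
rules = [Rule(condition=lambda m: m["state"] == "failed",
              action=lambda s: s.restart())]
srv = Server()
feedback_loop(srv, model, rules)
print(srv.state)  # "ok"
```

A runtime megamodel lifts exactly these ingredients (the model, the activities, and their wiring) into a model that is itself kept alive and executed at runtime, which is what allows a loop to be adapted dynamically.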
This thesis discusses in detail EUREMA by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA as an integrated and open MDE approach for engineering self-adaptive software seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution, providing organizations with the flexibility to adapt to the ever more rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes the externalization of the decision logic of process models into one or more separate decision models, but it does not specify how this can be achieved.
The goal of this thesis is to overcome the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from: (1) the control flow and data of process models that exist in enterprises; and (2) event logs recorded by enterprise information systems, encapsulating day-to-day operations. Furthermore, we extend the methodologies to discover decision models from event logs enriched with fuzziness, a technique for dealing with partial knowledge of the process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies provide valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. Thus, these methodologies are applicable in the real world and can be used, for example, for compliance checks, which could improve an organization's decision making and hence its overall performance.
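One common way to realize decision mining from event logs, sketched below under the assumption of invented attribute names, is to train a classifier that predicts which branch a process instance takes and then read the learned split conditions as candidate decision logic. This uses scikit-learn as a stand-in and is not the thesis's own algorithm.

```python
# A minimal sketch of decision mining: learn branch conditions from an
# event log's case attributes using a shallow decision tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: (claim_amount, customer_years); label: the branch that was taken.
X = [[200, 1], [450, 3], [900, 2], [1200, 8], [3000, 5], [5000, 10]]
y = ["auto_approve", "auto_approve", "manual_review",
     "manual_review", "manual_review", "manual_review"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["claim_amount", "customer_years"]))
# The printed rules (e.g., claim_amount <= 675 -> auto_approve) can serve
# as a starting point for a DMN decision table.
```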
Metamaterial devices
(2018)
Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, such as for decorative objects. In recent years, researchers have started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been used to create materials that, e.g., change their volume or have variable compliance.
While metamaterials were initially understood as materials, we propose to think of them as devices.
We argue that thinking of metamaterials as devices enables us to create internal structures that offer functionalities implementing an input-process-output model without electronics, purely within the material's internal structure. In this thesis, we investigate three aspects of such metamaterial devices that implement parts of the input-process-output model: (1) materials that process analog inputs by implementing mechanisms based on their microstructure, (2) materials that process digital signals by embedding mechanical computation into the object's microstructure, and (3) interactive metamaterial objects that output to the user by changing their outside to interact with their environment. The input to our metamaterial devices is provided directly by the users interacting with the device by physically pushing the metamaterial, e.g., turning a handle, pushing a button, etc.
The design of such intricate microstructures, which enable the functionality of metamaterial devices, is not obvious. The complexity of the design arises from the fact that not only is a suitable cell geometry necessary, but the cells additionally need to play together in a well-defined way. To support users in creating such microstructures, we research and implement interactive design tools. These tools allow experts to freely edit their materials, while supporting novice users by auto-generating cell assemblies from high-level input. Our tools implement easy-to-use interactions like brushing, interactively simulate the cell structures' deformation directly in the editor, and export the geometry as a 3D-printable file. Our goal is to foster more research and innovation on metamaterial devices by allowing the broader public to contribute.
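The editing model behind such a tool can be sketched as a grid of cell types plus a brush that assigns a type to a region. Real tools use 3D grids, validate that neighboring cells play together, and simulate deformation; this 2D version, with invented cell-type names, only illustrates the data structure.

```python
# A minimal sketch of a cell-grid editor's core: a grid of cell types and
# a rectangular brush, in the spirit of the "brushing" interaction above.
import numpy as np

RIGID, SHEAR = 0, 1  # illustrative cell types

def brush(grid: np.ndarray, x0: int, y0: int, x1: int, y1: int,
          cell_type: int) -> None:
    """Assign cell_type to all cells in the brushed rectangle (inclusive)."""
    grid[y0:y1 + 1, x0:x1 + 1] = cell_type

# Usage: a 6x6 rigid object with a brushed band of shearing cells, the
# kind of pattern used to embed a mechanism into the microstructure.
grid = np.full((6, 6), RIGID)
brush(grid, 0, 2, 5, 3, SHEAR)
print(grid)
```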
How can interactive devices connect with users in the most immediate and intimate way? This question has driven interactive computing for decades. Over the last decades, we have witnessed how mobile devices moved computing into users' pockets, and more recently, wearables have put computing in constant physical contact with the user's skin. In both cases, moving the devices closer to users allowed them to sense more of the user and thus act more personally. The main question that drives our research is: what is the next logical step?
Some researchers argue that the next generation of interactive devices will move past the user's skin and be directly implanted inside the user's body. To some extent, this has already happened: consider pacemakers, insulin pumps, etc. However, we argue that what we see is not devices moving towards the inside of the user's body, but rather towards the body's biological "interface" that they need to address in order to perform their function.
To implement our vision, we created a set of devices that intentionally borrow parts of the user’s body for input and output, rather than adding more technology to the body.
In this dissertation we present one specific flavor of such devices, i.e., devices that borrow the user’s muscles. We engineered I/O devices that interact with the user by reading and controlling muscle activity. To achieve the latter, our devices are based on medical-grade signal generators and electrodes attached to the user’s skin that send electrical impulses to the user’s muscles; these impulses then cause the user’s muscles to contract.
While electrical muscle stimulation (EMS) devices have been used to regenerate lost motor functions in rehabilitation medicine since the 1960s, in this dissertation, we propose a new perspective: EMS as a means for creating interactive systems.
We start by presenting seven prototypes of interactive devices that we have created to illustrate several benefits of EMS. These devices form two main categories: (1) Devices that allow users eyes-free access to information by means of their proprioceptive sense, such as the value of a variable in a computer system, a tool, or a plot; (2) Devices that increase immersion in virtual reality by simulating large forces, such as wind, physical impact, or walls and heavy objects.
Then, we analyze the potential of EMS to build interactive systems that miniaturize well and discuss how they leverage our proprioceptive sense as an I/O modality. We proceed by laying out the benefits and disadvantages of both EMS and mechanical haptic devices, such as exoskeletons.
We conclude by sketching an outline for future research on EMS by listing open technical, ethical and philosophical questions that we left unanswered.
Human actuation
(2018)
Ever since the conception of the virtual reality headset in 1968, many researchers have argued that the next step in virtual reality is to allow users to not only see and hear, but also feel virtual worlds. One approach is to use mechanical equipment to provide haptic feedback, e.g., robotic arms, exoskeletons, and motion platforms. However, the size and weight of such mechanical equipment tend to be proportional to its target's size and weight, i.e., providing human-scale haptic feedback requires human-scale equipment, often restricting it to arcades and lab environments.
The key idea behind this dissertation is to bypass mechanical equipment by instead leveraging human muscle power. We thus create software systems that orchestrate humans in doing such mechanical labor—this is what we call human actuation. A potential benefit of such systems is that humans are more generic, flexible, and versatile than machines. This brings a wide range of haptic feedback to modern virtual reality systems.
We start with a proof-of-concept system, Haptic Turk, which focuses on delivering motion experiences just like a motion platform. All Haptic Turk setups consist of a user who is supported by one or more human actuators. The user enjoys an interactive motion simulation such as a hang glider experience, but the motion is generated by the human actuators, who manually lift, tilt, and push the user's limbs or torso. To get the timing and force right, the system generates timed motion instructions in a format familiar from rhythm games.
Next, we extend the concept of human actuation from 3-DoF to 6-DoF virtual reality, where users have the freedom to walk around. TurkDeck tackles this problem by orchestrating a group of human actuators who reconfigure a set of passive props on the fly while the user is progressing through the virtual environment. TurkDeck schedules human actuators by their distances from the user and instructs them, using laser projection and voice output, to reconfigure the props at the right place at the right time.
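The distance-based scheduling can be pictured with a simple greedy rule: when a prop must be placed, pick the nearest currently idle actuator. This sketch is only an illustration of that idea, with invented data structures; the actual system also times instructions against the user's progress through the environment.

```python
# A minimal sketch of distance-based actuator scheduling, TurkDeck-style:
# choose the idle actuator closest to the prop's target position.
import math

def nearest_idle_actuator(actuators, prop_pos):
    """Return the idle actuator closest to the prop's target position."""
    idle = [a for a in actuators if not a["busy"]]
    return min(idle, key=lambda a: math.dist(a["pos"], prop_pos))

# Usage: two idle actuators; the one at (1, 0) wins for a prop at (2, 0).
crew = [{"name": "A", "pos": (1, 0), "busy": False},
        {"name": "B", "pos": (5, 5), "busy": False}]
print(nearest_idle_actuator(crew, (2, 0))["name"])  # "A"
```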
Our studies of Haptic Turk and TurkDeck showed that human actuators enjoyed the experience, but not as much as the users did. To eliminate the need for dedicated human actuators, Mutual Turk makes everyone a user by exchanging mechanical actuation between two or more users. Mutual Turk's main functionality is that it orchestrates the users so as to actuate props at just the right moment and with just the right force to produce the correct feedback in each other's experience.
Finally, we further eliminate the need for another user, making human actuation applicable to single-user experiences. iTurk makes the user constantly reconfigure and animate otherwise passive props. This allows iTurk to provide virtual worlds with constantly varying or even animated haptic effects, even though the only animate entity present in the system is the user. Our demo experience features one example of each of iTurk's two main types of props, i.e., reconfigurable props (the foldable board from TurkDeck) and animated props (the pendulum).
We conclude this dissertation by summarizing the findings of our explorations and pointing out future directions. We discuss the development of human actuation compared to traditional machine actuation, the possibility of combining human and machine actuators, and interaction models that involve multiple human actuators.
Recent years have shown an increasing sophistication of attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems, and Intrusion Detection Systems (IDSs) in general are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze events generated across the IT landscape of an enterprise. This task is achieved by the utilization of Security Information and Event Management (SIEM) systems. However, the majority of currently existing SIEM solutions are not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data at a central place, they are neither able to extract all relevant information from the events nor to correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks, consisting of multiple stages, remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists, and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist the normalized events in an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate our capability to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, by correlating monitored user accounts with previously collected public identity leaks to identify possibly compromised user accounts.
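The core of such a normalization step can be sketched as parsing heterogeneous raw log lines into a common event schema, fanned out across worker threads. The regex and field names below are illustrative, not REAMS's actual event format or algorithm.

```python
# A minimal sketch of multi-threaded log event normalization: map raw log
# lines to a common event schema; unrecognized lines are dropped.
import re
from concurrent.futures import ThreadPoolExecutor

SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")

def normalize(line: str):
    """Map a raw log line to a normalized event dict, or None if unmatched."""
    m = SSH_FAIL.search(line)
    if m:
        return {"type": "auth_failure", **m.groupdict(), "raw": line}
    return None

lines = ["Jan  1 sshd[42]: Failed password for root from 10.0.0.7",
         "Jan  1 cron[7]: job started"]
with ThreadPoolExecutor(max_workers=4) as pool:
    events = [e for e in pool.map(normalize, lines) if e]
print(events)
```

Once events share a schema like this, they can be persisted and correlated across sources, which is what enables detecting attacks that span multiple systems.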
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
The rapid development and integration of information technologies over the last decades has influenced all areas of our lives, including the business world. Yet not only have modern enterprises become digitalised; security and criminal threats have also moved into the digital sphere. To withstand these threats, modern companies must be aware of all activities within their computer networks.
The keystone of such continuous security monitoring is a Security Information and Event Management (SIEM) system that collects and processes all security-related log messages from the entire enterprise network. However, digital transformations and technologies, such as network virtualisation and the widespread usage of mobile communications, lead to a constantly increasing number of monitored devices and systems. As a result, the amount of data that has to be processed by a SIEM system is increasing rapidly. Besides that, in-depth security analysis of the captured data requires the application of rather sophisticated outlier detection algorithms with high computational complexity. Existing outlier detection methods often suffer from performance issues and are not directly applicable to high-speed and high-volume analysis of heterogeneous security-related events, which has become a major challenge for modern SIEM systems.
This thesis provides a number of solutions for the mentioned challenges. First, it proposes a new SIEM system architecture for high-speed processing of security events, implementing parallel, in-memory, and in-database processing principles. The proposed architecture also utilises the most efficient log format for high-speed data normalisation. Next, the thesis offers several novel high-speed outlier detection methods, including a generic Hybrid Outlier Detection method that can efficiently be used for Big Data analysis. Finally, a specialised User Behaviour Outlier Detection method is proposed for better threat detection and analysis of particular user behaviour cases.
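To illustrate the principle behind score-based outlier detection on user behaviour, the sketch below flags an activity count that deviates strongly from a user's history. The thesis's hybrid and user-behaviour detectors are far more elaborate; this z-score check, with invented data, only shows the basic idea.

```python
# A minimal sketch of score-based outlier detection on event counts: flag
# a value lying more than k standard deviations from the historical mean.
import statistics

def is_outlier(history: list, value: float, k: float = 3.0) -> bool:
    """Flag value if it deviates more than k standard deviations from the mean."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) > k * max(sigma, 1e-9)

# Usage: a user who normally triggers ~20 events/hour suddenly triggers 400.
events_per_hour = [18, 22, 19, 21, 20, 23, 17]
print(is_outlier(events_per_hour, 400))  # True
```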
The proposed architecture and methods were evaluated in terms of both performance and accuracy, and compared with a classical architecture and existing algorithms. These evaluations were performed on multiple data sets, including simulated data, a well-known public intrusion detection data set, and real data from a large multinational enterprise. The evaluation results demonstrate the high performance and efficacy of the developed methods.
All concepts proposed in this thesis were integrated into the prototype of the SIEM system, capable of high-speed analysis of Big Security Data, which makes this integrated SIEM platform highly relevant for modern enterprise security applications.