Refine
Year of publication
- 2018 (89)
Document Type
- Other (52)
- Article (19)
- Doctoral Thesis (14)
- Monograph/Edited Volume (4)
Language
- English (89)
Is part of the Bibliography
- yes (89)
Keywords
- E-Learning (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- real-time rendering (3)
- 3D printing (2)
- Angriffserkennung (2)
- Answer set programming (2)
- Big Data (2)
- Cloud-Security (2)
- Energy (2)
- IDS (2)
- Identitätsmanagement (2)
- Internet of Things (2)
- Kanban (2)
- Lecture Video Archive (2)
- SIEM (2)
- Scrum (2)
- Secure Configuration (2)
- Sicherheit (2)
- Smart micro-grids (2)
- capstone course (2)
- classification (2)
- fabrication (2)
- identity management (2)
- intrusion detection (2)
- programmable matter (2)
- security (2)
- virtuelle Realität (2)
- visualization (2)
- 3D Point Clouds (1)
- 3D city model (1)
- 3D geovisualization (1)
- 3D point clouds (1)
- 3D portrayal (1)
- 3D visualization (1)
- 3D-Druck (1)
- 3D-Geovisualisierung (1)
- 3D-Punktwolken (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3D-Visualisierung (1)
- ACINQ (1)
- ASIC (1)
- Agile methods (1)
- Algorithms (1)
- Android (1)
- Anomalieerkennung (1)
- Application Container Security (1)
- Approximation algorithms (1)
- Architecture synthesis (1)
- Architectures (1)
- Australian securities exchange (1)
- Automated parsing (1)
- Automatic domain term extraction (1)
- BCCC (1)
- BPMN (1)
- BTC (1)
- Batch-Aktivität (1)
- Bewegung (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Boolean Networks (1)
- Business process models (1)
- Business process simulation (1)
- Byzantine Agreement (1)
- Cheating attacks (1)
- CityGML (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Clusteranalyse (1)
- Collaborative learning (1)
- Colored Coins (1)
- Computergrafik (1)
- Critical pair analysis (CPA) (1)
- DAO (1)
- DMN (1)
- DPoS (1)
- Data Profiling (1)
- Data breach (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data partitioning (1)
- Data profiling (1)
- Data-driven strategies (1)
- Decision models (1)
- Delegated Proof-of-Stake (1)
- Denial of sleep (1)
- Disadvantaged communities (1)
- Distance Learning (1)
- Distributed Proof-of-Research (1)
- Distributed snapshot algorithm (1)
- Diverse solution enumeration (1)
- Domänenspezifische Modellierung (1)
- Dynamic pricing (1)
- Dynamic programming (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- E-Wallet (1)
- E-commerce (1)
- ECDSA (1)
- Echtzeit-Rendering (1)
- Educational Data Mining (1)
- Electrical products (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Energy efficiency (1)
- Entitätsverknüpfung (1)
- Entropy (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Ereignisnormalisierung (1)
- Eris (1)
- Ether (1)
- Ethereum (1)
- Expert knowledge (1)
- Extensibility (1)
- Fabrication (1)
- Fabrikation (1)
- Federated Byzantine Agreement (1)
- Feedback control loop (1)
- Fernerkundung (1)
- Flash (1)
- FollowMyVote (1)
- Forecasting (1)
- Fork (1)
- GMDH (1)
- GPU (1)
- GTEx (1)
- Geospatial intelligence (1)
- Geschäftsprozess (1)
- Geschäftsprozessmanagement (1)
- Graph logics (1)
- Graph transformation (double pushout approach) (1)
- Graph transformations (1)
- Grid stability (1)
- Gridcoin (1)
- HENSHIN (1)
- HLS (1)
- HTML5 (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Home appliances (1)
- IT project (1)
- Identity leak (1)
- In-Memory (1)
- Information flow control (1)
- Information system (1)
- Inklusionsabhängigkeiten (1)
- Institutions (1)
- Intent analysis (1)
- Interacting processes (1)
- Internet der Dinge (1)
- Internet of things (1)
- Interoperability (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Kette (1)
- Klassifikation (1)
- Klassifizierung (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Laserscanning (1)
- Laufzeitmodelle (1)
- Lecture Recording (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Level-of-detail visualization (1)
- LiDAR (1)
- Lightning Network (1)
- Link layer security (1)
- Load modeling (1)
- Lock-Time-Parameter (1)
- Lossy networks (1)
- Low-processing capable devices (1)
- MAC security (1)
- MOOC (1)
- MOOC Remote Lab (1)
- MQTT (1)
- Machine Learning (1)
- Machinelles Lernen (1)
- Metacrate (1)
- Metadaten (1)
- Metamaterialien (1)
- Metamaterials (1)
- Micro-grid networks (1)
- Micropayment-Kanäle (1)
- Microservices Security (1)
- Microsoft Azur (1)
- Minimum spanning tree (1)
- Mobile-Mapping (1)
- Model checking (1)
- Monitoring (1)
- Moving Target Defense (1)
- Multi-objective optimization (1)
- NASDAQ (1)
- NETCONF (1)
- NameID (1)
- Namecoin (1)
- Natural Language Processing (1)
- Navigational logics (1)
- Netzwerksicherheit (1)
- Neural Networks (1)
- Off-Chain-Transaktionen (1)
- Oligopoly competition (1)
- Onename (1)
- Ontology (1)
- OpenBazaar (1)
- Oracles (1)
- Orphan Block (1)
- P2P (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel independence (1)
- Parallel processing (1)
- Peer assessment (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Power auctioning (1)
- Power consumption characterization (1)
- Power demand (1)
- Privacy (1)
- Privatsphäre (1)
- Probabilistic timed automata (1)
- Process-related data (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Prozessausführung (1)
- Prozessmodelle (1)
- Prozessmodellierung (1)
- Psychological Emotions (1)
- RNAseq (1)
- Reconfigurable architecture (1)
- Requisit (1)
- Resource constrained smart micro-grids (1)
- Ripple (1)
- Runtime-monitoring (1)
- SCP (1)
- SHA (1)
- SPV (1)
- Schwierigkeitsgrad (1)
- Security (1)
- Security analytics (1)
- Selbst-Adaptive Software (1)
- Semantic Web (1)
- Sensor networks (1)
- Sequenzeigenschaften (1)
- Serialisierung (1)
- Servicification (1)
- Simplified Payment Verification (1)
- Skalierbarkeit der Blockchain (1)
- Slock.it (1)
- Smart Home Education (1)
- Social Media Analysis (1)
- Soft Fork (1)
- Software-Evolution (1)
- Spatial data handling systems (1)
- Stapelverarbeitung (1)
- Static analysis (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Storj (1)
- System design (1)
- TCGA (1)
- Team based assignment (1)
- Teamwork (1)
- Technology mapping (1)
- Temporallogik (1)
- The Bitfury Group (1)
- The DAO (1)
- Threat Models (1)
- Time series data (1)
- Topic modeling (1)
- Transaktion (1)
- Tree maintenance (1)
- Two-Way-Peg (1)
- Ubiquitous business process (1)
- Unified logging system (1)
- Unspent Transaction Output (1)
- Versionsverwaltung (1)
- Verträge (1)
- Veränderungsanalyse (1)
- Video annotations (1)
- Virtual Reality (1)
- Visual Analytics (1)
- Visualisierung (1)
- Vulnerability analysis (1)
- Watson IoT (1)
- Wearable (1)
- Wireless sensor networks (1)
- YANG (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- accelerator architectures (1)
- altchain (1)
- alternative chain (1)
- anomaly detection (1)
- atomic swap (1)
- batch activity (1)
- batch processing (1)
- behavior psychotherapy (1)
- bidirectional payment channels (1)
- bitcoins (1)
- blockchain (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- blumix platform (1)
- business process (1)
- business process management (1)
- chain (1)
- change detection (1)
- cloud (1)
- cloud monitoring (1)
- clustering (1)
- communication (1)
- computational design (1)
- computational models (1)
- computer graphics (1)
- computer-mediated therapy (1)
- computergestützte Gestaltung (1)
- confirmation period (1)
- consensus algorithm (1)
- consensus protocol (1)
- contest period (1)
- contracts (1)
- creativity (1)
- cross-chain (1)
- data integration (1)
- data profiling (1)
- data transfer (1)
- decentralized autonomous organization (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- design behaviour (1)
- design cognition (1)
- development artifacts (1)
- dezentrale autonome Organisation (1)
- difficulty (1)
- difficulty target (1)
- distributed computation (1)
- distributed performance monitoring (1)
- domain-specific modeling (1)
- doppelter Hashwert (1)
- double hashing (1)
- electrical muscle stimulation (1)
- elektrische Muskelstimulation (1)
- emotion measurement (1)
- entity linking (1)
- event normalization (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- federated voting (1)
- functional dependencies (1)
- funktionale Abhängigkeiten (1)
- gene selection (1)
- getypte Attributierte Graphen (1)
- haptic feedback (1)
- haptisches Feedback (1)
- hardware (1)
- hashrate (1)
- human-computer interaction (1)
- in-memory (1)
- inclusion dependencies (1)
- intelligente Verträge (1)
- inter-chain (1)
- labeling (1)
- large scale mechanism (1)
- laserscanning (1)
- ledger assets (1)
- low-duty-cycling (1)
- machine learning (1)
- medical documentation (1)
- mehrstufiger Angriff (1)
- merged mining (1)
- merkle root (1)
- metacrate (1)
- metadata (1)
- metamaterials (1)
- metric temporal logic (1)
- metrische Temporallogik (1)
- micropayment (1)
- micropayment channels (1)
- microstructures (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- mobile mapping (1)
- model-driven engineering (1)
- modellgetriebene Entwicklung (1)
- motion and force (1)
- multi-step attack (1)
- multimodal wireless sensor network (1)
- nested graph conditions (1)
- network security (1)
- non-photorealistic rendering (1)
- nonce (1)
- note-taking (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- oneM2M (1)
- outlier detection (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance models of virtual machines (1)
- point-based rendering (1)
- privacy (1)
- process execution (1)
- process modeling (1)
- process models (1)
- programmierbare Materie (1)
- props (1)
- quorum slices (1)
- remote sensing (1)
- rootstock (1)
- runtime models (1)
- runtime monitoring (1)
- scalability of blockchain (1)
- scarce tokens (1)
- security analytics (1)
- self-adaptive software (1)
- sequence properties (1)
- serialization (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sidechain (1)
- smart contracts (1)
- software engineering (1)
- software evolution (1)
- spatial aggregation (1)
- style transfer (1)
- symbolic graphs (1)
- symbolische Graphen (1)
- temporal logic (1)
- text mining (1)
- tissue-awareness (1)
- transaction (1)
- typed attributed graphs (1)
- ubiquitous business process model and notation (uBPMN) (1)
- ubiquitous business process modeling (1)
- ubiquitous computing (ubicomp) (1)
- user experience (1)
- user-generated content (1)
- variable geometry truss (1)
- verschachtelte Anwendungsbedingungen (1)
- version control (1)
- verteilte Berechnung (1)
- verteilte Leistungsüberwachung (1)
- virtual reality (1)
- visual analytics (1)
- wake-up radio (1)
- wearables (1)
- web-based rendering (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering GmbH (89)
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable to parametric approaches from the conditioned-generation family when training data is limited. However, we identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the training data limitation.
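A minimal sketch of the retrieval-based family of approaches described above: suggest the stored answer whose question context is most similar to the incoming message. The Jaccard word-overlap similarity and the sample question-answer pairs are illustrative assumptions, not the paper's actual features or data.

```python
# Hypothetical sketch: retrieval-based response suggestion over known
# question-answer pairs, using plain word-overlap (Jaccard) similarity.

def suggest(message, qa_pairs):
    """qa_pairs: list of (question, answer). Returns the best-matching answer."""
    words = set(message.lower().split())

    def score(pair):
        q_words = set(pair[0].lower().split())
        return len(words & q_words) / (len(words | q_words) or 1)  # Jaccard

    return max(qa_pairs, key=score)[1]

qa_pairs = [
    ("how do i renew a borrowed book", "Use the 'renew' button in your account."),
    ("what are the opening hours", "The library is open 9am to 8pm."),
]
answer = suggest("can i renew my book online", qa_pairs)
```

Retrieval of this kind never generates novel text, which is exactly why it degrades gracefully when training data is scarce.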
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, leaving them open to multi-step attacks and offering attackers economies of scale. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the attack surfaces of the microservices are altered, introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack surface randomization success rate of over 70%.
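The risk-driven selection step can be sketched as follows. All names, severity weights, and variant labels here are hypothetical illustrations, not the paper's actual diversification index or implementation:

```python
# Hypothetical sketch: pick the diversification target (language / base image
# variant) that minimizes the residual risk of a microservice's known
# vulnerabilities, as a stand-in for a risk-oriented diversification index.

def risk_score(vulns, variant):
    """Sum the severity of vulnerabilities that still apply to a variant."""
    return sum(v["severity"] for v in vulns if variant in v["affects"])

def pick_variant(vulns, variants):
    """Choose the runtime variant with the lowest residual risk."""
    return min(variants, key=lambda variant: risk_score(vulns, variant))

vulns = [
    {"id": "CVE-A", "severity": 9.8, "affects": {"java:alpine", "java:debian"}},
    {"id": "CVE-B", "severity": 5.3, "affects": {"go:alpine"}},
]
variants = ["java:alpine", "java:debian", "go:alpine"]
chosen = pick_variant(vulns, variants)
```

Re-running such a selection at runtime is what moves the target: each transformation leaves attackers with a different residual attack surface.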
Resource constrained smart micro-grid architectures describe a class of smart micro-grid architectures that handle communications operations over a lossy network and depend on a distributed collection of power generation and storage units. Disadvantaged communities with no or intermittent access to national power networks can benefit from such a micro-grid model by using low cost communication devices to coordinate the power generation, consumption, and storage. Furthermore, this solution is both cost-effective and environmentally friendly. One model for such micro-grids is for users to agree to coordinate a power sharing scheme in which individual generator owners sell excess unused power to users wanting access to power. Since the micro-grid relies on distributed renewable energy generation sources which are variable and only partly predictable, coordinating micro-grid operations with distributed algorithms is a necessity for grid stability. Grid stability is crucial in retaining user trust in the dependability of the micro-grid, and user participation in the power sharing scheme, because user withdrawals can cause the grid to break down, which is undesirable. In this chapter, we present a distributed architecture for fair power distribution and billing on micro-grids. The architecture is designed to operate efficiently over a lossy communication network, which is an advantage for disadvantaged communities. We build on the architecture to discuss grid coordination, notably how tasks such as metering, power resource allocation, forecasting, and scheduling can be handled. All four tasks are managed by a feedback control loop that monitors the performance and behaviour of the micro-grid and, based on historical data, makes decisions to ensure the smooth operation of the grid. Finally, since lossy networks are undependable, differentiating system failures from adversarial manipulations is an important consideration for grid stability. We therefore provide a characterisation of potential adversarial models and discuss possible mitigation measures.
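One iteration of the feedback control loop described above can be sketched as follows. The quantities, safety margin, and sign convention are illustrative assumptions, not the chapter's actual controller:

```python
# Hypothetical sketch of the feedback control loop: monitor grid measurements,
# compare supply against demand plus a safety margin, and return the amount of
# generation/storage output to add (positive) or shed (negative).

def control_step(demand_kw, supply_kw, reserve_kw, margin=0.1):
    """Adjustment needed so that supply covers demand plus a safety margin."""
    target = demand_kw * (1 + margin)
    return target - (supply_kw + reserve_kw)

# One iteration: demand 10 kW, supply 9 kW, 0.5 kW of storage reserve.
adjustment = control_step(10.0, 9.0, 0.5)
```

A real controller would additionally smooth these decisions over historical data, as the chapter notes, rather than react to a single measurement.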
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrary large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
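The level-of-detail and out-of-core idea can be sketched as a traversal that loads only nodes of a spatial hierarchy whose projected point spacing is still visible on screen, within a point budget. The node layout, error metric, and thresholds below are illustrative assumptions, not the paper's actual data structures:

```python
# Hypothetical sketch of level-of-detail node selection for out-of-core point
# cloud rendering: refine coarse-to-fine until the screen-space error is below
# a pixel threshold or the point budget is exhausted.

def projected_error(node, distance, fov_px=1000.0):
    """Approximate screen-space size of a node's point spacing at a distance."""
    return node["spacing"] / max(distance, 1e-6) * fov_px

def select_nodes(nodes, distance, max_error_px=2.0, budget=1_000_000):
    """Pick coarse-to-fine nodes until the error bound or point budget is hit."""
    selected, points = [], 0
    for node in sorted(nodes, key=lambda n: -n["spacing"]):  # coarse first
        if points + node["points"] > budget:
            break
        selected.append(node["name"])
        points += node["points"]
        if projected_error(node, distance) <= max_error_px:
            break  # fine enough for this viewing distance
    return selected

nodes = [
    {"name": "root", "spacing": 1.0, "points": 100},
    {"name": "child", "spacing": 0.5, "points": 400},
    {"name": "leaf", "spacing": 0.1, "points": 2000},
]
near = select_nodes(nodes, distance=100.0)   # close viewer: refine to the leaf
far = select_nodes(nodes, distance=1000.0)   # distant viewer: root suffices
```

Because the selection depends only on viewing distance and a budget, the same traversal serves both thin clients (small budget) and thick clients (large budget).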
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist, which constantly generate spatio-temporal data. This includes for example traffic surveillance systems, which gather movement data about human or vehicle movements, remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes, as well as sensor networks in different domains, such as logistics, animal behavior study, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first covers the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, which have been recorded over the period of a month. By applying the interactive visualization methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enables them to find patterns in the climate data and identify, for example, clusters in the networks or flow patterns.
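The mapping stage of the described pipeline can be sketched on the CPU as a simple transfer function from a per-node attribute to a visual property; the thesis implements this in GPU shader programs, and the attribute, color endpoints, and function names here are illustrative assumptions:

```python
# Hypothetical sketch of the interactive mapping step: normalize a per-node
# trajectory attribute (e.g. altitude) and interpolate it between two colors,
# as a shader-stage transfer function would per vertex.

def lerp(a, b, t):
    """Linear interpolation between two same-length tuples."""
    return tuple(a_i + (b_i - a_i) * t for a_i, b_i in zip(a, b))

def map_to_color(value, v_min, v_max, low=(0.0, 0.0, 1.0), high=(1.0, 0.0, 0.0)):
    """Clamp and normalize an attribute value, then blend low -> high color."""
    t = min(max((value - v_min) / (v_max - v_min), 0.0), 1.0)
    return lerp(low, high, t)

mid_color = map_to_color(500.0, 0.0, 1000.0)  # halfway blue -> red
```

Keeping the mapping a pure function of the attribute is what allows it to be re-evaluated in real time whenever the user changes the mapping interactively.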
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples from different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: The identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate that aspect. As a result, we propose to integrate a tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest to expand conventional evaluation by additional metrics that are sensitive to the tissue-related bias. Evaluations show that low-complexity gene selection approaches in particular profit from introducing tissue-awareness.
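One simple way such a tissue-wise context can work is to standardize each gene's expression within its tissue group before any selection score is computed, so that differences between tissues do not dominate. This is a hedged sketch of the general idea; the paper's actual extension and metrics may differ, and the tissue labels and values are toy data:

```python
# Hypothetical sketch: per-tissue z-scoring of expression values, so a
# downstream gene selection approach compares samples within each tissue
# rather than across tissues.

from statistics import mean, stdev

def tissue_zscores(expr, tissues):
    """Z-score expression values separately within each tissue group."""
    out = [0.0] * len(expr)
    for tissue in set(tissues):
        idx = [i for i, t in enumerate(tissues) if t == tissue]
        vals = [expr[i] for i in idx]
        m = mean(vals)
        s = stdev(vals) if len(vals) > 1 else 1.0
        for i in idx:
            out[i] = (expr[i] - m) / (s or 1.0)  # guard constant groups
    return out

# Two tissues with very different expression scales: after z-scoring, the
# within-tissue structure is comparable across tissues.
z = tissue_zscores([1.0, 3.0, 10.0, 14.0], ["lung", "lung", "liver", "liver"])
```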
In recent years, the ever-growing amount of documents on the Web as well as in closed systems for private or business contexts has led to a considerable increase in valuable textual information about topics, events, and entities. It is a truism that the majority of information (i.e., business-relevant data) is only available in unstructured textual form. The text mining research field comprises various practice areas that share the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is to share relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services is growing continuously and the inherent knowledge is available to be utilized. We show that this knowledge can be used for three different tasks.
Initially, we demonstrate that when searching persons with ambiguous names, the information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means to handle persons missing in the UGC source. We show that the proposed approaches outperform traditional algorithms for search result clustering. Secondly, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users to identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm to tackle the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is the linkage of mentions within the documents with their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation shows the added value that the usage of these sources provides and confirms the appropriateness of leveraging user-generated content to serve different information needs.
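The core of the entity linking task described above can be sketched as matching a mention's context against candidate entity descriptions. The similarity (word overlap) and candidate data are illustrative assumptions; the thesis emphasizes the efficient derivation of *coherent* links across a whole document, which this per-mention sketch omits:

```python
# Hypothetical sketch: resolve a mention to the candidate entity whose
# description shares the most context words with the surrounding text.

def link(mention_context, candidates):
    """candidates: dict of entity id -> set of description words."""
    ctx = set(mention_context.lower().split())
    return max(candidates, key=lambda e: len(ctx & candidates[e]))

candidates = {
    "Jaguar_(animal)": {"cat", "predator", "rainforest"},
    "Jaguar_(car)": {"car", "british", "manufacturer"},
}
entity = link("the jaguar is a big cat of the rainforest", candidates)
```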
An energy consumption model for multimodal wireless sensor networks based on wake-up radio receivers
(2018)
Energy consumption is a major concern in Wireless Sensor Networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio, while no transmission is ongoing. The classical approach for allowing the reception of messages in such situations is to use a low-duty-cycle protocol, and to turn on the radio periodically, which reduces the idle listening problem, but requires timers and usually unnecessary wakeups. A better solution is to turn on the radio only on demand by using a Wake-up Radio Receiver (WuRx). In this paper, an energy model is presented to estimate the energy saving in various multi-hop network topologies under several use cases, when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows for estimating the benefit of various WuRx properties like using addressing or not.
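The kind of comparison such a model enables can be sketched as two per-period energy terms: one for a low-duty-cycling node with periodic listen windows, one for a node whose main radio wakes only on demand via the WuRx. The formulas are a generic first-order sketch and the numbers are illustrative, not the paper's model or measured values:

```python
# Hypothetical sketch: energy per period of low-duty-cycling vs. WuRx operation.

def duty_cycle_energy(period_s, wakeup_s, p_rx_w, p_sleep_w):
    """One duty cycle: one listen window plus sleep for the rest of the period."""
    return wakeup_s * p_rx_w + (period_s - wakeup_s) * p_sleep_w

def wurx_energy(period_s, p_wurx_w, messages, rx_s, p_rx_w):
    """Always-on WuRx plus main-radio receive time per received message."""
    return period_s * p_wurx_w + messages * rx_s * p_rx_w

# Illustrative numbers: 1 s period, 10 ms listen window, 60 mW receive power,
# 10 uW sleep power, 1 uW WuRx power, no incoming traffic.
e_dc = duty_cycle_energy(1.0, 0.010, 0.060, 10e-6)  # joules per period
e_wu = wurx_energy(1.0, 1e-6, 0, 0.010, 0.060)      # joules per period
```

With no traffic, the duty-cycling node still pays for every idle listen window, which is exactly the idle-listening waste the WuRx avoids.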
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, it is necessary for a company to possess expert knowledge. The difficulty arises when experts no longer work for the company and it then becomes necessary to use their knowledge in order to realise an IT project. In this paper, the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects, is presented; it consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that three approaches complement each other: the elicitation of expert knowledge, the assessment of IT project success, and the discovery of new knowledge, gleaned from the expert knowledge base and otherwise known as the decision model.
ASEDS
(2018)
The massive adoption of social media has provided new ways for individuals to express their opinions and emotions online. In 2016, Facebook introduced a new feature that allows users to express their psychological emotions regarding published content using so-called Facebook reactions. In this paper, a framework for predicting the distribution of Facebook post reactions is presented. For this purpose, we collected an enormous amount of Facebook posts together with their reaction labels using the proposed scalable Facebook crawler. The training process utilizes 3 million labeled posts from more than 64,000 unique Facebook pages of diverse categories. The evaluation on standard benchmarks using the proposed features shows promising results compared to previous research. The final model is able to predict the reaction distribution of Facebook posts with a recall score of 0.90 for the "Joy" emotion.
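The prediction target and the reported metric can be sketched as follows: raw reaction counts are normalized into a distribution, and a per-emotion prediction is scored with recall. The counts and labels are toy data, not the collected corpus:

```python
# Hypothetical sketch: reaction counts -> distribution, plus per-class recall.

def distribution(counts):
    """Normalize raw reaction counts into a probability distribution."""
    total = sum(counts.values())
    return {reaction: n / total for reaction, n in counts.items()}

def recall(true_labels, predicted_labels, label):
    """Fraction of posts with a given true label that were predicted correctly."""
    relevant = [i for i, t in enumerate(true_labels) if t == label]
    hit = sum(1 for i in relevant if predicted_labels[i] == label)
    return hit / len(relevant) if relevant else 0.0

dist = distribution({"Joy": 90, "Angry": 10})
r = recall(["Joy", "Joy", "Angry"], ["Joy", "Angry", "Angry"], "Joy")
```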
The classification of vulnerabilities is a fundamental step in deriving formal attributes that allow a deeper analysis. This classification must therefore be performed in a timely and accurate manner. Since the classification process currently requires manual interaction, timely processing has become a serious issue. We thus propose an automated alternative to manual classification, because the number of vulnerabilities identified per day can no longer be processed manually. We implemented two different approaches that are able to automatically classify vulnerabilities based on the vulnerability description. We evaluated our approaches, which use Neural Networks and the Naive Bayes method respectively, on publicly known vulnerabilities.
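The Naive Bayes variant can be sketched as a word-likelihood classifier over the description text with add-one smoothing. The descriptions, class labels, and tokenization below are toy assumptions, not the authors' dataset or feature set:

```python
# Hypothetical sketch: multinomial Naive Bayes over vulnerability descriptions
# with add-one (Laplace) smoothing.

import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (description, label). Returns the model components."""
    word_counts, label_counts, vocab = defaultdict(Counter), Counter(), set()
    for text, label in samples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label maximizing log prior + smoothed log word likelihoods."""
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

samples = [
    ("buffer overflow in parser", "memory"),
    ("heap overflow when copying", "memory"),
    ("reflected script in login page", "xss"),
]
model = train(samples)
label = classify("stack overflow in copying routine", *model)
```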
Beacon in the Dark
(2018)
The large amount of heterogeneous data in email corpora renders manual investigation by experts infeasible. Auditors or journalists, e.g., who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus onto a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
Beyond Surveys
(2018)
Blockchain
(2018)
The term blockchain has recently become a buzzword, but only a few know what exactly lies behind this approach. According to a survey issued in the first quarter of 2017, the term is known to only 35 percent of German medium-sized enterprise representatives. However, blockchain technology is very interesting for the mass media because of its rapid development and its capture of markets around the globe.
For example, many see blockchain technology either as an all-purpose weapon that only a few have access to, or as a hacker technology for secret deals in the darknet. The innovation of blockchain technology lies in its successful combination of already existing approaches, such as decentralized networks, cryptography, and consensus models. This innovative concept makes it possible to exchange values in a decentralized system. At the same time, there is no requirement for trust between its nodes (e.g. users).
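The combination of cryptography and a shared ledger can be illustrated with a minimal toy chain: each block stores the hash of its predecessor, so tampering with any block invalidates every later one. This is a sketch for illustration only, not a real blockchain implementation (it omits the network and consensus layers entirely):

```python
# Toy hash chain: blocks linked by SHA-256 hashes of their contents.

import hashlib

def block_hash(index, prev_hash, data):
    return hashlib.sha256(f"{index}|{prev_hash}|{data}".encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64  # genesis predecessor
    for i, data in enumerate(entries):
        h = block_hash(i, prev, data)
        chain.append({"index": i, "prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def valid(chain):
    """Re-derive every hash; any tampered block breaks the chain from there on."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["index"], prev, b["data"]):
            return False
        prev = b["hash"]
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
```

In a real system, consensus among the decentralized nodes determines which such chain is authoritative, which is what removes the need for trust between participants.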
With this study the Hasso Plattner Institute would like to help readers form their own opinion about blockchain technology, and to distinguish between truly innovative properties and hype.
The authors of the present study analyze the positive and negative properties of the blockchain architecture and suggest possible solutions, which can contribute to the efficient use of the technology. We recommend that every company define a clear target for the intended application, which is achievable with a reasonable cost-benefit ratio, before deciding on this technology. Both the possibilities and the limitations of blockchain technology need to be considered. The relevant steps that must be taken in this respect are summarized for the reader in this study.
Furthermore, this study elaborates on urgent problems such as the scalability of the blockchain, the choice of an appropriate consensus algorithm, and security, including various types of possible attacks and their countermeasures. New blockchains, for example, run the risk of reduced security, as changes to existing technology can introduce security gaps and failures.
After discussing the innovative properties and problems of blockchain technology, its implementation is discussed. Companies interested in realizing a blockchain solution have many implementation options available. The numerous applications are either based on their own blockchain or build on existing, widespread blockchain systems. Various consortia and projects offer "blockchain as a service" and help other companies to develop, test, and deploy their own applications.
This study gives a detailed overview of diverse relevant applications and projects in the field of blockchain technology. As this technology is still relatively young and fast-developing, it lacks uniform standards that would allow different systems to cooperate and to which all developers could adhere. Currently, developers orient themselves toward the Bitcoin, Ethereum, and Hyperledger systems, which serve as the basis for many other blockchain applications.
The goal is to give readers a clear and comprehensive overview of blockchain technology and its capabilities.
This Research-to-Practice paper examines the practical application of various forms of collaborative learning in MOOCs. Since 2012, about 60 MOOCs in the wider context of Information Technology and Computer Science have been conducted on our self-developed MOOC platform. The platform is also used by several customers, who either run their own platform instances or use our white label platform. We, as well as some of our partners, have experimented with different approaches to collaborative learning in these courses. Based on the results of early experiments, surveys amongst our participants, and requests by our business partners, we have integrated several options for collaborative learning into the system. The results of our experiments are fed directly back into platform development, allowing us to fine-tune existing tools and add new ones where necessary. In the paper at hand, we discuss the benefits and disadvantages of decisions in the design of a MOOC with regard to the various forms of collaborative learning. While our focus is on forms of large-group collaboration, two types of small-group collaboration on our platforms are also briefly introduced.
Logical modeling has been widely used to understand and expand the knowledge about protein interactions among different pathways. Realizing this, the caspo-ts system has been proposed recently to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then we demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset to model TCR signaling and (2) the HPN-DREAM challenge dataset to model breast cancer cell lines.
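One generic way to realize the goal described above, picking representatives that are far apart instead of enumerating near-duplicates, is greedy max-min selection by Hamming distance over candidate networks encoded as bit vectors. The following is a hedged illustration of that idea, not the actual caspo-ts/ASP algorithm.

```python
# Sketch: greedily select k mutually diverse Boolean networks,
# each encoded as a tuple of 0/1 edge-presence flags.

def hamming(a, b):
    """Number of positions at which two encoded networks differ."""
    return sum(x != y for x, y in zip(a, b))

def pick_diverse(solutions, k):
    """Keep k solutions; each new pick maximizes its minimum
    distance to the solutions already kept (max-min criterion)."""
    chosen = [solutions[0]]
    while len(chosen) < k:
        best = max(
            (s for s in solutions if s not in chosen),
            key=lambda s: min(hamming(s, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

# Four candidate BNs over three edges; the first three are near-duplicates.
bns = [(1, 1, 0), (1, 1, 1), (1, 0, 0), (0, 0, 1)]
print(pick_diverse(bns, 2))  # → [(1, 1, 0), (0, 0, 1)]
```

In the ASP setting, a comparable effect is achieved inside the solver, which avoids materializing the full solution space first.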
Remote sensing technologies, such as airborne, mobile, or terrestrial laser scanning, as well as photogrammetric techniques, are fundamental approaches for the efficient, automatic creation of digital representations of spatial environments. For example, they allow us to generate 3D point clouds of landscapes, cities, infrastructure networks, and sites. As an essential and universal category of geodata, 3D point clouds are used and processed by a growing number of applications, services, and systems, such as in the domains of urban planning, landscape architecture, environmental monitoring, disaster management, and virtual geographic environments, as well as for spatial analysis and simulation.
While the acquisition processes for 3D point clouds become more and more reliable and widely used, applications and systems are faced with ever larger volumes of 3D point cloud data. In addition, 3D point clouds are, by their very nature, raw data, i.e., they do not contain any structural or semantic information. Many processing strategies common to GIS, such as deriving polygon-based 3D models, generally do not scale to billions of points. GIS typically reduce the data density and precision of 3D point clouds to cope with the sheer amount of data, but this results in a significant loss of valuable information at the same time.
This thesis proposes concepts and techniques designed to efficiently store and process massive 3D point clouds. To this end, object-class segmentation approaches are presented to attribute semantics to 3D point clouds, used, for example, to identify building, vegetation, and ground structures and, thus, to enable processing, analyzing, and visualizing 3D point clouds in a more effective and efficient way. Similarly, change detection and updating strategies for 3D point clouds are introduced that allow for reducing storage requirements and incrementally updating 3D point cloud databases. In addition, this thesis presents out-of-core, real-time rendering techniques used to interactively explore 3D point clouds and related analysis results. All techniques have been implemented based on specialized spatial data structures, out-of-core algorithms, and GPU-based processing schemas to cope with massive 3D point clouds having billions of points.
All proposed techniques have been evaluated and have demonstrated their applicability to geospatial applications and systems, in particular for tasks such as classification, processing, and visualization. Case studies for 3D point clouds of entire cities with up to 80 billion points show that the presented approaches open up new ways to manage and apply large-scale, dense, and time-variant 3D point clouds as required by a rapidly growing number of applications and systems.
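The core idea behind such specialized spatial data structures and out-of-core algorithms can be sketched with a simple example (not the thesis implementation): partitioning a point cloud into a uniform grid of tiles, so that a query or rendering pass needs to load only the tiles it touches. Cell size and layout here are arbitrary choices for illustration.

```python
# Sketch: bucket 3D points into uniform grid tiles keyed by cell index,
# the simplest form of spatial partitioning used for out-of-core access.
from collections import defaultdict

def tile_points(points, cell_size):
    """Group points by the grid cell that contains them."""
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        tiles[key].append((x, y, z))
    return tiles

cloud = [(0.2, 0.3, 0.1), (0.4, 0.1, 0.2), (5.1, 0.2, 0.3)]
tiles = tile_points(cloud, cell_size=1.0)
print(len(tiles[(0, 0, 0)]))  # → 2 points share the first tile
```

Production systems replace the flat grid with hierarchical structures such as octrees, which additionally provide the level-of-detail hierarchy needed for real-time rendering.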
Business processes constantly generate, manipulate, and consume data that are managed by organizational databases. Despite being central to process modeling and execution, the link between processes and data is often handled by developers only when the process is implemented, thus leaving the connection unexplored during conceptual design. In this paper, we introduce, formalize, and evaluate a novel conceptual view that bridges the gap between process and data models, and show the kinds of interesting insights that can be derived from this novel proposal.
In the course of patient treatments, psychotherapists aim to meet the challenges of being both a trusted, knowledgeable conversation partner and a diligent documentalist. We are developing the digital whiteboard system Tele-Board MED (TBM), which allows the therapist to take digital notes during the session together with the patient. This study investigates what therapists experience when they document with TBM in patient sessions for the first time, and whether this documentation saves them time when writing official clinical documents. As the core of this study, we conducted four anamnesis session dialogues with behavior psychotherapists and volunteers acting in the role of patients. Following a mixed-method approach, the data collection and analysis involved self-reported emotion samples, user experience curves, and questionnaires. We found that even in the very first patient session with TBM, therapists come to feel comfortable, develop a positive feeling, and can concentrate on the patient. Regarding administrative documentation tasks, we found that, with TBM's report generation feature, therapists save 60% of the time they normally spend on writing case reports for health insurance companies.
CSBAuditor
(2018)
Cloud Storage Brokers (CSB) provide seamless and concurrent access to multiple Cloud Storage Services (CSS) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges, including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these security issues. Therefore, this paper proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes (e.g. bucket policy misconfigurations) and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating Broker Monkey, a component that continuously injects failures into our reference CSB system, Cloud RAID. Hence, CSBAuditor is continuously tested for efficiency, i.e. its ability to detect the changes injected by Broker Monkey. CSBAuditor employs security metrics for risk analysis by computing severity scores for detected vulnerabilities using the Common Configuration Scoring System, thereby overcoming the limitation of insufficient security metrics in existing cloud auditing schemes. CSBAuditor has been tested using various strategies, including chaos engineering failure injection strategies. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues with a detection and recovery rate of over 96%.
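The snapshot-and-compare idea behind such change detection can be sketched as follows: diff the expected cloud state (the snapshot) against the observed one and flag deviations such as a bucket policy flipped to public. This is a hedged illustration; the field names are not CSBAuditor's actual data model.

```python
# Sketch: detect unauthorized changes by diffing a state snapshot
# against the currently observed cloud configuration.

def detect_anomalies(expected, observed):
    """Return (bucket, attribute, observed_value) for every deviation."""
    anomalies = []
    for bucket, config in expected.items():
        current = observed.get(bucket)
        if current is None:
            anomalies.append((bucket, "missing", None))
            continue
        for attr, value in config.items():
            if current.get(attr) != value:
                anomalies.append((bucket, attr, current.get(attr)))
    return anomalies

snapshot = {"backups": {"public": False, "versioning": True}}
live     = {"backups": {"public": True,  "versioning": True}}
print(detect_anomalies(snapshot, live))  # → [('backups', 'public', True)]
```

A remediation step would then write the snapshot values back for each flagged attribute, and a severity score (e.g. via the Common Configuration Scoring System) would prioritize which anomalies to fix first.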