Refine
Has Fulltext
- no (52)
Year of publication
- 2018 (52)
Document Type
- Other (52)
Language
- English (52)
Is part of the Bibliography
- yes (52)
Keywords
- E-Learning (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- Cloud-Security (2)
- Kanban (2)
- Lecture Video Archive (2)
- Scrum (2)
- Secure Configuration (2)
- capstone course (2)
- 3D Point Clouds (1)
- 3D printing (1)
- Agile methods (1)
- Algorithms (1)
- Android (1)
- Answer set programming (1)
- Application Container Security (1)
- Automatic domain term extraction (1)
- BPMN (1)
- Blockchain (1)
- Boolean Networks (1)
- Business process models (1)
- Business process simulation (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Collaborative learning (1)
- DMN (1)
- Data compression (1)
- Data mining (1)
- Data mining, Machine learning (1)
- Data partitioning (1)
- Data profiling (1)
- Decision models (1)
- Distance Learning (1)
- Diverse solution enumeration (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- Educational Data Mining (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Energy efficiency (1)
- Entropy (1)
- Expert knowledge (1)
- Extensibility (1)
- Fabrication (1)
- Flash (1)
- GMDH (1)
- GTEx (1)
- Geospatial intelligence (1)
- HLS (1)
- HTML5 (1)
- IT project (1)
- Information flow control (1)
- Information system (1)
- Intent analysis (1)
- Interacting processes (1)
- Internet of Things (1)
- Interoperability (1)
- Lecture Recording (1)
- MOOC (1)
- MOOC Remote Lab (1)
- MQTT (1)
- Machine Learning (1)
- Metamaterials (1)
- Microservices Security (1)
- Minimum spanning tree (1)
- Model checking (1)
- Moving Target Defense (1)
- NETCONF (1)
- Natural Language Processing (1)
- Neural Networks (1)
- Ontology (1)
- Peer assessment (1)
- Process-related data (1)
- Psychological Emotions (1)
- RNAseq (1)
- Security analytics (1)
- Semantic Web (1)
- Servicification (1)
- Smart Home Education (1)
- Social Media Analysis (1)
- Spatial data handling systems (1)
- Static analysis (1)
- TCGA (1)
- Team based assignment (1)
- Teamwork (1)
- Threat Models (1)
- Time series data (1)
- Topic modeling (1)
- Tree maintenance (1)
- Unified logging system (1)
- Video annotations (1)
- Vulnerability analysis (1)
- YANG (1)
- accelerator architectures (1)
- behavior psychotherapy (1)
- cloud monitoring (1)
- computer-mediated therapy (1)
- data integration (1)
- data transfer (1)
- development artifacts (1)
- emotion measurement (1)
- fabrication (1)
- gene selection (1)
- hardware (1)
- human-computer interaction (1)
- labeling (1)
- large scale mechanism (1)
- low-duty-cycling (1)
- medical documentation (1)
- microstructures (1)
- multimodal wireless sensor network (1)
- non-photorealistic rendering (1)
- note-taking (1)
- oneM2M (1)
- point-based rendering (1)
- programmable matter (1)
- real-time rendering (1)
- security analytics (1)
- software engineering (1)
- style transfer (1)
- tissue-awareness (1)
- user experience (1)
- variable geometry truss (1)
- visualization (1)
- wake-up radio (1)
- web-based rendering (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering GmbH (52)
We analyze the problem of response suggestion in a closed domain, following a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods, which strive to find similar, known contexts, are preferable to parametric approaches from the conditioned-generation family when training data is limited. We do, however, identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the training data limitation.
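The retrieval-based family favored above can be sketched as a plain TF-IDF nearest-neighbour lookup over stored question-answer pairs. The tokenizer, weighting, and example pairs below are illustrative assumptions, not the authors' pipeline:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(query, qa_pairs):
    """Return the answer whose stored question is most similar to the query."""
    questions = [q.lower().split() for q, _ in qa_pairs]
    vecs, idf = tfidf_vectors(questions)
    tf = Counter(query.lower().split())
    qv = {t: tf[t] * idf.get(t, 0.0) for t in tf}  # unseen words get weight 0
    scores = [cosine(qv, v) for v in vecs]
    return qa_pairs[max(range(len(scores)), key=scores.__getitem__)][1]

# Hypothetical question-answer pairs mined from chat transcripts:
qa = [
    ("how do I renew a borrowed book", "Use the renewal form in your library account."),
    ("where can I find journal articles", "Search the digital library catalogue."),
]
print(suggest("can I renew my book", qa))
```

With little training data, this kind of non-parametric lookup has nothing to overfit, which is one intuition behind the result reported above.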
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, making them susceptible to multi-step attacks and offering economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defense (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security-risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model-based and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the attack surfaces of the microservices are altered, introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack surface randomization success rate of over 70%.
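The interplay of risk prioritization and a diversification index can be illustrated in miniature. The risk scores, configuration tuples, and greedy reassignment strategy below are invented for illustration and are not the paper's actual mechanism:

```python
def diversification_index(configs):
    """Fraction of distinct (language, base image) configurations in the fleet;
    1.0 means every service differs, low values indicate homogeneity."""
    configs = list(configs)
    return len(set(configs)) / len(configs)

def plan_diversification(services, alternative_configs, target_index=0.75):
    """Greedily reassign the riskiest services to alternative configurations
    until the diversification index reaches the target. A simplified sketch
    of risk-prioritized diversification; field names are illustrative."""
    ranked = sorted(services, key=lambda s: s["risk"], reverse=True)
    configs = {s["name"]: s["config"] for s in ranked}
    plan = {}
    alternatives = iter(alternative_configs)
    for s in ranked:
        if diversification_index(configs.values()) >= target_index:
            break
        try:
            new_config = next(alternatives)
        except StopIteration:
            break  # ran out of alternative stacks to diversify into
        configs[s["name"]] = new_config
        plan[s["name"]] = new_config
    return plan, diversification_index(configs.values())

# Hypothetical homogeneous fleet: every service runs the same stack.
services = [
    {"name": "auth", "risk": 9, "config": ("python", "alpine")},
    {"name": "cart", "risk": 7, "config": ("python", "alpine")},
    {"name": "search", "risk": 5, "config": ("python", "alpine")},
    {"name": "billing", "risk": 3, "config": ("python", "alpine")},
]
plan, index = plan_diversification(services, [("go", "distroless"), ("rust", "scratch")])
print(plan, index)
```

Only the two riskiest services are transformed before the target index is met, mirroring the idea that diversification effort should follow the risk ranking.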
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
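The idea of level-of-detail reduction via spatial data structures can be shown in miniature with grid-based point thinning: keep one representative point per cell and the cell size becomes the detail level. This is a toy stand-in for the system's out-of-core structures, not its actual algorithm:

```python
def build_lod_grid(points, cell_size):
    """Pick one representative 3D point per grid cell, a crude
    level-of-detail reduction: larger cells mean coarser detail."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells.setdefault(key, p)  # keep the first point seen in each cell
    return list(cells.values())

# Three points, two of which fall into the same unit cell:
pts = [(0.1, 0.0, 0.0), (0.2, 0.0, 0.0), (1.5, 0.0, 0.0)]
print(build_lod_grid(pts, 1.0))
```

In a real renderer such representative sets would be stored per node of a spatial hierarchy, so coarse levels can be streamed first and refined on demand.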
High-throughput RNA sequencing (RNAseq) produces large data sets containing the expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate that aspect. Therefore, we propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation with additional metrics that are sensitive to the tissue-related bias. Evaluations show that especially low-complexity gene selection approaches profit from introducing tissue-awareness.
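Per-tissue standardization is one simple way to give a gene a tissue-wise context before selection: expression values are z-scored within each tissue group, so tissue-level shifts cancel out. The sketch below is an illustrative assumption, not the paper's specific extension:

```python
from statistics import mean, stdev

def tissue_aware_zscores(values, tissues):
    """Z-score one gene's expression within each tissue group so that
    tissue-level shifts are removed before genes are ranked for selection."""
    groups = {}
    for v, t in zip(values, tissues):
        groups.setdefault(t, []).append(v)
    stats = {t: (mean(vs), stdev(vs) if len(vs) > 1 else 1.0)
             for t, vs in groups.items()}
    return [(v - stats[t][0]) / (stats[t][1] or 1.0)
            for v, t in zip(values, tissues)]

# Toy gene: liver samples express ~10x higher than lung samples,
# but within each tissue the relative pattern is identical.
z = tissue_aware_zscores([10, 12, 100, 104], ["lung", "lung", "liver", "liver"])
print(z)
```

After standardization, the liver/lung offset disappears, so a downstream selector would no longer rank this gene highly merely because of its tissue of origin.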
An Energy Consumption Model for Multimodal Wireless Sensor Networks Based on Wake-up Radio Receivers
(2018)
Energy consumption is a major concern in Wireless Sensor Networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio while no transmission is ongoing. The classical approach to allowing the reception of messages in such situations is to use a low-duty-cycle protocol and to turn on the radio periodically, which reduces the idle listening problem but requires timers and usually causes unnecessary wake-ups. A better solution is to turn on the radio only on demand by using a Wake-up Radio Receiver (WuRx). In this paper, an energy model is presented to estimate the energy saving in various multi-hop network topologies under several use cases when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows estimating the benefit of various WuRx properties, such as whether addressing is used.
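A minimal version of such an energy model might compare the two listening strategies as follows. All parameter names, units, and numbers are illustrative assumptions, not the paper's model or measurements:

```python
def duty_cycle_energy(t_hours, period_s, listen_s, p_listen_mw, n_msgs, e_rx_mj):
    """Energy (mJ) over t_hours with periodic low-duty-cycle listening:
    the node wakes every period_s seconds and listens for listen_s seconds."""
    wakeups = t_hours * 3600 / period_s
    return wakeups * listen_s * p_listen_mw + n_msgs * e_rx_mj

def wurx_energy(t_hours, p_wurx_uw, n_msgs, e_rx_mj):
    """Energy (mJ) with an always-on wake-up radio receiver drawing
    p_wurx_uw microwatts: the main radio wakes only when a message arrives."""
    return t_hours * 3600 * p_wurx_uw / 1000.0 + n_msgs * e_rx_mj

# Assumed scenario: 24 h, 1 s wake-up period, 5 ms listen at 60 mW,
# 100 received messages at 2 mJ each, 10 uW wake-up receiver.
e_dc = duty_cycle_energy(24, 1.0, 0.005, 60, 100, 2)
e_wu = wurx_energy(24, 10, 100, 2)
print(e_dc, e_wu)
```

Under these assumed parameters the WuRx node spends an order of magnitude less energy, because the cost of idle listening dominates the duty-cycled budget.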
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, it is necessary for a company to possess expert knowledge. The difficulty arises when experts no longer work for the company and it becomes necessary to use their knowledge in order to realise an IT project. In this paper, we present the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects and consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that three approaches, namely the elicitation of expert knowledge, the success of an IT project, and the discovery of new knowledge gleaned from the expert knowledge base, otherwise known as the decision model, complement each other.
ASEDS
(2018)
The massive adoption of social media has provided new ways for individuals to express their opinions and emotions online. In 2016, Facebook introduced a new feature that allows users to express their psychological emotions regarding published content using so-called Facebook reactions. In this paper, a framework for predicting the distribution of Facebook post reactions is presented. For this purpose, we collected a large number of Facebook posts together with their reaction labels using the proposed scalable Facebook crawler. The training process utilizes 3 million labeled posts from more than 64,000 unique Facebook pages across diverse categories. The evaluation on standard benchmarks using the proposed features shows promising results compared to previous research. The final model is able to predict the reaction distribution of Facebook posts with a recall score of 0.90 for the "Joy" emotion.
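The per-emotion recall reported above is a standard metric and can be computed as follows; the labels in the example are invented for illustration:

```python
def recall(y_true, y_pred, positive):
    """Per-class recall: of all posts whose true dominant reaction is
    `positive`, the fraction the model also predicts as `positive`."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if tp + fn else 0.0

# Hypothetical dominant-reaction labels for four posts:
print(recall(["joy", "joy", "anger", "joy"],
             ["joy", "sad", "anger", "joy"], "joy"))
```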
The classification of vulnerabilities is a fundamental step in deriving formal attributes that allow a deeper analysis. It is therefore required that this classification be performed in a timely and accurate manner. Since the classification process currently demands manual interaction, timely processing becomes a serious issue. We therefore propose an automated alternative to manual classification, because the number of vulnerabilities identified per day can no longer be processed manually. We implemented two different approaches that are able to automatically classify vulnerabilities based on the vulnerability description. We evaluated our approaches, which use Neural Networks and Naive Bayes methods respectively, on the basis of publicly known vulnerabilities.
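A minimal multinomial Naive Bayes over bag-of-words descriptions, one of the two method families mentioned, might look like the sketch below. The training texts and class labels are invented for illustration and do not reflect the paper's data or taxonomy:

```python
import math
from collections import Counter

class NaiveBayesClassifier:
    """Multinomial Naive Bayes over bag-of-words vulnerability descriptions."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        tokens = text.lower().split()
        n = sum(self.prior.values())
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            lp = math.log(self.prior[c] / n)
            for t in tokens:
                # Laplace smoothing so unseen words don't zero the probability
                lp += math.log((self.word_counts[c][t] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical labeled descriptions and a two-class toy taxonomy:
nb = NaiveBayesClassifier().fit(
    ["buffer overflow in packet parser allows remote code execution",
     "sql injection in login form leaks database records",
     "stack overflow when parsing crafted file"],
    ["memory", "injection", "memory"])
print(nb.predict("heap overflow while parsing network packet"))
```

In practice the descriptions would be the published vulnerability texts and the labels an agreed vulnerability taxonomy; the model itself stays this simple.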
Beacon in the Dark
(2018)
The large amount of heterogeneous data in such email corpora renders manual investigation by experts infeasible. Auditors or journalists, for example, who are looking for irregular or inappropriate content or for suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design, we identify three objectives of expert users: gaining an initial overview of the data to identify leads to investigate, understanding the context of the information at hand, and having meaningful filters to iteratively focus on a subset of emails. To this end, we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
Beyond Surveys
(2018)