In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. In a public or private cloud, the VM's OS is started by the user through interfaces provided by the cloud provider; in a peer-to-peer cloud, the VM is started by the host administrator. After the VM is running, the user can remotely access it to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host administrator could modify the image, or even replace it with a similar one, in order to obtain sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify that the VM was started with the new kernel files; the new kernel is then used to further measure the integrity of the running VM.
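As an illustration of the kernel-replacement step, the staging and reboot could look as follows on a Linux guest with kexec-tools installed; this is a minimal sketch, and the file paths and kernel command line are hypothetical, not taken from the paper.

    import subprocess

    # Hypothetical locations of the freshly delivered kernel files that
    # embed the secret codes used for later verification.
    VMLINUZ = "/boot/vmlinuz-verify"
    INITRD = "/boot/initrd-verify.img"
    CMDLINE = "root=/dev/vda1 ro console=ttyS0"

    # Stage the new kernel: kexec -l loads it without rebooting yet.
    subprocess.run(
        ["kexec", "-l", VMLINUZ, "--initrd=" + INITRD, "--append=" + CMDLINE],
        check=True)

    # Execute the staged kernel immediately, bypassing firmware and
    # bootloader; the VM comes back up running the new, verifiable kernel.
    subprocess.run(["kexec", "-e"], check=True)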
The emergence of cloud computing allows users to easily host their Virtual Machines with no up-front investment and a guarantee of availability anytime, anywhere. But with the Virtual Machine (VM) hosted outside the user's premises, the user loses physical control of the VM, as it could be running on untrusted host machines in the cloud. A malicious host administrator could launch live memory dumping, Spectre, or Meltdown attacks in order to extract sensitive information from the VM's memory, e.g. passwords or cryptographic keys of applications running in the VM. In this paper, inspired by the moving target defense (MTD) scheme, we propose a novel approach to increase the security of an application's sensitive data in the VM by continuously moving the sensitive data among several memory allocations (blocks) in Random Access Memory (RAM). A movement function is added to the application source code so that it runs concurrently with the application's main function. Our approach can reduce the possibility of the VM's sensitive data in memory leaking into a memory dump file by 25% and secures the sensitive data from Spectre and Meltdown attacks. Our approach's overhead depends on the number and the size of the sensitive data items.
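The movement function can be pictured with a toy Python sketch (illustrative only; a real implementation would manipulate raw allocations in the application's own language, and all names here are hypothetical):

    import random
    import threading
    import time

    NUM_BLOCKS = 8
    BLOCK_SIZE = 64

    # Pre-allocated candidate blocks; exactly one holds the secret at a time.
    blocks = [bytearray(BLOCK_SIZE) for _ in range(NUM_BLOCKS)]
    current = 0

    def mover(stop, interval=0.05):
        # Runs concurrently with the main function, relocating the secret
        # so a memory dump rarely catches a stable copy.
        global current
        while not stop.is_set():
            nxt = random.randrange(NUM_BLOCKS)
            if nxt != current:
                blocks[nxt][:] = blocks[current]        # copy to new block
                blocks[current][:] = bytes(BLOCK_SIZE)  # wipe old location
                current = nxt
            time.sleep(interval)

    blocks[current][:] = b"hypothetical-api-key".ljust(BLOCK_SIZE, b"\0")
    stop = threading.Event()
    threading.Thread(target=mover, args=(stop,), daemon=True).start()

A real implementation must also synchronize the application's reads of the secret with the mover; as the abstract notes, the overhead grows with the number and size of the sensitive data items.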
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself does not include any defenses against denial-of-sleep attacks, which are attacks that aim at depriving victim devices of entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send plenty of CoAP messages to it, thereby forcing the IoT device to expend energy for receiving and processing these CoAP messages. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en-route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but can also perform a wide range of further checks. Second, we propose a couple of such further checks, which, as compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
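The en-route filtering idea can be sketched as a chain of per-message checks on the border router (a schematic illustration; the concrete checks and thresholds here are ours, not the paper's):

    import time

    # Each check receives the raw UDP payload and the sender address and
    # returns True if the message may be forwarded to the CoAP server.

    def check_header(msg, addr):
        return len(msg) >= 4 and (msg[0] >> 6) == 1   # CoAP version is 1

    def check_replay(msg, addr, seen=set()):
        mid = int.from_bytes(msg[2:4], "big")         # CoAP Message ID
        fresh = (addr, mid) not in seen
        seen.add((addr, mid))                         # remember (unbounded here)
        return fresh

    def check_rate(msg, addr, last={}, min_gap=0.1):
        now, prev = time.time(), last.get(addr, 0.0)
        last[addr] = now
        return now - prev >= min_gap                  # throttle floods

    CHECKS = [check_header, check_replay, check_rate]

    def forward(msg, addr):
        return all(check(msg, addr) for check in CHECKS)

Dropping a message at the mains-powered border router costs a few instructions, whereas forwarding it would cost the battery-powered CoAP server radio receptions and processing time, which is the leverage behind en-route filtering.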
Bottom-up saliency is often cited as a factor driving the choice of fixation locations of human observers, based on the (partial) success of saliency models in predicting fixation densities in free viewing. However, these observations are only weak evidence for a causal role of bottom-up saliency in natural viewing behaviour. To test bottom-up saliency more directly, we analyse the performance of a number of saliency models, including our own saliency model based on our recently published model of early visual processing (Schütt & Wichmann, 2017, JoV), as well as the theoretical limits for predictions over time. On free-viewing data our model performs better than classical bottom-up saliency models, but worse than the current deep-learning-based saliency models that incorporate higher-level information such as knowledge about objects. However, on search data all saliency models perform worse than the optimal image-independent prediction. We observe that the fixation density in free viewing is not stationary over time, but changes over the course of a trial. It starts with a pronounced central fixation bias on the first chosen fixation, which is nonetheless influenced by image content. From the 2nd to 3rd fixation onwards, the fixation density is already well predicted by later densities, but is more concentrated. From there the fixation distribution broadens until it reaches a stationary distribution around the 10th fixation. Taken together, these observations argue against bottom-up saliency as a mechanistic explanation for eye movement control after the initial orienting reaction in the first one to two saccades, although we confirm the predictive value of early visual representations for fixation locations. The fixation distribution is, first, not well described by any stationary density; second, predicted better when object information is included; and, third, badly predicted by any saliency model in a search task.
An efficient selection of indexes is indispensable for database performance. For large problem instances with hundreds of tables, existing approaches are not suitable: They either exhibit prohibitive runtimes or yield far from optimal index configurations by strongly limiting the set of index candidates or not handling index interaction explicitly. We introduce a novel recursive strategy that does not exclude index candidates in advance and effectively accounts for index interaction. Using large real-world workloads, we demonstrate the applicability of our approach. Further, we evaluate our solution end to end with a commercial database system using a reproducible setup. We show that our solutions are near-optimal for small index selection problems. For larger problems, our strategy outperforms state-of-the-art approaches in both scalability and solution quality.
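The recursive selection loop can be sketched as follows (a toy version with a hypothetical what-if cost function and a hypothetical size_bytes attribute; the paper's candidate handling is more involved). Re-evaluating the cost after every pick is what lets the recursion account for index interaction:

    # `cost(config)` is a hypothetical what-if estimate of total workload
    # cost under a given index configuration (a frozenset of candidates).
    def select_indexes(candidates, chosen, budget, cost):
        base = cost(chosen)
        best, best_ratio = None, 0.0
        for idx in candidates:
            size = idx.size_bytes
            if size and size <= budget:
                benefit = base - cost(chosen | {idx})  # re-evaluated each round
                if benefit / size > best_ratio:
                    best, best_ratio = idx, benefit / size
        if best is None:
            return chosen          # budget exhausted or no index still helps
        return select_indexes(candidates - {best}, chosen | {best},
                              budget - best.size_bytes, cost)

Because the benefit of every remaining candidate is recomputed against the configuration chosen so far, an index that is made redundant by an earlier pick naturally drops out of consideration.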
SpringFit
(2019)
Joints are crucial to laser cutting as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to changes in stiffness, as they occur when switching to a different material. We present a software tool called springFit that resolves this problem by replacing the problematic press fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used springFit to convert 14 models downloaded from the web.
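Why longer springs tolerate larger variations follows from standard cantilever beam theory: the tip force for a deflection d is F = 3*E*I*d / L^3, so the force change caused by a kerf-induced change in deflection falls off with the cube of the spring length. A small illustrative calculation (the material values are generic examples, not from the paper):

    def cantilever_force(deflection, length, width, thickness, e_modulus):
        # F = 3*E*I*d / L^3 with I = w*t^3/12; mm and MPa give force in N.
        i = width * thickness**3 / 12.0
        return 3.0 * e_modulus * i * deflection / length**3

    # A 1 mm thick spring in a plywood-like material (E ~ 9000 MPa), pressed
    # 0.3 mm into its opening; compare against 0.1 mm of extra kerf.
    for length in (10.0, 20.0, 40.0):
        nominal = cantilever_force(0.3, length, 4.0, 1.0, 9000.0)
        with_kerf = cantilever_force(0.2, length, 4.0, 1.0, 9000.0)
        print(f"L={length:4.0f} mm: {nominal:6.2f} N vs {with_kerf:6.2f} N")

Doubling the spring length reduces the force change per unit of kerf variation by a factor of eight, which is why longer springs can absorb even very high variations.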
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, storing and processing rapidly accumulating trajectory data is a non-trivial task. In this paper, we present a detailed survey of state-of-the-art trajectory data management systems. To determine the relevant aspects of and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems that are optimized for scalability, data footprint reduction, elasticity, or query performance. To give a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
Leveraging spatio-temporal soccer data to define a graphical query language for game recordings
(2019)
For professional soccer clubs, performance and video analysis are an integral part of the preparation and post-processing of games. Coaches, scouts, and video analysts extract information about strengths and weaknesses of their team as well as opponents by manually analyzing video recordings of past games. Since video recordings are an unstructured data source, it is a complex and time-intensive task to find specific game situations and identify similar patterns. In this paper, we present a novel approach to detect patterns and situations (e.g., playmaking and ball passing of midfielders) based on trajectory data. The application uses the metaphor of a tactic board to offer a graphical query language. With this interactive tactic board, the user can model a game situation or mark a specific situation in the video recording for which all matching occurrences in various games are immediately displayed, and the user can directly jump to the corresponding game scene. Through the additional visualization of key performance indicators (e.g., the physical load of the players), the user can get a better overall assessment of situations. With the capabilities to find specific game situations and complex patterns in video recordings, the interactive tactic board serves as a useful tool to improve the video analysis process of professional sports teams.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems, but also with coworkers and customers with a non-IT background. The “bachelor project” is a project format based on teamwork and close collaboration with selected industry partners. The authors have hosted some of the teams since the spring term of 2014/15. In the paper at hand we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack that has been used by the teams is evaluated to understand how self-organized students work in IT-related projects. We show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems
(2019)
In this paper, we consider the linear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered in the reproducing kernel Hilbert space framework to reconstruct the estimator from the random noisy data. We discuss the rates of convergence for the regularized solution under the prior assumptions and a link condition. For regression functions with smoothness given in terms of source conditions, the error bound can be established explicitly.
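In standard notation, the scheme takes the following form (a generic sketch of Tikhonov regularization in Hilbert scales, not copied from the paper): given samples (x_i, y_i) with y_i = (Af)(x_i) + noise,

    \hat f_\lambda = \operatorname*{arg\,min}_{f \in \mathcal{H}}
        \frac{1}{n} \sum_{i=1}^{n} \bigl( (Af)(x_i) - y_i \bigr)^2
        + \lambda \, \lVert B^{a} f \rVert_{\mathcal{H}}^{2},

where A is the forward operator and the unbounded operator B generates the Hilbert scale; the penalty is called oversmoothing when the true regression function does not satisfy \lVert B^{a} f \rVert < \infty.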
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant-load moderate-intensity exercise (CME) for more than 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain the target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems, or drink-mixing machines. This offers the opportunity to reflect on the teaching of physical computing in computer science education, where, similarly, the creation of the product is often focused on so strongly that reflection on the learning process is pushed into the background.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and the information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
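The greedy search can be illustrated with a toy hunt for a minimal identifying attribute set (a simplified stand-in for the paper's maximal partial unique attribute combinations; data and names are made up):

    def is_unique(rows, attrs):
        # True if the projection onto `attrs` re-identifies every record.
        projected = [tuple(r[a] for a in attrs) for r in rows]
        return len(set(projected)) == len(projected)

    def minimal_identifying_set(rows, attributes):
        # Greedily add the attribute that best separates the records.
        chosen, remaining = [], list(attributes)
        while remaining and not is_unique(rows, chosen):
            def distinct(a):
                return len({tuple(r[x] for x in chosen + [a]) for r in rows})
            best = max(remaining, key=distinct)
            chosen.append(best)
            remaining.remove(best)
        return chosen

    rows = [
        {"age": 34, "zip": "14482", "diagnosis": "flu"},
        {"age": 34, "zip": "14469", "diagnosis": "flu"},
        {"age": 51, "zip": "14482", "diagnosis": "asthma"},
    ]
    print(minimal_identifying_set(rows, ["age", "zip", "diagnosis"]))
    # -> ['age', 'zip']: only this combination needs treatment here.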
This is a correction notice for ‘Post-adiabatic supernova remnants in an interstellar magnetic field: oblique shocks and non-uniform environment’ (DOI: https://doi.org/10.1093/mnras/sty1750), which was published in MNRAS 479, 4253–4270 (2018). The publisher regrets to inform that the colour was missing from the colour scales in Figs 8(a)–(d) and Figs 9(a) and (b). This has now been corrected online. The publisher apologizes for this error.
High-throughput RNA sequencing produces large gene expression datasets whose analysis leads to a better understanding of diseases like cancer. The nature of RNA-Seq data poses challenges to its analysis in terms of its high dimensionality, noise, and the complexity of the underlying biological processes. Researchers apply traditional machine learning approaches, e.g. hierarchical clustering, to analyze this data. Until it comes to the validation of the results, the analysis is based on the provided data only and completely misses the biological context. However, gene expression data follows particular patterns: the underlying biological processes. In our research, we aim to integrate the available biological knowledge earlier in the analysis process. We want to adapt state-of-the-art data mining algorithms to consider the biological context in their computations and deliver meaningful results for researchers.
Predictive coding and its generalization to active inference offer a unified theory of brain function. The underlying predictive processing paradigm has gained significant attention in artificial intelligence research for its representation learning and predictive capacity. Here, we suggest that it is possible to integrate human and artificial generative models with a predictive coding network that processes sensations simultaneously with the signature of predictive coding found in human neuroimaging data. We propose a recurrent hierarchical predictive coding model that predicts low-dimensional representations of stimuli, electroencephalogram, and physiological signals using variational inference. We suggest that in a shared environment, such hybrid predictive coding networks learn to incorporate the human predictive model in order to reduce prediction error. We evaluate the model on a publicly available EEG dataset of subjects watching one-minute-long video excerpts. Our initial results indicate that the model can be trained to predict visual properties such as the number, distance, and motion of human subjects in videos.
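A minimal numpy sketch of a single predictive coding layer in the Rao-Ballard style illustrates the paradigm (the dimensions, rates, and simple linear generative model are our choices, not the paper's recurrent hierarchical architecture):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=64)              # a sensory frame (e.g. EEG features)
    W = 0.1 * rng.normal(size=(64, 8))   # generative map: latent -> input
    z = np.zeros(8)                      # low-dimensional representation

    lr_z, lr_w = 0.1, 0.01
    for _ in range(200):
        e = x - W @ z                    # prediction error signal
        z += lr_z * (W.T @ e - z)        # inference: move latent to cut error
        W += lr_w * np.outer(e, z)       # learning: Hebbian-like update

    print("mean squared prediction error:", float(np.mean(e ** 2)))

Inference and learning both run on the same error signal, which is the property that makes predictive coding attractive as a shared currency between human and artificial generative models.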
In the present study, the charge distribution and the charge transport across the thickness of two- and three-dimensional polymer nanodielectrics were investigated. Chemically surface-treated polypropylene (PP) films and low-density polyethylene nanocomposite films with 3 wt% of magnesium oxide (LDPE/MgO) served as examples of 2-D and 3-D nanodielectrics, respectively. Surface charges were deposited onto the non-metallized surfaces of the one-side-metallized polymer films and were found to broaden, and thus to enter the bulk of the films, upon thermal stimulation at suitably elevated temperatures. The resulting space-charge profiles in the thickness direction were probed by means of Piezoelectrically generated Pressure Steps (PPSs). It was observed that the chemical surface treatment of PP, which led to the formation of nano-structures, or the use of bulk nanoparticles in the LDPE/MgO nanocomposites enhances charge trapping on or in the respective polymer films and also reduces charge transport inside the respective samples.
Words as social tools
(2019)
Working in iterations and repeatedly improving team workflows based on collected feedback is fundamental to agile software development processes. Scrum, the most popular agile method, provides dedicated retrospective meetings to reflect on the last development iteration and to decide on process improvement actions. However, agile methods do not prescribe in detail how these improvement actions should be identified, managed, or tracked. Approaches to detecting and removing problems in software development processes are therefore often based on intuition and on the prior experiences and perceptions of team members. Previous research in this area has focused on approaches to elicit a team's improvement opportunities, as well as on measurements regarding the work performed in an iteration, e.g. Scrum burn-down charts. Little research deals with the quality and nature of the identified problems or with how progress towards removing issues is measured. In this research, we investigate how agile development teams in the professional software industry organize their feedback and process improvement approaches. In particular, we focus on the structure and content of improvement and reflection meetings, i.e. retrospectives, and their outcomes. Researching how the vital mechanism of process improvement is implemented in practice in modern software development leads to a more complete picture of agile process improvement.
Feedback in Scrum
(2019)
Improving the way that teams work together by reflecting on and improving the executed process is at the heart of agile processes. The idea of iterative process improvement takes various forms in different agile development methodologies, e.g. Scrum Retrospectives. However, these methods do not prescribe in detail how improvement steps should be conducted. In this research, we investigate how agile software teams can use their development data, such as commits or tickets created during regular development activities, to drive and track process improvement steps. Our previous research focused on data-informed process improvement in the context of student teams, where controlled circumstances and deep domain knowledge allowed the creation and use of specific process measures. Encouraged by positive results in this area, we investigate the process improvement approaches employed by industry teams. Researching how the vital mechanism of process improvement is implemented and how development data is already being used in practice in modern software development leads to a more complete picture of agile process improvement. It is the first step in enabling a data-informed feedback and improvement process, tailored to a team's context and based on the development data of individual teams.
Mise-Unseen
(2019)
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen that unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when physical props are lacking, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable by using gaze in combination with common masking techniques.
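The gating logic can be pictured as a simple rule (a much-simplified hypothetical sketch; the published system uses richer models of attention, intention, and spatial memory): commit a change only once gaze has stayed sufficiently far from the target object for some time.

    import math, time

    GAZE_MARGIN_DEG = 15.0   # assumed angular distance counted as unattended
    DWELL_AWAY_S = 0.5       # assumed time gaze must stay away

    def angle_deg(a, b):     # angle between two unit gaze/target vectors
        dot = sum(x * y for x, y in zip(a, b))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

    class CovertChanger:
        def __init__(self):
            self.away_since = None

        def maybe_apply(self, gaze_dir, target_dir, apply_change):
            if angle_deg(gaze_dir, target_dir) > GAZE_MARGIN_DEG:
                self.away_since = self.away_since or time.time()
                if time.time() - self.away_since >= DWELL_AWAY_S:
                    apply_change()          # change lands outside attention
                    self.away_since = None
                    return True
            else:
                self.away_since = None      # user looked back; hold off
            return False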
Nowadays, structural health monitoring (SHM) of critical infrastructures is considered of primary importance, especially for managing transport infrastructure. However, most current SHM methodologies are based on point sensors, which show various limitations relating to their spatial positioning capabilities, cost of development, and measurement range. This publication describes the progress in the SENSKIN EC co-funded research project, which is developing a dielectric-elastomer sensor, formed from a large, highly extensible capacitance-sensing membrane and supported by advanced micro-electronic circuitry, for monitoring transport-infrastructure bridges. The sensor under development provides spatial measurements of strain in excess of 10%, while the sensing system is being designed to be easy to install, require low power in operation, require simple signal processing, and have the ability to self-monitor and report. An appropriate wireless sensor network is also being designed and developed, supported by local gateways for the required data collection and exploitation. SENSKIN also develops a Decision-Support-System (DSS) for proactive condition-based structural interventions under normal operating conditions and reactive emergency intervention following an extreme event. The latter is supported by a life-cycle-costing (LCC) and life-cycle-assessment (LCA) module that covers the total internal and external costs of the identified bridge rehabilitation and the analysis of options, yielding figures for assessing the economic implications of the bridge rehabilitation work as well as the environmental impacts of the bridge rehabilitation options and of the associated secondary effects. The overall monitoring system will be evaluated and benchmarked on actual bridges of the Egnatia Highway (Greece) and the Bosporus Bridge (Turkey).
Based on the notion that time, space, and number are part of a generalized magnitude system, we assume that the dual-systems approach to temporal cognition also applies to numerical cognition. Referring to theoretical models of the development of numerical concepts, we propose that children's early skills in processing numbers can be described analogously to temporal updating and temporal reasoning.
Catholicism
(2019)
Introduction
(2019)
We review the evidence for a putative early 21st-century divergence between global mean surface temperature (GMST) and Coupled Model Intercomparison Project Phase 5 (CMIP5) projections. We provide a systematic comparison between temperatures and projections using historical versions of GMST products and historical versions of model projections that existed at the times when claims about a divergence were made. The comparisons are conducted with a variety of statistical techniques that correct for problems in previous work, including using continuous trends and a Monte Carlo approach to simulate internal variability. The results show that there is no robust statistical evidence for a divergence between models and observations. The impression of a divergence early in the 21st century was caused by various biases in model interpretation and in the observations, and was unsupported by robust statistics.
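The Monte Carlo step can be illustrated schematically (synthetic AR(1) internal variability and made-up trend numbers, not the study's data or code): simulate many realizations of variability around a model trend and ask where the observed trend falls in the resulting distribution.

    import numpy as np

    rng = np.random.default_rng(1)
    n_years = 15
    model_trend = 0.02    # hypothetical projected trend, K per year
    obs_trend = 0.008     # hypothetical observed trend over the same span

    def ar1(n, phi=0.6, sigma=0.1):   # red noise standing in for variability
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
        return x

    t = np.arange(n_years)
    sims = np.array([np.polyfit(t, model_trend * t + ar1(n_years), 1)[0]
                     for _ in range(10_000)])

    # Fraction of realizations with a trend at least as low as observed:
    print(f"p = {np.mean(sims <= obs_trend):.3f}")

If this fraction is not small, the apparent shortfall is consistent with internal variability alone, which is the sense in which no robust divergence between models and observations is found.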
Diffusion of cosmic rays (CRs) is the key process for understanding their propagation and acceleration. We employ the description of the spatial separation of magnetic field lines in magnetohydrodynamic turbulence from Lazarian & Vishniac to quantify the divergence of the magnetic field on scales smaller than the injection scale of turbulence, and we show that this divergence induces superdiffusion of CRs in the direction perpendicular to the mean magnetic field. The perpendicular displacement squared increases not as the distance x along the magnetic field, which is the case for regular diffusion, but as x^3 for freely streaming CRs. The dependence changes to x^(3/2) for CRs propagating diffusively along the magnetic field. In the latter case, we show that it is important to distinguish the perpendicular displacement with respect to the mean field from that with respect to the local magnetic field. We consider how superdiffusion changes the acceleration of CRs in shocks and show how it decreases the efficiency of CR acceleration in perpendicular shocks. We also demonstrate that in the case when the small-scale magnetic field is generated in the pre-shock region, efficient acceleration can take place for CRs streaming without collisions along the magnetic loops.
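Collecting the scalings stated above, with \langle \delta y^{2} \rangle the mean squared perpendicular displacement and x the distance along the mean magnetic field:

    \langle \delta y^{2} \rangle \propto
    \begin{cases}
        x, & \text{regular diffusion},\\[2pt]
        x^{3}, & \text{freely streaming CRs (superdiffusion)},\\[2pt]
        x^{3/2}, & \text{CRs diffusing along the magnetic field}.
    \end{cases}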
It has been known for decades that the winds of massive stars are inhomogeneous (i.e. clumped). To properly model observed spectra of massive-star winds, it is necessary to incorporate the 3-D nature of clumping into radiative transfer calculations. In this paper we present our full 3-D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. We use a set of parameters to describe the dense as well as the rarefied wind components. At the same time, we account for non-monotonic velocity fields. We show how 3-D density and velocity wind inhomogeneities strongly affect resonance line formation. We also show how wind clumping can resolve the discrepancy between P v and Hα mass-loss rate diagnostics.