Editorial: Reaching to Grasp Cognition: Analyzing Motor Behavior to Investigate Social Interactions
(2018)
Utilizing quad-trees for efficient design space exploration with partial assignment evaluation
(2018)
Recently, it has been shown that constraint-based symbolic solving techniques offer an efficient way for deciding binding and routing options in order to obtain a feasible system-level implementation. In combination with various background theories, a feasibility analysis of the resulting system may already be performed on partial solutions. That is, infeasible subsets of mapping and routing options can be pruned early in the decision process, which speeds up solving accordingly. However, a proper design space exploration including multi-objective optimization also requires an efficient structure for storing and managing non-dominated solutions. In this work, we propose and study the usage of the Quad-Tree data structure in the context of partial assignment evaluation during system synthesis. Our experiments show that unnecessary dominance checks can be avoided, which indicates a preference for Quad-Trees over a commonly used list-based implementation for large combinatorial optimization problems.
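A minimal sketch of the dominance checks at the heart of such an archive, assuming minimization of all objectives: the list-based baseline below scans every stored solution on each insert, which is exactly the overhead a Quad-Tree organization aims to reduce (objective vectors and counters are illustrative, not from the paper).

```python
# List-based non-dominated archive (illustrative baseline, not the paper's implementation).

def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class ListArchive:
    """Keeps only non-dominated objective vectors; every insert scans the whole list."""
    def __init__(self):
        self.solutions = []
        self.checks = 0  # number of pairwise dominance checks performed

    def insert(self, candidate):
        kept = []
        for s in self.solutions:
            self.checks += 1
            if dominates(s, candidate):
                return False          # candidate is dominated, archive unchanged
            if not dominates(candidate, s):
                kept.append(s)        # s survives only if not dominated by the candidate
        kept.append(candidate)
        self.solutions = kept
        return True

if __name__ == "__main__":
    archive = ListArchive()
    for vec in [(3, 5), (4, 4), (2, 6), (5, 3), (3, 3)]:
        archive.insert(vec)
    print(archive.solutions, "checks:", archive.checks)
```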
Imaginar la nación
(2018)
The problem of constructing and maintaining a tree topology in a distributed manner is a challenging task in WSNs, because the nodes have limited computational and memory resources and the network changes over time. We propose the Dynamic Gallager-Humblet-Spira (D-GHS) algorithm, which builds and maintains a minimum spanning tree. To do so, we divide D-GHS into four phases, namely neighbor discovery, tree construction, data collection, and tree maintenance. In the neighbor discovery phase, the nodes collect information about their neighbors and the link quality. In the tree construction phase, D-GHS finds the minimum spanning tree by executing the Gallager-Humblet-Spira algorithm. In the data collection phase, the sink roots the minimum spanning tree at itself, and each node sends data packets. In the tree maintenance phase, the nodes repair the tree when communication failures occur. The emulation results show that D-GHS reduces the number of control messages and the energy consumption, at the cost of a slight increase in memory size and convergence time.
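For illustration only, here is a centralized Kruskal computation of a minimum spanning tree over hypothetical link-quality weights; D-GHS arrives at the same tree, but in a distributed fashion via the Gallager-Humblet-Spira algorithm, which this sketch does not reproduce.

```python
# Centralized Kruskal MST over invented link-quality weights (stand-in, not D-GHS itself).

def kruskal_mst(nodes, edges):
    """edges: iterable of (weight, u, v); returns the MST as a list of (u, v, weight)."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding the edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

if __name__ == "__main__":
    nodes = ["sink", "a", "b", "c", "d"]
    edges = [(1, "sink", "a"), (4, "sink", "b"), (2, "a", "b"),
             (7, "a", "c"), (3, "b", "c"), (5, "c", "d"), (6, "b", "d")]
    for u, v, w in kruskal_mst(nodes, edges):
        print(f"{u} -- {v}  (weight {w})")
```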
High-storage-density magnetic devices rely on precise, reliable, and ultrafast switching of magnetic states. Optical control of magnetization using femtosecond lasers, without applying any external magnetic field, offers the advantage of switching magnetic states at ultrashort time scales and has therefore attracted significant attention. Recently, the so-called all-optical helicity-dependent switching (AO-HDS) has been reported and demonstrated, in which a circularly polarized femtosecond laser pulse switches the magnetization of a ferromagnetic thin film as a function of the laser helicity [1]. More recent studies have reported that AO-HDS is a general phenomenon existing in magnetic materials ranging from rare-earth/transition-metal ferrimagnets (e.g., alloys, multilayers, and heterostructure systems) to ferromagnetic thin films. Among the numerous studies in the literature discussing the microscopic origin of AO-HDS in ferromagnets or ferrimagnetic alloys, the most renowned concepts are momentum transfer via the Inverse Faraday Effect (IFE) [1-3] and preferential thermal demagnetization of one magnetization direction by heating close to Tc (the Curie temperature) in the presence of magnetic circular dichroism (MCD) [4-6]. In this study, we investigate all-optical magnetic switching using a stationary femtosecond laser spot (3-5 μm) in TbFe alloys via photoemission electron microscopy (PEEM) and x-ray magnetic circular dichroism (XMCD) with a spatial resolution of approximately 30 nm. We spatially characterize the effect of laser heating and of the local temperature profile created across the laser spot on AO-HDS in TbFe thin films. We find that AO-HDS occurs only in a 'ring'-shaped region surrounding the thermally demagnetized region formed by the laser spot, and that the formation of switched domains further relies on thermally induced domain wall motion. Our temperature-dependent measurements highlight the importance of attainin...
Modern server systems with large NUMA architectures necessitate (i) data being distributed over the available computing nodes and (ii) NUMA-aware query processing to enable effective parallel processing in database systems. As these architectures incur significant latency and throughput penalties for accessing non-local data, queries should be executed as close as possible to the data. To further increase both performance and efficiency, data that is not relevant for the query result should be skipped as early as possible. One way to achieve this goal is horizontal partitioning to improve static partition pruning. As part of our ongoing work on workload-driven partitioning, we have implemented a recent approach called aggressive data skipping and extended it to handle both analytical and transactional access patterns. In this paper, we evaluate this approach with the workload and data of a production enterprise system of a Global 2000 company. The results show that over 80% of all tuples can be skipped on average while the resulting partitioning schemata remain surprisingly stable over time.
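A rough sketch of the static partition pruning idea, assuming a hypothetical two-column schema and per-partition min/max metadata; the paper's aggressive data-skipping approach derives the partition boundaries from the workload, which this toy example does not.

```python
# Partition pruning via per-partition min/max metadata (illustrative schema and data).
from dataclasses import dataclass

@dataclass
class Partition:
    rows: list          # tuples of (customer_id, amount) -- hypothetical schema
    min_amount: float
    max_amount: float

def build_partitions(rows, size):
    parts = []
    for i in range(0, len(rows), size):
        chunk = rows[i:i + size]
        amounts = [r[1] for r in chunk]
        parts.append(Partition(chunk, min(amounts), max(amounts)))
    return parts

def query_amount_greater_than(parts, threshold):
    """Scan only partitions whose max exceeds the threshold; skip the rest."""
    scanned, skipped, result = 0, 0, []
    for p in parts:
        if p.max_amount <= threshold:
            skipped += 1           # pruned without touching any tuple
            continue
        scanned += 1
        result.extend(r for r in p.rows if r[1] > threshold)
    return result, scanned, skipped

if __name__ == "__main__":
    rows = sorted(((i, float(i % 100)) for i in range(10_000)), key=lambda r: r[1])
    parts = build_partitions(rows, 500)
    hits, scanned, skipped = query_amount_greater_than(parts, 90.0)
    print(len(hits), "matching rows;", scanned, "partitions scanned,", skipped, "skipped")
```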
Preface
(2018)
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, it is necessary for a company to possess expert knowledge. The difficulty arises when experts no longer work for the company but their knowledge is still needed to realise an IT project. In this paper, we present the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects and consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that three approaches, namely the elicitation of expert knowledge, the success of an IT project, and the discovery of new knowledge gleaned from the expert knowledge base (the decision model), complement each other.
No other means of communication shapes our everyday life through its seemingly unrestricted possibilities more than the internet. From the mid-90s onwards, more and more technical advancements in the field of communication have appeared on the market, which in turn call for new terminology. In the first place, it is the internet (essentially based on the interaction between users and experts) which requires effective nomenclature in order to mediate between lay users with their restricted knowledge on the one hand, and experts with their sophisticated terminology on the other. At the interface between new and complex realities and the need for simple linguistic access, a huge quantity of metaphoric denominations is used, making abstract innovations more comprehensible. Metaphor in internet discourse serves to "reduce verticality" (Stenschke 2006) between specialized terminology and common language. The paper deals with metaphors based on spatial concepts. Space and spatiality play a key role in cognitive theories of metaphor, as these theories themselves (according to Lakoff/Johnson 1980) are often based on the application of spatial concepts to non-spatial relations. After describing spatial concepts in general (with reference to the internet), the paper explores which kinds of metaphor take advantage of the complexity present in the internet and how the medial space is linguistically recaptured in terms of spatial perception.
The Aral Sea desiccation and the related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern Aral Sea basin hydrological regime is its discontinuous nature: only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show a good prediction skill and confirm the possibility of developing a valuable water assessment tool which utilizes the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on a GitHub page (https://github.com/SMASHIproject/IWRM2018).
The electret state stability in nonpolar semicrystalline polymers is largely determined by the traps located at crystalline/amorphous phase interfaces. Thus, the thermal history of such polymers should considerably influence their electret properties. In the present work, we investigate how recrystallization influences charge stability in low-density polyethylene corona electrets. It has been found that the electret charge stability in quenched samples is higher than in slowly crystallized ones. Phenomenologically, this can be explained by the increased number of deeper traps in samples with smaller crystallite size.
Why choice matters
(2018)
Measures of democracy are in high demand. Scientific and public audiences use them to describe political realities and to substantiate causal claims about those realities. This introduction to the thematic issue reviews the history of democracy measurement since the 1950s. It identifies four development phases of the field, which are characterized by three recurrent topics of debate: (1) what is democracy, (2) what is a good measure of democracy, and (3) do our measurements of democracy register real-world developments? As the answers to those questions have been changing over time, the field of democracy measurement has adapted and reached higher levels of theoretical and methodological sophistication. In effect, the challenges facing contemporary social scientists are not only limited to the challenge of constructing a sound index of democracy. Today, they also need a profound understanding of the differences between various measures of democracy and their implications for empirical applications. The introduction outlines how the contributions to this thematic issue help scholars cope with the recurrent issues of conceptualization, measurement, and application, and concludes by identifying avenues for future research.
This introductory essay to the HSR Special Issue “Economists, Politics, and Society” argues for a strong field-theoretical programme inspired by Pierre Bourdieu to research economic life as an integral part of different social forms. Its main aim is threefold. First, we spell out the very distinct Durkheimian legacy in Bourdieu’s thinking and the way he applies it in researching economic phenomena. Without this background, much of what is actually part of how Bourdieu analysed economic aspects of social life would be overlooked or reduced to mere economic sociology. Second, we sketch the main theoretical concepts and heuristics used to analyse economic life from a field perspective. Third, we focus on practical methodological issues of field-analytical research into economic phenomena. We conclude with a short summary of the basic characteristics of this approach and discuss the main insights provided by the contributions to this special issue.
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradation. First, using three well-known DNNs (ResNet-152, VGG-19, GoogLeNet), we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error patterns between humans and DNNs as the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness to uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge for deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations set by the human visual system.
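Two of the distortion types mentioned above can be sketched in a few lines of NumPy; the parameterization below (noise amount and width) is an assumption for illustration, not the paper's experimental protocol.

```python
# Illustrative image degradations: salt-and-pepper noise vs. additive uniform noise.
import numpy as np

def salt_and_pepper(img, amount=0.1, rng=None):
    """Set a fraction `amount` of pixels to 0 or 1 at random (img values in [0, 1])."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.integers(0, 2, size=mask.sum()).astype(img.dtype)
    return out

def uniform_noise(img, width=0.2, rng=None):
    """Add zero-mean uniform noise of the given width and clip back to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.uniform(-width / 2, width / 2, img.shape), 0.0, 1.0)

if __name__ == "__main__":
    img = np.random.default_rng(42).random((224, 224))   # stand-in for a test image
    print(salt_and_pepper(img).shape, uniform_noise(img).shape)
```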
Metamaterial Devices
(2018)
In our hands-on demonstration, we show several objects whose functionality is defined by the objects' internal micro-structure. Such metamaterial machines can (1) be mechanisms based on their microstructures, (2) employ simple mechanical computation, or (3) change their outside to interact with their environment. They are 3D printed from one piece, and we support their creation by providing interactive software tools.
The problem of atmospheric emission from OH molecules is a long-standing problem for near-infrared astronomy. PRAXIS is a unique spectrograph which is fed by fibres that remove the OH background and is optimised specifically to benefit from OH suppression. The OH suppression is achieved with fibre Bragg gratings, which were tested successfully on the GNOSIS instrument. PRAXIS uses the same fibre Bragg gratings as GNOSIS in its first implementation, and will exploit new, cheaper and more efficient multicore fibre Bragg gratings in the second implementation. The OH lines are suppressed by a factor of ~1000, and the expected increase in the signal-to-noise ratio in the interline regions compared to GNOSIS is a factor of ~9 with the GNOSIS gratings and a factor of ~17 with the new gratings. PRAXIS will enable the full exploitation of OH suppression for the first time, which was not achieved by GNOSIS (a retrofit to an existing instrument that was not optimised for OH suppression) due to high thermal emission, low spectrograph transmission and detector noise. PRAXIS has extremely low thermal emission, through the cooling of all significantly emitting parts, including the fore-optics, the fibre Bragg gratings, a long length of fibre, and the fibre slit, and an optical design that minimises leaks of thermal emission from outside the spectrograph. PRAXIS has low detector noise through the use of a Hawaii-2RG detector, and a high throughput through an efficient VPH-based spectrograph. PRAXIS will determine the absolute level of the interline continuum and enable observations of individual objects via an IFU. In this paper we give a status update and report on acceptance tests.
Operational decisions in business processes can be modeled by using the Decision Model and Notation (DMN). The complementary use of DMN for decision modeling and of the Business Process Model and Notation (BPMN) for process design realizes the separation of concerns principle. For supporting separation of concerns during the design phase, it is crucial to understand which aspects of decision-making enclosed in a process model should be captured by a dedicated decision model. Whereas existing work focuses on the extraction of decision models from process control flow, the connection of process-related data and decision models is still unexplored. In this paper, we investigate how process-related data used for making decisions can be represented in process models and we distinguish a set of BPMN patterns capturing such information. Then, we provide a formal mapping of the identified BPMN patterns to corresponding DMN models and apply our approach to a real-world healthcare process.
An efficient Design Space Exploration (DSE) is imperative for the design of modern, highly complex embedded systems in order to steer the development towards optimal design points. The early evaluation of design decisions at system-level abstraction layer helps to find promising regions for subsequent development steps in lower abstraction levels by diminishing the complexity of the search problem. In recent works, symbolic techniques, especially Answer Set Programming (ASP) modulo Theories (ASPmT), have been shown to find feasible solutions of highly complex system-level synthesis problems with non-linear constraints very efficiently. In this paper, we present a novel approach to a holistic system-level DSE based on ASPmT. To this end, we include additional background theories that concurrently guarantee compliance with hard constraints and perform the simultaneous optimization of several design objectives. We implement and compare our approach with a state-of-the-art preference handling framework for ASP. Experimental results indicate that our proposed method produces better solutions with respect to both diversity and convergence to the true Pareto front.
Business process simulation is an important means for the quantitative analysis of a business process and for comparing different process alternatives. With the Business Process Model and Notation (BPMN) being the state-of-the-art language for the graphical representation of business processes, many existing process simulators already support the simulation of BPMN diagrams. However, they do not provide well-defined interfaces to integrate new concepts into the simulation environment. In this work, we present the design and architecture of a proof-of-concept implementation of an open and extensible BPMN process simulator. It also supports the simulation of multiple BPMN processes at a time and relies on the building blocks of well-founded discrete event simulation. Extensibility is assured by a plug-in concept. Its feasibility is demonstrated by extensions supporting new BPMN concepts, such as the simulation of business rule activities referencing decision models and of batch activities.
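A minimal discrete-event core of the kind such a simulator builds on might look as follows; the `Simulator` class, the activity callbacks, and the arrival times are illustrative stand-ins, not the actual engine or its plug-in interface.

```python
# Minimal discrete-event-simulation loop (illustrative; BPMN semantics and plug-ins omitted).
import heapq
import itertools

class Simulator:
    def __init__(self):
        self._queue = []                 # entries: (time, tie-breaker, callback, args)
        self._counter = itertools.count()
        self.now = 0.0

    def schedule(self, delay, callback, *args):
        heapq.heappush(self._queue, (self.now + delay, next(self._counter), callback, args))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback, args = heapq.heappop(self._queue)
            callback(*args)              # an event may schedule follow-up events

def start_activity(sim, case_id, duration):
    print(f"t={sim.now:5.1f}  case {case_id}: activity started")
    sim.schedule(duration, end_activity, sim, case_id)

def end_activity(sim, case_id):
    print(f"t={sim.now:5.1f}  case {case_id}: activity completed")

if __name__ == "__main__":
    sim = Simulator()
    for case_id, arrival in enumerate([0.0, 2.5, 4.0]):   # three hypothetical process instances
        sim.schedule(arrival, start_activity, sim, case_id, 3.0)
    sim.run()
```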
An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing, and network capacity. Manual network and service management of IoT devices is challenging. We propose the usage of an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e., the MQTT protocol and Semantic Web technologies. We introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices are enabled to aggregate capabilities of IoT devices, derive new services by inference, delegate requests/responses, and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the network edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency, and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
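As a rough sketch of such an edge-side virtual device, the following subscribes to raw sensor topics over MQTT, aggregates readings, and republishes a derived value; the broker address, topic layout, and aggregation rule are assumptions, and the semantic/ontology layer is omitted.

```python
# Illustrative edge-side "virtual device": subscribe, pre-process, republish over MQTT.
# Written against the paho-mqtt 1.x callback API (pip install paho-mqtt).
import json
import statistics

import paho.mqtt.client as mqtt

BROKER = "localhost"                  # e.g. a broker running on the Raspberry Pi
RAW_TOPIC = "sensors/+/temperature"   # hypothetical topic layout
AGG_TOPIC = "virtual/room1/temperature"

readings = []

def on_connect(client, userdata, flags, rc):
    client.subscribe(RAW_TOPIC)

def on_message(client, userdata, msg):
    readings.append(float(msg.payload))
    if len(readings) >= 10:           # aggregate every 10 raw readings
        avg = statistics.mean(readings)
        readings.clear()
        client.publish(AGG_TOPIC, json.dumps({"mean_temperature": avg}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```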
Learning how to prove
(2018)
We have developed an alternative approach to teaching computer science students how to prove. First, students are taught how to prove theorems with the Coq proof assistant. In a second, more difficult step, students transfer their acquired skills to the area of textbook proofs. In this article we present a realisation of the second step. Proofs in Coq have a high degree of formality while textbook proofs have only a medium one. Therefore, our key idea is to reduce the degree of formality from the level of Coq to textbook proofs in several small steps. For that purpose we introduce three proof styles between Coq and textbook proofs, called line-by-line comments, weakened line-by-line comments, and structure-faithful proofs. While this article is mostly conceptual, we also report on experiences with putting our approach into practice.
Introduction
(2018)
Foreword
(2018)
Our Conclusions
(2018)
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O link compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations are focused on on-the-fly compression in both single-node and scale-out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated on-the-fly I/O link compression for many workloads in a scale-out scenario, and for some even in a single-node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
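The feasibility argument can be illustrated with a back-of-the-envelope model: an on-the-fly compressed transfer is bounded by the slower of the compression engine and the link carrying the reduced volume. The bandwidth figures and compression ratio below are illustrative assumptions, not the paper's measurements.

```python
# Simple transfer-time model for on-the-fly I/O link compression (illustrative numbers only).

def transfer_time(volume_gb, link_gbps, compression_ratio=1.0, comp_gbps=float("inf")):
    """Seconds to move `volume_gb` gigabytes; ratio 1.0 means no compression."""
    link_time = (volume_gb * 8) / (link_gbps * compression_ratio)   # fewer bits on the wire
    comp_time = (volume_gb * 8) / comp_gbps                         # bound by compressor throughput
    return max(link_time, comp_time)

if __name__ == "__main__":
    volume = 64.0                                    # GB to transfer
    baseline = transfer_time(volume, link_gbps=16)   # e.g. a PCIe-class interconnect
    with_compression = transfer_time(volume, link_gbps=16, compression_ratio=2.0, comp_gbps=60)
    print(f"uncompressed: {baseline:6.1f} s   compressed: {with_compression:6.1f} s")
```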
Beyond Surveys
(2018)
What Stays in Mind?
(2018)
Beacon in the Dark
(2018)
The large amount of heterogeneous data in these email corpora renders experts' investigations by hand infeasible. Auditors or journalists, e.g., who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus on a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
CurEx
(2018)
The integration of diverse structured and unstructured information sources into a unified, domain-specific knowledge base is an important task in many areas. A well-maintained knowledge base enables data analysis in complex scenarios, such as risk analysis in the financial sector or investigating large data leaks such as the Paradise or Panama Papers. Both the creation of such knowledge bases and their continuous maintenance and curation involve many complex tasks and considerable manual effort. With CurEx, we present a modular system that allows structured and unstructured data sources to be integrated into a domain-specific knowledge base. In particular, we (i) enable the incremental improvement of each individual integration component; (ii) enable the selective generation of multiple knowledge graphs from the information contained in the knowledge base; and (iii) provide two distinct user interfaces tailored to the needs of data engineers and end-users respectively. The former has curation capabilities and controls the integration process, whereas the latter focuses on the exploration of the generated knowledge graph.
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in the industry. In this paper, we describe a capstone course design which adds hands-on learning of the lean principles advocated by Kanban to a capstone project run with Scrum. This ensures both that students are aware of recent process frameworks and ideas and that they gain a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our observations. We analyze the development artifacts created by students during the course with respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
An energy consumption model for multimodal wireless sensor networks based on wake-up radio receivers
(2018)
Energy consumption is a major concern in wireless sensor networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio while no transmission is ongoing. The classical approach for allowing the reception of messages in such situations is to use a low-duty-cycle protocol and to turn on the radio periodically, which reduces the idle listening problem but requires timers and usually unnecessary wake-ups. A better solution is to turn on the radio only on demand by using a wake-up radio receiver (WuRx). In this paper, an energy model is presented to estimate the energy savings in various multi-hop network topologies under several use cases when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows for estimating the benefit of various WuRx properties, such as whether addressing is used.
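A toy version of such an energy comparison, with made-up power figures rather than the paper's parameters, could look like this: duty cycling pays for periodic idle listening, whereas a WuRx keeps only an ultra-low-power receiver on and wakes the main radio on demand.

```python
# Illustrative energy comparison over one hour (all power and timing figures are invented).

def duty_cycle_energy(hours, p_rx_mw, duty_cycle, wakeups_per_h, p_tx_mw, t_tx_s):
    """Idle listening during the duty-cycled on-periods plus the actual transmissions."""
    listen_j = p_rx_mw / 1000 * duty_cycle * hours * 3600
    tx_j = p_tx_mw / 1000 * t_tx_s * wakeups_per_h * hours
    return listen_j + tx_j

def wurx_energy(hours, p_wurx_uw, wakeups_per_h, p_rx_mw, p_tx_mw, t_rx_s, t_tx_s):
    """Ultra-low-power WuRx listens continuously; the main radio wakes only on demand."""
    wurx_j = p_wurx_uw / 1_000_000 * hours * 3600
    main_radio_j = (p_rx_mw / 1000 * t_rx_s + p_tx_mw / 1000 * t_tx_s) * wakeups_per_h * hours
    return wurx_j + main_radio_j

if __name__ == "__main__":
    common = dict(hours=1, wakeups_per_h=10, p_tx_mw=60, t_tx_s=0.02)
    ldc = duty_cycle_energy(p_rx_mw=50, duty_cycle=0.01, **common)
    wur = wurx_energy(p_wurx_uw=8, p_rx_mw=50, t_rx_s=0.02, **common)
    print(f"duty cycling: {ldc * 1000:.1f} mJ   wake-up radio: {wur * 1000:.1f} mJ")
```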
Recently blockchain technology has been introduced to execute interacting business processes in a secure and transparent way. While the foundations for process enactment on blockchain have been researched, the execution of decisions on blockchain has not been addressed yet. In this paper we argue that decisions are an essential aspect of interacting business processes, and, therefore, also need to be executed on blockchain. The immutable representation of decision logic can be used by the interacting processes, so that decision taking will be more secure, more transparent, and better auditable. The approach is based on a mapping of the DMN language S-FEEL to Solidity code to be run on the Ethereum blockchain. The work is evaluated by a proof-of-concept prototype and an empirical cost evaluation.
Several areas in Southeast Asia are very vulnerable to climate change and unable to take immediate and effective countermeasures due to insufficient capabilities. Malaysia, in particular the east coast of peninsular Malaysia and Sarawak, is known as one of the regions vulnerable to flood disasters. Prolonged and intense rainfall, natural activities, and increases in runoff are the main causes of flooding in this area. In addition, topographic conditions also contribute to the occurrence of flood disasters. Kuching city is located in the northwest of Borneo Island and is part of the Sarawak river catchment. The area is a developing state in Malaysia that has experienced rapid urbanization since the 2000s, which has led to insufficient data availability in topography and hydrology. To deal with these challenging issues, this study presents a flood modelling framework that uses remote sensing technologies and machine learning techniques to acquire a digital elevation model (DEM) with improved accuracy for the non-surveyed areas. Intensity–duration–frequency (IDF) curves were derived from a climate model for various scenario simulations. The developed flood framework will be beneficial for planners, policymakers, stakeholders, and researchers in the field of water resource management by providing better ideas and tools for dealing with the flooding issues in the region.
PIAnalyzer
(2018)
In this work we propose PIAnalyzer, a novel approach to analyze PendingIntent-related vulnerabilities. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications from the Google Play Store and find 1358 insecure usages of PendingIntents, including 70 severe vulnerabilities. We manually inspected ten reported vulnerabilities, out of which nine were correctly reported, indicating a high precision. The evaluation shows that PIAnalyzer is efficient, with an average execution time of 13 seconds per application.
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable over parametric approaches from the conditioned-generation family when the training data is limited. We do, however, identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the training data limitation.
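A minimal sketch of the retrieval-based flavour of response suggestion, assuming TF-IDF cosine similarity over the known questions (the QA pairs and the similarity choice are illustrative, not the paper's pipeline); requires scikit-learn.

```python
# Retrieval-based response suggestion: return the answer of the most similar known question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [  # invented examples in the spirit of digital-library chat transcripts
    ("How do I renew a borrowed book?", "You can renew it in your library account under 'loans'."),
    ("Where can I find theses from 2018?", "Use the publication server and filter by year."),
    ("What are the opening hours on weekends?", "The library is open from 10am to 6pm on Saturdays."),
]

questions = [q for q, _ in qa_pairs]
vectorizer = TfidfVectorizer().fit(questions)
question_matrix = vectorizer.transform(questions)

def suggest_response(message):
    """Return the stored answer of the most similar known question and the similarity score."""
    sims = cosine_similarity(vectorizer.transform([message]), question_matrix)[0]
    best = sims.argmax()
    return qa_pairs[best][1], float(sims[best])

if __name__ == "__main__":
    print(suggest_response("How can I extend the loan of my book?"))
```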
Modern routing algorithms reduce query time by depending heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to their geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than a normal A*.
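The role of tile loads can be illustrated with a toy grid map: neighbours become available only once a node's tile is "loaded", and a counter tracks how many distinct tiles an A* run touches. The grid, tile size, and tie-breaking rule are assumptions for illustration, not NDS specifics.

```python
# Counting tile loads during A* on a toy tiled grid (illustrative, not NDS data or algorithms).
import heapq

GRID = 64          # 64 x 64 nodes
TILE = 8           # square tiles of 8 x 8 nodes
loaded_tiles = set()

def load_tile_for(node):
    # In a real navigation system this would fetch the tile from flash or the network.
    loaded_tiles.add((node[0] // TILE, node[1] // TILE))

def neighbours(node):
    load_tile_for(node)
    x, y = node
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def astar(start, goal):
    def h(n):  # Manhattan distance, admissible on a unit-cost grid
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    # Tie-break on larger g so the search runs along a direct path instead of flooding.
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        _, neg_g, node = heapq.heappop(open_heap)
        g = -neg_g
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        for nb in neighbours(node):
            ng = g + 1
            if ng < best_g.get(nb, float("inf")):
                best_g[nb] = ng
                heapq.heappush(open_heap, (ng + h(nb), -ng, nb))
    return None

if __name__ == "__main__":
    cost = astar((0, 0), (63, 63))
    print(f"path cost {cost}, tiles loaded: {len(loaded_tiles)} of {(GRID // TILE) ** 2}")
```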
This study examined the relationships between the three phenotypic domains of the triarchic model of psychopathy (boldness, meanness, disinhibition) and electrophysiological indices of inhibitory control (NoGo-N2/NoGo-P3). EEG data from a 256-channel dense array were recorded while participants (135 undergraduates assessed via the Triarchic Psychopathy Measure) performed a Go/NoGo task with three types of stimuli (60% frequent-Go, 20% infrequent-Go, 20% infrequent-NoGo). N2 was defined as the mean amplitude between 240 ms and 340 ms after stimulus onset over fronto-central sensors on correct trials; P300 was defined as the mean amplitude between 350 ms and 550 ms after stimulus onset over centro-parietal sensors on correct trials. Multiple regression analyses using gender-corrected triarchic scores as predictors revealed that only Disinhibition scores significantly predicted reduced NoGo-N2 amplitudes (3.5% explained variance, beta weight = .23, p < .05) and reduced P3 amplitudes for NoGo and infrequent-Go trials (3.1% and 3.2% explained variance, respectively, beta weights = -.21, ps < .05). Our results indicate that high disinhibition entails deviations in early conflict monitoring processes (reduced NoGo-N2), as well as in later evaluative and updating processing stages of infrequent events (reduced NoGo-P3 and infrequent-Go-P3). The null contribution of the meanness and boldness domains suggests that N2 and P3 amplitudes in Go/NoGo tasks could be considered neurobiological indices of the externalizing tendencies comprised in this personality disorder.
Live migration is an important feature in modern software-defined datacenters and cloud computing environments. Dynamic resource management, load balancing, power saving, and fault tolerance all depend on the live migration feature. Despite its importance, the cost of live migration cannot be ignored and may result in service availability degradation. Live migration cost includes the migration time, downtime, CPU overhead, network and power consumption. Many research articles discuss the problem of live migration cost with different scopes, such as analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most of the papers that discuss the migration cost problem focus on open-source hypervisors. Among the research articles that focus on VMware environments, none has proposed migration time, network overhead, and power consumption models for single and multiple VM live migration. In this paper, we propose empirical models for the live migration time, network overhead, and power consumption for single and multiple VM migration. The proposed models are obtained using a VMware-based testbed.
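For context, a commonly used analytic pre-copy approximation (not the empirical VMware models proposed in the paper) estimates migration time by iterating the "copy, then re-send what was dirtied meanwhile" rounds; all figures below are illustrative.

```python
# Analytic pre-copy live-migration estimate (generic approximation, not the paper's models).

def precopy_migration_time(mem_gb, link_gbps, dirty_rate_gbps, stop_threshold_mb=64, max_rounds=30):
    """Return (total migration time [s], downtime [s]) for a single VM."""
    remaining_gb = mem_gb
    total_s = 0.0
    for _ in range(max_rounds):
        copy_s = remaining_gb * 8 / link_gbps          # time to ship the current dirty set
        total_s += copy_s
        remaining_gb = dirty_rate_gbps / 8 * copy_s    # memory dirtied in the meantime
        if remaining_gb * 1024 <= stop_threshold_mb:
            break
    downtime_s = remaining_gb * 8 / link_gbps          # final stop-and-copy round
    return total_s + downtime_s, downtime_s

if __name__ == "__main__":
    total, down = precopy_migration_time(mem_gb=8, link_gbps=10, dirty_rate_gbps=1)
    print(f"migration time ~ {total:.1f} s, downtime ~ {down * 1000:.0f} ms")
```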
Stable covalently photo-cross-linked porous poly(ionic liquid) membrane with gradient pore size
(2018)
Porous polyelectrolyte membranes stable in a highly ionic environment are obtained by covalent crosslinking of an imidazolium-based poly(ionic liquid). The crosslinking reaction involves UV light-induced thiol-ene (click) chemistry, and the phase separation occurring during the crosslinking step generates a fully interconnected porous structure in the membrane. The porosity is on the order of the micrometer scale, and the membrane shows a gradient of pore size across its cross-section. The membrane can separate polystyrene latex particles of different sizes and undergoes actuation in contact with acetone due to its asymmetric porous structure.
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
We investigated online electrophysiological components of distributional learning, specifically of tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables with lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and the mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. Here we present analyses of the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time is analyzed using Generalized Additive Mixed Models; the results show that the MMN amplitude increased for both within- and across-category tokens, reflecting the higher perceptual acuity accompanying category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.