Refine
Year of publication
- 2018 (195)
Document Type
- Other (195)
Is part of the Bibliography
- yes (195)
Keywords
- E-Learning (3)
- Security Metrics (3)
- Security Risk Assessment (3)
- 3D Point Clouds (2)
- Cloud-Security (2)
- Internet of Things (2)
- Kanban (2)
- Lecture Video Archive (2)
- MQTT (2)
- Scrum (2)
- Secure Configuration (2)
- capstone course (2)
- embodied cognition (2)
- thermally stimulated discharge (2)
- 3D printing (1)
- 7924 (1)
- 7934 (1)
- 7959 (1)
- Aerosol (1)
- Agile methods (1)
- Algorithms (1)
- Android (1)
- Answer set programming (1)
- Application Container Security (1)
- Artikelindex (1)
- Asian American studies (1)
- Assays (1)
- Atlantic studies (1)
- Augmented reality (1)
- Automatic domain term extraction (1)
- BIM (1)
- BPMN (1)
- Basic English (1)
- Biological Assay (1)
- Black Pacific (1)
- Blockchain (1)
- Boolean Networks (1)
- Bourdieu (1)
- Business process models (1)
- Business process simulation (1)
- C. K. ogden (1)
- CKD (1)
- CPPS (1)
- CPS (1)
- CT (1)
- CWSI (1)
- Climate Change (1)
- Clock Tree Implementation (1)
- Cloud Audit (1)
- Cloud Service Provider (1)
- Collaborative learning (1)
- Curie transition (1)
- DEM (1)
- DMN (1)
- Danish (1)
- Data compression (1)
- Data mining (1)
- Data mining Machine learning (1)
- Data partitioning (1)
- Data profiling (1)
- Decision models (1)
- Discovery System (1)
- Distance Learning (1)
- Diverse solution enumeration (1)
- Dopamine (1)
- Dutch (1)
- Dänisch (1)
- E-Learning exam preparation (1)
- E-Lecture (1)
- Economic sociology (1)
- Edge Computing (1)
- Educational Data Mining (1)
- Embedded Programming (1)
- Emotion Mining (1)
- Energy efficiency (1)
- English dialects (1)
- Entropy (1)
- Epigenetic Biomarkers (1)
- Expert knowledge (1)
- Extensibility (1)
- Fabrication (1)
- Flash (1)
- Function of the spatial metaphor (1)
- GMDH (1)
- GTEx (1)
- Generalized additive mixed-effects modeling (1)
- Generalized knowledge constructin axiom (1)
- Geospatial intelligence (1)
- HLS (1)
- HTML5 (1)
- Haptics (1)
- Hebraica (1)
- Hierarchical Design (1)
- High Mountain Asia (1)
- IT project (1)
- Icelandic (1)
- Indoor Models (1)
- Industry 4.0 (1)
- Inertial measurement units (1)
- Information flow control (1)
- Information system (1)
- Inhibitory Control (1)
- Intelligence (1)
- Intent analysis (1)
- Interacting processes (1)
- International language (1)
- Internet of things (1)
- Interoperability (1)
- Isländisch (1)
- Isotype (1)
- Judaica (1)
- Katalog (1)
- Komplementierer (1)
- LDPE nanocomposites (1)
- Landsat (1)
- Learning (1)
- Lecture Recording (1)
- M2M (1)
- MOOC (1)
- MOOC Remote Lab (1)
- Machine (1)
- Machine Learning (1)
- Marine mammals (1)
- Maternal relationships (1)
- Meta-model (1)
- Metamaterials (1)
- Microservices Security (1)
- Migration (1)
- Minimum spanning tree (1)
- Mitochondrial DNA (1)
- Model checking (1)
- Moving Target Defense (1)
- N2/P3 (1)
- NAB (1)
- NETCONF (1)
- Natural Language Processing (1)
- Near infrared (1)
- Neural Networks (1)
- Niederländisch (1)
- Norwegian (1)
- Norwegisch (1)
- OH suppression (1)
- Ontology (1)
- Operator (1)
- Otto neurath (1)
- P(VDF-TrFE-CFE) terpolymer (1)
- PTH (1)
- Pacific studies (1)
- Passive Microwave (1)
- Peer assessment (1)
- Philosophy of language (1)
- Physical Implementation (1)
- Polygenic Risk Score (1)
- Population genetics (1)
- Predictive markers (1)
- Process modeling (1)
- Process-related data (1)
- Psychological Emotions (1)
- RNAseq (1)
- Raman lidar (1)
- Reward Anticipation (1)
- Risk factors (1)
- SMI (1)
- SPB (1)
- Satztyp (1)
- Schwedisch (1)
- Security analytics (1)
- Semantic Interoperability (1)
- Semantic Web (1)
- Semiotics (1)
- Service-Oriented (1)
- Servicification (1)
- Simulation process building (1)
- Smart Home Education (1)
- Snow (1)
- Social Media Analysis (1)
- Space and metaphor (1)
- Space and spatiality in the internet terminology (1)
- Spatial data handling systems (1)
- Static analysis (1)
- Structural health monitoring (1)
- Swedish (1)
- Syntax (1)
- TCGA (1)
- TIN (1)
- Team based assignment (1)
- Teamwork (1)
- Threat Models (1)
- Time series data (1)
- Time-resolved crystallography (1)
- Topic modeling (1)
- Transpacific studies (1)
- Tree maintenance (1)
- Triarchic Model of Psychopathy (1)
- UV (1)
- Unified logging system (1)
- Use cases Morphologic box (1)
- Video annotations (1)
- Vienna circle (1)
- Virtual reality (1)
- Vulnerability analysis (1)
- X-ray refraction (1)
- YANG (1)
- abstract concepts (1)
- accelerator architectures (1)
- accessibility (1)
- action observation (1)
- activities (1)
- additive manufacturing (1)
- apple (1)
- application (1)
- archipelagic studies (1)
- astronomy (1)
- astrophotonics (1)
- athletic performance (1)
- authentication (1)
- automated driving (1)
- avoid magnetometers (1)
- back motion assessment (1)
- balloon telescopes (1)
- basal body (1)
- behavior psychotherapy (1)
- behavioral (1)
- behavioral reasoning (1)
- blind (1)
- bridges (1)
- centriole (1)
- centrosome (1)
- chemical modification (1)
- chimera state (1)
- cilium (1)
- clause type (1)
- cloud monitoring (1)
- cognitive development (1)
- community effect on height (1)
- competitive growth (1)
- complementary actions (1)
- complementiser (1)
- computer-mediated therapy (1)
- conceptualization (1)
- continuous (1)
- cooperation and competition (1)
- creep (1)
- cross-over effect (1)
- crystallinity (1)
- crystallography (1)
- damage evolution (1)
- data integration (1)
- data transfer (1)
- democracy (1)
- democratic quality (1)
- detectors (1)
- development artifacts (1)
- discourse (1)
- distributional learning (1)
- domination (1)
- doping (1)
- drainage networks (1)
- drift correction (1)
- economy (1)
- electrets (1)
- electroacoustic probing (1)
- emotion measurement (1)
- enzymology (1)
- exercise (1)
- fabrication (1)
- far infrared (1)
- ferroelectrets (1)
- ferroelectric and paraelectric phases (1)
- fibre Bragg gratings (1)
- field (1)
- fitness (1)
- flow accumulation (1)
- force-feedback (1)
- gait (1)
- gaming (1)
- gene selection (1)
- hardware (1)
- human motion analysis (1)
- human-computer interaction (1)
- humanoid (1)
- imitation (1)
- indigenous studies (1)
- injury prevention (1)
- intentionality (1)
- international academic mobility (1)
- internationalization (1)
- inversion (1)
- joint action (1)
- joint angle estimation (1)
- joint lab (1)
- kinematics (1)
- labeling (1)
- large scale mechanism (1)
- left periphery (1)
- lexical tone (1)
- lidar (1)
- linke Peripherie (1)
- locomotion (1)
- low back pain (1)
- low-density polyethylene (1)
- low-duty-cycling (1)
- management zone (1)
- measurement (1)
- medical documentation (1)
- mental number line (1)
- metal matrix composite (1)
- method development (1)
- methodology (1)
- microscopy (1)
- microstructures (1)
- microtubules (1)
- mismatch negativity (1)
- multimodal wireless sensor network (1)
- negative numbers (1)
- non-photorealistic rendering (1)
- nonlinear dynamics (1)
- nonlocal coupling (1)
- note-taking (1)
- nucleus-associated body (1)
- observatory (1)
- oceanic discourse (1)
- oneM2M (1)
- oneM2M Ontology (1)
- operator (1)
- oxidative stress (1)
- partial synchronization (1)
- particle microphysics (1)
- phase lag (1)
- phase oscillator (1)
- photometer (1)
- physical fitness (1)
- plyometric training (1)
- point clouds (1)
- point-based rendering (1)
- polypropylene (1)
- porosity (1)
- pre-attentive discrimination (1)
- precision agriculture (1)
- programmable matter (1)
- quenching (1)
- radiography (1)
- real-time rendering (1)
- real-walking (1)
- recreational sport (1)
- recrystallization (1)
- regularization (1)
- relaxor-ferroelectric polymer (1)
- reliability (1)
- restoration (1)
- safety (1)
- security (1)
- security analytics (1)
- smartphone (1)
- social cognition (1)
- social robots (1)
- software engineering (1)
- soil moisture (1)
- space-charge and polarization profiles (1)
- spectroscopy (1)
- speech acoustics (1)
- speech perception (1)
- speech production (1)
- speech variability (1)
- spindle pole body (1)
- state (1)
- steel and concrete structures (1)
- stochastic filtering (1)
- strain gauges (1)
- strain sensors (1)
- strategic growth adjustments (1)
- strength training (1)
- study abroad (1)
- study-related student travel (1)
- stunting (1)
- style transfer (1)
- surface charge stability (1)
- synchronization (1)
- synchrotron X-ray refraction radiography (1)
- syntax (1)
- teacher education (1)
- teaching students (1)
- time series (1)
- tissue-awareness (1)
- tomography (1)
- transoceanic studies (1)
- triangle method (1)
- turing test (1)
- uncertainty quantification (1)
- undernutrition (1)
- user experience (1)
- validation against optical motion capture (1)
- variable geometry truss (1)
- verbal reports (1)
- verification (1)
- visualization (1)
- visually impaired (1)
- voice onset time (1)
- vowels (1)
- wake-up radio (1)
- web-based rendering (1)
- young adults (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering GmbH (52)
- Institut für Physik und Astronomie (18)
- Institut für Geowissenschaften (17)
- Institut für Biochemie und Biologie (16)
- Department Psychologie (13)
- Department Sport- und Gesundheitswissenschaften (13)
- Institut für Informatik und Computational Science (13)
- Institut für Ernährungswissenschaft (12)
- Institut für Chemie (8)
- Department Linguistik (6)
Over the past few years, studying abroad and other international educational experiences have become increasingly highly regarded. Nevertheless, research shows that only a minority of students actually take part in academic mobility programs. But what distinguishes those students who take up these international opportunities from those who do not? In this study, we reviewed recent quantitative studies on why (primarily German) students choose to travel abroad or not. This revealed a pattern of predictive factors, which indicate the key role played by students’ personal and social background, as well as by previous international travel and the course of studies they are enrolled in. The study then focuses on teaching students. Both facilitating and debilitating factors are discussed and included in a model illustrating the decision-making process these students go through. Finally, we discuss the practical implications for ways in which international, study-related travel might be increased in the future. We suggest that higher education institutions analyze individual student characteristics and offer differentiated programs to better meet the needs of different groups, thus raising the likelihood that disadvantaged students participate in international academic travel.
In the course of patient treatments, psychotherapists aim to meet the challenges of being both a trusted, knowledgeable conversation partner and a diligent documentalist. We are developing the digital whiteboard system Tele-Board MED (TBM), which allows the therapist to take digital notes during the session together with the patient. This study investigates what therapists experience when they document with TBM in patient sessions for the first time, and whether this documentation saves them time when writing official clinical documents. As the core of this study, we conducted four anamnesis session dialogues with behavior psychotherapists and volunteers acting in the role of patients. Following a mixed-method approach, the data collection and analysis involved self-reported emotion samples, user experience curves and questionnaires. We found that even in the very first patient session with TBM, therapists come to feel comfortable, develop a positive feeling and can concentrate on the patient. Regarding administrative documentation tasks, we found that with the TBM report generation feature, therapists save 60% of the time they normally spend on writing case reports for the health insurance.
The globally distributed sperm whale (Physeter macrocephalus) has a partly matrilineal social structure with predominant male dispersal. At the beginning of 2016, a total of 30 male sperm whales stranded in five different countries bordering the southern North Sea. It has been postulated that these individuals were on a migration route from the north to warmer temperate and tropical waters where females live in social groups. By including samples from four countries (n = 27), this event provided a unique chance to genetically investigate the maternal relatedness and the putative origin of these temporally and spatially co-occurring male sperm whales. To utilize existing genetic resources, we sequenced 422 bp of the mitochondrial control region, a molecular marker for which sperm whale data are readily available from the entire distribution range. Based on four single nucleotide polymorphisms (SNPs) within the mitochondrial control region, five matrilines could be distinguished within the stranded specimens, four of which matched published haplotypes previously described in the Atlantic. Among these male sperm whales, multiple matrilineal lineages co-occur. We analyzed the population differentiation and could show that the genetic diversity of these male sperm whales is comparable to the genetic diversity in sperm whales from the entire Atlantic Ocean. We confirm that within this stranding event, males do not comprise maternally related individuals and apparently include assemblages of individuals from different geographic regions. © 2017 Deutsche Gesellschaft für Säugetierkunde. Published by Elsevier GmbH. All rights reserved.
Utilizing quad-trees for efficient design space exploration with partial assignment evaluation
(2018)
Recently, it has been shown that constraint-based symbolic solving techniques offer an efficient way for deciding binding and routing options in order to obtain a feasible system-level implementation. In combination with various background theories, a feasibility analysis of the resulting system may already be performed on partial solutions. That is, infeasible subsets of mapping and routing options can be pruned early in the decision process, which speeds up the solving accordingly. However, allowing a proper design space exploration including multi-objective optimization also requires an efficient structure for storing and managing non-dominated solutions. In this work, we propose and study the usage of the Quad-Tree data structure in the context of partial assignment evaluation during system synthesis. Our experiments show that unnecessary dominance checks can be avoided, which indicates a preference for Quad-Trees over a commonly used list-based implementation for large combinatorial optimization problems.
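The contrast with the list-based baseline can be made concrete: in a plain list archive, every candidate solution is compared against all stored objective vectors. A minimal sketch of such an archive (our illustration, assuming minimization of all objectives; not the paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def insert(archive, candidate):
    """Insert a candidate into a list-based archive of non-dominated solutions.

    Every insertion compares the candidate against all stored vectors --
    exactly the linear scan of dominance checks that a Quad-Tree
    organization helps to avoid."""
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated, archive unchanged
    # keep only solutions the candidate does not dominate
    archive = [s for s in archive if not dominates(candidate, s)]
    archive.append(candidate)
    return archive

archive = []
for point in [(4, 2), (3, 3), (2, 5), (3, 2), (5, 5)]:
    archive = insert(archive, point)
print(sorted(archive))  # [(2, 5), (3, 2)]
```

With n stored solutions and k objectives, each insertion costs O(nk) comparisons here, which is what makes a tree-organized archive attractive for large problem instances.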
Imaginar la nación
(2018)
The classification of vulnerabilities is a fundamental step to derive formal attributes that allow a deeper analysis. This classification therefore has to be performed in a timely and accurate manner. Since the current situation demands manual interaction in the classification process, timely processing becomes a serious issue. Thus, we propose an automated alternative to manual classification, because the amount of identified vulnerabilities per day can no longer be processed manually. We implemented two different approaches that are able to automatically classify vulnerabilities based on the vulnerability description. We evaluated our approaches, which use Neural Networks and the Naive Bayes method respectively, on the basis of publicly known vulnerabilities.
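One of the two approaches, Naive Bayes over description text, can be sketched as follows. This is a generic multinomial Naive Bayes with add-one smoothing; the toy descriptions and class labels are invented for illustration, not taken from the paper's corpus:

```python
import math
from collections import Counter, defaultdict

# Toy training data -- illustrative descriptions, not the paper's dataset.
train = [
    ("buffer overflow in parser allows remote code execution", "overflow"),
    ("stack overflow when handling long input strings", "overflow"),
    ("reflected cross site scripting in search form", "xss"),
    ("stored cross site scripting via comment field", "xss"),
]

def fit(samples):
    """Count word and class frequencies for multinomial Naive Bayes."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        words = text.split()
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Pick the class maximizing log P(class) + sum log P(word|class),
    with add-one (Laplace) smoothing for unseen words."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict(model, "heap overflow triggered by crafted input"))  # overflow
```

In practice the description text would also be tokenized, lower-cased and stop-word filtered before counting; the sketch omits such preprocessing for brevity.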
Preface
(2018)
This book aims at understanding the diversity of planetary and lunar magnetic fields and their interaction with the solar wind. A synergistic interdisciplinary approach combines newly developed tools for data acquisition and analysis, computer simulations of planetary interiors and dynamos, models of solar wind interaction, measurement of terrestrial rocks and meteorites, and laboratory investigations. The following chapters represent a selection of the scientific findings derived by the 22 projects within the DFG Priority Program "Planetary Magnetism" (PlanetMag). This introductory chapter gives an overview of the following individual chapters, highlighting their role in the overall goals of the PlanetMag framework. The diversity of the different contributions reflects the wide range of magnetic phenomena in our solar system. From the program we have excluded magnetism of the Sun, which is an independent broad research discipline, but include the interaction of the solar wind with planets and moons. Within the subsequent 13 chapters of this book, the authors review the field centered on their research topic within PlanetMag. Here we briefly introduce the content of all the subsequent chapters and outline the context in which they should be seen.
Foreword
(2018)
In Europe, different countries developed a rich variety of sub-municipal institutions. Out of the plethora of intra- and sub-municipal decentralization forms (reaching from local outposts of city administration to “quasi-federal” structures), this book focuses on territorial sub-municipal units (SMUs) which combine multipurpose territorial responsibility with democratic legitimacy and can be seen as institutions promoting the articulation and realization of collective choices at a sub-municipal level.
Country chapters follow a common pattern that facilitates systematic comparisons, while at the same time leaving enough space for national peculiarities and priorities chosen and highlighted by the authors, who also take advantage of existing empirical surveys and case studies where available.
Business process simulation is an important means for quantitative analysis of a business process and for comparing different process alternatives. With the Business Process Model and Notation (BPMN) being the state-of-the-art language for the graphical representation of business processes, many existing process simulators already support the simulation of BPMN diagrams. However, they do not provide well-defined interfaces to integrate new concepts into the simulation environment. In this work, we present the design and architecture of a proof-of-concept implementation of an open and extensible BPMN process simulator. It also supports the simulation of multiple BPMN processes at a time and relies on the building blocks of the well-founded discrete event simulation. The extensibility is assured by a plug-in concept. Its feasibility is demonstrated by extensions supporting new BPMN concepts, such as the simulation of business rule activities referencing decision models and batch activities.
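The discrete-event building blocks such a simulator rests on can be sketched as a time-ordered event queue; the handler registry below is our illustration of the plug-in idea (names and API are invented, not the tool's actual interface):

```python
import heapq

class Simulator:
    """Minimal discrete-event core: a time-ordered event queue plus
    handler plug-ins keyed by event type (illustrative sketch only)."""
    def __init__(self):
        self.queue = []    # entries: (time, seq, event_type, payload)
        self.handlers = {}
        self.clock = 0.0
        self._seq = 0      # tie-breaker for events at the same timestamp

    def register(self, event_type, handler):
        """Plug-in point: new BPMN concepts would register their own handlers."""
        self.handlers[event_type] = handler

    def schedule(self, delay, event_type, payload=None):
        heapq.heappush(self.queue,
                       (self.clock + delay, self._seq, event_type, payload))
        self._seq += 1

    def run(self):
        log = []
        while self.queue:
            self.clock, _, event_type, payload = heapq.heappop(self.queue)
            log.append((self.clock, event_type))
            self.handlers[event_type](self, payload)
        return log

sim = Simulator()
sim.register("start", lambda s, p: s.schedule(5.0, "task", "A"))
sim.register("task", lambda s, p: s.schedule(2.0, "end", p))
sim.register("end", lambda s, p: None)
sim.schedule(0.0, "start")
trace = sim.run()
print(trace)  # [(0.0, 'start'), (5.0, 'task'), (7.0, 'end')]
```

Because handlers themselves schedule follow-up events, several processes can be simulated at a time simply by interleaving their events in the one queue.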
In university teaching today, it is common practice to record regular lectures and special events such as conferences and speeches. With these recordings, a large fundus of video teaching material can be created quickly and easily. Typically, lectures have a length of about one and a half hours and usually take place once or twice a week based on the credit hours. Depending on the number of lectures and other events recorded, the number of recordings available is increasing rapidly, which means that an appropriate form of provisioning is essential for the students. This is usually done in the form of lecture video platforms. In this work, we have investigated how lecture video platforms and the contained knowledge can be improved and accessed more easily by an increasing number of students. We came up with a multistep process we have applied to our own lecture video web portal that can be applied to other solutions as well.
Embedded smart home — remote lab MOOC with optional real hardware experience for over 4000 students
(2018)
MOOCs (Massive Open Online Courses) are becoming more and more popular with learners of all ages, whether for continuing their studies or for learning new subjects of interest. The purpose of this paper is to introduce a different MOOC course style. Typically, video content is shown to teach the student new information. After watching a video, self-test questions can be answered. Finally, the student completes weekly and final exams consisting of questions similar to the self-test questions. A certificate can be issued based on the points scored in the weekly and final exams. Our approach extends the possibility to receive points for the final score with practical programming exercises on real hardware. It allows the student to do embedded programming by communicating over GPIO pins to control LEDs and measure sensor values. Additionally, students can visualize values on an embedded display using web technologies, which are an essential part of embedded and smart home devices communicating with common APIs. Students have the opportunity to solve all tasks within the online remote lab and at home on the same kind of hardware. The evaluation of this MOOC indicates that the design helps students learn an engineering technique with new technology approaches in an appropriate, modern, supportive and motivating way of teaching.
When students watch learning videos online, they usually need to watch several hours of video content. In the end, not every minute of a video is relevant for the exam. Additionally, students need to add notes to clarify issues of a lecture. There are several possibilities to enhance the metadata of a video; a typical way to add user-specific information to an online video, for example, is a comment function, which allows users to share their thoughts and questions with the public. In contrast to common video material found online, lecture videos are used for exam preparation. This difference gave rise to the idea of annotating lecture videos with markers and personal notes for a better understanding of the taught content. In particular, students learning for an exam use their notes to refresh their memories. To ease this learning method with lecture videos, we introduce an annotation feature in our video lecture archive. This functionality supports students in keeping track of their thoughts by providing an intuitive interface to easily add, modify or remove their ideas. The annotation function is integrated in the video player; hence, scrolling to a separate annotation area on the website is not necessary. Furthermore, the annotated notes can be exported together with the slide content to a PDF file, which can then be printed easily. Lecture video annotations support and motivate students to learn and watch videos from an E-Learning video archive.
An efficient Design Space Exploration (DSE) is imperative for the design of modern, highly complex embedded systems in order to steer the development towards optimal design points. The early evaluation of design decisions at system-level abstraction layer helps to find promising regions for subsequent development steps in lower abstraction levels by diminishing the complexity of the search problem. In recent works, symbolic techniques, especially Answer Set Programming (ASP) modulo Theories (ASPmT), have been shown to find feasible solutions of highly complex system-level synthesis problems with non-linear constraints very efficiently. In this paper, we present a novel approach to a holistic system-level DSE based on ASPmT. To this end, we include additional background theories that concurrently guarantee compliance with hard constraints and perform the simultaneous optimization of several design objectives. We implement and compare our approach with a state-of-the-art preference handling framework for ASP. Experimental results indicate that our proposed method produces better solutions with respect to both diversity and convergence to the true Pareto front.
Operational decisions in business processes can be modeled by using the Decision Model and Notation (DMN). The complementary use of DMN for decision modeling and of the Business Process Model and Notation (BPMN) for process design realizes the separation of concerns principle. For supporting separation of concerns during the design phase, it is crucial to understand which aspects of decision-making enclosed in a process model should be captured by a dedicated decision model. Whereas existing work focuses on the extraction of decision models from process control flow, the connection of process-related data and decision models is still unexplored. In this paper, we investigate how process-related data used for making decisions can be represented in process models and we distinguish a set of BPMN patterns capturing such information. Then, we provide a formal mapping of the identified BPMN patterns to corresponding DMN models and apply our approach to a real-world healthcare process.
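The separation of concerns between process and decision logic can be illustrated with a toy DMN-style decision table operating on process-related data. Rule names, inputs and the healthcare flavor below are invented for illustration and are not the paper's formal BPMN-to-DMN mapping:

```python
# Hypothetical DMN-style decision table with a "unique" hit policy,
# evaluating a process data object (rules and names are illustrative).
rules = [
    # (condition over input data, output decision)
    (lambda d: d["risk_score"] < 3,       "standard care"),
    (lambda d: 3 <= d["risk_score"] <= 7, "extended screening"),
    (lambda d: d["risk_score"] > 7,       "specialist referral"),
]

def decide(data):
    """Evaluate the table: under a unique hit policy exactly one rule fires."""
    hits = [out for cond, out in rules if cond(data)]
    assert len(hits) == 1, "unique hit policy violated"
    return hits[0]

print(decide({"risk_score": 5}))  # extended screening
```

The point of the extraction is that a business rule activity in the BPMN model only references such a table, while the conditions themselves live in the DMN model.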
Modern server systems with large NUMA architectures necessitate (i) data being distributed over the available computing nodes and (ii) NUMA-aware query processing to enable effective parallel processing in database systems. As these architectures incur significant latency and throughput penalties for accessing non-local data, queries should be executed as close as possible to the data. To further increase both performance and efficiency, data that is not relevant for the query result should be skipped as early as possible. One way to achieve this goal is horizontal partitioning to improve static partition pruning. As part of our ongoing work on workload-driven partitioning, we have implemented a recent approach called aggressive data skipping and extended it to handle both analytical and transactional access patterns. In this paper, we evaluate this approach with the workload and data of a production enterprise system of a Global 2000 company. The results show that over 80% of all tuples can be skipped on average, while the resulting partitioning schemata are surprisingly stable over time.
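The idea behind static partition pruning can be sketched with hypothetical min/max partition metadata (partition names, ranges and the predicate are ours, not taken from the evaluated system):

```python
# Hypothetical partition metadata: each horizontal partition keeps the
# min/max of its partitioning column (names and data are illustrative).
partitions = [
    {"name": "p0", "min": 0,    "max": 999,  "rows": 1000},
    {"name": "p1", "min": 1000, "max": 1999, "rows": 1000},
    {"name": "p2", "min": 2000, "max": 2999, "rows": 1000},
    {"name": "p3", "min": 3000, "max": 3999, "rows": 1000},
]

def prune(partitions, lo, hi):
    """Static partition pruning: keep only partitions whose [min, max]
    range can intersect the range predicate 'lo <= col <= hi'."""
    return [p for p in partitions if p["max"] >= lo and p["min"] <= hi]

kept = prune(partitions, 1500, 2100)
skipped = sum(p["rows"] for p in partitions) - sum(p["rows"] for p in kept)
print([p["name"] for p in kept], skipped)  # ['p1', 'p2'] 2000
```

Workload-driven partitioning then tries to choose the partition boundaries so that the predicates actually occurring in the workload prune as many tuples as possible.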
Preface
(2018)
Previous work has shown that surface modification with orthophosphoric acid can significantly enhance the charge stability on polypropylene (PP) surface by generating deeper traps. In the present study, thermally stimulated potential-decay measurements revealed that the chemical treatment may also significantly increase the number of available trapping sites on the surface. Thus, as a consequence, the so-called "cross-over" phenomenon, which is observed on as-received and thermally treated PP electrets, may be overcome in a certain range of initial charge densities. Furthermore, the discharge behavior of chemically modified samples indicates that charges can be injected from the treated surface into the bulk, and/or charges of opposite polarity can be pulled from the rear electrode into the bulk at elevated temperatures and at the high electric fields that are caused by the deposited charges. In the bulk, a lack of deep traps causes rapid charge decay already in the temperature range around 95 degrees C.
The influence of chemical composition and crystallisation conditions on the ferroelectric and paraelectric phases and the resulting morphology in Poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) terpolymer films with 55.4/37.2/7.3 mol% or with 62.2/29.4/8.4 mol% of VDF/TrFE/CFE was studied. Poly(vinylidene fluoride trifluoroethylene) (P(VDF-TrFE)) with 75/25 mol% VDF/TrFE was employed as reference material. Fourier-Transform Infrared Spectroscopy (FTIR) was used to determine the fractions of the relevant terpolymer phases, and X-Ray Diffraction (XRD) was employed to assess the crystalline morphology. The FTIR results show an increase of the fraction of paraelectric phases after annealing. On the other hand, XRD results indicate a more stable paraelectric phase in the terpolymer with higher CFE content.
The electret state stability in nonpolar semicrystalline polymers is largely determined by the traps located at crystalline/amorphous phase interfaces. Thus, the thermal history of such polymers should considerably influence their electret properties. In the present work, we investigate how recrystallization influences charge stability in low-density polyethylene corona electrets. It has been found that electret charge stability in quenched samples is higher than in slowly-crystallized ones. Phenomenologically, this can be explained by the increased number of deeper traps in samples with smaller crystallite size.
Published results on LDPE/MgO nanocomposites (3 wt%) show that they promise to be good electrical-insulation materials. In this work, the nanocomposites are examined as a potential (ferro-)electret material as well. Isothermal surface-potential decay measurements show that charged LDPE/MgO films still exhibit significant surface potentials after heating for 4 hours at 80 degrees C, which suggests good capabilities of LDPE/MgO nanocomposites to hold electric charges of both polarities. Open-tubular-channel ferroelectrets prepared from LDPE/MgO nanocomposite films show significant piezoelectricity with d(33) coefficients of about 20 pC/N after charging and are stable up to temperatures of at least 80 degrees C. Thus LDPE/MgO nanocomposites may become available as a new ferroelectret material. To increase their d(33) coefficients, it is desirable to optimize the charging conditions and the ferroelectret structure.
No other means of communication determines our everyday life through its seemingly unrestricted possibilities more than the internet. From the mid-90s onwards, more and more technical advancements in the field of communication appeared on the market, which in turn called for new terminology. In the first place, it is the internet (essentially based on the interaction between users and experts) which requires effective nomenclature in order to mediate between lay users and their restricted knowledge on the one hand, and experts and their sophisticated terminology on the other. At the interface between the new and complex realities and the need for simple linguistic access, a huge quantity of metaphoric denominations is used, making abstract innovations more comprehensible. Metaphor in the internet discourse serves to "reduce verticality" (Stenschke 2006) between specialized terminology and common language. The paper deals with metaphors based on spatial concepts. Space and spatiality play a key role in cognitive theories of metaphor, as these theories themselves (according to Lakoff/Johnson 1980) are often based on the application of spatial concepts to non-spatial relations. After describing spatial concepts in general (referring to the internet), the paper explores which kinds of metaphor take advantage of the complexity present in the internet and how the medial space is linguistically recaptured in terms of spatial perception.
This introductory essay to the HSR Special Issue “Economists, Politics, and Society” argues for a strong field-theoretical programme inspired by Pierre Bourdieu to research economic life as an integral part of different social forms. Its main aim is threefold. First, we spell out the very distinct Durkheimian legacy in Bourdieu’s thinking and the way he applies it in researching economic phenomena. Without this background, much of what is actually part of how Bourdieu analysed economic aspects of social life would be overlooked or reduced to mere economic sociology. Second, we sketch the main theoretical concepts and heuristics used to analyse economic life from a field perspective. Third, we focus on practical methodological issues of field-analytical research into economic phenomena. We conclude with a short summary of the basic characteristics of this approach and discuss the main insights provided by the contributions to this special issue.
This paper presents the concept of a community-accessible stratospheric balloon-based observatory that is currently under preparation by a consortium of European research institutes and industry. We present the technical motivation, science case, instrumentation, and a two-stage image stabilization approach of the 0.5-m UV/visible platform. In addition, we briefly describe the novel mid-sized stabilized balloon gondola under design to carry telescopes in the 0.5 to 0.6 m range as well as the currently considered flight option for this platform. Finally, we outline the scientific and technical motivation for a large balloon-based FIR telescope and the ESBO DS approach towards such an infrastructure.
The nature restoration project 'Lenzener Elbtalaue', realised from 2002 to 2011 at the river Elbe, included the first large-scale dike relocation in Germany (420 ha). Its aim was to initiate the development of endangered natural wetland habitats and processes, accompanied by greater biodiversity in the formerly grassland-dominated area. Monitoring the spatial and temporal variations of soil moisture in this dike relocation area is therefore particularly important for estimating the restoration success. The topsoil moisture monitoring from 1990 to 2017 is based on the Soil Moisture Index (SMI) [1] derived with the triangle method [2] by use of optical remotely sensed data: land surface temperature and the Normalized Difference Vegetation Index are calculated from Landsat 4/5/7/8 data and atmospherically corrected by use of MODIS data. Spatial and temporal soil moisture variations in the restored area of the dike relocation are compared to the agricultural and pasture area behind the new dike. Ground truth data in the dike relocation area were obtained from field measurements in October 2017 with an FDR device. Additionally, data from a TERENO soil moisture sensor network (SoilNet) and mobile cosmic-ray neutron sensing (CRNS) rover measurements are compared to the results of the triangle method for a region in the Harz Mountains (Germany). The SMI time series illustrates that the dike relocation area has become significantly wetter between 1990 and 2017 due to the restructuring measures, whereas the SMI of the dike hinterland reflects constant and drier conditions. An influence of climate is unlikely. However, validation of the dimensionless index with ground truth measurements is very difficult, mostly due to large differences in scale.
Point clouds provide high-resolution topographic data which is often classified into bare-earth, vegetation, and building points and then filtered and aggregated to gridded Digital Elevation Models (DEMs) or Digital Terrain Models (DTMs). Based on these equally-spaced grids, flow-accumulation algorithms are applied to describe the hydrologic and geomorphologic mass transport on the surface. In this contribution, we propose a stochastic point-cloud filtering that, together with spatial bootstrap sampling, allows for flow accumulation directly on point clouds using Facet-Flow Networks (FFN). Additionally, this provides a framework for the quantification of uncertainties in point-cloud-derived metrics such as Specific Catchment Area (SCA), even though the flow accumulation itself is deterministic.
Why choice matters
(2018)
Measures of democracy are in high demand. Scientific and public audiences use them to describe political realities and to substantiate causal claims about those realities. This introduction to the thematic issue reviews the history of democracy measurement since the 1950s. It identifies four development phases of the field, which are characterized by three recurrent topics of debate: (1) what is democracy, (2) what is a good measure of democracy, and (3) do our measurements of democracy register real-world developments? As the answers to those questions have been changing over time, the field of democracy measurement has adapted and reached higher levels of theoretical and methodological sophistication. In effect, the challenges facing contemporary social scientists are not limited to constructing a sound index of democracy. Today, they also need a profound understanding of the differences between various measures of democracy and their implications for empirical applications. The introduction outlines how the contributions to this thematic issue help scholars cope with the recurrent issues of conceptualization, measurement, and application, and concludes by identifying avenues for future research.
Konrad Repgen (1923-2017)
(2018)
The problem of atmospheric emission from OH molecules is a long-standing problem for near-infrared astronomy. PRAXIS is a unique spectrograph which is fed by fibres that remove the OH background and is optimised specifically to benefit from OH suppression. The OH suppression is achieved with fibre Bragg gratings, which were tested successfully on the GNOSIS instrument. PRAXIS uses the same fibre Bragg gratings as GNOSIS in its first implementation, and will exploit new, cheaper and more efficient multicore fibre Bragg gratings in the second implementation. The OH lines are suppressed by a factor of ~1000, and the expected increase in the signal-to-noise ratio in the interline regions compared to GNOSIS is a factor of ~9 with the GNOSIS gratings and a factor of ~17 with the new gratings. PRAXIS will enable the full exploitation of OH suppression for the first time, which was not achieved by GNOSIS (a retrofit to an existing instrument that was not optimised for OH suppression) due to high thermal emission, low spectrograph transmission and detector noise. PRAXIS has extremely low thermal emission, through the cooling of all significantly emitting parts, including the fore-optics, the fibre Bragg gratings, a long length of fibre, and the fibre slit, and through an optical design that minimises leaks of thermal emission from outside the spectrograph. PRAXIS has low detector noise through the use of a Hawaii-2RG detector, and a high throughput through an efficient VPH-based spectrograph. PRAXIS will determine the absolute level of the interline continuum and enable observations of individual objects via an IFU. In this paper we give a status update and report on acceptance tests.
We present a project combining lidar, photometer and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in a good consistency of the retrieved PSDs. Finally, those retrieved PSDs may be compared with the measured PSD from a particle counter. The data here were taken in Ny-Ålesund, Svalbard, as an example.
Logical modeling has been widely used to understand and expand the knowledge about protein interactions among different pathways. Realizing this, the caspo-ts system has been proposed recently to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then we demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset to model TCR signaling and (2) the HPN-DREAM challenge dataset to model breast cancer cell lines.
High Mountain Asia provides water for more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, the vast majority of which is not monitored by sparse weather networks. We leverage passive microwave data from the SSMI series of satellites (SSMI, SSMI/S, 1987-2016), reprocessed to 3.125 km resolution, to examine trends in the volume and spatial distribution of snow-water equivalent (SWE) in the Indus Basin. We find that the majority of the Indus has seen an increase in snow-water storage. There exists a strong elevation-trend relationship, where high-elevation zones have more positive SWE trends. Negative trends are confined to the Himalayan foreland and deeply-incised valleys which run into the Upper Indus. This implies a temperature-dependent cutoff below which precipitation increases are not translated into increased SWE. Earlier snowmelt or a higher percentage of liquid precipitation could both explain this cutoff [1]. Earlier work [2] found a negative snow-water storage trend for the entire Indus catchment over the period 1987-2009 (-4 × 10⁻³ mm/yr). In this study, based on an additional seven years of data, the average trend reverses to 1.4 × 10⁻³ mm/yr. This implies that the decade since the mid-2000s was likely wetter, which positively impacted long-term SWE trends. This conclusion is supported by an analysis of snowmelt onset and end dates, which found that while long-term trends are negative, more recent (since 2005) trends are positive (moving later in the year) [3].
Metamaterial Devices
(2018)
In our hands-on demonstration, we show several objects whose functionality is defined by the objects' internal micro-structure. Such metamaterial machines can (1) be mechanisms based on their microstructures, (2) employ simple mechanical computation, or (3) change their outside to interact with their environment. They are 3D printed from one piece, and we support their creation by providing interactive software tools.
Cloud storage brokerage is an abstraction aimed at providing value-added services. However, Cloud Service Brokers are challenged by several security issues, including enlarged attack surfaces due to the integration of disparate components and API interoperability issues. Therefore, appropriate security risk assessment methods are required to identify and evaluate these security issues, and to examine the efficiency of countermeasures. A possible approach for satisfying these requirements is the employment of threat modeling concepts, which have been successfully applied in traditional paradigms. In this work, we employ threat models including attack trees, attack graphs and Data Flow Diagrams against a Cloud Service Broker (CloudRAID) and analyze these security threats and risks. Furthermore, we propose an innovative technique for combining Common Vulnerability Scoring System (CVSS) and Common Configuration Scoring System (CCSS) base scores in probabilistic attack graphs to cater for configuration-based vulnerabilities, which are typically leveraged for attacking cloud storage systems. This approach is necessary since existing schemes do not provide sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two common attacks against cloud storage: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then used in attack-graph-metric-based risk assessment. Our experimental evaluation shows that our approach caters for the aforementioned gaps and provides efficient security hardening options. Therefore, our proposals can be employed to improve cloud security.
High-storage-density magnetic devices rely on precise, reliable and ultrafast switching of magnetic states. Optical control of magnetization using a femtosecond laser, without applying any external magnetic field, offers the advantage of switching magnetic states at ultrashort time scales and has therefore attracted significant attention. Recently, the so-called all-optical helicity-dependent switching (AO-HDS) has been reported and demonstrated, in which a circularly polarized femtosecond laser pulse switches the magnetization of a ferromagnetic thin film as a function of laser helicity [1]. More recent studies have reported that AO-HDS is a general phenomenon existing in magnetic materials ranging from rare earth-transition metal ferrimagnets (e.g. alloys, multilayers and hetero-structure systems) to even ferromagnetic thin films. Among the numerous studies in the literature discussing the microscopic origin of AO-HDS in ferromagnets or ferrimagnetic alloys, the most renowned concepts are momentum transfer via the Inverse Faraday Effect (IFE) [1-3] and the concept of preferential thermal demagnetization for one magnetization direction by heating close to Tc (the Curie temperature) in the presence of magnetic circular dichroism (MCD) [4-6]. In this study, we investigate all-optical magnetic switching using a stationary femtosecond laser spot (3-5 μm) in TbFe alloys via photoemission electron microscopy (PEEM) and x-ray magnetic circular dichroism (XMCD) with a spatial resolution of approximately 30 nm. We spatially characterize the effect of laser heating and the local temperature profile created across the laser spot on AO-HDS in TbFe thin films. We find that AO-HDS occurs only in a 'ring'-shaped region surrounding the thermally demagnetized region formed by the laser spot, and that the formation of switched domains further relies on thermally induced domain wall motion.
Our temperature-dependent measurements highlight the importance of attainin...
Development of a tool to identify intensive care patients at risk of meropenem therapy failure
(2018)
The problem of constructing and maintaining a tree topology in a distributed manner is a challenging task in WSNs. This is because the nodes have limited computational and memory resources and the network changes over time. We propose the Dynamic Gallager-Humblet-Spira (D-GHS) algorithm that builds and maintains a minimum spanning tree. To do so, we divide D-GHS into four phases, namely neighbor discovery, tree construction, data collection, and tree maintenance. In the neighbor discovery phase, the nodes collect information about their neighbors and the link quality. In the tree construction phase, D-GHS finds the minimum spanning tree by executing the Gallager-Humblet-Spira algorithm. In the data collection phase, the sink roots the minimum spanning tree at itself, and each node sends data packets. In the tree maintenance phase, the nodes repair the tree when communication failures occur. The emulation results show that D-GHS reduces the number of control messages and the energy consumption, at the cost of a slight increase in memory size and convergence time.
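The tree-construction phase computes a minimum spanning tree over the discovered links. As an illustration of the structure D-GHS converges to, here is a minimal centralized sketch (Kruskal with union-find); this is not the paper's algorithm, which builds the same tree in a distributed, message-passing fashion:

```python
def mst_kruskal(n, edges):
    """Minimum spanning tree of a connected graph with nodes 0..n-1.

    edges: iterable of (weight, u, v) tuples. Returns a list of (u, v, weight).
    Centralized illustration only; D-GHS computes the same tree distributedly.
    """
    parent = list(range(n))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):      # consider the cheapest links first
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two fragments -> keep it
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

A spanning tree of an n-node graph always has n − 1 edges, which makes the result easy to sanity-check.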
An energy consumption model for multimodal wireless sensor networks based on wake-up radio receivers
(2018)
Energy consumption is a major concern in Wireless Sensor Networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio, while no transmission is ongoing. The classical approach for allowing the reception of messages in such situations is to use a low-duty-cycle protocol, and to turn on the radio periodically, which reduces the idle listening problem, but requires timers and usually unnecessary wakeups. A better solution is to turn on the radio only on demand by using a Wake-up Radio Receiver (WuRx). In this paper, an energy model is presented to estimate the energy saving in various multi-hop network topologies under several use cases, when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows for estimating the benefit of various WuRx properties like using addressing or not.
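The trade-off the model captures can be illustrated with a toy calculation (all parameter names and numbers below are hypothetical and not taken from the paper's model): a duty-cycled node pays for periodic idle listening, while a WuRx node pays only for the always-on, but much lower-power, wake-up receiver:

```python
def radio_energy_mj(t_s, p_rx_mw, duty_cycle, n_msgs, e_msg_mj, p_wur_mw=None):
    """Toy per-node radio energy (mJ) over t_s seconds.

    p_wur_mw=None models classical low-duty-cycle listening; otherwise the
    main radio stays off and a wake-up receiver draws p_wur_mw continuously.
    Illustrative only -- the paper's model covers multi-hop topologies.
    """
    if p_wur_mw is None:
        listening = p_rx_mw * duty_cycle * t_s   # periodic idle listening
    else:
        listening = p_wur_mw * t_s               # always-on WuRx
    return listening + n_msgs * e_msg_mj         # plus the actual traffic

# Hypothetical numbers: 60 mW RX at 1% duty cycle vs. a 10 uW WuRx, one hour
duty = radio_energy_mj(3600, 60, 0.01, n_msgs=10, e_msg_mj=2)
wurx = radio_energy_mj(3600, 60, 0.01, n_msgs=10, e_msg_mj=2, p_wur_mw=0.01)
```

With these illustrative parameters the WuRx node spends far less energy, since idle listening dominates the duty-cycled budget.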
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in the industry. In this paper, we describe a capstone course design which adds hands-on learning of the lean principles advocated by Kanban to a capstone project run with Scrum. This both ensures that students are aware of recent process frameworks and ideas and gives them a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our observations. We analyze the development artifacts created by students during the course with respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
802.15.4 security protects against the replay, injection, and eavesdropping of 802.15.4 frames. A core concept of 802.15.4 security is the use of frame counters for both nonce generation and anti-replay protection. While being functional, frame counters (i) cause an increased energy consumption, as they incur a per-frame overhead of 4 bytes, and (ii) only provide sequential freshness. The Last Bits (LB) optimization does reduce the per-frame overhead of frame counters, yet at the cost of an increased RAM consumption and occasional energy- and time-consuming resynchronization actions. Alternatively, the timeslotted channel hopping (TSCH) media access control (MAC) protocol of 802.15.4 avoids the drawbacks of frame counters by replacing them with timeslot indices, but findings of Yang et al. question the security of TSCH in general. In this paper, we assume the use of ContikiMAC, which is a popular asynchronous MAC protocol for 802.15.4 networks. Under this assumption, we propose an Intra-Layer Optimization for 802.15.4 Security (ILOS), which intertwines 802.15.4 security and ContikiMAC. In effect, ILOS reduces the security-related per-frame overhead even more than the LB optimization, as well as achieves strong freshness. Furthermore, unlike the LB optimization, ILOS neither incurs an increased RAM consumption nor requires resynchronization actions. Beyond that, ILOS integrates with and advances other security supplements to ContikiMAC. We implemented ILOS using OpenMotes and the Contiki operating system.
The present work is part of a collaborative H2020 European-funded research project called SENSKIN, which aims to improve Structural Health Monitoring (SHM) for transport infrastructure through the development of an innovative monitoring and management system for bridges based on a novel, inexpensive, skin-like sensor. The integrated SENSKIN technology will be implemented in the case of steel and concrete bridges, and tested, field-evaluated and benchmarked in an actual bridge environment against a conventional health monitoring solution developed by Mistras Group Hellas. The main objective of the present work is to implement an autonomous, fully functional strain monitoring system based on commercially available off-the-shelf components, which will be used to accomplish a direct comparison between the performance of the innovative SENSKIN sensors and the conventional strain sensors commonly used for structural monitoring of bridges. For this purpose, the mini Structural Monitoring System (mini SMS) of Physical Acoustics Corporation, a comprehensive data acquisition unit designed specifically for long-term unattended operation in outdoor environments, was selected. For the completion of the conventional system, appropriate foil-type strain sensors were selected, driven by special conditioners manufactured by Mistras Group. A comprehensive description of the strain monitoring system and its peripheral components is provided in this paper. For the evaluation of the integrated system's performance and the effect of various parameters on the long-term behavior of the sensors, several steel test pieces instrumented with different strain sensor configurations were prepared and tested in both laboratory and field ambient conditions. Furthermore, loading tests were performed to validate the response of the system in monitoring the strains developed in steel beam elements subject to bending regimes.
Representative results obtained from the above experimental tests have been included in this paper as well.
CurEx
(2018)
The integration of diverse structured and unstructured information sources into a unified, domain-specific knowledge base is an important task in many areas. A well-maintained knowledge base enables data analysis in complex scenarios, such as risk analysis in the financial sector or investigating large data leaks, such as the Paradise or Panama papers. Both the creation of such knowledge bases and their continuous maintenance and curation involve many complex tasks and considerable manual effort. With CurEx, we present a modular system that allows structured and unstructured data sources to be integrated into a domain-specific knowledge base. In particular, we (i) enable the incremental improvement of each individual integration component; (ii) enable the selective generation of multiple knowledge graphs from the information contained in the knowledge base; and (iii) provide two distinct user interfaces tailored to the needs of data engineers and end-users, respectively. The former has curation capabilities and controls the integration process, whereas the latter focuses on the exploration of the generated knowledge graph.
Beacon in the Dark
(2018)
The large amount of heterogeneous data in email corpora renders experts' investigations by hand infeasible. Auditors or journalists, for example, who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design, we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus on a subset of emails. To this end, we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
The detection of all inclusion dependencies (INDs) in an unknown dataset is at the core of any data profiling effort. Apart from the discovery of foreign key relationships, INDs can help perform data integration, integrity checking, schema (re-)design, and query optimization. With the advent of Big Data, the demand for efficient IND discovery algorithms that can scale with the input data size increases. To this end, we propose S-INDD++ as a scalable system for detecting unary INDs in large datasets. S-INDD++ applies a new stepwise partitioning technique that helps discard a large number of attributes in early phases of the detection by processing the first partitions of smaller sizes. S-INDD++ also extends the concept of attribute clustering to decide which attributes to discard based on the clustering result of each partition. Moreover, in contrast to the state of the art, S-INDD++ does not require a partition to fit into main memory, which is a highly appreciable property in the face of ever-growing datasets. We conducted an exhaustive evaluation of S-INDD++ by applying it to large datasets with thousands of attributes and more than 266 million tuples. The results show the high superiority of S-INDD++ over the state of the art: S-INDD++ reduced the runtime by up to 50% in comparison with BINDER, and by up to 98% in comparison with S-INDD.
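For intuition, unary IND discovery asks, for every ordered pair of columns (A, B), whether every value of A also occurs in B. A naive in-memory baseline can be sketched as follows (illustrative only; S-INDD++ exists precisely to avoid this quadratic, memory-bound approach via stepwise partitioning and attribute clustering):

```python
def unary_inds(columns):
    """Return all unary INDs (A, B) with A's values contained in B's.

    columns: dict mapping column name -> iterable of values.
    Naive baseline: materializes every column as a set and compares all pairs.
    """
    sets = {name: set(values) for name, values in columns.items()}
    return sorted((a, b) for a in sets for b in sets
                  if a != b and sets[a] <= sets[b])
```

On a toy schema, a foreign-key-like column is reported as included in its referenced key column, but not the other way around.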
One particular challenge in the Internet of Things is the management of many heterogeneous things. The things are typically constrained devices with limited memory, power, network and processing capacity. Configuring every device manually is a tedious task. We propose an interoperable way to configure an IoT network automatically using existing standards. The proposed NETCONF-MQTT bridge mediates between the constrained devices (speaking MQTT) and the network management standard NETCONF. The NETCONF-MQTT bridge dynamically generates YANG data models from the semantic description of the device capabilities based on the oneM2M ontology. We evaluate the approach for two use cases, i.e. an actuator and a sensor scenario.
Live migration is an important feature in modern software-defined datacenters and cloud computing environments. Dynamic resource management, load balancing, power saving and fault tolerance all depend on the live migration feature. Despite the importance of live migration, its cost cannot be ignored and may result in service availability degradation. Live migration cost includes the migration time, downtime, CPU overhead, and network and power consumption. Many research articles discuss the problem of live migration cost with different scopes, such as analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most of the papers that discuss the migration cost problem focus on open-source hypervisors, and among the research articles that focus on VMware environments, none has proposed models of migration time, network overhead and power consumption for single and multiple VM live migration. In this paper, we propose empirical models for the live migration time, network overhead and power consumption for single and multiple VM migration. The proposed models are obtained using a VMware-based testbed.
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising method to manipulate the light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field to sub-wavelength mode volumes. A particularly interesting coupling regime occurs if the coupling increases to a level such that the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge, which can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ℏΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances ω+ and ω− with a minimal separation of Ω = ω+ − ω−.
The coupling between molecular excitations and nanoparticles leads to promising applications. It is, for example, used to enhance the optical cross-section of molecules in surface-enhanced Raman scattering, Purcell enhancement or plasmon-enhanced dye lasers. In a coupled system, new resonances emerge from the original plasmon (ωpl) and exciton (ωex) resonances as
ω± = (1/2)(ωpl + ωex) ± √[(1/4)(ωpl − ωex)² + g²],
(1)
where g is the coupling parameter. Hence, the new resonances show a separation of Δ = ω+ − ω−, and the coupling strength can be deduced from the minimum distance between the two resonances, Ω = Δ(ωpl = ωex).
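A quick numerical check of Eq. (1): at zero detuning (ωpl = ωex) the two branches are separated by exactly 2g in this convention, and any detuning only increases the separation. The helper below is a sketch using arbitrary, hypothetical frequency units:

```python
import math

def polariton_branches(w_pl, w_ex, g):
    """Upper and lower hybrid resonances (w+, w-) from Eq. (1)."""
    mean = 0.5 * (w_pl + w_ex)                              # branch midpoint
    root = math.sqrt(0.25 * (w_pl - w_ex) ** 2 + g ** 2)    # half the splitting
    return mean + root, mean - root

# At resonance the separation is minimal: w+ - w- = 2 g
w_plus, w_minus = polariton_branches(2.0, 2.0, 0.05)
```

Scanning `w_pl` across `w_ex` traces out the anticrossing described in the text, with the branch separation never falling below its on-resonance value.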
An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing and network capacity. Manual network and service management of IoT devices is challenging. We propose the use of an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e. the MQTT protocol and Semantic Web technologies. We herein introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices are enabled to aggregate capabilities of IoT devices, derive new services by inference, delegate requests/responses and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the network edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding the expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. Arguably the two most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move in any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefits of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over {0,..., n} that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. In case the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we also state the stationary distribution as well as the mixing time of a special case of our scenario.
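The scenario can be made concrete with a biased ±1 walk on {0, ..., n} absorbed at 0 (a sketch with hypothetical parameters, not the paper's general result). When the walk steps toward 0 with probability p > 1/2, the classical additive drift bound E[T | X0 = k] ≤ k/(p − q) with q = 1 − p can be checked against expected hitting times computed by value iteration:

```python
def hitting_times(n, p, sweeps=20000):
    """Expected hitting times of state 0 for a walk on {0, ..., n}.

    The walk steps left w.p. p and right w.p. q = 1 - p; at state n it
    steps left w.p. p and stays w.p. q. Computed by value iteration on
    the system E[T_k] = 1 + p*E[T_{k-1}] + q*E[T_{k+1}], E[T_0] = 0.
    """
    q = 1.0 - p
    e = [0.0] * (n + 1)
    for _ in range(sweeps):
        new = [0.0] * (n + 1)
        for k in range(1, n):
            new[k] = 1.0 + p * e[k - 1] + q * e[k + 1]
        new[n] = 1.0 + p * e[n - 1] + q * e[n]   # reflecting top boundary
        e = new
    return e

# Drift toward 0 is at least p - q in every state, so E[T_k] <= k / (p - q)
times = hitting_times(10, p=0.7)
```

With p = 0.7 the drift bound gives E[T_k] ≤ 2.5k, and the exact values computed above stay below it while increasing in the start state k.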
For theoretical analyses, two specifics distinguish GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular can yield bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has played a surprisingly small role in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of employing bloat control.
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from differences in tissue types, but from actual differences in cancer types. However, current analysis procedures do not incorporate this aspect. We therefore propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation by additional metrics that are sensitive to the tissue-related bias. Evaluations show that especially low-complexity gene selection approaches profit from introducing tissue-awareness.
Studies indicate that reliable access to power is an important enabler for economic growth. To this end, modern energy management systems have seen a shift from reliance on time-consuming manual procedures to highly automated management, with current energy provisioning systems being run as cyber-physical systems. Operating energy grids as cyber-physical systems offers the advantage of increased reliability and dependability, but also raises issues of security and privacy. In this chapter, we provide an overview of the contents of this book, showing the interrelation between the topics of the chapters in terms of smart energy provisioning. We begin by discussing the concept of smart grids in general, and then narrow our focus to smart micro-grids in particular. Lossy networks also provide an interesting framework for implementing smart micro-grids in remote/rural areas, where deploying standard smart grids is economically and structurally infeasible. To this end, we consider an architectural design for a smart micro-grid suited to devices with low processing capabilities. We model malicious behaviour and propose mitigation measures based on properties that distinguish normal from malicious behaviour.
The desiccation of the Aral Sea and the related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the hydrological regime of the modern Aral Sea basin is its discontinuous nature: only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea, based on coupling a stack of hydrological and data-driven models. The results show good prediction skill and confirm the possibility of developing a valuable water assessment tool that utilizes the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).
We consider chimera states in a one-dimensional medium of nonlinear, nonlocally coupled phase oscillators. We construct stationary inhomogeneous solutions of the Ott-Antonsen equation for a complex order parameter that correspond to fundamental chimeras. Stability calculations reveal that only some of these states are stable. Direct numerical simulation shows that, under certain conditions, these structures are transformed into breathing chimera regimes through the development of an instability. Further development of the instability leads to turbulent chimeras.
Learning how to prove
(2018)
We have developed an alternative approach to teaching computer science students how to prove. First, students are taught how to prove theorems with the Coq proof assistant. In a second, more difficult step, students transfer their acquired skills to the area of textbook proofs. In this article we present a realisation of this second step. Proofs in Coq have a high degree of formality, while textbook proofs have only a medium one. Our key idea is therefore to reduce the degree of formality from the level of Coq to that of textbook proofs in several small steps. For that purpose we introduce three proof styles between Coq and textbook proofs, called line-by-line comments, weakened line-by-line comments, and structure-faithful proofs. While this article is mostly conceptual, we also report on experiences with putting our approach into practice.
Modern routing algorithms reduce query time by relying heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than standard A*.
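The cost metric studied above, tile loads rather than expanded nodes, can be made concrete with a small sketch (the names and tiling scheme are our own, purely illustrative): each coordinate maps to a grid tile, and a tile counts as loaded the first time the search touches any node inside it.

```python
class TileCounter:
    """Counts distinct map tiles touched during a search,
    simulating the dominant cost on memory-limited devices."""

    def __init__(self, tile_size):
        self.tile_size = tile_size
        self.loaded = set()

    def touch(self, x, y):
        tile = (x // self.tile_size, y // self.tile_size)
        if tile not in self.loaded:
            self.loaded.add(tile)  # stands in for reading the tile from storage
        return len(self.loaded)

counter = TileCounter(tile_size=10)
# Five node expansions, but only three distinct tiles are loaded.
for x, y in [(1, 2), (3, 4), (15, 2), (18, 9), (1, 11)]:
    counter.touch(x, y)
print(len(counter.loaded))  # -> 3
```

An algorithm that keeps its search frontier within already-loaded tiles minimizes exactly this counter, which is the quantity the evaluation above compares against standard A*.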
This paper proposes an education approach for master's and bachelor's students to enhance their skills in the area of reliability, safety and security of electronic components in automated driving. The approach is based on the active, synergetic collaboration of research institutes, academia and industry within the framework of a joint lab. As an example, a jointly organized summer school with this focus is described and elaborated.
Minimising Information Loss on Anonymised High Dimensional Data with Greedy In-Memory Processing
(2018)
Minimising information loss on anonymised high-dimensional data is important for data utility. Syntactic data anonymisation algorithms address this issue by generating datasets that are neither use-case specific nor dependent on runtime specifications. This results in anonymised datasets that can be re-used in different scenarios, which is performance-efficient. However, syntactic data anonymisation algorithms incur high information loss on high-dimensional data, making the data unusable for analytics. In this paper, we propose an optimised exact quasi-identifier identification scheme, based on the notion of k-anonymity, to generate anonymised high-dimensional datasets efficiently and with low information loss. The scheme works by identifying and eliminating maximal partial unique column combination (mpUCC) attributes that endanger anonymity. By using in-memory processing to handle the attribute selection procedure, we significantly reduce the required processing time. We evaluated the effectiveness of our proposed approach on an enriched dataset drawn from multiple real-world data sources and augmented with synthetic values generated in close alignment with the real-world data distributions. Our results indicate that in-memory processing reduces the attribute selection time for the mpUCC candidates from 400s to 100s, while significantly reducing information loss. In addition, we achieve a time-complexity speed-up of O(3^(n/3)) ≈ O(1.4422^n).
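A minimal sketch of the unique-column-combination idea underlying mpUCC detection (our simplification, not the paper's optimised in-memory scheme): a column set is a quasi-identifier candidate if its value tuples single out every row of the table.

```python
from collections import Counter
from itertools import combinations

def is_ucc(rows, cols):
    """True if the projection onto `cols` identifies every row uniquely."""
    counts = Counter(tuple(row[c] for c in cols) for row in rows)
    return all(n == 1 for n in counts.values())

def ucc_candidates(rows, n_cols, max_size=2):
    """Enumerate small column combinations that are unique (exhaustive,
    exponential in general -- hence the paper's need for optimisation)."""
    return [cols for size in range(1, max_size + 1)
            for cols in combinations(range(n_cols), size)
            if is_ucc(rows, cols)]

rows = [("A", 30, "x"), ("A", 31, "x"), ("B", 30, "x")]
print(ucc_candidates(rows, 3))  # -> [(0, 1)]
```

No single column identifies the rows here, but columns 0 and 1 together do, which makes that pair the kind of attribute combination that endangers anonymity.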
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable over parametric approaches from the conditioned-generation family, when the training data is limited. We, however, identify a specific representation learning approach that is competitive to the retrieval-based approaches despite the training data limitation.
PIAnalyzer
(2018)
In this work we propose PIAnalyzer, a novel approach to analyzing PendingIntent-related vulnerabilities. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications from the Google Play Store and find 1358 insecure usages of PendingIntents, including 70 severe vulnerabilities. We manually inspected ten reported vulnerabilities, of which nine were correctly reported, indicating a high precision. The evaluation shows that PIAnalyzer is efficient, with an average execution time of 13 seconds per application.
Currently, we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments, such as web services, service-oriented computing and open-source software, an explosion of geodata and geospatial applications, and rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence in these rapidly changing, heterogeneous developments. This paper introduces the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selection and linkage of these services to user profiles, application tasks, data resources, or additional software allow for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with the geospatial expertise required for the effective, professional solution of a defined application problem. Providing users with geospatial intelligence in the form of web-based, modular services is a completely different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, at the centre and at the disposal of users, regardless of their expertise.
Several areas in Southeast Asia are very vulnerable to climate change yet unable to take immediate or effective countermeasures due to insufficient capabilities. Malaysia, in particular the east coast of peninsular Malaysia and Sarawak, is known as one of the regions vulnerable to flood disaster. Prolonged and intense rainfall, natural activities and an increase in runoff are the main causes of flooding in this area. In addition, topographic conditions also contribute to the occurrence of flood disaster. Kuching city is located in the northwest of Borneo Island and is part of the Sarawak river catchment. The area is a developing state in Malaysia that has experienced rapid urbanization since the 2000s, which has led to insufficient data availability in topography and hydrology. To deal with these challenging issues, this study presents a flood modelling framework that uses remote sensing technologies and machine learning techniques to acquire a digital elevation model (DEM) with improved accuracy for the non-surveyed areas. Intensity–duration–frequency (IDF) curves were derived from a climate model for various scenario simulations. The developed flood framework will be beneficial for planners, policymakers, stakeholders and researchers in the field of water resource management by providing better ideas and tools for dealing with the flooding issues in the region.
Recently blockchain technology has been introduced to execute interacting business processes in a secure and transparent way. While the foundations for process enactment on blockchain have been researched, the execution of decisions on blockchain has not been addressed yet. In this paper we argue that decisions are an essential aspect of interacting business processes, and, therefore, also need to be executed on blockchain. The immutable representation of decision logic can be used by the interacting processes, so that decision taking will be more secure, more transparent, and better auditable. The approach is based on a mapping of the DMN language S-FEEL to Solidity code to be run on the Ethereum blockchain. The work is evaluated by a proof-of-concept prototype and an empirical cost evaluation.
OpenLL
(2018)
Today's rendering APIs lack robust functionality and capabilities for dynamic, real-time text rendering and labeling, which represent key requirements for 3D application design in many fields. As a consequence, most rendering systems are barely or not at all equipped with respective capabilities. This paper drafts the unified text rendering and labeling API OpenLL, intended to complement common rendering APIs, frameworks, and transmission formats. To this end, various uses of static and dynamic placement of labels are showcased and a text interaction technique is presented. Furthermore, API design constraints with respect to state-of-the-art text rendering techniques are discussed. This contribution is intended to initiate a community-driven specification of a free and open label library.
A hybrid design approach for the hierarchical physical implementation design flow is presented and demonstrated on a fault-tolerant, low-power multiprocessor system. The proposed flow allows implementing selected submodules in parallel under contrary requirements, such as identical placement and individual block implementation. The overall system contains four Leon2 cores, communicates via the Waterbear framework, and supports Adaptive Voltage Scaling (AVS) functionality. Three of the processor core variants are derived from the first baseline reference core but implemented individually at block level based on their clock tree specification. The chip is prepared for space applications and designed with triple modular redundancy (TMR) for control parts. The low-power performance is enabled by contemporary power and clock management control. An ASIC was fabricated in a low-power 0.13 µm BiCMOS technology process node.
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradation. First, using three well-known DNNs (ResNet-152, VGG-19, GoogLeNet), we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error patterns between humans and DNNs as the signal gets weaker. Second, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness to uniform white noise, and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge for deep learning vision systems, one that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations, with the bar set by the human visual system.
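The distortion mismatch discussed above is easy to reproduce in a toy setting (parameterisation assumed by us, not taken from the paper): salt-and-pepper noise sets a random subset of pixels to the extremes, while uniform noise shifts every pixel by a small offset, so the two corruptions have very different statistics.

```python
import random

def salt_and_pepper(pixels, p, rng):
    """With probability p, replace a pixel by pure black (0.0) or white (1.0)."""
    return [rng.choice([0.0, 1.0]) if rng.random() < p else v
            for v in pixels]

def uniform_noise(pixels, width, rng):
    """Shift every pixel by a uniform offset in [-width, width], clamped to [0, 1]."""
    return [min(1.0, max(0.0, v + rng.uniform(-width, width)))
            for v in pixels]

rng = random.Random(0)
img = [0.5] * 8                      # a flat grey toy "image"
sp = salt_and_pepper(img, p=0.5, rng=rng)
un = uniform_noise(img, width=0.1, rng=rng)
# sp contains only 0.0, 0.5, or 1.0; un stays close to grey everywhere.
```

A network that learns to ignore isolated extreme pixels has learnt nothing about small dense perturbations, which is one intuition for the poor cross-distortion generalisation reported above.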
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjoint general and specific word distributions, resulting in clear-cut topic representations.
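The entropy criterion for separating collection-specific from collection-independent words can be illustrated directly (our reading of the abstract, with toy counts): a word spread evenly across collections has maximal entropy, while a word concentrated in one collection has low entropy.

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a word's occurrence distribution
    over collections; zero counts are skipped."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Occurrences of a word in three collections (patents, papers, news):
print(round(entropy([10, 10, 10]), 3))  # -> 1.585 = log2(3): collection-independent
print(entropy([28, 1, 1]) < 1.0)        # low entropy: collection-specific
```

Thresholding this value is one simple way to route a word to either the shared or the collection-specific topic distribution.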
Collaboration during the modeling process is cumbersome and characterized by various limitations. Following the successful transfer of the first process modeling languages into the augmented world, non-transparent processes can be visualized in a more comprehensive way. With the aim of increasing the comfort, speed, accuracy and versatility of real-world process augmentations, a framework for the bidirectional interplay of the common process modeling world and the augmented world has been designed as a morphological box. Its demonstration shows that the derived AR integrations work. The identified dimensions were derived from (1) a designed knowledge construction axiom, (2) a designed meta-model, (3) designed use cases and (4) designed directional interplay modes. Through a workshop-based survey, the best AR modeling configuration so far is identified, which can serve as a basis for benchmarks and implementations.
Microstructure Characterisation of Advanced Materials via 2D and 3D X-Ray Refraction Techniques
(2018)
3D imaging techniques have an enormous potential for understanding the microstructure, its evolution, and its link to mechanical, thermal, and transport properties. In this conference paper, we report the use of a powerful, yet not so widespread, set of X-ray techniques based on refraction effects. X-ray refraction allows determining the internal specific surface (surface per unit volume) in a non-destructive, position- and orientation-sensitive fashion, and with nanometric detectability. We demonstrate showcases of ceramics and composite materials, where microstructural parameters could be obtained in a way unrivalled even by high-resolution techniques such as electron microscopy or computed tomography. We present an in situ analysis of the damage evolution in an Al/Al2O3 metal matrix composite under tensile load, and the identification of void formation (different kinds of defects, particularly unsintered powder hidden in pores, and small inhomogeneities such as cracks) in Ti64 parts produced by selective laser melting, using synchrotron X-ray refraction radiography and tomography.
Precision fruticulture addresses site- or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest, were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation, with three irrigation levels (0, 50 and 100%) applied. Measurements included readings of the apparent electrical conductivity of the soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of six trees per treatment were done on fruit, considering the pigment contents and quality parameters. Midday stem water potential (SWP), the normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7, indicating high variability due to irrigation and precipitation. Canopy volume data were less variable, and soil ECa appeared homogeneous in the range of 0 to 4 mS m-1. Fruit harvested in a drought-stress zone showed an enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected the soluble solids content and, hence, the quality of the fruit. Overall, the results highlight that spatial variation in orchards can be found even where only marginal variability of soil properties can be assumed.
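For reference, the normalized crop water stress index follows the standard baseline form CWSI = (Tc − Twet) / (Tdry − Twet), where Tc is the measured canopy temperature and Twet/Tdry are the non-stressed and fully stressed reference temperatures (the temperature values below are illustrative, not the study's data):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Normalized crop water stress index: 0 = no stress, 1 = full stress."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# A canopy at 27 °C between a 24 °C wet and a 34 °C dry baseline:
print(round(cwsi(27.0, 24.0, 34.0), 2))  # -> 0.3, within the 0.1-0.7 range reported
```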
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, a company needs to possess expert knowledge. The difficulty arises when experts no longer work for the company but their knowledge is still needed to realise an IT project. In this paper, we present the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects and consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that three approaches, namely, the elicitation of expert knowledge, the success of an IT project, and the discovery of new knowledge gleaned from the expert knowledge base, otherwise known as the decision model, complement each other.
The relentless improvement of silicon photonics is making optical interconnects and networks appealing for use in miniaturized systems, where electrical interconnects cannot keep up with the growing levels of core integration due to bandwidth density and power efficiency limitations. At the same time, solutions such as 3D stacking or 2.5D integration open the door to a fully dedicated process optimization for the photonic die. However, an architecture-level integration challenge arises between the electronic network and the optical one in such tightly integrated parallel systems. It consists of adapting signaling rates, matching the different levels of communication parallelism, handling cross-domain flow control, addressing re-synchronization concerns, and avoiding protocol-dependent deadlock. The associated energy and performance overhead may offset the inherent benefits of the emerging technology itself. This paper explores a hybrid CMOS-ECL bridge architecture between 3D-stacked, technology-heterogeneous networks-on-chip (NoCs). The different ways of overcoming the serialization challenge (i.e., through an improvement of the signaling rate and/or through space-/wavelength-division multiplexing options) give rise to a configuration space that the paper explores in search of the most energy-efficient configuration for high performance.
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate the characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques with a generalized user interface offering interactive tools to facilitate a creative and localized editing process. We first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for the orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. First user tests indicate different levels of satisfaction with the implemented techniques and interaction design.
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large, complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, making them vulnerable to multi-step attacks and offering economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security-risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservices' attack surfaces are altered, thereby introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack-surface randomization success rate of over 70%.
Speech scientists have long noted that the qualities of naturally produced vowels do not remain constant over their durations, regardless of whether they are nominally "monophthongs" or "diphthongs". Recent acoustic corpora show that there are consistent patterns of first (F1) and second (F2) formant frequency change across different vowel categories. The three Australian English (AusE) close front vowels /iː, ɪ, ɪə/ provide a striking example: while their midpoint or mean F1 and F2 frequencies are virtually identical, their spectral change patterns distinctly differ. The results indicate that, despite the distinct patterns of spectral change of AusE /iː, ɪ, ɪə/ in production, their perceptual relevance is not uniform, but rather vowel-category dependent.
We investigated online electrophysiological components of distributional learning, specifically of tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables with lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and the mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. Here we analyze the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time was analyzed using Generalized Additive Mixed Models; the results show that the MMN amplitude increased for both within- and across-category tokens, reflecting the higher perceptual acuity that accompanies category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.
Voice onset time (VOT), a primary cue for voicing in many languages including English and German, is known to vary greatly between speakers, but also displays robust within-speaker consistencies, at least in English. The current analysis extends these findings to German. VOT measures were investigated for voiceless alveolar and velar stops in CV syllables cued by a visual prompt in a cue-distractor task. As in English, a considerable portion of German VOT variability can be attributed to the syllable's vowel length and the stop's place of articulation. Individual differences in VOT still remain irrespective of speech rate. However, significant correlations across places of articulation, and between speaker-specific mean VOTs and standard deviations, indicate that talkers employ a relatively unified VOT profile across places of articulation. This could allow listeners to adapt more efficiently to speaker-specific realisations.
Unified logging system for monitoring multiple cloud storage providers in cloud storage broker
(2018)
With the increasing demand for personal and enterprise data storage services, a Cloud Storage Broker (CSB) provides cloud storage service using multiple Cloud Service Providers (CSPs) with guaranteed Quality of Service (QoS), such as data availability and security. However, monitoring cloud storage usage across multiple CSPs has become a challenge for CSBs due to the lack of a standardized logging format for cloud services, which causes each CSP to implement its own format. In this paper we propose a unified logging system that can be used by a CSB to monitor cloud storage usage across multiple CSPs. We gather cloud storage log files from three different CSPs and normalise these into our proposed log format, which can then be used for further analysis. We show that our work enables a coherent view suitable for data navigation, monitoring, and analytics.
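The normalisation step can be sketched as a mapping from provider-specific records into one common schema (all field names below, on both the input and output side, are assumptions for illustration, not the proposed log format):

```python
# A common target schema (hypothetical) and one adapter per provider.
COMMON_FIELDS = ("timestamp", "provider", "operation", "bytes")

def normalise_provider_a(rec):
    """Adapter for a provider logging camelCase event fields (assumed names)."""
    return {"timestamp": rec["eventTime"], "provider": "a",
            "operation": rec["eventName"], "bytes": rec.get("bytesTransferred", 0)}

def normalise_provider_b(rec):
    """Adapter for a provider logging method-style fields (assumed names)."""
    return {"timestamp": rec["time"], "provider": "b",
            "operation": rec["methodName"], "bytes": rec.get("responseSize", 0)}

unified = [
    normalise_provider_a({"eventTime": "2018-01-01T00:00:00Z",
                          "eventName": "PutObject", "bytesTransferred": 1024}),
    normalise_provider_b({"time": "2018-01-01T00:01:00Z",
                          "methodName": "objects.get"}),
]
# Every record now carries exactly the same fields, whatever its origin.
assert all(set(r) == set(COMMON_FIELDS) for r in unified)
```

One adapter per CSP keeps the analysis layer ignorant of provider quirks, which is the "coherent view" the abstract refers to.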
CSBAuditor
(2018)
Cloud Storage Brokers (CSB) provide seamless and concurrent access to multiple Cloud Storage Services (CSS) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges, including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these issues. Therefore, this paper proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes, e.g. bucket policy misconfigurations, and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating Broker Monkey, a component that continuously injects failure into our reference CSB system, Cloud RAID. Hence, CSBAuditor is continuously tested for efficiency, i.e. its ability to detect the changes injected by Broker Monkey. CSBAuditor employs security metrics for risk analysis by computing severity scores for detected vulnerabilities using the Common Configuration Scoring System, thereby overcoming the limitation of insufficient security metrics in existing cloud auditing schemes. CSBAuditor has been tested using various strategies, including chaos-engineering failure-injection strategies. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues, with a detection and recovery rate of over 96%.
ASEDS
(2018)
The massive adoption of social media has provided new ways for individuals to express their opinions and emotions online. In 2016, Facebook introduced a new reactions feature that allows users to express their psychological emotions regarding published contents using so-called Facebook reactions. In this paper, a framework for predicting the distribution of Facebook post reactions is presented. For this purpose, we collected a large number of Facebook posts together with their reaction labels using the proposed scalable Facebook crawler. The training process utilizes 3 million labeled posts from more than 64,000 unique Facebook pages from diverse categories. The evaluation on standard benchmarks using the proposed features shows promising results compared to previous research. The final model is able to predict the reaction distribution of Facebook posts with a recall score of 0.90 for the "Joy" emotion.
Organic or inorganic (A) metal (M) halide (X) perovskites (AMX3) are semiconductor materials setting the basis for the development of highly efficient, low-cost and multijunction solar energy conversion devices. The best efficiencies nowadays are obtained with mixed compositions containing methylammonium, formamidinium, Cs and Rb as cations, as well as iodine, bromine and chlorine as anions. Understanding fundamental properties such as the crystal structure and its effect on the band gap, as well as the phase stability, is essential. In this systematic study, X-ray diffraction and photoluminescence spectroscopy were applied to evaluate the structural and optoelectronic properties of hybrid perovskites with mixed compositions.
What Stays in Mind?
(2018)
This Research-to-Practice paper examines the practical application of various forms of collaborative learning in MOOCs. Since 2012, about 60 MOOCs in the wider context of Information Technology and Computer Science have been conducted on our self-developed MOOC platform. The platform is also used by several customers, who either run their own platform instances or use our white-label platform. We, as well as some of our partners, have experimented with different approaches to collaborative learning in these courses. Based on the results of early experiments, surveys amongst our participants, and requests by our business partners, we have integrated several options for collaborative learning into the system. The results of our experiments are fed back directly into the platform development, allowing us to fine-tune existing tools and to add new ones where necessary. In this paper, we discuss the benefits and disadvantages of design decisions in a MOOC with regard to the various forms of collaborative learning. While our focus is on forms of large-group collaboration, two types of small-group collaboration on our platforms are also briefly introduced.
Beyond Surveys
(2018)
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O Link Compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations focus on On-the-Fly compression in both Single-Node and Scale-Out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated On-the-Fly I/O Link Compression for many workloads in a Scale-Out scenario, and for some even in a Single-Node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
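The feasibility argument can be checked with a back-of-the-envelope model (our formulation, not the paper's; it serialises compression and transfer for simplicity, whereas On-the-Fly hardware would overlap them): compression pays off when the reduced transfer time outweighs the compressor overhead.

```python
def raw_transfer_s(size_gb, link_gbps):
    """Seconds to move size_gb gigabytes over a link of link_gbps gigabits/s."""
    return size_gb * 8 / link_gbps

def compressed_transfer_s(size_gb, link_gbps, ratio, comp_gbps):
    """ratio: compression factor (2.0 halves the volume);
    comp_gbps: throughput of the compressor, e.g. a hardware 842 engine."""
    return size_gb * 8 / comp_gbps + (size_gb / ratio) * 8 / link_gbps

raw = raw_transfer_s(10, link_gbps=16)                          # 5.0 s
comp = compressed_transfer_s(10, 16, ratio=2.0, comp_gbps=100)  # 0.8 + 2.5 = 3.3 s
print(raw > comp)  # compression wins for this (illustrative) workload
```

With a fast compressor and a slow link the compressed path dominates; a fast link or a poor compression ratio tips the balance back, which matches the paper's workload-dependent conclusions.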
Our Conclusions
(2018)