The plant pathogen Pseudomonas syringae is a gram-negative bacterium that infects a wide range of plant species, including important crop plants. To suppress plant immunity and cause disease, P. syringae injects type-III effector proteins (T3Es) into the plant cell cytosol. In this study, we identified a novel target of the well-characterized bacterial T3E HopZ1a. HopZ1a is an acetyltransferase that has been shown to disrupt vesicle transport during innate immunity by acetylating tubulin. Using a yeast two-hybrid screen, we identified a REMORIN (REM) protein from tobacco as a novel HopZ1a target. HopZ1a interacts with REM at the plasma membrane (PM), as shown by split-YFP experiments. Interestingly, we found that PBS1, a well-known kinase involved in plant immunity, also interacts with REM in pull-down assays and at the PM, as shown by BiFC. Furthermore, we confirmed that REM is phosphorylated by PBS1 in vitro. Overexpression of REM provokes the upregulation of defense genes and leads to disease-like phenotypes, pointing to a role of REM in plant immune signaling. Further protein-protein interaction studies reveal novel REM binding partners with a possible role in plant immune signaling. Thus, REM might act as an assembly hub for an immune signaling complex targeted by HopZ1a. Taken together, this is the first report that a REM protein is targeted by a bacterial effector. How HopZ1a might mechanistically manipulate the plant immune system by interfering with REM function will be discussed.
Oxidative posttranslational modifications of endogenous proteins are caused mainly by reactive oxygen and nitrogen species (ROS and RNS) and can be either reversible (e.g., disulfide bonds) or irreversible (e.g., protein carbonyls) [1–3]. It was long assumed that oxidative posttranslational protein modifications (oxPTPMs) are of only minor importance for metabolism. In fact, however, their formation is a physiological process that, by modulating protein structure, can also affect protein function (e.g., enzyme activity, stability) and thus numerous metabolic pathways, such as energy metabolism, immune function, vascular function, apoptosis, and gene expression. The formation of oxPTPMs is tightly regulated and depends, among other things, on protein structure, the availability of ROS and RNS, and the local microenvironment of the cell [2, 4].
One paragraph of the manuscript was inadvertently omitted at the very final stage of its compilation due to a technical mistake. Since this paragraph discusses the declustering of the earthquake catalogue used, and is therefore necessary for understanding the preprocessing of the seismicity data, the authors decided to provide it in the form of a correction. The paragraph belongs to chapter 2 of the paper, where it was originally placed, and should be inserted into the published paper before the second-to-last paragraph. The omitted text reads as follows:
We study the rupture processes of the Mw 8.1 Iquique earthquake (2014/04/01) and its largest aftershock, Mw 7.7 (2014/04/03), which ruptured the North Chile subduction zone. High-rate Global Positioning System (GPS) recordings and strong-motion data are used to reconstruct the evolution of the slip amplitude, rise time, and rupture time of both earthquakes. A two-step inversion scheme is adopted: prior models for both earthquakes are first built from the inversion of the estimated static displacements, and kinematic inversions in the frequency domain are then carried out taking this prior information into account. The preferred model for the mainshock exhibits a seismic moment of 1.73 × 10²¹ Nm (Mw 8.1) and a maximum slip of ∼9 m, while the aftershock model has a seismic moment of 3.88 × 10²⁰ Nm (Mw 7.7) and a maximum slip of ∼3 m. For both earthquakes, the final slip distributions show two asperities (a shallow one and a deep one) separated by an area with a significant slip deficit. This suggests an along-dip segmentation which might be related to a change in the dip angle of the subducting slab inferred from gravimetric data. Along-strike, the areas where the seismic ruptures stopped seem to be well correlated with geological features observed from geophysical information (high-resolution bathymetry, gravimetry, and coupling maps) that are representative of the long-term segmentation of the subduction margin. Considering the spatially limited portions broken by these two earthquakes, our results support the idea that the seismic gap is not yet filled.
Editorial
(2018)
"Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has." – Margaret Mead
With the last issue of this year we want to point towards what will come and what challenges and opportunities lie ahead of us. More needed than ever are joint creative efforts to find ways to collaborate and innovate in order to secure the wellbeing of our earth for the generations to come. We have found ourselves puzzled that we could assemble a sustainability issue without a call for papers or a special issue. In fact, many of the submissions we currently receive deal with sustainable, ecological, or novel approaches to management and organizations. While creativity and innovation are indisputably necessary ingredients for reaching the sustainable development goals, empirical proof and research in this area are still in their infancy. Although the role of design and design thinking in solving wicked societal problems has been highlighted before, much more research is needed on the creative and innovative ways organisations and societies can take to find solutions to climate change, poverty, hunger, and education. We would therefore like to call on you, our readers and writers, to tackle these problems with your research.
The first article in this issue addresses one of the challenges named above: the role of innovation in achieving the transition to a low-carbon energy world. In “Innovating for low-carbon energy through hydropower: Enabling a conservation charity's transition to a low-carbon community”, the authors John Gallagher, Paul Coughlan, A. Prysor Williams and Aonghus McNabola look at how an eco-design approach has supported a community's transition to low carbon. They highlight the importance of effective management as well as external collaboration, and how the key to success lay in fostering an open environment for creativity and idea sharing. The second article addresses another of the grand challenges, the future of mobility, and uses a design-driven approach to develop scenarios for mobility in cities. In “Designing radical innovations of meanings for society: envisioning new scenarios for smart mobility”, the authors Claudio Dell'Era, Naiara Altuna and Roberto Verganti investigate how new meanings can be designed and proposed to society rather than to individuals in the particular context of smart mobility. Through two case studies, the authors argue for a multi-level perspective: taking the perspective of society to solve societal challenges while considering the needs of the individual. The latter is needed because we will not change if our needs are not addressed. Furthermore, the authors find that both meaning and technology need to be considered to create radical innovation for society. The role of meaning continues in the third article in this issue. The authors Marta Gasparin and William Green show in their article “Reconstructing meaning without redesigning products: The case of the Serie7 chair” how meaning changes over time even though the product remains the same.
Through an in-depth retrospective study of the Serie 7 chair, the authors investigate the relationship between meaning and the materiality of the object, and show the importance of materiality in constructing product meaning over long periods. Translating this meaning over the course of the innovation process is an important task of management in order to gain buy-in from all involved stakeholders. In the following article, “A systematic approach for new technology development by using a biomimicry-based TRIZ contradiction matrix”, the authors Byungun Yoon, Chaeguk Lim, Inchae Park and Dooseob Yoon develop a systematic process combining biomimicry and technology-based TRIZ in order to solve technological problems or develop new technologies from completely new sources or combinations of technology and biology.
In the fifth article in this issue, “Innovating via Building Absorptive Capacity: Interactive Effects of Top Management Support of Learning, Employee Learning Orientation, and Decentralization Structure”, the authors Li-Yun Sun, Chenwei Li and Yuntao Dong examine the effect of learning-related personal and contextual factors on organizational absorptive capability and subsequent innovative performance. The authors find positive effects as well as a moderating influence of decentralized organizational decision-making structures. In the sixth article, “Creativity within boundaries: social identity and the development of new ideas in franchise systems”, the authors Fanny Simon, Catherine Allix-Desfautaux, Nabil Khelil and Anne-Laure Le Nadant address the paradox of balancing novelty and conformity for creativity in a franchise system. This research is one of the first we know of to explicitly address creativity and innovation in such a rigid and pre-determined system. Using a social identity perspective, they show that social control, which may be exerted by manipulating group identity, is an efficient lever to increase both the creation and the diffusion of ideas. Furthermore, they show that franchisees who do not conform to the norms of the group are stigmatized and must face pressure from the group to adapt their behaviors. This has important implications for future research. In the following article, “Exploring employee interactions and quality of contributions in intra-organisational innovation platforms”, the authors Dimitra Chasanidou, Njål Sivertstol and Jarle Hildrum examine user interactions in an intra-organisational innovation platform and also address the influence of user interactions on idea development. The authors find that employees communicate through the innovation platform with different interaction, contribution and collaboration types, and they propose three types of contribution quality: passive, efficient and balanced contribution.
In the eighth article, “'Ready for Take-off': How Open Innovation influences startup success”, Cristina Marullo, Elena Casprini, Alberto di Minin and Andrea Piccaluga seek to predict new venture success based on factors that can be observed in the pre-startup phase. The authors introduce different variables of founding teams and how these relate to startup success. Building on a large-scale dataset of business plans submitted at UC Berkeley, they show that teams with high skills diversity and past joint experience are far better able to prevent the risk of business failure at entry and to adapt their internal resources to market conditions. Furthermore, it is crucial for the team to integrate many external knowledge sources into their process (openness) in order to be successful. The crucial role of knowledge, and how it is communicated and shared, is the focal point of Natalya Sergeeva and Anna Trifilova's article on “The role of storytelling in the innovation process”. The authors show how storytelling has an important role to play when it comes to motivating employees to innovate and promoting innovation success stories inside and outside the organization. The deep human desire to hear and experience stories is also addressed in the last article in this issue, “Gamification Approaches to the Early Stage of Innovation” by Rui Patricio, Antonio Moreira and Francesco Zurlo. Using gamification approaches at the early stage of innovation promises to create better team coherence, let employees experience fun and engagement, improve communication, and foster knowledge exchange. Using an analytical framework, the authors analyze 15 articles that have previously looked at gamification in the context of innovation management. They find that gamification indeed supports firms in becoming better at performing complex innovation tasks and managing innovation challenges.
Furthermore, gamification in innovation creates a space for inspiration, improves creativity, and supports the generation of high-potential ideas.
Riback et al. (Reports, 13 October 2017, p. 238) used small-angle x-ray scattering (SAXS) experiments to infer a degree of compaction for unfolded proteins in water versus chemical denaturant that is highly consistent with the results from Förster resonance energy transfer (FRET) experiments. There is thus no "contradiction" between the two methods, nor evidence to support their claim that commonly used FRET fluorophores cause protein compaction.
Introduction
(2018)
The present thematic set of studies comprises five concise review articles on the use of priming paradigms in different areas of bilingualism research. Their aim is to provide readers with a quick overview of how priming paradigms can be employed in particular subfields of bilingualism research and to make readers aware of the methodological issues that need to be considered when using priming techniques.
In an effort to explain the formation of a narrow third radiation belt at ultra-relativistic energies detected during a solar storm in September 2012 (ref. 1), Mann et al. (ref. 2) present simulations from which they conclude that it arises from a process of outward radial diffusion alone, without the need for additional loss processes from higher-frequency waves. The comparison of observations with the model in Figs 2 and 3 of their Article clearly shows that even with strong radial diffusion rates, the model predicts a third belt near L* = 3 that is twice as wide as observed and approximately an order of magnitude more intense. We therefore disagree with their interpretation that “the agreement between the absolute fluxes from the model and those observed by REPT [the Relativistic Electron Proton Telescope] shown on Figs 2 and 3 is excellent.”
Previous studies (ref. 3) have shown that outward radial diffusion plays a very important role in the dynamics of the outer belt and is capable of explaining rapid reductions in the electron flux. It has also been shown that it can produce remnant belts (Fig. 2 of a long-term simulation study, ref. 4). However, radial diffusion alone cannot explain the formation of the narrow third belt at multi-MeV energies during September 2012. An additional loss mechanism is required.
Higher radial diffusion rates cannot improve the comparison of the model presented by Mann et al. with observations. A further increase in the radial diffusion rates (reported in Fig. 4 of the Supplementary Information of ref. 2) results in the overestimation of the outer-belt fluxes by up to three orders of magnitude at an energy of 3.4 MeV.
Observations at 2 MeV, where the belts show only a two-zone structure, were not presented by Mann et al. Moreover, simulations of electrons with energies below 2 MeV with the same diffusion rates and boundary conditions used by the authors would probably produce very strong depletions down to L = 3–3.5, where L is the radial distance from the centre of the Earth to the given field line in the equatorial plane. Observations do not show non-adiabatic loss below L ∼ 4.5 for 2 MeV. Such different dynamics between 2 MeV and energies above 4 MeV at around L = 3.5 are another indication that particles are scattered by electromagnetic ion cyclotron (EMIC) waves, which affect only energies above a certain threshold.
Observations of the phase space density (PSD) provide additional evidence for the local loss of electrons. Around L* = 3.5–4, the PSD shows a significant decrease by an order of magnitude starting in the afternoon of 3 September (Fig. 1a), while the PSD above L* = 4 is increasing. The minimum in PSD between L* = 3.5 and 4 continues to deepen until 4 September. This evolution demonstrates that the loss is not produced by outward diffusion. Radial diffusion cannot produce deepening minima, as it works to smooth gradients. Just as growing peaks in PSD show the presence of localized acceleration (ref. 5), deepening minima show the presence of localized loss.
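The smoothing argument can be illustrated with a minimal one-dimensional sketch (a hypothetical profile and diffusion coefficient, not the radial diffusion model itself): under an explicit diffusion update, every interior point moves toward the mean of its neighbours, so a local minimum can only fill in, never deepen.

```python
# Minimal 1D diffusion sketch: each explicit update replaces an interior
# value by a convex combination of itself and its neighbours (for d <= 0.5),
# so the global minimum of the profile cannot decrease over time.
def diffuse(profile, d=0.2, steps=50):
    """Apply `steps` explicit diffusion updates with coefficient d (d <= 0.5)."""
    f = list(profile)
    for _ in range(steps):
        f = [f[0]] + [f[i] + d * (f[i - 1] - 2 * f[i] + f[i + 1])
                      for i in range(1, len(f) - 1)] + [f[-1]]
    return f

# A hypothetical profile with a pronounced minimum between two peaks.
profile = [1.0, 0.9, 0.1, 0.9, 1.0]
smoothed = diffuse(profile)
assert min(smoothed) > min(profile)  # the minimum fills in; it cannot deepen
```

A deepening minimum, as seen in the PSD data, therefore requires a localized sink term that pure diffusion does not provide.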
Figure 1: Time evolution of radiation profiles in electron PSD at relativistic and ultra-relativistic energies.
a, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model (ref. 10) and for μ = 2,500 MeV G⁻¹, K = 0.05 RE G^0.5 (where RE is the radius of the Earth). b, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model and for μ = 700 MeV G⁻¹, corresponding to MeV energies in the heart of the belt. The minimum in PSD in the heart of the multi-MeV electron radiation belt between 3.5 and 4 RE, deepening between the afternoon of 3 September and 5 September, clearly shows that the narrow remnant belt at multi-MeV energies below 3.5 RE is produced by local loss.
The minimum in the outer boundary is reached on the evening of 2 September. After that, the outer boundary moves up, while the minimum decreases by approximately an order of magnitude, clearly showing that this main decrease cannot be explained by outward diffusion and requires additional loss processes. The analysis of PSD profiles is a standard tool, used, for example, in the study of electron acceleration (ref. 5), and routinely used by the entire Van Allen Probes team. In the Supplementary Information, we show that this analysis is validated using different magnetic field models. The Supplementary Information also shows that the measurements are above background noise.
Deepening minima at multi-MeV energies during times when the boundary flux increases are clearly seen in Fig. 1a. They show that there must be localized loss, as radial diffusion cannot produce a minimum that becomes lower with time. At lower energies of 1–2 MeV, which correspond to lower values of the first adiabatic invariant μ (Fig. 1b), the profiles are monotonic between L* = 3 and 3.5, consistent with the absence of scattering by EMIC waves, which affect only electrons above a certain energy threshold (refs 6–9).
In summary, the results of the modelling and the observations presented by Mann et al. do not support the claim that the dynamics of the ultra-relativistic third Van Allen radiation belt can be explained by an outward radial diffusion process alone. While outward radial diffusion driven by loss to the magnetopause (ref. 2) is certainly operating during this storm, there is compelling observational and modelling evidence (refs 2, 6) that very efficient localized electron loss operates during this storm at multi-MeV energies, consistent with localized loss produced by EMIC waves.
We present a prototype of an integrated reasoning environment for educational purposes. The presented tool is a fragment of a proof assistant and automated theorem prover. We describe the existing and planned functionality of the theorem prover and especially the functionality of the educational fragment. This currently supports working with terms of the untyped lambda calculus and addresses both undergraduate students and researchers. We show how the tool can be used to support the students' understanding of functional programming and discuss general problems related to the process of building theorem proving software that aims at supporting both research and education.
Low back pain (LBP) is a leading cause of activity limitation. Objective assessment of the spinal motion plays a key role in diagnosis and treatment of LBP. We propose a method that facilitates clinical assessment of lower back motions by means of a wireless inertial sensor network. The sensor units are attached to the right and left side of the lumbar region, the pelvis and the thighs, respectively. Since magnetometers are known to be unreliable in indoor environments, we use only 3D accelerometer and 3D gyroscope readings. Compensation of integration drift in the horizontal plane is achieved by estimating the gyroscope biases from automatically detected initial rest phases. For the estimation of sensor orientations, both a smoothing algorithm and a filtering algorithm are presented. From these orientations, we determine three-dimensional joint angles between the thighs and the pelvis and between the pelvis and the lumbar region. We compare the orientations and joint angles to measurements of an optical motion tracking system that tracks each skin-mounted sensor by means of reflective markers. Eight subjects perform a neutral initial pose, then flexion/extension, lateral flexion, and rotation of the trunk. The root mean square deviation between inertial and optical angles is about one degree for angles in the frontal and sagittal plane and about two degrees for angles in the transverse plane (both values averaged over all trials). We choose five features that characterize the initial pose and the three motions. Interindividual differences of all features are found to be clearly larger than the observed measurement deviations. These results indicate that the proposed inertial sensor-based method is a promising tool for lower back motion assessment.
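The rest-phase bias compensation described in the abstract can be sketched for a single gyroscope axis; the sampling interval, bias value, and one-axis simplification below are illustrative assumptions, not the authors' implementation:

```python
# Sketch of gyroscope bias compensation: estimate the bias as the mean
# angular rate during a detected rest phase, subtract it, then integrate the
# corrected rate to obtain an orientation angle. Illustrative only.
def estimate_bias(gyro_rest):
    """Mean angular rate (rad/s) over samples taken while the sensor is at rest."""
    return sum(gyro_rest) / len(gyro_rest)

def integrate_angle(gyro, bias, dt):
    """Integrate bias-corrected angular rates (rad/s) into an angle (rad)."""
    angle = 0.0
    for rate in gyro:
        angle += (rate - bias) * dt
    return angle

# Example: a constant bias of 0.01 rad/s with no true rotation.
rest = [0.01] * 100      # rest-phase samples used for bias estimation
motion = [0.01] * 200    # still no true rotation, only the bias
bias = estimate_bias(rest)
angle = integrate_angle(motion, bias, dt=0.01)
# Without compensation, the angle would drift by 0.01 * 200 * 0.01 = 0.02 rad;
# with compensation it stays at zero.
```

This illustrates why integration drift in the heading direction is the hard part without a magnetometer: any residual bias accumulates linearly with time unless it is estimated and removed.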
Manufacturing industries are undergoing a major paradigm shift towards more autonomy, making automated planning and scheduling a necessity. The Planning and Execution Competition for Logistics Robots in Simulation held at ICAPS is based on this scenario and provides an interesting testbed. However, the posed problem is challenging, as demonstrated by the somewhat weak results in 2017. The domain requires temporal reasoning and dealing with uncertainty. We propose a novel planning system based on Answer Set Programming and the Clingo solver to tackle these problems and to incentivize robot cooperation. Our results show a significant performance improvement, both in terms of lower computational requirements and in terms of better game metrics.
Declarative languages for knowledge representation and reasoning provide constructs to define preference relations over the set of possible interpretations, so that preferred models represent optimal solutions of the encoded problem. We introduce the notion of approximation for replacing preference relations with stronger preference relations, that is, relations comparing more pairs of interpretations. Our aim is to accelerate the computation of a non-empty subset of the optimal solutions by means of highly specialized algorithms. We implement our approach in Answer Set Programming (ASP), where problems involving quantitative and qualitative preference relations can be addressed by ASPRIN, which implements a generic optimization algorithm. In contrast, chains of approximations allow us to reduce several preference relations to the preference relations associated with ASP's native weak constraints and heuristic directives. In this way, ASPRIN can now take advantage of several highly optimized algorithms implemented by ASP solvers for computing optimal solutions.
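The core idea, replacing a preference relation by a stronger one that compares more pairs so that the optima of the stronger relation form a subset of the original optima, can be illustrated with a small hypothetical example: Pareto dominance approximated by the lexicographic order. This is an illustrative sketch, not the ASPRIN implementation:

```python
# Approximating a preference relation by a stronger one: the Pareto preference
# on cost vectors is replaced by the lexicographic preference, which compares
# strictly more pairs. Any solution optimal under the stronger relation is
# also Pareto-optimal, so a specialized single-criterion algorithm can return
# a non-empty subset of the original optima.

def pareto_better(a, b):
    """a strictly dominates b: no worse in every component, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def lex_better(a, b):
    """Lexicographic order: a total strict order extending Pareto dominance."""
    return a < b  # tuple comparison in Python is lexicographic

solutions = [(1, 3), (2, 2), (3, 1), (2, 3)]

# Optimum of the stronger (lexicographic) relation:
lex_opt = min(solutions)

# It is guaranteed to be Pareto-optimal:
assert not any(pareto_better(s, lex_opt) for s in solutions)
```

The guarantee follows because whenever `a` Pareto-dominates `b`, `a` also precedes `b` lexicographically; a lexicographic optimum therefore cannot be Pareto-dominated.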
Participants of the 2017 European Space Weather Week in Ostend, Belgium, discussed the stakeholder requirements for space weather-related models. It was emphasized that stakeholders show an increased interest in space weather-related models. Participants of the meeting discussed particular prediction indicators that can provide first-order estimates of the impact of space weather on engineering systems.
The Amyloid-precursor-like protein 1 (APLP1) is a neuronal type I transmembrane protein which plays a role in synaptic adhesion and synaptogenesis. Past investigations indicated that APLP1 is involved in the formation of protein-protein complexes that bridge the junctions between neighboring cells. Nevertheless, APLP1-APLP1 trans interactions have never been directly observed in higher eukaryotic cells. Here, we investigate APLP1 interactions and dynamics directly in living human embryonic kidney (HEK) cells, using fluorescence fluctuation spectroscopy techniques, namely cross-correlation scanning fluorescence correlation spectroscopy (sFCS) and Number&Brightness (N&B). Our results show that APLP1 forms homotypic trans complexes at cell-cell contacts. In the presence of zinc ions, the protein forms macroscopic clusters, exhibiting an even higher degree of trans binding and strongly reduced dynamics. Further evidence from Giant Plasma Membrane Vesicles and live cell actin staining suggests that the presence of an intact cortical cytoskeleton is required for zinc-induced cis multimerization. Subsequently, large adhesion platforms bridging interacting cells are formed through APLP1-APLP1 direct trans interactions. Taken together, our results provide direct evidence that APLP1 functions as a neuronal zinc-dependent adhesion protein and provide a more detailed understanding of the molecular mechanisms driving the formation of APLP1 adhesion platforms. Further, they show that fluorescence fluctuation spectroscopy techniques are useful tools for the investigation of protein-protein interactions at cell-cell adhesion sites.
Pace-of-life syndromes
(2018)
This introduction to the topical collection on Pace-of-life syndromes: a framework for the adaptive integration of behaviour, physiology, and life history provides an overview of conceptual, theoretical, methodological, and empirical progress in research on pace-of-life syndromes (POLSs) over the last decade. The topical collection has two main goals. First, we briefly describe the history of POLS research and provide a refined definition of POLS that is applicable to various key levels of variation (genetic, individual, population, species). Second, we summarise the main lessons learned from current POLS research included in this topical collection. Based on an assessment of the current state of the theoretical foundations and the empirical support of the POLS hypothesis, we propose (i) conceptual refinements of the theory, particularly with respect to the role of ecology in the evolution of (sexual dimorphism in) POLS, and (ii) methodological and statistical approaches to the study of POLS at all major levels of variation. The topical collection further contains (iii) key empirical examples demonstrating how POLS structures may be studied in wild populations of (non-)human animals, and (iv) a modelling paper predicting POLS under various ecological conditions. Future POLS research will profit from the development of more explicit theoretical models and stringent empirical tests of model assumptions and predictions, an increased focus on how ecology shapes (sex-specific) POLS structures at multiple hierarchical levels, and the use of appropriate statistical tests and study designs. Significance statement: As an introduction to the topical collection, we summarise current conceptual, theoretical, methodological and empirical progress in research on pace-of-life syndromes (POLSs), a framework for the adaptive integration of behaviour, physiology and life history at multiple hierarchical levels of variation (genetic, individual, population, species).
Mixed empirical support for POLSs, particularly at the within-species level, calls for an evaluation and refinement of the hypothesis. We provide a refined definition of POLSs that facilitates testable predictions. Future research on POLSs will profit from the development of more explicit theoretical models and stringent empirical tests of model assumptions and predictions, an increased focus on how ecology shapes (sex-specific) POLS structures at multiple hierarchical levels, and the use of appropriate statistical tests and study designs.
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjoint general and specific word distributions, resulting in clear-cut topic representations.
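The entropy-based distinction between collection-independent and collection-specific words can be sketched as follows; the counts, collection names, and threshold below are hypothetical illustrations, not the model's actual parameters:

```python
import math

# Sketch of the entropy criterion: a word that occurs evenly across
# collections has high entropy over collections (collection-independent),
# while a domain term concentrated in one collection has low entropy
# (collection-specific).

def collection_entropy(counts):
    """Shannon entropy (bits) of a word's count distribution over collections."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical occurrence counts over three collections (patents, papers, news):
counts = {
    "method":   [40, 35, 38],  # spread evenly -> collection-independent
    "claimant": [60, 1, 2],    # concentrated in patents -> collection-specific
}

max_entropy = math.log2(3)  # upper bound for three collections
for word, c in counts.items():
    h = collection_entropy(c)
    label = "independent" if h > 0.8 * max_entropy else "specific"
    print(f"{word}: H = {h:.2f} bits -> collection-{label}")
```

Words classified as collection-independent would then feed the shared topic distributions, while low-entropy words stay in the collection-specific distributions.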
Over the past few years, studying abroad and other international educational experiences have become increasingly highly regarded. Nevertheless, research shows that only a minority of students actually take part in academic mobility programs. But what is it that distinguishes those students who take up these international opportunities from those who do not? In this study we reviewed recent quantitative studies on why (primarily German) students choose to travel abroad or not. This revealed a pattern of predictive factors, indicating the key role played by students' personal and social background, as well as previous international travel and the course of studies they are enrolled in. The study then focuses on teaching students. Both facilitating and debilitating factors are discussed and included in a model illustrating the decision-making process these students use. Finally, we discuss practical implications for ways in which international, study-related travel might be increased in the future. We suggest that higher education institutions analyze individual student characteristics and offer differentiated programs to better meet the needs of different groups, thus raising the likelihood of disadvantaged students participating in international academic travel.
This paper presents the concept of a community-accessible stratospheric balloon-based observatory that is currently under preparation by a consortium of European research institutes and industry. We present the technical motivation, science case, instrumentation, and two-stage image stabilization approach of the 0.5-m UV/visible platform. In addition, we briefly describe the novel mid-sized stabilized balloon gondola under design to carry telescopes in the 0.5 to 0.6 m range, as well as the currently considered flight option for this platform. Finally, we outline the scientific and technical motivation for a large balloon-based FIR telescope and the ESBO DS approach towards such an infrastructure.
The globally distributed sperm whale (Physeter macrocephalus) has a partly matrilineal social structure with predominant male dispersal. At the beginning of 2016, a total of 30 male sperm whales stranded in five different countries bordering the southern North Sea. It has been postulated that these individuals were on a migration route from the north to warmer temperate and tropical waters, where the females live in social groups. By including samples from four countries (n = 27), this event provided a unique chance to genetically investigate the maternal relatedness and the putative origin of these temporally and spatially co-occurring male sperm whales. To utilize existing genetic resources, we sequenced 422 bp of the mitochondrial control region, a molecular marker for which sperm whale data are readily available from the entire distribution range. Based on four single nucleotide polymorphisms (SNPs) within the mitochondrial control region, five matrilines could be distinguished within the stranded specimens, four of which matched published haplotypes previously described in the Atlantic. Among these male sperm whales, multiple matrilineal lineages co-occur. We analyzed the population differentiation and could show that the genetic diversity of these male sperm whales is comparable to the genetic diversity of sperm whales from the entire Atlantic Ocean. We confirm that within this stranding event, the males do not comprise maternally related individuals and apparently include assemblages of individuals from different geographic regions. © 2017 Deutsche Gesellschaft für Säugetierkunde. Published by Elsevier GmbH. All rights reserved.
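The haplotype assignment step, in which matrilines are distinguished by the bases at a few diagnostic SNP positions, can be sketched as follows; the positions, sequences, and individual names below are hypothetical, not the study's data:

```python
# Sketch of haplotype assignment from diagnostic SNP positions: individuals
# sharing the same bases at the diagnostic sites belong to the same matriline
# (mitochondrial haplotype). Illustrative only.
SNP_POSITIONS = [1, 3]  # hypothetical 0-based positions of the diagnostic SNPs

def haplotype(seq):
    """Tuple of bases at the diagnostic SNP positions."""
    return tuple(seq[p] for p in SNP_POSITIONS)

def group_matrilines(sequences):
    """Group individuals by shared haplotype."""
    groups = {}
    for name, seq in sequences.items():
        groups.setdefault(haplotype(seq), []).append(name)
    return groups

sequences = {
    "ind1": "ACGTAA",
    "ind2": "AGGTAA",  # differs at position 1
    "ind3": "ACGCAA",  # differs at position 3
    "ind4": "ACGTAA",  # same haplotype as ind1
}
groups = group_matrilines(sequences)  # three distinct haplotypes here
```

In the study itself, the same grouping principle is applied to 422-bp control-region sequences with four SNPs, yielding the five matrilines reported.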
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
Currently we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments, such as web services, service-oriented computing, and open-source software, an explosion of geodata and geospatial applications, and rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence in these rapidly changing, heterogeneous developments. This paper introduces the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selection and linkage of these services to user profiles, application tasks, data resources, or additional software allow for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with the geospatial expertise required for the effective, professional solution of a defined application problem. Providing users with geospatial intelligence in the form of web-based, modular services is a completely different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, at the centre and at the disposal of the users, regardless of their expertise.
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques by a generalized user interface with interactive tools to facilitate a creative and localized editing process. To this end, we first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Initial user tests indicate different levels of satisfaction for the implemented techniques and interaction design.
OpenLL
(2018)
Today's rendering APIs lack robust functionality and capabilities for dynamic, real-time text rendering and labeling, which represent key requirements for 3D application design in many fields. As a consequence, most rendering systems are barely or not at all equipped with respective capabilities. This paper drafts the unified text rendering and labeling API OpenLL intended to complement common rendering APIs, frameworks, and transmission formats. To this end, various uses of static and dynamic placement of labels are showcased and a text interaction technique is presented. Furthermore, API design constraints with respect to state-of-the-art text rendering techniques are discussed. This contribution is intended to initiate a community-driven specification of a free and open label library.
Impact of self-assessment of return to work on employable discharge from multi-component cardiac rehabilitation. Retrospective unicentric analysis of routine data from cardiac rehabilitation in patients below 65 years of age. Presentation in the "Cardiovascular rehabilitation revisited" high impact abstract session during ESC Congress 2018.
Screeninginstrumente
(2018)
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has relied on drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
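For context, the classical additive drift theorem that the abstract generalizes can be sketched in its textbook form (a standard formulation, not the authors' exact statement):

```latex
% Additive drift (upper bound), standard form:
% Let (X_t)_{t \ge 0} be nonnegative random variables and let
% T = \min\{t \mid X_t = 0\} denote the first-hitting time of 0.
\text{If}\quad \mathbb{E}\left[X_t - X_{t+1} \mid X_t > 0\right] \ge \delta
\quad\text{for some } \delta > 0 \text{ and all } t,
\quad\text{then}\quad
\mathbb{E}[T] \le \frac{\mathbb{E}[X_0]}{\delta}.
```

The generalization claimed in the abstract is that nonnegativity of the process is the only remaining requirement; finiteness, discreteness, or boundedness of the search space are not needed.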
One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. The two arguably most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move in any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefit of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over {0,..., n} that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. If the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we state the stationary distribution as well as the mixing time of a special case of our scenario.
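The scenario described above, a process on {0, ..., n} moving at most one step per iteration, admits an exact computation of the expected first-hitting time via the standard one-step recurrence. The following sketch illustrates this (an illustrative textbook recurrence, not the authors' formulation; the function name and the reflected-random-walk example are ours):

```python
def expected_hitting_time(p, q):
    """Exact expected first-hitting time of state n for a chain on
    {0, ..., n} that moves at most one step per iteration.

    p[i] = probability of moving from i to i+1 (must be > 0),
    q[i] = probability of moving from i to i-1 (q[0] is ignored),
    the remaining mass 1 - p[i] - q[i] is the self-loop probability.

    Uses the recurrence h_i = (1 + q_i * h_{i-1}) / p_i, where h_i is
    the expected number of steps to go from i to i+1; the answer is
    the sum of all h_i.
    """
    n = len(p)
    total, h_prev = 0.0, 0.0
    for i in range(n):
        back = q[i] if i > 0 else 0.0
        h = (1.0 + back * h_prev) / p[i]  # expected steps from i to i+1
        total += h
        h_prev = h
    return total

# Fair random walk reflected at 0: the expected time to reach n is n^2.
n = 10
p = [1.0] + [0.5] * (n - 1)
q = [0.0] + [0.5] * (n - 1)
print(expected_hitting_time(p, q))  # -> 100.0
```

When the transition probabilities are known exactly, as here, this recurrence yields the exact value, matching the regime in which the abstract's lower and upper bounds coincide.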
The centrosome is not only the largest and most sophisticated protein complex within a eukaryotic cell, in the light of evolution, it is also one of its most ancient organelles. This special issue of "Cells" features representatives of three main, structurally divergent centrosome types, i.e., centriole-containing centrosomes, yeast spindle pole bodies (SPBs), and amoebozoan nucleus-associated bodies (NABs). Here, I discuss their evolution and their key-functions in microtubule organization, mitosis, and cytokinesis. Furthermore, I provide a brief history of centrosome research and highlight recently emerged topics, such as the role of centrioles in ciliogenesis, the relationship of centrosomes and centriolar satellites, the integration of centrosomal structures into the nuclear envelope and the involvement of centrosomal components in non-centrosomal microtubule organization.
For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular may lead to bloat (i.e., the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of employing bloat control.
Skeletal muscle alterations during aging lead to dysfunctional metabolism, correlating with frailty and early mortality. The loss of proteostasis is a hallmark of aging, but whether proteostasis loss plays a role in muscle aging remains elusive. To address this question we collected muscles, Soleus (SOL, type I) and Extensor digitorum longus (EDL, type II), from young (4 months) and old (25 months) C57BL/6 mice and evaluated the proteasomal system. Initial work showed decreased 26S activity in old SOL. EDL displayed lower proteasomal activity at both ages compared to SOL at either age. Moreover, in order to understand whether the so-called “fiber switch from fast-to-slow” occurs during aging, we performed western blots against sMHC and fMHC (slow and fast myosin heavy chain, respectively). Preliminary results suggest that young SOL is composed of slow-twitch fibers but also contains fast-twitch fibers, while young EDL seems to be mostly composed of fast-twitch fibers whose levels decline during aging, suggesting the switch. In conclusion, EDL seems to have less proteasomal activity; however, whether this is a contributor to or a consequence of the muscle fiber switch during aging still needs further investigation.
Kim et al. recently measured the structure factor of deeply supercooled water droplets (Reports, 22 December 2017, p. 1589). We raise several concerns about their data analysis and interpretation. In our opinion, the reported data do not lead to clear conclusions about the origins of water’s anomalies.
I can see it in your face
(2018)
An essential, respected, and critical aspect of the modern practice of science and scientific publishing is peer review. The process of peer review facilitates best practices in scientific conduct and communication, ensuring that published manuscripts are accurate, valuable, and clearly communicated. The over 152 papers published in Tectonics in 2017 benefit from the time, effort, and expertise of our reviewers, who have provided thoughtfully considered advice on each manuscript. This role is critical to advancing our understanding of the evolution of the continents and their margins, as these reviews lead to even clearer and higher-quality papers. In 2017, the over 423 papers submitted to Tectonics were the beneficiaries of more than 786 reviews provided by 562 members of the tectonics community and related disciplines. To everyone who has volunteered their time and intellect to peer reviewing: thank you for helping Tectonics and all other AGU Publications provide the best science possible.
The one-tube osmotic fragility (OF) test is a rapid test widely used for thalassemia screening in countries with limited resources. The test has an important limitation in that its accuracy relies on observers’ experience.
The iCheck Turbidity is a prototype portable nephelometer developed by BioAnalyt (BioAnalyt GmbH, Germany). In this study, we assessed the applicability of the iCheck Turbidity for checking the turbidity of the OF test.
Editorial: Reaching to Grasp Cognition: Analyzing Motor Behavior to Investigate Social Interactions
(2018)
Background:
The overall goal of the project ‘StiEL’ is to contribute to the professional development of teachers and other educational staff working at German secondary schools. The aim is to develop an evidence-based training concept for the inclusion of students with diverse abilities. The project is organized as a collaborative research effort of three partnering institutions and funded by the German Federal Ministry of Education and Research from 2018 to 2021.
Methods:
To support the on-going transition towards inclusive school practices, a multi-stage approach is envisaged. The first phase aims at a scoping review of existing literature and programmes on inclusion. The overview is supplemented by interviews with school staff members. Training modules are developed in the second project phase. The third phase of StiEL puts the newly developed training program into practice. The knowledge and skills acquired by the participants through the training as well as the teaching and management of inclusive classrooms after the training are evaluated through longitudinal and ethnographic approaches. The final project phase creates a best practice manual and makes the modules available via open access databases.
Results:
The presentation will focus on the first phase and explore the health-related consequences of the transition towards an inclusive school system in Germany for different participants. We will present preliminary results of expert interviews as well as selected results from the literature screening. Our findings suggest that current practice in German schools on the road to inclusion is very stressful for all participants. We will explore recommendations for health-promoting schools under conditions of inclusion.
Conclusions:
In terms of health-related consequences for all participants, the road to inclusion is very ambitious but also very stressful. Regarding the development of an inclusive school system, we need to focus much more on health and health promotion.
We present a project combining lidar, photometer, and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). In a second step, photometer data are added, which results in a good consistency of the retrieved PSDs. Finally, the retrieved PSDs may be compared with the measured PSD from a particle counter. The data shown here were taken in Ny-Ålesund, Svalbard, as an example.
Utilizing quad-trees for efficient design space exploration with partial assignment evaluation
(2018)
Recently, it has been shown that constraint-based symbolic solving techniques offer an efficient way of deciding binding and routing options in order to obtain a feasible system-level implementation. In combination with various background theories, a feasibility analysis of the resulting system may already be performed on partial solutions. That is, infeasible subsets of mapping and routing options can be pruned early in the decision process, which speeds up the solving accordingly. However, a proper design space exploration including multi-objective optimization also requires an efficient structure for storing and managing non-dominated solutions. In this work, we propose and study the usage of the quad-tree data structure in the context of partial assignment evaluation during system synthesis. Our experiments show that unnecessary dominance checks can be avoided, which indicates a preference for quad-trees over a commonly used list-based implementation for large combinatorial optimization problems.
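To illustrate the dominance checks at stake, here is a minimal list-based non-dominated archive, i.e. the kind of baseline a quad-tree structure is compared against (an illustrative sketch with names of our choosing, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_insert(archive, candidate):
    """List-based archive update: O(|archive|) dominance checks per
    candidate -- the linear scans a quad-tree organization can avoid."""
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated; archive unchanged
    # Keep only members the candidate does not dominate, then add it.
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

archive = []
for point in [(3, 4), (2, 5), (1, 1), (4, 4)]:
    archive = archive_insert(archive, point)
print(archive)  # -> [(1, 1)]
```

Every insertion scans the whole archive twice in the worst case; a quad-tree instead partitions stored vectors by their componentwise relation to a node, so many dominance comparisons can be skipped.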
Imaginar la nación
(2018)
The problem of constructing and maintaining a tree topology in a distributed manner is a challenging task in WSNs. This is because the nodes have limited computational and memory resources and the network changes over time. We propose the Dynamic Gallager-Humblet-Spira (D-GHS) algorithm that builds and maintains a minimum spanning tree. To do so, we divide D-GHS into four phases, namely neighbor discovery, tree construction, data collection, and tree maintenance. In the neighbor discovery phase, the nodes collect information about their neighbors and the link quality. In the tree construction, D-GHS finds the minimum spanning tree by executing the Gallager-Humblet-Spira algorithm. In the data collection phase, the sink roots the minimum spanning tree at itself, and each node sends data packets. In the tree maintenance phase, the nodes repair the tree when communication failures occur. The emulation results show that D-GHS reduces the number of control messages and the energy consumption, at the cost of a slight increase in memory size and convergence time.
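For reference, the minimum spanning tree that D-GHS computes in a distributed fashion can be reproduced centrally, e.g. with Kruskal's algorithm (an illustrative sketch, not part of D-GHS; the function name and toy link weights are ours). GHS-style algorithms assume distinct link weights, which makes the MST unique:

```python
def kruskal(n, edges):
    """Centralized reference for the minimum spanning tree that the
    distributed Gallager-Humblet-Spira algorithm computes: Kruskal's
    algorithm with a union-find over node ids 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # consider links by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:               # joins two fragments: keep the edge
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Link weights as (weight, u, v); distinct weights -> unique MST.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal(4, edges))  # -> [(0, 1, 1), (1, 2, 2), (2, 3, 4)]
```

The distributed version reaches the same tree by letting fragments repeatedly merge over their minimum-weight outgoing edges, exchanging control messages instead of sorting globally.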
High-storage-density magnetic devices rely on precise, reliable, and ultrafast switching of magnetic states. Optical control of magnetization using femtosecond lasers, without applying any external magnetic field, offers the advantage of switching magnetic states at ultrashort time scales and has therefore attracted significant attention. Recently, the so-called all-optical helicity-dependent switching (AO-HDS), in which a circularly polarized femtosecond laser pulse switches the magnetization of a ferromagnetic thin film as a function of laser helicity, has been reported and demonstrated [1]. More recent studies have reported that AO-HDS is a general phenomenon in magnetic materials, ranging from rare-earth/transition-metal ferrimagnets (e.g., alloys, multilayers, and hetero-structure systems) to even ferromagnetic thin films. Among the numerous studies in the literature discussing the microscopic origin of AO-HDS in ferromagnets or ferrimagnetic alloys, the most renowned concepts are momentum transfer via the Inverse Faraday Effect (IFE) [1-3] and the concept of preferential thermal demagnetization for one magnetization direction by heating close to Tc (the Curie temperature) in the presence of magnetic circular dichroism (MCD) [4-6]. In this study, we investigate all-optical magnetic switching using a stationary femtosecond laser spot (3-5 μm) in TbFe alloys via photoemission electron microscopy (PEEM) and x-ray magnetic circular dichroism (XMCD) with a spatial resolution of approximately 30 nm. We spatially characterize the effect of laser heating and the local temperature profile created across the laser spot on AO-HDS in TbFe thin films. We find that AO-HDS occurs only in a ring-shaped region surrounding the thermally demagnetized region formed by the laser spot, and that the formation of switched domains further relies on thermally induced domain wall motion.
Our temperature dependent measurements highlight the importance of attainin...
Modern server systems with large NUMA architectures necessitate (i) data being distributed over the available computing nodes and (ii) NUMA-aware query processing to enable effective parallel processing in database systems. As these architectures incur significant latency and throughput penalties for accessing non-local data, queries should be executed as close as possible to the data. To further increase both performance and efficiency, data that is not relevant for the query result should be skipped as early as possible. One way to achieve this goal is horizontal partitioning to improve static partition pruning. As part of our ongoing work on workload-driven partitioning, we have implemented a recent approach called aggressive data skipping and extended it to handle both analytical and transactional access patterns. In this paper, we evaluate this approach with the workload and data of a production enterprise system of a Global 2000 company. The results show that over 80% of all tuples can be skipped on average, while the resulting partitioning schemata are surprisingly stable over time.
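The idea behind static partition pruning can be sketched with per-partition min/max metadata (zone maps). This is a generic illustration with hypothetical names, not the aggressive-data-skipping implementation evaluated in the paper:

```python
class Partition:
    """A horizontal partition with lightweight min/max metadata
    (a zone map) on the partitioning column."""
    def __init__(self, rows, key):
        self.rows = rows
        self.min_key = min(key(r) for r in rows)
        self.max_key = max(key(r) for r in rows)

def scan_equal(partitions, key, value):
    """Static partition pruning: skip every partition whose [min, max]
    range on the column cannot contain the queried value."""
    hits, skipped = [], 0
    for p in partitions:
        if value < p.min_key or value > p.max_key:
            skipped += 1          # pruned without touching its tuples
            continue
        hits.extend(r for r in p.rows if key(r) == value)
    return hits, skipped

key = lambda r: r[0]
parts = [Partition([(i, f"row{i}") for i in range(lo, lo + 100)], key)
         for lo in (0, 100, 200, 300)]
hits, skipped = scan_equal(parts, key, 250)
print(len(hits), skipped)  # -> 1 3
```

The better the partitioning schema matches the workload's predicates, the more partitions fall outside the queried ranges, which is exactly what workload-driven partitioning optimizes for.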
Preface
(2018)
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, a company needs to possess expert knowledge. The difficulty arises when experts no longer work for the company but their knowledge is still needed to realise an IT project. In this paper, the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects, is presented. It consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that three approaches, namely the elicitation of expert knowledge, the success of an IT project, and the discovery of new knowledge gleaned from the expert knowledge base (otherwise known as the decision model), complement each other.
Precision fruticulture addresses site or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation. Three irrigation levels (0, 50 and 100%) were applied. Measurements included readings of apparent electrical conductivity of soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of 6 trees per treatment were done on fruit considering the pigment contents and quality parameters. Midday stem water potential (SWP), normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7 indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m-1. Fruit harvested in a drought stress zone showed enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected soluble solids content and, hence, the quality of fruit. Overall, results highlighted that spatial variation in orchards can be found even if marginal variability of soil properties can be assumed.
No other means of communication shapes our everyday life through its seemingly unrestricted possibilities more than the internet. From the mid-90s onwards, more and more technical advancements in the field of communication have appeared on the market, which in turn call for new terminology. In the first place, it is the internet (essentially based on the interaction between users and experts) which requires effective nomenclature in order to mediate between lay users and their restricted knowledge on the one hand, and experts and their sophisticated terminology on the other. At the interface between the new and complex realities and the need for simple linguistic access, a huge quantity of metaphoric denominations is used, making abstract innovations more comprehensible. Metaphor in the internet discourse serves to "reduce verticality" (Stenschke 2006) between specialized terminology and common language. The paper deals with metaphors based on spatial concepts. Space and spatiality play a key role in cognitive theories of metaphor, as these theories themselves (according to Lakoff/Johnson 1980) are often based on the application of spatial concepts to non-spatial relations. After describing spatial concepts in general (referring to the internet), the paper explores which kinds of metaphor take advantage of the complexity present in the internet and how the medial space is linguistically recaptured in terms of spatial perception.
The Aral Sea desiccation and the related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern Aral Sea basin's hydrological regime is its discontinuous nature: only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water assessment tool that utilizes the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).
Previous work has shown that surface modification with orthophosphoric acid can significantly enhance the charge stability on polypropylene (PP) surface by generating deeper traps. In the present study, thermally stimulated potential-decay measurements revealed that the chemical treatment may also significantly increase the number of available trapping sites on the surface. Thus, as a consequence, the so-called "cross-over" phenomenon, which is observed on as-received and thermally treated PP electrets, may be overcome in a certain range of initial charge densities. Furthermore, the discharge behavior of chemically modified samples indicates that charges can be injected from the treated surface into the bulk, and/or charges of opposite polarity can be pulled from the rear electrode into the bulk at elevated temperatures and at the high electric fields that are caused by the deposited charges. In the bulk, a lack of deep traps causes rapid charge decay already in the temperature range around 95 degrees C.
The electret state stability in nonpolar semicrystalline polymers is largely determined by the traps located at crystalline/amorphous phase interfaces. Thus, the thermal history of such polymers should considerably influence their electret properties. In the present work, we investigate how recrystallization influences charge stability in low-density polyethylene corona electrets. It has been found that electret charge stability in quenched samples is higher than in slowly-crystallized ones. Phenomenologically, this can be explained by the increased number of deeper traps in samples with smaller crystallite size.
Why choice matters
(2018)
Measures of democracy are in high demand. Scientific and public audiences use them to describe political realities and to substantiate causal claims about those realities. This introduction to the thematic issue reviews the history of democracy measurement since the 1950s. It identifies four development phases of the field, which are characterized by three recurrent topics of debate: (1) what is democracy, (2) what is a good measure of democracy, and (3) do our measurements of democracy register real-world developments? As the answers to those questions have been changing over time, the field of democracy measurement has adapted and reached higher levels of theoretical and methodological sophistication. In effect, the challenges facing contemporary social scientists are not only limited to the challenge of constructing a sound index of democracy. Today, they also need a profound understanding of the differences between various measures of democracy and their implications for empirical applications. The introduction outlines how the contributions to this thematic issue help scholars cope with the recurrent issues of conceptualization, measurement, and application, and concludes by identifying avenues for future research.
This introductory essay to the HSR Special Issue “Economists, Politics, and Society” argues for a strong field-theoretical programme inspired by Pierre Bourdieu to research economic life as an integral part of different social forms. Its main aim is threefold. First, we spell out the very distinct Durkheimian legacy in Bourdieu’s thinking and the way he applies it in researching economic phenomena. Without this background, much of what is actually part of how Bourdieu analysed economic aspects of social life would be overlooked or reduced to mere economic sociology. Second, we sketch the main theoretical concepts and heuristics used to analyse economic life from a field perspective. Third, we focus on practical methodological issues of field-analytical research into economic phenomena. We conclude with a short summary of the basic characteristics of this approach and discuss the main insights provided by the contributions to this special issue.
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well-known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations set by the human visual system.
Voice onset time (VOT), a primary cue for voicing in many languages including English and German, is known to vary greatly between speakers, but also displays robust within-speaker consistencies, at least in English. The current analysis extends these findings to German. VOT measures were investigated from voiceless alveolar and velar stops in CV syllables cued by a visual prompt in a cue-distractor task. Comparably to English, a considerable portion of German VOT variability can be attributed to the syllable’s vowel length and the stop’s place of articulation. Individual differences in VOT still remain irrespective of speech rate. However, significant correlations across places of articulation and between speaker-specific mean VOTs and standard deviations indicate that talkers employ a relatively unified VOT profile across places of articulation. This could allow listeners to more efficiently adapt to speaker-specific realisations.
Speech scientists have long noted that the qualities of naturally-produced vowels do not remain constant over their durations, regardless of being nominally "monophthongs" or "diphthongs". Recent acoustic corpora show that there are consistent patterns of first (F1) and second (F2) formant frequency change across different vowel categories. The three Australian English (AusE) close front vowels /iː, ɪ, ɪə/ provide a striking example: while their midpoint or mean F1 and F2 frequencies are virtually identical, their spectral change patterns distinctly differ. The results indicate that, despite the distinct patterns of spectral change of AusE /iː, ɪ, ɪə/ in production, its perceptual relevance is not uniform, but rather vowel-category dependent.
We propose a new temporal extension of the logic of Here-and-There (HT) and its equilibria, obtained by combining it with dynamic logic over (linear) traces. Unlike previous temporal extensions of HT based on linear temporal logic, the dynamic-logic features allow us to reason about the composition of actions. For instance, this can be used to exercise fine-grained control when planning in robotics, as exemplified by GOLOG. In this paper, we lay the foundations of our approach, which we call Linear Dynamic Equilibrium Logic, or simply DEL. We start by developing the formal framework of DEL and provide relevant characteristic results. Among them, we elaborate upon the relationships to traditional linear dynamic logic and previous temporal extensions of HT.
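To convey the flavour of the dynamic-logic constructs (notation simplified here for illustration; the paper defines the precise HT-based semantics), formulas are built over regular path expressions in the style of linear dynamic logic:

```latex
\rho ::= a \;\mid\; \varphi? \;\mid\; \rho_1 ; \rho_2 \;\mid\; \rho_1 + \rho_2 \;\mid\; \rho^{*}
\qquad
\varphi ::= p \;\mid\; \neg\varphi \;\mid\; \varphi_1 \wedge \varphi_2 \;\mid\; [\rho]\,\varphi \;\mid\; \langle\rho\rangle\,\varphi
```

For example, $\langle (a ; b)^{*} \rangle\, \mathit{goal}$ states that some finite repetition of action $a$ followed by action $b$ reaches a state satisfying $\mathit{goal}$ — the kind of composed action sequence that motivates the dynamic-logic extension over plain LTL operators.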
Metamaterial Devices
(2018)
In our hands-on demonstration, we show several objects, the functionality of which is defined by the objects' internal micro-structure. Such metamaterial machines can (1) be mechanisms based on their microstructures, (2) employ simple mechanical computation, or (3) change their outside to interact with their environment. They are 3D printed from one piece and we support their creating by providing interactive software tools.
The problem of atmospheric emission from OH molecules is a long standing problem for near-infrared astronomy. PRAXIS is a unique spectrograph which is fed by fibres that remove the OH background and is optimised specifically to benefit from OH-Suppression. The OH suppression is achieved with fibre Bragg gratings, which were tested successfully on the GNOSIS instrument. PRAXIS uses the same fibre Bragg gratings as GNOSIS in its first implementation, and will exploit new, cheaper and more efficient, multicore fibre Bragg gratings in the second implementation. The OH lines are suppressed by a factor of similar to 1000, and the expected increase in the signal-to-noise in the interline regions compared to GNOSIS is a factor of similar to 9 with the GNOSIS gratings and a factor of similar to 17 with the new gratings. PRAXIS will enable the full exploitation of OH suppression for the first time, which was not achieved by GNOSIS (a retrofit to an existing instrument that was not OH-Suppression optimised) due to high thermal emission, low spectrograph transmission and detector noise. PRAXIS has extremely low thermal emission, through the cooling of all significantly emitting parts, including the fore-optics, the fibre Bragg gratings, a long length of fibre, and the fibre slit, and an optical design that minimises leaks of thermal emission from outside the spectrograph. PRAXIS has low detector noise through the use of a Hawaii-2RG detector, and a high throughput through a efficient VPH based spectrograph. PRAXIS will determine the absolute level of the interline continuum and enable observations of individual objects via an IFU. In this paper we give a status update and report on acceptance tests.
Operational decisions in business processes can be modeled by using the Decision Model and Notation (DMN). The complementary use of DMN for decision modeling and of the Business Process Model and Notation (BPMN) for process design realizes the separation of concerns principle. For supporting separation of concerns during the design phase, it is crucial to understand which aspects of decision-making enclosed in a process model should be captured by a dedicated decision model. Whereas existing work focuses on the extraction of decision models from process control flow, the connection of process-related data and decision models is still unexplored. In this paper, we investigate how process-related data used for making decisions can be represented in process models and we distinguish a set of BPMN patterns capturing such information. Then, we provide a formal mapping of the identified BPMN patterns to corresponding DMN models and apply our approach to a real-world healthcare process.
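The decision logic extracted into a DMN model is typically expressed as decision tables over process data. The following minimal sketch (illustrative only; real DMN uses FEEL expressions and a richer set of hit policies, and these names are not from the paper) shows how such a table evaluates process-related facts:

```python
def evaluate(decision_table, facts, hit_policy='FIRST'):
    """Evaluate a DMN-style decision table.

    Each rule is a pair (predicates, output), where predicates maps an
    input name to a test on the corresponding fact. Under the FIRST hit
    policy, the first matching rule wins.
    """
    hits = [output for predicates, output in decision_table
            if all(test(facts[name]) for name, test in predicates.items())]
    if hit_policy == 'FIRST':
        return hits[0] if hits else None
    return hits  # e.g. 'COLLECT'

# Hypothetical healthcare triage decision over process data:
table = [
    ({'age': lambda a: a >= 65, 'fever': lambda f: f}, 'priority'),
    ({'fever': lambda f: f},                           'standard'),
]
result = evaluate(table, {'age': 72, 'fever': True})  # → 'priority'
```

In a BPMN/DMN setting, the `facts` dictionary would be populated from the data objects identified by the BPMN patterns, and the table's output would steer the subsequent gateway.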
An efficient Design Space Exploration (DSE) is imperative for the design of modern, highly complex embedded systems in order to steer the development towards optimal design points. The early evaluation of design decisions at the system-level abstraction layer helps to find promising regions for subsequent development steps at lower abstraction levels by diminishing the complexity of the search problem. In recent works, symbolic techniques, especially Answer Set Programming (ASP) modulo Theories (ASPmT), have been shown to find feasible solutions to highly complex system-level synthesis problems with non-linear constraints very efficiently. In this paper, we present a novel approach to holistic system-level DSE based on ASPmT. To this end, we include additional background theories that concurrently guarantee compliance with hard constraints and perform the simultaneous optimization of several design objectives. We implement our approach and compare it with a state-of-the-art preference-handling framework for ASP. Experimental results indicate that our proposed method produces better solutions with respect to both diversity and convergence to the true Pareto front.
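The optimisation criteria above rest on the standard notion of Pareto dominance between design points. A minimal generic sketch (not the paper's ASPmT encoding) of dominance and non-dominated filtering:

```python
def dominates(a, b):
    """True if design point a is at least as good as b in every objective
    (minimisation) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated design points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: (latency, energy) trade-offs of hypothetical candidate designs
designs = [(3, 9), (5, 4), (4, 4), (7, 2), (6, 8)]
front = pareto_front(designs)   # → [(3, 9), (4, 4), (7, 2)]
```

A DSE run is then judged by how diverse its reported points are and how close they lie to the true Pareto front of the design space.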
Business process simulation is an important means for the quantitative analysis of a business process and for comparing different process alternatives. With the Business Process Model and Notation (BPMN) being the state-of-the-art language for the graphical representation of business processes, many existing process simulators already support the simulation of BPMN diagrams. However, they do not provide well-defined interfaces for integrating new concepts into the simulation environment. In this work, we present the design and architecture of a proof-of-concept implementation of an open and extensible BPMN process simulator. It also supports the simulation of multiple BPMN processes at a time and relies on the building blocks of well-founded discrete event simulation. Extensibility is ensured by a plug-in concept. Its feasibility is demonstrated by extensions supporting new BPMN concepts, such as the simulation of business rule activities referencing decision models, and of batch activities.
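The discrete-event core such a simulator builds on is essentially a time-ordered event queue. A minimal sketch (illustrative only; the names here are not taken from the simulator described above):

```python
import heapq

class Simulator:
    """Minimal discrete-event core: events are popped in timestamp order,
    and executing an event may schedule further events."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._seq = 0   # tie-breaker for events scheduled at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Simulator()
# Two BPMN-like activities: the first completes after 5 time units and
# then schedules a successor that takes 3 more units.
sim.schedule(5, lambda: (log.append(('task_A', sim.clock)),
                         sim.schedule(3, lambda: log.append(('task_B', sim.clock)))))
sim.run()
# log == [('task_A', 5.0), ('task_B', 8.0)]
```

A plug-in for a new BPMN concept would then translate that concept's semantics into such schedulable events without touching the core loop.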
An IoT network may consist of hundreds of heterogeneous devices, some of them constrained in terms of memory, power, processing and network capacity. Manual network and service management of IoT devices is challenging. We propose using an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture ensures interoperability by building on existing standards, i.e. the MQTT protocol and Semantic Web technologies. We introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices can aggregate the capabilities of IoT devices, derive new services by inference, delegate requests and responses, and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the edge overcomes the shortcomings of cloud usage regarding siloisation, network bandwidth, latency and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
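The core idea of capability aggregation and service derivation can be sketched as follows (a hypothetical toy model, not the paper's semantic framework; a real deployment would reason over ontology descriptions rather than plain sets):

```python
class VirtualDevice:
    """Edge-side proxy that aggregates capabilities of physical IoT devices
    and derives composite services from them."""

    def __init__(self, name):
        self.name = name
        self.capabilities = {}   # capability -> physical device providing it

    def register(self, device, capabilities):
        """Record which physical device provides which capabilities."""
        for cap in capabilities:
            self.capabilities.setdefault(cap, device)

    def derive_services(self, rules):
        """A rule maps a composite service to the set of capabilities it
        requires; a service is derivable if all requirements are covered."""
        return [service for service, required in rules.items()
                if required <= self.capabilities.keys()]

edge = VirtualDevice('edge-node')
edge.register('sensor-1', ['temperature'])
edge.register('sensor-2', ['humidity'])
rules = {'heat-index':  {'temperature', 'humidity'},
         'air-quality': {'co2', 'humidity'}}
derived = edge.derive_services(rules)   # → ['heat-index']
```

In the architecture described above, such a virtual device would additionally subscribe to the relevant MQTT topics of its physical devices and republish the derived services.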
Learning how to prove
(2018)
We have developed an alternative approach to teaching computer science students how to prove. First, students are taught how to prove theorems with the Coq proof assistant. In a second, more difficult step, students transfer their acquired skills to the area of textbook proofs. In this article we present a realisation of the second step. Proofs in Coq have a high degree of formality, while textbook proofs have only a medium one. Our key idea is therefore to reduce the degree of formality from the level of Coq to that of textbook proofs in several small steps. For that purpose we introduce three proof styles between Coq and textbook proofs, called line-by-line comments, weakened line-by-line comments, and structure-faithful proofs. While this article is mostly conceptual, we also report on experiences with putting our approach into practice.
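To illustrate the formality gap the approach bridges, compare a fully machine-checked induction proof with its textbook counterpart (written here in Lean rather than Coq, purely as an illustration):

```lean
-- Fully formal: every step is machine-checked.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]

-- Textbook style (medium formality):
--   "By induction on n. The base case is immediate; in the step case,
--    0 + (n+1) = (0 + n) + 1 = n + 1 by the induction hypothesis."
```

An intermediate style such as line-by-line comments would annotate each tactic with the corresponding textbook sentence, so that the formality can be lowered one step at a time.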
Introduction
(2018)
Foreword
(2018)
Our Conclusions
(2018)
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O link compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations focus on on-the-fly compression in both single-node and scale-out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated on-the-fly I/O link compression for many workloads in a scale-out scenario, and for some even in a single-node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
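The break-even condition behind such an analysis can be captured in a back-of-envelope model: on-the-fly compression pays off when the slower of the compressor and the (now smaller) link transfer still beats the uncompressed transfer. The numbers below are illustrative only and are not taken from the paper:

```python
def transfer_time(size_gb, link_gbps):
    """Seconds to move size_gb gigabytes over a link of link_gbps GB/s."""
    return size_gb / link_gbps

def compressed_transfer_time(size_gb, link_gbps, ratio, comp_gbps):
    """On-the-fly pipeline: throughput is bounded by the slower of the
    compressor (processing the full size) and the link (carrying size/ratio)."""
    return max(size_gb / comp_gbps, (size_gb / ratio) / link_gbps)

# Hypothetical numbers: 16 GB payload, 12 GB/s link,
# 2x compression ratio, 40 GB/s hardware compressor.
plain = transfer_time(16, 12)                    # ≈ 1.33 s
piped = compressed_transfer_time(16, 12, 2, 40)  # ≈ 0.67 s
```

In this model, compression helps whenever the compressor throughput exceeds the link bandwidth and the data is compressible at all; with a slow software compressor, the first term of the `max` dominates and the benefit vanishes.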
Beyond Surveys
(2018)