In an effort to reduce parameter uncertainty in constructing recurrence plots, and in particular to avoid potential artefacts, this paper presents a technique to derive an artefact-safe region of parameter sets. The technique exploits the contrast between deterministic (including chaotic) and stochastic signal characteristics as captured by recurrence quantification (i.e. diagonal structures), and is useful when the evaluated signal is known to be deterministic. The study focuses on recurrence plots generated from a reconstructed phase space, representing the many real application scenarios in which not all variables describing a system are available (data scarcity). The technique randomly shuffles the original signal to destroy its deterministic characteristics, and then evaluates whether the determinism values of the original and the shuffled signal remain close together; if they do, the recurrence plot might comprise artefacts. The resulting determinism-sensitive region should be used alongside standard embedding optimization approaches, e.g. indices such as false nearest neighbours and mutual information, to obtain a more reliable recurrence plot parameterization.
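The shuffle test described in this abstract can be sketched as follows; this is a minimal illustration assuming a simple thresholded recurrence plot and the standard DET measure (fraction of recurrence points on diagonal lines of length >= 2), with all parameter values chosen for demonstration only:

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def determinism(x, dim=3, tau=1, eps=0.2, lmin=2):
    """DET: fraction of recurrence points lying on diagonal lines >= lmin."""
    v = embed(x, dim, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    r = (d < eps).astype(int)
    np.fill_diagonal(r, 0)                 # ignore the line of identity
    total = r.sum()
    if total == 0:
        return 0.0
    det_points = 0
    for k in range(1, len(v)):             # scan each upper off-diagonal
        run = 0
        for val in list(np.diagonal(r, offset=k)) + [0]:
            if val:
                run += 1
            else:
                if run >= lmin:
                    det_points += 2 * run  # count the symmetric diagonal too
                run = 0
    return det_points / total

def shuffle_test(x, n_surrogates=20, rng=None, **kw):
    """Compare DET of x against shuffled surrogates; a small gap between
    the two suggests the recurrence plot may contain artefacts."""
    rng = np.random.default_rng(rng)
    det_x = determinism(x, **kw)
    det_s = [determinism(rng.permutation(x), **kw) for _ in range(n_surrogates)]
    return det_x, float(np.mean(det_s))
```

For a deterministic signal with well-chosen parameters, the original DET should clearly exceed the surrogate DET; parameter sets where the two remain close would fall outside the artefact-safe region.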
From dogmatic views on conservation agriculture adoption in Zambia towards adapting to context
(2018)
Conservation Agriculture (CA) has been widely promoted in sub-Saharan Africa (SSA) as a sustainable agricultural practice, yet with debatable success. Most authors assume successful adoption only if all three principles of CA are implemented: (1) minimum or zero tillage, (2) maintenance of a permanent soil cover, and (3) integration of crop rotations. Based on this strict definition, adoption has declined or remained stagnant. To date, little attention has been given to context-suited adaptation possibilities, and partial adoption has not been recognized as an entry point to full adoption. Furthermore, isolated success cases have not been analysed sufficiently. By applying the QAToCA approach based on focus group discussions, complemented by semi-structured qualitative expert and farmer interviews, we assessed the reasons behind positive CA adaptation and adoption trends in Zambia. The main reasons behind Zambia’s emerging success are (1) a positive institutional influence, (2) a systematic approach towards CA promotion that encourages stepwise adaptation and adoption, and (3) the mobilization of strong marketing dynamics around CA. These findings could help to adjust or redesign CA promotion activities. We argue for a careful shift from the ‘dogmatic view’ of CA adoption as a packaged technology towards adapting its principles to the small-scale farming context of SSA.
Konrad Repgen (1923-2017)
(2018)
Atmospheric emission from OH molecules is a long-standing problem for near-infrared astronomy. PRAXIS is a unique spectrograph fed by fibres that remove the OH background, optimised specifically to benefit from OH suppression. The OH suppression is achieved with fibre Bragg gratings, which were tested successfully on the GNOSIS instrument. PRAXIS uses the same fibre Bragg gratings as GNOSIS in its first implementation, and will exploit new, cheaper and more efficient multicore fibre Bragg gratings in the second implementation. The OH lines are suppressed by a factor of ~1000, and the expected increase in the signal-to-noise ratio in the interline regions compared to GNOSIS is a factor of ~9 with the GNOSIS gratings and a factor of ~17 with the new gratings. PRAXIS will enable the full exploitation of OH suppression for the first time; this was not achieved by GNOSIS (a retrofit to an existing instrument that was not optimised for OH suppression) due to high thermal emission, low spectrograph transmission and detector noise. PRAXIS has extremely low thermal emission, through the cooling of all significantly emitting parts (including the fore-optics, the fibre Bragg gratings, a long length of fibre, and the fibre slit) and an optical design that minimises leaks of thermal emission from outside the spectrograph. It has low detector noise through the use of a Hawaii-2RG detector, and high throughput through an efficient VPH-based spectrograph. PRAXIS will determine the absolute level of the interline continuum and enable observations of individual objects via an IFU. In this paper we give a status update and report on acceptance tests.
We present a project combining lidar, photometer and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in good consistency of the retrieved PSDs. Finally, the retrieved PSDs may be compared with the PSD measured by a particle counter. The data presented here were taken in Ny-Ålesund, Svalbard, as an example.
Children of immigrants represent one in four children in the United States and will represent one in three children by 2050. Children of Asian and Latino immigrants together represent the majority of children of immigrants in the United States. Children of immigrants may be immigrants themselves, or they may have been born in the United States to foreign-born parents; their status may be legal or undocumented. We review transcultural and culture-specific factors that influence the various ways in which stressors are experienced; we also discuss the ways in which parental socialization and developmental processes function as risk factors or protective factors in their influence on the mental health of children of immigrants. Children of immigrants with elevated risk for mental health problems are more likely to be undocumented immigrants, refugees, or unaccompanied minors. We describe interventions and policies that show promise for reducing mental health problems among children of immigrants in the United States.
To replace bio-macromolecules with stable synthetic materials in separation techniques and bioanalysis, biomimetic receptors and catalysts have been developed: functional monomers are polymerized together with the target analyte, and after template removal, cavities are formed in the "molecularly imprinted polymer" (MIP) which resemble the active sites of antibodies and enzymes. The field started almost 80 years ago; around 1,100 papers on MIPs were published in 2016 alone. Electropolymerization allows MIPs to be deposited directly on voltammetric electrodes or on chips for quartz crystal microbalance (QCM) and surface plasmon resonance (SPR) measurements. For the readout of MIPs for drugs, amperometry, differential pulse voltammetry (DPV) and impedance spectroscopy (EIS) offer higher sensitivity than QCM or SPR. The application of simple electrochemical devices enables both the reproducible preparation of MIP sensors and sensitive signal generation. Electrochemical MIP sensors for the whole arsenal of drugs, e.g. the most frequently used analgesics, antibiotics and anticancer drugs, have been presented in the literature and tested under laboratory conditions. These biomimetic sensors typically have measuring ranges covering the lower nanomolar up to the millimolar concentration range, and they are stable at extreme pH and in organic solvents such as nonaqueous extracts.
The extratropical stratosphere in boreal winter is characterized by a strong circumpolar westerly jet, confining the coldest temperatures to high latitudes. The jet, referred to as the stratospheric polar vortex, is predominantly zonal and centered around the pole; however, it exhibits large variability in wind speed and location. Previous studies showed that a weak stratospheric polar vortex can lead to cold-air outbreaks in the midlatitudes, but the exact relationships and mechanisms are unclear. In particular, it is unclear whether stratospheric variability has contributed to the observed anomalous cooling trends in midlatitude Eurasia. Using hierarchical clustering, we show that over the last 37 years the frequency of weak vortex states in mid- to late winter (January and February) has increased, accompanied by subsequent cold extremes in midlatitude Eurasia. For this region, 60% of the observed cooling in the era of Arctic amplification, that is, since 1990, can be explained by the increased frequency of weak stratospheric polar vortex states, a number that increases to almost 80% when El Niño-Southern Oscillation (ENSO) variability is included as well.
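The classification of vortex states by hierarchical clustering can be illustrated with a minimal sketch; the feature vectors (e.g. zonal-wind profiles), the average-linkage choice, and the numbers below are assumptions for demonstration only, not the study's configuration:

```python
import math

def agglomerate(points, k):
    """Average-linkage agglomerative clustering down to k clusters.
    Each point is a feature vector (e.g. a wind-speed profile)."""
    clusters = [[p] for p in points]

    def linkage(c1, c2):
        # mean pairwise Euclidean distance between the two clusters
        return sum(math.dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters
```

In such an analysis, winters would then be assigned to "weak" or "strong" vortex clusters, and the frequency of each cluster tracked over time.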
This study aimed to determine the specific physical and basic gymnastics skills considered critical in gymnastics talent identification and selection as well as in promoting men's artistic gymnastics performances. Fifty-one boys from a provincial gymnastics team (age 11.03 ± 0.95 years; height 1.33 ± 0.05 m; body mass 30.01 ± 5.53 kg; body mass index [BMI] 16.89 ± 3.93 kg/m²) regularly competing at national level voluntarily participated in this study. Anthropometric measures as well as the men's artistic gymnastics physical test battery (i.e., International Gymnastics Federation [FIG] age group development programme) were used to assess the somatic and physical fitness profile of participants, respectively. The physical characteristics assessed were: muscle strength, flexibility, speed, endurance, and muscle power. Test outcomes were subjected to a principal components analysis to identify the most representative factors. The main findings revealed that power speed, isometric and explosive strength, strength endurance, and dynamic and static flexibility are the most determinant physical fitness aspects of the talent selection process in young male artistic gymnasts. These findings are of utmost importance for talent identification, selection, and development.
Starting from the notion that work is an important part of who we are, we extend existing theorizing on the interplay of work and identity by applying it to (so-called) atypical work situations. Without the contextual stability of a permanent organizational position, the question “who one is” becomes more difficult to answer. At the same time, a stable occupational identity might provide an even more important orientation for one’s career attitudes and goals in atypical employment situations. So, although atypical employment might pose different challenges to identity, identity can still be a valid concept for understanding behaviour, attitudes, and well-being in these situations. Our analysis does not attempt to “reinvent” the concept of identity, but elaborates how existing conceptualizations of identity as a multiple (albeit perceived as singular), fluid (albeit perceived as stable), and actively forged (as well as passively influenced) construct can be adapted to understand the effects of atypical employment contexts. Furthermore, we suggest three specific ways to understand the longitudinal dynamics of the interplay between atypical employment and identity over time: passive incremental, active incremental, and transformative change. We conclude with key learning points and outline practical recommendations for more research into identity as an explanatory mechanism for the effects of atypical employment situations.
Spelling of consonant clusters and morphological awareness in primary school children
(2018)
The present studies investigate the development of spelling ability for final consonant clusters in German, and the strategies underlying it, in first to third graders (N = 209). The influence of morphological complexity (poly- vs. monomorphemic clusters) on spelling was analysed qualitatively and quantitatively, and was correlated with a measure of morphological awareness. From first grade on, spelling accuracy is high, indicating a language-specifically fast development of the alphabetic spelling strategy for final consonant clusters. An influence of morphological processing was, however, found only for the third graders. Although even the first graders showed well-developed morphological awareness, they do not yet seem able to apply it to spelling. The results are discussed in contrast to the more extensive findings available for English.
e-ASTROGAM is a concept for a breakthrough observatory space mission carrying a gamma-ray telescope dedicated to the study of the non-thermal Universe in the photon energy range from 0.15 MeV to 3 GeV. The lower energy limit can be pushed down to energies as low as 30 keV for gamma-ray burst detection with the calorimeter. The mission is based on advanced, space-proven detector technology, with unprecedented sensitivity, angular and energy resolution, combined with remarkable polarimetric capability. Thanks to its performance in the MeV-GeV domain, substantially improving on its predecessors, e-ASTROGAM will open a new window on the non-thermal Universe, making pioneering observations of the most powerful Galactic and extragalactic sources and elucidating the nature of their relativistic outflows and their effects on the surroundings. With a line sensitivity in the MeV energy range one to two orders of magnitude better than previous- and current-generation instruments, e-ASTROGAM will determine the origin of key isotopes fundamental for the understanding of supernova explosions and the chemical evolution of our Galaxy. The mission will be a major player in the multiwavelength, multimessenger time-domain astronomy of the 2030s, and will provide unique data of significant interest to a broad astronomical community, complementary to powerful observatories such as LISA, LIGO, Virgo, KAGRA, the Einstein Telescope and the Cosmic Explorer, IceCube-Gen2 and KM3NeT, SKA, ALMA, JWST, E-ELT, LSST, Athena, and the Cherenkov Telescope Array.
Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave in a remarkably similar way: starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called the single vertex discrepancy. Cooper and Spencer [Combin. Probab. Comput., 15 (2006), pp. 815-822] showed that on Z^d, the single vertex discrepancy is only a constant c_d. Other authors also determined the precise value of c_d for d = 1, 2. All of these results, however, assume that initially all tokens are placed on one partition of the bipartite graph Z^d. We show that this assumption is crucial by proving that, otherwise, the single vertex discrepancy can become arbitrarily large. For all dimensions d >= 1 and arbitrary discrepancies l >= 0, we construct configurations that reach a discrepancy of at least l.
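The rotor-router process and its comparison against the expected random walk occupancy can be sketched on Z^1; a toy simulation under assumed conventions (parallel rounds, each rotor alternating between the +1 and -1 neighbour, all tokens starting at the origin, i.e. on one partition):

```python
import math
from collections import defaultdict

def rotor_router_step(tokens, rotors):
    """One parallel round: every vertex sends out all of its tokens,
    alternating its rotor between the +1 and -1 neighbours."""
    out = defaultdict(int)
    for v, k in tokens.items():
        for _ in range(k):
            d = rotors[v] = -rotors.get(v, -1)  # flip rotor; first send goes to +1
            out[v + d] += 1
    return dict(out), rotors

def expected_walk(t, n_tokens):
    """Expected token counts of n_tokens independent simple random
    walkers started at 0 after t steps (binomial distribution)."""
    return {2 * j - t: n_tokens * math.comb(t, j) / 2 ** t for j in range(t + 1)}

def single_vertex_discrepancy(n_tokens=100, t=8):
    """Max |rotor-router count - expected random-walk count| at time t."""
    tokens, rotors = {0: n_tokens}, {}
    for _ in range(t):
        tokens, rotors = rotor_router_step(tokens, rotors)
    exp = expected_walk(t, n_tokens)
    return max(abs(tokens.get(v, 0) - exp.get(v, 0.0))
               for v in set(tokens) | set(exp))
```

With all tokens on one partition, the measured discrepancy stays below a small constant, consistent with the Cooper-Spencer bound for d = 1; the paper's point is that initial configurations mixing the two partitions break this.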
If war is an inevitable condition of human nature, as David Hume suggests, then what type of societies can best protect us from defeat and conquest? For David Hume, commerce decreases the relative cost of war and promotes technological military advances as well as martial spirit. Commerce therefore makes a country militarily stronger and better equipped to protect itself against attacks than any other kind of society. Hume does not assume commerce would yield a peaceful world nor that commercial societies would be militarily weak, as many contemporary scholars have argued. On the contrary, for him, military might is a beneficial consequence of commerce.
The framing of EU policies
(2018)
This chapter discusses how framing analysis can contribute to studies of policy making in the European Union (EU). Framing analysis is understood as an analytical perspective that focuses on how policy problems are constructed and categorised. This perspective allows researchers to reconstruct how shifting problem frames empower competing constituencies and create changing patterns of political participation at the supranational level. Studies that take a longitudinal perspective on EU policy development show how the framing of EU policy is constitutive of the way in which the jurisdictional boundaries and constitutional mandates of the EU evolve over time. Reviewing the growing body of empirical studies on EU policy framing in the context of the diverse theoretical origins of framing analysis, the chapter argues that framing research which takes seriously the notion that policy making involves both puzzling and powering can contribute a unique perspective on EU policy making.
Logical modeling has been widely used to understand and expand knowledge about protein interactions among different pathways. To this end, the caspo-ts system has recently been proposed to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large-scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset modeling TCR signaling and (2) the HPN-DREAM challenge dataset modeling breast cancer cell lines.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the ‘argumentative microtext corpus’ [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801–815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
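The idea of decoding a global tree from local attachment scores (which the MST and ILP decoders solve efficiently) can be illustrated by exhaustive search; a toy sketch that is feasible only for tiny graphs, with hypothetical score values, and not the paper's actual decoder:

```python
import itertools
import math

def best_tree(scores, root=0):
    """Exhaustive decoding: choose a parent for every non-root node, keep
    only assignments that form a tree rooted at `root`, and return the
    highest-scoring one. scores[(h, d)] is the local score of attaching
    node d under head h."""
    nodes = sorted({n for edge in scores for n in edge})
    others = [n for n in nodes if n != root]
    best, best_s = None, -math.inf
    for parents in itertools.product(nodes, repeat=len(others)):
        edges = list(zip(parents, others))
        if any(h == d for h, d in edges):
            continue
        par = {d: h for h, d in edges}
        ok = True
        for n in others:                 # tree check: every node reaches root
            seen, cur = set(), n
            while cur != root:
                if cur in seen or cur not in par:
                    ok = False
                    break
                seen.add(cur)
                cur = par[cur]
            if not ok:
                break
        if not ok:
            continue
        s = sum(scores.get(e, -math.inf) for e in edges)
        if s > best_s:
            best, best_s = edges, s
    return best, best_s
```

For realistic inputs one would replace the enumeration with a maximum spanning arborescence algorithm (Chu-Liu/Edmonds) or an ILP solver, which is what the compared decoders do.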
High Mountain Asia provides water for more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, the vast majority of which is not monitored by the sparse weather networks. We leverage passive microwave data from the SSMI series of satellites (SSMI, SSMI/S, 1987-2016), reprocessed to 3.125 km resolution, to examine trends in the volume and spatial distribution of snow-water equivalent (SWE) in the Indus Basin. We find that the majority of the Indus has seen an increase in snow-water storage. There exists a strong elevation-trend relationship, where high-elevation zones have more positive SWE trends. Negative trends are confined to the Himalayan foreland and deeply incised valleys which run into the Upper Indus. This implies a temperature-dependent cutoff below which precipitation increases are not translated into increased SWE. Earlier snowmelt or a higher percentage of liquid precipitation could both explain this cutoff [1]. Earlier work [2] found a negative snow-water storage trend for the entire Indus catchment over the period 1987-2009 (-4 x 10^-3 mm/yr). In this study, based on an additional seven years of data, the average trend reverses to 1.4 x 10^-3 mm/yr. This implies that the decade since the mid-2000s was likely wetter, and positively impacted long-term SWE trends. This conclusion is supported by an analysis of snowmelt onset and end dates, which found that while long-term trends are negative, more recent (since 2005) trends are positive (moving later in the year) [3].
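The per-pixel trend computation underlying such SWE maps can be sketched with ordinary least squares; a generic illustration on synthetic data, not the study's processing chain (which involves reprocessed passive microwave retrievals and further screening):

```python
import numpy as np

def pixelwise_trend(years, swe_stack):
    """OLS slope (mm/yr) for every pixel of a (time, y, x) SWE stack.
    Centring time and data reduces the fit to slope = <t, y> / <t, t>."""
    t = np.asarray(years, dtype=float)
    t = t - t.mean()
    y = swe_stack - swe_stack.mean(axis=0)
    return np.tensordot(t, y, axes=(0, 0)) / (t * t).sum()
```

Averaging the resulting slope map over elevation bands would then expose the elevation-trend relationship described above.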
Silicon (Si) is considered a quasi-essential element for higher plants, as its uptake increases plant growth and resistance against abiotic as well as biotic stresses. Foliar application of fertilizers is generally assumed to be a comparably environment-friendly form of fertilization because only small quantities are needed. Interest in foliar fertilization and in the use of Si as a fertilizer in general has increased significantly within the last decades, but there are only a few publications dealing with the foliar application of Si at all. In the present review, the effects of Si foliar fertilization, including nano-Si fertilizers, on the three most important crops on a global scale, that is, maize, rice, and wheat, are summarized. Additionally, the different pathways of foliar uptake (i.e., cuticular pathways, stomata, and trichomes) and the functioning of Si foliar fertilizers against biotic (i.e., fungal diseases and harmful insects) as well as abiotic (i.e., water stress, macronutrient imbalance, and heavy metal toxicity) stressors are discussed. Future research should especially focus on (1) the gathering of empirical data from field and greenhouse experiments, (2) the intensification of co-operations between practitioners and scientists, (3) interdisciplinary research, and (4) the analysis of results from multiple studies (meta-analysis, big data) to fully understand the effects, uptake, and functioning of Si foliar fertilizers and to evaluate their potential in modern sustainable agriculture concepts.
The physical and hydrological properties of peat from seven peatlands in northern Maputaland (South Africa) were investigated and related to the degradation processes of peatlands in different hydrogeomorphic settings. The selected peatlands are representative of typical hydrogeomorphic settings and different stages of human modification from natural to severely degraded. Nineteen transects (141 soil corings in total) were examined in order to describe peat properties typical of the distinct hydrogeomorphic settings. We studied degree of decomposition, organic matter content, bulk density, water retention, saturated hydraulic conductivity and hydrophobicity of the peats. From these properties we derived pore size distribution, unsaturated hydraulic conductivity and maximum capillary rise. We found that, after drainage, degradation advances faster in peatlands containing wood peat than in peatlands containing radicell peat. Eucalyptus plantations in catchment areas are especially threatening to peatlands in seeps, interdune depressions and unchannelled valley bottoms. All peatlands and their recharge areas require wise management, especially valley-bottom peatlands with swamp forest vegetation. Blocking drainage ditches is indispensable as a first step towards achieving the restoration of drained peatland areas, and further measures may be necessary to enhance the distribution of water. The sensitive swamp forest ecosystems should be given conservation priority.
Findings - The results provide (longitudinal) support for the proposed evaluative approach. They reveal new evidence that building brand equity is a means to mitigate negative effects, and indicate that negative spillover effects within a high-equity brand portfolio are unlikely. Finally, this research identifies situations in which developing a new brand might be more beneficial than leveraging an existing brand. Practical implications - This research has significant implications for firms with high-equity brands that might be affected by a scandal. The findings help managers navigate their brands through a crisis.
While the role and consequences of being a bystander to face-to-face bullying have received some attention in the literature, to date little is known about the effects of being a bystander to cyberbullying. It is also unknown how empathy might affect the negative consequences associated with being a bystander of cyberbullying. The present study examined the longitudinal association between being a bystander of cyberbullying and depression and anxiety, and the moderating role of empathy in the relationship between being a bystander and subsequent depression and anxiety. At Time 1, 1,090 adolescents (M-age = 12.19; 50% female) from the United States completed questionnaires on empathy, cyberbullying roles (bystander, perpetrator, victim), depression, and anxiety. One year later, at Time 2, 1,067 adolescents (M-age = 13.76; 51% female) completed questionnaires on depression and anxiety. Results revealed a positive association between being a bystander of cyberbullying and depression and anxiety. Further, empathy moderated the positive relationship between being a bystander and depression, but not anxiety. Implications for intervention and prevention programs are discussed.
Metamaterial Devices
(2018)
In our hands-on demonstration, we show several objects whose functionality is defined by the objects' internal microstructure. Such metamaterial machines can (1) be mechanisms based on their microstructures, (2) employ simple mechanical computation, or (3) change their outside to interact with their environment. They are 3D printed from one piece, and we support their creation by providing interactive software tools.
Cloud storage brokerage is an abstraction aimed at providing value-added services. However, Cloud Service Brokers are challenged by several security issues, including enlarged attack surfaces due to the integration of disparate components and API interoperability issues. Appropriate security risk assessment methods are therefore required to identify and evaluate these security issues and to examine the efficiency of countermeasures. A possible approach to satisfying these requirements is the employment of threat modeling concepts, which have been successfully applied in traditional paradigms. In this work, we employ threat models including attack trees, attack graphs and Data Flow Diagrams against a Cloud Service Broker (CloudRAID) and analyze these security threats and risks. Furthermore, we propose an innovative technique for combining Common Vulnerability Scoring System (CVSS) and Common Configuration Scoring System (CCSS) base scores in probabilistic attack graphs to cater for configuration-based vulnerabilities, which are typically leveraged for attacking cloud storage systems. This approach is necessary since existing schemes do not provide sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two common attacks against cloud storage: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then used in attack-graph-metric-based risk assessment. Our experimental evaluation shows that our approach addresses the aforementioned gaps and provides efficient security hardening options. Our proposals can therefore be employed to improve cloud security.
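Combining base scores along an attack path can be sketched with the noisy-OR/AND propagation commonly used in probabilistic attack graphs; the linear score-to-probability mapping and the example scores below are illustrative assumptions, not the paper's exact scheme:

```python
def score_to_prob(base_score):
    """Map a CVSS/CCSS base score (0-10) to an exploitation probability.
    A common heuristic in probabilistic attack graphs; the actual mapping
    used in a given scheme may differ."""
    return min(max(base_score, 0.0), 10.0) / 10.0

def and_node(probs):
    """All preconditions required: multiply probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_node(probs):
    """Any precondition suffices: noisy-OR combination."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Illustrative path: a configuration weakness (CCSS) enables enumeration,
# then either of two software flaws (CVSS) allows exploitation.
p_enum = score_to_prob(6.5)                           # hypothetical CCSS score
p_flaw = or_node([score_to_prob(7.0), score_to_prob(5.0)])
p_attack = and_node([p_enum, p_flaw])
```

This shows why configuration-based vulnerabilities need CCSS scores in the first place: without them, the enumeration step carries no probability and the path metric cannot be computed.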
The relation between executive functions and reading comprehension in primary-school students
(2018)
Higher-order cognitive skills are necessary prerequisites for reading and understanding words, sentences and texts. In particular, research on executive functions in the cognitive domain has shown that good executive functioning in children is positively related to reading comprehension skills and that deficits in executive functioning are related to difficulties with reading comprehension. However, developmental research on literacy and self-regulation in the early school years suggests that the relation between higher-order cognitive skills and reading might not be unidirectional, but mutually interdependent in nature. Therefore, the present longitudinal study explored the bidirectional relations between executive functions and reading comprehension during primary school across a 1-year period. At two time points (T1, T2), we assessed reading comprehension at the word, sentence, and text levels as well as three components of executive functioning, that is, updating, inhibition, and attention shifting. The sample consisted of three sequential cohorts of German primary school students (N = 1657) starting in first, second, and third grade respectively (aged 6-11 years at T1). Using a latent cross-lagged-panel design, we found bidirectional longitudinal relations between executive functions and reading comprehension for second and third graders. However, for first graders, only the path from executive functioning at T1 to reading comprehension at T2 attained significance. Succeeding analyses revealed updating as the crucial component of the effect from executive functioning on later reading comprehension, whereas text reading comprehension was most predictive of later executive functioning. The potential processes underlying the observed bidirectional relations are discussed with respect to developmental changes in reading comprehension across the primary years.
The article takes up the observations made by Kenesei (1994) regarding the position of the Hungarian interrogative marker -e in the clause and its distribution across clause types. Specifically, there are three crucial points: (i) the marker -e is related to the CP-domain, where clause typing is encoded; (ii) -e is obligatory in embedded clauses and optional in main clauses; (iii) -e is licensed in finite clauses only. I argue that certain clause-typing properties are reflected in the Hungarian clause in a lower functional domain, FP. In particular, finiteness and the interrogative nature of the clause are encoded here, as also indicated by focussing in non-interrogative clauses and by constituent questions, respectively. The marker -e is base-generated in the F head, as opposed to a designated FocP or TP/IP, allowing it to fulfil its clause-typing functions. Base-generation is crucial (as opposed to lowering from C) since it is able to capture the relatedness between -e and finiteness: -e is specified as [fin], and while the FP may be generated to host focussed constituents (including wh-elements) in non-finite clauses, a lexically [fin] head cannot be inserted.
Intensive bondage
(2018)
High-storage-density magnetic devices rely on precise, reliable and ultrafast switching of magnetic states. Optical control of magnetization using a femtosecond laser, without applying any external magnetic field, offers the advantage of switching magnetic states on ultrashort time scales and has therefore attracted significant attention. Recently, the so-called all-optical helicity-dependent switching (AO-HDS) has been reported and demonstrated, in which a circularly polarized femtosecond laser pulse switches the magnetization of a ferromagnetic thin film as a function of the laser helicity [1]. More recent studies have reported that AO-HDS is a general phenomenon in magnetic materials, ranging from rare-earth transition-metal ferrimagnets (e.g. alloys, multilayers and heterostructure systems) to ferromagnetic thin films. Among the numerous studies in the literature discussing the microscopic origin of AO-HDS in ferromagnets and ferrimagnetic alloys, the most renowned concepts are momentum transfer via the Inverse Faraday Effect (IFE) [1-3] and preferential thermal demagnetization of one magnetization direction by heating close to Tc (the Curie temperature) in the presence of magnetic circular dichroism (MCD) [4-6]. In this study, we investigate all-optical magnetic switching using a stationary femtosecond laser spot (3-5 μm) in TbFe alloys via photoemission electron microscopy (PEEM) and x-ray magnetic circular dichroism (XMCD) with a spatial resolution of approximately 30 nm. We spatially characterize the effect of laser heating and of the local temperature profile created across the laser spot on AO-HDS in TbFe thin films. We find that AO-HDS occurs only in a ring-shaped region surrounding the thermally demagnetized region formed by the laser spot, and that the formation of switched domains further relies on thermally induced domain wall motion.
Our temperature dependent measurements highlight the importance of attainin...
Development of a tool to identify intensive care patients at risk of meropenem therapy failure
(2018)
The problem of constructing and maintaining a tree topology in a distributed manner is a challenging task in WSNs. This is because the nodes have limited computational and memory resources and the network changes over time. We propose the Dynamic Gallager-Humblet-Spira (D-GHS) algorithm that builds and maintains a minimum spanning tree. To do so, we divide D-GHS into four phases, namely neighbor discovery, tree construction, data collection, and tree maintenance. In the neighbor discovery phase, the nodes collect information about their neighbors and the link quality. In the tree construction phase, D-GHS finds the minimum spanning tree by executing the Gallager-Humblet-Spira algorithm. In the data collection phase, the sink roots the minimum spanning tree at itself, and each node sends data packets. In the tree maintenance phase, the nodes repair the tree when communication failures occur. The emulation results show that D-GHS reduces the number of control messages and the energy consumption, at the cost of a slight increase in memory size and convergence time.
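The tree that the tree-construction phase converges to can be illustrated with a centralized sketch (the actual D-GHS algorithm runs distributedly on the nodes themselves; the node names and ETX-style link costs below are hypothetical):

```python
# Centralized illustration of the result of D-GHS's tree-construction phase:
# a minimum spanning tree over links weighted by link quality.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def minimum_spanning_tree(nodes, links):
    """links: list of (cost, u, v); cost could model link quality (e.g. ETX)."""
    parent = {n: n for n in nodes}
    tree = []
    for cost, u, v in sorted(links):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:            # the edge joins two fragments, as in GHS
            parent[ru] = rv
            tree.append((u, v, cost))
    return tree

# Example: 4 nodes with sink 'S'; costs are made-up link metrics.
links = [(1.0, 'S', 'A'), (2.5, 'A', 'B'), (1.2, 'B', 'C'),
         (3.0, 'S', 'C'), (2.0, 'A', 'C')]
mst = minimum_spanning_tree(['S', 'A', 'B', 'C'], links)
total = sum(c for _, _, c in mst)
```

In the data collection phase, the sink would then orient this tree towards itself and use it for routing.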
An energy consumption model for multiModal wireless sensor networks based on wake-up radio receivers
(2018)
Energy consumption is a major concern in Wireless Sensor Networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio while no transmission is ongoing. The classical approach for allowing the reception of messages in such situations is to use a low-duty-cycle protocol and to turn on the radio periodically, which reduces the idle listening problem but requires timers and usually unnecessary wakeups. A better solution is to turn on the radio only on demand by using a Wake-up Radio Receiver (WuRx). In this paper, an energy model is presented to estimate the energy saving in various multi-hop network topologies under several use cases when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows for estimating the benefit of various WuRx properties, such as the use of addressing.
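As a toy illustration of the kind of comparison such a model enables (the currents, timings and duty-cycle parameters below are hypothetical placeholders, not the paper's model or measured values):

```python
# Single-node energy estimate: periodic duty cycling vs. a wake-up receiver.
# All electrical parameters are invented for illustration.

def duty_cycle_energy(t_total, period, t_listen, i_rx, i_sleep, v=3.0):
    """Periodic channel sampling: wake every `period` s for `t_listen` s."""
    wakeups = t_total / period
    t_rx = wakeups * t_listen
    return v * (t_rx * i_rx + (t_total - t_rx) * i_sleep)

def wurx_energy(t_total, n_messages, t_msg, i_rx, i_wurx, v=3.0):
    """Main radio off except for actual receptions; WuRx always listening."""
    t_rx = n_messages * t_msg
    return v * (t_rx * i_rx + t_total * i_wurx)

hour = 3600.0
e_dc = duty_cycle_energy(hour, period=0.5, t_listen=0.002,
                         i_rx=0.020, i_sleep=1e-6)
e_wu = wurx_energy(hour, n_messages=10, t_msg=0.01,
                   i_rx=0.020, i_wurx=2e-6)
```

With only a few messages per hour, the always-on but ultra-low-power WuRx wins clearly over periodic wakeups, which is the intuition the paper's model quantifies per topology and use case.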
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in the industry. In this paper, we describe a capstone course design which adds hands-on learning of the lean principles advocated by Kanban to a capstone project run with Scrum. This both ensures that students are aware of recent process frameworks and ideas and gives them a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our observations. We analyze the development artifacts created by students during the course with respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
802.15.4 security protects against the replay, injection, and eavesdropping of 802.15.4 frames. A core concept of 802.15.4 security is the use of frame counters for both nonce generation and anti-replay protection. While being functional, frame counters (i) cause an increased energy consumption as they incur a per-frame overhead of 4 bytes and (ii) only provide sequential freshness. The Last Bits (LB) optimization does reduce the per-frame overhead of frame counters, yet at the cost of an increased RAM consumption and occasional energy- and time-consuming resynchronization actions. Alternatively, the timeslotted channel hopping (TSCH) media access control (MAC) protocol of 802.15.4 avoids the drawbacks of frame counters by replacing them with timeslot indices, but findings of Yang et al. question the security of TSCH in general. In this paper, we assume the use of ContikiMAC, which is a popular asynchronous MAC protocol for 802.15.4 networks. Under this assumption, we propose an Intra-Layer Optimization for 802.15.4 Security (ILOS), which intertwines 802.15.4 security and ContikiMAC. In effect, ILOS reduces the security-related per-frame overhead even more than the LB optimization, as well as achieves strong freshness. Furthermore, unlike the LB optimization, ILOS neither incurs an increased RAM consumption nor requires resynchronization actions. Beyond that, ILOS integrates with and advances other security supplements to ContikiMAC. We implemented ILOS using OpenMotes and the Contiki operating system.
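The 4-byte per-frame overhead stems from the frame counter's role in nonce construction: the receiver must be able to rebuild the CCM* nonce, so the counter travels in every secured frame. A minimal sketch of the standard 13-byte CCM* nonce layout of 802.15.4 security (the address and values are made up):

```python
# 802.15.4 CCM* nonce: 8-byte extended source address || 4-byte frame
# counter || 1-byte security level = 13 bytes. Because the counter is
# part of the nonce, it must never repeat for the same key and address.

import struct

def ccm_star_nonce(ext_addr: bytes, frame_counter: int, sec_level: int) -> bytes:
    assert len(ext_addr) == 8 and 0 <= frame_counter < 2**32
    return ext_addr + struct.pack(">I", frame_counter) + bytes([sec_level])

nonce = ccm_star_nonce(bytes.fromhex("0011223344556677"), 42, 5)
```

Replacing the transmitted counter (as the LB optimization partially does, and as ILOS does via ContikiMAC's timing) saves exactly these 4 bytes per frame, provided both sides can still reconstruct the same nonce.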
The present work is part of a collaborative H2020 European funded research project called SENSKIN, which aims to improve Structural Health Monitoring (SHM) for transport infrastructure through the development of an innovative monitoring and management system for bridges based on a novel, inexpensive, skin-like sensor. The integrated SENSKIN technology will be implemented in the case of steel and concrete bridges, and tested, field-evaluated and benchmarked in an actual bridge environment against a conventional health monitoring solution developed by Mistras Group Hellas. The main objective of the present work is to implement the autonomous, fully functional strain monitoring system based on commercially available off-the-shelf components that will be used to accomplish a direct comparison between the performance of the innovative SENSKIN sensors and the conventional strain sensors commonly used for structural monitoring of bridges. For this purpose, the mini Structural Monitoring System (mini SMS) of Physical Acoustics Corporation, a comprehensive data acquisition unit designed specifically for long-term unattended operation in outdoor environments, was selected. For the completion of the conventional system, appropriate foil-type strain sensors were selected, driven by special conditioners manufactured by Mistras Group. A comprehensive description of the strain monitoring system and its peripheral components is provided in this paper. For the evaluation of the integrated system’s performance and the effect of various parameters on the long-term behavior of the sensors, several steel test pieces instrumented with different strain sensor configurations were prepared and tested in both laboratory and field ambient conditions. Furthermore, loading tests were performed aiming to validate the response of the system in monitoring the strains developed in steel beam elements subject to bending regimes. 
Representative results obtained from the above experimental tests have been included in this paper as well.
CurEx
(2018)
The integration of diverse structured and unstructured information sources into a unified, domain-specific knowledge base is an important task in many areas. A well-maintained knowledge base enables data analysis in complex scenarios, such as risk analysis in the financial sector or investigating large data leaks, such as the Paradise or Panama papers. Both the creation of such knowledge bases and their continuous maintenance and curation involve many complex tasks and considerable manual effort. With CurEx, we present a modular system that allows structured and unstructured data sources to be integrated into a domain-specific knowledge base. In particular, we (i) enable the incremental improvement of each individual integration component; (ii) enable the selective generation of multiple knowledge graphs from the information contained in the knowledge base; and (iii) provide two distinct user interfaces tailored to the needs of data engineers and end-users respectively. The former has curation capabilities and controls the integration process, whereas the latter focuses on the exploration of the generated knowledge graph.
Beacon in the Dark
(2018)
The large amount of heterogeneous data in these email corpora renders experts' investigations by hand infeasible. Auditors and journalists, for example, who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus onto a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
The detection of all inclusion dependencies (INDs) in an unknown dataset is at the core of any data profiling effort. Apart from the discovery of foreign key relationships, INDs can help perform data integration, integrity checking, schema (re-)design, and query optimization. With the advent of Big Data, the demand for efficient IND discovery algorithms that can scale with the input data size increases. To this end, we propose S-INDD++ as a scalable system for detecting unary INDs in large datasets. S-INDD++ applies a new stepwise partitioning technique that helps discard a large number of attributes in early phases of the detection by processing the first partitions of smaller sizes. S-INDD++ also extends the concept of attribute clustering to decide which attributes to discard based on the clustering result of each partition. Moreover, in contrast to the state of the art, S-INDD++ does not require the partition to fit into main memory, which is a highly appreciable property in the face of ever-growing datasets. We conducted an exhaustive evaluation of S-INDD++ by applying it to large datasets with thousands of attributes and more than 266 million tuples. The results show the high superiority of S-INDD++ over the state of the art: S-INDD++ reduced the runtime by up to 50 % in comparison with BINDER, and by up to 98 % in comparison with S-INDD.
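For reference, the semantics of a unary IND can be sketched with a naive value-set baseline (the column names and data are invented; S-INDD++'s contribution is precisely to avoid materializing and comparing full value sets like this at scale):

```python
# Naive unary IND discovery: A <= B holds iff every distinct value of
# attribute A also appears in attribute B. Quadratic in the number of
# attributes and memory-bound -- the baseline scalable systems improve on.

def unary_inds(columns):
    """columns: dict mapping attribute name -> iterable of values."""
    value_sets = {a: set(v) for a, v in columns.items()}
    return sorted((a, b)
                  for a in value_sets for b in value_sets
                  if a != b and value_sets[a] <= value_sets[b])

cols = {"order.customer_id": [1, 2, 2, 3],
        "customer.id":       [1, 2, 3, 4],
        "customer.zip":      [10115, 14482]}
inds = unary_inds(cols)
```

Here the single discovered IND, order.customer_id into customer.id, is exactly the kind of foreign-key candidate that downstream profiling tasks build on.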
One particular challenge in the Internet of Things is the management of many heterogeneous things. The things are typically constrained devices with limited memory, power, network and processing capacity. Configuring every device manually is a tedious task. We propose an interoperable way to configure an IoT network automatically using existing standards. The proposed NETCONF-MQTT bridge intermediates between the constrained devices (speaking MQTT) and the network management standard NETCONF. The NETCONF-MQTT bridge dynamically generates YANG data models from the semantic description of the device capabilities, based on the oneM2M ontology. We evaluate the approach for two use cases, i.e. an actuator and a sensor scenario.
Live migration is an important feature in modern software-defined datacenters and cloud computing environments. Dynamic resource management, load balancing, power saving and fault tolerance all depend on the live migration feature. Despite its importance, the cost of live migration cannot be ignored and may result in service availability degradation. Live migration cost includes the migration time, downtime, CPU overhead, network and power consumption. Many research articles discuss the problem of live migration cost with different scopes, such as analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most of the papers that discuss the migration cost problem focus on open-source hypervisors. Among the research articles that focus on VMware environments, none of the published articles has proposed migration time, network overhead and power consumption models for single and multiple VM live migration. In this paper, we propose empirical models for the live migration time, network overhead and power consumption for single and multiple VM migration. The proposed models are obtained using a VMware-based testbed.
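The models above are empirical and fitted to the testbed; purely as a generic illustration of why migration time depends on both link bandwidth and workload behavior, a textbook-style pre-copy iteration model can be sketched as follows (all parameters are hypothetical, and this is not the paper's model):

```python
# Pre-copy live migration sketch: each round retransmits the memory
# dirtied during the previous round, so rounds shrink geometrically
# whenever the dirty-page rate D stays below the link bandwidth B.

def precopy_migration_time(mem_gb, bandwidth_gbps, dirty_gbps, rounds=30):
    t_total, to_send = 0.0, mem_gb
    for _ in range(rounds):
        t = to_send / bandwidth_gbps       # duration of this copy round
        t_total += t
        to_send = dirty_gbps * t           # memory dirtied meanwhile
        if to_send < 0.001:                # stop-and-copy threshold (~1 MB)
            break
    return t_total + to_send / bandwidth_gbps  # final stop-and-copy

# A write-heavy VM (high dirty rate) migrates much more slowly than an
# idle one over the same 1 Gbps-equivalent link.
busy = precopy_migration_time(8, bandwidth_gbps=1.0, dirty_gbps=0.5)
idle = precopy_migration_time(8, bandwidth_gbps=1.0, dirty_gbps=0.05)
```

The geometric-series behavior (total time roughly mem / (B - D) for D < B) is what makes the dirty-page rate and available bandwidth the dominant parameters in most migration-time models.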
The lack of process-based classification procedures may lead to unrealistic hyetograph design due to the complex oscillation of rainfall depths when assimilated at high temporal resolutions. Four consecutive years of sub-hourly rainfall data were assimilated in three study areas (Guaraira, GEB, Sao Joao do Cariri, CEB, and Aiuaba, AEB) under distinct climates (very hot semi-arid and tropical wet). This study aimed to define rainfall events (for a Minimum Inter-event Time, MIT, and a Minimum Rainfall Depth, MRD, equal to 30 min and 1.016 mm, respectively), classify their hyetograph types (rectangular, R; unimodal with left-skewed, UL, right-skewed, UR, and centred peaks, UC; bimodal, B; and shapeless, SL), and compare their key rainfall properties (frequency, duration, depth, rate and peak). A rain pulse aggregation process allowed reshaping the SL-events for six different time spans varying from 2 to 30 min. The results revealed that the coastal area held predominantly R-events (64% of events and 49% of rainfall depth), in the western semi-arid area UL-events prevailed (57% of events and 63% of rainfall depth), whereas the eastern semi-arid area, similar to the coastal area, mostly had R-events (61% of events and 30% of rainfall depth). It is concluded that each cloud formation type had important effects on the hyetograph properties, differentiating them even within the same climate.
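The MIT/MRD event definition used above can be sketched as follows (the input format and the exact threshold handling are assumptions for illustration, not the study's implementation):

```python
# Split a rainfall record into events: pulses separated by a dry spell of
# at least MIT minutes belong to different events, and events totalling
# less than MRD mm of depth are discarded.

MIT_MIN = 30       # Minimum Inter-event Time [min]
MRD_MM = 1.016     # Minimum Rainfall Depth [mm]

def split_events(pulses, mit=MIT_MIN, mrd=MRD_MM):
    """pulses: time-sorted list of (time_in_minutes, depth_mm) wet records."""
    events, current = [], []
    for t, depth in pulses:
        if current and t - current[-1][0] >= mit:
            events.append(current)
            current = []
        current.append((t, depth))
    if current:
        events.append(current)
    return [e for e in events if sum(d for _, d in e) >= mrd]

pulses = [(0, 0.5), (10, 0.8), (60, 0.2), (120, 1.5), (130, 0.3)]
events = split_events(pulses)
```

In this made-up record, the 50- and 60-minute dry gaps separate three candidate events, and the isolated 0.2 mm pulse is dropped for falling below the MRD, leaving two valid events.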
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising method to manipulate the light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field to sub-wavelength mode volumes. A particularly interesting coupling regime occurs if the coupling increases to a level such that the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge, which can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ℏΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances, ω+ and ω−, with a minimal separation of Ω = ω+ − ω−.
The coupling between molecular excitations and nanoparticles leads to promising applications. It is for example used to enhance the optical cross-section of molecules in surface enhanced Raman scattering, Purcell enhancement or plasmon enhanced dye lasers. In a coupled system new resonances emerge resulting from the original plasmon (ωpl) and exciton (ωex) resonances as
ω± = (ωpl + ωex)/2 ± √((ωpl − ωex)²/4 + g²),
(1)
where g is the coupling parameter. Hence, the new resonances show a separation of Δ = ω+ − ω−, and the coupling strength can be deduced from the minimum distance between the two resonances at zero detuning, Ω = Δ(ωpl = ωex).
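The anticrossing described by Eq. (1) can be checked numerically; with this formula, the splitting at zero detuning reduces to exactly 2g (the parameter values below are arbitrary):

```python
# Hybrid resonances of two coupled oscillators, Eq. (1):
# omega_pm = (w_pl + w_ex)/2 +/- sqrt((w_pl - w_ex)^2/4 + g^2)

import math

def hybrid_resonances(w_pl, w_ex, g):
    mean = 0.5 * (w_pl + w_ex)
    split = math.sqrt(0.25 * (w_pl - w_ex) ** 2 + g ** 2)
    return mean + split, mean - split   # (omega_plus, omega_minus)

w_ex, g = 2.0, 0.1                      # exciton energy and coupling (eV)
wp, wm = hybrid_resonances(2.0, w_ex, g)  # plasmon tuned to resonance
# At w_pl = w_ex the branch separation equals 2*g, the Rabi splitting.
```

Sweeping w_pl through w_ex traces out the two anticrossing branches, whose minimum separation gives the coupling strength, as stated above.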
An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing and network capacity. Manual network and service management of IoT devices is challenging. We propose using an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e. the MQTT protocol and Semantic Web technologies. We herein introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices can aggregate the capabilities of IoT devices, derive new services by inference, delegate requests/responses and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the network edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
Social transitions are characterized by an increased heterogeneity in Western societies. Following the life course perspective, individual agency becomes central in shaping one’s life course. This article examines social transitions of adolescents using individual resource theory to explain differences in the timing of five transitions in partnership and family formation: the first sexual experience, the first intimate relationship, the first cohabitation, the first marriage, and the birth of the first child. Since little is so far known about how individual characteristics interact and influence the social transition to adulthood, we focus on the varying impacts of personal, social and socio-economic resources across the social life course. We use longitudinal data from the German LifE-Study, which focuses on the birth cohort of individuals born between 1965 and 1967. Using event history analysis, we find that the timing of the first sexual experience and first partnership transitions is mainly influenced by personal and social resources, whereas socio-economic resources offer better explanations for the timing of entering marriage and parenthood. Most striking are the different explanatory models for women and men.
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding the expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. The two arguably most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move in any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefits of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over {0,..., n} that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. In case the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we also state the stationary distribution as well as the mixing time of a special case of our scenario.
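For a concrete instance of the known-probabilities case, the expected first-hitting time of such a ±1 process on {0, ..., n} can be computed exactly from the first-step recurrence (a sketch under the simplifying assumptions of a state-independent up-probability and a reflecting barrier at 0; it is not the paper's general result):

```python
# Exact expected first-hitting time of state n for a process on {0,...,n}
# that moves +1 with probability p_up and -1 otherwise (reflecting at 0),
# via h_i = (1 + (1-p)*h_{i-1}) / p, where h_i is the expected time to
# advance from state i to state i+1.

def expected_hitting_time(n, p_up):
    q = 1.0 - p_up
    h = 1.0 / p_up              # expected time from state 0 to state 1
    total = h
    for _ in range(1, n):
        h = (1.0 + q * h) / p_up
        total += h
    return total

# Fair walk: quadratic hitting time (n*(n+1) for this reflecting variant).
t_fair = expected_hitting_time(10, 0.5)
# Biased walk: additive drift theory predicts E[T] <= n / (2*p_up - 1).
t_bias = expected_hitting_time(10, 0.8)
```

Comparing the exact value for the biased walk against the additive drift bound n / (2p − 1) illustrates the relation between the two approaches discussed above.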
For theoretical analyses, two specifics distinguish GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular may yield bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, regardless of whether bloat control is employed.
Poly[(rac-lactide)-co-glycolide] (PLGA) is used in medicine to provide mechanical support for healing tissue or as a matrix for controlled drug release. The properties of this copolymer depend on the evolution of the molecular weight of the material during degradation, which is determined by the kinetics of the cleavage of hydrolysable bonds. The generally accepted description of the degradation of PLGA is a random fragmentation that is autocatalyzed by the accumulation of acidic fragments inside the bulk material. Since mechanistic studies with lactide oligomers have concluded a chain-end scission mechanism, and monolayer degradation experiments with polylactide found no accelerated degradation at lower pH, we hypothesize that the impact of acidic fragments on the molecular degradation kinetics of PLGA is overestimated. By means of the Langmuir monolayer degradation technique, the molecular degradation kinetics of PLGA at different pH could be determined. Protons did not catalyze the degradation of PLGA. The molecular mechanism at neutral pH and low pH is a combination of random and chain-end cut events, while the degradation under strongly alkaline conditions is determined by rapid chain-end cuts. We suggest that the degradation of bulk PLGA is not catalyzed by the acidic degradation products. Instead, an increased concentration of small fragments leads to accelerated mass loss via fast chain-end cut events. In the future, we aim to substantiate the proposed molecular degradation mechanism of PLGA with interfacial rheology.
Enzymes have recently attracted increasing attention in materials research based on their capacity to catalyze the conversion of polymer-bound moieties for synthesizing polymer networks, particularly bulk hydrogels. In this study, the surface immobilization of a relevant enzyme, mushroom tyrosinase, should be explored using glass as a model surface. In a first step, the glass support was functionalized with silanes to introduce either amine or carboxyl groups, as confirmed e.g. by X-ray photoelectron spectroscopy. By applying glutaraldehyde and EDC/NHS chemistry, respectively, the surfaces have been activated for the subsequent successful coupling of tyrosinase. Via protein hydrolysis and amino acid characterization by HPLC, the quantity of bound tyrosinase was shown to correspond to full surface coverage. Based on the visualized enzymatic conversion of a test substrate at the glass support, the functionalized surfaces may be explored for surface-associated material synthesis in the future.
Humanoid robots, prosthetic limbs and exoskeletons require soft actuators to perform their primary function, which is controlled movement. In this work we explored whether crosslinked poly[ethylene-co-(vinyl acetate)] (cPEVA) fibers with different vinyl acetate (VA) contents can serve as torsional fiber actuators, exhibiting temperature-controlled reversible rotational changes. Broad melting transitions ranging from 50 to 90 degrees C for cPEVA18-165 or from 40 to 80 degrees C for cPEVA28-165 fibers, in combination with complete crystallization at temperatures around 10 degrees C, make them suitable actuating materials with adjustable actuation temperature ranges between 10 and 70 degrees C during repetitive cooling and heating. The obtained fibers exhibited a circular cross section with diameters around 0.4 +/- 0.1 mm, while a length of 4 cm was employed for the investigation of reversible rotational actuation after programming by twist insertion using 30 complete rotations at a temperature above the melting transition. Repetitive heating and cooling between 10 and 60 or 70 degrees C of one-end-tethered programmed fibers revealed reversible rotations and torsional force. During cooling, 3 +/- 1 complete rotations (Delta theta(r) = +1080 +/- 360 degrees) in the twisting direction were observed, while 4 +/- 1 turns in the opposite direction (Delta theta(r) = -1440 +/- 360 degrees) were found during heating. Such torsional fiber actuators, which are capable of approximately one rotation per cm of fiber length, can serve as miniaturized rotary motors to provide rotational actuation in futuristic humanoid robots.
The variation of the molecular architecture of multiblock copolymers has enabled the introduction of functional behaviour and the control of key mechanical properties. In the current study, we explore the synergistic relationship of two structural components in a shape-memory material formed of a multiblock copolymer with crystallizable poly(epsilon-caprolactone) and crystallizable poly[oligo(3S-iso-butylmorpholine-2,5-dione)] segments (PCL-PIBMD). The thermal and structural properties of PCL-PIBMD films were compared with those of PCL-PU and PIBMD-PU, investigated by means of DSC, SAXS and WAXS measurements. The shape-memory properties were quantified by cyclic thermomechanical tensile tests, where deformation strains up to 900% were applied for programming PCL-PIBMD films at 50 degrees C. Toluene vapor treatment experiments demonstrated that the temporary shape was fixed mainly by glassy PIBMD domains at strains lower than 600%, with the PCL contribution to fixation increasing to 42 +/- 2% at programming strains of 900%. This study of the shape-memory mechanism of PCL-PIBMD provides insight into the structure-function relation in multiblock copolymers with both crystallizable and glassy switching segments.
High-throughput RNA sequencing (RNAseq) produces large data sets containing the expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and to develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate this aspect. We therefore propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation by additional metrics that are sensitive to the tissue-related bias. Evaluations show that low-complexity gene selection approaches in particular profit from introducing tissue-awareness.
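One simple way to give a gene a tissue-wise context is to standardize its expression within each tissue of origin before ranking, so that a gene is not selected merely because tissues differ in baseline expression (a minimal sketch; the extension proposed in the study is more general, and the function and data here are invented):

```python
# Per-tissue standardization of one gene's expression values: removes the
# tissue baseline so only within-tissue variation remains for selection.

from statistics import mean, pstdev

def tissue_aware_scores(samples):
    """samples: list of (tissue, expression_value) for one gene."""
    by_tissue = {}
    for tissue, value in samples:
        by_tissue.setdefault(tissue, []).append(value)
    scores = []
    for tissue, value in samples:
        vals = by_tissue[tissue]
        sd = pstdev(vals) or 1.0          # avoid division by zero
        scores.append((value - mean(vals)) / sd)
    return scores

# A gene with a strong tissue baseline but no within-tissue signal:
samples = [("lung", 10.0), ("lung", 10.2), ("liver", 2.0), ("liver", 1.8)]
z = tissue_aware_scores(samples)
```

After standardization the large lung/liver offset vanishes, so a selection method operating on the scores would no longer pick this gene for its tissue difference alone.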
In rural or remote areas, resource constrained smart micro-grid (RCSMG) architectures can provide a cost-effective power supply alternative in cases when connectivity to the national power grid is impeded by factors such as load shedding. RCSMG architectures can be designed to handle communications over a distributed lossy network in order to minimise operation costs. However, due to the unreliable nature of lossy networks, communicated data can be distorted by noise additions that alter the veracity of the data. In this chapter, we consider cases in which an adversary who is internal to the RCSMG deliberately distorts communicated data to gain an unfair advantage over the RCSMG’s users. The adversary’s goal is to mask malicious data manipulations as distortions due to additive noise caused by communication channel unreliability. Distinguishing malicious data distortions from benign distortions is important in ensuring the trustworthiness of the RCSMG. Perturbation data anonymisation algorithms can be used to alter transmitted data so that adversarial manipulation of the data reveals no information that the adversary can take advantage of. However, because existing data perturbation anonymisation algorithms operate by using additive noise to anonymise data, using these algorithms in the RCSMG context is challenging. This is due to the fact that distinguishing benign noise additions from malicious noise additions is a difficult problem. We present a brief survey of cases of privacy violations due to inferences drawn from observed power consumption patterns in RCSMGs, and propose a method of mitigating these risks. The lesson here is that while RCSMGs give users more control over power management and distribution, good anonymisation is essential to protecting personal information on RCSMGs.
In this chapter, we provide a framework to specify how cheating attacks can be conducted successfully on power marketing schemes in resource constrained smart micro-grids. This is an important problem because such cheating attacks can destabilise the micro-grid and, in the worst case, result in its breakdown. We consider three aspects in relation to modelling cheating attacks on power auctioning schemes. First, we aim to specify exactly how, in spite of the resource-constrained character of the micro-grid, cheating can be conducted successfully. Second, we consider how mitigations can be modelled to prevent cheating, and third, we discuss methods of maintaining grid stability and reliability even in the presence of cheating attacks. We use an Automated-Cheating-Attack (ACA) concept to build a taxonomy of cheating attacks based on the idea of adversarial acquisition of surplus energy. Adversarial acquisitions of surplus energy allow malicious users to pay less for access to more power than the quota allowed for the price paid. The impact on honest users is the lack of an adequate supply of energy to meet power demand requests. We conclude with a discussion of the performance overhead of provoking, detecting, and mitigating such attacks efficiently.
Resource constrained smart micro-grid architectures describe a class of smart micro-grid architectures that handle communications operations over a lossy network and depend on a distributed collection of power generation and storage units. Disadvantaged communities with no or intermittent access to national power networks can benefit from such a micro-grid model by using low-cost communication devices to coordinate power generation, consumption, and storage. Furthermore, this solution is both cost-effective and environmentally friendly. One model for such micro-grids is for users to agree on a power sharing scheme in which individual generator owners sell excess unused power to users wanting access to power. Since the micro-grid relies on distributed renewable energy generation sources, which are variable and only partly predictable, coordinating micro-grid operations with distributed algorithms is a necessity for grid stability. Grid stability is crucial in retaining user trust in the dependability of the micro-grid and user participation in the power sharing scheme, because user withdrawals can cause the grid to break down, which is undesirable. In this chapter, we present a distributed architecture for fair power distribution and billing on micro-grids. The architecture is designed to operate efficiently over a lossy communication network, which is an advantage for disadvantaged communities. We build on the architecture to discuss grid coordination, notably how tasks such as metering, power resource allocation, forecasting, and scheduling can be handled. All four tasks are managed by a feedback control loop that monitors the performance and behaviour of the micro-grid and, based on historical data, makes decisions to ensure the smooth operation of the grid. Finally, since lossy networks are undependable, differentiating system failures from adversarial manipulations is an important consideration for grid stability. We therefore provide a characterisation of potential adversarial models and discuss possible mitigation measures.
Power Systems
(2018)
Studies indicate that reliable access to power is an important enabler for economic growth. To this end, modern energy management systems have seen a shift from reliance on time-consuming manual procedures, to highly automated management, with current energy provisioning systems being run as cyber-physical systems. Operating energy grids as a cyber-physical system offers the advantage of increased reliability and dependability, but also raises issues of security and privacy. In this chapter, we provide an overview of the contents of this book showing the interrelation between the topics of the chapters in terms of smart energy provisioning. We begin by discussing the concept of smart-grids in general, proceeding to narrow our focus to smart micro-grids in particular. Lossy networks also provide an interesting framework for enabling the implementation of smart micro-grids in remote/rural areas, where deploying standard smart grids is economically and structurally infeasible. To this end, we consider an architectural design for a smart micro-grid suited to low-processing capable devices. We model malicious behaviour, and propose mitigation measures based on properties to distinguish normal from malicious behaviour.
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^α), α < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class, including in particular kernel ridge regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for α) as n → ∞, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
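A minimal sketch of the divide-and-conquer scheme, using kernel ridge regression as one instance of the spectral regularization class: the sample is split into m disjoint subsamples, a KRR estimator is fit on each, and the predictors are averaged. The Gaussian kernel, the λ scaling, and all parameter names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def distributed_krr(X, y, m, lam=1e-3, gamma=10.0):
    """Divide-and-conquer kernel ridge regression for 1-D inputs:
    partition the data into m disjoint random subsamples, solve KRR on
    each, and predict by simple averaging. Illustrative parameters."""
    rng = np.random.default_rng(0)
    parts = np.array_split(rng.permutation(len(X)), m)
    models = []
    for idx in parts:
        Xi, yi = X[idx], y[idx]
        # Gaussian kernel matrix on the subsample
        K = np.exp(-gamma * (Xi[:, None] - Xi[None, :]) ** 2)
        alpha = np.linalg.solve(K + lam * len(idx) * np.eye(len(idx)), yi)
        models.append((Xi, alpha))

    def predict(x):
        preds = [np.exp(-gamma * (Xi - x) ** 2) @ alpha for Xi, alpha in models]
        return float(np.mean(preds))  # simple averaging across subsamples

    return predict
```

Each subsample solve costs O((n/m)^3) instead of O(n^3), which is the computational gain the abstract refers to; the theoretical contribution is showing how large m may grow before this averaging loses minimax optimality.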
The desiccation of the Aral Sea and related changes in hydroclimatic conditions on a regional level have been a prominent research topic for the past decades. The key problem of scientific research projects devoted to investigating the modern hydrological regime of the Aral Sea basin is its discontinuous nature: only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea, based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water assessment tool that utilizes the power of both classical physically based and modern machine learning models, for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).
BACKGROUND: The formation of a functionally confluent endothelial cell (EC) monolayer affords EC proliferation, which occurs only with appropriate migratory activity. AIM OF THE STUDY: The migratory pathway of human umbilical vein endothelial cells (HUVEC) was investigated on different polymeric substrates. MATERIAL AND METHODS: Surface characterization of the polymers was performed by contact angle measurements and atomic force microscopy under wet conditions. 30,000 HUVEC per well were seeded on polytetrafluoroethylene (PTFE) (θadv = 119° ± 2°), on a low-attachment plate (LAP, θadv = 28° ± 2°) and on polystyrene-based tissue culture plates (TCP, θadv = 22° ± 1°). HUVEC tracks (trajectories) were recorded by time-lapse microscopy, and the Euclidean distance (straight line between starting and end point), the total distance, and the velocities of HUVEC not leaving the field of view were determined. RESULTS: On PTFE, 42 HUVEC were in the field of view directly after seeding. The mean length of single migration steps (SML) was 6.1 ± 5.2 µm, the mean velocity (MV) 0.40 ± 0.3 µm·min−1, and the complete length of the trajectory (LT) was 710 ± 440 µm. On TCP, 82 HUVEC were in the field of view subsequent to seeding. The LT was 840 ± 550 µm, the SML 6.1 ± 5.2 µm, and the MV 0.44 ± 0.3 µm·min−1. Compared to PTFE and TCP, the trajectories on LAP differed significantly with respect to the SML (2.4 ± 3.9 µm, p < 0.05), the MV (0.16 ± 0.3 µm·min−1, p < 0.05), and the LT (410 ± 300 µm, p < 0.05). Solely on TCP, a nearly confluent EC monolayer developed after three days. While on TCP diffuse signals of vinculin were found over the whole basal cell surface, organizing the binding of the cells by focal adhesions, on PTFE vinculin was merely arranged at the cell rims, and on the hydrophilic material (LAP) no focal adhesions were found.
CONCLUSION: The study revealed that the wettability of polymers affected not only the initial adherence but also the migration of EC, which is of importance for the proliferation and ultimately the endothelialization of polymer-based biomaterials.
We consider chimera states in a one-dimensional medium of nonlinear nonlocally coupled phase oscillators. Stationary inhomogeneous solutions of the Ott-Antonsen equation for a complex order parameter that correspond to fundamental chimeras have been constructed. Stability calculations reveal that only some of these states are stable. The direct numerical simulation has shown that these structures under certain conditions are transformed to breathing chimera regimes because of the development of instability. Further development of instability leads to turbulent chimeras.
In this chapter, an overview is given of the systematic eradication of basic science foci in European universities over the last two decades. This happens under the slogan of optimising university education for the needs and demands of society. It is pointed out that reliance on “market demands” brings with it long-term deficiencies in the maintenance of the basic and advanced knowledge construction in societies that is necessary for long-term future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up with the opposite effect, degrading its quality and its expected long-term positive impact on society.
Since the 1980s, central governments have decentralized forestry to local governments in many countries of the Global South. More recently, REDD+ has started to impact forest policy-making in these countries by providing incentives to ensure a national-level approach to reducing emissions from deforestation and forest degradation. Höhne et al. analyze to what extent central governments have rebuilt capacity at the national level, imposed regulations from above, and interfered in forest management by local governments for advancing REDD+. Using the examples of Brazil and Indonesia, the chapter illustrates that while REDD+ has not initiated a large-scale recentralization in the forestry sector, it has supported the reinforcement and pooling of REDD+ related competences at the central government level.
Learning how to prove
(2018)
We have developed an alternative approach to teaching computer science students how to prove. First, students are taught how to prove theorems with the Coq proof assistant. In a second, more difficult step, students transfer their acquired skills to the area of textbook proofs. In this article we present a realisation of the second step. Proofs in Coq have a high degree of formality while textbook proofs have only a medium one. Therefore our key idea is to reduce the degree of formality from the level of Coq to textbook proofs in several small steps. For that purpose we introduce three proof styles between Coq and textbook proofs, called line by line comments, weakened line by line comments, and structure faithful proofs. While this article is mostly conceptual, we also report on experiences with putting our approach into practice.
Modern routing algorithms reduce query time by depending heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to their geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than a normal A*.
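To illustrate tile loads as the cost measure discussed above, a toy A* search on a grid that counts how many distinct map tiles its expansions touch. The tile partition and the `tile`/`passable` interface are hypothetical stand-ins, not NDS specifics.

```python
import heapq

def astar_tile_loads(start, goal, tile, passable):
    """A* on a 4-connected grid with unit edge costs and a Manhattan
    heuristic; returns (shortest distance, number of distinct tiles
    touched by expanded nodes). tile(node) -> tile id is hypothetical."""
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    loaded = set()          # tiles that would have to be loaded
    dist = {start: 0}
    pq = [(h(start), start)]
    while pq:
        _, n = heapq.heappop(pq)
        loaded.add(tile(n))  # expanding n requires its tile in memory
        if n == goal:
            return dist[n], len(loaded)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = (n[0] + dx, n[1] + dy)
            if passable(m) and dist[n] + 1 < dist.get(m, float("inf")):
                dist[m] = dist[n] + 1
                heapq.heappush(pq, (dist[m] + h(m), m))
    return None, len(loaded)
```

Counting distinct tiles rather than node expansions mirrors the paper's premise that, on memory-limited devices, the number of tiles loaded dominates run time.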
This paper proposes an education approach for master's and bachelor's students to enhance their skills in the area of reliability, safety, and security of electronic components in automated driving. The approach is based on the active synergetic work of research institutes, academia, and industry within the framework of a joint lab. As an example, a jointly organized summer school with this focus is described and elaborated.
Minimising Information Loss on Anonymised High Dimensional Data with Greedy In-Memory Processing
(2018)
Minimising information loss on anonymised high dimensional data is important for data utility. Syntactic data anonymisation algorithms address this issue by generating datasets that are neither use-case specific nor dependent on runtime specifications. This results in anonymised datasets that can be re-used in different scenarios, which is performance efficient. However, syntactic data anonymisation algorithms incur high information loss on high dimensional data, making the data unusable for analytics. In this paper, we propose an optimised exact quasi-identifier identification scheme, based on the notion of k-anonymity, to generate anonymised high dimensional datasets efficiently and with low information loss. The optimised exact quasi-identifier identification scheme works by identifying and eliminating maximal partial unique column combination (mpUCC) attributes that endanger anonymity. By using in-memory processing to handle the attribute selection procedure, we significantly reduce the processing time required. We evaluated the effectiveness of our proposed approach with an enriched dataset drawn from multiple real-world data sources, and augmented with synthetic values generated in close alignment with the real-world data distributions. Our results indicate that in-memory processing drops attribute selection time for the mpUCC candidates from 400s to 100s, while significantly reducing information loss. In addition, we achieve a time complexity speed-up of O(3^(n/3)) ≈ O(1.4422^n).
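The notion of unique column combinations behind quasi-identifier detection can be illustrated by a brute-force search for minimal unique column combinations; the paper's mpUCC discovery and in-memory optimisation are far more sophisticated than this sketch, which only conveys the concept.

```python
from itertools import combinations

def unique_column_combinations(rows, max_size=3):
    """Brute-force search for minimal unique column combinations
    (candidate quasi-identifiers): column sets whose projection is
    unique across all rows, with no unique proper subset."""
    n_cols = len(rows[0])
    minimal = []
    for size in range(1, max_size + 1):
        for cols in combinations(range(n_cols), size):
            if any(set(m) <= set(cols) for m in minimal):
                continue  # a subset is already unique -> not minimal
            projections = [tuple(r[c] for c in cols) for r in rows]
            if len(set(projections)) == len(rows):
                minimal.append(cols)
    return minimal
```

Columns sets found this way are the ones whose values, in combination, single out individual records and therefore endanger k-anonymity; exhaustive enumeration over all subsets is what yields the exponential O(3^(n/3)) worst case the abstract mentions.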
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable over parametric approaches from the conditioned-generation family, when the training data is limited. We, however, identify a specific representation learning approach that is competitive to the retrieval-based approaches despite the training data limitation.
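A minimal retrieval-based baseline of the kind compared above: given question-answer pairs extracted from chat transcripts, return the answer whose stored question is most similar to the incoming query. The plain bag-of-words cosine similarity is a deliberate simplification, not the similarity measure used in the paper.

```python
import math
from collections import Counter

def suggest_response(query, qa_pairs):
    """Retrieval-based response suggestion: score each stored question
    against the query by bag-of-words cosine similarity and return the
    answer of the best match. qa_pairs is a list of (question, answer)."""
    def vec(text):
        return Counter(text.lower().split())
    def cosine(a, b):
        num = sum(a[w] * b[w] for w in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0
    q = vec(query)
    best = max(qa_pairs, key=lambda pair: cosine(q, vec(pair[0])))
    return best[1]
```

The appeal of such retrieval methods on limited training data, as the abstract notes, is that they only need to find a similar known context rather than learn to generate responses.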
PlAnalyzer
(2018)
In this work we propose PIAnalyzer, a novel approach to analyze PendingIntent related vulnerabilities. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications from the Google Play Store and find 1358 insecure usages of Pendinglntents, including 70 severe vulnerabilities. We manually inspected ten reported vulnerabilities out of which nine correctly reported vulnerabilities, indicating a high precision. The evaluation shows that PIAnalyzer is efficient with an average execution time of 13 seconds per application.
Currently we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments, such as web services, serviceoriented computing or open-source software, an explosion of geodata and geospatial applications or rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence in these rapidly changing, heterogeneous developments. This paper introduces the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selection and linkage of these services to user profiles, application tasks, data resources, or additional software allow for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with geospatial expertise required for the effective, professional solution of a defined application problem. Providing users with geospatial intelligence in the form of web-based, modular services, is a completely different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, in the centre and at the disposal of the users, regardless of their expertise.
Several areas in Southeast Asia are very vulnerable to climate change and unable to take immediate/effective actions on countermeasures due to insufficient capabilities. Malaysia, in particular the east coast of peninsular Malaysia and Sarawak, is known as one of the vulnerable regions to flood disaster. Prolonged and intense rainfall, natural activities and increase in runoff are the main reasons to cause flooding in this area. In addition, topographic conditions also contribute to the occurrence of flood disaster. Kuching city is located in the northwest of Borneo Island and part of Sarawak river catchment. This area is a developing state in Malaysia experiencing rapid urbanization since 2000s, which has caused the insufficient data availability in topography and hydrology. To deal with these challenging issues, this study presents a flood modelling framework using the remote sensing technologies and machine learning techniques to acquire the digital elevation model (DEM) with improved accuracy for the non-surveyed areas. Intensity–duration–frequency (IDF) curves were derived from climate model for various scenario simulations. The developed flood framework will be beneficial for the planners, policymakers, stakeholders as well as researchers in the field of water resource management in the aspect of providing better ideas/tools in dealing with the flooding issues in the region.
Recently blockchain technology has been introduced to execute interacting business processes in a secure and transparent way. While the foundations for process enactment on blockchain have been researched, the execution of decisions on blockchain has not been addressed yet. In this paper we argue that decisions are an essential aspect of interacting business processes, and, therefore, also need to be executed on blockchain. The immutable representation of decision logic can be used by the interacting processes, so that decision taking will be more secure, more transparent, and better auditable. The approach is based on a mapping of the DMN language S-FEEL to Solidity code to be run on the Ethereum blockchain. The work is evaluated by a proof-of-concept prototype and an empirical cost evaluation.
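The mapping from DMN's S-FEEL to Solidity could conceptually resemble the following toy translator for simple unary tests; this is only an illustration of the idea, not the authors' actual mapping, and the handled syntax is a small hypothetical subset.

```python
def sfeel_test_to_solidity(variable, unary_test):
    """Translate a simple S-FEEL unary test (e.g. "< 100", "[10..20]",
    "42") into a Solidity boolean expression over `variable`.
    Hypothetical subset, for illustration only."""
    t = unary_test.strip()
    if t.startswith("[") and t.endswith("]"):
        # closed range test "[lo..hi]"
        lo, hi = t[1:-1].split("..")
        return f"({variable} >= {lo.strip()} && {variable} <= {hi.strip()})"
    for op in ("<=", ">=", "<", ">"):  # check two-char operators first
        if t.startswith(op):
            return f"({variable} {op} {t[len(op):].strip()})"
    return f"({variable} == {t})"      # equality test on a literal
```

Expressions generated this way could be embedded in a contract's decision function, so that the decision logic is evaluated immutably on-chain, which is the transparency and auditability argument made above.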
Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices, and can be upper bounded in function of the sparsity of the set of models under consideration, for generic design matrices. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby giving an interpolation between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound.
We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of R-d. We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and variance 1/n giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices for C.
We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m̂ ∈ {1, …, D} only based on the knowledge of the first m̂ singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual based stopping rule are provided, which give a clear picture in which situation optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
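A sketch of a sequential spectral cut-off estimator with a residual-based stopping rule: components are added until the residual norm drops below a threshold. Note that for simplicity this sketch computes the full SVD up front, whereas the point of the paper is to stop after the first m̂ singular values and vectors; the threshold rule is illustrative, not the paper's exact rule.

```python
import numpy as np

def truncated_svd_estimate(A, y, tau):
    """Spectral cut-off estimator for A x = y with a residual-based
    stopping rule: keep adding SVD components until ||y - A x|| <= tau.
    Returns the estimate and the chosen truncation level."""
    U, s, Vt = np.linalg.svd(A)
    x = np.zeros(A.shape[1])
    for m in range(len(s)):
        # add the m-th spectral component (u_m' y / s_m) v_m
        x = x + (U[:, m] @ y / s[m]) * Vt[m]
        if np.linalg.norm(y - A @ x) <= tau:
            return x, m + 1
    return x, len(s)
```

Stopping early avoids inverting the small singular values, which is where noise amplification in inverse problems comes from; the statistical question studied above is whether such a data-driven m̂ can match the oracle truncation level.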
OpenLL
(2018)
Today's rendering APIs lack robust functionality and capabilities for dynamic, real-time text rendering and labeling, which represent key requirements for 3D application design in many fields. As a consequence, most rendering systems are barely or not at all equipped with respective capabilities. This paper drafts the unified text rendering and labeling API OpenLL, intended to complement common rendering APIs, frameworks, and transmission formats. To this end, various uses of static and dynamic placement of labels are showcased and a text interaction technique is presented. Furthermore, API design constraints with respect to state-of-the-art text rendering techniques are discussed. This contribution is intended to initiate a community-driven specification of a free and open label library.
A hybrid design approach for the hierarchical physical implementation design flow is presented and demonstrated on a fault-tolerant low-power multiprocessor system. The proposed flow allows implementing selected submodules in parallel with contrary requirements such as identical placement and individual block implementation. The overall system contains four Leon2 cores, communicates via the Waterbear framework, and supports Adaptive Voltage Scaling (AVS) functionality. Three of the processor core variants are derived from the first baseline reference core but implemented individually at block level based on their clock tree specification. The chip is prepared for space applications and designed with triple modular redundancy (TMR) for control parts. The low-power performance is enabled by contemporary power and clock management control. An ASIC is fabricated in a low-power 0.13 µm BiCMOS technology process node.
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations set by the human visual system.
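The train/test distortion mismatch discussed above can be reproduced with two simple noise models. Images are represented here as nested lists of grayscale values in [0, 1]; this setup is illustrative and not the paper's actual evaluation pipeline.

```python
import random

def salt_and_pepper(img, p, seed=0):
    """Salt-and-pepper noise: with probability p a pixel is replaced
    by pure black (0.0) or pure white (1.0)."""
    rng = random.Random(seed)
    return [[rng.choice((0.0, 1.0)) if rng.random() < p else v for v in row]
            for row in img]

def uniform_noise(img, width, seed=0):
    """Additive uniform noise in [-width/2, width/2], clipped to [0, 1].
    A different noise distribution than salt-and-pepper, illustrating
    the train/test mismatch: the same image, two distinct corruptions."""
    rng = random.Random(seed)
    return [[min(1.0, max(0.0, v + rng.uniform(-width / 2, width / 2)))
             for v in row] for row in img]
```

A network trained only on outputs of one of these functions sees a very different pixel statistic than the other produces, which is the abstract's point about poor cross-distortion generalisation.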
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjoint general and specific word distributions, resulting in clear-cut topic representations.
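The entropy-based distinction between collection-specific and collection-independent words can be sketched as follows; the word-count interface is a strong simplification of the actual topic model.

```python
import math
from collections import Counter

def collection_entropy(word, collections):
    """Entropy of a word's count distribution over collections: high
    entropy means the word is spread evenly (collection-independent),
    low entropy means it is concentrated (collection-specific).
    `collections` is a list of token lists, one per collection."""
    counts = [Counter(tokens)[word] for tokens in collections]
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Thresholding such an entropy score is one simple way to route words into either a shared (general) or a per-collection (specific) distribution, keeping the two disjoint as the abstract requires.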
An overview is given on the current state of X-ray absorption measurements on silicate melts and glasses. The challenges, limitations, and achievements of analyzing X-ray absorption spectra measured in liquids to determine structural properties of major and minor elements in magmas are described, with particular focus on describing non-Gaussian pair distribution functions in highly disordered glasses and melts, measured at in situ conditions. This includes a discussion on the progress of combining experiments with data from molecular dynamics simulations. For the measurements at conditions of the deep Earth, various experimental approaches and necessities are discussed and two examples are described in more detail. Finally, the achievements and prospects are presented for measuring X-ray absorption spectra indirectly by X-ray Raman scattering.
Objective: This article presents results of the EKipeE study, in which adult children of mentally ill parents were surveyed. The aim was to describe the long-term effects on their biography, personality, and social relationships as perceived by the respondents. In addition, associations between selected adverse childhood experiences and problems in adulthood were examined. Furthermore, the support needs and wishes of the adult children were recorded. Method: In an online questionnaire study, N=561 adult children of mentally ill parents were surveyed. The quantitative questionnaire data were statistically analyzed with SPSS 23.0; the free-text answers and comments were evaluated by content analysis. Results: The study participants reported a wide range of emotional and social problems that they perceive as consequences of their childhood experiences. Very frequently, they feel that their identity and behaviour have been negatively shaped. Many therefore express a need for professional counselling and support. Discussion: This is the most comprehensive study to date on the long-term consequences of a childhood with a mentally ill parent in the German-speaking countries. The results make clear that early help and prevention services for affected children, parents, and families are necessary. The provision of specific counselling services for adult children of mentally ill parents is also recommended.
Collaboration during the modeling process is cumbersome and subject to various limitations. Following the successful transfer of first process modeling languages to the augmented world, non-transparent processes can be visualized in a more comprehensible way. With the aim of raising the comfort, speed, accuracy, and versatility of real-world process augmentations, a framework for the bidirectional interplay of the common process modeling world and the augmented world has been designed as a morphological box. Its demonstration shows that the outlined AR integrations work. The identified dimensions were derived from (1) a designed knowledge construction axiom, (2) a designed meta-model, (3) designed use cases, and (4) designed directional interplay modes. Through a workshop-based survey, the best AR modeling configuration so far is identified, which can serve as a basis for benchmarks and implementations.
Microstructure Characterisation of Advanced Materials via 2D and 3D X-Ray Refraction Techniques
(2018)
3D imaging techniques have an enormous potential to understand the microstructure, its evolution, and its link to mechanical, thermal, and transport properties. In this conference paper we report the use of a powerful, yet not so widespread, set of X-ray techniques based on refraction effects. X-ray refraction allows determining the internal specific surface (surface per unit volume) in a non-destructive, position- and orientation-sensitive fashion, and with nanometric detectability. We demonstrate showcases of ceramics and composite materials, where microstructural parameters could be obtained in a way unrivalled even by high-resolution techniques such as electron microscopy or computed tomography. We present an in situ analysis of the damage evolution in an Al/Al2O3 metal matrix composite during tensile load, and the identification of void formation (different kinds of defects, particularly unsintered powder hidden in pores, and small inhomogeneities such as cracks) in Ti64 parts produced by selective laser melting, using synchrotron X-ray refraction radiography and tomography.
The incorporation of inorganic particles in a polymer matrix has been established as a method to adjust the mechanical performance of composite materials. We report on the influence of covalent integration of magnetic nanoparticles (MNP) on the actuation behavior and mechanical performance of hybrid nanocomposite (H-NC) based shape-memory polymer actuators (SMPA). The H-NC were synthesized by reacting two types of oligo(ω-pentadecalactone) (OPDL) based precursors with terminal hydroxy groups, a three-arm OPDL (3AOPDL, Mn = 6000 g·mol−1) and an OPDL (Mn = 3300 g·mol−1) coated magnetite nanoparticle (Ø = 10 ± 2 nm), with a diisocyanate. These H-NC were compared to the homopolymer network regarding actuation performance, contractual stress (σcontr), as well as thermal and mechanical properties. The melting range of the OPDL crystals (ΔTm,OPDL) was shifted from 36–76 °C in the homopolymer network to 41–81 °C for H-NC with 9 wt% MNP content. The actuators were explored by variation of the separating temperature (Tsep), which splits the OPDL crystalline domain into actuating and geometry determining segments. Tsep was varied within the melting range of the nanocomposites, whereby the actuation capability and contractual stress (σcontr) of the nanocomposite actuators could be adjusted. The reversible strain (εrev) decreased from 11 ± 0.3% for the homopolymer network to 3.2 ± 0.3% for H-NC9 with 9 wt% MNP, indicating a restraining effect of the MNP on chain mobility. The results show that the performance of H-NCs in terms of thermal and elastic properties can be tailored by the MNP content; however, for higher reversible actuation, lower MNP contents are preferable.
Plant X-tender
(2018)
Cloning multiple DNA fragments for delivery of several genes of interest into the plant genome is one of the main technological challenges in plant synthetic biology. Despite several modular assembly methods developed in recent years, the plant biotechnology community has not widely adopted them yet, probably due to the lack of appropriate vectors and software tools. Here we present Plant X-tender, an extension of the highly efficient, scar-free and sequence-independent multigene assembly strategy AssemblX, based on overlap-dependent cloning methods and rare-cutting restriction enzymes. Plant X-tender consists of a set of plant expression vectors and protocols for the most efficient cloning into the novel vector set, and thus introduces the advantages of AssemblX into plant synthetic biology. The novel vector set covers different backbones and selection markers to allow full design flexibility. We have included ccdB counterselection, thereby allowing the transfer of multigene constructs into the novel vector set in a straightforward and highly efficient way. Vectors are available as empty backbones and are fully flexible regarding the orientation of expression cassettes and the addition of linkers between them, if required. We optimised the assembly and subcloning protocol by testing different scar-less assembly approaches: the noncommercial SLiCE and TAR methods and the commercial Gibson assembly and NEBuilder HiFi DNA assembly kits. Plant X-tender was applicable even in combination with low-efficiency homemade chemically competent or electrocompetent Escherichia coli. We further validated the developed procedure for plant protein expression by cloning two cassettes into the newly developed vectors and subsequently transferring them to Nicotiana benthamiana in a transient expression setup. Thereby we show that multigene constructs can be delivered into plant cells in a streamlined and highly efficient way.
Our results will support faster introduction of synthetic biology into plant science.
We present a combined microscopic and macroscopic study of YbxCo4Sb12 skutterudites for a range of nominal filling fractions, 0.15 < x < 0.75. The samples were synthesized using two different methods — a melt–quench–annealing route in evacuated quartz ampoules and a non-equilibrium ball-mill route — for which we directly compare the crystal structure and phase composition as well as the thermoelectric properties. Rietveld refinements of high-quality neutron powder diffraction data reveal about a 30–40% smaller Yb occupancy on the crystallographic 2a site than nominally expected for both synthesis routes. We observe a maximum filling fraction of at least 0.439(7) for a sample synthesized by the ball-mill route, exceeding theoretical predictions of the filling fraction limit of 0.2–0.3. A single secondary phase of CoSb2 is observed in ball-mill-synthesized samples, while two secondary phases, CoSb2 and YbSb2, are detected for samples prepared by the ampoule route. A detrimental influence of the secondary phases on the thermoelectric properties is observed for secondary-phase fractions larger than 8 wt % regardless of the kind of secondary phase. The largest figure of merit of all samples, with a ZT ∼ 1.0 at 723 K, is observed for the sample with a refined Yb content of x2a = 0.159(3), synthesized by the ampoule route.
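The dimensionless figure of merit reported above follows the standard definition zT = S²σT/κ, combining the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. A minimal sketch of this calculation, using illustrative transport values (assumptions chosen to land near zT ≈ 1, not the measured data of this study):

```python
def figure_of_merit(seebeck_V_per_K, elec_cond_S_per_m, thermal_cond_W_per_mK, temp_K):
    """Thermoelectric figure of merit: zT = S^2 * sigma * T / kappa (dimensionless)."""
    return seebeck_V_per_K**2 * elec_cond_S_per_m * temp_K / thermal_cond_W_per_mK

# Illustrative values typical in order of magnitude for filled skutterudites
# near 723 K (hypothetical inputs, not this paper's measurements):
zT = figure_of_merit(
    seebeck_V_per_K=200e-6,     # 200 uV/K
    elec_cond_S_per_m=1.0e5,    # 1000 S/cm
    thermal_cond_W_per_mK=2.9,  # total (lattice + electronic)
    temp_K=723,
)
print(f"zT = {zT:.2f}")  # close to 1.0 with these inputs
```

The calculation makes clear why rattler filling helps: partially filled Yb sites scatter phonons and lower κ in the denominator while the skutterudite framework keeps S and σ in the numerator high.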
The stratospheric polar vortex can influence the tropospheric circulation and thereby winter weather in the mid-latitudes. Weak vortex states, often associated with sudden stratospheric warmings (SSW), have been shown to increase the risk of cold spells especially over Eurasia, but their role in North American winters is less clear. Using cluster analysis, we show that there are two dominant patterns of increased polar cap heights in the lower stratosphere. Both patterns represent a weak polar vortex, but they are associated with different wave mechanisms and different regional tropospheric impacts. The first pattern is zonally symmetric and associated with absorbed upward-propagating wave activity, leading to a negative phase of the North Atlantic Oscillation (NAO) and cold-air outbreaks over northern Eurasia. This coupling mechanism is well documented in the literature and is consistent with the downward migration of the northern annular mode (NAM). The second pattern is zonally asymmetric and linked to planetary waves reflected downward over Canada, followed by a negative phase of the Western Pacific Oscillation (WPO) and cold spells in central Canada and the Great Lakes region. Causal effect network (CEN) analyses confirm the atmospheric pathways associated with this asymmetric pattern. Moreover, our findings suggest that the reflective mechanism is sensitive to the exact region of upward wave-activity fluxes and depends on the strength of the vortex. Identifying the causal pathways that operate on weekly to monthly timescales can pave the way for improved sub-seasonal to seasonal forecasting of cold spells in the mid-latitudes.
Precision fruticulture addresses site- or tree-adapted crop management. In the present study, soil and tree status as well as fruit quality at harvest were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees received irrigation in addition to precipitation, with three irrigation levels (0, 50 and 100%) applied. Measurements included readings of the apparent electrical conductivity of the soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of six trees per treatment were done on fruit, considering the pigment contents and quality parameters. Midday stem water potential (SWP), the normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7, indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m−1. Fruit harvested in a drought-stress zone showed an enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected the soluble solids content and, hence, the quality of the fruit. Overall, the results highlight that spatial variation in orchards can be found even where only marginal variability of soil properties can be assumed.
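A normalized CWSI of the kind used above is commonly computed by scaling the measured canopy temperature between a well-watered (wet) and a non-transpiring (dry) reference temperature. A minimal sketch, assuming this standard baseline formulation; the function name and the numeric inputs are illustrative, not the study's calibration:

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Normalized crop water stress index.

    0 = well-watered (canopy at the wet baseline),
    1 = fully stressed (canopy at the dry baseline).
    t_wet and t_dry are reference temperatures, e.g. from reference
    surfaces or baseline models (assumed inputs here).
    """
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Hypothetical mid-day temperatures in degrees Celsius (not measured data):
print(round(cwsi(t_canopy=28.0, t_wet=24.0, t_dry=34.0), 2))  # prints 0.4
```

Values in the reported 0.1 to 0.7 range thus correspond to canopy temperatures spanning much of the interval between the two baselines, consistent with the mix of irrigated and non-irrigated trees.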