Precision fruticulture addresses site- or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest, were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation at three irrigation levels (0, 50 and 100%). Measurements included readings of apparent electrical conductivity of soil (ECa), midday stem water potential (SWP), canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of fruit from six trees per treatment covered pigment contents and quality parameters. SWP, the normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7, indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m⁻¹. Fruit harvested in a drought-stress zone showed an enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected soluble solids content and, hence, fruit quality. Overall, the results highlight that spatial variation in orchards can be found even where only marginal variability of soil properties can be assumed.
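The normalized CWSI referred to above is conventionally computed from canopy temperature against wet (non-stressed) and dry (non-transpiring) reference temperatures; a minimal sketch of that standard formulation (the temperature values below are illustrative assumptions, not data from the study):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Normalized crop water stress index: 0 = well-watered, 1 = fully stressed."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Illustrative canopy and reference temperatures in degrees Celsius.
print(cwsi(t_canopy=28.0, t_wet=24.0, t_dry=34.0))  # 0.4
```

Applying this index per pixel of a thermal image yields spatial stress maps of the kind used to support the SWP readings.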
Background: Pavlovian processes are thought to play an important role in the development, maintenance and relapse of alcohol dependence, possibly by influencing and usurping ongoing thought and behavior. The influence of Pavlovian stimuli on ongoing behavior is paradigmatically measured by Pavlovian-to-instrumental transfer (PIT) tasks. These involve multiple stages and are complex. Whether increased PIT is involved in human alcohol dependence is uncertain. We therefore aimed to establish and validate a modified PIT paradigm that would be robust, consistent, and tolerated by healthy controls as well as by patients suffering from alcohol dependence, and to explore whether alcohol dependence is associated with enhanced Pavlovian-to-instrumental transfer.
Methods: 32 recently detoxified alcohol-dependent patients and 32 age- and gender-matched healthy controls performed a PIT task with instrumental go/no-go approach behaviours. The task involved both Pavlovian stimuli associated with monetary rewards and losses, and images of drinks.
Results: Both patients and healthy controls showed a robust and temporally stable PIT effect. Strengths of PIT effects to drug-related and monetary conditioned stimuli were highly correlated. Patients more frequently showed a PIT effect and the effect was stronger in response to aversively conditioned CSs (conditioned suppression), but there was no group difference in response to appetitive CSs.
Conclusion: The implementation of PIT has favorably robust properties in chronic alcohol-dependent patients and in healthy controls. It shows internal consistency between monetary and drug-related cues. The findings support an association of alcohol dependence with an increased propensity towards PIT.
This paper presents the concept of a community-accessible stratospheric balloon-based observatory that is currently under preparation by a consortium of European research institutes and industry. We present the technical motivation, science case, instrumentation, and a two-stage image stabilization approach of the 0.5-m UV/visible platform. In addition, we briefly describe the novel mid-sized stabilized balloon gondola under design to carry telescopes in the 0.5 to 0.6 m range as well as the currently considered flight option for this platform. Finally, we outline the scientific and technical motivation for a large balloon-based FIR telescope and the ESBO DS approach towards such an infrastructure.
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising method to manipulate light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field to sub-wavelength mode volumes. A particularly interesting coupling regime occurs if the coupling increases to a level at which the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge that can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ℏΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances ω+ and ω− with a minimal separation of Ω = ω+ − ω−.
The coupling between molecular excitations and nanoparticles leads to promising applications. It is, for example, used to enhance the optical cross-section of molecules in surface-enhanced Raman scattering, in Purcell enhancement, or in plasmon-enhanced dye lasers. In a coupled system, new resonances emerge from the original plasmon (ωpl) and exciton (ωex) resonances as
ω± = ½ (ωpl + ωex) ± √( ¼ (ωpl − ωex)² + g² ),    (1)
where g is the coupling parameter. The new resonances show a separation of Δ = ω+ − ω−, and the coupling strength can be deduced from the minimum distance between the two resonances, Ω = Δ(ωpl = ωex).
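Equation (1) can be verified numerically; the sketch below sweeps the plasmon resonance across a fixed exciton resonance and confirms that the minimum branch separation, i.e. the Rabi splitting, equals 2g for this formula (all numerical values are illustrative):

```python
import numpy as np

def coupled_modes(w_pl, w_ex, g):
    """Upper and lower branch frequencies omega_+/- from Eq. (1)."""
    mean = 0.5 * (w_pl + w_ex)
    split = np.sqrt(0.25 * (w_pl - w_ex) ** 2 + g ** 2)
    return mean + split, mean - split

w_ex, g = 2.0, 0.1                 # illustrative energies, e.g. in eV
w_pl = np.linspace(1.5, 2.5, 201)  # detune the plasmon across the exciton
w_plus, w_minus = coupled_modes(w_pl, w_ex, g)

# The anticrossing minimum occurs at zero detuning (w_pl == w_ex).
print((w_plus - w_minus).min())    # ~0.2, i.e. 2*g
```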
Capsella
(2018)
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate that aspect. We therefore propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation with additional metrics that are sensitive to the tissue-related bias. Evaluations show that especially low-complexity gene selection approaches profit from introducing tissue-awareness.
Logical modeling has been widely used to understand and expand the knowledge about protein interactions among different pathways. Realizing this, the caspo-ts system has been proposed recently to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then we demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset to model TCR signaling and (2) the HPN-DREAM challenge dataset to model breast cancer cell lines.
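One generic way to realize such diversification, shown here only as an illustrative sketch and not as the actual caspo-ts algorithm, is to greedily pick solutions that maximize the minimum Hamming distance to those already chosen:

```python
def hamming(a, b):
    """Number of positions where two Boolean vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_diverse(solutions, k):
    """Greedily select k solutions maximizing the minimum pairwise distance."""
    chosen = [solutions[0]]
    while len(chosen) < k:
        best = max(solutions, key=lambda s: min(hamming(s, c) for c in chosen))
        chosen.append(best)
    return chosen

# Toy Boolean networks encoded as edge-activation vectors (hypothetical data).
bns = [(0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0)]
print(greedy_diverse(bns, 2))  # [(0, 0, 0, 0), (1, 1, 1, 1)]
```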
Participants of the 2017 European Space Weather Week in Ostend, Belgium, discussed the stakeholder requirements for space weather-related models. It was emphasized that stakeholders show an increased interest in space weather-related models. Participants of the meeting discussed particular prediction indicators that can provide first-order estimates of the impact of space weather on engineering systems.
In an effort to explain the formation of a narrow third radiation belt at ultra-relativistic energies detected during a solar storm in September 2012 (ref. 1), Mann et al. (ref. 2) present simulations from which they conclude it arises from a process of outward radial diffusion alone, without the need for additional loss processes from higher frequency waves. The comparison of observations with the model in Figs 2 and 3 of their Article clearly shows that even with strong radial diffusion rates, the model predicts a third belt near L* = 3 that is twice as wide as observed and approximately an order of magnitude more intense. We therefore disagree with their interpretation that “the agreement between the absolute fluxes from the model and those observed by REPT [the Relativistic Electron Proton Telescope] shown on Figs 2 and 3 is excellent.”
Previous studies (ref. 3) have shown that outward radial diffusion plays a very important role in the dynamics of the outer belt and is capable of explaining rapid reductions in the electron flux. It has also been shown that it can produce remnant belts (Fig. 2 of a long-term simulation study, ref. 4). However, radial diffusion alone cannot explain the formation of the narrow third belt at multi-MeV energies during September 2012. An additional loss mechanism is required.
Higher radial diffusion rates cannot improve the comparison of the model presented by Mann et al. with observations. A further increase in the radial diffusion rates (reported in Fig. 4 of the Supplementary Information of ref. 2) results in the overestimation of the outer belt fluxes by up to three orders of magnitude at an energy of 3.4 MeV.
Observations at 2 MeV, where belts show only a two-zone structure, were not presented by Mann et al. Moreover, simulations of electrons with energies below 2 MeV with the same diffusion rates and boundary conditions used by the authors would probably produce very strong depletions down to L = 3–3.5, where L is radial distance from the centre of the Earth to the given field line in the equatorial plane. Observations do not show a non-adiabatic loss below L ∼ 4.5 for 2 MeV. Such different dynamics between 2 MeV and above 4 MeV at around L = 3.5 are another indication that particles are scattered by electromagnetic ion cyclotron (EMIC) waves that affect only energies above a certain threshold.
Observations of the phase space density (PSD) provide additional evidence for the local loss of electrons. Around L* = 3.5–4, PSD shows a significant decrease by an order of magnitude starting in the afternoon of 3 September (Fig. 1a), while PSD above L* = 4 is increasing. The minimum in PSD between L* = 3.5–4 continues to decrease until 4 September. This evolution demonstrates that the loss is not produced by outward diffusion. Radial diffusion cannot produce deepening minima, as it works to smooth gradients. Just as growing peaks in PSD show the presence of localized acceleration (ref. 5), deepening minima show the presence of localized loss.
Figure 1: Time evolution of radiation profiles in electron PSD at relativistic and ultra-relativistic energies.
a, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model (ref. 10) and for μ = 2,500 MeV G−1, K = 0.05 RE G0.5 (where RE is the radius of the Earth). b, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model and for μ = 700 MeV G−1, corresponding to MeV energies in the heart of the belt. The minimum in PSD in the heart of the multi-MeV electron radiation belt between 3.5 and 4 RE, deepening between the afternoon of 3 September and 5 September, clearly shows that the narrow remnant belt at multi-MeV energies below 3.5 RE is produced by local loss.
The minimum in the outer boundary is reached on the evening of 2 September. After that, the outer boundary moves up, while the minimum decreases by approximately an order of magnitude, clearly showing that this main decrease cannot be explained by outward diffusion and requires additional loss processes. The analysis of profiles of PSD is a standard tool used, for example, in the study of electron acceleration (ref. 5) and routinely used by the entire Van Allen Probes team. In the Supplementary Information, we show that this analysis is validated by using different magnetic field models. The Supplementary Information also shows that the measurements are above background noise.
Deepening minima at multi-MeV energies during the times when the boundary flux increases are clearly seen in Fig. 1a. They show that there must be localized loss, as radial diffusion cannot produce a minimum that becomes lower with time. At lower energies of 1–2 MeV, which correspond to lower values of the first adiabatic invariant μ (Fig. 1b), the profiles are monotonic between L* = 3–3.5, consistent with the absence of scattering by EMIC waves, which affect only electrons above a certain energy threshold (refs 6–9).
In summary, the results of the modelling and observations presented by Mann et al. do not lend support to the claim of explaining the dynamics of the ultra-relativistic third Van Allen radiation belt in terms of an outward radial diffusion process alone. While the outward radial diffusion driven by loss to the magnetopause (ref. 2) is certainly operating during this storm, there is compelling observational and modelling (refs 2, 6) evidence showing that very efficient localized electron loss operates during this storm at multi-MeV energies, consistent with localized loss produced by EMIC waves.
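The central physical argument here, that diffusion smooths gradients and can therefore never deepen an interior minimum, can be illustrated with a toy one-dimensional diffusion model (the profile values and the diffusion coefficient are purely illustrative, not geophysical parameters):

```python
import numpy as np

def diffuse(profile, d=0.2, steps=100):
    """Explicit 1-D diffusion with fixed boundary values (toy stand-in for
    radial diffusion acting on a phase space density profile)."""
    f = np.array(profile, dtype=float)
    for _ in range(steps):
        f[1:-1] += d * (f[2:] - 2.0 * f[1:-1] + f[:-2])
    return f

start = np.array([1.0, 0.8, 0.2, 0.9, 1.0])  # profile with an interior minimum
out = diffuse(start)
# Diffusion fills the minimum in; it never makes it deeper.
print(start.min(), out.min())
```

A deepening minimum, as observed in the PSD data, therefore requires a loss term localized near the minimum rather than diffusion alone.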
An efficient Design Space Exploration (DSE) is imperative for the design of modern, highly complex embedded systems in order to steer the development towards optimal design points. The early evaluation of design decisions at system-level abstraction layer helps to find promising regions for subsequent development steps in lower abstraction levels by diminishing the complexity of the search problem. In recent works, symbolic techniques, especially Answer Set Programming (ASP) modulo Theories (ASPmT), have been shown to find feasible solutions of highly complex system-level synthesis problems with non-linear constraints very efficiently. In this paper, we present a novel approach to a holistic system-level DSE based on ASPmT. To this end, we include additional background theories that concurrently guarantee compliance with hard constraints and perform the simultaneous optimization of several design objectives. We implement and compare our approach with a state-of-the-art preference handling framework for ASP. Experimental results indicate that our proposed method produces better solutions with respect to both diversity and convergence to the true Pareto front.
Utilizing quad-trees for efficient design space exploration with partial assignment evaluation
(2018)
Recently, it has been shown that constraint-based symbolic solving techniques offer an efficient way of deciding binding and routing options in order to obtain a feasible system-level implementation. In combination with various background theories, a feasibility analysis of the resulting system may already be performed on partial solutions. That is, infeasible subsets of mapping and routing options can be pruned early in the decision process, which speeds up solving accordingly. However, a proper design space exploration including multi-objective optimization also requires an efficient structure for storing and managing non-dominated solutions. In this work, we propose and study the usage of the Quad-Tree data structure in the context of partial assignment evaluation during system synthesis. Our experiments show that unnecessary dominance checks can be avoided, which indicates a preference for Quad-Trees over a commonly used list-based implementation for large combinatorial optimization problems.
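For context, the list-based baseline mentioned above can be sketched as follows; every insertion linearly scans the archive for dominance (a generic Pareto-archive sketch for minimization, not the paper's implementation):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class ListArchive:
    """List-based archive of non-dominated objective vectors."""
    def __init__(self):
        self.points = []

    def insert(self, p):
        if any(dominates(q, p) or q == p for q in self.points):
            return False  # p is dominated or a duplicate: reject
        # p survives: drop every archived point that p dominates.
        self.points = [q for q in self.points if not dominates(p, q)]
        self.points.append(p)
        return True

arc = ListArchive()
for p in [(3, 4), (2, 5), (1, 1), (2, 2)]:
    arc.insert(p)
print(arc.points)  # [(1, 1)]
```

A quad-tree replaces this linear scan by descending only into quadrants that can possibly contain dominating or dominated points, which is where the saved dominance checks come from.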
Manufacturing industries are undergoing a major paradigm shift towards more autonomy. Automated planning and scheduling then becomes a necessity. The Planning and Execution Competition for Logistics Robots in Simulation held at ICAPS is based on this scenario and provides an interesting testbed. However, the posed problem is challenging, as also demonstrated by the somewhat weak results in 2017. The domain requires temporal reasoning and dealing with uncertainty. We propose a novel planning system based on Answer Set Programming and the Clingo solver to tackle these problems and incentivize robot cooperation. Our results show a significant performance improvement, both in terms of lower computational requirements and in terms of better game metrics.
Declarative languages for knowledge representation and reasoning provide constructs to define preference relations over the set of possible interpretations, so that preferred models represent optimal solutions of the encoded problem. We introduce the notion of approximation for replacing preference relations with stronger preference relations, that is, relations comparing more pairs of interpretations. Our aim is to accelerate the computation of a non-empty subset of the optimal solutions by means of highly specialized algorithms. We implement our approach in Answer Set Programming (ASP), where problems involving quantitative and qualitative preference relations can be addressed by ASPRIN, implementing a generic optimization algorithm. In contrast, chains of approximations allow us to reduce several preference relations to the preference relations associated with ASP’s native weak constraints and heuristic directives. In this way, ASPRIN can now take advantage of several highly optimized algorithms implemented by ASP solvers for computing optimal solutions.
We propose a new temporal extension of the logic of Here-and-There (HT) and its equilibria obtained by combining it with dynamic logic over (linear) traces. Unlike previous temporal extensions of HT based on linear temporal logic, the dynamic logic features allow us to reason about the composition of actions. For instance, this can be used to exercise fine grained control when planning in robotics, as exemplified by GOLOG. In this paper, we lay the foundations of our approach, and refer to it as Linear Dynamic Equilibrium Logic, or simply DEL. We start by developing the formal framework of DEL and provide relevant characteristic results. Among them, we elaborate upon the relationships to traditional linear dynamic logic and previous temporal extensions of HT.
Modern routing algorithms reduce query time by depending heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to their geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than a normal A*.
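The tile-load metric can be made concrete with a toy grid model; the sketch below runs a plain A* search and counts how many distinct tiles its expansions touch (a simplified stand-in for NDS tiling, not one of the algorithms evaluated in the paper):

```python
import heapq

def astar_tile_loads(grid_w, grid_h, start, goal, tile_size=4):
    """A* on an obstacle-free unit grid; returns (path cost, tiles loaded)."""
    def tile(p):
        return (p[0] // tile_size, p[1] // tile_size)

    def h(p):  # Manhattan distance, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    loaded = {tile(start)}  # every distinct touched tile counts as one load
    best = {start: 0}
    heap = [(h(start), 0, start)]
    while heap:
        f, g, node = heapq.heappop(heap)
        if node == goal:
            return g, len(loaded)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    loaded.add(tile(nxt))
                    heapq.heappush(heap, (ng + h(nxt), ng, nxt))
    return None

cost, tiles = astar_tile_loads(16, 16, (0, 0), (15, 15))
print(cost, tiles)  # cost 30; the tile count depends on expansion order
```

Algorithms tuned for this setting aim to shrink the second number, the distinct tiles touched, rather than the number of node expansions.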
For theoretical analyses, two specifics distinguish GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular may yield bloat (i.e., the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has played a surprisingly small role in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control.
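The bloat-control rule named above, lexicographic parsimony pressure, compares individuals first by fitness and breaks ties by size; a minimal sketch of that comparison rule:

```python
def better(cand, incumbent):
    """Lexicographic parsimony pressure: higher fitness wins; on equal
    fitness, the smaller individual wins (bloat control)."""
    (f_c, s_c), (f_i, s_i) = cand, incumbent
    return f_c > f_i or (f_c == f_i and s_c < s_i)

# (fitness, size) pairs with hypothetical values
print(better((10, 8), (9, 3)))   # True: fitness dominates
print(better((10, 5), (10, 8)))  # True: tie broken by smaller size
print(better((10, 8), (10, 5)))  # False
```

Because size only ever decides ties, the rule removes redundant growth without trading away any fitness.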
The German Enlightenment
(2017)
The term Enlightenment (or Aufklärung) remains heavily contested. Even when historians delimit the remit of the concept, assigning it to a particular historical period rather than to an intellectual or moral programme, the public resonance of the Enlightenment remains high and problematic—especially when equated in an essentialist manner with modernity or some core values of ‘the West’. This Forum has been convened to discuss recent research on the Enlightenment in Germany, different views of the term and its ideological use in public discourse outside academia (and sometimes within it).
The interview offers a reconstruction of the German reception of Durkheim since the middle of the 1970s. Hans Joas, who was one of its major protagonists, discusses the backdrop that finally permitted a scholarly examination of Durkheim’s sociology in Germany. Focussing on his personal reception Joas then gives an account of the Durkheimian themes that inspire his work.
Just after the publication of the Theory of Communicative Action in 1981, a new generation of interpreters started a different reception of Durkheim in Germany. Hans-Peter Müller, sociologist and editor of the German translation of Leçons de sociologie, reconstructs the history of the German reception of Durkheim and explains the reasons for his interest in the French sociologist. He offers insights into the background that permitted the post-Habermasian generation to reach a new understanding of Durkheim’s work, illuminating the scientific and political conditions from which this new sensibility emerged.
New data from the LEADER trial show that the glucagon-like peptide 1 receptor agonist liraglutide protects against diabetic nephropathy in patients with type 2 diabetes mellitus. The renoprotective efficacy of liraglutide is not, however, as great as that reported for the sodium-glucose cotransporter 2 inhibitor empagliflozin in the EMPA-REG OUTCOME trial.
Editorial
(2017)
Tailed bacteriophages specific for Gram-negative bacteria encounter lipopolysaccharide (LPS) during the first infection steps. Yet, it is not well understood how the biochemistry of these initial interactions relates to subsequent events that orchestrate phage adsorption and tail rearrangements to initiate cell entry. For many phages, the long O-antigen chains found on the LPS of smooth bacterial strains serve as an essential receptor recognized by their tailspike proteins (TSP). Many TSP are depolymerases, and O-antigen cleavage has been described as a necessary step for subsequent orientation towards a secondary receptor. However, O-antigen specific host attachment need not always be accompanied by O-antigen degradation. In this issue of Molecular Microbiology, Prokhorov et al. report that coliphage G7C carries a TSP that deacetylates O-antigen but does not degrade it, whereas rough strains or strains lacking O-antigen acetylation remain unaffected. Bacteriophage G7C specifically functionalizes its tail by attaching the deacetylase TSP directly to a second TSP that is nonfunctional on the host's O-antigen. This challenges the view that bacteriophages use their TSP only to clear their way to a secondary receptor. Rather, O-antigen specific phages may employ enzymatically active TSP as a tool for irreversible LPS membrane binding to initiate subsequent infection steps.
Kijko et al. (2016) present various methods to estimate parameters that are relevant for probabilistic seismic-hazard assessment. One of these parameters, although not the most influential, is the maximum possible earthquake magnitude mmax. I show that the proposed estimation of mmax is based on an erroneous equation related to a misuse of the estimator in Cooke (1979) and leads to unstable results. So far, reported finite estimations of mmax arise from data selection, because the estimator in Kijko et al. (2016) diverges with finite probability. This finding is independent of the assumed distribution of earthquake magnitudes. For the specific choice of the doubly truncated Gutenberg-Richter distribution, I illustrate the problems by deriving explicit equations. Finally, I conclude that point estimators are generally not a suitable approach to constrain mmax.
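For orientation, the family of estimators referred to here follows Cooke's "largest observation plus correction integral" form, mmax ≈ mobs + ∫ F(m)ⁿ dm, iterated because the CDF itself depends on mmax. The sketch below applies that generic textbook construction to the doubly truncated Gutenberg-Richter distribution; it is not the specific equation criticized above, and all numerical values are illustrative:

```python
import numpy as np

def tgr_cdf(m, m0, mmax, b=1.0):
    """CDF of the doubly truncated Gutenberg-Richter distribution."""
    beta = b * np.log(10.0)
    return (1.0 - np.exp(-beta * (m - m0))) / (1.0 - np.exp(-beta * (mmax - m0)))

def mmax_estimate(mags, m0, b=1.0, iters=50):
    """Iterate m_max = m_obs + integral of CDF^n (Cooke-type construction)."""
    n, m_obs = len(mags), max(mags)
    mmax = m_obs
    grid = np.linspace(m0, m_obs, 2000)
    dm = grid[1] - grid[0]
    for _ in range(iters):
        vals = tgr_cdf(grid, m0, mmax, b) ** n
        mmax = m_obs + np.sum(0.5 * (vals[1:] + vals[:-1])) * dm  # trapezoid rule
    return mmax

# Synthetic catalogue drawn from a truncated GR law with true m_max = 7.0.
rng = np.random.default_rng(1)
m0, mmax_true, beta = 4.0, 7.0, np.log(10.0)
u = rng.random(500)
mags = m0 - np.log(1.0 - u * (1.0 - np.exp(-beta * (mmax_true - m0)))) / beta
print(max(mags), mmax_estimate(mags, m0))
```

The correction term shrinks with catalogue size n, which is why the estimate always sits just above the largest observed magnitude.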
DPP4 inhibition prevents AKI
(2017)
Gamma-ray bursts (GRBs) are some of the Universe’s most enigmatic and exotic events. However, at energies above 10 GeV their behaviour remains largely unknown. Although space-based telescopes such as the Fermi-LAT have been able to detect GRBs in this energy range, their photon statistics are limited by the small detector size. Such limitations are not present in ground-based gamma-ray telescopes such as the H.E.S.S. experiment, which has now entered its second phase with the addition of a large 600 m² telescope to the centre of the array. Such a large telescope allows H.E.S.S. to access the sub-100 GeV energy range while still maintaining a large effective collection area, helping to potentially probe the short-timescale emission of these events.
We present a description of the H.E.S.S. GRB observation programme, summarising the performance of the rapid GRB repointing system and the conditions under which GRB observations are initiated. Additionally, we report on the GRB follow-ups made during the 2014–15 observation campaigns.
HESS J1826-130
(2017)
HESS J1826-130 is an unidentified hard-spectrum source discovered by H.E.S.S. along the Galactic plane, with a spectral index of Γ = 1.6 and an exponential cut-off at about 12 TeV. While the source does not have a clear counterpart at longer wavelengths, the very hard spectrum at TeV energies implies that electrons or protons accelerated up to several hundreds of TeV are responsible for the emission. In the hadronic case, the VHE emission can be produced by runaway cosmic rays colliding with the dense molecular clouds spatially coincident with the H.E.S.S. source.
E-commerce marketplaces are highly dynamic with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built an open continuous-time framework to simulate dynamic pricing competition called Price Wars. The microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log. This allows us to provide different performance measures, enabling users to compare profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which eases the evaluation of mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies that use demand learning to optimize their pricing.
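A minimal example of the kind of rule-based strategy such a platform can host; the undercutting rule and all parameters below are hypothetical, not strategies shipped with Price Wars:

```python
def undercut_price(competitor_prices, cost, margin=0.01, undercut=0.05):
    """Undercut the cheapest competitor, but never sell below cost plus a
    minimal margin; fall back to a fixed markup without competition."""
    floor = cost * (1.0 + margin)
    if not competitor_prices:
        return round(cost * 1.5, 2)
    return round(max(min(competitor_prices) - undercut, floor), 2)

print(undercut_price([19.99, 18.49, 21.00], cost=15.00))  # 18.44
print(undercut_price([15.02], cost=15.00))                # 15.15 (margin floor)
```

Running several such merchants against each other is exactly the situation in which mutual price reactions, e.g. downward price spirals, become visible in the plotted trajectories.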
The ionospheric delay of global navigation satellite system (GNSS) signals is typically compensated by adding a single correction value to the pseudorange measurement of a GNSS receiver. Yet, this neglects the dispersive nature of the ionosphere. In this context, we analyze the ionospheric signal distortion beyond a constant delay. These effects become increasingly significant with the signal bandwidth and hence more important for new broadband navigation signals. Using measurements of the Galileo E5 signal, captured with a high-gain antenna, we verify that the expected influence can indeed be observed and compensated. A new method to estimate the total electron content (TEC) from a single-frequency high-gain antenna measurement of a broadband GNSS signal is proposed and described in detail. The received signal is de facto unaffected by multipath and interference because of the narrow aperture angle of the antenna used, which should generally reduce these error sources. We would like to point out that such measurements are independent of the code correlation used in standard receiver applications. The method is therefore also usable without knowledge of the signal coding. Results of the TEC estimation process are shown and discussed in comparison with common TEC products such as TEC maps and dual-frequency receiver estimates.
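As background for the dispersion argument, the first-order ionospheric group delay scales with TEC/f², which is also the basis of the conventional dual-frequency TEC estimate that the proposed single-frequency method is compared against. A sketch of that standard relation (the frequencies are the Galileo E1/E5a carriers; range and TEC values are illustrative):

```python
K = 40.3  # m^3 s^-2, first-order ionospheric constant

def iono_range_delay(tec, f):
    """First-order ionospheric group delay in metres (TEC in electrons/m^2)."""
    return K * tec / f**2

def tec_from_dual_freq(p1, p2, f1, f2):
    """Recover slant TEC from pseudoranges (metres) on two frequencies."""
    return (p2 - p1) * f1**2 * f2**2 / (K * (f1**2 - f2**2))

f1, f2 = 1575.42e6, 1176.45e6   # Galileo E1 and E5a carrier frequencies, Hz
tec = 25e16                     # 25 TECU
p1 = 20_000_000.0 + iono_range_delay(tec, f1)
p2 = 20_000_000.0 + iono_range_delay(tec, f2)
print(tec_from_dual_freq(p1, p2, f1, f2) / 1e16)  # ~25 TECU round trip
```

Because the delay is frequency dependent, a broadband signal such as E5 is distorted across its own bandwidth, which is precisely the effect the single-frequency method exploits.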
Mixed-projection treemaps
(2017)
This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for tree-structured data and contributes new concepts for real-time rendering of and interaction with treemaps. The technique operates by tilting the graphical elements representing inner nodes using affine transformations and animated state transitions. We explain how to mix orthogonal and perspective projections within a single treemap. Finally, we show application examples that benefit from the reduced interaction overhead.
Nanocarriers
(2017)
Background: Evidence that home telemonitoring (HTM) for patients with chronic heart failure (CHF) offers clinical benefit over usual care is controversial, as is evidence of a health economic advantage. Therefore, the CardioBBEAT trial was designed to prospectively assess the health economic impact of a dedicated home monitoring system for patients with CHF based on actual costs obtained directly from patients’ health care providers.
Methods: Between January 2010 and June 2013, 621 patients (mean age 63.0 ± 11.5 years, 88% male) with a confirmed diagnosis of CHF (LVEF ≤ 40%) were enrolled and randomly assigned to two study groups comprising usual care with and without an interactive bi-directional HTM (Motiva®). The primary endpoint was the incremental cost-effectiveness ratio (ICER) established by the groups’ difference in total cost and in the combined clinical endpoint “days alive and not in hospital nor inpatient care per potential days in study” within the follow-up of 12 months. Secondary outcome measures were total mortality and health-related quality of life (SF-36, WHO-5 and KCCQ).
Results: In the intention-to-treat analysis, total mortality (HR 0.81; 95% CI 0.45–1.45) and days alive and not in hospital (343.3 ± 55.4 vs. 347.2 ± 43.9; p = 0.909) were not significantly different between HTM and usual care. While the resulting primary endpoint ICER was not positive (−181.9; 95% CI −1626.2 ± 1628.9), quality of life assessed by SF-36, WHO-5 and KCCQ as a secondary endpoint was significantly higher in the HTM group at 6 and 12 months of follow-up.
Conclusions: The first simultaneous assessment of clinical and economic outcomes of HTM in patients with CHF did not demonstrate superior incremental cost-effectiveness compared to usual care. On the other hand, quality of life was improved. It remains open whether the tested HTM solution represents a useful innovative approach in the current health care setting.
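The ICER used as the primary endpoint is a simple ratio of cost and effect differences between the study arms; a minimal sketch with hypothetical numbers (not trial data):

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical: 500 EUR additional cost buys 2 extra days alive and out of
# hospital, i.e. 250 EUR per day gained.
print(icer(cost_new=10_500, cost_ref=10_000, effect_new=345, effect_ref=343))
```

When the cost difference is negative while the effect difference is near zero, as in the trial, the ratio becomes unstable, which is why confidence intervals around an ICER are typically very wide.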
Recently, Kocyan & Wiland-Szymańska (2016) published a thorough research article on one of the outstanding members of the family Hypoxidaceae on the Seychelles, which resulted in the erection of a new genus (Friedmannia Kocyan & Wiland-Szymańska 2016: 60) to accommodate the former Curculigo seychellensis Bojer ex Baker (1877: 368). However, it has turned out that the name Friedmannia Chantanachat & Bold (1962: 45) already exists in the literature for a green alga, which renders the new hypoxid genus illegitimate (Melbourne Code; McNeill et al. 2012). Therefore, we assign a new generic name to Curculigo seychellensis.