Background:
Digital therapeutic care apps provide a new, effective, and scalable approach for people with nonspecific low back pain (LBP). These apps deliver personalized decision-support interventions that help users self-manage their LBP and may induce prolonged behavior change that reduces the frequency and intensity of pain episodes. However, such therapeutic apps are associated with high attrition rates, and the initial prescription cost is higher than that of face-to-face physiotherapy. In Germany, digital therapeutic care apps are now reimbursed by statutory health insurance; however, price targets and the cost-driving factors behind the reimbursement rate remain unexplored.
Objective:
The aim of this study was to evaluate the cost-effectiveness of a digital therapeutic care app compared to treatment as usual (TAU) in Germany. We further aimed to explore under which circumstances the reimbursement rate could be modified to consider value-based pricing.
Methods:
We developed a state-transition Markov model based on a best-practice analysis of prior LBP-related decision-analytic models, and evaluated the cost utility of a digital therapeutic care app compared to TAU in Germany. Based on a 3-year time horizon, we simulated the incremental cost and quality-adjusted life years (QALYs) for people with nonacute LBP from the societal perspective. In the deterministic sensitivity and scenario analyses, we focused on diverging attrition rates and app cost to assess our model's robustness and the conditions for changing the reimbursement rate. All costs are reported in euros (€1 = US $1.12).
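As a rough illustration of the mechanics of such a cost-utility model (not the study's actual parameters, which the abstract does not report), consider a minimal three-state cohort sketch in which the ICER is computed as the incremental cost divided by the incremental QALYs:

```python
import numpy as np

# Illustrative three-state cohort Markov model (states: pain-free, LBP episode,
# chronic LBP). All transition probabilities, costs, and utilities are made-up
# placeholders, not the study's parameters; they only show how incremental cost,
# QALYs, and the ICER fall out of such a model.

CYCLES = 36            # 3-year horizon, monthly cycles
DISCOUNT = 0.03 / 12   # assumed 3% annual discount rate

def run_strategy(P, cycle_cost, utility):
    """Return total discounted cost and QALYs for one strategy."""
    dist = np.array([0.0, 1.0, 0.0])  # cohort starts in an LBP episode
    cost = qalys = 0.0
    for t in range(CYCLES):
        d = 1.0 / (1.0 + DISCOUNT) ** t
        cost += d * dist @ cycle_cost
        qalys += d * dist @ utility / 12.0  # utilities are per life-year
        dist = dist @ P
    return cost, qalys

# Hypothetical transition matrices: the app slightly improves recovery and relapse.
P_tau = np.array([[0.90, 0.08, 0.02], [0.30, 0.60, 0.10], [0.05, 0.15, 0.80]])
P_app = np.array([[0.93, 0.06, 0.01], [0.35, 0.57, 0.08], [0.07, 0.15, 0.78]])

c_tau, q_tau = run_strategy(P_tau, np.array([10.0, 120.0, 250.0]), np.array([0.85, 0.60, 0.45]))
# App strategy: assumed monthly app fee added on top of the baseline cost.
c_app, q_app = run_strategy(P_app, np.array([43.0, 120.0, 250.0]), np.array([0.85, 0.62, 0.45]))

icer = (c_app - c_tau) / (q_app - q_tau)  # incremental cost per QALY gained
print(f"ICER: {icer:.0f} EUR per QALY")
```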
Results:
Our base case results indicated that the digital therapeutic care strategy led to an incremental cost of €121.59 but also generated 0.0221 additional QALYs compared to the TAU strategy, yielding an estimated incremental cost-effectiveness ratio (ICER) of €5486 per QALY. The sensitivity analysis revealed that the reimbursement rate and the capability of digital therapeutic care to prevent recurring LBP episodes have a significant impact on the ICER, whereas variation of the other parameters left the results largely unchanged, supporting the robustness of our model. In the scenario analysis, the model time horizon and the attrition rate strongly influenced the economic outcome. Reducing the cost of the app to €99 per 3 months or decreasing the app's attrition rate made digital therapeutic care significantly less costly while generating more QALYs, and thus the dominant strategy over TAU.
Conclusions:
The current reimbursement rate for a digital therapeutic care app under statutory health insurance can be considered a cost-effective measure compared to TAU. The app's attrition rate and its effect on the patient's prolonged behavior change decisively influence the setting of an appropriate reimbursement rate. Future value-based pricing targets should consider additional outcome parameters besides pain intensity and functional disability, including attrition rates and the app's long-term effect on quality of life.
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool for unraveling the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when they are not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MIDs) within the mass spectra, which need to be corrected during data evaluation as they would otherwise impair enrichment calculation. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
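As a sketch of the general correction principle (not the package's exact algorithm), the measured MID can be modeled as a linear mixing of the true MID and recovered by non-negative least squares; the atom counts, abundances, and measured spectrum below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import binom

# Sketch of the general MID correction principle: the measured mass isotopologue
# distribution (MID) is the true MID smeared upward by natural 13C abundance in
# the atoms that cannot carry tracer label.

P_13C = 0.0107   # natural abundance of carbon-13
N_TRACER = 3     # carbon positions that can carry the tracer label
N_NATURAL = 2    # carbon positions contributing only natural abundance

rows = N_TRACER + N_NATURAL + 1       # measured peaks M+0 ... M+5
A = np.zeros((rows, N_TRACER + 1))    # true isotopologues M+0 ... M+3
for j in range(N_TRACER + 1):         # true isotopologue M+j ...
    for k in range(N_NATURAL + 1):    # ... gains k natural 13C mass shifts
        A[j + k, j] = binom.pmf(k, N_NATURAL, P_13C)

measured = np.array([0.59, 0.16, 0.05, 0.20, 0.0, 0.0])  # made-up measured MID

corrected, _ = nnls(A, measured)   # non-negative least squares fit
corrected /= corrected.sum()       # renormalize to fractional abundances
print(np.round(corrected, 4))
```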
N-of-1 trials are the gold standard study design to evaluate individual treatment effects and derive personalized treatment strategies. Digital tools have the potential to initiate a new era of N-of-1 trials in terms of scale and scope, but fully functional platforms are not yet available.
Here, we present the open source StudyU platform, which includes the StudyU Designer and StudyU app.
With the StudyU Designer, scientists are given a collaborative web application to digitally specify, publish, and conduct N-of-1 trials.
The StudyU app is a smartphone app with innovative user-centric elements for participants to partake in trials published through the StudyU Designer to assess the effects of different interventions on their health.
Thereby, the StudyU platform allows clinicians and researchers worldwide to easily design and conduct digital N-of-1 trials in a safe manner.
We envision that StudyU will change the landscape of personalized treatments both for patients and healthy individuals, democratize and personalize evidence generation for self-optimization and medicine, and be integrated into clinical practice.
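As a toy illustration of the statistical core of such a trial (our sketch, not StudyU code), a participant can alternate between two interventions in an ABAB schedule and compare outcomes across the two conditions:

```python
import numpy as np
from scipy import stats

# Toy N-of-1 trial analysis: one participant alternates between interventions
# A and B in an ABAB schedule, logs a daily outcome, and the two conditions are
# compared within that single person. All numbers are simulated.
rng = np.random.default_rng(0)

DAYS_PER_PHASE = 14
schedule = ["A", "B", "A", "B"]  # alternating crossover phases

# Simulated daily pain scores (0-10), assuming B lowers pain by one point.
scores = {"A": [], "B": []}
for phase in schedule:
    mean = 6.0 if phase == "A" else 5.0
    scores[phase].extend(rng.normal(mean, 1.0, size=DAYS_PER_PHASE))

t, p = stats.ttest_ind(scores["A"], scores["B"])
print(f"mean A = {np.mean(scores['A']):.2f}, mean B = {np.mean(scores['B']):.2f}, p = {p:.3f}")
```

A real analysis would additionally account for washout, carryover, and autocorrelation between consecutive days.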
Dynamic pricing is considered a possibility to gain an advantage over competitors in modern online markets. Recent advances in reinforcement learning (RL) have produced more capable algorithms that can be applied to pricing problems. In this paper, we study the performance of Deep Q-Networks (DQN) and Soft Actor Critic (SAC) in different market models. We consider tractable duopoly settings, where optimal solutions derived by dynamic programming techniques can be used for verification, as well as oligopoly settings, which are usually intractable due to the curse of dimensionality. We find that both algorithms provide reasonable results, with SAC performing better than DQN. Moreover, we show that under certain conditions, RL algorithms can be forced into collusion by their competitors without direct communication.
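To make the setting concrete, here is a toy duopoly pricing loop with tabular Q-learning (our sketch; the paper's DQN and SAC agents replace the Q-table with neural networks, and the demand model and rule-based competitor below are illustrative assumptions):

```python
import numpy as np

# Toy duopoly pricing with tabular Q-learning. State: the competitor's current
# price index. Action: our next price index. Reward: our per-period revenue.
rng = np.random.default_rng(1)

prices = np.linspace(5.0, 15.0, 11)   # discrete price grid
n = len(prices)

def demand_share(p_own, p_comp):
    """Logit-style demand: the cheaper seller captures the larger share."""
    w_own, w_comp = np.exp(-0.5 * p_own), np.exp(-0.5 * p_comp)
    return w_own / (w_own + w_comp)

def competitor(our_idx):
    """Rule-based rival that undercuts our price by one step when possible."""
    return max(our_idx - 1, 0)

Q = np.zeros((n, n))                  # Q[competitor price, our price]
alpha, gamma, eps = 0.1, 0.95, 0.1
comp = n - 1

for _ in range(50_000):
    state = comp
    action = rng.integers(n) if rng.random() < eps else int(Q[state].argmax())
    reward = prices[action] * demand_share(prices[action], prices[comp])
    comp = competitor(action)         # rival reacts; its price is the next state
    Q[state, action] += alpha * (reward + gamma * Q[comp].max() - Q[state, action])

print("learned best response per competitor price:", prices[Q.argmax(axis=1)])
```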
Modern data analysis tasks often involve control flow statements, such as the iterations in PageRank and K-means. To achieve scalability, developers usually implement these tasks in distributed dataflow systems, such as Spark and Flink. Designers of such systems have to choose between providing imperative or functional control flow constructs to users. Imperative constructs are easier to use, but functional constructs are easier to compile to an efficient dataflow job. We propose Mitos, a system where control flow is both easy to use and efficient. Mitos relies on an intermediate representation based on the static single assignment form. This allows us to abstract away from specific control flow constructs and treat any imperative control flow uniformly both when building the dataflow job and when coordinating the distributed execution.
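For illustration (this is not Mitos code), the snippet below shows the kind of imperative, data-dependent control flow such systems must handle: an iterative PageRank with an early-exit convergence test, which is natural to write imperatively but awkward to express through functional iteration constructs:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-8, max_iter=100):
    """Imperative PageRank: a loop with a data-dependent convergence test."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling rows stay all-zero.
    M = np.divide(adj, out_deg, out=np.zeros_like(adj), where=out_deg > 0).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):                    # imperative loop ...
        new_rank = (1 - d) / n + d * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:  # ... with a data-dependent exit
            break
        rank = new_rank
    return rank

print(pagerank(np.array([[0, 1, 1], [1, 0, 0], [0, 1, 0]], dtype=float)))
```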
One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called fitness level method by Wegener. It uses a partition of the search space into a sequence of levels that are traversed by the algorithm in increasing order, possibly skipping levels. An easy, but often strong, upper bound for the run time can then be derived by adding the reciprocals of the probabilities to leave the levels (or upper bounds for these). Unfortunately, a similarly effective method for proving lower bounds has not yet been established. The strongest such method, proposed by Sudholt (2013), requires a careful choice of the viscosity parameters gamma_{i,j}, 0 <= i < j <= n. In this paper we present two new variants of the method, one for upper and one for lower bounds. Besides the level leaving probabilities, they rely only on the probabilities that levels are visited at all. We show that these can be computed or estimated without greater difficulties and apply our method to reprove the following known results in an easy and natural way: (i) the precise run time of the (1+1) EA on LEADINGONES; (ii) a lower bound for the run time of the (1+1) EA on ONEMAX, tight apart from an O(n) term; (iii) a lower bound for the run time of the (1+1) EA on long k-paths (which differs slightly from the previous result due to a small error in the latter). We also prove a tighter lower bound for the run time of the (1+1) EA on jump functions by showing that, regardless of the jump size, only with probability O(2^{-n}) can the algorithm avoid jumping over the valley of low fitness.
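A small numeric illustration (our sketch of the textbook calculation, not code from the paper) shows how the two bounds differ for the (1+1) EA on LEADINGONES:

```python
# Fitness-level bounds for the (1+1) EA on LEADINGONES.
n = 100

# Probability to leave level i: flip the first 0-bit (probability 1/n) while
# keeping the i leading 1-bits unflipped (probability (1 - 1/n)^i).
p = [(1 / n) * (1 - 1 / n) ** i for i in range(n)]

# Classic upper bound: sum of reciprocal level-leaving probabilities.
upper = sum(1 / pi for pi in p)

# Refined bound using visit probabilities: on LEADINGONES every level is visited
# with probability exactly 1/2, which turns the sum into the precise run time.
precise = sum(0.5 / pi for pi in p)

print(f"upper bound: {upper:,.0f} evaluations, precise expected run time: {precise:,.0f}")
```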
SensorHub
(2022)
Observational studies are an important tool for determining whether findings from controlled experiments transfer to scenarios closer to subjects' real-life circumstances. A rigorous approach to observational studies involves collecting data from different sensors to comprehensively capture the situation of the subject. However, this leads to technical difficulties, especially if the sensors come from different manufacturers, as multiple data collection tools have to run simultaneously. We present SensorHub, a system that can collect data from various wearable devices from different manufacturers, such as inertial measurement units, portable electrocardiographs, portable electroencephalographs, portable photoplethysmographs, and sensors for electrodermal activity. Additionally, our tool offers the possibility to include ecological momentary assessments (EMAs) in studies. Hence, SensorHub enables multimodal sensor data collection under real-world conditions and allows direct user feedback to be collected through questionnaires, enabling studies at home. In a first study with 11 participants, we successfully used SensorHub to record multiple signals with different devices and collected additional information with the help of EMAs. In addition, we evaluated SensorHub's technical capabilities in several trials with up to 21 participants recording simultaneously using multiple sensors with sampling frequencies as high as 1000 Hz. We showed that although the transmissible data rate is theoretically limited, this limitation is not an issue in practice and data loss is rare. We conclude that, with modern communication protocols and increasingly powerful smartphones and wearables, a system like SensorHub establishes an interoperability framework that adequately combines consumer-grade sensing hardware and enables observational studies in real life.
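A back-of-the-envelope estimate helps to see why the theoretical data-rate limit rarely binds in practice; the bytes per sample and the link budget below are assumed values, not SensorHub measurements:

```python
# Illustrative data-rate check for a multi-sensor recording setup.
sensors = {
    # name: (sampling rate in Hz, channels, bytes per channel sample)
    "IMU": (1000, 6, 2),   # accelerometer + gyroscope, 16-bit samples
    "ECG": (500, 1, 3),    # 24-bit samples
    "EDA": (32, 1, 2),
}

load = sum(hz * ch * b for hz, ch, b in sensors.values())  # bytes per second
LINK_BUDGET = 100_000  # assumed usable wireless throughput in bytes per second

print(f"aggregate load: {load} B/s = {load / LINK_BUDGET:.0%} of the assumed budget")
```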
How inclusive are we?
(2022)
ACM SIGMOD, VLDB, and other database organizations have committed to fostering an inclusive and diverse community, as have many other scientific organizations. Recently, different measures have been taken to advance these goals, especially for underrepresented groups. One possible measure is double-blind reviewing, which aims to hide the gender, ethnicity, and other properties of the authors. We report the preliminary results of a gender diversity analysis of publications of the database community across several peer-reviewed venues, and compare women's authorship percentages in single-blind and double-blind venues over the years. We also compare the results obtained for data management with those of other relevant areas in computer science.
A standard approach to accelerating shortest path algorithms on networks is the bidirectional search, which explores the graph from the start and the destination simultaneously. In practice, this strategy performs particularly well on scale-free real-world networks. Such networks typically have a heterogeneous degree distribution (e.g., a power-law distribution) and high clustering (i.e., vertices with a common neighbor are likely to be connected themselves). These two properties can be obtained by assuming an underlying hyperbolic geometry. To explain the observed behavior of the bidirectional search, we analyze its running time on hyperbolic random graphs and prove that it is Õ(n^{2-1/α} + n^{1/(2α)} + δ_max) with high probability, where α ∈ (1/2, 1) controls the power-law exponent of the degree distribution and δ_max is the maximum degree. This bound is sublinear, improving the obvious worst-case linear bound. Although our analysis depends on the underlying geometry, the algorithm itself is oblivious to it.
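A minimal sketch of the analyzed strategy (our illustration, not the paper's implementation): grow BFS balls from both endpoints, always expanding the currently smaller frontier, and stop once the two searches meet. The balancing is what pays off on heterogeneous degree distributions, where one side's frontier can explode while the other stays small.

```python
from collections import deque

def bidirectional_bfs(graph, s, t):
    """graph: dict vertex -> list of neighbors; returns hop distance or None."""
    if s == t:
        return 0
    dist = {s: {s: 0}, t: {t: 0}}
    frontier = {s: deque([s]), t: deque([t])}
    while frontier[s] and frontier[t]:
        side = s if len(frontier[s]) <= len(frontier[t]) else t
        other = t if side == s else s
        best = None
        for _ in range(len(frontier[side])):   # expand one full BFS level
            v = frontier[side].popleft()
            for w in graph[v]:
                if w in dist[other]:           # frontiers meet via edge (v, w)
                    cand = dist[side][v] + 1 + dist[other][w]
                    best = cand if best is None else min(best, cand)
                elif w not in dist[side]:
                    dist[side][w] = dist[side][v] + 1
                    frontier[side].append(w)
        if best is not None:                   # finish the level, take the best meet
            return best
    return None                                # s and t are disconnected

g = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_bfs(g, 1, 4))  # -> 3
```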