Hasso-Plattner-Institut für Digital Engineering gGmbH
Background:
Mass gatherings (MGs) such as music festivals and sports events have been associated with a high risk of SARS-CoV-2 transmission. On-site research can foster knowledge of risk factors for infection and improve risk assessments and precautionary measures at future MGs. We tested a web-based participatory disease surveillance tool to detect COVID-19 infections at and after an outdoor MG by collecting self-reported COVID-19 symptoms and tests.
Methods:
We conducted a digital prospective observational cohort study among fully immunized attendees of a sports festival that took place from September 2 to 5, 2021 in Saxony-Anhalt, Germany. Participants used our study app to report demographic data, COVID-19 tests, symptoms, and their contact behavior. These self-reported data were used to define probable and confirmed COVID-19 cases for the full "study period" (08/12/2021 - 10/31/2021) and within the 14-day "surveillance period" during and after the MG with the highest likelihood of an MG-related COVID-19 outbreak (09/04/2021 - 09/17/2021).
Results:
A total of 2,808 of 9,242 (30.4%) event attendees participated in the study. Within the study period, 776 individual symptoms and 5,255 COVID-19 tests were reported. During the 14-day surveillance period, seven probable and seven PCR-confirmed COVID-19 cases were detected. The confirmed cases translated to an estimated seven-day incidence of 125 per 100,000 participants (95% CI [67.7/100,000, 223/100,000]), which was comparable to the average age-matched incidence in Germany during this time. Overall, weekly numbers of COVID-19 cases fluctuated over the study period, with another increase at the end of the study period.
Conclusion:
COVID-19 cases attributable to the mass gathering were comparable to the Germany-wide age-matched incidence, indicating that our active participatory disease surveillance tool was able to detect MG-related infections. Further studies are needed to evaluate and apply our participatory disease surveillance tool in other mass gathering settings.
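The point estimate can be reproduced from the numbers given in the abstract (seven confirmed cases among 2,808 participants over a 14-day window). The sketch below assumes a Clopper-Pearson binomial interval, since the interval method used in the study is not stated here, so its bounds may differ from the published confidence interval.

```python
# Back-of-the-envelope check of the reported 7-day incidence.
# Inputs are taken from the abstract; the Clopper-Pearson interval is an
# assumption and may not match the interval method used in the study.
from scipy.stats import beta

cases = 7
participants = 2808
window_days = 14

# Point estimate: cases per 100,000 participants, rescaled to 7 days.
incidence_7d = cases / participants * 100_000 * (7 / window_days)
print(f"7-day incidence: {incidence_7d:.0f} per 100,000")  # ~125

# Exact (Clopper-Pearson) 95% CI on the proportion, same rescaling.
alpha = 0.05
lo = beta.ppf(alpha / 2, cases, participants - cases + 1)
hi = beta.ppf(1 - alpha / 2, cases + 1, participants - cases)
scale = 100_000 * 7 / window_days
print(f"assumed 95% CI: [{lo * scale:.0f}, {hi * scale:.0f}] per 100,000")
```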
Patient monitoring technology has been used in the intensive care unit (ICU) for decades to guide therapy and to alert staff when a vital sign leaves a predefined range. However, large numbers of technically false or clinically irrelevant alarms provoke alarm fatigue in staff, leading to desensitisation towards critical alarms.
In this systematic review, we follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist to summarise scientific efforts that aimed to develop IT systems to reduce alarm fatigue in ICUs. A total of 69 peer-reviewed publications were included. The majority of publications targeted the avoidance of technically false alarms, while the remainder focused on the prediction of patient deterioration or on alarm presentation.
The investigated alarm types were mostly associated with heart rate or arrhythmia, followed by arterial blood pressure, oxygen saturation, and respiratory rate.
Most publications focused on the development of software solutions; some addressed wearables, smartphones, or head-mounted displays for delivering alarms to staff.
The most commonly used statistical models were tree-based. In conclusion, we found strong evidence that alarm fatigue can be alleviated by IT-based solutions.
However, future efforts should focus more on avoiding clinically non-actionable alarms, which could be accelerated by improving data availability.
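Since tree-based models dominate the reviewed work, the following sketch illustrates the general shape of such an approach: a random-forest classifier that flags likely non-actionable alarms from features observed around the time an alarm fires. The feature set, synthetic data, and labelling rule are illustrative assumptions and do not correspond to any particular study in the review.

```python
# Illustrative tree-based alarm triage: classify alarms as actionable vs.
# non-actionable. All features, data, and labels below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features captured around the alarm event.
X = np.column_stack([
    rng.normal(95, 20, n),   # heart rate (bpm)
    rng.normal(93, 5, n),    # SpO2 (%)
    rng.normal(80, 15, n),   # mean arterial pressure (mmHg)
    rng.uniform(0, 1, n),    # waveform signal-quality index
])
# Synthetic labelling rule: alarms with decent signal quality are usually
# actionable here (1 = actionable); purely for demonstration.
y = ((X[:, 3] > 0.4) & (rng.uniform(size=n) > 0.2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```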
One of the greatest challenges for incumbent organisations, especially small- and medium-sized enterprises (SMEs), is managing the digital transformation driven by technological change. Incumbent organisations' responses to digital transformation have been studied extensively in the current literature.
However, most of this research neglects digital transformation in SMEs, and there are hardly any validated measures of digital transformation maturity. We present a holistic digital transformation maturity model based on an extensive literature review, qualitative computer-assisted data analysis, and empirical findings.
The digital transformation maturity model focuses on small- and medium-sized enterprises' unique features and characteristics.
We demonstrated the practical applicability and relevance of the digital transformation maturity model in an extensive study of various organisations, particularly German SMEs (n = 310).
Organisations can use this model to assess themselves initially and, through this process, gain a comprehensive understanding of the multiple forms of digital transformation.
Analysis of single event transient effects in standard delay cells based on decoupling capacitors
(2022)
Single Event Transients (SETs), i.e., voltage glitches induced in combinational logic as a result of the passage of energetic particles, represent an increasingly critical reliability threat for modern complementary metal oxide semiconductor (CMOS) integrated circuits (ICs) employed in space missions.
In rad-hard ICs implemented with standard digital cells, special design techniques should be applied to reduce the Soft Error Rate (SER) due to SETs.
To this end, it is essential to consider the SET robustness of individual standard cells. Among the wide range of logic cells available in standard cell libraries, standard delay cells (SDCs) implemented with skew-sized inverters are exceptionally vulnerable to SETs: the SET pulses induced in these cells may be hundreds of picoseconds longer than those in other standard cells.
In this work, an alternative design of an SDC based on two inverters and two decoupling capacitors is introduced. Electrical simulations have shown that the propagation delay and SET robustness of the proposed delay cell are strongly influenced by the transistor sizes and supply voltage, while the impact of temperature is moderate. The proposed design is more tolerant to SETs than SDCs with skew-sized inverters and occupies less area than hardening configurations based on partial and complete duplication.
Due to the low transistor count (only six transistors), the proposed delay cell could also be used as a SET filter.
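The filtering behaviour that makes a capacitor-loaded delay cell attractive can be illustrated in a deliberately simplified, first-order form, rather than the transistor-level simulations used in the work, by treating the loaded node as an RC low-pass: short SET-like glitches are attenuated, while slower legitimate transitions pass largely intact. The R and C values below are arbitrary assumptions.

```python
# First-order illustration only (not the authors' transistor-level setup):
# a node loaded with a decoupling capacitor behaves roughly like an RC
# low-pass, so a short SET glitch is attenuated while a genuine transition
# settles to full swing. R and C are assumed example values.
import numpy as np

def rc_response(v_in, dt, R, C):
    """Backward-Euler integration of dV/dt = (v_in - V) / (R*C)."""
    tau = R * C
    v = np.zeros_like(v_in)
    for i in range(1, len(v_in)):
        v[i] = (v[i - 1] + dt / tau * v_in[i]) / (1 + dt / tau)
    return v

dt = 1e-12                        # 1 ps time step
t = np.arange(0, 5e-9, dt)        # 5 ns window
R, C = 25e3, 20e-15               # assumed 25 kOhm effective R, 20 fF decap

glitch = np.where((t > 1e-9) & (t < 1.1e-9), 1.0, 0.0)  # 100 ps SET pulse
step = np.where(t > 1e-9, 1.0, 0.0)                     # genuine transition

print("peak of filtered 100 ps glitch:", rc_response(glitch, dt, R, C).max())
print("final level of filtered step:  ", rc_response(step, dt, R, C)[-1])
```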
In the area of cardiac monitoring, the use of digitally driven technologies is on the rise. While the development of medical products is advancing rapidly, allowing for new use cases in cardiac monitoring and other areas, regulatory and legal requirements that govern market access often evolve slowly, sometimes creating market barriers. This article gives a brief overview of the existing clinical studies regarding the use of smart wearables in cardiac monitoring and provides insight into the main regulatory and legal aspects that need to be considered when such products are intended to be used in a health care setting. Based on this brief overview, the article elaborates on the specific requirements in the main areas of authorization/certification and reimbursement/compensation, as well as data protection and data security. Three case studies are presented as examples of specific market access procedures: the USA, Germany, and Belgium. This article concludes that, despite the differences in specific requirements, market access pathways in most countries are characterized by a number of similarities, which should be considered early on in product development. The article also discusses how regulatory and legal requirements are currently being adapted for digitally driven wearables and proposes an ongoing evolution of these requirements to facilitate market access for beneficial medical technology in the future.
In this increasingly data-rich world, visual recordings of human behavior are often unable to be shared due to concerns about privacy.
Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited.
In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate.
Minimizing the risk of identity exposure while preserving critical behavioral information would maximize the utility of public resources (e.g., research grants) and of the time invested in audio-visual research.
Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research.
The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.
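A minimal sketch of the masking idea, not the published tool itself, could combine OpenCV with the MediaPipe pose estimator: blur each frame heavily to hide identity, then redraw the estimated pose landmarks so communicative body movement remains visible. File names and parameters below are placeholders.

```python
# Sketch of identity masking with preserved body movement (illustrative;
# not the published tool). Assumes OpenCV and the legacy MediaPipe
# "solutions" API; input/output file names are placeholders.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("masked.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        masked = cv2.GaussianBlur(frame, (51, 51), 0)   # identity hidden
        if results.pose_landmarks:                      # movement preserved
            mp_draw.draw_landmarks(masked, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        out.write(masked)

cap.release()
out.release()
```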
Objective:
Hypertension has long been recognized as one of the most important predisposing factors for cardiovascular diseases and mortality.
In recent years, machine learning methods have shown potential in diagnostic and predictive approaches in chronic diseases.
Electronic health records (EHRs) have emerged as a reliable source of longitudinal data. The aim of this study is to predict the onset of hypertension using modern deep learning (DL) architectures, specifically long short-term memory (LSTM) networks, and longitudinal EHRs.
Materials and Methods:
We compare this approach to the best-performing models reported in previous work, particularly XGBoost applied to aggregated features.
Our work is based on data from 233 895 adult patients from a large health system in the United States. We divided our population into 2 distinct longitudinal datasets based on the diagnosis date.
To ensure generalization to unseen data, we trained our models on the first dataset (dataset A "train and validation") using cross-validation, and then applied the models to a second dataset (dataset B "test") to assess their performance.
We also experimented with 2 different time-windows before the onset of hypertension and evaluated the impact on model performance.
Results:
With the LSTM network, we were able to achieve an area under the receiver operating characteristic curve (AUROC) of 0.98 in the "train and validation" dataset A and 0.94 in the "test" dataset B for a prediction time window of 1 year. Lipid disorders, type 2 diabetes, and renal disorders were found to be associated with incident hypertension.
Conclusion:
These findings show that DL models based on temporal EHR data can improve the identification of patients at high risk of hypertension and corresponding driving factors. In the long term, this work may support identifying individuals who are at high risk for developing hypertension and facilitate earlier intervention to prevent the future development of hypertension.
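The modelling setup can be sketched schematically as an LSTM over padded longitudinal EHR feature sequences with a binary "incident hypertension" output. The dimensions, feature encoding, and hyperparameters below are assumptions for illustration and not the configuration used in the study.

```python
# Schematic of the modelling idea only: an LSTM over zero-padded per-visit
# EHR feature sequences with a binary outcome. Dimensions and
# hyperparameters are assumed, not taken from the study.
import numpy as np
import tensorflow as tf

n_patients, max_visits, n_features = 1000, 50, 128   # assumed dimensions

# Placeholder data: per-visit feature vectors (e.g., multi-hot codes),
# zero-padded to max_visits; label 1 = develops hypertension.
X = np.random.rand(n_patients, max_visits, n_features).astype("float32")
for i, n_visits in enumerate(np.random.randint(5, max_visits + 1, n_patients)):
    X[i, n_visits:] = 0.0                             # pad unused visits
y = np.random.randint(0, 2, size=n_patients)

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(max_visits, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64)
```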
Hyperbolic random graphs (HRGs) and geometric inhomogeneous random graphs (GIRGs) are two similar generative network models that were designed to resemble complex real-world networks.
In particular, they have a power-law degree distribution with controllable exponent beta and high clustering that can be controlled via the temperature T.
We present the first implementation of an efficient GIRG generator running in expected linear time.
Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs.
Our resulting implementation is the fastest sequential HRG generator, even though it supports non-zero temperatures. Although non-zero temperatures are crucial for many applications, most existing generators are restricted to T = 0.
We also support parallelization, although this is not the focus of this paper.
Moreover, we note that our generators draw from the correct probability distribution, that is, they involve no approximation.
Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model.
This makes it possible to specify the desired expected average degree as input. Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straightforward inclusion does not hold in practice.
However, the difference is negligible for most use cases.
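For readers unfamiliar with the model, a naive quadratic-time reference generator for threshold (T = 0) HRGs illustrates what the efficient generators compute; it is not the expected-linear-time algorithm presented here, and the parameter values are arbitrary examples. Nodes receive radial and angular coordinates in a hyperbolic disk of radius R, and two nodes are connected whenever their hyperbolic distance is at most R.

```python
# Naive O(n^2) reference generator for threshold (T = 0) hyperbolic random
# graphs; illustrates the model only, not the paper's linear-time algorithm.
import math
import random

def sample_hrg(n, alpha, R, seed=0):
    """Nodes in a hyperbolic disk of radius R; edge iff distance <= R.
    alpha = (beta - 1) / 2 controls the power-law exponent beta."""
    rng = random.Random(seed)
    # Inverse-CDF sampling of the radial coordinate, uniform angles.
    radii = [math.acosh(1 + rng.random() * (math.cosh(alpha * R) - 1)) / alpha
             for _ in range(n)]
    angles = [rng.uniform(0, 2 * math.pi) for _ in range(n)]

    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            dphi = math.pi - abs(math.pi - abs(angles[u] - angles[v]))
            cosh_d = (math.cosh(radii[u]) * math.cosh(radii[v])
                      - math.sinh(radii[u]) * math.sinh(radii[v]) * math.cos(dphi))
            if cosh_d <= math.cosh(R):        # hyperbolic distance <= R
                edges.append((u, v))
    return edges

# Example: beta = 2.5 via alpha = 0.75; R chosen arbitrarily.
print(len(sample_hrg(n=1000, alpha=0.75, R=12)))
```

The quadratic pair loop is exactly what the GIRG/HRG generators described above avoid by exploiting the underlying geometry.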