Evaluating the performance of self-adaptive systems is challenging due to their interactions with often highly dynamic environments. In the specific case of self-healing systems, the performance evaluation of self-healing approaches and their parameter tuning rely on the considered characteristics of failure occurrences and the resulting interactions with the self-healing actions. In this paper, we first study the state of the art for evaluating the performance of self-healing systems by means of a systematic literature review. We provide a classification of different input types for such systems and analyse the limitations of each input type. A main finding is that the employed inputs are often not sophisticated regarding the considered characteristics of failure occurrences. To further study the impact of the identified limitations, we present experiments demonstrating that wrong assumptions regarding the characteristics of failure occurrences can result in large performance prediction errors, disadvantageous design-time decisions concerning the selection of alternative self-healing approaches, and disadvantageous deployment-time decisions concerning parameter tuning. Furthermore, the experiments indicate that employing multiple alternative input characteristics can help reduce the risk of premature disadvantageous design-time decisions.
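To make the notion of "failure occurrence characteristics" concrete, the following minimal sketch contrasts two input profiles that an evaluation of a self-healing system might generate: independent failures with exponential inter-arrival times versus bursts of correlated failures. The failure models, rates, and function names are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch: two failure-occurrence models that could feed a
# self-healing evaluation. Same expected failure count, very different clustering.
import numpy as np

rng = np.random.default_rng(42)

def poisson_failure_times(rate_per_hour: float, horizon_hours: float) -> np.ndarray:
    """Failure timestamps with exponential inter-arrival times (a common default assumption)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_per_hour)
        if t > horizon_hours:
            return np.array(times)
        times.append(t)

def bursty_failure_times(burst_rate_per_hour: float, burst_size: int, horizon_hours: float) -> np.ndarray:
    """Failures arriving in correlated bursts, each burst spread over a few minutes."""
    burst_starts = poisson_failure_times(burst_rate_per_hour, horizon_hours)
    jitter = rng.uniform(0.0, 5.0 / 60.0, size=(len(burst_starts), burst_size))  # within 5 minutes
    return np.sort((burst_starts[:, None] + jitter).ravel())

single = poisson_failure_times(rate_per_hour=2.0, horizon_hours=24.0)
bursty = bursty_failure_times(burst_rate_per_hour=0.5, burst_size=4, horizon_hours=24.0)
print(len(single), len(bursty))
```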
Improving scalability and reward of utility-driven self-healing for large dynamic architectures
(2020)
Self-adaptation can be realized in various ways. Rule-based approaches prescribe the adaptation to be executed if the system or environment satisfies certain conditions. They result in scalable solutions but often with merely satisfying adaptation decisions. In contrast, utility-driven approaches determine optimal decisions by using an often costly optimization, which typically does not scale for large problems. We propose a rule-based and utility-driven adaptation scheme that achieves the benefits of both directions such that the adaptation decisions are optimal, whereas the computation scales by avoiding an expensive optimization. We use this adaptation scheme for architecture-based self-healing of large software systems. For this purpose, we define the utility for large dynamic architectures of such systems based on patterns that define issues the self-healing must address. Moreover, we use pattern-based adaptation rules to resolve these issues. Using a pattern-based scheme to define the utility and adaptation rules allows us to compute the impact of each rule application on the overall utility and to realize an incremental and efficient utility-driven self-healing. In addition to formally analyzing the computational effort and optimality of the proposed scheme, we thoroughly demonstrate its scalability and optimality in terms of reward in comparative experiments with a static rule-based approach as a baseline and a utility-driven approach using a constraint solver. These experiments are based on different failure profiles derived from real-world failure logs. We also investigate the impact of different failure profile characteristics on the scalability and reward to evaluate the robustness of the different approaches.
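The core idea of combining pattern-based utility with pattern-based rules can be sketched as follows. This is a hypothetical, simplified illustration (issue kinds, utility values, and the helper names are assumptions, not the authors' formalization): each detected issue carries a utility impact, and applying the matching rule updates the overall utility incrementally instead of re-running a global optimization.

```python
# Minimal, hypothetical sketch of incremental utility-driven self-healing:
# per-issue utility impact plus per-rule utility gain, updated in O(1) per rule application.
from dataclasses import dataclass

@dataclass
class Issue:
    kind: str            # e.g., "crashed-component", "failing-connector"
    utility_loss: float  # contribution of this issue to the overall utility drop

RULE_GAIN = {            # utility restored when a rule resolves an issue of this kind
    "crashed-component": 5.0,
    "failing-connector": 2.0,
}

def heal_incrementally(issues: list[Issue], utility: float) -> float:
    """Apply the matching rule to each issue, largest utility impact first,
    and update the overall utility incrementally."""
    for issue in sorted(issues, key=lambda i: i.utility_loss, reverse=True):
        utility += min(RULE_GAIN[issue.kind], issue.utility_loss)  # cannot restore more than was lost
    return utility

print(heal_incrementally([Issue("crashed-component", 5.0), Issue("failing-connector", 1.5)], utility=90.0))
```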
The aim of this study was to establish maturation-, age-, and sex-specific anthropometric and physical fitness percentile reference values of young elite athletes from various sports. Anthropometric (i.e., standing and sitting body height, body mass, body mass index) and physical fitness (i.e., countermovement jump, drop jump, change-of-direction speed [i.e., T-test], trunk muscle endurance [i.e., ventral Bourban test], dynamic lower limbs balance [i.e., Y-balance test], hand grip strength) data of 703 male and female elite young athletes aged 8–18 years were collected to aggregate reference values according to maturation, age, and sex. Findings indicate that body height and mass were significantly higher (p<0.001; 0.95≤d≤1.74) in more mature compared with less mature young athletes, as well as with increasing chronological age (p<0.05; 0.66≤d≤3.13). Furthermore, male young athletes were significantly taller and heavier compared to their female counterparts (p<0.001; 0.34≤d≤0.50). In terms of physical fitness, post-pubertal athletes showed better countermovement jump, drop jump, change-of-direction, and handgrip strength performances (p<0.001; 1.57≤d≤8.72) compared to pubertal athletes. Further, countermovement jump, drop jump, change-of-direction, and handgrip strength performances increased with increasing chronological age (p<0.05; 0.29≤d≤4.13). In addition, male athletes outperformed their female counterparts in the countermovement jump, drop jump, change-of-direction, and handgrip strength (p<0.05; 0.17≤d≤0.76). Significant age-by-sex interactions indicate that sex-specific differences were even more pronounced with increasing age. Conclusively, body height, body mass, and physical fitness increased with increasing maturational status and chronological age. Sex-specific differences appear to be larger as youth grow older. Practitioners can use the percentile values as approximate benchmarks for talent identification and development.
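The tabulation of such reference values amounts to computing percentiles within maturation-, age-, and sex-specific groups. The sketch below uses synthetic data and hypothetical column names (not the study's dataset) to show the basic pattern for one fitness measure.

```python
# Hedged illustration with synthetic data: group-wise percentile reference values
# for countermovement-jump height, stratified by sex and age group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice(["m", "f"], size=703),
    "age_group": rng.choice(["8-10", "11-13", "14-18"], size=703),
    "cmj_height_cm": rng.normal(28, 6, size=703).round(1),
})

percentiles = (
    df.groupby(["sex", "age_group"])["cmj_height_cm"]
      .quantile([0.05, 0.25, 0.50, 0.75, 0.95])
      .unstack()                                   # one column per percentile
      .rename(columns=lambda q: f"P{round(q * 100)}")
)
print(percentiles)
```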
Purpose: To examine the effects of loaded (LPJT) versus unloaded plyometric jump training (UPJT) programs on measures of muscle power, speed, change of direction (CoD), and kicking-distance performance in prepubertal male soccer players. Methods: Participants (N = 29) were randomly assigned to a LPJT group (n = 13; age = 13.0 [0.7] y) using weighted vests or UPJT group (n = 16; age = 13.0 [0.5] y) using body mass only. Before and after the intervention, tests for the assessment of proxies of muscle power (ie, countermovement jump, standing long jump); speed (ie, 5-, 10-, and 20-m sprint); CoD (ie, Illinois CoD test, modified 505 agility test); and kicking-distance were conducted. Data were analyzed using magnitude-based inferences. Results: Within-group analyses for the LPJT group showed large and very large improvements for 10-m sprint time (effect size [ES] = 2.00) and modified 505 CoD (ES = 2.83) tests, respectively. For the same group, moderate improvements were observed for the Illinois CoD test (ES = 0.61), 5- and 20-m sprint time test (ES = 1.00 for both the tests), countermovement jump test (ES = 1.00), and the maximal kicking-distance test (ES = 0.90). Small enhancements in the standing long jump test (ES = 0.50) were apparent. Regarding the UPJT group, small improvements were observed for all tests (ES = 0.33-0.57), except 5- and 10-m sprint time (ES = 1.00 and 0.63, respectively). Between-group analyses favored the LPJT group for the modified 505 CoD (ES = 0.61), standing long jump (ES = 0.50), and maximal kicking-distance tests (ES = 0.57), but not for the 5-m sprint time test (ES = 1.00). Only trivial between-group differences were shown for the remaining tests (ES = 0.00-0.09). Conclusion: Overall, LPJT appears to be more effective than UPJT in improving measures of muscle power, speed, CoD, and kicking-distance performance in prepubertal male soccer players.
The load-dependent loss of vertical barbell velocity at the end of the acceleration phase limits the maximum weight that can be lifted. Thus, the purpose of this study was to analyze how increased barbell loads affect the vertical barbell velocity in the sub-phases of the acceleration phase during the snatch. It was hypothesized that the load-dependent velocity loss at the end of the acceleration phase is primarily associated with a velocity loss during the 1st pull. For this purpose, 14 male elite weightlifters lifted seven load stages from 70-100% of their personal best in the snatch. The load-velocity relationship was calculated using linear regression analysis to determine the velocity loss at the 1st pull, transition, and 2nd pull. A contrast analysis of the group mean data revealed the highest load-dependent velocity loss for the 1st pull (t = 1.85, p = 0.044, g = 0.49 [-0.05, 1.04]), which confirmed our study hypothesis. In contrast to the group mean data, individual athletes showed unique responses to increased loads during the acceleration sub-phases of the snatch. With the proposed method, individualized training recommendations on exercise selection and loading schemes can be derived to specifically improve the sub-phases of the snatch acceleration phase. Furthermore, the results highlight the importance of single-subject assessment when working with elite athletes in Olympic weightlifting.
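The load-velocity relationship per sub-phase can be summarized by a linear fit of barbell velocity against relative load, with the slope quantifying the velocity loss per unit of load. The numbers and phase names below are made up for illustration and are not the study's data.

```python
# Illustrative sketch: linear load-velocity fit per acceleration sub-phase of the snatch.
import numpy as np

loads = np.array([70, 75, 80, 85, 90, 95, 100])        # % of personal best
velocity = {                                            # hypothetical peak vertical velocities (m/s)
    "first_pull": np.array([1.10, 1.07, 1.03, 1.00, 0.96, 0.92, 0.88]),
    "transition": np.array([1.55, 1.54, 1.52, 1.51, 1.50, 1.48, 1.47]),
    "second_pull": np.array([1.95, 1.93, 1.92, 1.90, 1.89, 1.87, 1.86]),
}

for phase, v in velocity.items():
    slope, intercept = np.polyfit(loads, v, deg=1)      # slope = velocity change per % load
    print(f"{phase}: {slope:.4f} m/s per % of personal best")
```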
We examined state evaluation anxiety, trait evaluation anxiety, and neuroticism in relation to New Zealand first-year university students' (n = 234) task performance on either a test or essay assessment. For both assessment types, the underlying components of state evaluation anxiety (cognitive worry, emotionality, and distraction) reflect linear, as opposed to nonlinear, associations with task performance. Results of several regression models show differential effects of both state evaluation anxiety and neuroticism on task performance depending on the assessment type. The multi-dimensionality of anxiety and its relative contribution to task performance across authentic types of assessment are discussed.
Field-based sports require athletes to run sub-maximally over significant distances, often while contending with dynamic perturbations to preferred coordination patterns. The ability to adapt movement to maintain performance under such perturbations appears to be trainable through exposure to task variability, which encourages movement variability. The aim of the present study was to investigate the extent to which various wearable resistance loading magnitudes alter coordination and induce movement variability during running. To investigate this, 14 participants (three female and 11 male) performed 10 sub-maximal velocity shuttle runs with either no weight, 1%, 3%, or 5% of body weight attached to the lower limbs. Sagittal plane lower limb joint kinematics from one complete stride cycle in each run were assessed using functional data analysis techniques, both across the participant group and within individuals. At the group level, decreases in ankle plantarflexion following toe-off were evident in the 3% and 5% conditions, while increased knee flexion occurred during weight acceptance in the 5% condition compared with unloaded running. At the individual level, between-run joint angle profiles varied, with six participants exhibiting increased joint angle variability in one or more loading conditions compared with unloaded running. Loading of 5% decreased between-run ankle joint variability in two individuals, likely in accordance with the need to manage increased system load or the novelty of the task. In terms of joint coordination, the most considerable alterations occurred in the 5% loading condition at the hip-knee joint pair; however, only a minority of participants exhibited this tendency. Coaches should prescribe wearable resistance individually to perturb preferred coordination patterns and encourage movement variability without loading to the extent that movement options become limited.
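Comparing joint angle profiles across runs of different durations requires time-normalizing each stride cycle before computing variability. The sketch below is a simplification of that preprocessing step using synthetic joint-angle curves; it is not the study's functional data analysis pipeline, and the curve shapes and names are assumptions.

```python
# Simplified sketch: time-normalize stride cycles to 0-100% and summarize
# between-run variability as the mean point-wise standard deviation.
import numpy as np

def time_normalize(angles: np.ndarray, n_points: int = 101) -> np.ndarray:
    """Resample one stride cycle of joint angles to n_points (0-100% of the cycle)."""
    old = np.linspace(0.0, 1.0, len(angles))
    new = np.linspace(0.0, 1.0, n_points)
    return np.interp(new, old, angles)

rng = np.random.default_rng(1)
runs = []
for _ in range(10):                      # 10 shuttle runs for one loading condition
    n = int(rng.integers(95, 130))       # samples per stride vary between runs
    knee = 30 * np.sin(np.linspace(0, 2 * np.pi, n)) + rng.normal(0, 2, size=n)
    runs.append(knee)

stack = np.vstack([time_normalize(r) for r in runs])   # shape: runs x 101
between_run_sd = stack.std(axis=0)                      # point-wise SD across runs
print(f"mean between-run variability: {between_run_sd.mean():.2f} deg")
```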
Background:
COVID-19 has infected millions of people worldwide and is responsible for several hundred thousand fatalities. The COVID-19 pandemic has necessitated thoughtful resource allocation and early identification of high-risk patients. However, effective methods to meet these needs are lacking.
Objective:
The aims of this study were to analyze the electronic health records (EHRs) of patients who tested positive for COVID-19 and were admitted to hospitals in the Mount Sinai Health System in New York City; to develop machine learning models for making predictions about the hospital course of the patients over clinically meaningful time horizons based on patient characteristics at admission; and to assess the performance of these models at multiple hospitals and time points.
Methods:
We used Extreme Gradient Boosting (XGBoost) and baseline comparator models to predict in-hospital mortality and critical events at time windows of 3, 5, 7, and 10 days from admission. Our study population included harmonized EHR data from five hospitals in New York City for 4098 COVID-19-positive patients admitted from March 15 to May 22, 2020. The models were first trained on patients from a single hospital (n=1514) before or on May 1, externally validated on patients from four other hospitals (n=2201) before or on May 1, and prospectively validated on all patients after May 1 (n=383). Finally, we established model interpretability to identify and rank variables that drive model predictions.
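The basic modeling pattern described above, training an XGBoost classifier on admission features to predict an outcome within a time window and scoring it with AUC-ROC, is sketched below on synthetic data. The features, prevalence, and hyperparameters are assumptions for illustration only and do not reproduce the study's pipeline or cohorts.

```python
# Hedged sketch with synthetic data: XGBoost classifier for a binary in-hospital
# outcome from admission features, evaluated with AUC-ROC on a held-out split.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4098
X = np.column_stack([
    rng.normal(65, 15, n),     # age (years)
    rng.normal(400, 150, n),   # LDH (U/L)
    rng.normal(12, 4, n),      # anion gap (mEq/L)
    rng.normal(80, 60, n),     # C-reactive protein (mg/L)
])
risk = -1.5 + 0.03 * (X[:, 0] - 65) + 0.004 * (X[:, 1] - 400)   # synthetic risk signal
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC-ROC: {auc:.3f}")
```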
Results:
Upon cross-validation, the XGBoost classifier outperformed baseline models, with an area under the receiver operating characteristic curve (AUC-ROC) for mortality of 0.89 at 3 days, 0.85 at 5 and 7 days, and 0.84 at 10 days. XGBoost also performed well for critical event prediction, with an AUC-ROC of 0.80 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. In external validation, XGBoost achieved an AUC-ROC of 0.88 at 3 days, 0.86 at 5 days, 0.86 at 7 days, and 0.84 at 10 days for mortality prediction. Similarly, for critical event prediction, the unimputed XGBoost model achieved an AUC-ROC of 0.78 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. Trends in performance on prospective validation sets were similar. At 7 days, acute kidney injury on admission, elevated LDH, tachypnea, and hyperglycemia were the strongest drivers of critical event prediction, while higher age, anion gap, and C-reactive protein were the strongest drivers of mortality prediction.
Conclusions:
We trained machine learning models for mortality and critical events in patients with COVID-19 at different time horizons and validated them externally and prospectively. These models identified at-risk patients and uncovered underlying relationships that predicted outcomes.