Models employed in exercise psychology highlight the role of reflective processes for explaining behavior change. However, as discussed in social cognition literature, information-processing models also consider automatic processes (dual-process models). To examine the relevance of automatic processing in exercise psychology, we used a priming task to assess the automatic evaluations of exercise stimuli in physically active sport and exercise majors (n = 32), physically active nonsport majors (n = 31), and inactive students (n = 31). Results showed that physically active students responded faster to positive words after exercise primes, whereas inactive students responded more rapidly to negative words. Priming task reaction times were successfully used to predict reported amounts of exercise in an ordinal regression model. Findings were obtained only with experiential items reflecting negative and positive consequences of exercise. The results illustrate the potential importance of dual-process models in exercise psychology.
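As a minimal sketch of the reported analysis strategy, the snippet below fits an ordinal regression of a three-level exercise-amount outcome on a priming-task reaction-time score. All data and variable names are hypothetical, and statsmodels' OrderedModel is used as one possible implementation, not the study's actual model.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 94
# Hypothetical predictor: RT difference (ms) between positive and negative
# target words following exercise primes (negative = faster on positive words)
rt_diff = rng.normal(0.0, 50.0, n)
# Hypothetical ordinal outcome: reported amount of exercise in three ordered levels
amount = pd.cut(-rt_diff + rng.normal(0.0, 40.0, n), bins=3,
                labels=["low", "medium", "high"])

# Ordinal (proportional-odds) logistic regression
model = OrderedModel(amount.codes, pd.DataFrame({"rt_diff": rt_diff}), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```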
Background/Purpose
Muscular reflex responses of the lower extremities to sudden gait disturbances are related to postural stability and injury risk. Chronic ankle instability (CAI) has been shown to affect activities related to the distal leg muscles while walking. Its effects on proximal muscle activities of the leg, on both the injured (IN) and the uninjured (NON) side, remain unclear. Therefore, the aim was to compare motor control strategies at the ipsilateral and contralateral proximal joints during unperturbed and perturbed walking between individuals with CAI and matched controls.
Materials and methods
In a cross-sectional study, 13 participants with unilateral CAI and 13 controls (CON) walked on a split-belt treadmill with and without random left- and right-sided perturbations. EMG amplitudes of lower-extremity muscles were analyzed 200 ms after perturbations, as well as 200 ms before and 100 ms after (Post100) heel contact during walking. Onset latencies were analyzed at heel contacts and after perturbations. Statistical significance was set at α ≤ 0.05, and 95% confidence intervals were applied to determine group differences. Cohen's d effect sizes were calculated to evaluate the extent of differences.
Results
Participants with CAI showed increased EMG amplitudes for the NON-rectus abdominis at Post100 and shorter latencies for the IN-gluteus maximus after heel contact compared to CON (p < 0.05). Overall, leg muscles (rectus femoris, biceps femoris, and gluteus medius) activated earlier and less bilaterally (d = 0.30-0.88), and trunk muscles (bilateral rectus abdominis and NON-erector spinae) activated earlier and more strongly in the CAI group than in the CON group (d = 0.33-1.09).
Conclusion
Unilateral CAI alters the pattern of motor control strategies around the proximal joints bilaterally. Neuromuscular training for the muscles whose motor control strategy is altered by CAI could be taken into consideration when planning rehabilitation for CAI.
Background: The use of psychoactive substances to neuroenhance cognitive performance is prevalent. Neuroenhancement (NE) in everyday life and doping in sport might rest on similar attitudinal representations, and both behaviors can be theoretically modeled by comparable means-to-end relations (substance-performance). A behavioral (not substance-based) definition of NE is proposed, with assumed functionality as its core component. It is empirically tested whether different NE variants (lifestyle drug, prescription drug, and illicit substance) can be regressed on school stressors.
Findings: Participants were 519 students (25.8 ± 8.4 years old, 73.1% female). Logistic regressions indicate that a modified doping attitude scale can predict all three NE variants. Multiple NE substance abuse was frequent. Overwhelming demands in school were associated with lifestyle and prescription drug NE.
Conclusions: Researchers should be sensitive to probable structural similarities between enhancement in everyday life and in sport, and systematically explore where findings from one domain can be adapted for the other. Policy makers should be aware that students might misperceive NE as an acceptable means of coping with stress in school, and should help to build societal awareness of NE among young people in general.
Background: Neuroenhancement (NE), the use of psychoactive substances in order to enhance a healthy individual's cognitive functioning from a proficient to an even higher level, is prevalent in student populations. According to the strength model of self-control, people fail to self-regulate and fall back on their dominant behavioral response when finite self-control resources are depleted. An experiment was conducted to test the hypothesis that ego-depletion will prevent students who are unfamiliar with NE from trying it.
Findings: 130 undergraduates, who denied having tried NE before (43% female, mean age = 22.76 ± 4.15 years), were randomly assigned to either an ego-depletion or a control condition. The dependent variable was taking an "energy-stick" (a legal nutritional supplement containing low doses of caffeine, taurine and vitamin B), offered as a potential means of enhancing performance on the bogus concentration task that followed. Logistic regression analysis showed that ego-depleted participants were three times less likely to take the substance, OR = 0.37, p = .01.
Conclusion: This experiment found that trying NE for the first time was more likely if an individual's cognitive capacities were not depleted. This means that mental exhaustion is not predictive of NE in students for whom NE is not the dominant response. Trying NE for the first time is therefore more likely to occur as a thoughtful attempt at self-regulation than as an automatic behavioral response in stressful situations. We therefore recommend targeting interventions at this inter-individual difference. Students without previous reinforcing NE experience should be provided with information about the possible negative health outcomes of NE. Reconfiguring structural aspects of the academic environment (e.g. lessening workloads) might help to deter current users.
Background: Recent studies have demonstrated a superior diagnostic accuracy of cardiovascular magnetic resonance (CMR) for the detection of coronary artery disease (CAD). We aimed to determine the comparative cost-effectiveness of CMR versus single-photon emission computed tomography (SPECT).
Methods: Based on Bayes' theorem, a mathematical model was developed to compare the cost-effectiveness and utility of CMR with SPECT in patients with suspected CAD. Invasive coronary angiography served as the standard of reference. Effectiveness was defined as the accurate detection of CAD, and utility as the number of quality-adjusted life-years (QALYs) gained. Model input parameters were derived from the literature, and the cost analysis was conducted from a German health care payer's perspective. Extensive sensitivity analyses were performed.
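The core arithmetic of such a Bayes'-theorem model can be sketched in a few lines: given prevalence, test sensitivity and specificity, and per-procedure costs, the expected cost per correctly diagnosed patient follows directly. The values below are illustrative placeholders, not the input parameters or reimbursement fees used in the study.

```python
def cost_per_correct_diagnosis(prevalence, sensitivity, specificity,
                               test_cost, angio_cost):
    """Expected cost per true-positive CAD diagnosis for a noninvasive test
    whose positive results are verified by invasive coronary angiography."""
    tp = prevalence * sensitivity                # true positives per patient tested
    fp = (1 - prevalence) * (1 - specificity)    # false positives referred to angiography
    expected_cost = test_cost + (tp + fp) * angio_cost
    return expected_cost / tp

# Illustrative inputs only (not the study's parameters)
print(cost_per_correct_diagnosis(prevalence=0.50, sensitivity=0.89,
                                 specificity=0.76, test_cost=600.0,
                                 angio_cost=2500.0))
```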
Results: Reimbursement fees represented only a minor fraction of the total costs incurred by a diagnostic strategy. Increases in the prevalence of CAD were generally associated with improved cost-effectiveness and decreased costs per utility unit (ΔQALY). In direct comparison, CMR was consistently more cost-effective than SPECT and showed lower costs per QALY gained. Given a CAD prevalence of 0.50, CMR was associated with total costs of €6,120 for one patient correctly diagnosed as having CAD and with €2,246 per ΔQALY gained, versus €7,065 and €2,931 for SPECT, respectively. Above a CAD prevalence threshold of 0.60, proceeding directly to invasive angiography was the most cost-effective approach.
Conclusions: In patients with low to intermediate CAD probabilities, CMR is more cost-effective than SPECT. Moreover, lower costs per utility unit indicate a superior clinical utility of CMR.
Background: Doping attitude is a key variable in predicting athletes' intention to use forbidden performance-enhancing drugs. Indirect reaction-time-based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires do. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study describes the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It is intended to provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, indirectly measured doping attitudes, as tested with the picture-based doping-BIAT, were significantly less negative in the group of bodybuilders (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. The picture-based doping-BIAT showed a low error rate (7%) and satisfactory internal consistency (r_tt = .66).
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction-time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction-time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges' g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of combined substance administration, complementary substances taken to treat side effects, and stimulant use to promote loss of body fat was common.
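For reference, Hedges' g as reported above is a small-sample-corrected standardized mean difference. A minimal sketch of its computation on two hypothetical D-score samples:

```python
import numpy as np

def hedges_g(x, y):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    correction = 1 - 3 / (4 * (nx + ny) - 9)  # small-sample correction factor J
    return d * correction

# Hypothetical BIAT D-scores for dopers and non-dopers (illustrative only)
dopers = np.array([-0.45, -0.52, -0.38, -0.60, -0.41])
non_dopers = np.array([-0.70, -0.66, -0.75, -0.59, -0.68])
print(hedges_g(dopers, non_dopers))
```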
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
Dimensional psychiatry
(2014)
A dimensional approach in psychiatry aims to identify core mechanisms of mental disorders across nosological boundaries.
We compared anticipation of reward between major psychiatric disorders, and investigated whether reward anticipation is impaired in several mental disorders and whether there is a common psychopathological correlate (negative mood) of such an impairment.
We used functional magnetic resonance imaging (fMRI) and a monetary incentive delay (MID) task to study the functional correlates of reward anticipation across major psychiatric disorders in 184 subjects, with diagnoses of alcohol dependence (n = 26), schizophrenia (n = 44), major depressive disorder (MDD, n = 24), bipolar disorder (acute manic episode, n = 13), attention deficit/hyperactivity disorder (ADHD, n = 23), and healthy controls (n = 54). Subjects' individual Beck Depression Inventory and State-Trait Anxiety Inventory scores were correlated with clusters showing significant activation during reward anticipation.
During reward anticipation, we observed significant group differences in ventral striatal (VS) activation: patients with schizophrenia, alcohol dependence, and major depression showed significantly less ventral striatal activation compared to healthy controls. Depressive symptoms correlated with dysfunction in reward anticipation regardless of diagnostic entity. There was no significant correlation between anxiety symptoms and VS functional activation.
Our findings demonstrate a neurobiological dysfunction related to reward prediction that transcended disorder categories and was related to measures of depressed mood. The findings underline the potential of a dimensional approach in psychiatry and strengthen the hypothesis that neurobiological research in psychiatric disorders can be targeted at core mechanisms that are likely to be implicated in a range of clinical entities.
Background: Cross-sectional studies detected associations between physical fitness, living area, and sports participation in children. Yet, their scientific value is limited because the identification of cause-and-effect relationships is not possible. In a longitudinal approach, we examined the effects of living area and sports club participation on physical fitness development in primary school children from classes 3 to 6.
Methods: One hundred seventy-two children (age: 9-12 years; 69 girls, 103 boys) were tested for their physical fitness (i.e., endurance [9-min run], speed [50-m sprint], lower- [triple hop] and upper-extremity muscle strength [1-kg ball push], flexibility [stand-and-reach], and coordination [star coordination run]). Living area (i.e., urban or rural) and sports club participation were assessed using a parent questionnaire.
Results: Over the 4-year study period, urban compared to rural children showed significantly better performance development for upper- (p = 0.009, ES = 0.16) and lower-extremity strength (p < 0.001, ES = 0.22). Further, better performance development was found for endurance (p = 0.08, ES = 0.19) and lower-extremity strength (p = 0.024, ES = 0.23) in children continuously participating in sports clubs compared to their non-participating peers.
Conclusions: Our findings suggest that sport club programs with appealing arrangements appear to represent a good means to promote physical fitness in children living in rural areas.
Background
Previous literature has mainly used cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information-processing capacities. In this study, we extend the existing literature and investigate whether leg muscular capacity plays an additional role in children's dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7-9 years) walk under single-task conditions (ST) and while concurrently performing an arithmetic subtraction task (dual task, DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decreased their gait velocity (f = 0.73), stride length (f = 0.62), and cadence (f = 0.68), and increased the variability thereof (f = 0.20-0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and its variability during DT (R² = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass were not observed for ST (R² = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacity and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by enhancing gait stability.
Background: Habitual walking speed predicts many clinical conditions later in life, but it declines with age. However, which particular exercise intervention can minimize the age-related gait speed loss is unclear.
Purpose: Our objective was to determine the effects of strength, power, coordination, and multimodal exercise training on healthy old adults' habitual and fast gait speed.
Methods: We performed a computerized systematic literature search in PubMed and Web of Knowledge from January 1984 to December 2014. Search terms included 'resistance training', 'power training', 'coordination training', 'multimodal training', and 'gait speed' (outcome term). Inclusion criteria were: articles available in full text, published within the past 30 years, human participants, journal articles, clinical trials, randomized controlled trials, English as publication language, and subject age ≥65 years. The methodological quality of all eligible intervention studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. We computed weighted average standardized mean differences of the intervention-induced adaptations in gait speed using a random-effects model and tested for overall and individual intervention effects relative to no-exercise controls.
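The abstract does not name the specific random-effects estimator; a common choice for pooling standardized mean differences is the DerSimonian-Laird method, sketched below with hypothetical study-level inputs.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool effect sizes under a DerSimonian-Laird random-effects model."""
    g = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    theta_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - theta_fixed) ** 2)    # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)   # between-study variance estimate
    w_random = 1.0 / (v + tau2)               # random-effects weights
    pooled = np.sum(w_random * g) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    return pooled, se, tau2

# Three hypothetical studies: standardized mean differences and their variances
print(dersimonian_laird([0.84, 0.76, 0.86], [0.04, 0.09, 0.05]))
```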
Results: A total of 42 studies (mean PEDro score 5.0 ± 1.2) were included in the analyses (2,495 healthy old adults; age 74.2 years [64.4-82.7]; body mass 69.9 ± 4.9 kg; height 1.64 ± 0.05 m; body mass index 26.4 ± 1.9 kg/m²; gait speed 1.22 ± 0.18 m/s). The search identified only one power training study; therefore, the subsequent analyses focused on the effects of resistance, coordination, and multimodal training on gait speed. The three types of intervention improved gait speed in the three experimental groups combined (n = 1,297) by 0.10 m/s (±0.12) or 8.4% (±9.7), with a large effect size (ES) of 0.84. Resistance (24 studies; n = 613; 0.11 m/s; 9.3%; ES: 0.84), coordination (eight studies; n = 198; 0.09 m/s; 7.6%; ES: 0.76), and multimodal training (19 studies; n = 486; 0.09 m/s; 8.4%; ES: 0.86) increased gait speed significantly and to a similar extent.
Conclusions: Commonly used exercise interventions can functionally and clinically increase habitual and fast gait speed and help slow the loss of gait speed or delay its onset.
Dropping Out or Keeping Up?
(2016)
The aim of this study was to examine how automatic evaluations of exercising (AEE) varied according to adherence to an exercise program. Eighty-eight participants (24.98 ± 6.88 years; 51.1% female) completed a Brief Implicit Association Test assessing their AEE, i.e., positive and negative associations with exercising, at the beginning of a 3-month exercise program. Attendance data were collected for all participants and used in a cluster analysis of adherence patterns. Three different adherence patterns (52 maintainers, 16 early dropouts, 20 late dropouts; 40.91% overall dropouts) were detected. Participants from these three clusters differed significantly with regard to their positive and negative associations with exercising before the first course meeting (ηp² = 0.07). Discriminant function analyses revealed that positive associations with exercising were a particularly good discriminating factor. This is the first study to provide evidence of the differential impact of positive and negative associations on exercise behavior over the medium term. The findings contribute to the theoretical understanding of evaluative processes from a dual-process perspective and may provide a basis for targeted interventions.
Introduction: Adequate cognitive function in patients is a prerequisite for the successful implementation of patient education and lifestyle coping in comprehensive cardiac rehabilitation (CR) programs. Although the association between cardiovascular diseases and cognitive impairment (CI) is well known, the prevalence particularly of mild CI in CR and the characteristics of affected patients have been insufficiently investigated so far.
Methods: In this prospective observational study, 496 patients (54.5 ± 6.2 years, 79.8% men) with coronary artery disease following an acute coronary event (ACE) were analyzed. Patients were enrolled within 14 days of discharge from the hospital in a 3-week inpatient CR program. Patients were tested for CI using the Montreal Cognitive Assessment (MoCA) upon admission to and discharge from CR. Additionally, sociodemographic, clinical, and physiological variables were documented. The data were analyzed descriptively and in a multivariate stepwise backward elimination regression model with respect to CI.
Results: At admission to CR, CI (MoCA score < 26) was identified in 182 patients (36.7%). Significant differences between the CI and no-CI groups were found: the CI group showed a higher prevalence of smoking (65.9 vs 56.7%, P = 0.046), heavy (physically demanding) workloads (26.4 vs 17.8%, P < 0.001), and sick leave longer than 1 month prior to CR (28.6 vs 18.5%, P = 0.026), as well as reduced exercise capacity (102.5 vs 118.8 W, P = 0.006) and a shorter 6-min walking distance (401.7 vs 421.3 m, P = 0.021) compared to the no-CI group. The age- and education-adjusted model showed positive associations with CI only for sick leave of more than 1 month prior to ACE (odds ratio [OR] 1.673, 95% confidence interval 1.07-2.79; P = 0.03) and heavy workloads (OR 2.18, 95% confidence interval 1.42-3.36; P < 0.01).
Conclusion: The prevalence of CI in CR was considerably high, affecting more than one-third of cardiac patients. Besides age and education level, CI was associated with heavy workloads and a longer sick leave before ACE.
Background: Deception can distort psychological tests on socially sensitive topics. Understanding the cerebral processes that are involved in such faking can be useful for the detection and prevention of deception. Previous research shows that faking a brief implicit association test (BIAT) evokes a characteristic ERP response. It is not yet known whether temporarily available self-control resources moderate this response. We randomly assigned 22 participants (15 females, 24.23 ± 2.91 years) to a counterbalanced repeated-measurements design. Participants first completed a Brief-IAT (BIAT) on doping attitudes as a baseline measure and were then instructed to fake a negative doping attitude both when self-control resources were depleted and when they were not. Cerebral activity during BIAT performance was assessed using high-density EEG.
Results: Compared to the baseline BIAT, event-related potentials showed a first interaction at the parietal P1, while significant post hoc differences were found only at the later occurring late positive potential. Here, significantly decreased amplitudes were recorded for 'normal' faking, but not in the depletion condition. In source space, enhanced activity was found for 'normal' faking in the bilateral temporoparietal junction. Behaviorally, participants successfully faked the BIAT in both conditions.
Conclusions: The results indicate that temporarily available self-control resources do not affect overt faking success on a BIAT. However, differences were found on an electrophysiological level. This indicates that while self-control resources play a negligible role in deliberate test faking on a phenotypical level, the underlying cerebral processes are markedly different.
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer® database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria, and an equal number of controls (66,497) were matched (1:1) to the HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after the index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
Background:
Arising from the relevance of sensorimotor training in the therapy of nonspecific low back pain patients and from the value of individualized therapy, the present trial aims to test the feasibility and efficacy of individualized sensorimotor training interventions in patients suffering from nonspecific low back pain.
Methods and study design:
A multicentre, single-blind, two-armed randomized controlled trial is performed to evaluate the effects of a 12-week (3 weeks supervised centre-based and 9 weeks home-based) individualized sensorimotor exercise program. The control group stays inactive during this period. Outcomes are pain and pain-associated function as well as motor function in adults with nonspecific low back pain. Each participant is scheduled for five measurement dates: baseline (M1), following centre-based training (M2), following home-based training (M3), and at two follow-up time points 6 months (M4) and 12 months (M5) after M1. All investigations and the assessment of the primary and secondary outcomes are performed in a standardized order: questionnaires – clinical examination – biomechanics (motor function). Statistical procedures are executed after examination of the underlying assumptions for parametric or non-parametric testing.
Discussion:
The results of the study will be of clinical and practical relevance not only for researchers and policy makers but also for the general population suffering from nonspecific low back pain.
Trial registration:
German Clinical Trials Register, identification number DRKS00010129; registered on 3 March 2016.
Background
Overweight and obesity are increasing health problems that are not restricted to adults. Childhood obesity is associated with metabolic, psychological, and musculoskeletal comorbidities. However, knowledge about the effect of obesity on foot function across maturation is lacking. Decreased foot function with disproportionate loading characteristics is expected in obese children. The aim of this study was to examine the foot loading characteristics during gait of normal-weight, overweight, and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 ± 2.9 yrs; 1.23 ± 0.19 m; 26.6 ± 10.6 kg; BMI: 17.1 ± 2.4 kg/m²) were included in the (complete case) data analysis. Children were categorized as normal-weight (≥3rd and <90th percentile; n = 6458), overweight (≥90th and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system, which is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force-time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data were analyzed descriptively (mean ± SD) followed by ANOVA/Welch test (according to homogeneity of variances: yes/no) for group differences according to BMI categorization (normal-weight, overweight, obesity) and for each age group 1 to 12 yrs (post hoc Tukey-Kramer/Dunnett's C; α = 0.05).
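The percentile-based grouping described above reduces to a lookup against age- and sex-specific cutoffs. A sketch with placeholder cutoff values follows; the actual German reference tables are not reproduced here.

```python
def bmi_category(bmi, p3, p90, p97):
    """Classify a child's BMI against age- and sex-specific percentile cutoffs."""
    if bmi < p3:
        return "underweight"    # < 3rd percentile (not analyzed above)
    if bmi < p90:
        return "normal-weight"  # >= 3rd and < 90th percentile
    if bmi < p97:
        return "overweight"     # >= 90th and < 97th percentile
    return "obese"              # > 97th percentile

# Placeholder cutoffs for one hypothetical age/sex stratum (kg/m^2)
print(bmi_category(bmi=21.4, p3=13.1, p90=19.8, p97=21.9))
```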
Results
Mean walking velocity was 0.95 ± 0.25 m/s, with no differences between normal-weight, overweight, and obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure, and force-time integral in overweight and obese children (p < 0.001). Obese children showed 1.48 times (1-year-olds) to 3.49 times (10-year-olds) the midfoot loading (FTI) of normal-weight children.
Conclusion
Additional body mass leads to higher overall load, with a disproportionate impact on the midfoot area and the longitudinal foot arch, resulting in characteristic foot loading patterns. The feet of one- and two-year-old children are already significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading with a potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
Effects of resistance training in youth athletes on muscular fitness and athletic performance
(2016)
During the stages of long-term athlete development (LTAD), resistance training (RT) is an important means for (i) stimulating athletic development, (ii) tolerating the demands of long-term training and competition, and (iii) inducing long-term health-promoting effects that are robust over time and track into adulthood. However, there is a gap in the literature with regard to optimal RT methods during LTAD and how RT is linked to biological age. Thus, the aims of this scoping review were (i) to describe and discuss the effects of RT on muscular fitness and athletic performance in youth athletes, (ii) to introduce a conceptual model on how to appropriately implement different types of RT within LTAD stages, and (iii) to identify research gaps in the existing literature and deduce implications for future research. In general, RT produced small-to-moderate effects on muscular fitness and athletic performance in youth athletes, with muscular strength showing the largest improvement. Free-weight, complex, and plyometric training appear to be well-suited to improve muscular fitness and athletic performance. In addition, balance training appears to be an important preparatory (facilitating) training program during all stages of LTAD, but particularly during the early stages. As youth athletes become more mature, the specificity and intensity of RT methods increase. This scoping review identified research gaps that are summarized in the following and that should be addressed in future studies: (i) to elucidate the influence of gender and biological age on the adaptive potential following RT in youth athletes (especially in females), (ii) to describe RT protocols in more detail (i.e., always report stress- and strain-based parameters), and (iii) to examine neuromuscular and tendomuscular adaptations following RT in youth athletes.
Background
It has been demonstrated that core strength training is an effective means to enhance trunk muscle strength (TMS) and proxies of physical fitness in youth. Of note, cross-sectional studies revealed that the inclusion of unstable elements in core strengthening exercises produces increases in trunk muscle activity and thus provides potential extra training stimuli for performance enhancement. Thus, utilizing unstable surfaces during core strength training may produce even larger performance gains. However, the effects of core strength training using unstable surfaces are unresolved in youth. This randomized controlled study specifically investigated the effects of core strength training performed on stable surfaces (CSTS) compared to unstable surfaces (CSTU) on physical fitness in school-aged children.
Methods
Twenty-seven healthy subjects (14 girls, 13 boys; mean age: 14 ± 1 years, age range: 13-15 years) were randomly assigned to a CSTS (n = 13) or a CSTU (n = 14) group. Both training programs lasted 6 weeks (2 sessions/week) and included frontal, dorsal, and lateral core exercises. During CSTU, these exercises were conducted on unstable surfaces (e.g., TOGU© DYNAIR cushions, THERA-BAND© stability trainer).
Results
Significant main effects of Time (pre vs. post) were observed for the TMS tests (8-22%, f = 0.47-0.76), the jumping sideways test (4-5%, f = 1.07), and the Y balance test (2-3%, f = 0.46-0.49). Trends towards significance were found for the standing long jump test (1-3%, f = 0.39) and the stand-and-reach test (0-2%, f = 0.39). We could not detect any significant main effects of Group. Significant Time x Group interactions were detected for the stand-and-reach test in favour of the CSTU group (2%, f = 0.54).
Conclusions
Core strength training resulted in significant increases in proxies of physical fitness in adolescents. However, CSTU as compared to CSTS had only limited additional effects (i.e., stand-and-reach test). Consequently, if the goal of training is to enhance physical fitness, then CSTU has limited advantages over CSTS.
The general purpose of this systematic review was to summarize, structure, and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study was evaluated. First, the results revealed a large heterogeneity in the measures applied to assess automatic evaluations of exercising and in the exercise variables. Generally, small- to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts researchers to differentiate more carefully between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparently reported reflections on the differing theoretical bases leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages and disadvantages of each measure. Third, 12 studies were rated as providing first-grade evidence (lowest grade of evidence), five as second-grade, and three as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise, and for investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore not be drawn too hastily.
Background: Healthy university students have been shown to use psychoactive substances, expecting them to be functional means for enhancing their cognitive capacity, sometimes over and above an essentially proficient level. This behavior, called neuroenhancement (NE), has not yet been integrated into a behavioral theory that is able to predict performance. Job Demands-Resources (JD-R) theory, for example, assumes that strain (e.g., burnout) will occur and influence performance when job demands are high and job resources are limited at the same time. The aim of this study is to investigate whether or not university students' self-reported NE can be integrated into JD-R theory's comprehensive approach to psychological health and performance.
Methods: 1,007 students (23.56 ± 3.83 years old, 637 female) participated in an online survey. Lifestyle drug, prescription drug, and illicit substance NE together with the complete set of JD-R variables (demands, burnout, resources, motivation, and performance) were measured. Path models were used in order to test our data’s fit to hypothesized main effects and interactions.
Results: JD-R theory could successfully be applied to describe the situation of university students. NE was mainly associated with JD-R theory's health impairment process: lifestyle drug NE (p < .05) as well as prescription drug NE (p < .001) was associated with higher burnout scores, and lifestyle drug NE aggravated the study demands-burnout interaction. In addition, prescription drug NE mitigated the protective influence of resources on burnout and on motivation.
Conclusion: According to our results, the uninformed trying of NE (i.e., without medical supervision) might result in strain. Increased strain is related to decreased performance. From a public health perspective, intervention strategies should address these costs of non-supervised NE. With regard to future research we propose to model NE as a means to reach an end (i.e. performance enhancement) rather than a target behavior itself. This is necessary to provide a deeper understanding of the behavioral roots and consequences of the phenomenon.
Background: Data on electrocardiographic and echocardiographic pre-participation screening findings in paediatric athletes are limited.
Methods and results: 10-15-year-old athletes (n = 343) were screened using electro- and echocardiography. The electrocardiogram (ECG) was normal in 220 (64%), mildly abnormal in 108 (31%), and distinctly abnormal in 15 (4%) athletes. Echocardiographic upper reference limits (URL, 97.5th percentile) for left ventricular (LV) wall thickness were 9-10 mm in 10-11-year-old boys and 8-9 mm in girls; 9-10 mm in 12-13-year-old boys and girls; and 10-11 mm in 14-15-year-old boys and 9-10 mm in girls. Three athletes were excluded from competitive sports: one for symptomatic Wolff-Parkinson-White syndrome with a normal echocardiogram; one for negative T-waves in V1-V4 and a dilated right ventricle by echocardiography, suggestive of (arrhythmogenic) right ventricular disease; and one for a normal ECG with a bicuspid aortic valve and an aneurysm of the ascending aorta detected by echocardiography. Relative to the echocardiographic findings, the sensitivity and specificity of the ECG to identify cardiovascular abnormalities were 38% and 64%, respectively. The ECG's positive-predictive and negative-predictive values were 13% and 88%, respectively. The numbers needed to screen and calculated costs were 172 for ECG (€7,049), 172 for echocardiography (€11,530), and 114 for combined ECG and echocardiography (€9,323).
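The predictive values and screening costs above follow from elementary arithmetic on sensitivity, specificity, prevalence, and per-test cost; the sketch below uses illustrative inputs (the prevalence in particular is an assumption), not the study's exact figures.

```python
def screening_metrics(sensitivity, specificity, prevalence, cost_per_test):
    """Predictive values, number needed to screen, and cost per detected case."""
    ppv = (sensitivity * prevalence /
           (sensitivity * prevalence + (1 - specificity) * (1 - prevalence)))
    npv = (specificity * (1 - prevalence) /
           (specificity * (1 - prevalence) + (1 - sensitivity) * prevalence))
    detection_rate = sensitivity * prevalence  # true positives per athlete screened
    nns = 1.0 / detection_rate                 # athletes screened per case detected
    return ppv, npv, nns, nns * cost_per_test

# Illustrative inputs only (prevalence and per-test cost are assumed values)
print(screening_metrics(sensitivity=0.38, specificity=0.64,
                        prevalence=0.017, cost_per_test=41.0))
```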
Conclusions: Compared to adults, paediatric athletes presented with fewer distinctly abnormal ECGs, and there was no gender difference in paediatric athletes' ECG-pattern distribution. A combination of ECG and echocardiography for pre-participation screening of paediatric athletes is superior to ECG alone but 30% more costly.
Rehabilitation after autologous chondrocyte implantation for isolated cartilage defects of the knee
(2017)
Autologous chondrocyte implantation for treatment of isolated cartilage defects of the knee has become well established. Although various publications report technical modifications, clinical results, and cell-related issues, little is known about appropriate and optimal rehabilitation after autologous chondrocyte implantation. This article reviews the literature on rehabilitation after autologous chondrocyte implantation and presents a rehabilitation protocol that has been developed considering the best available evidence and has been successfully used for several years in a large number of patients who underwent autologous chondrocyte implantation for cartilage defects of the knee.
Background
Back pain patients (BPP) show delayed muscle onset, increased co-contractions, and greater variability in response to quasi-static sudden trunk loading compared with healthy controls (H). However, it is unclear whether these results can validly be transferred to suddenly applied walking perturbations, an automated but more functional and complex movement pattern. There is an evident need to develop research-based strategies for the rehabilitation of back pain. Therefore, the investigation of differences in trunk stability between H and BPP during functional movements is of primary interest in order to define suitable intervention regimes. The purpose of this study was to analyse neuromuscular reflex activity as well as three-dimensional trunk kinematics in H and BPP during walking perturbations.
Methods
Eighty H (31 m/49 f; 29±9 yrs; 174±10 cm; 71±13 kg) and 14 BPP (6 m/8 f; 30±8 yrs; 171±10 cm; 67±14 kg) walked (1 m/s) on a split-belt treadmill while 15 right-sided perturbations (belt deceleration, 40 m/s², 50 ms duration; 200 ms after heel contact) were randomly applied. Trunk muscle activity was assessed using a 12-lead EMG set-up. Trunk kinematics were measured using a 3-segment model consisting of 12 markers (upper thoracic (UTA), lower thoracic (LTA), lumbar area (LA)). EMG-RMS ([%], 0-200 ms after perturbation) was calculated and normalized to the RMS of unperturbed gait. Latency (TON; ms) and time to maximum activity (TMAX; ms) were analysed. Total motion amplitude (ROM; [°]) and mean angle (Amean; [°]) for extension-flexion, lateral flexion, and rotation were calculated (whole stride cycle; 0-200 ms after perturbation) for each of the three segments during unperturbed and perturbed gait. For ROM only, the perturbed step was normalized to the unperturbed step [%] for the whole stride as well as for the 200 ms after perturbation. Data were analysed descriptively, followed by Student's t-test to account for group differences. Co-contraction was analysed between ventral and dorsal muscles (V:R) as well as the right-to-left side ratio (Sright:Sleft). The coefficient of variation (CV; %) was calculated (EMG-RMS; ROM) to evaluate variability across the 15 perturbations for all groups. Because of the unequal distribution of participants across groups, an additional matched-group analysis was conducted: fourteen healthy controls from group H were sex-, age-, and anthropometrically matched (group Hmatched) to the BPP.
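As an illustration of the EMG outcome defined above (RMS over the 200 ms post-perturbation window, normalized to the matching window of unperturbed gait), the sketch below runs on synthetic signals; the sampling rate and signal content are assumptions.

```python
import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

def normalized_emg_rms(perturbed, unperturbed, fs=1000, window_s=0.2):
    """EMG-RMS [%] of the post-perturbation window relative to unperturbed gait."""
    n = int(window_s * fs)  # 200 ms window at fs Hz
    return 100.0 * rms(perturbed[:n]) / rms(unperturbed[:n])

# Synthetic 1-s EMG traces sampled at 1 kHz (hypothetical amplitudes)
rng = np.random.default_rng(1)
perturbed = rng.normal(0.0, 2.0, 1000)    # larger response after perturbation
unperturbed = rng.normal(0.0, 1.0, 1000)
print(normalized_emg_rms(perturbed, unperturbed))
```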
Results
No group differences were observed in the EMG-RMS or the CV analysis (EMG/ROM) (p>0.025). Co-contraction analysis revealed no differences in V:R or Sright:Sleft between the groups (p>0.025). BPP showed increased TON and TMAX, significant for Mm. rectus abdominis (p = 0.019) and erector spinae T9/L3 (p = 0.005/p = 0.015). ROM analysis over the unperturbed stride cycle revealed no differences between groups (p>0.025). Normalization of the perturbed to the unperturbed step led to significant differences for the lumbar segment (LA) in lateral flexion, with BPP showing higher normalized ROM compared to Hmatched (p = 0.02). BPP showed a significantly more flexed posture (UTA (p = 0.02); LTA (p = 0.004)) during normal walking (Amean). Trunk posture (Amean) during perturbation showed higher trunk extension values in the LTA segment for H/Hmatched compared to BPP (p = 0.003). The matched-group analysis (BPP vs. Hmatched) did not systematically change any of these results.
Conclusion
BPP present impaired muscle response times and an altered trunk posture, especially in the sagittal and transversal planes, compared to H. This could indicate reduced trunk stability and higher loading during gait perturbations.
Background
Recently, the incidence rate of back pain (BP) in adolescents has been reported at 21%. However, the development of BP in adolescent athletes is unclear. Hence, the purpose of this study was to examine the incidence of BP in young elite athletes in relation to gender and type of sport practiced.
Methods
Subjective BP was assessed in 321 elite adolescent athletes (m/f 57%/43%; 13.2 ± 1.4 years; 163.4 ± 11.4 cm; 52.6 ± 12.6 kg; 5.0 ± 2.6 training yrs; 7.6 ± 5.3 training h/week). Initially, all athletes were free of pain. The main outcome criterion was the incidence of back pain [%], analyzed in terms of pain development from the first measurement day (M1) to the second measurement day (M2) after 2.0 ± 1.0 years. Participants were classified into athletes who developed back pain (BPD) and athletes who did not develop back pain (nBPD). BP (acute or within the last 7 days) was assessed with a 5-step face scale (faces 1-2 = no pain; faces 3-5 = pain). BPD included all athletes who reported face 1 or 2 at M1 and faces 3 to 5 at M2; nBPD were all athletes who reported face 1 or 2 at both M1 and M2. Data were analyzed descriptively. Additionally, a χ² test was used to analyze gender- and sport-specific differences (α = 0.05).
Results
Thirty-two athletes were categorized as BPD (10%). The gender difference was 5% (m/f: 12%/7%) but did not reach statistical significance (p = 0.15). The incidence of BP ranged between 6% and 15% across the different sport categories: game sports (15%) showed the highest, and explosive strength sports (6%) the lowest incidence. Anthropometric and training characteristics did not significantly influence BPD (p = 0.14 [gender] to p = 0.90 [sports]; r² = 0.0825).
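The gender comparison above is a 2×2 chi-squared test on back-pain development counts. Reconstructing approximate counts from the reported percentages (illustrative only, since the exact cell counts are not given) yields a non-significant result in the same range:

```python
from scipy.stats import chi2_contingency

# Approximate counts from the abstract: 321 athletes, 57%/43% m/f,
# with ~12% of males and ~7% of females developing back pain (BPD)
table = [[22, 161],   # male: BPD, nBPD
         [10, 128]]   # female: BPD, nBPD
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```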
Conclusions
BP incidence was lower in adolescent athletes compared to young non-athletes and even to the general adult population. Consequently, it can be concluded that high-performance sports do not lead to an additional increase in back pain incidence during early adolescence. Nevertheless, back pain prevention programs should be implemented into daily training routines for sport categories identified as showing high incidence rates.
Background: The aim of the present study was to verify the concurrent validity of the Gyko inertial sensor system for the assessment of vertical jump height.
Methods: Nineteen female sub-elite youth soccer players (mean age: 14.7 ± 0.6 years) performed three trials each of countermovement jumps (CMJ) and squat jumps (SJ). Maximal vertical jump height was simultaneously quantified with the Gyko system, a Kistler force-plate (i.e., the gold standard), and another criterion device frequently used in the field, the Optojump system.
Results: Compared to the force-plate, the Gyko system showed a significant systematic bias for mean CMJ (−0.66 cm, p < 0.01, d = 1.41) and mean SJ (−0.91 cm, p < 0.01, d = 1.69) height. Random bias was ±3.2 cm for CMJ and ±4.0 cm for SJ height, and intraclass correlation coefficients (ICCs) were "excellent" (ICC = 0.87 for CMJ and 0.81 for SJ). Compared to the Optojump device, the Gyko system showed a significant systematic bias for mean CMJ (0.55 cm, p < 0.05, d = 0.94) but not for mean SJ (0.39 cm) height. Random bias was ±3.3 cm for CMJ and ±4.2 cm for SJ height, and ICC values were "excellent" (ICC = 0.86 for CMJ and 0.82 for SJ).
Conclusion: Apparatus-specific regression equations were therefore provided to estimate true vertical jump height for the Kistler force-plate and the Optojump device from Gyko-derived data. Our findings indicate that the Gyko system cannot be used interchangeably with a Kistler force-plate or the Optojump device in trained individuals. It is suggested that practitioners apply the correction equations to estimate vertical jump height for the force-plate and the Optojump system from Gyko-derived data.
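A minimal sketch of the agreement statistics used in this validation: systematic bias as the mean between-device difference, random bias as the ±1.96-SD Bland-Altman limits of agreement, and a linear regression as the correction equation. The jump heights below are hypothetical.

```python
import numpy as np

gyko = np.array([28.1, 30.4, 25.7, 33.2, 29.0])    # cm, hypothetical Gyko heights
plate = np.array([28.9, 31.0, 26.5, 33.8, 29.5])   # cm, hypothetical force-plate heights

diff = gyko - plate
systematic_bias = diff.mean()             # negative: Gyko underestimates the plate
random_bias = 1.96 * diff.std(ddof=1)     # half-width of the limits of agreement

# Apparatus-specific correction: estimate force-plate height from Gyko output
slope, intercept = np.polyfit(gyko, plate, 1)
print(f"systematic bias {systematic_bias:.2f} cm, random bias +/-{random_bias:.2f} cm")
print(f"plate = {slope:.3f} * gyko + {intercept:.2f}")
```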
Background
In health research, indicators of socioeconomic status (SES) are often used interchangeably and often lack theoretical foundation. This makes it difficult to compare results from different studies and to explore the relationship between SES and health outcomes. To aid researchers in choosing appropriate indicators of SES, this article proposes and tests a theory-based selection of SES indicators using chronic back pain as a health outcome.
Methods
Predictions of the strength of the relationships were made using Brunner and Marmot's model of 'social determinants of health'. Subsequently, a longitudinal study was conducted with 66 patients receiving in-patient treatment for chronic back pain. Sociodemographic variables, four SES indicators (education, job position, income, a multidimensional index), and back pain intensity and disability were obtained at baseline. Both pain dimensions were assessed again 6 months later. Using linear regression, the predictive strength of each SES indicator for pain intensity and disability was estimated and compared to the theory-based prediction.
Results
Chronic back pain intensity was best predicted by the multidimensional index (beta = 0.31, p < 0.05), followed by job position (beta = 0.29, p < 0.05) and education (beta = −0.29, p < 0.05), whereas income exerted no significant influence. Back pain disability was predicted most strongly by education (beta = −0.30, p < 0.05) and job position (beta = 0.29, p < 0.05). Here, the multidimensional index and income had no significant influence.
Conclusions
The choice of SES indicator influences predictive power for both back pain dimensions, suggesting that SES predictors cannot be used interchangeably. Researchers should therefore carefully consider before each study which SES indicator to use. The introduced framework can be valuable in supporting this decision because it allows a stable prediction of the influence of SES indicators, and of their hierarchy, on a specific health outcome.
Serious knee pain and related disability have an annual prevalence of approximately 25% in those over the age of 55 years. As curative treatments for common knee problems are not available to date, knee pathologies typically progress and often lead to osteoarthritis (OA). While the roles that the meniscus plays in knee biomechanics are well characterized, the biological mechanisms underlying meniscus pathophysiology and its roles in knee pain and OA progression are not fully clear. Experimental treatments for knee disorders that are successful in animal models often produce unsatisfactory results in humans due to species differences or the inability to fully replicate disease progression in experimental animals. The use of animals with spontaneous knee pathologies, such as dogs, can significantly help to address this issue. As the microscopic and macroscopic anatomy of the canine and human menisci are similar, spontaneous meniscal pathologies in canine patients are thought to be highly relevant for translational medicine. However, it is not clear whether the biomolecular mechanisms of pain, degradation of extracellular matrix, and inflammatory responses are species dependent. The aims of this review are (1) to provide an overview of the anatomy, physiology, and pathology of the human and canine meniscus, (2) to compare the known signaling pathways involved in spontaneous meniscus pathology between both species, and (3) to assess the relevance of dogs with spontaneous meniscal pathology as a translational model. Understanding these mechanisms in the human and canine meniscus can help to advance diagnostic and therapeutic strategies for painful knee disorders and improve clinical decision making.
The use of functional music in gait training, termed rhythmic auditory stimulation (RAS), and treadmill training (TT) have both been shown to be effective in stroke patients (SP). The combination of RAS and treadmill training (RAS-TT) has not been clinically evaluated to date. The aim of the study was to evaluate the efficacy of RAS-TT on functional gait in SP. The protocol followed an explorative, rater-blinded, three-arm, prospective, randomized controlled parallel-group design. Forty-five independently walking SP with a hemiparesis of the lower limb or an unsafe and asymmetrical walking pattern were recruited. RAS-TT was carried out over 4 weeks, with TT and neurodevelopmental treatment based on the Bobath approach (NDT) serving as control interventions. For RAS-TT, functional music was adjusted individually while walking on the treadmill. Pre- and post-assessments consisted of the fast gait speed test (FGS), a gait analysis with the locometre (LOC), the 3-min walking time test (3MWT), and an instrumental evaluation of balance (IEB). Raters were blinded to group assignments. An analysis of covariance (ANCOVA) was performed with affiliated measures from the pre-assessment and the time between stroke and start of the study as covariates. Thirty-five participants (mean age 63.6 ± 8.6 years, mean time between stroke and start of study 42.1 ± 23.7 days) completed the study (11 RAS-TT, 13 TT, 11 NDT). Significant group differences occurred in the FGS for adjusted post-measures in gait velocity [F(2,34) = 3.864, p = 0.032; partial η² = 0.205] and cadence [F(2,34) = 7.656, p = 0.002; partial η² = 0.338]. Group contrasts showed significantly higher values for RAS-TT. Stride length results did not vary between the groups. LOC, 3MWT, and IEB did not indicate group differences. One patient was withdrawn from TT because of pain in one arm. The study provides first evidence for a higher efficacy of RAS-TT in comparison to the standard approaches TT and NDT in restoring functional gait in SP. The results support the implementation of functional music in neurological gait rehabilitation and its use in combination with treadmill training.
Background: Life events (LEs) are associated with future physical and mental health. They are crucial for understanding the pathways to mental disorders as well as the interactions with biological parameters. However, deeper insight is needed into the complex interplay between the type of LE, its subjective evaluation and accompanying factors such as social support. The "Stralsund Life Event List" (SEL) was developed to facilitate this research.
Methods: The SEL is a standardized interview that assesses the time of occurrence and frequency of 81 LEs, their subjective emotional valence, the perceived social support during the LE experience and the impact of past LEs on present life. Data from 2265 subjects from the general population-based cohort study "Study of Health in Pomerania" (SHIP) were analysed. Based on the mean emotional valence ratings of the whole sample, LEs were categorized as "positive" or "negative". For verification, the SEL was related to lifetime major depressive disorder (MDD; Munich Composite International Diagnostic Interview), childhood trauma (Childhood Trauma Questionnaire), resilience (Resilience Scale) and subjective health (SF-12 Health Survey).
Results: The report of lifetime MDD was associated with more negative emotional valence ratings of negative LEs (OR = 2.96, p < 0.0001). Negative LEs (b = 0.071, p < 0.0001, beta = 0.25) and more negative emotional valence ratings of positive LEs (b = 3.74, p < 0.0001, beta = 0.11) were positively associated with childhood trauma. In contrast, more positive emotional valence ratings of positive LEs were associated with higher resilience (b = -7.05, p < 0.0001, beta = 0.13), and a lower present impact of past negative LEs was associated with better subjective health (b = 2.79, p = 0.001, beta = 0.05). The internal consistency of the generated scores varied considerably, but the mean value was acceptable (averaged Cronbach's alpha > 0.75).
Conclusions: The SEL is a valid instrument that enables the analysis of the number and frequency of LEs, their emotional valence, perceived social support and current impact on life on a global score and on an individual item level. Thus, we can recommend its use in research settings that require the assessment and analysis of the relationship between the occurrence and subjective evaluation of LEs as well as the complex balance between distressing and stabilizing life experiences.
Degenerative disc disease is associated with increased expression of pro-inflammatory cytokines in the intervertebral disc (IVD). However, it is not completely clear how inflammation arises in the IVD and which cellular compartments are involved in this process. Recently, the endoplasmic reticulum (ER) has emerged as a possible modulator of inflammation in age-related disorders. In addition, ER stress has been associated with the microenvironment of degenerated IVDs. Therefore, the aim of this study was to analyze the effects of ER stress on inflammatory responses in degenerated human IVDs and the associated molecular mechanisms. Gene expression of the ER stress marker GRP78 and the pro-inflammatory cytokines IL-6, IL-8, IL-1β, and TNF-α was analyzed in human surgical IVD samples (n = 51, Pfirrmann grade 2-5). The expression of GRP78 positively correlated with the degeneration grade in lumbar IVDs and with IL-6, but not with IL-1β and TNF-α. Another set of human surgical IVD samples (n = 25) was used to prepare primary cell cultures. The ER stress inducer thapsigargin (Tg, 100 and 500 nM) activated gene and protein expression of IL-6 and induced phosphorylation of p38 MAPK. Both inhibition of p38 MAPK by SB203580 (10 µM) and knockdown of the ER stress effector CCAAT-enhancer-binding protein homologous protein (CHOP) reduced gene and protein expression of IL-6 in Tg-treated cells. Furthermore, the effects of an inflammatory microenvironment on ER stress were tested. TNF-α (5 and 10 ng/mL) did not activate ER stress, while IL-1β (5 and 10 ng/mL) activated gene and protein expression of GRP78 but did not influence intracellular Ca2+ ([Ca2+]i) flux and expression of CHOP, indicating that pro-inflammatory cytokines alone may not induce ER stress in vivo. This study showed that IL-6 release in the IVD can be initiated following ER stress and that ER stress mediates IL-6 release through p38 MAPK and CHOP. Therapeutic targeting of the ER stress response may reduce the consequences of the harsh microenvironment in degenerated IVDs.
This study aimed to determine the specific physical and basic gymnastics skills considered critical in gymnastics talent identification and selection as well as in promoting men’s artistic gymnastics performances. Fifty-one boys from a provincial gymnastics team (age 11.03 ± 0.95 years; height 1.33 ± 0.05 m; body mass 30.01 ± 5.53 kg; body mass index [BMI] 16.89 ± 3.93 kg/m²) regularly competing at national level voluntarily participated in this study. Anthropometric measures as well as the men’s artistic gymnastics physical test battery (i.e., International Gymnastics Federation [FIG] age group development programme) were used to assess the somatic and physical fitness profile of participants, respectively. The physical characteristics assessed were: muscle strength, flexibility, speed, endurance, and muscle power. Test outcomes were subjected to a principal components analysis to identify the most representative factors. The main findings revealed that power speed, isometric and explosive strength, strength endurance, and dynamic and static flexibility are the most determinant physical fitness aspects of the talent selection process in young male artistic gymnasts. These findings are of utmost importance for talent identification, selection, and development.
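The principal components analysis named above can be illustrated with a short sketch; the file and column layout (one row per gymnast, one numeric column per fitness test) are assumptions, not the study's data.

```python
# Minimal PCA sketch for condensing a physical test battery into factors.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

tests = pd.read_csv("fig_test_battery.csv")      # assumed: numeric test columns
scores = StandardScaler().fit_transform(tests)   # z-standardize mixed units

pca = PCA().fit(scores)
print(pca.explained_variance_ratio_)             # variance share per component
loadings = pd.DataFrame(pca.components_.T, index=tests.columns)
print(loadings.round(2))                         # which tests define each factor
```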
The relevance of in vitro three-dimensional (3D) tissue culture of skin has been recognized for almost a century. From using skin biopsies in organ culture to vascularized organotypic full-thickness reconstructed human skin equivalents, in vitro tissue regeneration of 3D skin has reached a golden era. However, the reconstruction of 3D skin still has room to grow and develop. The need for reproducible methodology, physiological structures and tissue architecture, and perfusable vasculature is only recently becoming a reality, though the addition of more complex structures such as glands and tactile corpuscles requires advanced technologies. In this review, we discuss the current methodology for biofabrication of 3D skin models and highlight the advantages and disadvantages of the existing systems, as well as emphasize how new techniques can aid in the production of a truly physiologically relevant skin construct for preclinical innovation.
Intervertebral disc (IVD) cells are naturally exposed to high osmolarity and complex mechanical loading, which drive microenvironmental osmotic changes. Age- and degeneration-induced degradation of the IVD's extracellular matrix causes osmotic imbalance, which, together with an altered function of cellular receptors and signalling pathways, instigates local osmotic stress. Cellular responses to osmotic stress include osmoadaptation and activation of pro-inflammatory pathways. This review summarises the current knowledge on how IVD cells sense local osmotic changes and translate these signals into physiological or pathophysiological responses, with a focus on inflammation. Furthermore, it discusses the expression and function of putative membrane osmosensors (e.g. solute carrier transporters, transient receptor potential channels, aquaporins and acid-sensing ion channels) and osmosignalling mediators [e.g. tonicity response element-binding protein/nuclear factor of activated T-cells 5 (TonEBP/NFAT5), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB)] in healthy and degenerated IVDs. Finally, an overview of the potential therapeutic targets for modifying osmosensing and osmosignalling in degenerated IVDs is provided.
Objective: The aim of the present study was to examine the effect of Cold Water Immersion (CWI) on the recovery of physical performance, hematological stress markers and perceived wellness (i.e., Hooper scores) following a simulated Mixed Martial Arts (MMA) competition.
Methods: Participants completed two experimental sessions in a counter-balanced order (CWI, or passive recovery as the control condition: CON) after a simulated MMA competition (3 x 5-min MMA rounds separated by 1 min of passive rest). During CWI, athletes were required to submerge their bodies, except the trunk, neck and head, in the seated position in a temperature-controlled bath (approximately 10 °C) for 15 min. During CON, athletes were required to remain in a seated position for 15 min in the same room at ambient temperature. Venous blood samples (creatine kinase, cortisol, and testosterone concentrations) were collected at rest (PRE-EX, i.e., before the MMA competition), immediately following the competition (POST-EX), immediately following recovery (POST-R) and 24 h post-competition (POST-24), whilst physical fitness (squat jump, countermovement jump and 5- and 10-m sprints) and perceptual measures (well-being Hooper index: fatigue, stress, delayed onset muscle soreness (DOMS), and sleep) were collected at PRE-EX, POST-R and POST-24, and at PRE-EX and POST-24, respectively.
Results: The main results indicate that POST-R sprint (5- and 10-m) performances were 'likely to very likely' (d = 0.64 and 0.65) impaired by prior CWI. However, moderate improvements in 10-m sprint performance were 'likely' evident at POST-24 after CWI compared with CON (d = 0.53). Additionally, the use of CWI 'almost certainly' resulted in a large overall improvement in Hooper scores (d = 1.93). Specifically, CWI 'almost certainly' resulted in improved sleep quality (d = 1.36), stress (d = 1.56) and perceived fatigue (d = 1.51), and 'likely' resulted in a moderate decrease in DOMS (d = 0.60).
Conclusion: The use of CWI resulted in an enhanced recovery of 10-m sprint performance, as well as improved perceived wellness 24-h following simulated MMA competition.
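The d values above are Cohen's d effect sizes; a minimal sketch of the standard pooled-SD computation follows, with placeholder numbers rather than the study's measurements.

```python
# Cohen's d with a pooled standard deviation for a CWI vs. CON contrast.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

sprint_con = np.array([2.05, 1.99, 2.10, 2.02])  # hypothetical 10-m times (s)
sprint_cwi = np.array([1.95, 1.88, 2.01, 1.92])
print(f"d = {cohens_d(sprint_con, sprint_cwi):.2f}")  # positive: CON slower
```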
Background Low back pain (LBP) is a common pain syndrome in athletes, responsible for 28% of missed training days/year. Psychosocial factors contribute to chronic pain development. This study aims to investigate the transferability of psychosocial screening tools developed in the general population to athletes and to define athlete-specific thresholds.
Methods Data from a prospective multicentre study on LBP were collected at baseline and 1-year follow-up (n=52 athletes, n=289 recreational athletes and n=246 non-athletes). Pain was assessed using the Chronic Pain Grade questionnaire. The psychosocial Risk Stratification Index (RSI) was used to obtain prognostic information regarding the risk of chronic LBP (CLBP). Individual psychosocial risk profile was gained with the Risk Prevention Index – Social (RPI-S). Differences between groups were calculated using general linear models and planned contrasts. Discrimination thresholds for athletes were defined with receiver operating characteristics (ROC) curves.
Results Athletes and recreational athletes showed significantly lower psychosocial risk profiles and prognostic risk for CLBP than non-athletes. ROC curves suggested that discrimination thresholds for athletes differ from those for non-athletes. Both screenings demonstrated very good sensitivity (RSI: 100%; RPI-S: 75%–100%) and specificity (RSI: 76%–93%; RPI-S: 71%–93%). The RSI revealed two risk classes for pain intensity (area under the curve (AUC) 0.92 (95% CI 0.85 to 1.0)) and pain disability (AUC 0.88 (95% CI 0.71 to 1.0)).
Conclusions Both screening tools can be used for athletes. Athlete-specific thresholds will improve physicians’ decision making and allow stratified treatment and prevention.
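Athlete-specific cut-offs like those above are usually derived from the ROC curve. A hedged sketch follows, using one common criterion (Youden's J, an assumption here, as the paper's exact criterion is not stated) and hypothetical file and column names.

```python
# ROC-based threshold sketch: AUC plus a Youden-J cut-off with its
# sensitivity/specificity. Data file and columns are assumptions.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("athlete_screening.csv")  # assumed: rsi_score, clbp (0/1)
fpr, tpr, thr = roc_curve(df["clbp"], df["rsi_score"])
print("AUC:", roc_auc_score(df["clbp"], df["rsi_score"]))

j = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"threshold {thr[j]:.2f}: sensitivity {tpr[j]:.0%}, specificity {1 - fpr[j]:.0%}")
```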
Long-distance race car drivers are classified as athletes. The sport is physically and mentally demanding, requiring long hours of practice. Therefore, optimal dietary intake is essential for the health and performance of the athlete. The aim of the study was to evaluate dietary intake and to compare the data with dietary recommendations for athletes and for the general adult population according to the German Nutrition Society (DGE). A 24-h dietary recall during a competition preparation phase was obtained from 16 male race car drivers (28.3 ± 6.1 years, body mass index (BMI) of 22.9 ± 2.3 kg/m²). The mean intake of energy, nutrients, water and alcohol was recorded. The mean energy, vitamin B2, vitamin E, folate, fiber, calcium, water and alcohol intakes were 2124 ± 814 kcal/day, 1.3 ± 0.5 mg/day, 12.5 ± 9.5 mg/day, 231.0 ± 90.9 µg/day, 21.4 ± 9.4 g/day, 1104 ± 764 mg/day, 3309 ± 1522 mL/day and 0.8 ± 2.5 mL/day, respectively. Our study indicated that the intakes of many of the nutrients studied, including energy and carbohydrate, were below the recommended dietary intake both for athletes and according to the DGE reference values.
Purpose
To test whether the negative relationship between perceived stress and quality of life (Hypothesis 1) can be buffered by perceived social support in patients with dementia as well as in caregivers individually (Hypothesis 2: actor effects) and across partners (Hypothesis 3: partner effects and actor-partner effects).
Method
A total of 108 couples (N = 216 individuals) comprised of one individual with early-stage dementia and one caregiving partner were assessed at baseline and one month apart. Moderation effects were investigated by applying linear mixed models and actor-partner interdependence models.
Results
Although the stress-quality of life association was more pronounced in caregivers (beta = -.63, p < .001) compared to patients (beta = -.31, p < .001), this association was equally moderated by social support in patients (beta = .14, p < .05) and in caregivers (beta = .13, p < .05). Partner-buffering and actor-partner-buffering effects from one partner to his or her counterpart were not present.
Conclusion
The stress-buffering effect has been replicated in individuals with dementia and caregivers but not across partners. Interventions to improve quality of life through perceived social support should not only focus on caregivers, but should incorporate both partners.
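The stress-buffering test above amounts to a stress x support interaction. A simplified sketch with a random intercept per couple follows (the published analysis additionally used actor-partner interdependence models); all variable names are assumptions.

```python
# Simplified moderation sketch: quality of life regressed on stress, support,
# their interaction, and role (patient vs. caregiver), with couples as the
# grouping factor. Names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

dyads = pd.read_csv("dyads.csv")  # assumed: couple_id, role, stress, support, qol
m = smf.mixedlm("qol ~ stress * support + C(role)",
                dyads, groups=dyads["couple_id"]).fit()
print(m.summary())  # a positive stress:support term indicates buffering
```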
Combining training of muscle strength and cardiorespiratory fitness within a training cycle could increase athletic performance more than single-mode training. However, the physiological effects produced by each training modality could also interfere with each other, improving athletic performance less than single-mode training. Because anthropometric, physiological, and biomechanical differences between young and adult athletes can affect the responses to exercise training, young athletes might respond differently to concurrent training (CT) compared with adults. Thus, the aim of the present systematic review with meta-analysis was to determine the effects of concurrent strength and endurance training on selected physical fitness components and athletic performance in youth. A systematic literature search of PubMed and Web of Science identified 886 records. The studies included in the analyses examined children (girls age 6–11 years, boys age 6–13 years) or adolescents (girls age 12–18 years, boys age 14–18 years), compared CT with single-mode endurance (ET) or strength training (ST), and reported at least one strength/power-related (e.g., jump height), endurance-related (e.g., peak VO2, exercise economy), or performance-related (e.g., time trial) outcome. We calculated weighted standardized mean differences (SMDs). CT compared to ET produced small effects in favor of CT on athletic performance (n = 11 studies, SMD = 0.41, p = 0.04) and trivial effects on cardiorespiratory endurance (n = 4 studies, SMD = 0.04, p = 0.86) and exercise economy (n = 5 studies, SMD = 0.16, p = 0.49) in young athletes. A sub-analysis of chronological age revealed a trend toward larger effects of CT vs. ET on athletic performance in adolescents (SMD = 0.52) compared with children (SMD = 0.17). CT compared with ST had small effects in favor of CT on muscle power (n = 4 studies, SMD = 0.23, p = 0.04). In conclusion, CT is more effective than single-mode ET or ST in improving selected measures of physical fitness and athletic performance in youth. Specifically, CT compared with ET improved athletic performance in children and particularly adolescents. Finally, CT was more effective than ST in improving muscle power in youth.
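The weighted SMD pooling mentioned above can be sketched with inverse-variance weights. This is a minimal fixed-effect version with placeholder numbers; the review may well have used a random-effects model.

```python
# Inverse-variance pooling of per-study standardized mean differences.
import numpy as np

smd = np.array([0.62, 0.35, 0.18, 0.55])  # hypothetical CT vs. ET SMDs
var = np.array([0.08, 0.05, 0.10, 0.06])  # their sampling variances

w = 1.0 / var
pooled = np.sum(w * smd) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```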
Background:
It has previously been shown that conditioning activities consisting of repetitive hops have the potential to induce better drop jump (DJ) performance in recreationally active individuals. In the present pilot study, we investigated whether repetitive conditioning hops can also increase reactive jump and sprint performance in sprint-trained elite athletes competing at an international level.
Methods:
Jump and sprint performances of 5 athletes were randomly assessed under 2 conditions. The control condition (CON) comprised 8 DJs and 4 trials of 30-m sprints. The intervention condition (HOP) consisted of 10 maximal repetitive two-legged hops that were conducted 10 s prior to each single DJ and sprint trial. DJ performance was analyzed using a one-dimensional ground reaction force plate. Step length (SL), contact time (CT), and sprint time (ST) during the 30-m sprints were recorded using an opto-electronic measurement system.
Results:
Following the conditioning activity, DJ height and external DJ peak power were both significantly increased by 11% compared to the control condition. All other variables did not show any significant differences between HOP and CON.
Conclusions:
In the present pilot study, we were able to demonstrate large improvements in DJ performance even in sprint-trained elite athletes following a conditioning activity consisting of maximal two-legged repetitive hops. This strengthens the hypothesis that plyometric conditioning exercises can induce performance enhancements in elite athletes that are even greater than those observed in recreationally active athletes. In addition, it appears that the transfer of these effects to other stretch-shortening cycle activities is limited, as we did not observe any changes in sprint performance following the plyometric conditioning activity.
This study aimed at examining physiological responses (i.e., oxygen uptake [VO2] and heart rate [HR]) to a semi-contact 3 x 3-min format, amateur boxing combat simulation in elite-level male boxers. Eleven boxers aged 21.4 ± 2.1 years (body height 173.4 ± 3.7 cm, body mass 74.9 ± 8.6 kg, body fat 12.1 ± 1.9%, training experience 5.7 ± 1.3 years) volunteered to participate in this study. They performed a maximal graded aerobic test on a motor-driven treadmill to determine maximum oxygen uptake (VO2max), oxygen uptake (VO2AT) and heart rate (HRAT) at the anaerobic threshold, and maximal heart rate (HRmax). Additionally, VO2 and peak HR (HRpeak) were recorded following each boxing round. Results showed no significant differences between VO2max values derived from the treadmill running test and VO2 outcomes of the simulated boxing contest (p > 0.05, d = 0.02 to 0.39). However, HRmax and HRpeak recorded from the treadmill running test and the simulated amateur boxing contest, respectively, displayed significant differences regardless of the boxing round (p < 0.01, d = 1.60 to 3.00). In terms of VO2 outcomes during the simulated contest, no significant between-round differences were observed (p = 0.19, d = 0.17 to 0.73). Irrespective of the boxing round, the recorded VO2 was >90% of the VO2max. Likewise, HRpeak values observed across the three boxing rounds were ≥90% of the HRmax. In summary, the simulated 3 x 3-min amateur boxing contest is highly demanding from a physiological standpoint. Thus, coaches are advised to systematically monitor internal training load, for instance through ratings of perceived exertion, to optimize training-related adaptations and to prevent boxers from overreaching and/or overtraining.
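The relative-intensity statements above (>90% of VO2max, ≥90% of HRmax) reduce to simple ratios against the treadmill-derived maxima; a tiny sketch with illustrative numbers, not the boxers' measurements:

```python
# Express per-round VO2 and peak HR as percentages of laboratory maxima.
vo2max, hrmax = 58.0, 196         # hypothetical treadmill values (ml/kg/min, bpm)
round_vo2 = [53.4, 54.1, 53.0]    # hypothetical per-round VO2
round_hr = [181, 184, 186]        # hypothetical per-round peak HR

for i, (v, h) in enumerate(zip(round_vo2, round_hr), start=1):
    print(f"round {i}: {v / vo2max:.0%} VO2max, {h / hrmax:.0%} HRmax")
```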
Introduction
Injury prevention programs (IPPs) are an inherent part of training in recreational and professional sports. Providing performance-enhancing benefits in addition to injury prevention may help adjust coaches' and athletes' attitudes towards implementation of injury prevention into the daily routine. Conventional thinking by players and coaches alike seems to suggest that IPPs need to be specific to one's sport to allow for performance enhancement. This systematic literature review therefore aims, first, to determine the nature of the exercises in IPPs, i.e., whether they are specific to the sport or based on general conditioning, and second, to establish whether general, sports-specific or mixed IPPs improve key performance indicators, with the goal of better facilitating long-term implementation of these programs.
Methods
PubMed and Web of Science were searched electronically throughout March 2018. The inclusion criteria were: randomized controlled trials, publication dates between Jan 2006 and Feb 2018, athletes (11–45 years), injury prevention programs, and predefined performance measures that could be categorized into balance, power, strength, speed/agility and endurance. The methodological quality of included articles was assessed with the Cochrane Collaboration assessment tools.
Results
Of 6619 initial findings, 22 studies met the inclusion criteria. In addition, reference lists unearthed a further 6 studies, making a total of 28. Nine studies used sports-specific IPPs, eleven general and eight mixed prevention strategies. Overall, general programs ranged from 29–57% in their effectiveness across performance outcomes. Mixed IPPs improved balance outcomes in 80% of cases but only 20–44% of outcomes in the other categories. Sports-specific programs led to larger-scale improvements in balance (66%), power (83%), strength (75%), and speed/agility (62%).
Conclusion
Sports-specific IPPs have the strongest influence on most performance indices, based on significant improvements versus control groups. Other factors such as intensity, technical execution and compliance should be accounted for in future investigations, in addition to exercise modality.
In animals and humans, behavior can be influenced by irrelevant stimuli, a phenomenon called Pavlovian-to-instrumental transfer (PIT). In subjects with substance use disorder, PIT is even enhanced, with functional activation in the nucleus accumbens (NAcc) and amygdala. Having observed enhanced behavioral and neural PIT effects in alcohol-dependent subjects, we here aimed to determine whether behavioral PIT is enhanced in young men with high-risk compared to low-risk drinking and subsequently related to functional activation in an a-priori region of interest encompassing the NAcc and amygdala, and to polygenic risk for alcohol consumption. A representative sample of 18-year-old men (n = 1937) was contacted: 445 were screened and 209 assessed, resulting in 191 valid behavioral, 139 imaging and 157 genetic datasets. None of the subjects fulfilled the criteria for alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders-IV-Text Revision (DSM-IV-TR). We measured how instrumental responding for rewards was influenced by background Pavlovian conditioned stimuli predicting action-independent rewards and losses. Behavioral PIT was enhanced in high- compared to low-risk drinkers (b = 0.09, SE = 0.03, z = 2.7, p < 0.009). Across all subjects, we observed a PIT-related neural blood oxygen level-dependent (BOLD) signal in the right amygdala (t = 3.25, p(SVC) = 0.04, x = 26, y = -6, z = -12), but not in the NAcc. The strength of the behavioral PIT effect was positively correlated with polygenic risk for alcohol consumption (r_s = 0.17, p = 0.032). We conclude that behavioral PIT and polygenic risk for alcohol consumption might be a biomarker for a subclinical phenotype of risky alcohol consumption, even if no drug-related stimulus is present. The association between behavioral PIT effects and the amygdala might point to habitual processes related to our PIT task. In non-dependent young social drinkers, the amygdala rather than the NAcc is activated during PIT; a possibly different involvement in association with disease trajectory should be investigated in future studies.
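The rank correlation reported above (r_s) is a Spearman coefficient; a minimal sketch with placeholder arrays rather than the cohort's data:

```python
# Spearman correlation between per-subject PIT strength and polygenic risk.
from scipy.stats import spearmanr

pit_effect = [0.05, 0.12, 0.08, 0.20, 0.01]  # hypothetical PIT slopes
prs = [-0.3, 0.4, 0.1, 0.9, -0.5]            # hypothetical polygenic risk scores
rho, p = spearmanr(pit_effect, prs)
print(f"r_s = {rho:.2f}, p = {p:.3f}")
```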
The aim of this study is to monitor short-term seasonal development of young Olympic weightlifters’ anthropometry, body composition, physical fitness, and sport-specific performance. Fifteen male weightlifters aged 13.2 ± 1.3 years participated in this study. Tests for the assessment of anthropometry (e.g., body-height, body-mass), body-composition (e.g., lean-body-mass, relative fat-mass), muscle strength (grip-strength), jump performance (drop-jump (DJ) height, countermovement-jump (CMJ) height, DJ contact time, DJ reactive-strength-index (RSI)), dynamic balance (Y-balance-test), and sport-specific performance (i.e., snatch and clean-and-jerk) were conducted at different time-points (i.e., T1 (baseline), T2 (9 weeks), T3 (20 weeks)). Strength tests (i.e., grip strength, clean-and-jerk and snatch) and training volume were normalized to body mass. Results showed small-to-large increases in body-height, body-mass, lean-body-mass, and lower-limbs lean-mass from T1-to-T2 and T2-to-T3 (∆0.7–6.7%; 0.1 ≤ d ≤ 1.2). For fat-mass, a significant small-sized decrease was found from T1-to-T2 (∆13.1%; d = 0.4) and a significant increase from T2-to-T3 (∆9.1%; d = 0.3). A significant main effect of time was observed for DJ contact time (d = 1.3) with a trend toward a significant decrease from T1-to-T2 (∆–15.3%; d = 0.66; p = 0.06). For RSI, significant small increases from T1-to-T2 (∆9.9%, d = 0.5) were noted. Additionally, a significant main effect of time was found for snatch (d = 2.7) and clean-and-jerk (d = 3.1) with significant small-to-moderate increases for both tests from T1-to-T2 and T2-to-T3 (∆4.6–11.3%, d = 0.33 to 0.64). The other tests did not change significantly over time (0.1 ≤ d ≤ 0.8). Results showed significantly higher training volume for sport-specific training during the second period compared with the first period (d = 2.2). Five months of Olympic weightlifting contributed to significant changes in anthropometry, body-composition, and sport-specific performance. However, hardly any significant gains were observed for measures of physical fitness. Coaches are advised to design training programs that target a variety of fitness components to lay an appropriate foundation for later performance as an elite athlete.
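The normalization step above (strength and training volume expressed relative to body mass) is plain ratio scaling; a small sketch with invented numbers, not the athletes' data:

```python
# Ratio-scaled strength and its percent change across two time points.
def relative_strength(load_kg: float, body_mass_kg: float) -> float:
    """Kilograms lifted per kilogram of body mass."""
    return load_kg / body_mass_kg

snatch_t1 = relative_strength(42.5, 48.0)  # hypothetical baseline
snatch_t3 = relative_strength(50.0, 51.5)  # hypothetical 20-week value
change = (snatch_t3 - snatch_t1) / snatch_t1 * 100
print(f"relative snatch: {snatch_t1:.2f} -> {snatch_t3:.2f} kg/kg ({change:+.1f}%)")
```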
Introduction
To date, several meta-analyses clearly demonstrated that resistance and plyometric training are effective to improve physical fitness in children and adolescents. However, a methodological limitation of meta-analyses is that they synthesize results from different studies and hence ignore important differences across studies (i.e., mixing apples and oranges). Therefore, we aimed at examining comparative intervention studies that assessed the effects of age, sex, maturation, and resistance or plyometric training descriptors (e.g., training intensity, volume etc.) on measures of physical fitness while holding other variables constant.
Methods
To identify relevant studies, we systematically searched multiple electronic databases (e.g., PubMed) from inception to March 2018. We included resistance and plyometric training studies in healthy young athletes and non-athletes aged 6 to 18 years that investigated the effects of moderator variables (e.g., age, maturity, sex, etc.) on components of physical fitness (i.e., muscle strength and power).
Results
Our systematic literature search revealed a total of 75 eligible resistance and plyometric training studies, including 5,138 participants. Mean duration of resistance and plyometric training programs amounted to 8.9 ± 3.6 weeks and 7.1 ± 1.4 weeks, respectively. Our findings showed that maturation affects plyometric and resistance training outcomes differently, with the former eliciting greater adaptations pre-peak height velocity (PHV) and the latter around- and post-PHV. Sex has no major impact on resistance training-related outcomes (e.g., maximal strength, 10 repetition maximum). In terms of plyometric training, around-PHV boys appear to respond with larger performance improvements (e.g., jump height, jump distance) compared with girls. Different types of resistance training (e.g., body weight, free weights) are effective in improving measures of muscle strength (e.g., maximum voluntary contraction) in untrained children and adolescents. Effects of plyometric training in untrained youth primarily follow the principle of training specificity. Despite the fact that only 6 out of 75 comparative studies investigated resistance or plyometric training in trained individuals, positive effects were reported in all 6 studies (e.g., maximum strength and vertical jump height, respectively).
Conclusions
The present review article identified research gaps (e.g., training descriptors, modern alternative training modalities) that should be addressed in future comparative studies.
This meta-analysis aimed to assess the effects of plyometric jump training (PJT) on volleyball players’ vertical jump height (VJH), comparing changes with those observed in a matched control group. A literature search in the databases of PubMed, MEDLINE, Web of Science, and SCOPUS was conducted. Only randomized controlled trials and studies that included a pre-to-post intervention assessment of VJH were included. They involved only healthy volleyball players with no restrictions on age or sex. Data were independently extracted from the included studies by two authors. The Physiotherapy Evidence Database scale was used to assess the risk of bias, and methodological quality, of eligible studies included in the review. From 7,081 records, 14 studies were meta-analysed. A moderate Cohen’s d effect size (ES = 0.82, p < 0.001) was observed for VJH, with moderate heterogeneity (I² = 34.4%, p = 0.09) and no publication bias (Egger’s test, p = 0.59). Analyses of moderator variables revealed no significant differences for PJT program duration (≤8 vs. >8 weeks, ES = 0.79 vs. 0.87, respectively), frequency (≤2 vs. >2 sessions/week, ES = 0.83 vs. 0.78, respectively), total number of sessions (≤16 vs. >16 sessions, ES = 0.73 vs. 0.92, respectively), sex (female vs. male, ES = 1.3 vs. 0.5, respectively), age (≥19 vs. <19 years of age, ES = 0.89 vs. 0.70, respectively), and volume (>2,000 vs. <2,000 jumps, ES = 0.76 vs. 0.79, respectively). In conclusion, PJT appears to be effective in inducing improvements in volleyball players’ VJH. Improvements in VJH may be achieved by both male and female volleyball players, in different age groups, with programs of relatively low volume and frequency. Though PJT seems to be safe for volleyball players, it is recommended that an individualized approach, according to player position, is adopted, with some players (e.g. libero) less prepared to sustain PJT loads.
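The heterogeneity statistic I² reported above derives from Cochran's Q; a short sketch with placeholder effect sizes and variances:

```python
# Cochran's Q and I2 for a set of study effect sizes (placeholder inputs).
import numpy as np

es = np.array([0.9, 0.7, 1.1, 0.6, 0.8])
var = np.array([0.10, 0.08, 0.12, 0.07, 0.09])

w = 1.0 / var
pooled = np.sum(w * es) / np.sum(w)
q = np.sum(w * (es - pooled) ** 2)      # Cochran's Q
dfree = len(es) - 1
i2 = max(0.0, (q - dfree) / q) * 100    # % of variability beyond sampling error
print(f"Q = {q:.2f}, I2 = {i2:.1f}%")
```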
We are glad to introduce the Second Journal Club of Volume Five, Second Issue. This edition focuses on relevant studies published in the last few years in the field of resistance training, chosen by our Editorial Board members and their colleagues. We hope to stimulate your curiosity in this field and to share with you our passion for sport, seen also from the scientific point of view. The Editorial Board members wish you an inspiring read.
Objective:
Depression and coronary heart disease (CHD) are highly comorbid conditions. Brain-derived neurotrophic factor (BDNF) plays an important role in cardiovascular processes. Depressed patients typically show decreased BDNF concentrations. We analysed the relationship between BDNF and depression in a sample of patients with CHD and additionally distinguished between cognitive-affective and somatic depression symptoms. We also investigated whether BDNF was associated with somatic comorbidity burden, acute coronary syndrome (ACS) or congestive heart failure (CHF).
Methods:
The following variables were assessed for 225 hospitalised patients with CHD: BDNF concentrations, depression [Patient Health Questionnaire-9 (PHQ-9)], somatic comorbidity (Charlson Comorbidity Index), CHF, ACS, platelet count, smoking status and antidepressant treatment.
Results:
Regression models revealed that BDNF was not associated with severity of depression. Although depressed patients (PHQ-9 score >7) had significantly lower BDNF concentrations compared to non-depressed patients (p = 0.04), this difference was not statistically significant after controlling for confounders (p = 0.15). Cognitive-affective symptoms and somatic comorbidity burden each narrowly missed a statistically significant association with BDNF concentrations (p = 0.08 and p = 0.06, respectively). BDNF was reduced in patients with CHF (p = 0.02). There was no covariate-adjusted, significant association between BDNF and ACS.
Conclusion:
Serum BDNF concentrations are associated with cardiovascular dysfunction. Somatic comorbidities should be considered when investigating the relationship between depression and BDNF.
Background: The back pain screening tool Risk-Prevention-Index Social (RPI-S) identifies the individual psychosocial risk for low back pain chronification and supports the allocation of at-risk patients to additional multidisciplinary treatments. The study objectives were to evaluate (1) the prognostic validity of the RPI-S over a 6-month time frame and (2) the clinical benefit of the RPI-S.
Methods: In a multicenter, single-blind, 3-armed randomized controlled trial, n = 660 persons (age 18–65 years) were randomly assigned to a twelve-week uni- or multidisciplinary exercise intervention or to a control group. Psychosocial risk was assessed with the RPI-S domain social environment (RPI-SSE), and the pain outcome with the Chronic Pain Grade Questionnaire (baseline M1, 12 weeks M4, 24 weeks M5). Prognostic validity was quantified by the root mean squared error (RMSE) within the control group. The clinical benefit of the RPI-SSE was calculated by repeated-measures ANOVA in the intervention groups.
Results: A subsample of n = 274 participants (mean age 38.0 years, SD 13.1) was analyzed, of which 30% were classified as at risk based on their psychosocial profile. The half-year prognostic validity was good (RMSE for disability of 9.04 at M4 and 9.73 at M5; RMSE for pain intensity of 12.45 at M4 and 14.49 at M5). People at risk showed a significantly stronger reduction in pain disability and intensity at M4/M5 if they participated in the multidisciplinary exercise treatment. Subjects at no risk showed a smaller reduction in pain disability in both interventions and no group differences for pain intensity. Regarding disability due to pain, around 41% of the sample would have received an unsuitable treatment without the back pain screening.
Conclusion: The RPI-SSE demonstrated good prognostic validity and applicability, and its clinical benefit was confirmed by the clear advantage of individualized treatment allocation.
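The RMSE used above as the prognostic-validity criterion is straightforward to compute; a minimal sketch, assuming a hypothetical control-group file with predicted and observed scores:

```python
# Root mean squared error between predicted and observed pain outcomes.
import numpy as np
import pandas as pd

ctrl = pd.read_csv("control_group.csv")  # assumed: disability_pred, disability_obs
rmse = np.sqrt(np.mean((ctrl["disability_pred"] - ctrl["disability_obs"]) ** 2))
print(f"RMSE = {rmse:.2f}")
```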