Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of the risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in machine learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, the Azure Kinect, compared to its predecessor, the Kinect v2, during treadmill walking, using a gold-standard Vicon multi-camera motion capture system and the 39-marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold-standard system. Our results showed that the improved hardware and the motion tracking algorithm of the Azure Kinect camera led to a significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor the progress during gait rehabilitation in older people.
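The abstract above does not spell out the calibration procedure. A common building block for spatially aligning skeleton data from two camera systems is a least-squares rigid transform between corresponding 3D points (the Kabsch algorithm); the sketch below illustrates that general technique only and is not the authors' actual implementation:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (Kabsch algorithm).

    source, target: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) such that
    target ~= source @ R.T + t.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```

Applying such a transform to the Kinect skeleton joints maps them into the Vicon coordinate frame, after which joint-position agreement can be compared directly.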
Dropping Out or Keeping Up?
(2016)
The aim of this study was to examine how automatic evaluations of exercising (AEE) varied according to adherence to an exercise program. Eighty-eight participants (24.98 ± 6.88 years; 51.1% female) completed a Brief Implicit Association Test assessing their AEE, i.e., positive and negative associations to exercising, at the beginning of a 3-month exercise program. Attendance data were collected for all participants and used in a cluster analysis of adherence patterns. Three different adherence patterns (52 maintainers, 16 early dropouts, 20 late dropouts; 40.91% overall dropouts) were detected. Participants from these three clusters differed significantly with regard to their positive and negative associations to exercising before the first course meeting (η2p = 0.07). Discriminant function analyses revealed that positive associations to exercising were a particularly good discriminating factor. This is the first study to provide evidence of the differential impact of positive and negative associations on exercise behavior over the medium term. The findings contribute to theoretical understanding of evaluative processes from a dual-process perspective and may provide a basis for targeted interventions.
Long-distance race car drivers are classified as athletes. The sport is physically and mentally demanding, requiring long hours of practice. Therefore, optimal dietary intake is essential for the health and performance of the athlete. The aim of the study was to evaluate dietary intake and to compare the data with dietary recommendations for athletes and for the general adult population according to the German Nutrition Society (DGE). A 24-h dietary recall during a competition preparation phase was obtained from 16 male race car drivers (28.3 ± 6.1 years, body mass index (BMI) of 22.9 ± 2.3 kg/m2). The mean intake of energy, nutrients, water and alcohol was recorded. The mean energy, vitamin B2, vitamin E, folate, fiber, calcium, water and alcohol intake were 2124 ± 814 kcal/day, 1.3 ± 0.5 mg/day, 12.5 ± 9.5 mg/day, 231.0 ± 90.9 µg/day, 21.4 ± 9.4 g/day, 1104 ± 764 mg/day, 3309 ± 1522 mL/day and 0.8 ± 2.5 mL/day, respectively. Our study indicated that the intake of many of the nutrients studied, including energy and carbohydrate, was below the recommended dietary intake for both athletes and the DGE reference population.
Background Recent shoulder injury prevention programs have utilized resistance exercises combined with different forms of instability, with the goal of eliciting functional adaptations and thereby reducing the risk of injury. However, it is still unknown how an unstable weight mass (UWM) affects the muscular activity of the shoulder stabilizers. The aim of the study was to assess the neuromuscular activity of the dynamic shoulder stabilizers under four conditions of stable and unstable weight mass during three shoulder exercises. It was hypothesized that a combined condition of weight and UWM would elicit greater activation due to the increased stabilization demand. Methods Sixteen participants (7 m/9 f) were included in this cross-sectional study and prepared with an EMG setup for the Mm. upper/lower trapezius (U.TA/L.TA), lateral deltoid (DE), latissimus dorsi (LD), serratus anterior (SA) and pectoralis major (PE). A maximal voluntary isometric contraction test (MVIC; 5 s) was performed on an isokinetic dynamometer. Next, internal/external rotation (In/Ex), abduction/adduction (Ab/Ad) and diagonal flexion/extension (F/E) exercises (5 reps) were performed with four custom-made pipes representing different exercise conditions: first the empty pipe (P; 0.5 kg) and then, in randomized order, the water-filled pipe (PW; 1 kg), the weight pipe (PG; 4.5 kg) and the weight + water-filled pipe (PWG; 4.5 kg), while EMG was recorded. Raw root-mean-square values (RMS) were normalized to MVIC (%MVIC). Differences between conditions for RMS%MVIC, scapular stabilizer (SR: U.TA/L.TA; U.TA/SA) and contraction (CR: concentric/eccentric) ratios were analyzed (paired t-test; p <= 0.05; Bonferroni-adjusted alpha = 0.008). Results PWG showed significantly greater muscle activity for all exercises and all muscles except PE compared to P and PW. Condition PG elicited muscular activity comparable to PWG (p > 0.008), with significantly lower activation of L.TA and SA in In/Ex rotation.
The SR ratio was significantly higher in PWG compared to P and PW. No significant differences were found for the CR ratio in any exercise or muscle. Conclusion Higher weight generated greater muscle activation, whereas a UWM raised the neuromuscular activity by increasing the stabilization demands. Especially in In/Ex rotation, a UWM increased the RMS%MVIC and the SR ratio. This might improve training effects in shoulder prevention and rehabilitation programs.
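The EMG normalization named above (raw RMS expressed as %MVIC) can be sketched as follows; the function names and the 100-ms window length are illustrative choices, not taken from the study:

```python
import numpy as np

def rms(emg, fs, window_s=0.1):
    """Moving root-mean-square envelope of a raw EMG signal.

    emg: 1-D array of raw EMG samples; fs: sampling rate (Hz);
    window_s: RMS window length in seconds (100 ms is a common choice).
    """
    n = max(1, int(window_s * fs))
    squared = np.convolve(emg ** 2, np.ones(n) / n, mode="same")
    return np.sqrt(squared)

def rms_percent_mvic(task_emg, mvic_emg, fs):
    """Normalize task RMS amplitude to the peak RMS of the MVIC trial (%MVIC)."""
    mvic_peak = rms(mvic_emg, fs).max()
    return 100.0 * rms(task_emg, fs) / mvic_peak
```

Normalizing to MVIC in this way allows muscle activity to be compared across muscles, conditions, and participants.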
Background
Previous literature mainly introduced cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information processing capacities. In this study, we enlarge existing literature and investigate whether leg muscular capacity plays an additional role in children’s dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7–9 years) walk under single-task conditions (ST) and while concurrently performing an arithmetic subtraction task (dual task, DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decreased their gait velocity (f = 0.73), stride length (f = 0.62) and cadence (f = 0.68) and increased the variability thereof (f = 0.20-0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and the variability thereof during DT (R2 = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass could not be observed for ST (R2 = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacities and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by inducing enhanced gait stability.
The manual muscle test (MMT) is a flexible diagnostic tool, which is used in many disciplines and applied in several ways. The main problem is the subjectivity of the test. The MMT in the version of a “break test” depends on the tester’s force rise and the patient’s ability to resist the applied force. As a first step, the investigation of the reproducibility of the testers’ force profiles is required for valid application. The study examined the force profiles of n = 29 testers (n = 9 experienced (Exp), n = 8 little experienced (LitExp), n = 12 beginners (Beg)). The testers performed 10 MMTs according to the test of the hip flexors, but against a fixed leg to exclude the patient’s reaction. A handheld device recorded the temporal course of the applied force. The results show significant differences between Exp and Beg concerning the starting force (padj = 0.029), the ratio of starting to maximum force (padj = 0.005) and the normalized mean Euclidean distances between the 10 trials (padj = 0.015). The slope is significantly higher in Exp vs. LitExp (p = 0.006) and Beg (p = 0.005). The results also indicate that experienced testers show inter-tester differences and partly even a low intra-tester reproducibility. This highlights the necessity of an objective MMT assessment. Furthermore, an agreement on a standardized force profile is required. A suggestion for this is given.
Background: Recent studies have demonstrated a superior diagnostic accuracy of cardiovascular magnetic resonance (CMR) for the detection of coronary artery disease (CAD). We aimed to determine the comparative cost-effectiveness of CMR versus single-photon emission computed tomography (SPECT).
Methods: Based on Bayes' theorem, a mathematical model was developed to compare the cost-effectiveness and utility of CMR with SPECT in patients with suspected CAD. Invasive coronary angiography served as the standard of reference. Effectiveness was defined as the accurate detection of CAD, and utility as the number of quality-adjusted life-years (QALYs) gained. Model input parameters were derived from the literature, and the cost analysis was conducted from a German health care payer's perspective. Extensive sensitivity analyses were performed.
Results: Reimbursement fees represented only a minor fraction of the total costs incurred by a diagnostic strategy. Increases in the prevalence of CAD were generally associated with improved cost-effectiveness and decreased costs per utility unit (ΔQALY). By comparison, CMR was consistently more cost-effective than SPECT, and showed lower costs per QALY gained. Given a CAD prevalence of 0.50, CMR was associated with total costs of €6,120 for one patient correctly diagnosed as having CAD and with €2,246 per ΔQALY gained, versus €7,065 and €2,931 for SPECT, respectively. Above a threshold value of CAD prevalence of 0.60, proceeding directly to invasive angiography was the most cost-effective approach.
Conclusions: In patients with low to intermediate CAD probabilities, CMR is more cost-effective than SPECT. Moreover, lower costs per utility unit indicate a superior clinical utility of CMR.
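The study's full model is not reproduced here, but the Bayesian core of such an analysis — post-test probabilities from prevalence, sensitivity, and specificity, plus cost per correctly detected case — can be sketched as below. All inputs are generic placeholders, not the study's parameters:

```python
def post_test_probs(prevalence, sensitivity, specificity):
    """Bayes' theorem: disease probability after a positive/negative test.

    Returns (PPV, NPV): P(disease | positive test) and
    P(no disease | negative test).
    """
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos
    npv = specificity * (1 - prevalence) / (1 - p_pos)
    return ppv, npv

def cost_per_correct_diagnosis(prevalence, sensitivity, cost):
    """Cost per accurately detected case for a given test strategy.

    cost: total cost of applying the strategy to one patient.
    """
    true_positive_rate = sensitivity * prevalence   # fraction correctly detected
    return cost / true_positive_rate
```

As in the study, raising the prevalence increases the fraction of correctly detected cases and therefore lowers the cost per accurate diagnosis.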
Background: Doping attitude is a key variable in predicting athletes' intention to use forbidden performance-enhancing drugs. Indirect reaction-time-based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires do. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study describes the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It is intended to provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (r_tt = .66) for the picture-based doping-BIAT.
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of administration of combinations of substances, complementary administration of substances to treat side effects, and use of stimulants to promote loss of body fat was common.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
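The D-scores referenced above are conventionally derived from response latencies with Greenwald's scoring algorithm. A deliberately simplified sketch — latency difference scaled by the pooled standard deviation, omitting error penalties and block-wise computation — looks like this:

```python
import statistics

def d_score(compatible_ms, incompatible_ms):
    """Simplified IAT/BIAT D-score.

    compatible_ms / incompatible_ms: lists of response latencies (ms)
    from the two critical block types. With this sign convention,
    negative scores indicate faster responding in the compatible pairing.
    """
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(compatible_ms)
            - statistics.mean(incompatible_ms)) / pooled_sd
```

Because the latency difference is scaled by the individual's own response variability, D-scores are comparable across participants with different baseline speeds.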
There is ample evidence that youth resistance training (RT) is safe, enjoyable, and effective for different markers of performance (e.g., muscle strength, power, linear sprint speed) and health (e.g., injury prevention). Accordingly, the first aim of this narrative review is to present and discuss the relevance of muscle strength for youth physical development. The second purpose is to report evidence on the effectiveness of RT on muscular fitness (muscle strength, power, muscle endurance), movement skill performance, and injury prevention in youth. There is evidence that RT is effective in enhancing measures of muscular fitness in children and adolescents, irrespective of sex. Additionally, numerous studies indicate that RT has positive effects on fundamental movement skills (e.g., jumping, running, throwing) in youth regardless of age, maturity, training status, and sex. Further, irrespective of age, sex, and training status, regular exposure to RT (e.g., plyometric training) decreases the risk of sustaining injuries in youth. This implies that RT should be a meaningful element of youths’ exercise programming. This has been acknowledged by global (e.g., World Health Organization) and national (e.g., National Strength and Conditioning Association) health- and performance-related organizations, which is why they recommend performing RT as an integral part of weekly exercise programs to promote muscular strength and fundamental movement skills, and to prevent injuries in youth.
The aim of this study is to monitor short-term seasonal development of young Olympic weightlifters’ anthropometry, body composition, physical fitness, and sport-specific performance. Fifteen male weightlifters aged 13.2 ± 1.3 years participated in this study. Tests for the assessment of anthropometry (e.g., body-height, body-mass), body-composition (e.g., lean-body-mass, relative fat-mass), muscle strength (grip-strength), jump performance (drop-jump (DJ) height, countermovement-jump (CMJ) height, DJ contact time, DJ reactive-strength-index (RSI)), dynamic balance (Y-balance-test), and sport-specific performance (i.e., snatch and clean-and-jerk) were conducted at different time-points (i.e., T1 (baseline), T2 (9 weeks), T3 (20 weeks)). Strength tests (i.e., grip strength, clean-and-jerk and snatch) and training volume were normalized to body mass. Results showed small-to-large increases in body-height, body-mass, lean-body-mass, and lower-limbs lean-mass from T1-to-T2 and T2-to-T3 (∆0.7–6.7%; 0.1 ≤ d ≤ 1.2). For fat-mass, a significant small-sized decrease was found from T1-to-T2 (∆13.1%; d = 0.4) and a significant increase from T2-to-T3 (∆9.1%; d = 0.3). A significant main effect of time was observed for DJ contact time (d = 1.3) with a trend toward a significant decrease from T1-to-T2 (∆–15.3%; d = 0.66; p = 0.06). For RSI, significant small increases from T1-to-T2 (∆9.9%, d = 0.5) were noted. Additionally, a significant main effect of time was found for snatch (d = 2.7) and clean-and-jerk (d = 3.1) with significant small-to-moderate increases for both tests from T1-to-T2 and T2-to-T3 (∆4.6–11.3%, d = 0.33 to 0.64). The other tests did not change significantly over time (0.1 ≤ d ≤ 0.8). Results showed significantly higher training volume for sport-specific training during the second period compared with the first period (d = 2.2). 
Five months of Olympic weightlifting contributed to significant changes in anthropometry, body-composition, and sport-specific performance. However, hardly any significant gains were observed for measures of physical fitness. Coaches are advised to design training programs that target a variety of fitness components to lay an appropriate foundation for later performance as an elite athlete.
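The effect sizes reported above (Cohen's d) follow the standard definition. A minimal sketch for the independent-samples case with pooled standard deviation is shown below; note that the study's within-subject d values would use a repeated-measures variant, so this is an illustration of the metric, not the study's exact computation:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1 = statistics.variance(group1)   # sample variance (n - 1 denominator)
    s2 = statistics.variance(group2)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd
```

Common rules of thumb interpret d around 0.2, 0.5, and 0.8 as small, moderate, and large effects, respectively, which is how thresholds like those above are usually read.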
This study aimed at examining physiological responses (i.e., oxygen uptake [VO2] and heart rate [HR]) to a semi-contact 3 x 3-min format, amateur boxing combat simulation in elite level male boxers. Eleven boxers aged 21.4 ± 2.1 years (body height 173.4 ± 3.7 cm, body mass 74.9 ± 8.6 kg, body fat 12.1 ± 1.9%, training experience 5.7 ± 1.3 years) volunteered to participate in this study. They performed a maximal graded aerobic test on a motor-driven treadmill to determine maximum oxygen uptake (VO2max), oxygen uptake (VO2AT) and heart rate (HRAT) at the anaerobic threshold, and maximal heart rate (HRmax). Additionally, VO2 and peak HR (HRpeak) were recorded following each boxing round. Results showed no significant differences between VO2max values derived from the treadmill running test and VO2 outcomes of the simulated boxing contest (p > 0.05, d = 0.02 to 0.39). However, HRmax and HRpeak recorded from the treadmill running test and the simulated amateur boxing contest, respectively, displayed significant differences regardless of the boxing round (p < 0.01, d = 1.60 to 3.00). In terms of VO2 outcomes during the simulated contest, no significant between-round differences were observed (p = 0.19, d = 0.17 to 0.73). Irrespective of the boxing round, the recorded VO2 was >90% of the VO2max. Likewise, HRpeak observed across the three boxing rounds was ≥90% of the HRmax. In summary, the simulated 3 x 3-min amateur boxing contest is highly demanding from a physiological standpoint. Thus, coaches are advised to systematically monitor internal training load, for instance through ratings of perceived exertion, to optimize training-related adaptations and to prevent boxers from overreaching and/or overtraining.
Eccentric exercise is discussed as a treatment option for clinical populations, but specific responses in terms of muscle damage and systemic inflammation after repeated loading of large muscle groups have not been conclusively characterized. Therefore, this study tested the feasibility of an isokinetic protocol for repeated maximum eccentric loading of the trunk muscles. Nine asymptomatic participants (5 f/4 m; 34±6 yrs; 175±13 cm; 76±17 kg) performed three isokinetic 2-minute all-out trunk strength tests (1x concentric (CON), 2x eccentric (ECC1, ECC2), 2 weeks apart; flexion/extension, 60°/s, ROM 55°). Outcomes were peak torque, torque decline, total work, and indicators of muscle damage and inflammation (over 168 h). Statistics were done using the Friedman test (Dunn’s post-test). For ECC1 and ECC2, peak torque and total work were increased and torque decline reduced compared to CON. Repeated ECC bouts yielded unaltered torque and work outcomes. Muscle damage markers were highest after ECC1 (soreness 48 h, creatine kinase 72 h; p<0.05). Their overall responses (area under the curve) were abolished post-ECC2 compared to post-ECC1 (p<0.05). Interleukin-6 was higher post-ECC1 than CON, and attenuated post-ECC2 (p>0.05). Interleukin-10 and tumor necrosis factor-α were not detectable. All markers showed high inter-individual variability. The protocol was feasible to induce muscle damage indicators after exercising a large muscle group, but the pilot results indicated only weak systemic inflammatory responses in asymptomatic adults.
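The overall marker responses described above as "area under the curve" are typically obtained with the trapezoidal rule over the sampling time points (here spanning 0–168 h); a minimal sketch with illustrative inputs:

```python
def auc_trapezoid(times_h, values):
    """Area under a marker's time course (trapezoidal rule).

    times_h: sampling time points (hours); values: marker levels at
    those time points (e.g., creatine kinase or soreness ratings).
    """
    return sum(
        (t2 - t1) * (v1 + v2) / 2.0
        for (t1, v1), (t2, v2) in zip(
            zip(times_h, values), zip(times_h[1:], values[1:])
        )
    )
```

Summarizing each marker's 168-h time course as a single AUC value is what allows the overall response after the first and second eccentric bout to be compared directly.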
Background
In health research, indicators of socioeconomic status (SES) are often used interchangeably and often lack theoretical foundation. This makes it difficult to compare results from different studies and to explore the relationship between SES and health outcomes. To aid researchers in choosing appropriate indicators of SES, this article proposes and tests a theory-based selection of SES indicators using chronic back pain as a health outcome.
Methods
Predictions of the strength of the relationships were made using Brunner and Marmot’s model of ‘social determinants of health’. Subsequently, a longitudinal study was conducted with 66 patients receiving in-patient treatment for chronic back pain. Sociodemographic variables, four SES indicators (education, job position, income, multidimensional index) and back pain intensity and disability were obtained at baseline. Both pain dimensions were assessed again 6 months later. Using linear regression, the predictive strength of each SES indicator for pain intensity and disability was estimated and compared to the theory-based prediction.
Results
Chronic back pain intensity was best predicted by the multidimensional index (beta = 0.31, p < 0.05), followed by job position (beta = 0.29, p < 0.05) and education (beta = −0.29, p < 0.05), whereas income exerted no significant influence. Back pain disability was predicted most strongly by education (beta = −0.30, p < 0.05) and job position (beta = 0.29, p < 0.05). Here, the multidimensional index and income had no significant influence.
Conclusions
The choice of SES indicators influences predictive power on both back pain dimensions, suggesting that SES predictors cannot be used interchangeably. Therefore, researchers should carefully consider, prior to each study, which SES indicator to use. The introduced framework can be valuable in supporting this decision because it allows for a stable prediction of the influence of SES indicators, and of their hierarchy, on a specific health outcome.
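The standardized regression coefficients (beta) compared above can, for a single predictor, be computed by regressing z-scored variables. The sketch below shows that textbook relation only, not the study's multivariable models:

```python
import numpy as np

def standardized_beta(x, y):
    """Standardized regression coefficient of a single predictor x on y.

    Both variables are z-scored, so the slope of the simple regression
    equals Pearson's r and is directly comparable across predictors
    measured on different scales (e.g., education vs. income).
    """
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(zx, zy, 1)[0])
```

This scale-invariance is what makes beta weights suitable for ranking the predictive strength of heterogeneous SES indicators.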
In animals and humans, behavior can be influenced by irrelevant stimuli, a phenomenon called Pavlovian-to-instrumental transfer (PIT). In subjects with substance use disorder, PIT is even enhanced, with functional activation in the nucleus accumbens (NAcc) and amygdala. Having observed enhanced behavioral and neural PIT effects in alcohol-dependent subjects, we here aimed to determine whether behavioral PIT is enhanced in young men with high-risk compared to low-risk drinking, and subsequently related functional activation in an a-priori region of interest encompassing the NAcc and amygdala to polygenic risk for alcohol consumption. A representative sample of 18-year-old men (n = 1937) was contacted: 445 were screened and 209 assessed, resulting in 191 valid behavioral, 139 imaging and 157 genetic datasets. None of the subjects fulfilled criteria for alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders-IV-Text Revision (DSM-IV-TR). We measured how instrumental responding for rewards was influenced by background Pavlovian conditioned stimuli predicting action-independent rewards and losses. Behavioral PIT was enhanced in high- compared to low-risk drinkers (b = 0.09, SE = 0.03, z = 2.7, p < 0.009). Across all subjects, we observed PIT-related neural blood oxygen level-dependent (BOLD) signal in the right amygdala (t = 3.25, p(SVC) = 0.04, x = 26, y = -6, z = -12), but not in the NAcc. The strength of the behavioral PIT effect was positively correlated with polygenic risk for alcohol consumption (r(s) = 0.17, p = 0.032). We conclude that behavioral PIT and polygenic risk for alcohol consumption might be a biomarker for a subclinical phenotype of risky alcohol consumption, even if no drug-related stimulus is present. The association between behavioral PIT effects and the amygdala might point to habitual processes related to our PIT task.
In non-dependent young social drinkers, the amygdala rather than the NAcc is activated during PIT; possible different involvement in association with disease trajectory should be investigated in future studies.
Purpose
To test whether the negative relationship between perceived stress and quality of life (Hypothesis 1) can be buffered by perceived social support in patients with dementia as well as in caregivers individually (Hypothesis 2: actor effects) and across partners (Hypothesis 3: partner effects and actor-partner effects).
Method
A total of 108 couples (N = 216 individuals) comprised of one individual with early-stage dementia and one caregiving partner were assessed at baseline and one month apart. Moderation effects were investigated by applying linear mixed models and actor-partner interdependence models.
Results
Although the stress-quality of life association was more pronounced in caregivers (beta = -.63, p < .001) than in patients (beta = -.31, p < .001), this association was equally moderated by social support in patients (beta = .14, p < .05) and in caregivers (beta = .13, p < .05). Partner-buffering and actor-partner-buffering effects from one partner to his or her counterpart were not present.
Conclusion
The stress-buffering effect has been replicated in individuals with dementia and caregivers but not across partners. Interventions to improve quality of life through perceived social support should not only focus on caregivers, but should incorporate both partners.
Background: Cross-sectional studies detected associations between physical fitness, living area, and sports participation in children. Yet, their scientific value is limited because the identification of cause-and-effect relationships is not possible. In a longitudinal approach, we examined the effects of living area and sports club participation on physical fitness development in primary school children from classes 3 to 6.
Methods: One-hundred and seventy-two children (age: 9-12 years; sex: 69 girls, 103 boys) were tested for their physical fitness (i.e., endurance [9-min run], speed [50-m sprint], lower- [triple hop] and upper-extremity muscle strength [1-kg ball push], flexibility [stand-and-reach], and coordination [star coordination run]). Living area (i.e., urban or rural) and sports club participation were assessed using parent questionnaire.
Results: Over the 4-year study period, urban compared to rural children showed significantly better performance development for upper- (p = 0.009, ES = 0.16) and lower-extremity strength (p < 0.001, ES = 0.22). Further, significantly better performance development was found for endurance (p = 0.08, ES = 0.19) and lower-extremity strength (p = 0.024, ES = 0.23) in children continuously participating in sports clubs compared to their non-participating peers.
Conclusions: Our findings suggest that sports club programs with appealing arrangements represent a good means to promote physical fitness in children living in rural areas.
Real options are widely applied in strategic and operational decision-making, allowing for managerial flexibility in uncertain contexts. Increased scholarly interest has led to an extensive but fragmented research landscape. We aim to measure and systematize the research field quantitatively. To achieve this goal, we conduct bibliometric performance analyses and bibliographic coupling analyses with an in-depth content review. The results of the performance analyses show an increasing interest in real options since the beginning of the 2000s and identify the most influential journals and authors. The science mappings reveal six and seven research clusters over the last two decades. Based on an in-depth analysis of their themes, we develop a research framework comprising antecedents, application areas, internal and external contingencies, and uncertainty resolution through real option valuation or reasoning. We identify several gaps in that framework, which we propose to tackle in future research.
Effects of resistance training in youth athletes on muscular fitness and athletic performance
(2016)
During the stages of long-term athlete development (LTAD), resistance training (RT) is an important means for (i) stimulating athletic development, (ii) tolerating the demands of long-term training and competition, and (iii) inducing long-term health promoting effects that are robust over time and track into adulthood. However, there is a gap in the literature with regard to optimal RT methods during LTAD and how RT is linked to biological age. Thus, the aims of this scoping review were (i) to describe and discuss the effects of RT on muscular fitness and athletic performance in youth athletes, (ii) to introduce a conceptual model on how to appropriately implement different types of RT within LTAD stages, and (iii) to identify research gaps from the existing literature by deducing implications for future research. In general, RT produced small-to-moderate effects on muscular fitness and athletic performance in youth athletes, with muscular strength showing the largest improvement. Free weight, complex, and plyometric training appear to be well-suited to improve muscular fitness and athletic performance. In addition, balance training appears to be an important preparatory (facilitating) training program during all stages of LTAD but particularly during the early stages. As youth athletes become more mature, the specificity and intensity of RT methods increase. This scoping review identified research gaps that are summarized in the following and that should be addressed in future studies: (i) to elucidate the influence of gender and biological age on the adaptive potential following RT in youth athletes (especially in females), (ii) to describe RT protocols in more detail (i.e., always report stress- and strain-based parameters), and (iii) to examine neuromuscular and tendomuscular adaptations following RT in youth athletes.
Background
It has been demonstrated that core strength training is an effective means to enhance trunk muscle strength (TMS) and proxies of physical fitness in youth. Of note, cross-sectional studies revealed that the inclusion of unstable elements in core strengthening exercises produced increases in trunk muscle activity, thus providing potential extra training stimuli for performance enhancement. Consequently, utilizing unstable surfaces during core strength training may produce even larger performance gains. However, the effects of core strength training using unstable surfaces are unresolved in youth. This randomized controlled study specifically investigated the effects of core strength training performed on stable surfaces (CSTS) compared to unstable surfaces (CSTU) on physical fitness in school-aged children.
Methods
Twenty-seven healthy subjects (14 girls, 13 boys; mean age: 14 ± 1 years, age range: 13–15 years) were randomly assigned to a CSTS (n = 13) or a CSTU (n = 14) group. Both training programs lasted 6 weeks (2 sessions/week) and included frontal, dorsal, and lateral core exercises. During CSTU, these exercises were conducted on unstable surfaces (e.g., TOGU® DYNAIR CUSHIONS, THERA-BAND® STABILITY TRAINER).
Results
Significant main effects of Time (pre vs. post) were observed for the TMS tests (8-22%, f = 0.47-0.76), the jumping sideways test (4-5%, f = 1.07), and the Y balance test (2-3%, f = 0.46-0.49). Trends towards significance were found for the standing long jump test (1-3%, f = 0.39) and the stand-and-reach test (0-2%, f = 0.39). We could not detect any significant main effects of Group. Significant Time × Group interactions were detected for the stand-and-reach test in favour of the CSTU group (2%, f = 0.54).
Conclusions
Core strength training resulted in significant increases in proxies of physical fitness in adolescents. However, CSTU as compared to CSTS had only limited additional effects (i.e., stand-and-reach test). Consequently, if the goal of training is to enhance physical fitness, then CSTU has limited advantages over CSTS.
Purpose: The aim of this study was to compare the effects of moderate intensity, low volume (MILV) vs. low intensity, high volume (LIHV) strength training on sport-specific performance, measures of muscular fitness, and skeletal muscle mass in young kayakers and canoeists.
Methods: Semi-elite young kayakers and canoeists (N = 40, 13 ± 0.8 years, 11 girls) performed either MILV (70–80% 1-RM, 6–12 repetitions per set) or LIHV (30–40% 1-RM, 60–120 repetitions per set) strength training for one season. Linear mixed-effects models were used to compare effects of training condition on changes over time in 250 and 2,000 m time trials, handgrip strength, underhand shot throw, average bench pull power over 2 min, and skeletal muscle mass. Both between- and within-subject designs were used for analysis. An alpha of 0.05 was used to determine statistical significance.
Results: Between- and within-subject analyses showed that monthly changes were greater in LIHV vs. MILV for the 2,000 m time trial (between: 9.16 s, SE = 2.70, p < 0.01; within: 13.90 s, SE = 5.02, p = 0.01) and bench pull average power (between: 0.021 W·kg⁻¹, SE = 0.008, p = 0.02; within: 0.010 W·kg⁻¹, SE = 0.009, p > 0.05). Training conditions did not affect other outcomes.
Conclusion: Young sprint kayakers and canoeists benefit from LIHV more than MILV strength training in terms of 2,000 m performance and muscular endurance (i.e., 2 min bench pull power).
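The between-subject comparison of monthly changes described above can be illustrated with a simplified sketch: a least-squares slope (monthly change in 2,000 m time) per athlete, averaged per training condition. Note that the study itself used linear mixed-effects models; this pure-Python illustration, with invented times, only conveys the idea of contrasting monthly change between LIHV and MILV.

```python
# Simplified illustration of comparing monthly performance change between
# training conditions. NOTE: the study used linear mixed-effects models;
# this sketch fits a per-athlete least-squares slope instead, and all
# 2,000 m times below are hypothetical.
from statistics import mean

def monthly_slope(times):
    """Least-squares slope of 2,000 m time (s) vs. test month for one athlete."""
    months = range(len(times))
    mx, my = mean(months), mean(times)
    sxx = sum((m - mx) ** 2 for m in months)
    sxy = sum((m - mx) * (t - my) for m, t in zip(months, times))
    return sxy / sxx

# Hypothetical 2,000 m times (s) at four monthly tests per athlete
lihv = [[620, 610, 598, 590], [640, 628, 618, 607]]  # low intensity, high volume
milv = [[618, 614, 610, 607], [635, 632, 628, 626]]  # moderate intensity, low volume

lihv_change = mean(monthly_slope(a) for a in lihv)  # mean monthly change (s/month)
milv_change = mean(monthly_slope(a) for a in milv)
# A more negative slope means times dropped faster, i.e., greater improvement.
```

A mixed model would additionally account for the nesting of repeated tests within athletes and for missing test dates, which this per-athlete-slope shortcut ignores.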
Combining training of muscle strength and cardiorespiratory fitness within a training cycle could increase athletic performance more than single-mode training. However, the physiological effects produced by each training modality could also interfere with each other, improving athletic performance less than single-mode training. Because anthropometric, physiological, and biomechanical differences between young and adult athletes can affect the responses to exercise training, young athletes might respond differently to concurrent training (CT) compared with adults. Thus, the aim of the present systematic review with meta-analysis was to determine the effects of concurrent strength and endurance training on selected physical fitness components and athletic performance in youth. A systematic literature search of PubMed and Web of Science identified 886 records. The studies included in the analyses examined children (girls age 6–11 years, boys age 6–13 years) or adolescents (girls age 12–18 years, boys age 14–18 years), compared CT with single-mode endurance (ET) or strength training (ST), and reported at least one strength/power-related (e.g., jump height), endurance-related (e.g., peak V̇O2, exercise economy), or performance-related (e.g., time trial) outcome. We calculated weighted standardized mean differences (SMDs). CT compared to ET produced small effects in favor of CT on athletic performance (n = 11 studies, SMD = 0.41, p = 0.04) and trivial effects on cardiorespiratory endurance (n = 4 studies, SMD = 0.04, p = 0.86) and exercise economy (n = 5 studies, SMD = 0.16, p = 0.49) in young athletes. A sub-analysis of chronological age revealed a trend toward larger effects of CT vs. ET on athletic performance in adolescents (SMD = 0.52) compared with children (SMD = 0.17). CT compared with ST had small effects in favor of CT on muscle power (n = 4 studies, SMD = 0.23, p = 0.04).
In conclusion, CT is more effective than single-mode ET or ST in improving selected measures of physical fitness and athletic performance in youth. Specifically, CT compared with ET improved athletic performance in children and particularly adolescents. Finally, CT was more effective than ST in improving muscle power in youth.
Background
Earlier studies have shown that balance training (BT) has the potential to induce performance enhancements in selected components of physical fitness (i.e., balance, muscle strength, power, speed). While there is ample evidence on the long-term effects of BT on components of physical fitness in youth, less is known on the short-term or acute effects of single BT sessions on selected measures of physical fitness.
Objective
To examine the acute effects of different balance exercise types on balance, change-of-direction (CoD) speed, and jump performance in youth female volleyball players.
Methods
Eleven female players aged 14 years participated in this study. Three types of balance exercises (i.e., anterior, posterolateral, rotational type) were conducted in randomized order. For each exercise, 3 sets including 5 repetitions were performed. Before and after the performance of the balance exercises, participants were tested for their static balance (center of pressure surface area [CoP SA] and velocity [CoP V]) on foam and firm surfaces, CoD speed (T-Half test), and vertical jump height (countermovement jump [CMJ] height). A 3 (condition: anterior, mediolateral, rotational balance exercise type) × 2 (time: pre, post) analysis of variance was computed with repeated measures on time.
Results
Findings showed no significant condition × time interactions for all outcome measures (p > 0.05). However, there were small main effects of time for CoP SA on firm and foam surfaces (both d = 0.38; all p < 0.05) with no effect for CoP V on both surface conditions (p > 0.05). For CoD speed, findings showed a large main effect of time (d = 0.91; p < 0.001). However, for CMJ height, no main effect of time was observed (p > 0.05).
Conclusions
Overall, our results indicated small-to-large changes in balance and CoD speed performances but not in CMJ height in youth female volleyball players, regardless of the balance exercise type. Accordingly, it is recommended to regularly integrate balance exercises before the performance of sport-specific training to optimize performance development in youth female volleyball players.
Extracellular vesicles: potential mediators of psychosocial stress contribution to osteoporosis?
(2021)
Osteoporosis is characterized by low bone mass and damage to the bone tissue’s microarchitecture, leading to increased fracture risk. Several studies have provided evidence for associations between psychosocial stress and osteoporosis through various pathways, including the hypothalamic-pituitary-adrenocortical axis, the sympathetic nervous system, and other endocrine factors. As psychosocial stress provokes oxidative cellular stress with consequences for mitochondrial function and cell signaling (e.g., gene expression, inflammation), it is of interest whether extracellular vesicles (EVs) may be a relevant biomarker in this context or act by transporting substances. EVs are intercellular communicators, transfer substances encapsulated in them, modify the phenotype and function of target cells, mediate cell-cell communication, and, therefore, have critical applications in disease progression and clinical diagnosis and therapy. This review summarizes the characteristics of EVs, their role in stress and osteoporosis, and their benefit as biological markers. We demonstrate that EVs are potential mediators of psychosocial stress and osteoporosis and may be beneficial in innovative research settings.
One of the major risk factors for global death and disability is alcohol, tobacco, and illicit drug use. While there is increasing knowledge with respect to individual factors promoting the initiation and maintenance of substance use disorders (SUDs), disease trajectories involved in losing and regaining control over drug intake (ReCoDe) are still not well described. Our newly formed German Collaborative Research Centre (CRC) on ReCoDe has an interdisciplinary approach funded by the German Research Foundation (DFG) with a 12-year perspective. The main goals of our research consortium are (i) to identify triggers and modifying factors that longitudinally modulate the trajectories of losing and regaining control over drug consumption in real life, (ii) to study underlying behavioral, cognitive, and neurobiological mechanisms, and (iii) to implement mechanism-based interventions. These goals will be achieved by (i) using mobile health (m-health) tools to longitudinally monitor the effects of triggers (drug cues, stressors, and priming doses) and modifying factors (e.g., age, gender, physical activity, and cognitive control) on drug consumption patterns in real-life conditions and in animal models of addiction; (ii) identifying and computationally modeling key mechanisms mediating the effects of such triggers and modifying factors on goal-directed, habitual, and compulsive aspects of behavior in human studies and animal models; and (iii) developing and testing interventions that specifically target the underlying mechanisms for regaining control over drug intake.
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals' psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (mean age = 47.5 years, SD = 11.1; 71.8% females). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the particular importance of need frustration in the prediction of psychopathology.
Background and Aims: Wearable inertial sensors may offer additional kinematic parameters of the shoulder compared to traditional instruments such as goniometers when elaborate and time-consuming data processing procedures are undertaken. However, in clinical practice, simple real-time motion analysis is required to improve clinical reasoning. Therefore, the aim was to assess the criterion validity between a portable "off-the-shelf" sensor-software system (IMU) and optical motion capture (Mocap) for measuring kinematic parameters during active shoulder movements. Methods: 24 healthy participants (9 female, 15 male, age 29 ± 4 years, height 177 ± 11 cm, weight 73 ± 14 kg) were included. Range of motion (ROM), total range of motion (TROM), and peak and mean angular velocity of both systems were assessed during simple (abduction/adduction, horizontal flexion/horizontal extension, vertical flexion/extension, and external/internal rotation) and complex shoulder movements. Criterion validity was determined using intraclass correlation coefficients (ICC), root mean square error (RMSE), and Bland-Altman analysis (bias; upper and lower limits of agreement). Results: ROM and TROM analyses revealed inconsistent validity during simple (ICC: 0.040-0.733, RMSE: 9.7°-20.3°, bias: 1.2°-50.7°) and insufficient agreement during complex shoulder movements (ICC: 0.104-0.453, RMSE: 10.1°-23.3°, bias: 1.0°-55.9°). Peak angular velocity (ICC: 0.202-0.865, RMSE: 14.6°/s-26.7°/s, bias: 10.2°/s-29.9°/s) and mean angular velocity (ICC: 0.019-0.786, RMSE: 6.1°/s-34.2°/s, bias: 1.6°/s-27.8°/s) were also inconsistent. Conclusions: The "off-the-shelf" sensor-software system showed overall insufficient agreement with the gold standard. Further development of commercial IMU-software solutions may increase measurement accuracy and permit their integration into everyday clinical practice.
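The Bland-Altman statistics used in such validity studies (bias with upper and lower limits of agreement) reduce to a few lines of code. The paired ROM values below are invented for illustration and are not taken from the study.

```python
# Bland-Altman agreement between two measurement systems: bias (mean
# difference) and 95% limits of agreement. All ROM values are invented.
from statistics import mean, stdev

def bland_altman(system_a, system_b):
    """Return (bias, lower LoA, upper LoA) for paired measurements."""
    diffs = [a - b for a, b in zip(system_a, system_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical shoulder abduction ROM (degrees): IMU vs. Mocap
imu = [172.0, 165.5, 158.0, 181.0, 176.5]
mocap = [170.0, 162.0, 160.5, 178.0, 173.5]
bias, lo, hi = bland_altman(imu, mocap)  # positive bias: the IMU reads high
```

The limits of agreement, not the bias alone, are what determine whether two systems can be used interchangeably: a near-zero bias with wide limits still means poor agreement on individual measurements.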
Rehabilitation after autologous chondrocyte implantation for isolated cartilage defects of the knee
(2017)
Autologous chondrocyte implantation for treatment of isolated cartilage defects of the knee has become well established. Although various publications report technical modifications, clinical results, and cell-related issues, little is known about appropriate and optimal rehabilitation after autologous chondrocyte implantation. This article reviews the literature on rehabilitation after autologous chondrocyte implantation and presents a rehabilitation protocol that has been developed considering the best available evidence and has been successfully used for several years in a large number of patients who underwent autologous chondrocyte implantation for cartilage defects of the knee.
Background: Habitual walking speed predicts many clinical conditions later in life, but it declines with age. However, which particular exercise intervention can minimize the age-related gait speed loss is unclear.
Purpose: Our objective was to determine the effects of strength, power, coordination, and multimodal exercise training on healthy old adults' habitual and fast gait speed.
Methods: We performed a computerized systematic literature search in PubMed and Web of Knowledge from January 1984 up to December 2014. Search terms included 'resistance training', 'power training', 'coordination training', 'multimodal training', and 'gait speed' (outcome term). Inclusion criteria were articles available in full text, publication period over the past 30 years, human species, journal articles, clinical trials, randomized controlled trials, English as publication language, and subject age ≥65 years. The methodological quality of all eligible intervention studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. We computed weighted average standardized mean differences of the intervention-induced adaptations in gait speed using a random-effects model and tested for overall and individual intervention effects relative to no-exercise controls.
Results: A total of 42 studies (mean PEDro score of 5.0 ± 1.2) were included in the analyses (2495 healthy old adults; age 74.2 years [64.4-82.7]; body mass 69.9 ± 4.9 kg, height 1.64 ± 0.05 m, body mass index 26.4 ± 1.9 kg/m², and gait speed 1.22 ± 0.18 m/s). The search identified only one power training study; therefore, the subsequent analyses focused only on the effects of resistance, coordination, and multimodal training on gait speed. The three types of intervention improved gait speed in the three experimental groups combined (n = 1297) by 0.10 m/s (± 0.12) or 8.4 % (± 9.7), with a large effect size (ES) of 0.84. Resistance (24 studies; n = 613; 0.11 m/s; 9.3 %; ES: 0.84), coordination (eight studies, n = 198; 0.09 m/s; 7.6 %; ES: 0.76), and multimodal training (19 studies; n = 486; 0.09 m/s; 8.4 %, ES: 0.86) increased gait speed statistically and similarly.
Conclusions: Commonly used exercise interventions can functionally and clinically increase habitual and fast gait speed and help slow the loss of gait speed or delay its onset.
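The random-effects pooling behind such weighted-SMD estimates can be sketched as inverse-variance averaging with a DerSimonian-Laird between-study variance. The per-study SMDs and variances below are invented, not taken from the review.

```python
# Random-effects (DerSimonian-Laird) pooling of standardized mean
# differences. The per-study SMDs and sampling variances are invented.
def pooled_smd(smds, variances):
    """Pooled effect from per-study SMDs and their sampling variances."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, smds)) / sw  # fixed-effect estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, smds))  # heterogeneity Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(smds) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    return sum(wi * y for wi, y in zip(w_re, smds)) / sum(w_re)

# Four hypothetical studies
effect = pooled_smd([0.9, 0.7, 1.1, 0.6], [0.04, 0.06, 0.09, 0.05])
```

When the heterogeneity Q does not exceed its degrees of freedom, the between-study variance is truncated at zero and the random-effects estimate coincides with the fixed-effect one; with heterogeneous studies, the extra tau-squared term pulls the weights toward equality.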
Dimensional psychiatry
(2014)
A dimensional approach in psychiatry aims to identify core mechanisms of mental disorders across nosological boundaries.
We compared anticipation of reward between major psychiatric disorders, and investigated whether reward anticipation is impaired in several mental disorders and whether there is a common psychopathological correlate (negative mood) of such an impairment.
We used functional magnetic resonance imaging (fMRI) and a monetary incentive delay (MID) task to study the functional correlates of reward anticipation across major psychiatric disorders in 184 subjects, with the diagnoses of alcohol dependence (n = 26), schizophrenia (n = 44), major depressive disorder (MDD, n = 24), bipolar disorder (acute manic episode, n = 13), attention deficit/hyperactivity disorder (ADHD, n = 23), and healthy controls (n = 54). Subjects' individual Beck Depression Inventory and State-Trait Anxiety Inventory scores were correlated with clusters showing significant activation during reward anticipation.
During reward anticipation, we observed significant group differences in ventral striatal (VS) activation: patients with schizophrenia, alcohol dependence, and major depression showed significantly less ventral striatal activation compared to healthy controls. Depressive symptoms correlated with dysfunction in reward anticipation regardless of diagnostic entity. There was no significant correlation between anxiety symptoms and VS functional activation.
Our findings demonstrate a neurobiological dysfunction related to reward prediction that transcended disorder categories and was related to measures of depressed mood. The findings underline the potential of a dimensional approach in psychiatry and strengthen the hypothesis that neurobiological research in psychiatric disorders can be targeted at core mechanisms that are likely to be implicated in a range of clinical entities.
Background: In terms of physiological and biomechanical characteristics, over-pronation of the feet has been associated with distinct muscle recruitment patterns and ground reaction forces during running.
Objective: The aim of this study was to evaluate the effects of running on sand vs. stable ground on ground-reaction-forces (GRFs) and electromyographic (EMG) activity of lower limb muscles in individuals with over-pronated feet (OPF) compared with healthy controls.
Methods: Thirty-three OPF individuals and 33 controls ran at preferred speed and in randomized-order over level-ground and sand. A force-plate was embedded in an 18-m runway to collect GRFs. Muscle activities were recorded using an EMG-system. Data were adjusted for surface-related differences in running speed.
Results: Running on sand resulted in lower speed compared with stable ground running (p < 0.001; d = 0.83). Results demonstrated that running on sand produced higher tibialis anterior activity (p = 0.024; d = 0.28). Also, findings indicated larger loading rates (p = 0.004; d = 0.72) and greater vastus medialis (p < 0.001; d = 0.89) and rectus femoris (p = 0.001; d = 0.61) activities in OPF individuals. Controls but not OPF showed significantly lower gluteus-medius activity (p = 0.022; d = 0.63) when running on sand.
Conclusion: Running on sand resulted in lower running speed and higher tibialis anterior activity during the loading phase. This may indicate alterations in neuromuscular demands in the distal part of the lower limbs when running on sand. In OPF individuals, higher loading rates together with greater quadriceps activity may constitute a proximal compensatory mechanism for distal surface instability.
Background: The prevalence of diabetes worldwide is predicted to increase from 2.8% in 2000 to 4.4% in 2030. Diabetic neuropathy (DN) is associated with damage to nerve glial cells, their axons, and endothelial cells leading to impaired function and mobility.
Objective: We aimed to examine the effects of an endurance-dominated exercise program on maximum oxygen consumption (VO2max), ground reaction forces, and muscle activities during walking in patients with moderate DN.
Methods: Sixty male and female individuals aged 45–65 years with DN were randomly assigned to an intervention (IG, n = 30) or a waiting control (CON, n = 30) group. The research protocol of this study was registered with the Local Clinical Trial Organization (IRCT20200201046326N1). IG conducted an endurance-dominated exercise program including exercises on a bike ergometer and gait therapy. The progressive intervention program lasted 12 weeks with three sessions per week, each 40–55 min. CON received the same treatment as IG after the post-tests. Pre- and post-training, VO2max was tested during a graded exercise test using spiroergometry. In addition, ground reaction forces and lower limbs muscle activities were recorded while walking at a constant speed of ∼1 m/s.
Results: No statistically significant baseline between-group differences were observed for any of the analyzed variables. Significant group-by-time interactions were found for VO2max (p < 0.001; d = 1.22). The post-hoc test revealed a significant increase in IG (p < 0.001; d = 1.88) but not CON. Significant group-by-time interactions were observed for peak lateral and vertical ground reaction forces during heel contact and peak vertical ground reaction force during push-off (p = 0.001–0.037; d = 0.56–1.53). For IG, post-hoc analyses showed decreases in peak lateral (p < 0.001; d = 1.33) and vertical (p = 0.004; d = 0.55) ground reaction forces during heel contact and increases in peak vertical ground reaction force during push-off (p < 0.001; d = 0.92). In terms of muscle activity, significant group-by-time interactions were found for vastus lateralis and gluteus medius during the loading phase, for vastus medialis during the mid-stance phase, and for gastrocnemius medialis during the push-off phase (p = 0.001–0.044; d = 0.54–0.81). Post-hoc tests indicated significant intervention-related increases in vastus lateralis (p = 0.001; d = 1.08) and gluteus medius (p = 0.008; d = 0.67) during the loading phase and vastus medialis activity during mid-stance (p = 0.001; d = 0.86). In addition, post-hoc tests showed decreases in gastrocnemius medialis during the push-off phase in IG only (p < 0.001; d = 1.28).
Conclusions: This study demonstrated that an endurance-dominated exercise program has the potential to improve VO2max and diabetes-related abnormal gait in patients with DN. The observed decreases in peak vertical ground reaction force during heel contact could be due to increased vastus lateralis and gluteus medius activities during the loading phase. Accordingly, we recommend implementing endurance-dominated exercise programs in patients with type 2 diabetes because they are feasible, safe, and effective in improving aerobic capacity and gait characteristics.
Several studies have investigated the effects of music on both submaximal and maximal exercise performance at a constant work-rate. However, there is a lack of research that has examined the effects of music on the pacing strategy during self-paced exercise. The aim of this study was to examine the effects of preferred music on performance and pacing during a 6 min run test (6-MSPRT) in young male adults. Twenty healthy male participants volunteered for this study. They performed two randomly assigned trials (with or without music) of a 6-MSPRT three days apart. Mean running speed, the adopted pacing strategy, total distance covered (TDC), peak and mean heart rate (HRpeak, HRmean), blood lactate (3 min after the test), and rate of perceived exertion (RPE) were measured. Listening to preferred music during the 6-MSPRT resulted in a significant TDC improvement (~10%; p = 0.016; effect size (ES) = 0.80). A significantly faster mean running speed was observed when listening to music compared with no music. The improvement of TDC in the present study is explained by a significant overall increase in speed (main effect for conditions) during the music trial. Music failed to modify pacing patterns, as suggested by the similar reversed "J-shaped" profile during the two conditions. Blood-lactate concentrations were significantly reduced by 9% (p = 0.006, ES = 1.09) after the 6-MSPRT with music compared to those in the control condition. No statistically significant differences were found between the test conditions for HRpeak, HRmean, and RPE. Therefore, listening to preferred music can have positive effects on exercise performance during the 6-MSPRT, such as greater TDC, faster running speeds, and reduced blood lactate levels, but has no effect on the pacing strategy.
Background: Data on electrocardiographic and echocardiographic pre-participation screening findings in paediatric athletes are limited.
Methods and results: 10-15-year-old athletes (n = 343) were screened using electro- and echocardiography. The electrocardiogram (ECG) was normal in 220 (64%), mildly abnormal in 108 (31%), and distinctly abnormal in 15 (4%) athletes. Echocardiographic upper reference limits (URL, 97.5th percentile) for left ventricular (LV) wall thickness in 10-11-year-old boys and girls were 9-10 mm and 8-9 mm, respectively; in 12-13-year-old boys and girls 9-10 mm; and in 14-15-year-old boys and girls 10-11 mm and 9-10 mm, respectively. Three athletes were excluded from competitive sports: one for symptomatic Wolff-Parkinson-White syndrome with a normal echocardiogram; one for negative T-waves in V1-V4 and a dilated right ventricle by echocardiography suggestive of (arrhythmogenic) right ventricular disease; and one for a normal ECG and a bicuspid aortic valve including an aneurysm of the ascending aorta detected by echocardiography. Related to echocardiographic findings, the sensitivity and specificity of the ECG to identify cardiovascular abnormalities were 38% and 64%, respectively. The ECG's positive-predictive and negative-predictive values were 13% and 88%, respectively. The numbers needed to screen and calculated costs were 172 for ECG (€7049), 172 for echocardiography (€11,530), and 114 combining ECG and echocardiography (€9323).
Conclusions: Compared to adults, paediatric athletes presented with fewer distinctly abnormal ECGs, and there was no gender difference in paediatric athletes' ECG-pattern distribution. A combination of ECG and echocardiography for pre-participation screening of paediatric athletes is superior to ECG alone but 30% more costly.
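The screening statistics reported in such studies follow directly from a 2×2 table of ECG result vs. echocardiographic finding. The counts below are invented, chosen only to roughly echo the reported percentages, and are not the study's data.

```python
# Sensitivity, specificity, and predictive values from 2x2 screening
# counts. The counts are invented, not taken from the study.
def screening_stats(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # abnormal cases flagged by the ECG
        "specificity": tn / (tn + fp),  # normal cases cleared by the ECG
        "ppv": tp / (tp + fp),          # flagged athletes who are abnormal
        "npv": tn / (tn + fn),          # cleared athletes who are normal
    }

stats = screening_stats(tp=3, fp=22, fn=5, tn=39)
```

The contrast between a low positive-predictive value and a high negative-predictive value is typical of screening for rare conditions: most positives are false alarms, while a negative result is reassuring.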
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer® database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria. An equal number of controls (n = 66,497) was chosen and matched (1:1) to HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after the index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
Degenerative disc disease is associated with increased expression of pro-inflammatory cytokines in the intervertebral disc (IVD). However, it is not completely clear how inflammation arises in the IVD and which cellular compartments are involved in this process. Recently, the endoplasmic reticulum (ER) has emerged as a possible modulator of inflammation in age-related disorders. In addition, ER stress has been associated with the microenvironment of degenerated IVDs. Therefore, the aim of this study was to analyze the effects of ER stress on inflammatory responses in degenerated human IVDs and the associated molecular mechanisms. Gene expression of the ER stress marker GRP78 and the pro-inflammatory cytokines IL-6, IL-8, IL-1β, and TNF-α was analyzed in human surgical IVD samples (n = 51, Pfirrmann grade 2-5). The expression of GRP78 positively correlated with the degeneration grade in lumbar IVDs and with IL-6, but not with IL-1β and TNF-α. Another set of human surgical IVD samples (n = 25) was used to prepare primary cell cultures. The ER stress inducer thapsigargin (Tg, 100 and 500 nM) activated gene and protein expression of IL-6 and induced phosphorylation of p38 MAPK. Both inhibition of p38 MAPK by SB203580 (10 μM) and knockdown of the ER stress effector CCAAT/enhancer-binding protein homologous protein (CHOP) reduced gene and protein expression of IL-6 in Tg-treated cells. Furthermore, the effects of an inflammatory microenvironment on ER stress were tested. TNF-α (5 and 10 ng/mL) did not activate ER stress, while IL-1β (5 and 10 ng/mL) activated gene and protein expression of GRP78 but did not influence [Ca2+]i flux and expression of CHOP, indicating that pro-inflammatory cytokines alone may not induce ER stress in vivo. This study showed that IL-6 release in the IVD can be initiated following ER stress and that ER stress mediates IL-6 release through p38 MAPK and CHOP.
Therapeutic targeting of ER stress response may reduce the consequences of the harsh microenvironment in degenerated IVD.
Serious knee pain and related disability have an annual prevalence of approximately 25% among those over the age of 55 years. As curative treatments for common knee problems are not available to date, knee pathologies typically progress and often lead to osteoarthritis (OA). While the roles that the meniscus plays in knee biomechanics are well characterized, the biological mechanisms underlying meniscus pathophysiology and its roles in knee pain and OA progression are not fully clear. Experimental treatments for knee disorders that are successful in animal models often produce unsatisfactory results in humans due to species differences or the inability to fully replicate disease progression in experimental animals. The use of animals with spontaneous knee pathologies, such as dogs, can significantly help to address this issue. As the microscopic and macroscopic anatomy of the canine and human menisci are similar, spontaneous meniscal pathologies in canine patients are thought to be highly relevant for translational medicine. However, it is not clear whether the biomolecular mechanisms of pain, degradation of the extracellular matrix, and inflammatory responses are species dependent. The aims of this review are (1) to provide an overview of the anatomy, physiology, and pathology of the human and canine meniscus, (2) to compare the known signaling pathways involved in spontaneous meniscus pathology between both species, and (3) to assess the relevance of dogs with spontaneous meniscal pathology as a translational model. Understanding these mechanisms in the human and canine meniscus can help to advance diagnostic and therapeutic strategies for painful knee disorders and improve clinical decision making.
Muscle quality, defined as the ratio of muscle strength to muscle mass, disregards underlying factors which influence muscle strength. The aim of this review was to investigate the relationship of phase angle (PhA), echo intensity (EI), muscular adipose tissue (MAT), muscle fiber type, fascicle pennation angle (θf), fascicle length (lf), muscle oxidative capacity, insulin sensitivity (IS), neuromuscular activation, and motor unit properties to muscle strength. A PubMed search was performed in 2021. The inclusion criteria were: (i) original research, (ii) human participants, (iii) adults (≥18 years). Exclusion criteria were: (i) no full-text, (ii) non-English or -German language, (iii) pathologies. Forty-one studies were identified. Nine studies found a weak–moderate negative (range r: [−0.26]–[−0.656], p < 0.05) correlation between muscle strength and EI. Four studies found a weak–moderate positive correlation (range r: 0.177–0.696, p < 0.05) between muscle strength and PhA. Two studies found a moderate–strong negative correlation (range r: [−0.446]–[−0.87], p < 0.05) between muscle strength and MAT. Two studies found a weak–strong positive correlation (range r: 0.28–0.907, p < 0.05) between θf and muscle strength. Muscle oxidative capacity was found to be a predictor of muscle strength. This review highlights that the current definition of muscle quality should be expanded to encompass all possible factors of muscle quality.
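The associations above are reported as Pearson correlation coefficients between muscle strength and candidate quality markers. As an illustration only (the sample values below are hypothetical, not taken from the review), a minimal Pearson r computation can be sketched as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations, e.g. pennation angle vs. strength
r = pearson_r([10.0, 12.5, 14.0, 16.5], [250.0, 280.0, 310.0, 355.0])
```

The sign and magnitude of r correspond directly to the "weak–moderate negative" or "weak–strong positive" labels used in the review.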
Background: Life events (LEs) are associated with future physical and mental health. They are crucial for understanding the pathways to mental disorders as well as the interactions with biological parameters. However, deeper insight is needed into the complex interplay between the type of LE, its subjective evaluation and accompanying factors such as social support. The "Stralsund Life Event List" (SEL) was developed to facilitate this research.
Methods: The SEL is a standardized interview that assesses the time of occurrence and frequency of 81 LEs, their subjective emotional valence, the perceived social support during the LE experience and the impact of past LEs on present life. Data from 2265 subjects from the general population-based cohort study "Study of Health in Pomerania" (SHIP) were analysed. Based on the mean emotional valence ratings of the whole sample, LEs were categorized as "positive" or "negative". For verification, the SEL was related to lifetime major depressive disorder (MDD; Munich Composite International Diagnostic Interview), childhood trauma (Childhood Trauma Questionnaire), resilience (Resilience Scale) and subjective health (SF-12 Health Survey).
Results: The report of lifetime MDD was associated with more negative emotional valence ratings of negative LEs (OR = 2.96, p < 0.0001). Negative LEs (b = 0.071, p < 0.0001, beta = 0.25) and more negative emotional valence ratings of positive LEs (b = 3.74, p < 0.0001, beta = 0.11) were positively associated with childhood trauma. In contrast, more positive emotional valence ratings of positive LEs were associated with higher resilience (b = -7.05, p < 0.0001, beta = 0.13), and a lower present impact of past negative LEs was associated with better subjective health (b = 2.79, p = 0.001, beta = 0.05). The internal consistency of the generated scores varied considerably, but the mean value was acceptable (averaged Cronbach's alpha > 0.75).
Conclusions: The SEL is a valid instrument that enables the analysis of the number and frequency of LEs, their emotional valence, perceived social support and current impact on life on a global score and on an individual item level. Thus, we can recommend its use in research settings that require the assessment and analysis of the relationship between the occurrence and subjective evaluation of LEs as well as the complex balance between distressing and stabilizing life experiences.
Background:
It has previously been shown that conditioning activities consisting of repetitive hops have the
potential to induce better drop jump (DJ) performance in recreationally active individuals. In the present pilot study,
we investigated whether repetitive conditioning hops can also increase reactive jump and sprint performance in
sprint-trained elite athletes competing at an international level.
Methods:
Jump and sprint performances of 5 athletes were randomly assessed under 2 conditions. The control
condition (CON) comprised 8 DJs and 4 trials of 30-m sprints. The intervention condition (HOP) consisted of 10
maximal repetitive two-legged hops that were conducted 10 s prior to each single DJ and sprint trial. DJ
performance was analyzed using a one-dimensional ground reaction force plate. Step length (SL), contact time (CT),
and sprint time (ST) during the 30-m sprints were recorded using an opto-electronic measurement system.
Results:
Following the conditioning activity, DJ height and external DJ peak power were both significantly
increased by 11 % compared to the control condition. All other variables did not show any significant differences
between HOP and CON.
Conclusions:
In the present pilot study, we were able to demonstrate large improvements in DJ performance even
in sprint-trained elite athletes following a conditioning activity consisting of maximal two-legged repetitive hops.
This strengthens the hypothesis that plyometric conditioning exercises can induce performance enhancements in
elite athletes that are even greater than those observed in recreationally active athletes. In addition, it appears that
the transfer of these effects to other stretch-shortening cycle activities is limited, as we did not observe any
changes in sprint performance following the plyometric conditioning activity.
Background: The aim of the present study was to verify concurrent validity of the Gyko inertial sensor system for the assessment of vertical jump height. - Methods: Nineteen female sub-elite youth soccer players (mean age: 14.7 ± 0.6 years) performed three trials of countermovement (CMJ) and squat jumps (SJ), respectively. Maximal vertical jump height was simultaneously quantified with the Gyko system, a Kistler force-plate (i.e., gold standard), and another criterion device that is frequently used in the field, the Optojump system. - Results: Compared to the force-plate, the Gyko system showed a significant systematic bias for mean CMJ (−0.66 cm, p < 0.01, d = 1.41) and mean SJ (−0.91 cm, p < 0.01, d = 1.69) height. Random bias was ± 3.2 cm for CMJ and ± 4.0 cm for SJ height, and intraclass correlation coefficients (ICCs) were “excellent” (ICC = 0.87 for CMJ and 0.81 for SJ). Compared to the Optojump device, the Gyko system showed a significant systematic bias for mean CMJ (0.55 cm, p < 0.05, d = 0.94) but not for mean SJ (0.39 cm) height. Random bias was ± 3.3 cm for CMJ and ± 4.2 cm for SJ height, and ICC values were “excellent” (ICC = 0.86 for CMJ and 0.82 for SJ). - Conclusion: Consequently, apparatus-specific regression equations were provided to estimate true vertical jump height for the Kistler force-plate and the Optojump device from Gyko-derived data. Our findings indicate that the Gyko system cannot be used interchangeably with a Kistler force-plate and the Optojump device in trained individuals. It is suggested that practitioners apply the correction equations to estimate vertical jump height for the force-plate and the Optojump system from Gyko-derived data.
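The systematic and random bias reported above correspond to the mean difference between devices and the half-width of the 95% limits of agreement (Bland-Altman logic). A minimal sketch with hypothetical jump heights, not the study's actual data or analysis pipeline:

```python
import math

def agreement_stats(device, reference):
    """Bland-Altman-style agreement: systematic bias = mean difference,
    random bias = half-width of the 95% limits of agreement (1.96 * SD)."""
    diffs = [d - r for d, r in zip(device, reference)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff, 1.96 * sd_diff

# Hypothetical CMJ heights (cm) from the test device and the force plate
gyko = [30.1, 28.4, 33.0, 27.5, 31.2]
force_plate = [30.8, 29.0, 33.9, 28.1, 31.7]
bias, random_bias = agreement_stats(gyko, force_plate)
```

A negative systematic bias, as in the abstract, means the test device underestimates jump height relative to the reference.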
Background: Chronic ankle instability, which develops from ankle sprains, is one of the most common sports injuries. Beyond the ankle itself, chronic ankle instability can also lead to additional injuries. Investigating the epidemiology of chronic ankle instability is an essential step to develop an adequate injury prevention strategy. However, the epidemiology of chronic ankle instability remains unknown. Therefore, the purpose of this study was to investigate the epidemiology of chronic ankle instability through valid and reliable self-reported tools in active populations.
Methods: An electronic search was performed on PubMed and Web of Science in July 2020. The inclusion criteria for articles were peer-reviewed, published between 2006 and 2020, using one of the valid and reliable tools to evaluate ankle instability, determining chronic ankle instability based on the criteria of the International Ankle
Consortium, and including the outcome of epidemiology of chronic ankle instability. The risk of bias of the included studies was evaluated with an adapted tool for the sports injury review method.
Results: After removing duplicated studies, 593 articles were screened for eligibility. Twenty full-texts were screened and finally nine studies were included, assessing 3804 participants in total. The participants were between 15 and 32 years old and represented soldiers, students, athletes and active individuals with a history of ankle sprain. The prevalence of chronic ankle instability was 25%, ranging between 7 and 53%. The prevalence of chronic ankle instability within participants with a history of ankle sprains was 46%, ranging between 9 and 76%. Five included studies identified chronic ankle instability based on the standard criteria, and four studies applied adapted exclusion criteria to conduct the study. Five out of nine included studies showed a low risk of bias.
Conclusions: The prevalence of chronic ankle instability shows a wide range. This could be due to the different exclusion criteria, age, sports discipline, or other factors among the included studies. For future studies, standardized criteria to investigate the epidemiology of chronic ankle instability are required, and future epidemiological studies of CAI should be prospective. Factors affecting the prevalence of chronic ankle instability should be investigated and clearly described.
Background/Purpose
Muscular reflex responses of the lower extremities to sudden gait disturbances are related to postural stability and injury risk. Chronic ankle instability (CAI) has been shown to affect the activity of the distal leg muscles while walking. Its effects on proximal muscle activities of the leg, both on the injured (IN) and the uninjured side (NON), remain unclear. Therefore, the aim was to compare differences in the motor control strategy at ipsilateral and contralateral proximal joints during unperturbed and perturbed walking between individuals with CAI and matched controls.
Materials and methods
In a cross-sectional study, 13 participants with unilateral CAI and 13 controls (CON) walked on a split-belt treadmill with and without random left- and right-sided perturbations. EMG amplitudes of lower-extremity muscles were analyzed 200 ms after perturbations, 200 ms before, and 100 ms after (Post100) heel contact while walking. Onset latencies were analyzed at heel contacts and after perturbations. Statistical significance was set at α ≤ 0.05, and 95% confidence intervals were applied to determine group differences. Cohen’s d effect sizes were calculated to evaluate the extent of differences.
Results
Participants with CAI showed increased EMG amplitudes for NON-rectus abdominus at Post100 and shorter latencies for IN-gluteus maximus after heel contact compared to CON (p<0.05). Overall, leg muscles (rectus femoris, biceps femoris, and gluteus medius) activated earlier and less bilaterally (d = 0.30–0.88) and trunk muscles (bilateral rectus abdominus and NON-erector spinae) activated earlier and more for the CAI group than CON group (d = 0.33–1.09).
Conclusion
Unilateral CAI bilaterally alters the pattern of the motor control strategy around proximal joints. Neuromuscular training for the muscles whose motor control strategy is altered by CAI could be considered when planning rehabilitation for CAI.
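The group differences above are quantified with Cohen's d using a pooled standard deviation. A minimal implementation, with hypothetical amplitude values for illustration:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical normalized EMG amplitudes, CAI group vs. controls
d = cohens_d([1.2, 1.5, 1.1, 1.4], [1.0, 1.1, 0.9, 1.2])
```

By convention, d around 0.2 is small, 0.5 medium, and 0.8 large, which is how ranges such as d = 0.30–0.88 in the results are read.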
The use of functional music in gait training, termed rhythmic auditory stimulation (RAS), and treadmill training (TT) have both been shown to be effective in stroke patients (SP). The combination of RAS and treadmill training (RAS-TT) has not been clinically evaluated to date. The aim of the study was to evaluate the efficacy of RAS-TT on functional gait in SP. The protocol followed the design of an explorative study with a rater-blinded, three-arm, prospective randomized controlled parallel group design. Forty-five independently walking SP with a hemiparesis of the lower limb or an unsafe and asymmetrical walking pattern were recruited. RAS-TT was carried out over 4 weeks, with TT and neurodevelopmental treatment based on the Bobath approach (NDT) serving as control interventions. For RAS-TT, functional music was adjusted individually while walking on the treadmill. Pre- and post-assessments consisted of the fast gait speed test (FGS), a gait analysis with the locometre (LOC), the 3 min walking time test (3MWT), and an instrumental evaluation of balance (IEB). Raters were blinded to group assignments. An analysis of covariance (ANCOVA) was performed with affiliated measures from pre-assessment and time between stroke and start of study as covariates. Thirty-five participants (mean age 63.6 ± 8.6 years, mean time between stroke and start of study 42.1 ± 23.7 days) completed the study (11 RAS-TT, 13 TT, 11 NDT). Significant group differences occurred in the FGS for adjusted post-measures in gait velocity [F(2,34) = 3.864, p = 0.032; partial η² = 0.205] and cadence [F(2,34) = 7.656, p = 0.002; partial η² = 0.338]. Group contrasts showed significantly higher values for RAS-TT. Stride length results did not vary between the groups. LOC, 3MWT, and IEB did not indicate group differences. One patient was withdrawn from TT because of pain in one arm.
The study provides first evidence for a higher efficacy of RAS-TT in comparison to the standard approaches TT and NDT in restoring functional gait in SP. The results support the implementation of functional music in neurological gait rehabilitation and its use in combination with treadmill training.
This study aimed to determine the specific physical and basic gymnastics skills considered critical in gymnastics talent identification and selection as well as in promoting men’s artistic gymnastics performances. Fifty-one boys from a provincial gymnastics team (age 11.03 ± 0.95 years; height 1.33 ± 0.05 m; body mass 30.01 ± 5.53 kg; body mass index [BMI] 16.89 ± 3.93 kg/m²) regularly competing at national level voluntarily participated in this study. Anthropometric measures as well as the men’s artistic gymnastics physical test battery (i.e., International Gymnastics Federation [FIG] age group development programme) were used to assess the somatic and physical fitness profile of participants, respectively. The physical characteristics assessed were: muscle strength, flexibility, speed, endurance, and muscle power. Test outcomes were subjected to a principal components analysis to identify the most representative factors. The main findings revealed that power speed, isometric and explosive strength, strength endurance, and dynamic and static flexibility are the most determinant physical fitness aspects of the talent selection process in young male artistic gymnasts. These findings are of utmost importance for talent identification, selection, and development.
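The principal components analysis used above to identify the most representative fitness factors can be sketched as follows, assuming standardized test scores and SVD-based PCA; this is an illustration, not the authors' actual software or data:

```python
import numpy as np

def principal_components(scores, n_components=2):
    """PCA via SVD on standardized scores (rows = athletes, columns = tests).

    Returns the loading matrix (components x tests) and the fraction of
    total variance each retained component explains.
    """
    X = np.asarray(scores, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize each test
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:n_components], explained[:n_components]
```

Tests that load heavily on the same retained component (e.g. isometric and explosive strength) are then interpreted as one underlying fitness factor.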
Background
Back pain patients (BPP) show delayed muscle onset, increased co-contractions, and variability as response to quasi-static sudden trunk loading in comparison to healthy controls (H). However, it is unclear whether these results can validly be transferred to suddenly applied walking perturbations, an automated but more functional and complex movement pattern. There is an evident need to develop research-based strategies for the rehabilitation of back pain. Therefore, the investigation of differences in trunk stability between H and BPP in functional movements is of primary interest in order to define suitable intervention regimes. The purpose of this study was to analyse neuromuscular reflex activity as well as three-dimensional trunk kinematics between H and BPP during walking perturbations.
Methods
Eighty H (31m/49f;29±9yrs;174±10cm;71±13kg) and 14 BPP (6m/8f;30±8yrs;171±10cm;67±14kg) walked (1m/s) on a split-belt treadmill while 15 right-sided perturbations (belt decelerating, 40 m/s², 50ms duration; 200ms after heel contact) were randomly applied. Trunk muscle activity was assessed using a 12-lead EMG set-up. Trunk kinematics were measured using a 3-segment-model consisting of 12 markers (upper thoracic (UTA), lower thoracic (LTA), lumbar area (LA)). EMG-RMS ([%],0-200ms after perturbation) was calculated and normalized to the RMS of unperturbed gait. Latency (TON;ms) and time to maximum activity (TMAX;ms) were analysed. Total motion amplitude (ROM;[°]) and mean angle (Amean;[°]) for extension-flexion, lateral flexion and rotation were calculated (whole stride cycle; 0-200ms after perturbation) for each of the three segments during unperturbed and perturbed gait. For ROM only, the perturbed step was normalized to the unperturbed step [%] for the whole stride as well as the 200ms after perturbation. Data were analysed descriptively followed by a Student's t-test to account for group differences. Co-contraction was analyzed between ventral and dorsal muscles (V:R) as well as the side right:side left ratio (Sright:Sleft). The coefficient of variation (CV;%) was calculated (EMG-RMS;ROM) to evaluate variability between the 15 perturbations for all groups. Given the unequal distribution of participants across groups, an additional matched-group analysis was conducted. Fourteen healthy controls out of group H were sex-, age- and anthropometrically matched (group Hmatched) to the BPP.
Results
No group differences were observed for EMG-RMS or CV analysis (EMG/ROM) (p>0.025). Co-contraction analysis revealed no differences for V:R and Sright:Sleft between the groups (p>0.025). BPP showed an increased TON and TMAX, being significant for Mm. rectus abdominus (p = 0.019) and erector spinae T9/L3 (p = 0.005/p = 0.015). ROM analysis over the unperturbed stride cycle revealed no differences between groups (p>0.025). Normalization of the perturbed to the unperturbed step led to significant differences for the lumbar segment (LA) in lateral flexion, with BPP showing higher normalized ROM compared to Hmatched (p = 0.02). BPP showed a significantly more flexed posture (UTA (p = 0.02); LTA (p = 0.004)) during normal walking (Amean). Trunk posture (Amean) during perturbation showed higher trunk extension values in LTA segments for H/Hmatched compared to BPP (p = 0.003). Matched-group (BPP vs. Hmatched) analysis did not reveal any systematic changes in the results between groups.
Conclusion
BPP present impaired muscle response times and altered trunk posture, especially in the sagittal and transversal planes, compared to H. This could indicate reduced trunk stability and higher loading during gait perturbations.
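The EMG-RMS outcome used in this study expresses the root-mean-square amplitude of a post-perturbation window as a percentage of the matching window during unperturbed gait. A minimal sketch with hypothetical signal windows:

```python
import math

def rms(window):
    """Root mean square of a signal window."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def normalized_emg_rms(perturbed_window, unperturbed_window):
    """EMG-RMS of the post-perturbation window (e.g. 0-200 ms), expressed
    in % of the RMS of the matching window during unperturbed gait."""
    return 100.0 * rms(perturbed_window) / rms(unperturbed_window)

# Hypothetical EMG samples (mV): values of 100% indicate no amplitude change
ratio = normalized_emg_rms([0.42, 0.55, 0.61, 0.48], [0.30, 0.35, 0.33, 0.31])
```

Values above 100% therefore indicate an amplified reflex response to the perturbation.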
Background
Overweight and obesity are increasing health problems that are not restricted to adults only. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on the foot function across maturation is lacking. Decreased foot function with disproportional loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait of normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 ± 2.9yr; 1.23 ± 0.19m; 26.6 ± 10.6kg; BMI: 17.1 ± 2.4kg/m²) were included for (complete case) data analysis. Children were categorized as normal-weight (≥3rd and <90th percentile; n = 6458), overweight (≥90th and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system that is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data were analyzed descriptively (mean ± SD) followed by ANOVA/Welch-test (according to homogeneity of variances: yes/no) for group differences according to BMI categorization (normal-weight, overweight, obesity) and for each age group 1 to 12yrs (post-hoc Tukey Kramer/Dunnett's C; α = 0.05).
Results
Mean walking velocity was 0.95 ± 0.25 m/s with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force time integral in overweight and obese children (p < 0.001). Obese children showed 1.48-fold (1-year-olds) to 3.49-fold (10-year-olds) the midfoot loading (FTI) of normal-weight children.
Conclusion
Additional body mass leads to higher overall load, with disproportional impact on the midfoot area and longitudinal foot arch showing characteristic foot loading patterns. The feet of one- and two-year-old children are already significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading with potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
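The percentile-based weight categorization described in the methods can be sketched as below; the cut-off values in the example are hypothetical placeholders, not the actual age- and sex-specific German reference percentiles:

```python
def bmi_category(bmi, p3, p90, p97):
    """Classify a child's BMI against age- and sex-specific reference
    percentiles; p3, p90 and p97 are the 3rd, 90th and 97th percentile
    cut-offs (hypothetical placeholders for the German reference values)."""
    if bmi > p97:
        return "obese"
    if bmi >= p90:
        return "overweight"
    if bmi >= p3:
        return "normal-weight"
    return "underweight"

# Hypothetical cut-offs for one age/sex group
category = bmi_category(20.1, p3=13.2, p90=19.4, p97=21.3)
```

In practice the three cut-offs are looked up per age group and sex before classifying each child.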
Background
Recently, the incidence rate of back pain (BP) in adolescents has been reported at 21%. However, the development of BP in adolescent athletes is unclear. Hence, the purpose of this study was to examine the incidence of BP in young elite athletes in relation to gender and type of sport practiced.
Methods
Subjective BP was assessed in 321 elite adolescent athletes (m/f 57%/43%; 13.2 ± 1.4 years; 163.4 ± 11.4 cm; 52.6 ± 12.6 kg; 5.0 ± 2.6 training yrs; 7.6 ± 5.3 training h/week). Initially, all athletes were free of pain. The main outcome criterion was the incidence of back pain [%] analyzed in terms of pain development from the first measurement day (M1) to the second measurement day (M2) after 2.0 ± 1.0 years. Participants were classified into athletes who developed back pain (BPD) and athletes who did not develop back pain (nBPD). BP (acute or within the last 7 days) was assessed with a 5-step face scale (face 1–2 = no pain; face 3–5 = pain). BPD included all athletes who reported face 1 or 2 at M1 and face 3 to 5 at M2. nBPD were all athletes who reported face 1 or 2 at both M1 and M2. Data were analyzed descriptively. Additionally, a Chi² test was used to analyze gender- and sport-specific differences (α = 0.05).
Results
Thirty-two athletes were categorized as BPD (10%). The gender difference was 5% (m/f: 12%/7%) but did not show statistical significance (p = 0.15). The incidence of BP ranged between 6 and 15% for the different sport categories. Game sports (15%) showed the highest, and explosive strength sports (6%) the lowest incidence. Neither anthropometrics nor training characteristics significantly influenced BPD (p = 0.14 for gender to p = 0.90 for sports; r² = 0.0825).
Conclusions
BP incidence was lower in adolescent athletes compared to young non-athletes and even to the general adult population. Consequently, it can be concluded that high-performance sports do not lead to an additional increase in back pain incidence during early adolescence. Nevertheless, back pain prevention programs should be implemented into daily training routines for sport categories identified as showing high incidence rates.
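The Chi² comparison of incidence between genders above reduces to a test on a 2x2 contingency table. A minimal Pearson chi-square sketch with hypothetical counts, not the study's actual data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. back pain developed (yes/no) by gender."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: males (BPD/nBPD) vs. females (BPD/nBPD)
chi2 = chi_square_2x2(22, 161, 10, 128)
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-value.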
Background:
Arising from the relevance of sensorimotor training in the therapy of nonspecific low back pain patients and from the value of individualized therapy, the present trial aims to test the feasibility and efficacy of individualized sensorimotor training interventions in patients suffering from nonspecific low back pain.
Methods and study design:
A multicentre, single-blind, two-armed randomized controlled trial to evaluate the effects of a 12-week (3 weeks supervised centre-based and 9 weeks home-based) individualized sensorimotor exercise program is performed. The control group stays inactive during this period. Outcomes are pain and pain-associated function as well as motor function in adults with nonspecific low back pain. Each participant is scheduled for five measurement dates: baseline (M1), following centre-based training (M2), following home-based training (M3) and at two follow-up time points 6 months (M4) and 12 months (M5) after M1. All investigations and the assessment of the primary and secondary outcomes are performed in a standardized order: questionnaires – clinical examination – biomechanics (motor function). Statistical procedures are executed after examining the underlying assumptions for parametric or non-parametric testing.
Discussion:
The results of the study will be of clinical and practical relevance not only for researchers and policy makers but also for the general population suffering from nonspecific low back pain.
Trial registration:
German Clinical Trials Register, identification number DRKS00010129; registered on 3 March 2016.