The purpose of this study was to investigate the effects of plyometric training on stable (SPT) vs. highly unstable surfaces (IPT) on athletic performance in adolescent soccer players. Twenty-four male sub-elite soccer players (age: 15 ± 1 years) were assigned to 2 groups performing plyometric training for 8 weeks (2 sessions/week, 90 min each). The SPT group conducted plyometrics on stable and the IPT group on unstable surfaces. Tests included jump performance (countermovement jump [CMJ] height, drop jump [DJ] height, DJ performance index), sprint time, agility, and balance. Statistical analysis revealed significant main effects of time for CMJ height (p < 0.01, f = 1.44), DJ height (p < 0.01, f = 0.62), DJ performance index (p < 0.05, f = 0.60), 0–10-m sprint time (p < 0.05, f = 0.58), agility (p < 0.01, f = 1.15), and balance (p < 0.05, 0.46 ≤ f ≤ 1.36). Additionally, a Training group × Time interaction was found for CMJ height (p < 0.01, f = 0.66) in favor of the SPT group. Following 8 weeks of training, similar improvements in speed, agility, and balance were observed in the IPT and SPT groups. However, IPT appears to be less effective than SPT for increasing CMJ height. It is thus recommended that coaches use SPT if the goal is to improve jump performance.
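The effect size f reported above is Cohen's f, which can be derived from the (partial) eta-squared of an ANOVA. A minimal sketch of the standard conversion; the example value is illustrative and not taken from the study:

```python
from math import sqrt

def cohens_f(eta_squared):
    """Convert (partial) eta-squared from an ANOVA to Cohen's f."""
    return sqrt(eta_squared / (1.0 - eta_squared))

# Illustrative: eta^2 = 0.26 corresponds to a large effect (f >= 0.40)
f = cohens_f(0.26)
```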
Power training programs have proved to be effective in improving components of physical fitness such as speed. According to the concept of training specificity, it was postulated that exercises must closely mimic the demands of the respective activity. When transferring this idea to speed development, the purpose of the present study was to examine the effects of resisted sprint (RST) vs. traditional power training (TPT) on physical fitness in healthy young adults. Thirty-five healthy, physically active adults were randomly assigned to an RST (n = 10, 23 ± 3 years), a TPT (n = 9, 23 ± 3 years), or a passive control group (n = 16, 23 ± 2 years). RST and TPT exercised for 6 weeks with three training sessions/week, each lasting 45–60 min. RST comprised frontal and lateral sprint exercises using an expander system attached to a treadmill (h/p/cosmos), with increasing levels of resistance. TPT included ballistic strength training at 40% of the one-repetition maximum for the lower limbs (e.g., leg press, knee extensions). Before and after training, sprint (20-m sprint), change-of-direction speed (T-agility test), jump (drop, countermovement jump), and balance performances (Y balance test) were assessed. ANCOVA statistics revealed large main effects of group for 20-m sprint velocity and ground contact time (0.81 ≤ d ≤ 1.00). Post-hoc tests showed higher sprint velocity following RST and TPT (0.69 ≤ d ≤ 0.82) when compared to the control group, but no difference between RST and TPT. Pre-to-post changes amounted to 4.5% for RST [90%CI: (−1.1%;10.1%), d = 1.23] and 2.6% for TPT [90%CI: (0.4%;4.8%), d = 1.59]. Additionally, ground contact times during sprinting were shorter following RST and TPT (0.68 ≤ d ≤ 1.09) compared to the control group, but no difference between RST and TPT. Pre-to-post changes amounted to −6.3% for RST [90%CI: (−11.4%;−1.1%), d = 1.45] and −2.7% for TPT [90%CI: (−4.2%;−1.2%), d = 2.36].
Finally, effects for change-of-direction speed, jump, and balance performance varied from small-to-large. The present findings indicate that 6 weeks of RST and TPT produced similar effects on 20-m sprint performance compared with a passive control in healthy and physically active, young adults. However, no training-related effects were found for change-of-direction speed, jump and balance performance. We conclude that both training regimes can be applied for speed development.
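The pre-to-post changes above combine a percent change with a within-group effect size. A minimal sketch of both computations; the velocities are illustrative and not the study's raw data:

```python
def percent_change(pre, post):
    """Pre-to-post change expressed as a percentage of the baseline value."""
    return (post - pre) / pre * 100.0

def cohens_d_within(pre_mean, post_mean, pre_sd):
    """Within-group effect size, standardized by the baseline SD."""
    return (post_mean - pre_mean) / pre_sd

# Illustrative 20-m sprint velocities (m/s)
change = percent_change(pre=6.6, post=6.9)
```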
Objective:
This study aimed to systematically review and meta-analyze the effect of flywheel resistance training (FRT) versus traditional resistance training (TRT) on change of direction (CoD) performance in male athletes.
Methods:
Five databases were screened up to December 2021.
Results:
Seven studies were included. The results indicated a significantly larger effect of FRT compared with TRT (standardized mean difference [SMD] = 0.64). A within-group comparison indicated a significant large effect of FRT on CoD performance (SMD = 1.63). For TRT, a significant moderate effect was observed (SMD = 0.62). FRT at ≤ 2 sessions/week resulted in a significant large effect (SMD = 1.33), whereas no significant effect was noted for > 2 sessions/week. Additionally, a significant large effect of ≤ 12 FRT sessions (SMD = 1.83) was observed, with no effect of > 12 sessions. Regarding TRT, no significant effects of any of the training factors were detected (p > 0.05).
Conclusions:
FRT appears to be more effective than TRT in improving CoD performance in male athletes. Independently computed single-training-factor analyses for FRT indicated that ≤ 2 sessions/week resulted in a larger effect on CoD performance than > 2 sessions/week. Additionally, a total of ≤ 12 FRT sessions induced a larger effect than > 12 training sessions. Practitioners in sports in which accelerative and decelerative actions occur in quick succession to change direction should regularly implement FRT.
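The standardized mean differences reported in this meta-analysis are computed from group means and a pooled standard deviation. A minimal sketch of the standard formula; the example numbers are illustrative, not data from the review:

```python
from math import sqrt

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) between two groups."""
    return (mean1 - mean2) / pooled_sd(sd1, n1, sd2, n2)

# Illustrative: CoD-time improvements (s) for an FRT vs. a TRT group
d = smd(mean1=0.30, sd1=0.10, n1=12, mean2=0.22, sd2=0.10, n2=12)
```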
Coaches and athletes in elite sports are constantly seeking to use innovative and advanced training strategies to efficiently improve strength/power performance in already highly-trained individuals. In this regard, high-intensity conditioning contractions have become a popular means to induce acute improvements primarily in muscle contractile properties, which are supposed to translate to subsequent power performances. This performance-enhancing physiological mechanism has previously been called postactivation potentiation (PAP). However, in contrast to the traditional mechanistic understanding of PAP that is based on electrically-evoked twitch properties, an increasing number of studies used the term PAP while referring to acute performance enhancements, even if physiological measures of PAP were not directly assessed. In this current opinion article, we compare the two main approaches (i.e., mechanistic vs. performance) used in the literature to describe PAP effects. We additionally discuss potential misconceptions in the general use of the term PAP. Studies showed that mechanistic and performance-related PAP approaches have different characteristics in terms of the applied research field (basic vs. applied), effective conditioning contractions (e.g., stimulated vs. voluntary), verification (lab-based vs. field tests), effects (twitch peak force vs. maximal voluntary strength), occurrence (consistent vs. inconsistent), and time course (largest effect immediately after vs. ~7 min after the conditioning contraction). Moreover, cross-sectional studies revealed inconsistent and trivial-to-large-sized associations between selected measures of mechanistic (e.g., twitch peak force) vs. performance-related PAP approaches (e.g., jump height). In an attempt to avoid misconceptions related to the two different PAP approaches, we propose to use two different terms.
Postactivation potentiation should only be used to indicate the increase in muscular force/torque production during an electrically-evoked twitch. In contrast, postactivation performance enhancement (PAPE) should be used to refer to the enhancement of measures of maximal strength, power, and speed following conditioning contractions. The implementation of this terminology would help to better differentiate between mechanistic and performance-related PAP approaches. This is important from a physiological point of view, but also when it comes to aggregating findings from PAP studies, e.g., in the form of meta-analyses, and translating these findings to the field of strength and conditioning.
The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports like amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) contained sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria of their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or nonexistent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43–1.00). Content validity was addressed in all included studies, criterion validity (only the concurrent aspect of it) in approximately half of the studies, with correlation coefficients ranging from r = −0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test.
In 28% of the included studies, insufficient information or a complete lack of information was provided in the respective field of the test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.
Background
Maximal isokinetic strength ratios of joint flexors and extensors are important parameters to indicate the level of muscular balance at the joint. Further, in combat sports athletes, upper and lower limb muscle strength is affected by the type of sport. Thus, this study aimed to examine the differences in maximal isokinetic strength of the flexors and extensors and the corresponding flexor–extensor strength ratios of the elbows and knees in combat sports athletes.
Method
Forty male participants (age = 22.3 ± 2.5 years) from four different combat sports (amateur boxing, taekwondo, karate, and judo; n = 10 per sport) were tested for eccentric peak torque of the elbow/knee flexors (EF/KF) and concentric peak torque of the elbow/knee extensors (EE/KE) at three different angular velocities (60, 120, and 180°/s) on the dominant and non-dominant side using an isokinetic device.
Results
Analyses revealed significant, large-sized group × velocity × limb interactions for EF, EE, and EF–EE ratio, KF, KE, and KF–KE ratio (p ≤ 0.03; 0.91 ≤ d ≤ 1.75). Post-hoc analyses indicated that amateur boxers displayed the largest EE strength values on the non-dominant side at ≤ 120°/s and the dominant side at ≥ 120°/s (p < 0.03; 1.21 ≤ d ≤ 1.59). The largest EF–EE strength ratios were observed on amateur boxers’ and judokas’ non-dominant side at ≥ 120°/s (p < 0.04; 1.36 ≤ d ≤ 2.44). Further, we found lower KF–KE strength measures in karate (p < 0.04; 1.12 ≤ d ≤ 6.22) and judo athletes (p ≤ 0.03; 1.60 ≤ d ≤ 5.31) particularly on the non-dominant side.
Conclusions
The present findings indicated combat sport-specific differences in maximal isokinetic strength measures of EF, EE, KF, and KE particularly in favor of amateur boxers on the non-dominant side.
Background
Recently, the incidence rate of back pain (BP) in adolescents has been reported at 21%. However, the development of BP in adolescent athletes is unclear. Hence, the purpose of this study was to examine the incidence of BP in young elite athletes in relation to gender and type of sport practiced.
Methods
Subjective BP was assessed in 321 elite adolescent athletes (m/f 57%/43%; 13.2 ± 1.4 years; 163.4 ± 11.4 cm; 52.6 ± 12.6 kg; 5.0 ± 2.6 training years; 7.6 ± 5.3 training h/week). Initially, all athletes were free of pain. The main outcome criterion was the incidence of back pain [%], analyzed in terms of pain development from the first measurement day (M1) to the second measurement day (M2) after 2.0 ± 1.0 years. Participants were classified into athletes who developed back pain (BPD) and athletes who did not develop back pain (nBPD). BP (acute or within the last 7 days) was assessed with a 5-step face scale (faces 1–2 = no pain; faces 3–5 = pain). BPD included all athletes who reported face 1 or 2 at M1 and faces 3 to 5 at M2. nBPD were all athletes who reported face 1 or 2 at both M1 and M2. Data were analyzed descriptively. Additionally, a chi-square test was used to analyze gender- and sport-specific differences (α = 0.05).
Results
Thirty-two athletes were categorized as BPD (10%). The gender difference was 5% (m/f: 12%/7%) but did not reach statistical significance (p = 0.15). The incidence of BP ranged between 6 and 15% for the different sport categories. Game sports (15%) showed the highest, and explosive strength sports (6%) the lowest incidence. Anthropometrics and training characteristics did not significantly influence BPD (p = 0.14 for gender to p = 0.90 for sports; r² = 0.0825).
Conclusions
BP incidence was lower in adolescent athletes compared to young non-athletes and even to the general adult population. Consequently, it can be concluded that high-performance sports do not lead to an additional increase in back pain incidence during early adolescence. Nevertheless, back pain prevention programs should be implemented into daily training routines for sport categories identified as showing high incidence rates.
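The gender comparison above rests on a chi-square test of a 2×2 incidence table. A minimal pure-Python sketch of the Pearson statistic; the cell counts are approximated from the reported percentages (57% of 321 male; 12%/7% incidence), so they are illustrative rather than the study's exact data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Approximate counts: 183 males (22 with BP), 138 females (10 with BP)
chi2 = chi_square_2x2(22, 161, 10, 128)
```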
Strength training is an important means for performance development in young rowers. The purpose of this study was to examine the effects of 9 weeks of equal-volume heavy-resistance strength training (HRST) versus strength endurance training (SET), in addition to regular rowing training, on primary (e.g., maximal strength/power) and secondary outcomes (e.g., balance) in young rowers. Twenty-six female elite adolescent rowers were assigned to an HRST (n = 12; age: 13.2 ± 0.5 years; maturity offset: +2.0 ± 0.5 years) or a SET group (n = 14; age: 13.1 ± 0.5 years; maturity offset: +2.1 ± 0.5 years). HRST and SET comprised lower-limb (i.e., leg press, knee flexion/extension), upper-limb (i.e., bench press/pull, lat-pull down), and complex exercises (i.e., rowing ergometer). HRST performed four sets of 12 repetitions at an intensity of 75–95% of the one-repetition maximum (1-RM). SET conducted four sets of 30 repetitions at 50–60% of the 1-RM. Training volume was matched for overall repetitions × intensity × training sessions per week. Before and after training, tests were performed for the assessment of primary [i.e., maximal strength (e.g., bench pull, knee flexion/extension 1-RM, isometric handgrip test), muscle power (e.g., medicine-ball push test, triple hop, drop jump, and countermovement jump), anaerobic endurance (400-m run), sport-specific performance (700-m rowing ergometer trial)] and secondary outcomes [dynamic balance (Y-balance test), change-of-direction (CoD) speed (multistage shuttle-run test)]. The adherence rate was >87% and one athlete of each group dropped out. Overall, 24 athletes completed the study and no test- or training-related injuries occurred. Significant group × time interactions were observed for maximal strength, muscle power, anaerobic endurance, CoD speed, and sport-specific performance (p ≤ 0.05; 0.45 ≤ d ≤ 1.11).
Post hoc analyses indicated larger gains in maximal strength and muscle power following HRST (p ≤ 0.05; 1.81 ≤ d ≤ 3.58) compared with SET (p ≤ 0.05; 1.04 ≤ d ≤ 2.30). Furthermore, SET (p ≤ 0.01; d = 2.08) resulted in larger gains in sport-specific performance compared with HRST (p < 0.05; d = 1.3). Only HRST produced significant pre-post improvements for anaerobic endurance and CoD speed (p ≤ 0.05; 1.84 ≤ d ≤ 4.76). In conclusion, HRST in addition to regular rowing training was more effective than SET to improve selected measures of physical fitness (i.e., maximal strength, muscle power, anaerobic endurance, and CoD speed) and SET was more effective than HRST to enhance sport-specific performance gains in female elite young rowers.
Fatigue has been defined differently in the literature depending on the field of research. The inconsistent use of the term fatigue has complicated scientific communication, thereby limiting progress towards a more in-depth understanding of the phenomenon. Therefore, Enoka and Duchateau (Med Sci Sports Exerc 48:2228-38, 2016, [3]) proposed a fatigue framework that distinguishes between trait fatigue (i.e., fatigue experienced by an individual over a longer period of time) and motor or cognitive task-induced state fatigue (i.e., a self-reported disabling symptom derived from the two interdependent attributes performance fatigability and perceived fatigability). In this framework, performance fatigability describes a decrease in an objective performance measure, while perceived fatigability refers to the sensations that regulate the integrity of the performer. Although this framework served as a good starting point to unravel the psychophysiology of fatigue, several important aspects were not included and the interdependence of the mechanisms driving performance fatigability and perceived fatigability was not comprehensively discussed. Therefore, the present narrative review aimed to (1) update the fatigue framework suggested by Enoka and Duchateau (Med Sci Sports Exerc 48:2228-38, 2016, [3]) pertaining to the taxonomy (i.e., cognitive performance fatigue and perceived cognitive fatigue were added) and important determinants that were not considered previously (e.g., effort perception, affective valence, self-regulation), (2) discuss the mechanisms underlying performance fatigue and perceived fatigue in response to motor and cognitive tasks as well as their interdependence, and (3) provide recommendations for future research on these interactions.
We propose to define motor or cognitive task-induced state fatigue as a psychophysiological condition characterized by a decrease in motor or cognitive performance (i.e., motor or cognitive performance fatigue, respectively) and/or an increased perception of fatigue (i.e., perceived motor or cognitive fatigue). These dimensions are interdependent, hinge on different determinants, and depend on body homeostasis (e.g., wakefulness, core temperature) as well as several modulating factors (e.g., age, sex, diseases, characteristics of the motor or cognitive task). Consequently, there is no single factor primarily determining performance fatigue and perceived fatigue in response to motor or cognitive tasks. Instead, the relative weight of each determinant and their interaction are modulated by several factors.