Refine
Year of publication
Document Type
- Article (216)
- Postprint (162)
- Doctoral Thesis (16)
- Monograph/Edited Volume (7)
- Conference Proceeding (6)
- Review (6)
- Preprint (3)
- Master's Thesis (2)
- Habilitation Thesis (1)
- Part of Periodical (1)
Keywords
- exercise (15)
- football (14)
- embodied cognition (13)
- fMRI (12)
- working memory (12)
- performance (10)
- German (9)
- language acquisition (9)
- neuroimaging (9)
- adolescents (8)
Institute
- Strukturbereich Kognitionswissenschaften (420)
Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects the current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks provide a venue for questions of numerical capacities and human arithmetic (Brian Butterworth), of the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), of categorizations induced by linguistic constructions (Claudia Maienborn), and of a cross-level account of the “Self as a complex system” (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the university’s Cognitive Sciences Area of Excellence in Potsdam are represented by an invited symposium on “Information Structure” by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name at Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from the perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged 3 to 5 years. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to show that the effects predicted by the input frequency and the structural intervention approaches co-exist and that both influence children’s performance on transitive relative clauses, in a manner that is modulated by age. These results reveal that 3-year-olds are already sensitive to animacy mismatch and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain children’s performance, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
Starting with 4-year-olds, but especially in 5-year-olds, the number mismatch supported comprehension, a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of experimentally testing contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within modalities (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study, with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants’ sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
The 3rd Herbsttreffen Patholinguistik took place on 21 November 2009 at the University of Potsdam. This proceedings volume contains the three keynote talks on the main topic “Von der Programmierung zur Artikulation: Sprechapraxie bei Kindern und Erwachsenen” (“From Programming to Articulation: Apraxia of Speech in Children and Adults”). In addition, the volume contains the contributions from the Spektrum Patholinguistik section as well as the abstracts of the poster presentations.
The purpose of this study was to examine the test-retest reliability and the convergent and discriminative validity of a new taekwondo-specific change-of-direction (COD) speed test with striking techniques (TST) in elite taekwondo athletes. Twenty (10 male and 10 female) elite (athletes who compete at national level) and top-elite (athletes who compete at national and international level) taekwondo athletes with an average background of 8.9 ± 1.3 years of systematic taekwondo training participated in this study. During the two-week test-retest period, various generic performance tests measuring COD speed, balance, speed, and jump performance were carried out during the first week and as a retest during the second week. Three TST trials were conducted with each athlete and the best trial was used for further analyses. The relevant performance measure derived from the TST was the time with striking penalty (TST-TSP). TST-TSP performances amounted to 10.57 ± 1.08 s for males and 11.74 ± 1.34 s for females. The reliability analysis of the TST performance was conducted after logarithmic transformation in order to address the problem of heteroscedasticity. In both groups, the TST demonstrated high relative test-retest reliability (intraclass correlation coefficient and 90% compatibility limits: 0.80 and 0.47 to 0.93, respectively). For absolute reliability, the TST’s typical error of measurement (TEM), 90% compatibility limits, and magnitudes were 4.6%, 3.4 to 7.7, for males, and 5.4%, 3.9 to 9.0, for females. The homogeneous sample of taekwondo athletes meant that the TST’s TEM exceeded the usual smallest important change (SIC) with 0.2 effect size in both groups.
The new test showed mostly very large correlations with linear sprint speed (r = 0.71 to 0.85) and dynamic balance (r = −0.71 and −0.74), large correlations with COD speed (r = 0.57 to 0.60) and vertical jump performance (r = −0.50 to −0.65), and moderate correlations with horizontal jump performance (r = −0.34 to −0.45) and static balance (r = −0.39 to −0.44). Top-elite athletes showed better TST performances than their elite counterparts. Receiver operating characteristic analysis indicated that the TST effectively discriminated between top-elite and elite taekwondo athletes. In conclusion, the TST is a valid and sensitive test of COD speed with taekwondo-specific skills, and reliable when considering ICC and TEM. Although the usefulness of the TST for detecting small performance changes in the present population is questionable, the TST can detect moderate changes in taekwondo-specific COD speed.
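The absolute reliability statistic reported above follows a standard test-retest workflow: log-transform the paired trials, take the standard deviation of the difference scores divided by √2 as the typical error, and back-transform it to a percent. A minimal sketch of that calculation, using made-up trial times rather than the study's data:

```python
import math

# Hypothetical test-retest trial times in seconds; illustrative only,
# NOT the study's data.
test = [10.4, 11.1, 9.8, 10.9, 11.6, 10.2]
retest = [10.6, 10.9, 10.1, 11.2, 11.3, 10.5]

# Log-transform the trials to address heteroscedasticity, as in the abstract.
diffs = [math.log(r) - math.log(t) for t, r in zip(test, retest)]
mean_d = sum(diffs) / len(diffs)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))

# Typical error on the log scale is the SD of difference scores / sqrt(2);
# back-transform to express it as a percent coefficient of variation.
tem_log = sd_d / math.sqrt(2)
tem_percent = 100 * (math.exp(tem_log) - 1)
print(f"TEM = {tem_percent:.1f}%")
```

Comparing this percent typical error against the smallest important change (0.2 of the between-athlete SD) is what motivates the usefulness caveat in the abstract.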
We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects while still listening to further instructions. This rehearsal behavior broke down after four instructions, coincident with participants' instruction span as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution.
This study aimed to investigate the relationship between the acute to chronic workload ratio (ACWR), based upon participants' session rating of perceived exertion (sRPE) and computed with two models [(1) rolling averages (ACWRRA) and (2) exponentially weighted moving averages (ACWREWMA)], and the injury rate in young male team soccer players aged 17.1 ± 0.7 years during a competitive mesocycle. Twenty-two players were enrolled in this study and performed four training sessions per week with 2 days of recovery and 1 match day per week. During each training session and each weekly match, training time and sRPE were recorded. In addition, training impulse (TRIMP), monotony, and strain were subsequently calculated. The rate of injury was recorded for each soccer player over a period of 4 weeks (i.e., 28 days) using a daily questionnaire. The results showed that over the course of the study, the number of non-contact injuries was significantly higher than that of contact injuries (2.5 vs. 0.5, p = 0.01). There were also significant positive correlations between sRPE and training time (r = 0.411, p = 0.039), ACWRRA (r = 0.47, p = 0.049), and ACWREWMA (r = 0.51, p = 0.038). In addition, small-to-medium correlations were detected between ACWR and non-contact injury occurrence (ACWRRA, r = 0.31, p = 0.05; ACWREWMA, r = 0.53, p = 0.03). Explained variance (r²) for non-contact injury was significantly greater for the ACWREWMA model (ranging between 21 and 52%) than for ACWRRA (ranging between 17 and 39%). In conclusion, the results of this study showed that the ACWREWMA model is more sensitive than ACWRRA in identifying non-contact injury occurrence in male team soccer players during a short period of the competitive season.
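The two ACWR models contrasted in this abstract differ only in how the acute and chronic loads are aggregated before the ratio is taken. A minimal sketch, assuming the conventional 7-day acute and 28-day chronic windows and illustrative daily sRPE-based loads (neither the windows nor the numbers are taken from the study):

```python
# Sketch of the two ACWR models: rolling averages vs. exponentially
# weighted moving averages (EWMA). Daily loads are made-up values.

def rolling_acwr(loads, acute=7, chronic=28):
    """ACWR via simple rolling means over the trailing windows."""
    acute_mean = sum(loads[-acute:]) / acute
    chronic_mean = sum(loads[-chronic:]) / chronic
    return acute_mean / chronic_mean

def ewma(loads, span):
    """EWMA with smoothing factor lambda = 2 / (span + 1)."""
    lam = 2 / (span + 1)
    value = loads[0]
    for x in loads[1:]:
        value = lam * x + (1 - lam) * value
    return value

def ewma_acwr(loads, acute=7, chronic=28):
    """ACWR as the ratio of a fast (acute) to a slow (chronic) EWMA."""
    return ewma(loads, acute) / ewma(loads, chronic)

# 28 days of illustrative daily load (arbitrary units): four identical
# training weeks with two rest days each.
daily_load = [400, 450, 0, 500, 480, 0, 300] * 4
print(round(rolling_acwr(daily_load), 2), round(ewma_acwr(daily_load), 2))
```

Because the EWMA weights recent sessions more heavily, it reacts faster to load spikes than the rolling-average model, which is one plausible reason for its greater sensitivity reported above.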
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety, and balance have been discussed as important factors that influence the design, analysis, and interpretation of experiments and their results. This volume brings together creators and users of both general-purpose and specialized lexical resources which are used in psychology, psycholinguistics, neurolinguistics, and cognitive research. It aims to be a forum to report experiences and results, review problems, and discuss perspectives on any linguistic data used in the field.
Prosody and information status in typological perspective - Introduction to the Special Issue
(2015)
Postural control is important to cope with the demands of everyday life. It has been shown that both attentional demand (i.e., cognitive processing) and fatigue affect postural control in young adults. However, their combined effect is still unresolved. Therefore, we investigated the effects of fatigue on single-task (ST) and dual-task (DT) postural control. Twenty young subjects (age: 23.7 ± 2.7 years) performed an all-out incremental treadmill protocol. After each completed stage, one-legged-stance performance on a force platform under ST (i.e., one-legged stance only) and DT conditions (i.e., one-legged stance while subtracting serial 3s) was registered. On a second test day, subjects conducted the same balance tasks in the control condition (i.e., non-fatigued). Results showed that heart rate, lactate, and ventilation increased following fatigue (all p < 0.001; d = 4.2–21). Postural sway and sway velocity increased during DT compared to ST (all p < 0.001; d = 1.9–2.0) and in the fatigued compared to the non-fatigued condition (all p < 0.001; d = 3.3–4.2). In addition, postural control deteriorated with each completed stage during the treadmill protocol (all p < 0.01; d = 1.9–3.3). The addition of an attention-demanding interference task did not further impede one-legged-stance performance. Although both additional attentional demand and physical fatigue affected postural control in healthy young adults, there was no evidence for an overadditive effect (i.e., fatigue-related performance decrements in postural control were similar under ST and DT conditions). Thus, attentional resources were sufficient to cope with the DT situations in the fatigue condition of this experiment.
The term “bilateral deficit” (BLD) has been used to describe a reduction in performance during bilateral contractions when compared to the sum of identical unilateral contractions. In old age, maximal isometric force production (MIF) decreases and the BLD increases, indicating the need for training interventions to mitigate this impact in seniors. In a cross-sectional approach, we examined age-related differences in MIF and BLD in young (age: 20–30 years) and old adults (age: >65 years). In addition, a randomized controlled trial was conducted to investigate training-specific effects of resistance vs. balance training on MIF and BLD of the leg extensors in old adults. Subjects were randomly assigned to resistance training (n = 19), balance training (n = 14), or a control group (n = 20). Bilateral heavy-resistance training for the lower extremities was performed for 13 weeks (3 ×/week) at 80% of the one-repetition maximum. Balance training was conducted using predominantly unilateral exercises on wobble boards, soft mats, and uneven surfaces for the same duration. Pre- and post-tests included uni- and bilateral measurements of maximal isometric leg extension force. At baseline, young subjects outperformed older adults in uni- and bilateral MIF (all p < .001; d = 2.61–3.37) and in measures of BLD (p < .001; d = 2.04). We also found significant increases in uni- and bilateral MIF after resistance training (all p < .001, d = 1.8–5.7) and balance training (all p < .05, d = 1.3–3.2). In addition, the BLD decreased following resistance (p < .001, d = 3.4) and balance training (p < .001, d = 2.6). It can be concluded that both training regimens resulted in increased MIF and decreased BLD of the leg extensors (with larger effects in the resistance training group than in the balance training group), almost reaching the levels of young adults.
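The bilateral deficit described above is commonly quantified with a bilateral index that compares bilateral force with the sum of the two unilateral forces, with negative values indicating a deficit. A minimal sketch with illustrative force values (not data from the study):

```python
# Bilateral index (BI): one common way to express the bilateral deficit.
# All force values below are made-up numbers for illustration.

def bilateral_index(bilateral, left, right):
    """BI in percent; negative values indicate a bilateral deficit."""
    return 100.0 * bilateral / (left + right) - 100.0

# e.g., 900 N produced bilaterally vs. 480 N + 520 N produced
# unilaterally yields a 10% deficit.
print(bilateral_index(900.0, 480.0, 520.0))  # → -10.0
```

Under this convention, a BI moving from a strongly negative value toward zero after training corresponds to the decrease in BLD reported for both intervention groups.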
Walking while concurrently performing cognitive and/or motor interference tasks is the norm rather than the exception during everyday life, and there is evidence from behavioral studies that it negatively affects human locomotion. However, hardly any information is available regarding the underlying neural correlates of single- and dual-task walking. We had 12 young adults (23.8 ± 2.8 years) walk while concurrently performing a cognitive interference (CI) or a motor interference (MI) task. Simultaneously, neural activation in frontal, central, and parietal brain areas was registered using a mobile EEG system. Results showed that the MI task but not the CI task affected walking performance in terms of significantly decreased gait velocity and stride length and significantly increased stride time and tempo-spatial variability. Average activity in alpha and beta frequencies was significantly modulated during both CI and MI walking conditions in frontal and central brain regions, indicating an increased cognitive load during dual-task walking. Our results suggest that impaired motor performance during dual-task walking is mirrored in neural activation patterns of the brain. This finding is in line with established cognitive theories arguing that dual-task situations overstrain cognitive capabilities, resulting in motor performance decrements.
Background: As the number of cardiac diseases has increased continuously in recent years, so has the demand for cardiac treatment, especially cardiac catheterization. The procedure of a cardiac catheterization is challenging for both patients and practitioners. Several potential stressors of a psychological or physical nature can occur during the procedure. The objective of this study is to develop and implement a stress management intervention for both practitioners and patients that aims to reduce the psychological and physical strain of a cardiac catheterization.
Methods: The clinical study (DRKS00026624) includes two randomized controlled intervention trials with parallel groups, for patients undergoing elective cardiac catheterization and for practitioners at the catheterization lab, at two clinic sites of the Ernst-von-Bergmann clinic network in Brandenburg, Germany. Both groups receive different interventions for stress management. The intervention for patients comprises a psychoeducational video presenting different stress management techniques, along with standardized medical information about the cardiac catheterization examination. The control condition consists of the medical patient education routinely practiced in hospitals before the examination (usual care). Primary and secondary outcomes are measured by physiological parameters and validated questionnaires the day before (M1) and after (M2) the cardiac catheterization, and at a postal follow-up 6 months later (M3). It is expected that patients who receive standardized information and psychoeducation will show fewer complications during cardiac catheterization procedures; better pre- and post-operative wellbeing, regeneration, and mood; and lower stress levels over time. The intervention for practitioners includes a mindfulness-based stress reduction (MBSR) program over 8 weeks, supervised by an experienced MBSR practitioner directly at the clinic site, and an operative guideline. It is expected that practitioners receiving the intervention will show reduced perceived and chronic stress and improved occupational health, physical and mental function, effort-reward balance, regeneration, and quality of life. Primary and secondary outcomes are measured by physiological parameters (heart rate variability, saliva cortisol) and validated questionnaires, and will be assessed before (M1) and after (M2) the MBSR intervention and at a postal follow-up 6 months later (M3). Physiological biomarkers in practitioners will be assessed before (M1) and after the intervention (M2) on two work days and two days off.
Intervention effects in the two groups (practitioners and patients) will be evaluated separately using multivariate analysis of variance.
Discussion: This study evaluates the effectiveness of two stress management intervention programs for patients and practitioners in the cardiac catheterization laboratory. The study will identify strains during a cardiac catheterization that affect both patients and practitioners. For practitioners, it may contribute to improved working conditions and occupational safety, preservation of earning capacity, and avoidance of participation restrictions and loss of performance. In both groups, less anxiety and stress and fewer complications before and during the procedures can be expected. The study may add knowledge on how to reduce stressful exposures and contribute to greater (psychological) safety and less output loss and exhaustion at work. The resulting stress management guidelines, training manuals, and standardized patient education should be transferred into clinical routines.
Emotions are a complex concept, and they are present in our everyday life. Persons on the autism spectrum are said to have difficulties in social interactions, showing deficits in emotion recognition compared to neurotypically developed persons. However, social-emotional skills are believed to be positively augmented by training. A new adaptive social cognition training tool, "E.V.A.", is introduced, which teaches emotion recognition from face, voice, and body language. One cross-sectional and one longitudinal study with adult neurotypical and autistic participants were conducted. The aim of the cross-sectional study was to characterize the two groups and determine whether differences in their social-emotional skills exist. The longitudinal study, on the other hand, aimed to detect possible training effects following training with the new tool. In addition, in both studies usability assessments were conducted to investigate the perceived usability of the new tool for neurotypical as well as autistic participants. Differences were found between autistic and neurotypical participants in their social-emotional and emotion recognition abilities. Training effects for neurotypical participants in an emotion recognition task were found after two weeks of home training. Similar perceived usability was found for the neurotypical and autistic participants. The current findings suggest that persons with ASC do not have a general deficit in emotion recognition but need more time to correctly recognize emotions. In addition, the findings suggest that training emotion recognition abilities is possible. Further studies are needed to verify whether the training effects found for neurotypical participants also manifest in a larger ASC sample.
The concurrent performance of cognitive and postural tasks is particularly impaired in old adults and associated with an increased risk of falls. Biological aging of the cognitive and postural control systems appears to be responsible for increased cognitive-motor interference effects. We examined neural and behavioral markers of motor-cognitive dual-task performance in young and old adults performing spatial one-back working memory single and dual tasks during semitandem stance. On the neural level, we used EEG to test for age-related modulations in the frequency domain related to cognitive-postural task load. Twenty-eight healthy young and 30 old adults participated in this study. The tasks included a postural single task, a cognitive-postural dual task, and a cognitive-postural triple task (cognitive dual task with postural demands). Postural sway (i.e., total center of pressure displacements) was recorded in semitandem stance on an unstable surface placed on top of a force plate while performing the cognitive tasks. Neural activation was recorded using a 64-channel mobile EEG system. EEG band power was referenced to the baseline postural single-task condition and analyzed in nine regions of interest (ROIs): anterior, central, and posterior, over the cortical midline and both hemispheres. Our findings revealed impaired cognitive dual-task performance in old compared to young participants in the form of significantly lower cognitive performance in the triple-task condition. Furthermore, old adults showed significantly larger postural sway than young adults, especially in cognitive-postural task conditions. With respect to EEG frequencies, young compared to old participants showed significantly lower alpha-band activity in the cognitive-postural triple-task condition compared with the cognitive-postural dual task. In addition, with increasing task difficulty, we observed synchronized theta and delta frequencies, irrespective of age.
Task-dependent alterations of the alpha frequency band were most pronounced over frontal and central ROIs, while alterations of the theta and delta frequency bands were found in frontal, central, and posterior ROIs. Theta and delta synchronization decreased from anterior to posterior regions. For old adults, task difficulty was reflected by theta synchronization in the posterior ROI; for young adults, it was reflected by alpha desynchronization in bilateral anterior ROIs. In addition, we could not identify any effects of task difficulty or age on the beta frequency band. Our results shed light on age-related cognitive and postural declines and how they interact. Modulated alpha frequencies during high cognitive-postural task demands in young but not old adults might reflect a constrained neural adaptive potential in old adults. Future studies are needed to elucidate associations between the identified age-related performance decrements, task difficulty, and changes in brain activity.
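Baseline-referenced synchronization and desynchronization measures of the kind used in such EEG analyses are commonly expressed as relative band-power change against a reference condition. A minimal sketch with invented per-ROI power values (not the study's data; the function name and band values are illustrative):

```python
import numpy as np

def erd_percent(task_power, baseline_power):
    """Relative band-power change vs. baseline, in percent.
    Negative values indicate desynchronization (power decrease),
    positive values indicate synchronization (power increase)."""
    task_power = np.asarray(task_power, dtype=float)
    baseline_power = np.asarray(baseline_power, dtype=float)
    return (task_power - baseline_power) / baseline_power * 100.0

# Hypothetical alpha power per ROI: postural single-task baseline vs. triple task
baseline = np.array([10.0, 12.0, 9.0])   # anterior, central, posterior ROI
triple = np.array([8.0, 9.0, 9.5])
erd = erd_percent(triple, baseline)       # e.g. -20% alpha desynchronization anteriorly
```

In this toy example, the anterior ROI shows alpha desynchronization under load while the posterior ROI shows a slight synchronization, mirroring the kind of regional pattern the abstract describes.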
Continuous treatment with antidementia drugs in Germany 2003-2013: a retrospective database analysis
(2015)
Background: Continuous treatment is an important indicator of medication adherence in dementia. However, long-term studies in larger clinical settings are lacking, and little is known about moderating effects of patient and service characteristics.
Methods: Data from 12,910 outpatients with dementia (mean age 79.2 years; SD = 7.6 years) treated between January 2003 and December 2013 in Germany were included. Continuous treatment was analysed using Kaplan-Meier curves and log-rank tests. In addition, multivariate Cox regression models were fitted with continuous treatment as dependent variable and the predictors antidementia agent, age, gender, medical comorbidities, physician specialty, and health insurance status.
Results: After one year of follow-up, nearly 60% of patients continued drug treatment. Donepezil (HR: 0.88; 95% CI: 0.82-0.95) and memantine (HR: 0.85; 0.79-0.91) patients were less likely to discontinue treatment compared to rivastigmine users. Patients were also less likely to discontinue if they were treated by specialist physicians rather than general practitioners (HR: 0.44; 0.41-0.48). Younger male patients and patients with private health insurance had a lower discontinuation risk. Regarding comorbidity, patients were more likely to be continuously treated with the index substance if heart failure or hypertension had been diagnosed at baseline.
Conclusions: Our results imply that besides type of antidementia agent, involvement of a specialist in the complex process of prescribing antidementia drugs can provide meaningful benefits to patients, in terms of more disease-specific and continuous treatment.
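The Kaplan-Meier curves used in this analysis follow the standard product-limit estimator of continuation probability. A minimal sketch with an invented toy cohort (the helper name, follow-up times, and censoring flags are illustrative, not the study's data):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up duration (e.g. months); events: 1 = discontinued, 0 = censored.
    Returns a list of (time, survival probability) pairs."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    # Sort by time; at tied times, process events before censorings (standard KM)
    order = np.lexsort((1 - events, times))
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in zip(times, events):
        if e:  # discontinuation event: multiply survival by (1 - 1/n_at_risk)
            surv *= 1.0 - 1.0 / n_at_risk
        curve.append((t, surv))
        n_at_risk -= 1
    return curve

# Toy cohort: months until discontinuation (event=0 means still treated at last visit)
curve = kaplan_meier([2, 5, 5, 8, 12], [1, 1, 0, 1, 0])
```

Censored observations leave the curve unchanged but shrink the risk set, which is exactly why the estimate differs from a naive proportion of discontinuers.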
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. However, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.
Recent research has indicated that university students sometimes use caffeine pills for neuroenhancement (NE; non-medical use of psychoactive substances or technology to produce a subjective enhancement in psychological functioning and experience), especially during exam preparation. In our factorial survey experiment, we manipulated the evidence participants were given about the prevalence of NE amongst peers and measured the resulting effects on the psychological predictors included in the Prototype-Willingness Model of risk behavior. Two hundred and thirty-one university students were randomized to a high prevalence condition (read faked research results overstating usage of caffeine pills amongst peers by a factor of 5; 50%), low prevalence condition (half the estimated prevalence; 5%) or control condition (no information about peer prevalence). Structural equation modeling confirmed that our participants’ willingness and intention to use caffeine pills in the next exam period could be explained by their past use of neuroenhancers, attitude to NE and subjective norm about use of caffeine pills whilst image of the typical user was a much less important factor. Provision of inaccurate information about prevalence reduced the predictive power of attitude with respect to willingness by 40-45%. This may be because receiving information about peer prevalence which does not fit with their perception of the social norm causes people to question their attitude. Prevalence information might exert a deterrent effect on NE via the attitude-willingness association. We argue that research into NE and deterrence of associated risk behaviors should be informed by psychological theory.
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: We conducted a systematic review of methodological quality of current research according to PRISMA guidelines using Pubmed and Web of Science, of articles on PA behavior change that were published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk of bias assessment. Evidence mapping is used for better visualization of the main results. Conclusions about the significance of published articles are based on hypotheses on PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, there were 36% opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior, often in small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures using a repeated-measures design (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
I Can See It in Your Face.
(2019)
The purpose of this study was to illustrate that people's affective valuation of exercise can be identified in their faces. The study was conducted with software for automatic facial expression analysis and involved testing the hypothesis that positive or negative affective valuation occurs spontaneously when people are reminded of exercise. We created a task similar to an emotional Stroop task, in which participants responded to exercise-related and control stimuli with a positive or negative facial expression (smile or frown) depending on whether the photo was presented upright or tilted. We further asked participants how much time they would normally spend on physical exercise, because we assumed that the affective valuation of those who exercise more would be more positive. Based on the data of 86 participants, regression analysis revealed that those who reported less exercise and a more negative reflective evaluation of exercise initiated negative facial expressions on exercise-related stimuli significantly faster than those who reported exercising more often. No significant effect was observed for smile responses. We suspect that responding with a smile to exercise-related stimuli was the congruent response for the majority of our participants, so that for them no Stroop interference occurred in the exercise-related condition. This study suggests that immediate negative affective reactions to exercise-related stimuli result from a postconscious automatic process and can be detected in the study participants' faces. It furthermore illustrates how methodological paradigms from social-cognition research (here: the emotional Stroop paradigm) can be adapted to collect and analyze biometric data for the investigation of exercisers' and non-exercisers' automatic valuations of exercise.
Stress levels experienced by school-aged elite athletes are pronounced, but data on their mental health status are widely lacking. In our study, we examined self-reported psychological symptoms and chronic mood. Data from a representative sample of 866 elite student-athletes (aged 12-15 years), enrolled in high-performance sport programming at German Elite Schools of Sport, were compared with data from 80 student-athletes from the same schools who had just been deselected from elite sport promotion, and from 432 age- and sex-matched non-sport students from regular schools (without such programming). Anxiety symptoms were least prevalent in female elite student-athletes. In male elite student-athletes, only symptoms of posttraumatic stress were less prevalent than in the other groups. Somatoform symptoms were generally more frequent in athletes, a trend that was significantly pronounced in deselected athletes. Deselected athletes showed an increased risk for psychological symptoms compared with both other groups. Regarding chronic mood, deselected athletes again showed less positive scores. While there was a trend toward high-performance sport being associated with better psychological health, at least in girls, preventative programs should take into account that deselection from elite sport programming may be associated with specific risks for mental disorders.
Background: We assessed the effects of sex, in association with a four-week small-sided games (SSG) training program during Ramadan intermittent fasting (RIF), on changes in psychometric and physiological markers in professional male and female basketball players.
Methods: Twenty-four professional basketball players from the first Tunisian division participated in this study. The players were dichotomized by sex (males [GM = 12]; females [GF = 12]). Both groups completed a four-week SSG training program with three sessions per week. Psychometric (e.g., quality of sleep, fatigue, stress, and delayed onset muscle soreness [DOMS]) and physiological parameters (e.g., heart rate, blood lactate) were measured during the first week (baseline) and at the end of RIF (post-test).
Results: Post hoc tests showed a significant increase in stress levels in both groups (GM [−81.11%; p < 0.001, d = 0.33, small]; GF [−36.53%; p = 0.001, d = 0.25, small]). Concerning physiological parameters, ANCOVA revealed significantly lower heart rates in favor of GM at post-test (1.70%, d = 0.38, small, p = 0.002).
Conclusions: Our results showed that SSG training at the end of RIF negatively impacted psychometric parameters of male and female basketball players. It can be concluded that there are sex-mediated effects of training during RIF in basketball players, and this should be considered by researchers and practitioners when programming training during RIF.
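The percent changes and Cohen's d values reported in abstracts like the one above can be illustrated with a small, self-contained sketch. The numbers below are invented, and the pooled-standard-deviation formulation of Cohen's d is only one common variant.

```python
import numpy as np

def cohens_d(pre, post):
    """Cohen's d for two samples using the pooled standard deviation."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n1, n2 = len(pre), len(post)
    pooled_sd = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
                        / (n1 + n2 - 2))
    return (post.mean() - pre.mean()) / pooled_sd

def percent_change(pre, post):
    """Mean change from baseline, expressed in percent of the baseline mean."""
    return 100 * (np.mean(post) - np.mean(pre)) / np.mean(pre)

baseline = [20, 22, 19, 24, 21, 23]   # invented stress scores, week 1
posttest = [30, 33, 29, 35, 31, 34]   # invented stress scores, end of RIF

print(f"change = {percent_change(baseline, posttest):+.1f}%, "
      f"d = {cohens_d(baseline, posttest):.2f}")
```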
Users of violent media often admit that they consume fictional violent media, yet at the same time claim that this does not influence their behavior outside the media context. They argue that they can easily distinguish between things learned in a fictional context and things learned in reality. In contrast to these claims, meta-analyses show medium-sized effects for the relationship between violent media consumption and aggressive behavior. These findings can only be explained if media users also apply violent learning experiences outside the media context. One process that links learning experiences within the media context to real-world behavior is desensitization, which is often defined as a reduction of negative affect toward violence. Four experiments were conducted to examine the desensitization process. The first hypothesis examined in this work was that the more frequently people consume violent media, the less negative affect they show toward images of real violence. However, this evaluation was assumed to be restricted to depictions of real violence and not to extend to non-violent images that elicit negative affect. The second hypothesis concerned affect during the consumption of media violence: it was assumed that especially people who enjoy violence in the media show less negative affect toward depictions of real violence. The final hypothesis dealt with cognitive desensitization and predicted that violent media consumption leads to a transfer of responses normally shown toward violent stimuli onto originally neutral stimuli.
The first experiment (N = 57) examined whether the habitual use of violent media predicts self-reported affect (valence and arousal) toward depictions of real violence and non-violent depictions that elicit negative affect. Habitual use of violent media predicted less negative valence and less general arousal toward both the violent and the non-violent images. The second experiment (N = 103) also examined the relationship between habitual violent media consumption and affective reactions toward images of real violence and negative affect-eliciting images. Affect while viewing violent media was added as a further predictor. Affect toward the images was additionally assessed with psychophysiological measures (valence: corrugator supercilii; arousal: skin conductance response). As before, habitual violent media consumption predicted less self-reported arousal and less negative valence for the violent and the negative non-violent images. The physiological measures replicated this result. However, a different pattern emerged for affect during the consumption of media violence. People who enjoyed media violence more showed reduced responsivity toward violence on all four measures. Moreover, on three of these four measures (self-reported valence, corrugator supercilii activity, and skin conductance response) this relationship was restricted to the violent images, with no or only a small effect on the negative but non-violent images. The third experiment (N = 73) examined affect while participants played a computer game. The game was programmed specifically for this experiment so that individual in-game actions could be related to the activity of the corrugator supercilii, the indicator of negative affect. The analysis of the corrugator supercilii showed that repeatedly performing aggressive moves led to a decline in the negative affect accompanying the aggressive game actions. Negative affect during violent moves in turn predicted the affective reaction toward violent images, but not toward the negative images. The fourth experiment (N = 77) examined cognitive desensitization, involving the development of associations between neutral and aggressive cognitions. Participants played a first-person shooter on either a ship level or a city level. The relationship between the neutral constructs (ship/city) and aggressive cognitions was measured with a lexical decision task. Playing the ship/city level led to shorter reaction times for aggressive words when they followed a ship or city prime, respectively. This showed that the neutral concepts contained in the game become linked to aggressive nodes. The results of these four experiments were discussed within a learning-theory framework for conceptualizing desensitization.
Increased Achilles tendon (AT) and patellar tendon (PT) thickness in adolescent athletes compared to non-athletes has been shown. However, it is unclear whether these changes are of pathological or physiological, training-induced origin. The aim of this study was to determine physiological AT and PT thickness adaptation in adolescent elite athletes compared to non-athletes, considering sex and sport. In a longitudinal study design with two measurement days (M1/M2) within an interval of 3.2 ± 0.8 years, 131 healthy adolescent elite athletes (m/f: 90/41) from 13 different sports and 24 recreationally active controls (m/f: 6/18) were included. Both ATs and PTs were measured at standardized reference points. Athletes were divided into 4 sport categories [ball (B), combat (C), endurance (E) and explosive strength sports (S)]. Descriptive analysis (mean ± SD) and statistical testing for group differences were performed (α = 0.05). AT thickness did not differ significantly between measurement days, neither in athletes (5.6 ± 0.7 mm/5.6 ± 0.7 mm) nor in controls (4.8 ± 0.4 mm/4.9 ± 0.5 mm, p > 0.05). For PTs, athletes presented increased thickness at M2 (M1: 3.5 ± 0.5 mm, M2: 3.8 ± 0.5 mm, p < 0.001). In general, males had thicker ATs and PTs than females (p < 0.05). Considering sex and sports, only male athletes from B, C, and S showed significantly higher PT thickness at M2 compared to controls (p ≤ 0.01). Sport-specific adaptation of tendon thickness in adolescent elite athletes can be detected in the PTs of male athletes participating in certain sports with high repetitive jumping and strength components. Sonographic microstructural analysis might provide an enhanced insight into tendon material properties, enabling the differentiation of sex and the influence of different sports.
Aim: The aim of the study was to identify common orthopedic sports injury profiles in adolescent elite athletes with respect to age, sex, and anthropometrics.
Methods: A retrospective analysis of 718 orthopedic presentations to a sports medicine department by 381 adolescent elite athletes from 16 different sports was performed. Recorded data from history and clinical examination included the area, cause, and structure of acute and overuse injuries. Injury events were analyzed in the whole cohort and stratified by age (11–14/15–17 years) and sex. Group differences were tested by chi-squared tests. Logistic regression analysis was applied to examine the influence of the factors age, sex, and body mass index (BMI) on the outcome variables area and structure (α = 0.05).
Results: Higher proportions of injury events were reported for females (60%) and athletes of the older age group (66%) than for males and younger athletes. The most frequently injured area was the lower extremity (47%), followed by the spine (30.5%) and the upper extremity (12.5%). Acute injuries were mainly located at the lower extremity (74.5%), while overuse injuries were predominantly observed at the lower extremity (41%) as well as the spine (36.5%). Joints (34%), muscles (22%), and tendons (21.5%) were the most often affected structures. The injured structures differed between the age groups (p = 0.022), with the older age group presenting three times more frequently with ligament pathology events (5.5%/2%) and less frequently with bony problems (11%/20.5%) than athletes of the younger age group. The injured area differed between the sexes (p = 0.005), with males having fewer spine injury events (25.5%/34%) but more upper extremity injuries (18%/9%) than females. Regression analysis showed a statistically significant influence of BMI (p = 0.002) and age (p = 0.015) on structure, whereas area was significantly influenced by sex (p = 0.005).
Conclusion: Soft-tissue overuse injuries are the most common reasons for orthopedic presentations of adolescent elite athletes. The lower extremity and the spine are mostly affected, and the effects of sex and age on the affected area and structure must be considered. Therefore, prevention strategies addressing these injury profiles should already be implemented in early adolescence, taking age, sex, and injury entity into account.
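The chi-squared tests for group differences mentioned in the methods above can be sketched as follows. The contingency table is invented for illustration, and only the test statistic and degrees of freedom are computed, without a p-value.

```python
import numpy as np

# Illustrative 2x3 contingency table: rows = sex, columns = injured area
# (lower extremity / spine / upper extremity). Counts are made up.
table = np.array([[50, 30, 20],    # males
                  [80, 90, 25]])   # females

# Expected counts under independence: (row total * column total) / grand total.
row_tot = table.sum(axis=1, keepdims=True)
col_tot = table.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / table.sum()

# Pearson chi-squared statistic and its degrees of freedom.
chi2 = ((table - expected) ** 2 / expected).sum()
dof = (table.shape[0] - 1) * (table.shape[1] - 1)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

In practice a library routine such as `scipy.stats.chi2_contingency` would also return the p-value.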
The effects of static stretching (StS) on subsequent strength and power activities have been one of the most debated topics in the sport science literature over the past decades. The aim of this review is (1) to summarize previous and current findings on the acute effects of StS on muscle strength and power performances; (2) to update readers’ knowledge related to previous caveats; and (3) to discuss the underlying physiological mechanisms of short-duration StS when performed as a single-mode treatment or when integrated into a full warm-up routine. Over the last two decades, StS has been considered harmful to subsequent strength and power performances. Accordingly, it has been recommended not to apply StS before strength- and power-related activities. More recent evidence suggests that, when performed as a single-mode treatment or when integrated within a full warm-up routine including aerobic activity, dynamic stretching, and sport-specific activities, short-duration StS (≤60 s per muscle group) only trivially impairs subsequent strength and power activities (∆1–2%). Yet, longer StS durations (>60 s per muscle group) appear to induce substantial and practically relevant declines in strength and power performances (∆4.0–7.5%). Moreover, recent evidence suggests that, when included in a full warm-up routine, short-duration StS may even contribute to lowering the risk of sustaining musculotendinous injuries, especially during high-intensity activities (e.g., sprint running and change-of-direction speed). During short-duration StS, in contrast to long-duration StS, neuromuscular activation and musculotendinous stiffness appear not to be affected. Among other factors, this could be due to an elevated muscle temperature induced by a dynamic warm-up program. More specifically, elevated muscle temperature leads to increased muscle fiber conduction velocity and improved binding of the contractile proteins (actin, myosin).
Therefore, our previous understanding of harmful StS effects on subsequent strength and power activities has to be updated. In fact, short-duration StS should be included as an important warm-up component before recreational sports activities due to its potential positive effect on flexibility and musculotendinous injury prevention. However, in high-performance athletes, short-duration StS has to be applied with caution due to its negligible but still prevalent negative effects on subsequent strength and power performances, which could have an impact on performance during competition.
Although a relatively large number of studies on acquired language impairments have tested the case of derivational morphology, none of these have specifically investigated whether there are differences in how prefixed and suffixed derived words are impaired. Based on linguistic and psycholinguistic considerations on prefixed and suffixed derived words, differences in how these two types of derivations are processed, and consequently impaired, are predicted. In the present study, we investigated the errors produced in reading aloud simple, prefixed, and suffixed words by three German individuals with agrammatic aphasia (NN, LG, SA). We found that, while NN and LG produced similar numbers of errors with prefixed and suffixed words, SA showed a selective impairment for prefixed words. Furthermore, NN and SA produced more errors specifically involving the affix with prefixed words than with suffixed words. We discuss our findings in terms of relative position of stem and affix in prefixed and suffixed words, as well as in terms of specific properties of prefixes and suffixes.
This article first outlines different ways of how psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and morphological variability across languages for psycholinguistic research. The specific phenomena to be examined are to do with stem-formation morphology and inflectional classes; they illustrate how experimental research that is informed by linguistic typology can lead to new insights.
This study aimed to compare the training load of a professional under-19 soccer team (U-19) with that of an elite adult team (EAT) from the same club during the in-season period. Thirty-nine healthy soccer players (EAT [n = 20]; U-19 [n = 19]) were involved in the study, which spanned four weeks. Training load (TL) was monitored as external TL, using a global positioning system (GPS), and internal TL, using the rating of perceived exertion (RPE). TL data were recorded after each training session. During soccer matches, players’ RPEs were recorded. The internal TL was quantified daily by means of the session rating of perceived exertion (session-RPE) using Borg’s 0–10 scale. For GPS data, the selected running speed intensities (over 0.5 s time intervals) were 12–15.9 km/h; 16–19.9 km/h; 20–24.9 km/h; and >25 km/h (sprint). Distances covered at 16–19.9 km/h, >20 km/h, and >25 km/h were significantly higher in U-19 compared with EAT over the course of the study (p = 0.023, d = 0.243, small; p = 0.016, d = 0.298, small; and p = 0.001, d = 0.564, small, respectively). EAT players performed significantly fewer sprints per week than U-19 players (p = 0.002, d = 0.526, small). RPE was significantly higher in U-19 compared with EAT (p = 0.001, d = 0.188, trivial). The external and internal measures of TL were significantly higher in the U-19 group than in the EAT soccer players. In conclusion, the results obtained show that the training load is greater in U-19 than in EAT.
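Binning GPS samples into the speed zones listed above can be sketched as follows. The zone boundaries and sampling interval come from the abstract; the speed samples and function name are invented for illustration.

```python
# Speed zones in km/h from the abstract; samples are taken every 0.5 s.
ZONES = [(12.0, 16.0, "12-15.9"), (16.0, 20.0, "16-19.9"),
         (20.0, 25.0, "20-24.9"), (25.0, float("inf"), "sprint >25")]
SAMPLE_S = 0.5  # sampling interval in seconds

def distance_per_zone(speeds_kmh):
    """Distance (m) covered in each zone: speed (m/s) x sample interval.
    Samples below 12 km/h fall outside all zones and are ignored."""
    dist = {label: 0.0 for *_, label in ZONES}
    for v in speeds_kmh:
        for lo, hi, label in ZONES:
            if lo <= v < hi:
                dist[label] += (v / 3.6) * SAMPLE_S
                break
    return dist

# Invented sequence of instantaneous speeds (km/h):
print(distance_per_zone([10.2, 13.5, 17.8, 21.0, 26.4, 18.1]))
```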
We aimed at unveiling the role of executive functions (EFs) and language-related skills in spelling for mono- versus multilingual primary school children. We focused on EFs and language-related skills, in particular lexicon size and phonological awareness (PA), because these factors were found to predict spelling in studies predominantly conducted with monolinguals, and because multilingualism can modulate these factors. There is evidence for (a) a bilingual advantage in EF due to constant high cognitive demands through language control, (b) a smaller mental lexicon in German, and (c) possibly better PA. Multilinguals in Germany show on average poorer German language proficiency, which can negatively influence performance on language-based tasks. Thus, we included two spelling tasks to tease apart spelling based on lexical knowledge (i.e., word spelling) from spelling based on non-lexical strategies (i.e., non-word spelling). Our sample consisted of heterogeneous third graders from Germany: 69 monolinguals (age: M = 108 months) and 57 multilinguals (age: M = 111 months). On less language-dependent tasks (e.g., non-word spelling, PA, intelligence, short-term memory (STM), and three EF tasks testing switching, inhibition, and working memory), the performance of both groups did not differ significantly. However, multilinguals performed significantly more poorly than monolinguals on tasks measuring German lexicon size and word spelling. Regression analyses revealed that for multilinguals, inhibition was related to spelling, whereas switching was the only EF component to influence word spelling in monolinguals and non-word spelling performance in both groups. By adding lexicon size and other language-related factors to the regression models, the influence of switching was reduced to insignificant effects, but inhibition remained significant for multilinguals.
Language-related skills best predicted spelling, and both language groups shared those variables: PA for word spelling, and STM for non-word spelling. Additionally, multilinguals’ word spelling performance was also predicted by their German lexicon size, and their non-word spelling performance by PA. This study offers an in-depth look at spelling acquisition at a certain point of literacy development. Mono- and multilinguals have the predominant factors for spelling in common, but, probably due to superior language knowledge, monolinguals were already able to make use of EFs during spelling. For multilinguals, German lexicon size was more important for spelling than EFs, and these functions might come into play only at a later stage.
Background: A prominent model of semantic processing in modern cognitive psychology proposes that semantic memory originates in everyday life experience with concrete objects such as plants, animals, and tools (Martin & Chao, 2001). When the meaning of a concrete content word is being acquired, the learner is confronted with stimuli of various modalities related to the word's meaning. This comes to be stored as sensory knowledge about the object. It is further postulated that there is a conceptual domain remote from the mechanisms of perception, which is often referred to as functional knowledge or verbal semantics. There is a large body of neuropsychological literature trying to establish how much sensory and functional semantics is needed to access a name, and whether the relative contribution of these types of knowledge is the same for all categories of objects. Another controversial issue is whether naming requires access to semantic knowledge, or whether object names can be accessed directly from vision without the intervention of semantics, as is generally accepted for written word naming. Some support for this assumption seems to come from cases of so-called non-optic aphasia, a condition in which patients can name from visual presentation only but not from any other modality of presentation such as auditory, verbal, tactile, etc. In optic aphasia, a condition far better established, naming is possible from all modalities except vision. Aims: The aim of this paper is to draw attention to the first case description of non-optic or negative optic aphasia described by Wolff (1897, 1904). Methods & Procedures: The case describes the results of a re-examination of Voit, a patient who was seen by several neurologists of classical aphasiology over the course of a decade. The patient demonstrated anomia in oral but not in written naming of objects in view.
Wolff's examination involves extensive testing of semantic processing in several modalities, especially with respect to the status of functional and sensory semantic features. Outcomes & Results: The re-examination of patient Voit by Wolff in 1897 with new procedures revealed a specific impairment in processing sensory knowledge, while functional knowledge of objects was relatively preserved. This led to a naming impairment in all modalities of presentation except the visual one. Using more refined tasks, Wolff also demonstrated receptive impairments, in contrast to previous researchers who had concluded that the impairment was restricted to oral production. Conclusions: Although Wolff's (1904) case of negative optic aphasia has been almost completely forgotten (but see Bartels & Wallesch, 1996), it is astonishingly modern in its conceptual approach and in the central questions it addresses on the mechanisms involved in the process of naming and on the structure of the semantic system. As is usual in classical cases, the methodology may appear less stringent than in most contemporary work, but the approach was brilliant.
Background
Isometric muscle actions can be performed either by initiating the action, e.g., pulling on an immovable resistance (PIMA), or by reacting to an external load, e.g., holding a weight (HIMA). In the present study, we mainly examined whether these modalities can be differentiated by oxygenation variables as well as by time to task failure (TTF). Furthermore, we analyzed whether these variables are changed by intermittent voluntary muscle twitches during weight holding (Twitch). It was assumed that twitches during a weight holding task change the character of the isometric muscle action from reacting (≙ HIMA) to acting (≙ PIMA).
Methods
Twelve subjects (two dropouts) randomly performed two tasks (HIMA vs. PIMA or HIMA vs. Twitch, n = 5 each) with the elbow flexors at 60% of maximal torque maintained until muscle failure with each arm. Local capillary venous oxygen saturation (SvO2) and relative hemoglobin amount (rHb) were measured by light spectrometry.
Results
Within subjects, no significant differences were found between tasks regarding the behavior of SvO2 and rHb, the slope and extent of deoxygenation (max. SvO2 decrease), SvO2 level at global rHb minimum, and time to SvO2 steady states. The TTF was significantly longer during Twitch and PIMA (incl. Twitch) compared to HIMA (p = 0.043 and 0.047, respectively). There was no substantial correlation between TTF and maximal deoxygenation independently of the task (r = − 0.13).
Conclusions
HIMA and PIMA seem to have a similar microvascular oxygen and blood supply. The supply might be sufficient, as indicated by homeostatic steady states of SvO2 in all trials and increases in rHb in most of the trials. Intermittent voluntary muscle twitches might not provide further support but do extend the TTF. A changed neuromuscular control is discussed as a possible explanation.
Adaptive Force (AF) reflects the capability of the neuromuscular system to adapt adequately to external forces with the intention of maintaining a position or motion. One specific approach to assessing AF is to measure force and limb position during a pneumatically applied increasing external force. Through this method, the highest (AFmax), the maximal isometric (AFisomax) and the maximal eccentric Adaptive Force (AFeccmax) can be determined. The main question of the study was whether the AFisomax is a specific and independent parameter of muscle function compared to other maximal forces. In 13 healthy subjects (9 male and 4 female), the maximal voluntary isometric contraction (pre- and post-MVIC), the three AF parameters and the MVIC with a prior concentric contraction (MVICpri-con) of the elbow extensors were measured 4 times on two days. Arithmetic mean (M) and maximal (Max) torques of all force types were analyzed. Regarding the reliability of the AF parameters between days, the mean changes were 0.31–1.98 Nm (0.61%–5.47%, p = 0.175–0.552), the standard errors of measurement (SEM) were 1.29–5.68 Nm (2.53%–15.70%) and the ICCs(3,1) = 0.896–0.996. M and Max of AFisomax, AFmax and pre-MVIC correlated highly (r = 0.85–0.98). The M and Max of AFisomax were significantly lower (6.12–14.93 Nm; p ≤ 0.001–0.009) and more variable between trials (coefficients of variation (CVs) ≥ 21.95%) compared to those of pre-MVIC and AFmax (CVs ≤ 5.4%). The results suggest that the novel measuring procedure is suitable for reliably quantifying the AF, whereby the presented measurement errors should be taken into consideration. The AFisomax seems to reflect its own strength capacity and should be detected separately. It is suggested that its normalization to the MVIC or AFmax could serve as an indicator of neuromuscular function.
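The reliability indices reported above are linked by standard formulas: the SEM follows from the between-day standard deviation and the ICC, and the CV expresses trial-to-trial variability relative to the mean. A minimal sketch of that arithmetic, using illustrative numbers rather than the study's raw data:

```python
import math

def standard_error_of_measurement(sd: float, icc: float) -> float:
    """SEM = SD * sqrt(1 - ICC): the absolute-reliability index
    derived from the between-subject SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

def coefficient_of_variation(sd: float, mean: float) -> float:
    """CV (%) = 100 * SD / mean: relative trial-to-trial variability."""
    return 100.0 * sd / mean

# Illustrative values only (not the study's raw data):
sem = standard_error_of_measurement(sd=12.0, icc=0.95)
cv = coefficient_of_variation(sd=9.0, mean=41.0)
```

A high ICC thus shrinks the SEM toward zero, which is why the near-unity ICCs above coexist with small absolute measurement errors.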
Importance Alcohol consumption (AC) leads to death and disability worldwide. Ongoing discussions on potential negative effects of the COVID-19 pandemic on AC need to be informed by real-world evidence.
Objective To examine whether lockdown measures are associated with AC and consumption-related temporal and psychological within-person mechanisms.
Design, Setting, and Participants This quantitative, intensive, longitudinal cohort study recruited 1743 participants from 3 sites from February 20, 2020, to February 28, 2021. Data were provided before and within the second lockdown of the COVID-19 pandemic in Germany: before lockdown (October 2 to November 1, 2020); light lockdown (November 2 to December 15, 2020); and hard lockdown (December 16, 2020, to February 28, 2021).
Main Outcomes and Measures Daily ratings of AC (main outcome) captured during 3 lockdown phases (main variable) and temporal (weekends and holidays) and psychological (social isolation and drinking intention) correlates.
Results Of the 1743 screened participants, 189 (119 [63.0%] male; median [IQR] age, 37 [27.5-52.0] years) with at least 2 alcohol use disorder (AUD) criteria according to the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition) yet without the need for medically supervised alcohol withdrawal were included. These individuals provided 14 694 smartphone ratings from October 2020 through February 2021. Multilevel modeling revealed significantly higher AC (grams of alcohol per day) on weekend days vs weekdays (β = 11.39; 95% CI, 10.00-12.77; P < .001). Alcohol consumption was above the overall average on Christmas (β = 26.82; 95% CI, 21.87-31.77; P < .001) and New Year’s Eve (β = 66.88; 95% CI, 59.22-74.54; P < .001). During the hard lockdown, perceived social isolation was significantly higher (β = 0.12; 95% CI, 0.06-0.15; P < .001), but AC was significantly lower (β = −5.45; 95% CI, −8.00 to −2.90; P = .001). Independent of lockdown, intention to drink less alcohol was associated with lower AC (β = −11.10; 95% CI, −13.63 to −8.58; P < .001). Notably, differences in AC between weekend days and weekdays decreased both during the hard lockdown (β = −6.14; 95% CI, −9.96 to −2.31; P = .002) and in participants with severe AUD (β = −6.26; 95% CI, −10.18 to −2.34; P = .002).
Conclusions and Relevance This 5-month cohort study found no immediate negative associations of lockdown measures with overall AC. Rather, weekend-weekday and holiday AC patterns exceeded lockdown effects. Differences in AC between weekend days and weekdays evinced that weekend drinking cycles decreased as a function of AUD severity and lockdown measures, indicating a potential mechanism of losing and regaining control. This finding suggests that temporal patterns and drinking intention constitute promising targets for prevention and intervention, even in high-risk individuals.
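The weekend-weekday contrast central to the analysis above can be illustrated in simplified form. The sketch below computes the average within-person difference in daily consumption between weekend days and weekdays; the function name and ratings are hypothetical, and the study's actual multilevel model additionally includes holidays, lockdown phase, drinking intention, and random person effects:

```python
from statistics import mean

def weekend_weekday_contrast(daily):
    """Average within-person difference in daily alcohol (g/day) between
    weekend days and weekdays -- a simplified stand-in for the multilevel
    fixed effect reported in the study."""
    diffs = []
    for days in daily.values():
        weekend = [g for is_weekend, g in days if is_weekend]
        weekday = [g for is_weekend, g in days if not is_weekend]
        if weekend and weekday:  # need both day types for a within-person diff
            diffs.append(mean(weekend) - mean(weekday))
    return mean(diffs)

# Hypothetical smartphone ratings: (is_weekend, grams of alcohol per day)
ratings = {
    "p1": [(False, 10.0), (False, 12.0), (True, 24.0)],
    "p2": [(False, 0.0), (True, 8.0), (True, 12.0)],
}
contrast = weekend_weekday_contrast(ratings)  # within-person weekend excess
```

Averaging differences within each person before pooling is what separates this within-person contrast from a naive comparison of all weekend ratings against all weekday ratings.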
Objective: MicroRNAs are implicated in several biological and pathological processes. We investigated the effects of high-intensity interval training (HIIT) and moderate-intensity continuous training (MICT) on molecular markers of diabetic cardiomyopathy in rats.
Methods: Eighteen male Wistar rats (260 ± 10 g; aged 8 weeks) with streptozotocin (STZ)-induced type 1 diabetes mellitus (55 mg/kg, IP) were randomly allocated to three groups: control, MICT, and HIIT. The two different training protocols were performed 5 days each week for 5 weeks. Cardiac performance (end-systolic and end-diastolic dimensions, ejection fraction), the expression of miR-206, HSP60, and markers of apoptosis (cleaved PARP and cytochrome C) were determined at the end of the exercise interventions.
Results: Both exercise interventions (HIIT and MICT) decreased blood glucose levels and improved cardiac performance, with greater changes in the HIIT group (p < 0.001, η2: 0.909). While the expressions of miR-206 and apoptotic markers decreased in both training protocols (p < 0.001, η2: 0.967), HIIT caused greater reductions in apoptotic markers and produced a 20% greater reduction in miR-206 compared with the MICT protocol (p < 0.001). Furthermore, both training protocols enhanced the expression of HSP60 (p < 0.001, η2: 0.976), with a nearly 50% greater increase in the HIIT group compared with MICT.
Conclusions: Our results indicate that both exercise protocols, HIIT and MICT, have the potential to reduce diabetic cardiomyopathy by modifying the expression of miR-206 and its downstream targets of apoptosis. It seems, however, that HIIT is even more effective than MICT at modulating these molecular markers.
Aims: High-intensity interval training (HIIT) improves mitochondrial characteristics. This study compared the impact of two workload-matched HIIT protocols with different work:recovery ratios on regulatory factors related to mitochondrial biogenesis in the soleus muscle of diabetic rats.
Materials and methods: Twenty-four Wistar rats were randomly divided into four equal-sized groups: non-diabetic control, diabetic control (DC), diabetic with long recovery exercise [4–5 × 2-min running at 80%–90% of the maximum speed reached with 2-min of recovery at 40% of the maximum speed reached (DHIIT1:1)], and diabetic with short recovery exercise (5–6 × 2-min running at 80%–90% of the maximum speed reached with 1-min of recovery at 30% of the maximum speed reached [DHIIT2:1]). Both HIIT protocols were completed five times/week for 4 weeks while maintaining equal running distances in each session.
Results: Gene and protein expressions of PGC-1α, p53, and citrate synthase (CS) of the muscles increased significantly following DHIIT1:1 and DHIIT2:1 compared to DC (p < 0.05). Most parameters, except for PGC-1α protein (p = 0.597), were significantly higher in DHIIT2:1 than in DHIIT1:1 (p < 0.05). Both DHIIT groups showed significant increases in maximum speed, with larger increases in DHIIT2:1 compared with DHIIT1:1.
Conclusion: Our findings indicate that both HIIT protocols can potently up-regulate gene and protein expression of PGC-1α, p53, and CS. However, DHIIT2:1 has superior effects compared with DHIIT1:1 in improving mitochondrial adaptive responses in diabetic rats.
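The two protocols above are workload-matched by adjusting the number of intervals so that each session covers roughly the same running distance despite different work:recovery ratios. A minimal sketch of that bookkeeping, assuming a hypothetical maximum speed and using the speed fractions given above:

```python
def session_distance(intervals, work_min, work_frac, rec_min, rec_frac, vmax):
    """Running distance of one HIIT session: number of intervals times the
    work- and recovery-segment distances, with speeds given as fractions
    of the maximum speed vmax (m/min)."""
    return intervals * vmax * (work_min * work_frac + rec_min * rec_frac)

VMAX = 30.0  # hypothetical maximum speed in m/min (illustrative only)

# DHIIT1:1: 5 x (2 min at 85% + 2 min at 40% of vmax)
d_1to1 = session_distance(5, 2, 0.85, 2, 0.40, VMAX)
# DHIIT2:1: 6 x (2 min at 85% + 1 min at 30% of vmax)
d_2to1 = session_distance(6, 2, 0.85, 1, 0.30, VMAX)
```

Varying the interval count within the stated 4–5 vs. 5–6 ranges is what lets the experimenters bring the two session distances into agreement.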
Computer-aided dosage management of phenprocoumon anticoagulation therapy: clinical validation
(2014)
A recently developed multiparameter computer-aided expert system (TheMa) for guiding anticoagulation with phenprocoumon (PPC) was validated in a prospective investigation in 22 patients. The PPC-INR response curve resulting from physician-guided dosage was compared to INR values calculated by "twin calculation" from TheMa-recommended dosage. Additionally, TheMa was used to predict the optimal time to perform surgery or invasive procedures after interruption of anticoagulation therapy. Results: Comparison of physician- and TheMa-guided anticoagulation showed almost identical accuracy by three quantitative measures: polygon integration method (area around the INR target), 616.17 vs. 607.86; INR hits in the target range, 166 vs. 161; and TTR (time in therapeutic range), 63.91 vs. 62.40%. After discontinuation of anticoagulation therapy, predicting the time to an INR of 1.8 from the TheMa-calculated INR phase-out curve was possible with a standard deviation of 0.50 +/- 0.59 days. Conclusion: Guiding anticoagulation with TheMa was as accurate as physician-guided therapy. After interruption of anticoagulant therapy, TheMa may be used to calculate the optimal time for performing operations or initiating bridging therapy.
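TTR (time in therapeutic range) as reported above is commonly computed by linear interpolation between successive INR measurements (the Rosendaal method). The abstract does not state which variant TheMa uses, so the following is only an illustrative sketch with hypothetical data:

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Percent of time the linearly interpolated INR lies within
    [low, high] between the first and last measurement."""
    points = list(zip(days, inrs))
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(points, points[1:]):
        span = d1 - d0
        total += span
        # sample the interpolated INR on an hourly grid within the interval
        steps = max(1, int(span * 24))
        for k in range(steps):
            t = (k + 0.5) / steps  # midpoint of each sub-step
            if low <= i0 + t * (i1 - i0) <= high:
                in_range += span / steps
    return 100.0 * in_range / total

# Hypothetical INR series: stable at 2.5 for a week, then drifting to 3.5
ttr = ttr_rosendaal(days=[0, 7, 14], inrs=[2.5, 2.5, 3.5])
```

Interpolating between measurements credits partial intervals to the target range, which is why TTR differs from simply counting the fraction of in-range INR values.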
Background
Total hip or knee replacement is one of the most frequently performed surgical procedures. Physical rehabilitation following total hip or knee replacement is an essential part of the therapy to improve functional outcomes and quality of life. After discharge from inpatient rehabilitation, subsequent postoperative exercise therapy is needed to maintain functional mobility. Telerehabilitation is a potentially innovative treatment approach. We aim to investigate the superiority of an interactive telerehabilitation intervention for patients after total hip or knee replacement, in comparison to usual care, regarding physical performance, functional mobility, quality of life and pain.
Methods/design
This is an open, randomized controlled, multicenter superiority study with two prospective arms. One hundred and ten eligible and consenting participants with total knee or hip replacement will be recruited at admission to subsequent inpatient rehabilitation. After comprehensive, 3-week, inpatient rehabilitation, the intervention group performs a 3-month, interactive, home-based exercise training with a telerehabilitation system. For this purpose, the physiotherapist creates an individual training plan out of 38 different strength and balance exercises which were implemented in the system. Data about the quality and frequency of training are transmitted to the physiotherapist for further adjustment. Communication between patient and physiotherapist is possible with the system. The control group receives voluntary, usual aftercare programs. Baseline assessments are conducted after discharge from rehabilitation; final assessments, 3 months later. The primary outcome is the difference in improvement between intervention and control group in 6-minute walk distance after 3 months. Secondary outcomes include differences in the Timed Up and Go Test, the Five-Times-Sit-to-Stand Test, the Stair Ascend Test, the Short-Form 36, the Western Ontario and McMaster Universities Osteoarthritis Index, the International Physical Activity Questionnaire, and postural control as well as gait and kinematic parameters of the lower limbs. Baseline-adjusted analysis of covariance models will be used to test for group differences in the primary and secondary endpoints.
Discussion
We expect the intervention group to benefit from the interactive, home-based exercise training in many respects represented by the study endpoints. If successful, this approach could be used to enhance the access to aftercare programs, especially in structurally weak areas.
Background: Telerehabilitation can contribute to the maintenance of successful rehabilitation regardless of location and time.
Objective: The aim of the study was to investigate a specific three-month interactive telerehabilitation with regard to effectiveness in functioning and return to work compared to usual aftercare.
Methods: From August 2016 to December 2017, 111 patients (mean 54.9 years old; SD 6.8; 54.3% female) with hip or knee replacement were enrolled in the randomized controlled trial. At discharge from inpatient rehabilitation and after three months, their distance in the 6-minute walk test was assessed as the primary endpoint. Other functional parameters, including health related quality of life, pain, and time to return to work, were secondary endpoints.
Results: Patients in the intervention group performed telerehabilitation for an average of 55.0 minutes (SD 9.2) per week. Adherence was high, at over 75%, until the 7th week of the three-month intervention phase. Almost all the patients and therapists used the communication options. Both the intervention group (average difference 88.3 m; SD 57.7; P=.95) and the control group (average difference 79.6 m; SD 48.7; P=.95) increased their distance in the 6-minute walk test. Improvements in other functional parameters, as well as in quality of life and pain, were achieved in both groups. Of note, the proportion of patients who had returned to work was higher in the intervention group (64.6%) than in the control group (46.2%; P=.01).
Conclusions: The effect of the investigated telerehabilitation therapy in patients following knee or hip replacement was equivalent to the usual aftercare in terms of functional testing, quality of life, and pain. Since a significantly higher return-to-work rate could be achieved, this therapy might be a promising supplement to established aftercare.
Multicomponent cardiac rehabilitation in patients after transcatheter aortic valve implantation
(2017)
Background: In the last decade, transcatheter aortic valve implantation has become a promising treatment modality for patients with aortic stenosis and a high surgical risk. Little is known about influencing factors of function and quality of life during multicomponent cardiac rehabilitation. Methods: From October 2013 to July 2015, patients with elective transcatheter aortic valve implantation and a subsequent inpatient cardiac rehabilitation were enrolled in the prospective multicentre cohort study. Frailty-Index (including cognition, nutrition, autonomy and mobility), Short Form-12 (SF-12), six-minute walk distance (6MWD) and maximum work load in bicycle ergometry were performed at admission and discharge of cardiac rehabilitation. The relation between patient characteristics and improvements in 6MWD, maximum work load or SF-12 scales was studied univariately and multivariately using regression models. Results: One hundred and thirty-six patients (80.6 +/- 5.0 years, 47.8% male) were enrolled. 6MWD and maximum work load increased by 56.3 +/- 65.3 m (p < 0.001) and 8.0 +/- 14.9 watts (p < 0.001), respectively. An improvement in SF-12 (physical 2.5 +/- 8.7, p = 0.001; mental 3.4 +/- 10.2, p = 0.003) could be observed. In multivariate analysis, age and higher education were significantly associated with a reduced 6MWD, whereas cognition and obesity showed a positive predictive value. Higher cognition, nutrition and autonomy positively influenced the physical scale of SF-12. Additionally, the baseline values of SF-12 had an inverse impact on the change during cardiac rehabilitation. Conclusions: Cardiac rehabilitation can improve functional capacity as well as quality of life and reduce frailty in patients after transcatheter aortic valve implantation. An individually tailored therapy with special consideration of cognition and nutrition is needed to maintain autonomy and empower octogenarians in coping with challenges of everyday life.
Background
The aim of this study was to identify predictors of the allocation of patients after transcatheter aortic valve implantation (TAVI) to geriatric (GR) or cardiac rehabilitation (CR) and to characterize this new patient group in detail.
Methods
From 10/2013 to 07/2015, 344 patients with an elective TAVI were consecutively enrolled in this prospective multicentric cohort study. Before intervention, sociodemographic parameters, echocardiographic data, comorbidities, 6-min walk distance (6MWD), quality of life and frailty (score indexing activities of daily living [ADL], cognition, nutrition and mobility) were documented. Out of these, predictors for assignment to CR or GR after TAVI were identified using a multivariable regression model.
Results
After TAVI, 249 patients (80.7 ± 5.1 years, 59.0% female) underwent CR (n = 198) or GR (n = 51). GR patients were older, less physically active, and more often had an assigned care level, peripheral artery disease, and a lower left ventricular ejection fraction. The groups also differed in 6MWD. Furthermore, individual components of frailty revealed prognostic impact: higher scores in instrumental ADL reduced the probability of referral to GR (OR: 0.49, p < 0.001), while impaired mobility was positively associated with referral to GR (OR: 3.97, p = 0.046). Clinical parameters such as stroke (OR for GR: 0.19, p = 0.038) and the EuroSCORE (OR for GR: 1.04, p = 0.026) were also predictive.
Conclusion
Patients of advanced age referred to CR or GR after TAVI differ in several parameters and appear to be distinct patient groups with specific needs, e.g. regarding activities of daily living and mobility. Thus, our data support the suitability of both the CR and the GR setting.
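As a numerical illustration of how odds ratios like those reported above behave in a logistic regression, the sketch below converts coefficients (log odds ratios) into predicted probabilities. Only the two odds ratios (0.49 per instrumental-ADL point, 3.97 for impaired mobility) are taken from the abstract; the intercept and predictor values are invented for illustration.

```python
import math

def prob_from_logit(intercept, terms):
    """Logistic model prediction: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(beta * x for beta, x in terms)
    return 1.0 / (1.0 + math.exp(-z))

# Coefficients are log odds ratios; the intercept is a hypothetical value.
b0 = -1.5                    # invented baseline log-odds of GR referral
b_iadl = math.log(0.49)      # per additional instrumental-ADL point (OR 0.49)
b_mobility = math.log(3.97)  # impaired mobility, coded 0/1 (OR 3.97)

# Same instrumental-ADL score, with and without impaired mobility
p_mobile = prob_from_logit(b0, [(b_iadl, 2), (b_mobility, 0)])
p_impaired = prob_from_logit(b0, [(b_iadl, 2), (b_mobility, 1)])
```

Holding the other predictors fixed, an odds ratio below 1 lowers and an odds ratio above 1 raises the predicted probability of referral to GR, which is what the reported ORs express.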
Background
Maximal isokinetic strength ratios of joint flexors and extensors are important parameters to indicate the level of muscular balance at the joint. Further, in combat sports athletes, upper and lower limb muscle strength is affected by the type of sport. Thus, this study aimed to examine the differences in maximal isokinetic strength of the flexors and extensors and the corresponding flexor–extensor strength ratios of the elbows and knees in combat sports athletes.
Method
Forty male participants (age = 22.3 ± 2.5 years) from four different combat sports (amateur boxing, taekwondo, karate, and judo; n = 10 per sport) were tested for eccentric peak torque of the elbow/knee flexors (EF/KF) and concentric peak torque of the elbow/knee extensors (EE/KE) at three different angular velocities (60, 120, and 180°/s) on the dominant and non-dominant side using an isokinetic device.
Results
Analyses revealed significant, large-sized group × velocity × limb interactions for EF, EE, and EF–EE ratio, KF, KE, and KF–KE ratio (p ≤ 0.03; 0.91 ≤ d ≤ 1.75). Post-hoc analyses indicated that amateur boxers displayed the largest EE strength values on the non-dominant side at ≤ 120°/s and the dominant side at ≥ 120°/s (p < 0.03; 1.21 ≤ d ≤ 1.59). The largest EF–EE strength ratios were observed on amateur boxers’ and judokas’ non-dominant side at ≥ 120°/s (p < 0.04; 1.36 ≤ d ≤ 2.44). Further, we found lower KF–KE strength measures in karate (p < 0.04; 1.12 ≤ d ≤ 6.22) and judo athletes (p ≤ 0.03; 1.60 ≤ d ≤ 5.31) particularly on the non-dominant side.
Conclusions
The present findings indicated combat sport-specific differences in maximal isokinetic strength measures of EF, EE, KF, and KE particularly in favor of amateur boxers on the non-dominant side.
Purpose: To examine the effects of fatiguing isometric contractions on maximal eccentric strength and electromechanical delay (EMD) of the knee flexors in healthy young adults of different training status.
Methods: Seventy-five male participants (27.7 ± 5.0 years) were enrolled in this study and allocated to three experimental groups according to their training status: athletes (ATH, n = 25), physically active adults (ACT, n = 25), and sedentary participants (SED, n = 25). The fatigue protocol comprised intermittent isometric knee flexions (6-s contraction, 4-s rest) at 60% of the maximum voluntary contraction until failure. Pre- and post-fatigue, maximal eccentric knee flexor strength and EMDs of the biceps femoris, semimembranosus, and semitendinosus muscles were assessed during maximal eccentric knee flexor actions at 60, 180, and 300°/s angular velocity. An analysis of covariance was computed with baseline (unfatigued) data included as a covariate.
Results: Significant and large-sized main effects of group (p ≤ 0.017, 0.87 ≤ d ≤ 3.69) and/or angular velocity (p < 0.001, d = 1.81) were observed. Post hoc tests indicated that regardless of angular velocity, maximal eccentric knee flexor strength was lower and EMD was longer in SED compared with ATH and ACT (p ≤ 0.025, 0.76 ≤ d ≤ 1.82) and in ACT compared with ATH (p ≤ 0.025, 0.76 ≤ d ≤ 1.82). Additionally, EMD at post-test was significantly longer at 300°/s compared with 60 and 180°/s (p < 0.001, 2.95 ≤ d ≤ 4.64) and at 180°/s compared with 60°/s (p < 0.001, d = 2.56), irrespective of training status.
Conclusion: The main outcomes revealed significantly higher maximal eccentric strength and shorter eccentric EMDs of knee flexors in individuals with higher training status (i.e., athletes) following fatiguing exercises. Therefore, higher training status is associated with better neuromuscular functioning (i.e., strength, EMD) of the hamstring muscles in fatigued condition. Future longitudinal studies are needed to substantiate the clinical relevance of these findings.
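The d values reported in these abstracts are Cohen's d effect sizes. For two independent groups, d is the mean difference divided by the pooled standard deviation; a minimal sketch with invented strength values (not the study's data):

```python
import statistics

def cohens_d(a, b):
    """Cohen's d for two independent samples: mean difference / pooled SD."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances (n - 1)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Invented eccentric knee flexor peak torques (N·m) for two hypothetical groups
athletes = [148.0, 152.0, 150.0, 155.0, 149.0]
sedentary = [120.0, 125.0, 118.0, 122.0, 121.0]
d = cohens_d(athletes, sedentary)
```

By the usual convention, d ≈ 0.2 counts as a small, 0.5 as a medium, and 0.8 as a large effect, which is why values such as d = 2.56 above are described as large-sized.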
Regulatory focus is a motivational construct that describes humans’ motivational orientation during goal pursuit. It is conceptualized as a chronic, trait-like, as well as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary–Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideals and ought striving are the promotion and prevention dimension in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version to assess chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure’s factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
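Multilevel analyses of experience-sampling data like these typically separate within-person fluctuations from stable between-person differences via person-mean centering. A minimal sketch of that decomposition, using invented momentary ratings rather than the study's data:

```python
from statistics import mean

# Invented momentary promotion-focus ratings, keyed by participant
ratings = {
    "p1": [4.0, 5.0, 3.0, 4.0],
    "p2": [2.0, 2.5, 1.5],
}

def center_within_person(data):
    """Split each rating into a person mean (between-person part)
    and deviations from it (within-person fluctuations)."""
    out = {}
    for person, values in data.items():
        pm = mean(values)
        out[person] = {"person_mean": pm,
                       "deviations": [v - pm for v in values]}
    return out

centered = center_within_person(ratings)
# By construction, deviations sum to zero within each person,
# so the within-person predictor carries only momentary fluctuation.
```

The person means then enter the model at the between-person level and the deviations at the within-person level, which is what allows a multilevel model to test sensitivity to within-person fluctuations.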
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing taking place in the brain (cognitive processor). In contrast, embodied cognition approaches define cognition as functionally linked to perception and action, in the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt, depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis in three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain’s Asymmetric Frequency Tuning hypothesis, which relies on hemispheric asymmetries for processing spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals support the brain’s asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never accomplish independent motoric exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs reported by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orientation during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting based on the type of assessment support different mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Secondly, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, the development, and the flexibility of our attitude to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, I propose here a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. The new perspective advances current approaches to SNAs at both the conceptual and the methodological level.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
Commentary
(2020)
While children acquire new words and simple sentence structures extremely fast and without much effort, the ability to process complex sentences develops rather late in life. Although the conjoint occurrence of brain-structural and brain-functional changes, the decrease in plasticity, and changes in cognitive abilities suggests a certain causality between these processes, concrete evidence for the relation between brain development, language processing, and language performance is rare. Therefore, the current dissertation investigates the tripartite relationship between behavior (in the form of language performance and cognitive maturation as prerequisites for language processing), brain structure (in the form of gray matter maturation), and brain function (in the form of brain activation evoked by complex sentence processing). Previous developmental studies indicate a missing increase of activation in accordance with sentence complexity (functional selectivity) in language-relevant brain areas in children. To determine the factors contributing to the functional development of language-relevant brain areas, different methodologies and data acquisition techniques were used to investigate the processing of center-embedded sentences in 5- and 6-year-old children, 7- and 8-year-old children, and adults. Behavioral results indicate that children between 5 and 8 years show difficulties in processing doubly embedded sentences and that their performance on this sentence type is positively correlated with digit span. In 7- and 8-year-old children, it was found that especially the processing of long-distance relations between the initial phrase and its corresponding verb appears to be associated with the subject’s verbal working memory capacity. In contrast, children’s performance on doubly embedded sentences in the younger age group positively correlated with their performance in a standardized sentence comprehension test.
This finding supports the hypothesis that processing difficulties in this age group may be mainly attributed to difficulties in processing case-marking information. These findings are discussed with respect to current accounts of language and working memory development. A second study aimed at investigating the structural maturation of brain areas involved in sentence comprehension. To this end, whole-brain magnetic resonance images from 59 children between 5 and 8 years were collected and children’s gray matter was analyzed using voxel-based morphometry. Children’s grammatical proficiency was assessed by a standardized sentence comprehension test. A confirmatory factor analysis corroborated a grammar-relevant and a verbal working memory-relevant factor underlying the measured performance. While children’s ability to assign thematic roles is positively correlated with gray matter probability (GMP) in the left inferior temporal gyrus and the left inferior frontal gyrus, verbal working memory-related performance is positively correlated with GMP in the left parietal operculum extending into the posterior superior temporal gyrus. These areas have been previously shown to be differentially engaged in adults’ complex sentence processing. Thus, the findings of the second study suggest a specific correspondence between children’s GMP in language-relevant brain regions and the differential cognitive abilities which underlie complex sentence comprehension. In a third study, functional brain activity during the processing of center-embedded sentences was investigated in three different age groups (5–6 years, 7–8 years, and adults). Although all age groups engage a qualitatively comparable network of the left pars opercularis (PO), the left inferior parietal lobe extending into the posterior superior temporal gyrus (IPL/pSTG), the supplementary motor area (SMA) and the cerebellum, functional selectivity of these regions was only observable in adults.
However, functional activation of the language-related regions (PO and IPL/pSTG) predicted sentence comprehension performance in all age groups. To disentangle the complex interplay between the different maturational factors, a fourth study analyzed the predictive power of gray matter probability, verbal working memory capacity, and behavioral performance differences between simple and complex sentences for the functional selectivity of each activated region. These analyses revealed that the establishment of adult-like functional selectivity for complex sentences is predicted by a reduction of the left PO’s gray matter probability across age groups, while that of the IPL/pSTG is additionally predicted by verbal working memory capacity. Taken together, the current thesis provides evidence that both structural brain maturation and verbal working memory expansion provide the basis for the emergence of functional selectivity in language-related brain regions, leading to more efficient sentence processing during development.
Background
The aim of this study was to analyze the shoulder functional profile (rotation range of motion [ROM] and strength), upper and lower body performance, and throwing speed of U13 versus U15 male handball players, and to establish the relationship between these measures of physical fitness and throwing speed.
Methods
One hundred and nineteen young male handball players (under-13 (U13) [n = 85] and under-15 (U15) [n = 34]) volunteered to participate in this study. The participating athletes had a mean background of systematic handball training of 5.5 ± 2.8 years and exercised on average 540 ± 10.1 min per week, including sport-specific team handball training as well as strength and conditioning programs. Players were tested for passive shoulder range of motion (ROM) for both internal (IR) and external rotation (ER) and isometric strength (i.e., IR and ER) of the dominant/non-dominant shoulders, overhead medicine ball throw (OMB), hip isometric abductor (ABD) and adductor (ADD) strength, hip ROM, jumps (countermovement jump [CMJ] and triple leg-hop [3H] for distance), a linear sprint test, a modified 505 change-of-direction (COD) test and handball throwing speed (7 m [HT7] and 9 m [HT9]).
Results
U15 players outperformed U13 players in upper (i.e., HT7 and HT9 speed, OMB, absolute IR and ER strength of the dominant and non-dominant sides; Cohen’s d: 0.76–2.13) and lower body (i.e., CMJ, 3H, 20-m sprint and COD, hip ABD and ADD; d: 0.70–2.33) performance measures. Regarding shoulder ROM outcomes, a lower IR ROM was found on the dominant side in the U15 group compared to the U13 group, and a higher ER ROM on both sides in U15 (d: 0.76–1.04). It seems that primarily anthropometric characteristics (i.e., body height, body mass) and upper body strength/power (OMB distance) are the most important factors explaining the variance in throwing speed in male handball players, particularly in U13.
Conclusions
Findings from this study imply that regular performance monitoring is important for performance development and for minimizing injury risk of the shoulder in both age categories of young male handball players. Besides measures of physical fitness, anthropometric data should be recorded because handball throwing performance is related to these measures.
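The link between anthropometric characteristics and throwing speed described above is the kind of relationship commonly quantified with the Pearson correlation coefficient. A minimal sketch with invented body-height and throwing-speed pairs (not the study's data):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations (computed from sums of squares)."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Invented body heights (cm) and handball throwing speeds (km/h)
height = [150.0, 155.0, 160.0, 165.0, 170.0]
speed = [62.0, 65.0, 64.0, 70.0, 72.0]
r = pearson_r(height, speed)
```

The squared coefficient, r², gives the share of throwing-speed variance explained by the predictor, which is the sense in which the abstract speaks of factors "explaining the variance" in throwing speed.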
Research on problem solving offers insights into how humans process task-related information and which strategies they use (Newell and Simon, 1972; Öllinger et al., 2014). Problem solving can be defined as the search for possible changes in one's mind (Kahneman, 2003). In a recent study, Adams et al. (2021) assessed whether the predominant problem solving strategy when making changes involves adding or subtracting elements. In order to do this, they used several examples of simple problems, such as editing text or making visual patterns symmetrical, either in naturalistic settings or on-line. The essence of the authors' findings is a strong preference to add rather than subtract elements across a diverse range of problems, including the stabilizing of artifacts, creating symmetrical patterns, or editing texts. More specifically, they succeeded in demonstrating that “participants were less likely to identify advantageous subtractive changes when the task did not (vs. did) cue them to consider subtraction, when they had only one opportunity (vs. several) to recognize the shortcomings of an additive search strategy or when they were under a higher (vs. lower) cognitive load” (Adams et al., 2021, p. 258).
Addition and subtraction are generally defined as de-contextualized mathematical operations using abstract symbols (Russell, 1903/1938). Nevertheless, understanding of both symbols and operations is informed by everyday activities, such as making or breaking objects (Lakoff and Núñez, 2000; Fischer and Shaki, 2018). The universal attribution of “addition bias” or “subtraction neglect” to problem solving activities is perhaps a convenient shorthand but it overlooks influential framing effects beyond those already acknowledged in the report and the accompanying commentary (Meyvis and Yoon, 2021).
Most importantly, while Adams et al.'s study addresses an important issue, their very method of verbally instructing participants, together with lack of control over several known biases, might render their findings less than conclusive. Below, we discuss our concerns that emerged from the identified biases, namely those regarding the instructions and the experimental materials. Moreover, we refer to research from mathematical cognition that provides new insights into Adams et al.'s findings.
Tracheotomized patients who present with both dysphagia and respiratory deficits often have difficulty adapting to translaryngeal breathing after decannulation. We developed a decannulation protocol for this patient group that can optionally be integrated into our existing tracheostomy tube management. If a patient meets the criteria defined for this purpose, a placeholder is inserted under laryngoscopic control and remains in situ for up to 3 days. During this trial decannulation phase, respiratory function and saliva management are closely monitored. Based on this evaluation, the decision for or against definitive decannulation is then made. We present the procedure, the sets of criteria, and the evaluation parameters for this trial decannulation phase and illustrate the process with two case examples.
Introduction: We conducted a case study to examine the feasibility and safety of high-intensity interval training (HIIT) with increased inspired oxygen content in a colon cancer patient undergoing chemotherapy. A secondary purpose was to investigate the effects of such a training regimen on physical functioning.
Case presentation: A female patient (51 years; 49.1 kg; 1.65 m; tumor stage: pT3, pN2a (5/29), pM1a (HEP), L0, V0, R0) performed 8 sessions of HIIT (5 × 3 minutes at 90% of Wmax, separated by 2 minutes at 45% Wmax) with an increased inspired oxygen fraction of 30%. Patient safety, training adherence, cardiorespiratory fitness (peak oxygen uptake and maximal power output during an incremental cycle ergometer test), autonomic nervous system function (i.e., heart rate variability during an orthostatic test), as well as questionnaire-assessed quality of life (EORTC QLQ-C30) were evaluated before and after the intervention.
No adverse events were reported throughout the training intervention and a 3-month follow-up. While the patient attended all sessions, adherence to total training time was only 51% (102 of 200 minutes; mean training time per session 12:44 min:sec). VO2peak and Wmax increased by 13% (from 23.0 to 26.1 mL·min−1·kg−1) and 21% (from 83 to 100 W), respectively. Heart rate variability, represented by the root mean square of successive differences (RMSSD), increased after the training by 143% in the supine and 100% in the upright position. The EORTC QLQ-C30 scores for physical functioning (7.5%) and global health (10.7%) improved, while social functioning decreased (17%).
Conclusions: Our results show that even a short period of HIIT with concomitant hyperoxia was safe and feasible for a patient undergoing chemotherapy for colon cancer. Furthermore, despite the low overall training adherence of only 51% and the short training time per session (∼13 minutes), the intervention was sufficient to induce clinically meaningful improvements in physical functioning. However, this case also underlines that the intensity and/or length of the HIIT bouts might need further adjustment to increase training compliance.
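The heart rate variability measure reported in the case study, the root mean square of successive differences (RMSSD), has a simple definition that can be sketched in a few lines. This is a generic illustration of the metric, not the study's analysis code; the RR-interval values below are made up for demonstration.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD) between
    consecutive RR intervals, a common time-domain HRV measure."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR intervals in milliseconds (hypothetical values)
print(round(rmssd([800, 810, 790, 805]), 2))
```

A 143% increase, as reported for the supine position, would correspond to the post-intervention RMSSD being 2.43 times the baseline value.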
Timing of initial school enrollment may vary considerably for various reasons, such as early or delayed enrollment and skipped or repeated school classes. Accordingly, the age range within school grades includes older-than-keyage (OTK) and younger-than-keyage (YTK) children. Hardly any information is available on the impact of the timing of school enrollment on physical fitness. There is evidence from a related research topic showing large differences in academic performance between OTK and YTK children versus keyage children. Thus, the aim of this study was to compare the physical fitness of OTK (N = 26,540) and YTK (N = 2,586) children versus keyage children (N = 108,295) in a representative sample of German third graders. Physical fitness tests comprised cardiorespiratory endurance, coordination, speed, and lower and upper limb muscle power. Predictions of physical fitness performance for YTK and OTK children were estimated using data from keyage children, taking age, sex, school, and assessment year into account. Data were recorded annually between 2011 and 2019. The difference between observed and predicted z-scores yielded a delta z-score that was used as a dependent variable in the linear mixed models. Findings indicate that OTK children showed poorer performance compared to keyage children, especially in coordination, and that YTK children outperformed keyage children, especially in coordination. Teachers should be aware that OTK children show poorer physical fitness performance compared to keyage children.
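The delta z-score described above (observed minus model-predicted z-score) can be illustrated with a minimal sketch. The function names, the reference values, and the prediction below are ours and purely hypothetical; the study itself derived predictions from a linear model with age, sex, school, and assessment year.

```python
from statistics import mean, stdev

def z_score(value, reference):
    """Standardize a raw test result against a reference sample
    (here: keyage children's results)."""
    return (value - mean(reference)) / stdev(reference)

def delta_z(observed_z, predicted_z):
    """Positive values: the child performs better than the model
    predicts from keyage peers; negative values: poorer."""
    return observed_z - predicted_z

# Hypothetical 6-minute-run distances (m) for a keyage reference sample
reference = [950, 1000, 1050, 1100, 1150]
obs = z_score(1120, reference)   # a child's observed performance
pred = 0.35                      # illustrative model prediction
print(round(delta_z(obs, pred), 2))  # → 0.54
```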
Children’s physical fitness development and the related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third graders. Within this single prepubertal year of life, and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
Midbrain dopamine neurons invigorate responses by signaling opportunity costs (tonic dopamine) and promote associative learning by encoding a reward prediction error signal (phasic dopamine). Recent studies on Bayesian sensorimotor control have implicated midbrain dopamine concentration in the integration of prior knowledge and current sensory information. The present behavioral study addressed the contributions of tonic and phasic dopamine in a Bayesian decision-making task by alternating reward magnitude and inferring reward prediction errors. Twenty-four participants were asked to indicate the position of a hidden target stimulus under varying prior and likelihood uncertainty. Trial-by-trial rewards were allocated based on performance and two different reward maxima. Overall, participants’ behavior agreed with Bayesian decision theory, but indicated excessive reliance on likelihood information. These results thus
oppose accounts of statistically optimal integration in sensorimotor control, and suggest that the sensorimotor system is subject to additional decision heuristics. Moreover, higher reward magnitude was not observed to induce enhanced response vigor, and was associated with less Bayes-like integration. In addition, the weighting of prior knowledge and current sensory information proceeded independently of reward prediction errors.
Taken together, these findings suggest that the process of combining prior and likelihood uncertainties in sensorimotor control is largely robust to variations in reward.
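The Bayes-optimal benchmark against which such behavior is typically compared is, for Gaussian prior and likelihood, a precision-weighted average of the two means. The sketch below is a generic textbook formulation, not the study's model code; the numbers are illustrative only.

```python
def bayes_estimate(prior_mean, prior_var, like_mean, like_var):
    """Posterior mean for a Gaussian prior combined with a Gaussian
    likelihood: a precision-weighted average of the two means.
    Lower variance (higher precision) earns greater weight."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / like_var)
    return w_prior * prior_mean + (1 - w_prior) * like_mean

# Equal uncertainty: the optimal estimate lies midway between
# prior and sensory cue
print(bayes_estimate(0.0, 1.0, 10.0, 1.0))  # → 5.0

# A noisier prior shifts the estimate toward the likelihood; over-
# weighting the likelihood beyond this optimum would correspond to
# the "excessive reliance on likelihood information" reported above
print(bayes_estimate(0.0, 4.0, 10.0, 1.0))  # → 8.0
```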
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories on cognition that emphasize abstract symbol manipulations often face a dead end: the symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. Convincing evidence indicates that concrete, action-related words rely on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories of cognition. Fully embodied abstraction mechanisms have been formulated, but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, the idea of integration areas, such as convergence zones or the ‘hub-and-spoke’ model, not only appears to be the most promising candidate to account for the discrepancies between concrete and abstract concepts but could also help to unite the field of cognitive science again. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions, and directions for future research.