Strukturbereich Kognitionswissenschaften
Although a relatively large number of studies on acquired language impairments have tested the case of derivational morphology, none of these have specifically investigated whether there are differences in how prefixed and suffixed derived words are impaired. Based on linguistic and psycholinguistic considerations on prefixed and suffixed derived words, differences in how these two types of derivations are processed, and consequently impaired, are predicted. In the present study, we investigated the errors produced in reading aloud simple, prefixed, and suffixed words by three German individuals with agrammatic aphasia (NN, LG, SA). We found that, while NN and LG produced similar numbers of errors with prefixed and suffixed words, SA showed a selective impairment for prefixed words. Furthermore, NN and SA produced more errors specifically involving the affix with prefixed words than with suffixed words. We discuss our findings in terms of relative position of stem and affix in prefixed and suffixed words, as well as in terms of specific properties of prefixes and suffixes.
This article first outlines different ways of how psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and morphological variability across languages for psycholinguistic research. The specific phenomena to be examined are to do with stem-formation morphology and inflectional classes; they illustrate how experimental research that is informed by linguistic typology can lead to new insights.
This study aimed to compare the training load of a professional under-19 soccer team (U-19) to that of an elite adult team (EAT), from the same club, during the in-season period. Thirty-nine healthy soccer players were involved (EAT [n = 20]; U-19 [n = 19]) in the study, which spanned four weeks. Training load (TL) was monitored as external TL, using a global positioning system (GPS), and internal TL, using a rating of perceived exertion (RPE). TL data were recorded after each training session. During soccer matches, players’ RPEs were recorded. The internal TL was quantified daily by means of the session rating of perceived exertion (session-RPE) using Borg’s 0–10 scale. For GPS data, the selected running speed intensities (over 0.5 s time intervals) were 12–15.9 km/h; 16–19.9 km/h; 20–24.9 km/h; >25 km/h (sprint). Distances covered between 16 and 19.9 km/h, >20 km/h and >25 km/h were significantly higher in U-19 compared to EAT over the course of the study (p = 0.023, d = 0.243, small; p = 0.016, d = 0.298, small; and p = 0.001, d = 0.564, small, respectively). EAT players performed significantly fewer sprints per week compared to U-19 players (p = 0.002, d = 0.526, small). RPE was significantly higher in U-19 compared to EAT (p = 0.001, d = 0.188, trivial). The external and internal measures of TL were significantly higher in the U-19 group compared to the EAT soccer players. In conclusion, the results show that the training load is greater in U-19 than in EAT players.
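The internal training load above is quantified with the session-RPE method, which multiplies the Borg 0–10 rating by session duration in minutes. A minimal sketch of that calculation, using hypothetical values rather than the study's data:

```python
def session_rpe_load(rpe, duration_min):
    """Internal training load in arbitrary units: RPE (Borg 0-10) x duration (min)."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE must be on Borg's 0-10 scale")
    return rpe * duration_min

# Weekly internal TL: sum over a week's sessions (hypothetical (RPE, minutes) pairs)
week = [(6, 90), (7, 75), (5, 60), (8, 95)]
weekly_tl = sum(session_rpe_load(r, d) for r, d in week)  # -> 2125
```

Summing the per-session values, as done here, is the usual way weekly internal TL is compared between squads.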
We aimed at unveiling the role of executive functions (EFs) and language-related skills in spelling for mono- versus multilingual primary school children. We focused on EF and language-related skills, in particular lexicon size and phonological awareness (PA), because these factors were found to predict spelling in studies predominantly conducted with monolinguals, and because multilingualism can modulate these factors. There is evidence for (a) a bilingual advantage in EF due to constant high cognitive demands through language control, (b) a smaller mental lexicon in German and (c) possibly better PA. Multilinguals in Germany show on average poorer German language proficiency, which can negatively influence performance on language-based tasks. Thus, we included two spelling tasks to tease apart spelling based on lexical knowledge (i.e., word spelling) from spelling based on non-lexical strategies (i.e., non-word spelling). Our sample consisted of heterogeneous third graders from Germany: 69 monolinguals (age: M = 108 months) and 57 multilinguals (age: M = 111 months). On less language-dependent tasks (e.g., non-word spelling, PA, intelligence, short-term memory (STM) and three EF tasks testing switching, inhibition, and working memory) performance of both groups did not differ significantly. However, multilinguals performed significantly more poorly on tasks measuring German lexicon size and word spelling than monolinguals. Regression analyses revealed that for multilinguals, inhibition was related to spelling, whereas switching was the only EF component to influence word spelling in monolinguals and non-word spelling performance in both groups. By adding lexicon size and other language-related factors to the regression models, the influence of switching was reduced to insignificant effects, but inhibition remained significant for multilinguals.
Language-related skills best predicted spelling, and both language groups shared those variables: PA for word spelling, and STM for non-word spelling. Additionally, multilinguals’ word spelling performance was also predicted by their German lexicon size, and non-word spelling performance by PA. This study offers an in-depth look at spelling acquisition at a certain point of literacy development. Mono- and multilinguals have the predominant factors for spelling in common, but monolinguals were already able to make use of EF during spelling, probably owing to their superior language knowledge. For multilinguals, German lexicon size was more important for spelling than EF; EF might come into play in their spelling only at a later stage.
Background: A prominent model of semantic processing in modern cognitive psychology proposes that semantic memory originates in everyday life experience with concrete objects such as plants, animals, and tools (Martin & Chao, 2001). When the meaning of a concrete content word is being acquired, the learner is confronted with stimuli of various modalities related to the word's meaning. This comes to be stored as sensory knowledge about the object. It is further postulated that there is a conceptual domain remote from the mechanisms of perception, which is often referred to as functional knowledge or verbal semantics. There is a large body of neuropsychological literature trying to establish how much sensory and functional semantics is needed to access a name, and whether the relative contribution of these types of knowledge is the same for all categories of objects. Another controversial issue is whether naming requires access to semantic knowledge, or whether object names can be accessed directly from vision without the intervention of semantics, as is generally accepted for written word naming. Some support for this assumption seems to come from cases of so-called non-optic aphasia, a condition in which patients can name from visual presentation only but not from any other modality of presentation such as auditory, verbal, tactile, etc. In optic aphasia, a condition far better established, naming is possible from all modalities except vision. Aims: The aim of this paper is to draw attention to the first case description of non-optic or negative optic aphasia described by Wolff (1897, 1904). Methods & Procedures: The case describes the results of a re-examination of Voit, who was seen by several neurologists in the course of a decade in classical aphasiology. The patient demonstrated anomia in oral but not in written naming of objects in view.
Wolff's examination involves extensive testing of semantic processing in several modalities, especially with respect to the status of functional and sensory semantic features. Outcomes & Results: The re-examination of patient Voit by Wolff in 1897 with new procedures revealed a specific impairment in processing sensory knowledge, while functional knowledge of objects was relatively preserved. This led to a naming impairment in all modalities of presentation except the visual one. Using more refined tasks, Wolff also demonstrated receptive impairments, in contrast to previous researchers who had concluded that the impairment was restricted to oral production. Conclusions: Although Wolff's (1904) case of negative optic aphasia has been almost completely forgotten (but see Bartels & Wallesch, 1996), it is astonishingly modern in its conceptual approach and in the central questions it addresses on the mechanisms involved in the process of naming and on the structure of the semantic system. As is usual in classical cases, the methodology may appear less stringent than in most contemporary work, but the approach was brilliant.
Background
Isometric muscle actions can be performed either by initiating the action, e.g., pulling on an immovable resistance (PIMA), or by reacting to an external load, e.g., holding a weight (HIMA). The present study mainly examined whether these modalities can be differentiated by oxygenation variables as well as by time to task failure (TTF). Furthermore, it was analyzed whether these variables change when intermittent voluntary muscle twitches are performed during weight holding (Twitch). It was assumed that twitches during a weight-holding task change the character of the isometric muscle action from reacting (≙ HIMA) to acting (≙ PIMA).
Methods
Twelve subjects (two dropouts) randomly performed two tasks (HIMA vs. PIMA or HIMA vs. Twitch, n = 5 each) with the elbow flexors at 60% of maximal torque, maintained until muscle failure, with each arm. Local capillary venous oxygen saturation (SvO2) and relative hemoglobin amount (rHb) were measured by light spectrometry.
Results
Within subjects, no significant differences were found between tasks regarding the behavior of SvO2 and rHb, the slope and extent of deoxygenation (max. SvO2 decrease), SvO2 level at global rHb minimum, and time to SvO2 steady states. The TTF was significantly longer during Twitch and PIMA (incl. Twitch) compared to HIMA (p = 0.043 and 0.047, respectively). There was no substantial correlation between TTF and maximal deoxygenation independently of the task (r = − 0.13).
Conclusions
HIMA and PIMA seem to have a similar microvascular oxygen and blood supply. The supply might be sufficient, which is expressed by homeostatic steady states of SvO2 in all trials and increases in rHb in most of the trials. Intermittent voluntary muscle twitches might not provide additional support but do extend the TTF. A change in neuromuscular control is discussed as a possible explanation.
Adaptive Force (AF) reflects the capability of the neuromuscular system to adapt adequately to external forces with the intention of maintaining a position or motion. One specific approach to assessing AF is to measure force and limb position during a pneumatically applied increasing external force. Through this method, the highest (AFmax), the maximal isometric (AFisomax) and the maximal eccentric Adaptive Force (AFeccmax) can be determined. The main question of the study was whether the AFisomax is a specific and independent parameter of muscle function compared to other maximal forces. In 13 healthy subjects (9 male and 4 female), the maximal voluntary isometric contraction (pre- and post-MVIC), the three AF parameters and the MVIC with a prior concentric contraction (MVICpri-con) of the elbow extensors were measured 4 times on two days. Arithmetic mean (M) and maximal (Max) torques of all force types were analyzed. Regarding the reliability of the AF parameters between days, the mean changes were 0.31–1.98 Nm (0.61%–5.47%, p = 0.175–0.552), the standard errors of measurement (SEM) were 1.29–5.68 Nm (2.53%–15.70%) and the ICCs(3,1) = 0.896–0.996. M and Max of AFisomax, AFmax and pre-MVIC correlated highly (r = 0.85–0.98). The M and Max of AFisomax were significantly lower (6.12–14.93 Nm; p ≤ 0.001–0.009) and more variable between trials (coefficients of variation (CVs) ≥ 21.95%) compared to those of pre-MVIC and AFmax (CVs ≤ 5.4%). The results suggest that the novel measuring procedure is suitable to reliably quantify the AF, whereby the presented measurement errors should be taken into consideration. The AFisomax seems to reflect its own strength capacity and should be detected separately. It is suggested that its normalization to the MVIC or AFmax could serve as an indicator of neuromuscular function.
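The reliability statistics reported above (SEM, CV) follow standard definitions. A small sketch of those formulas, assuming SEM = SD · sqrt(1 − ICC) and using hypothetical torque values, not the study's data:

```python
import statistics

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * (1 - icc) ** 0.5

def cv_percent(values):
    """Coefficient of variation as a percentage of the mean."""
    m = statistics.mean(values)
    return statistics.stdev(values) / m * 100

# Hypothetical between-day SD of 10 Nm with ICC = 0.96
sem = sem_from_icc(10.0, 0.96)            # ~2.0 Nm
cv = cv_percent([48.0, 50.0, 52.0])       # -> 4.0 (%)
```

A small SEM relative to the measured torques, together with a high ICC, is what supports the reliability claim in the abstract.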
Importance Alcohol consumption (AC) leads to death and disability worldwide. Ongoing discussions on potential negative effects of the COVID-19 pandemic on AC need to be informed by real-world evidence.
Objective To examine whether lockdown measures are associated with AC and consumption-related temporal and psychological within-person mechanisms.
Design, Setting, and Participants This quantitative, intensive, longitudinal cohort study recruited 1743 participants from 3 sites from February 20, 2020, to February 28, 2021. Data were provided before and within the second lockdown of the COVID-19 pandemic in Germany: before lockdown (October 2 to November 1, 2020); light lockdown (November 2 to December 15, 2020); and hard lockdown (December 16, 2020, to February 28, 2021).
Main Outcomes and Measures Daily ratings of AC (main outcome) captured during 3 lockdown phases (main variable) and temporal (weekends and holidays) and psychological (social isolation and drinking intention) correlates.
Results Of the 1743 screened participants, 189 (119 [63.0%] male; median [IQR] age, 37 [27.5-52.0] years) with at least 2 alcohol use disorder (AUD) criteria according to the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition) yet without the need for medically supervised alcohol withdrawal were included. These individuals provided 14 694 smartphone ratings from October 2020 through February 2021. Multilevel modeling revealed significantly higher AC (grams of alcohol per day) on weekend days vs weekdays (β = 11.39; 95% CI, 10.00-12.77; P < .001). Alcohol consumption was above the overall average on Christmas (β = 26.82; 95% CI, 21.87-31.77; P < .001) and New Year’s Eve (β = 66.88; 95% CI, 59.22-74.54; P < .001). During the hard lockdown, perceived social isolation was significantly higher (β = 0.12; 95% CI, 0.06-0.15; P < .001), but AC was significantly lower (β = −5.45; 95% CI, −8.00 to −2.90; P = .001). Independent of lockdown, intention to drink less alcohol was associated with lower AC (β = −11.10; 95% CI, −13.63 to −8.58; P < .001). Notably, differences in AC between weekend and weekdays decreased both during the hard lockdown (β = −6.14; 95% CI, −9.96 to −2.31; P = .002) and in participants with severe AUD (β = −6.26; 95% CI, −10.18 to −2.34; P = .002).
Conclusions and Relevance This 5-month cohort study found no immediate negative associations of lockdown measures with overall AC. Rather, weekend-weekday and holiday AC patterns exceeded lockdown effects. Differences in AC between weekend days and weekdays evinced that weekend drinking cycles decreased as a function of AUD severity and lockdown measures, indicating a potential mechanism of losing and regaining control. This finding suggests that temporal patterns and drinking intention constitute promising targets for prevention and intervention, even in high-risk individuals.
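The weekend-weekday contrast above comes from multilevel modeling of daily smartphone ratings. A much-simplified stand-in that captures the within-person idea, on synthetic data rather than the study's, can be sketched as:

```python
from collections import defaultdict

def weekend_effect(days):
    """days: list of (person_id, is_weekend, grams_alcohol).
    Returns the mean within-person (weekend - weekday) difference in AC,
    ignoring persons who lack observations in either category."""
    by_person = defaultdict(lambda: {"wk": [], "wd": []})
    for pid, is_weekend, grams in days:
        by_person[pid]["wk" if is_weekend else "wd"].append(grams)
    diffs = [sum(d["wk"]) / len(d["wk"]) - sum(d["wd"]) / len(d["wd"])
             for d in by_person.values() if d["wk"] and d["wd"]]
    return sum(diffs) / len(diffs)

# Synthetic ratings for two persons
days = [(1, False, 10), (1, False, 10), (1, True, 20),
        (2, False, 0), (2, True, 5), (2, True, 15)]
effect = weekend_effect(days)  # -> 10.0 grams/day
```

Unlike this averaging sketch, the study's multilevel model also yields confidence intervals and lets the weekend effect vary by lockdown phase and AUD severity.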
Objective: MicroRNAs are implicated in several biological and pathological processes. We investigated the effects of high-intensity interval training (HIIT) and moderate-intensity continuous training (MICT) on molecular markers of diabetic cardiomyopathy in rats.
Methods: Eighteen male Wistar rats (260 ± 10 g; aged 8 weeks) with streptozotocin (STZ)-induced type 1 diabetes mellitus (55 mg/kg, IP) were randomly allocated to three groups: control, MICT, and HIIT. The two different training protocols were performed 5 days each week for 5 weeks. Cardiac performance (end-systolic and end-diastolic dimensions, ejection fraction), the expression of miR-206, HSP60, and markers of apoptosis (cleaved PARP and cytochrome C) were determined at the end of the exercise interventions.
Results: Both exercise interventions (HIIT and MICT) decreased blood glucose levels and improved cardiac performance, with greater changes in the HIIT group (p < 0.001, η2: 0.909). While the expressions of miR-206 and apoptotic markers decreased in both training protocols (p < 0.001, η2: 0.967), HIIT caused greater reductions in apoptotic markers and produced a 20% greater reduction in miR-206 compared with the MICT protocol (p < 0.001). Furthermore, both training protocols enhanced the expression of HSP60 (p < 0.001, η2: 0.976), with a nearly 50% greater increase in the HIIT group compared with MICT.
Conclusions: Our results indicate that both exercise protocols, HIIT and MICT, have the potential to reduce diabetic cardiomyopathy by modifying the expression of miR-206 and its downstream targets of apoptosis. It seems, however, that HIIT is more effective than MICT in modulating these molecular markers.
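The η² values above are ANOVA effect sizes; for a one-way design, η² = SS_between / SS_total. A sketch of that computation with made-up group data, not the study's measurements:

```python
def eta_squared(groups):
    """One-way eta squared: between-group sum of squares over total sum of squares.
    groups: list of lists, one list of observations per group."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total

# Made-up data: two groups whose means differ by 1 unit
es = eta_squared([[0, 2], [1, 3]])  # -> 0.2
```

Values near 0.9, as reported in the abstract, indicate that group membership explains almost all of the variance in the outcome.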
Aims: High-intensity interval training (HIIT) improves mitochondrial characteristics. This study compared the impact of two workload-matched HIIT protocols with different work:recovery ratios on regulatory factors related to mitochondrial biogenesis in the soleus muscle of diabetic rats.
Materials and methods: Twenty-four Wistar rats were randomly divided into four equal-sized groups: non-diabetic control, diabetic control (DC), diabetic with long recovery exercise [4–5 × 2-min running at 80%–90% of the maximum speed reached with 2-min of recovery at 40% of the maximum speed reached (DHIIT1:1)], and diabetic with short recovery exercise (5–6 × 2-min running at 80%–90% of the maximum speed reached with 1-min of recovery at 30% of the maximum speed reached [DHIIT2:1]). Both HIIT protocols were completed five times/week for 4 weeks while maintaining equal running distances in each session.
Results: Gene and protein expressions of PGC-1α, p53, and citrate synthase of the muscles increased significantly following DHIIT1:1 and DHIIT2:1 compared to DC (p ˂ 0.05). Most parameters, except for PGC-1α protein (p = 0.597), were significantly higher in DHIIT2:1 than in DHIIT1:1 (p ˂ 0.05). Both DHIIT groups showed significant increases in maximum speed with larger increases in DHIIT2:1 compared with DHIIT1:1.
Conclusion: Our findings indicate that both HIIT protocols can potently up-regulate gene and protein expression of PGC-1α, p53, and CS. However, DHIIT2:1 has superior effects compared with DHIIT1:1 in improving mitochondrial adaptive responses in diabetic rats.
Computer-aided dosage management of phenprocoumon anticoagulation therapy: clinical validation
(2014)
A recently developed multiparameter computer-aided expert system (TheMa) for guiding anticoagulation with phenprocoumon (PPC) was validated in a prospective investigation of 22 patients. The PPC-INR response curve resulting from physician-guided dosage was compared to INR values calculated by "twin calculation" from TheMa-recommended dosage. Additionally, TheMa was used to predict the optimal time to perform surgery or invasive procedures after interruption of anticoagulation therapy. Results: Comparison of physician- and TheMa-guided anticoagulation showed almost identical accuracy by three quantitative measures: polygon integration method (area around the INR target) 616.17 vs. 607.86, INR hits in the target range 166 vs. 161, and TTR (time in therapeutic range) 63.91 vs. 62.40%. After discontinuation of anticoagulation therapy, calculating the INR phase-out curve with TheMa made an INR prognosis of 1.8 possible with a standard deviation of 0.50 +/- 0.59 days. Conclusion: Guiding anticoagulation with TheMa was as accurate as physician-guided therapy. After interruption of anticoagulant therapy, TheMa may be used to calculate the optimal time for performing operations or initiating bridging therapy.
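The TTR figures above are commonly computed with Rosendaal's linear-interpolation method; the abstract does not state which algorithm was used, so the following is a sketch under that assumption, with illustrative INR data:

```python
def ttr_rosendaal(measurements, low=2.0, high=3.0):
    """Time in therapeutic range by linear interpolation between INR measurements.
    measurements: list of (day, INR) tuples sorted by day.
    Returns the percentage of interpolated days with INR in [low, high]."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(measurements, measurements[1:]):
        days = d1 - d0
        for step in range(days):
            # INR assumed to drift linearly between consecutive measurements
            inr = i0 + (i1 - i0) * step / days
            total += 1
            if low <= inr <= high:
                in_range += 1
    return 100 * in_range / total

# Illustrative series: sub-therapeutic start, rising linearly over 10 days
ttr = ttr_rosendaal([(0, 1.0), (10, 3.0)])  # -> 50.0 (%)
```

The polygon integration measure mentioned in the abstract is a different, area-based metric; this sketch covers only the TTR component.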
Background
Total hip or knee replacement is one of the most frequently performed surgical procedures. Physical rehabilitation following total hip or knee replacement is an essential part of therapy to improve functional outcomes and quality of life. After discharge from inpatient rehabilitation, subsequent postoperative exercise therapy is needed to maintain functional mobility. Telerehabilitation is a potentially innovative treatment approach. We aim to investigate the superiority of an interactive telerehabilitation intervention for patients after total hip or knee replacement, in comparison to usual care, regarding physical performance, functional mobility, quality of life, and pain.
Methods/design
This is an open, randomized controlled, multicenter superiority study with two prospective arms. One hundred and ten eligible and consenting participants with total knee or hip replacement will be recruited at admission to subsequent inpatient rehabilitation. After comprehensive, 3-week, inpatient rehabilitation, the intervention group performs 3 months of interactive, home-based exercise training with a telerehabilitation system. For this purpose, the physiotherapist creates an individual training plan from 38 different strength and balance exercises implemented in the system. Data on the quality and frequency of training are transmitted to the physiotherapist for further adjustment. Communication between patient and physiotherapist is possible within the system. The control group receives voluntary usual aftercare programs. Baseline assessments are conducted after discharge from rehabilitation; final assessments follow 3 months later. The primary outcome is the difference in improvement between the intervention and control groups in 6-minute walk distance after 3 months. Secondary outcomes include differences in the Timed Up and Go Test, the Five-Times-Sit-to-Stand Test, the Stair Ascend Test, the Short Form 36, the Western Ontario and McMaster Universities Osteoarthritis Index, the International Physical Activity Questionnaire, and postural control, as well as gait and kinematic parameters of the lower limbs. Baseline-adjusted analysis of covariance models will be used to test for group differences in the primary and secondary endpoints.
Discussion
We expect the intervention group to benefit from the interactive, home-based exercise training in many of the respects represented by the study endpoints. If successful, this approach could be used to improve access to aftercare programs, especially in structurally weak areas.
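The baseline-adjusted analysis of covariance planned for the endpoints amounts to an ordinary least-squares fit of the follow-up value on a group indicator plus the baseline value; the coefficient on the group indicator is the adjusted treatment effect. A minimal pure-Python sketch (the data and function names are invented for illustration, not trial data):

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ancova_group_effect(baseline, followup, group):
    """Fit followup ~ 1 + group + baseline by ordinary least squares
    (via the normal equations) and return the baseline-adjusted
    group effect, i.e. the coefficient of the group indicator."""
    X = [[1.0, float(g), float(b)] for g, b in zip(group, baseline)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, followup)) for i in range(3)]
    return solve_linear(XtX, Xty)[1]
```

In practice a statistics package would report this coefficient together with its confidence interval and p-value; the sketch only shows what "baseline-adjusted" means structurally.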
Background: Telerehabilitation can contribute to the maintenance of successful rehabilitation regardless of location and time.
Objective: The aim of the study was to investigate a specific three-month interactive telerehabilitation with regard to effectiveness in functioning and return to work compared to usual aftercare.
Methods: From August 2016 to December 2017, 111 patients (mean age 54.9 years; SD 6.8; 54.3% female) with hip or knee replacement were enrolled in the randomized controlled trial. At discharge from inpatient rehabilitation and after three months, the distance in the 6-minute walk test was assessed as the primary endpoint. Other functional parameters, including health-related quality of life, pain, and time to return to work, were secondary endpoints.
Results: Patients in the intervention group performed telerehabilitation for an average of 55.0 minutes (SD 9.2) per week. Adherence was high, at over 75%, until the 7th week of the three-month intervention phase. Almost all patients and therapists used the communication options. Both the intervention group (average difference 88.3 m; SD 57.7; P=.95) and the control group (average difference 79.6 m; SD 48.7; P=.95) increased their distance in the 6-minute walk test. Improvements in other functional parameters, as well as in quality of life and pain, were achieved in both groups. Notably, the proportion of patients who had returned to work was higher in the intervention group (64.6%) than in the control group (46.2%; P=.01).
Conclusions: The effect of the investigated telerehabilitation therapy in patients following knee or hip replacement was equivalent to the usual aftercare in terms of functional testing, quality of life, and pain. Since a significantly higher return-to-work rate could be achieved, this therapy might be a promising supplement to established aftercare.
Multicomponent cardiac rehabilitation in patients after transcatheter aortic valve implantation
(2017)
Background: In the last decade, transcatheter aortic valve implantation has become a promising treatment modality for patients with aortic stenosis and a high surgical risk. Little is known about factors influencing function and quality of life during multicomponent cardiac rehabilitation. Methods: From October 2013 to July 2015, patients with elective transcatheter aortic valve implantation and subsequent inpatient cardiac rehabilitation were enrolled in the prospective multicentre cohort study. The Frailty Index (including cognition, nutrition, autonomy, and mobility), Short Form-12 (SF-12), six-minute walk distance (6MWD), and maximum workload in bicycle ergometry were assessed at admission and discharge of cardiac rehabilitation. The relation between patient characteristics and improvements in 6MWD, maximum workload, or SF-12 scales was studied univariately and multivariately using regression models. Results: One hundred and thirty-six patients (80.6 ± 5.0 years, 47.8% male) were enrolled. 6MWD and maximum workload increased by 56.3 ± 65.3 m (p < 0.001) and 8.0 ± 14.9 watts (p < 0.001), respectively. An improvement in SF-12 (physical 2.5 ± 8.7, p = 0.001; mental 3.4 ± 10.2, p = 0.003) was observed. In multivariate analysis, age and higher education were significantly associated with a reduced 6MWD, whereas cognition and obesity showed a positive predictive value. Higher cognition, nutrition, and autonomy positively influenced the physical scale of the SF-12. Additionally, the baseline values of the SF-12 had an inverse impact on the change during cardiac rehabilitation. Conclusions: Cardiac rehabilitation can improve functional capacity as well as quality of life and reduce frailty in patients after transcatheter aortic valve implantation. An individually tailored therapy with special consideration of cognition and nutrition is needed to maintain autonomy and empower octogenarians in coping with the challenges of everyday life.
Background
The aim of the study was to identify predictors of allocating patients after transcatheter aortic valve implantation (TAVI) to geriatric rehabilitation (GR) or cardiac rehabilitation (CR) and to describe this new patient group based on a differentiated characterization.
Methods
From 10/2013 to 07/2015, 344 patients with an elective TAVI were consecutively enrolled in this prospective multicentric cohort study. Before intervention, sociodemographic parameters, echocardiographic data, comorbidities, 6-min walk distance (6MWD), quality of life and frailty (score indexing activities of daily living [ADL], cognition, nutrition and mobility) were documented. Out of these, predictors for assignment to CR or GR after TAVI were identified using a multivariable regression model.
Results
After TAVI, 249 patients (80.7 ± 5.1 years, 59.0% female) underwent CR (n = 198) or GR (n = 51). GR patients were older, less physically active, and more often had an assigned care level, peripheral artery disease, and a lower left ventricular ejection fraction. The groups also differed in 6MWD. Furthermore, individual components of frailty revealed prognostic impact: higher values in instrumental ADL reduced the probability of referral to GR (OR 0.49, p < 0.001), while impaired mobility was positively associated with referral to GR (OR 3.97, p = 0.046). Clinical parameters such as stroke (OR for GR 0.19, p = 0.038) and the EuroSCORE (OR for GR 1.04, p = 0.026) were also predictive.
Conclusion
Patients of advanced age referred to CR or GR after TAVI differ in several parameters and appear to be distinct patient groups with specific needs, e.g. regarding activities of daily living and mobility. Thus, our data support the suitability of both CR and GR settings.
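The odds ratios for referral reported in the results come from a multivariable model, but the underlying quantity can be illustrated on a simple 2×2 table of exposure (e.g., impaired mobility) versus outcome (referral to GR). A hedged sketch follows; the counts are invented, and the Woolf-type confidence interval is one common choice, not necessarily the method used in the study:

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table, with an approximate 95% CI
    via the standard error of the log odds ratio (Woolf method)."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se = math.sqrt(1.0 / exposed_cases + 1.0 / exposed_noncases
                   + 1.0 / unexposed_cases + 1.0 / unexposed_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

An OR above 1 (with a CI excluding 1) indicates the exposure is associated with higher odds of the outcome, which is how figures like OR 3.97 for impaired mobility are read.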
Background
Maximal isokinetic strength ratios of joint flexors and extensors are important parameters for assessing muscular balance at a joint. Furthermore, in combat sports athletes, upper and lower limb muscle strength is affected by the type of sport. Thus, this study aimed to examine differences in the maximal isokinetic strength of the flexors and extensors and the corresponding flexor–extensor strength ratios of the elbows and knees in combat sports athletes.
Method
Forty male participants (age = 22.3 ± 2.5 years) from four different combat sports (amateur boxing, taekwondo, karate, and judo; n = 10 per sport) were tested for eccentric peak torque of the elbow/knee flexors (EF/KF) and concentric peak torque of the elbow/knee extensors (EE/KE) at three different angular velocities (60, 120, and 180°/s) on the dominant and non-dominant side using an isokinetic device.
Results
Analyses revealed significant, large-sized group × velocity × limb interactions for EF, EE, and EF–EE ratio, KF, KE, and KF–KE ratio (p ≤ 0.03; 0.91 ≤ d ≤ 1.75). Post-hoc analyses indicated that amateur boxers displayed the largest EE strength values on the non-dominant side at ≤ 120°/s and the dominant side at ≥ 120°/s (p < 0.03; 1.21 ≤ d ≤ 1.59). The largest EF–EE strength ratios were observed on amateur boxers’ and judokas’ non-dominant side at ≥ 120°/s (p < 0.04; 1.36 ≤ d ≤ 2.44). Further, we found lower KF–KE strength measures in karate (p < 0.04; 1.12 ≤ d ≤ 6.22) and judo athletes (p ≤ 0.03; 1.60 ≤ d ≤ 5.31) particularly on the non-dominant side.
Conclusions
The present findings indicated combat sport-specific differences in maximal isokinetic strength measures of EF, EE, KF, and KE particularly in favor of amateur boxers on the non-dominant side.
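The d values reported in the results are standardized effect sizes. Assuming the pooled-standard-deviation formulation of Cohen's d (the usual variant for two-group comparisons; the sample data below are invented), a minimal sketch is:

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled
    standard deviation (Bessel-corrected group variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp
```

By the conventional benchmarks, d around 0.2 is small, 0.5 medium, and 0.8 or above large, which is why values such as d = 1.75 are described as large-sized.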
Purpose: To examine the effects of fatiguing isometric contractions on maximal eccentric strength and electromechanical delay (EMD) of the knee flexors in healthy young adults of different training status.
Methods: Seventy-five male participants (27.7 ± 5.0 years) were enrolled in this study and allocated to three experimental groups according to their training status: athletes (ATH, n = 25), physically active adults (ACT, n = 25), and sedentary participants (SED, n = 25). The fatigue protocol comprised intermittent isometric knee flexions (6-s contraction, 4-s rest) at 60% of the maximum voluntary contraction until failure. Pre- and post-fatigue, maximal eccentric knee flexor strength and EMDs of the biceps femoris, semimembranosus, and semitendinosus muscles were assessed during maximal eccentric knee flexor actions at 60, 180, and 300°/s angular velocity. An analysis of covariance was computed with baseline (unfatigued) data included as a covariate.
Results: Significant and large-sized main effects of group (p ≤ 0.017, 0.87 ≤ d ≤ 3.69) and/or angular velocity (p < 0.001, d = 1.81) were observed. Post hoc tests indicated that, regardless of angular velocity, maximal eccentric knee flexor strength was lower and EMD was longer in SED compared with ATH and ACT (p ≤ 0.025, 0.76 ≤ d ≤ 1.82) and in ACT compared with ATH (p ≤ 0.025, 0.76 ≤ d ≤ 1.82). Additionally, EMD at post-test was significantly longer at 300°/s compared with 60 and 180°/s (p < 0.001, 2.95 ≤ d ≤ 4.64) and at 180°/s compared with 60°/s (p < 0.001, d = 2.56), irrespective of training status.
Conclusion: The main outcomes revealed significantly higher maximal eccentric strength and shorter eccentric EMDs of the knee flexors in individuals with higher training status (i.e., athletes) following fatiguing exercise. Therefore, higher training status is associated with better neuromuscular functioning (i.e., strength, EMD) of the hamstring muscles under fatigued conditions. Future longitudinal studies are needed to substantiate the clinical relevance of these findings.
Regulatory focus is a motivational construct that describes humans’ motivational orientation during goal pursuit. It is conceptualized as a chronic, trait-like, as well as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary–Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideals and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version to assess chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure’s factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
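The multilevel analyses rest on partitioning the variance in momentary regulatory focus into between-person and within-person components; the intraclass correlation summarizes that split. The paper fits full multilevel models, but for balanced person-by-occasion data the classic one-way ANOVA estimator gives the same idea in a few lines (a sketch with invented data, not the authors' code):

```python
def icc_oneway(data):
    """One-way random-effects ICC(1) for balanced data given as a list
    of per-person lists of repeated measurements. Values near 1 mean
    most variance is between persons; values near 0 mean most variance
    is within-person fluctuation across occasions."""
    n = len(data)        # number of persons
    k = len(data[0])     # occasions per person (balanced design assumed)
    grand = sum(sum(p) for p in data) / (n * k)
    person_means = [sum(p) / k for p in data]
    ssb = k * sum((m - grand) ** 2 for m in person_means)   # between-person SS
    ssw = sum((x - m) ** 2 for p, m in zip(data, person_means) for x in p)  # within
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)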
Commentary
(2020)
While children acquire new words and simple sentence structures extremely fast and without much effort, the ability to process complex sentences develops rather late in life. Although the conjoint occurrence of brain-structural and brain-functional changes, the decrease of plasticity, and changes in cognitive abilities suggests a certain causality between these processes, concrete evidence for the relation between brain development, language processing, and language performance is rare. Therefore, the current dissertation investigates the tripartite relationship between behavior (in the form of language performance and cognitive maturation as a prerequisite for language processing), brain structure (in the form of gray matter maturation), and brain function (in the form of brain activation evoked by complex sentence processing). Previous developmental studies indicate a missing increase of activation in accordance with sentence complexity (functional selectivity) in language-relevant brain areas in children. To determine the factors contributing to the functional development of language-relevant brain areas, different methodologies and data acquisition techniques were used to investigate the processing of center-embedded sentences in 5- and 6-year-old children, 7- and 8-year-old children, and adults. Behavioral results indicate that children between 5 and 8 years show difficulties in processing doubly embedded sentences and that their performance for this type of sentence is positively correlated with digit span. In 7- and 8-year-old children, it was found that especially the processing of long-distance relations between the initial phrase and its corresponding verb appears to be associated with the subject’s verbal working memory capacity. In contrast, children’s performance for doubly embedded sentences in the younger age group positively correlated with their performance in a standardized sentence comprehension test.
This finding supports the hypothesis that processing difficulties in this age group may be mainly attributed to difficulties in processing case marking information. These findings are discussed with respect to current accounts of language and working memory development. A second study aimed at investigating the structural maturation of brain areas involved in sentence comprehension. To do this, whole-brain magnetic resonance images from 59 children between 5 and 8 years were collected, and children’s gray matter was analyzed using voxel-based morphometry. Children’s grammatical proficiency was assessed by a standardized sentence comprehension test. A confirmatory factor analysis corroborated a grammar-relevant and a verbal working memory-relevant factor underlying the measured performance. While children’s ability to assign thematic roles is positively correlated with gray matter probability (GMP) in the left inferior temporal gyrus and the left inferior frontal gyrus, verbal working memory-related performance is positively correlated with GMP in the left parietal operculum extending into the posterior superior temporal gyrus. These areas have previously been shown to be differentially engaged in adults’ complex sentence processing. Thus, the findings of the second study suggest a specific correspondence between children’s GMP in language-relevant brain regions and the differential cognitive abilities which underlie complex sentence comprehension. In a third study, functional brain activity during the processing of center-embedded sentences was investigated in three different age groups (5–6 years, 7–8 years, and adults). Although all age groups engage a qualitatively comparable network of the left pars opercularis (PO), the left inferior parietal lobe extending into the posterior superior temporal gyrus (IPL/pSTG), the supplementary motor area (SMA), and the cerebellum, functional selectivity of these regions was only observable in adults.
However, functional activation of the language-related regions (PO and IPL/pSTG) predicted sentence comprehension performance in all age groups. To address the question of the complex interplay between different maturational factors, a fourth study analyzed the predictive power of gray matter probability, verbal working memory capacity, and behavioral differences in performance for simple and complex sentences for the functional selectivity of each activated region. These analyses revealed that the establishment of adult-like functional selectivity for complex sentences is predicted by a reduction of the left PO’s gray matter probability across age groups, while that of the IPL/pSTG is additionally predicted by verbal working memory capacity. Taking all findings together, the current thesis provides evidence that both structural brain maturation and verbal working memory expansion provide the basis for the emergence of functional selectivity in language-related brain regions, leading to more efficient sentence processing during development.
Background
The aim of this study was to analyze the shoulder functional profile (rotation range of motion [ROM] and strength), upper and lower body performance, and throwing speed of U13 versus U15 male handball players, and to establish the relationship between these measures of physical fitness and throwing speed.
Methods
One hundred and nineteen young male handball players (under-13 (U13) [n = 85] and under-15 (U15) [n = 34]) volunteered to participate in this study. The participating athletes had a mean background of systematic handball training of 5.5 ± 2.8 years and exercised on average 540 ± 10.1 min per week, including sport-specific team handball training and strength and conditioning programs. Players were tested for passive shoulder range of motion (ROM) for both internal (IR) and external rotation (ER) and isometric strength (i.e., IR and ER) of the dominant/non-dominant shoulders, overhead medicine ball throw (OMB), hip isometric abductor (ABD) and adductor (ADD) strength, hip ROM, jumps (countermovement jump [CMJ] and triple leg-hop [3H] for distance), a linear sprint test, a modified 505 change-of-direction (COD) test, and handball throwing speed (7 m [HT7] and 9 m [HT9]).
Results
U15 players outperformed U13 players in upper body (i.e., HT7 and HT9 speed, OMB, absolute IR and ER strength of the dominant and non-dominant sides; Cohen’s d: 0.76–2.13) and lower body (i.e., CMJ, 3H, 20-m sprint and COD, hip ABD and ADD; d: 0.70–2.33) performance measures. Regarding shoulder ROM outcomes, a lower IR ROM was found on the dominant side in the U15 group compared to the U13 group, and a higher ER ROM on both sides in U15 (d: 0.76–1.04). It seems that primarily anthropometric characteristics (i.e., body height, body mass) and upper body strength/power (OMB distance) are the most important factors explaining the variance in throwing speed in male handball players, particularly in U13.
Conclusions
Findings from this study imply that regular performance monitoring is important for performance development and for minimizing injury risk of the shoulder in both age categories of young male handball players. Besides measures of physical fitness, anthropometric data should be recorded because handball throwing performance is related to these measures.
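The group comparisons above are reported as Cohen's d effect sizes. As a minimal illustration of how such an effect size is computed with a pooled standard deviation, consider the sketch below; the group means, SDs, and sample sizes are invented for illustration and are not values from the study:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d using the pooled standard deviation of both groups."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Illustrative values only (not taken from the study):
d = cohens_d(mean_a=80.0, sd_a=6.0, n_a=34,   # hypothetical U15 group
             mean_b=74.0, sd_b=7.0, n_b=85)   # hypothetical U13 group
print(round(d, 2))  # prints 0.89
```

By the conventional benchmarks, values around 0.8 and above, as in the study's d range of 0.70–2.33, are considered large effects.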
Research on problem solving offers insights into how humans process task-related information and which strategies they use (Newell and Simon, 1972; Öllinger et al., 2014). Problem solving can be defined as the search for possible changes in one's mind (Kahneman, 2003). In a recent study, Adams et al. (2021) assessed whether the predominant problem-solving strategy when making changes involves adding or subtracting elements. To do this, they used several examples of simple problems, such as editing text or making visual patterns symmetrical, either in naturalistic settings or online. The essence of the authors' findings is a strong preference to add rather than subtract elements across a diverse range of problems, including stabilizing artifacts, creating symmetrical patterns, and editing texts. More specifically, they succeeded in demonstrating that “participants were less likely to identify advantageous subtractive changes when the task did not (vs. did) cue them to consider subtraction, when they had only one opportunity (vs. several) to recognize the shortcomings of an additive search strategy or when they were under a higher (vs. lower) cognitive load” (Adams et al., 2021, p. 258).
Addition and subtraction are generally defined as de-contextualized mathematical operations using abstract symbols (Russell, 1903/1938). Nevertheless, understanding of both symbols and operations is informed by everyday activities, such as making or breaking objects (Lakoff and Núñez, 2000; Fischer and Shaki, 2018). The universal attribution of “addition bias” or “subtraction neglect” to problem solving activities is perhaps a convenient shorthand but it overlooks influential framing effects beyond those already acknowledged in the report and the accompanying commentary (Meyvis and Yoon, 2021).
Most importantly, while Adams et al.'s study addresses an important issue, their method of verbally instructing participants, together with a lack of control over several known biases, might render their findings less than conclusive. Below, we discuss our concerns arising from the identified biases, namely those regarding the instructions and the experimental materials. Moreover, we refer to research from mathematical cognition that provides new insights into Adams et al.'s findings.
Tracheotomized patients who present with both dysphagia and respiratory deficits often have difficulty adapting to translaryngeal breathing after decannulation. We developed a decannulation protocol for this patient group that can optionally be integrated into our existing tracheal cannula management. If a patient meets the criteria defined for this purpose, a placeholder is inserted under laryngoscopic control and remains in situ for up to 3 days. During this trial decannulation phase, respiratory functions and saliva management are closely monitored. Based on this evaluation, the decision for or against definitive decannulation is made. We present the procedure, the sets of criteria, and the evaluation parameters for this trial decannulation phase and illustrate the process with two case examples.
Introduction: We conducted a case study to examine the feasibility and safety of high-intensity interval training (HIIT) with increased inspired oxygen content in a colon cancer patient undergoing chemotherapy. A secondary purpose was to investigate the effects of such training regimen on physical functioning.
Case presentation: A female patient (51 years; 49.1 kg; 1.65 m; tumor stage: pT3, pN2a (5/29), pM1a (HEP), L0, V0, R0) performed 8 sessions of HIIT (5 × 3 minutes at 90% of Wmax, separated by 2 minutes at 45% Wmax) with an increased inspired oxygen fraction of 30%. Patient safety, training adherence, cardiorespiratory fitness (peak oxygen uptake and maximal power output during an incremental cycle ergometer test), autonomous nervous function (i.e., heart rate variability during an orthostatic test) as well as questionnaire-assessed quality of life (EORTC QLQ-C30) were evaluated before and after the intervention.
No adverse events were reported throughout the training intervention and a 3-month follow-up. While the patient attended all sessions, adherence to total training time was only 51% (102 of 200 minutes; mean training time per session 12:44 min:sec). VO2peak and Wmax increased by 13% (from 23.0 to 26.1 mL·kg−1·min−1) and 21% (from 83 to 100 W), respectively. Heart rate variability, represented by the root mean square of successive differences, increased after the training by 143% and 100% in supine and upright positions, respectively. The EORTC QLQ-C30 scores for physical functioning (7.5%) and global health (10.7%) improved, while social function decreased (17%).
Conclusions: Our results show that an already short period of HIIT with concomitant hyperoxia was safe and feasible for a patient undergoing chemotherapy for colon cancer. Furthermore, even the low overall training adherence of only 51% and the short training time per session (∼13 minutes) were sufficient to induce clinically meaningful improvements in physical functioning. However, this case also underlines that intensity and/or length of the HIIT bouts might need further adjustment to increase training compliance.
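Heart rate variability in this case report is summarized by the root mean square of successive differences (RMSSD) between RR intervals. The sketch below shows that standard computation; the RR interval series is made up for illustration and is not patient data:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d**2 for d in diffs) / len(diffs))

# Hypothetical RR interval series in milliseconds:
rr = [812, 790, 830, 805, 820]
print(round(rmssd(rr), 1))  # prints 27.1
```

A post-intervention increase in RMSSD, as reported above, is commonly interpreted as increased parasympathetic (vagal) activity.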
Children’s physical fitness development and related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third-graders. Within this single prepubertal year of life and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
Timing of initial school enrollment may vary considerably for various reasons such as early or delayed enrollment and skipped or repeated school classes. Accordingly, the age range within school grades includes older- (OTK) and younger-than-keyage (YTK) children. Hardly any information is available on the impact of timing of school enrollment on physical fitness. There is evidence from a related research topic showing large differences in academic performance between OTK and YTK children versus keyage children. Thus, the aim of this study was to compare the physical fitness of OTK (N = 26,540) and YTK (N = 2586) children versus keyage children (N = 108,295) in a representative sample of German third graders. Physical fitness tests comprised cardiorespiratory endurance, coordination, speed, and lower and upper limb muscle power. Predictions of physical fitness performance for YTK and OTK children were estimated using data from keyage children by taking age, sex, school, and assessment year into account. Data were annually recorded between 2011 and 2019. The difference between observed and predicted z-scores yielded a delta z-score that was used as a dependent variable in the linear mixed models. Findings indicate that OTK children showed poorer performance compared to keyage children, especially in coordination, and that YTK children outperformed keyage children, especially in coordination. Teachers should be aware that OTK children show poorer physical fitness performance compared to keyage children.
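The delta z-score approach described above (observed performance minus the performance predicted from the keyage reference group) can be sketched as follows. The ordinary least-squares fit here is a deliberately simplified stand-in for the study's linear mixed models, and all numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (simplified stand-in for a mixed model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Invented keyage reference data: age (years) vs. fitness z-score
ages = [8.0, 8.2, 8.4, 8.6, 8.8, 9.0]
zs = [-0.20, -0.10, 0.00, 0.05, 0.15, 0.25]
a, b = fit_line(ages, zs)

# Delta z-score for a hypothetical older-than-keyage child:
observed_z, age = -0.30, 9.4
predicted_z = a + b * age
delta_z = observed_z - predicted_z  # negative: poorer than expected for this age
```

A negative delta z-score, as found for OTK children in the study, means a child performs below what the age trend in keyage children would predict.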
Midbrain dopamine neurons invigorate responses by signaling opportunity costs (tonic dopamine) and promote associative learning by encoding a reward prediction error signal (phasic dopamine). Recent studies on Bayesian sensorimotor control have implicated midbrain dopamine concentration in the integration of prior knowledge and current sensory information. The present behavioral study addressed the contributions of tonic and phasic dopamine in a Bayesian decision-making task by alternating reward magnitude and inferring reward prediction errors. Twenty-four participants were asked to indicate the position of a hidden target stimulus under varying prior and likelihood uncertainty. Trial-by-trial rewards were allocated based on performance and two different reward maxima. Overall, participants’ behavior agreed with Bayesian decision theory, but indicated excessive reliance on likelihood information. These results thus oppose accounts of statistically optimal integration in sensorimotor control, and suggest that the sensorimotor system is subject to additional decision heuristics. Moreover, higher reward magnitude was not observed to induce enhanced response vigor, and was associated with less Bayes-like integration. In addition, the weighting of prior knowledge and current sensory information proceeded independently of reward prediction errors. Taken together, these findings suggest that the process of combining prior and likelihood uncertainties in sensorimotor control is largely robust to variations in reward.
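The statistically optimal (Bayes-like) integration against which participants' behavior was compared weights prior and likelihood by their precisions (inverse variances). A minimal sketch for a Gaussian prior and Gaussian likelihood, with parameter values that are illustrative only:

```python
def bayes_combine(mu_prior, sigma_prior, mu_like, sigma_like):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian likelihood."""
    w_prior = sigma_like**2 / (sigma_prior**2 + sigma_like**2)
    mu_post = w_prior * mu_prior + (1 - w_prior) * mu_like
    var_post = (sigma_prior**2 * sigma_like**2) / (sigma_prior**2 + sigma_like**2)
    return mu_post, var_post

# Illustrative: uncertain prior, reliable sensory cue -> estimate leans toward the cue
mu, var = bayes_combine(mu_prior=0.0, sigma_prior=2.0, mu_like=1.0, sigma_like=1.0)
print(mu)  # prints 0.8
```

"Excessive reliance on likelihood information," as reported above, corresponds to participants behaving as if `w_prior` were smaller than this optimal precision-based weight.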
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories on cognition that emphasize abstract symbol manipulations often face a dead end: The symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. Concrete, action-related words have received convincing evidence to rely on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories on cognition. Fully embodied abstraction mechanisms were formulated but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, the idea of integration areas, such as convergence zones or the ‘hub and spoke’ model, do not only appear like the most promising candidates to account for the discrepancies between concrete and abstract concepts but could also help to unite the field of cognitive science again. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions and directions for future research.