Background: Infection with human immunodeficiency virus (HIV) affects muscle mass, altering independent activities of people living with HIV (PLWH). Resistance training alone (RT) or combined with aerobic exercise (AE) is linked to improved muscle mass and strength maintenance in PLWH. These exercise benefits have been the focus of different meta-analyses, although only a limited number of studies published up to 2013/14 had been identified. An up-to-date systematic review and meta-analysis concerning the effect of RT alone or combined with AE on strength parameters and hormones is of high value, since additional and more recent studies dealing with these types of exercise in PLWH have been published. Methods: A systematic search for randomized controlled trials evaluating the effects of RT alone, AE alone or the combination of both (AERT) in PLWH was performed in five web databases up to December 2017. Risk of bias and study quality were assessed using the PEDro scale. The weighted mean difference (WMD) between baseline and post-intervention values was calculated, and the I² statistic was used to assess heterogeneity.
Results: Thirteen studies reported strength outcomes. Eight studies presented a low risk of bias. The overall change in upper body strength was 19.3 kg (95% CI: 9.8 to 28.8, p < 0.001) after AERT and 17.5 kg (95% CI: 16 to 19.1, p < 0.001) for RT. Lower body change was 29.4 kg (95% CI: 18.1 to 40.8, p < 0.001) after RT and 10.2 kg (95% CI: 6.7 to 13.8, p < 0.001) for AERT. Changes were higher after controlling for the risk of bias in upper and lower body strength and for supervised exercise in lower body strength. A significant change towards lower levels of IL-6 was found (-2.4 ng/dl; 95% CI: -2.6 to -2.1; p < 0.001). Conclusion: Both resistance training alone and combined with aerobic exercise showed a positive change when studies with low risk of bias and professional supervision were analyzed, improving upper and, more critically, lower body muscle strength. This study also found that exercise had a lowering effect on IL-6 levels in PLWH.
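As a rough illustration of the pooling step described above, the sketch below computes a weighted mean difference and the I² heterogeneity statistic under a fixed-effect, inverse-variance model; the per-study values are hypothetical placeholders, not data from this review.

```python
import numpy as np

# Illustrative per-study baseline-to-post mean differences (kg) and their
# standard errors for a handful of hypothetical trials.
mean_diff = np.array([18.0, 21.5, 15.2, 19.8])
se = np.array([2.1, 3.4, 2.8, 1.9])

# Inverse-variance weights (fixed-effect model).
w = 1.0 / se**2

# Pooled weighted mean difference and its 95% CI.
wmd = np.sum(w * mean_diff) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled

# Cochran's Q and the I^2 heterogeneity statistic derived from it.
q = np.sum(w * (mean_diff - wmd)**2)
df = len(mean_diff) - 1
i_squared = max(0.0, (q - df) / q) * 100.0

print(f"WMD = {wmd:.1f} kg (95% CI: {ci_low:.1f} to {ci_high:.1f}), I^2 = {i_squared:.0f}%")
```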
Accuracy of training recommendations based on a treadmill multistage incremental exercise test
(2018)
Competitive runners will occasionally undergo exercise testing in a laboratory setting to obtain predictive and prescriptive information regarding their performance. The present research aimed to assess whether the physiological demands of lab-based treadmill running (TM) can simulate those of over-ground (OG) running using a commonly used protocol. Fifteen healthy volunteers with a weekly mileage of ≥ 20 km over the past 6 months and treadmill experience participated in this cross-sectional study. Two stepwise incremental tests until volitional exhaustion were performed in a fixed order within one week, one in an outpatient clinic research laboratory and one on an outdoor athletic track. Running velocity (IATspeed), heart rate (IATHR) and lactate concentration at the individual anaerobic threshold (IATbLa) were primary endpoints. Additionally, distance covered (DIST), maximal heart rate (HRmax), maximal blood lactate concentration (bLamax) and rate of perceived exertion (RPE) at IATspeed were analyzed. IATspeed, DIST and HRmax were not statistically significantly different between conditions, whereas bLamax and RPE at IATspeed differed significantly (p < 0.05). Apart from RPE at IATspeed, IATspeed, DIST, HRmax and bLamax correlated strongly between conditions (r = 0.815–0.988). The high reliability between conditions provides strong evidence that treadmill running is physiologically comparable to OG running and that training recommendations can be made with assurance.
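The between-condition comparison described above (difference testing plus correlation) can be illustrated with a minimal sketch; the IATspeed values below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical IATspeed values (km/h) for the same runners measured on the
# treadmill (TM) and over ground (OG); values are made up for illustration.
tm = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.1, 12.5])
og = np.array([12.3, 13.6, 11.7, 14.2, 13.0, 13.4, 12.6])

# Within-subject comparison of conditions (paired t-test) ...
t, p = stats.ttest_rel(tm, og)
# ... and between-condition agreement (Pearson correlation).
r, p_r = stats.pearsonr(tm, og)

print(f"paired t = {t:.2f} (p = {p:.3f}), r = {r:.3f}")
```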
Hatred directed at members of groups due to their origin, race, gender, religion, or sexual orientation is not new, but it has taken on a new dimension in the online world. To date, very little is known about online hate among adolescents. It is also unknown how online disinhibition might influence the association between being bystanders and being perpetrators of online hate. Thus, the present study focused on examining the associations among being bystanders of online hate, being perpetrators of online hate, and the moderating role of toxic online disinhibition in the relationship between being bystanders and perpetrators of online hate. In total, 1480 students aged between 12 and 17 years were included in this study. Results revealed positive associations between being online hate bystanders and perpetrators, regardless of whether adolescents had or had not been victims of online hate themselves. The results also showed an association between toxic online disinhibition and online hate perpetration. Further, toxic online disinhibition moderated the relationship between being bystanders of online hate and being perpetrators of online hate. Implications for prevention programs and future research are discussed.
Static (one-legged stance) and dynamic (star excursion balance) postural control tests were performed by 14 adolescent athletes with and 17 without back pain to determine reproducibility. The total displacement, mediolateral and anterior-posterior displacements of the centre of pressure in mm for the static test, and the normalized and composite reach distances for the dynamic test were analysed. Intraclass correlation coefficients, 95% confidence intervals, and a Bland-Altman analysis were calculated for reproducibility. On the static test, intraclass correlation coefficients of 0.54 to 0.65 and 0.61 to 0.69 were obtained for subjects with back pain, and 0.45 to 0.49 and 0.52 to 0.60 for those without, for the right and left legs, respectively. Likewise, on the dynamic test, values of 0.79 to 0.88 and 0.75 to 0.93 were obtained for subjects with back pain, and 0.61 to 0.82 and 0.60 to 0.85 for those without, for the right and left legs, respectively. No systematic bias was observed between test and retest on either the static or the dynamic test. The one-legged stance and star excursion balance tests thus show fair to excellent reliability and can be used as measures of postural control in adolescent athletes with and without back pain.
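A minimal sketch of the two reliability analyses named above, a two-way random-effects ICC(2,1) and a Bland-Altman bias check, on hypothetical test-retest data (the displacement values are made up, not the study's measurements):

```python
import numpy as np

# Hypothetical test-retest centre-of-pressure displacements (mm): one row
# per athlete, columns are session 1 and session 2.
x = np.array([[410., 432.], [388., 395.], [455., 440.],
              [402., 418.], [377., 390.], [421., 409.]])
n, k = x.shape

# Mean squares for a two-way ANOVA decomposition (subjects x sessions).
grand = x.mean()
ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between sessions
ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

# ICC(2,1), absolute agreement, single measurement.
icc_21 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Bland-Altman: mean difference (systematic bias) and limits of agreement.
diff = x[:, 1] - x[:, 0]
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"ICC(2,1) = {icc_21:.2f}, bias = {bias:.1f} mm "
      f"(LoA: {bias - loa:.1f} to {bias + loa:.1f})")
```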
Do properties of individual languages shape the mechanisms by which they are processed? By virtue of their non-concatenative morphological structure, the recognition of complex words in Semitic languages has been argued to rely strongly on morphological information and on decomposition into root and pattern constituents. Here, we report results from a masked priming experiment in Hebrew in which we contrasted verb forms belonging to two morphological classes, Paal and Piel, which display similar properties, but crucially differ on whether they are extended to novel verbs. Verbs from the open-class Piel elicited familiar root priming effects, but verbs from the closed-class Paal did not. Our findings indicate that, similarly to other (e.g., Indo-European) languages, down-to-the-root decomposition in Hebrew does not apply to stems of non-productive verbal classes. We conclude that the Semitic word processor is less unique than previously thought: Although it operates on morphological units that are combined in a non-linear way, it engages the same universal mechanisms of storage and computation as those seen in other languages.
Understanding a sentence and integrating it into the discourse depends upon the identification of its focus, which, in spoken German, is marked by accentuation. In the case of written language, which lacks explicit cues to accent, readers have to draw on other kinds of information to determine the focus. We study the joint or interactive effects of two kinds of information that have no direct representation in print but have each been shown to be influential in the reader's text comprehension: (i) the (low-level) rhythmic-prosodic structure that is based on the distribution of lexically stressed syllables, and (ii) the (high-level) discourse context that is grounded in the memory of previous linguistic content. Systematically manipulating these factors, we examine the way readers resolve a syntactic ambiguity involving the scopally ambiguous focus operator auch (engl. "too") in both oral (Experiment 1) and silent reading (Experiment 2). The results of both experiments attest that discourse context and local linguistic rhythm conspire to guide the syntactic and, concomitantly, the focus-structural analysis of ambiguous sentences. We argue that reading comprehension requires the (implicit) assignment of accents according to the focus structure and that, by establishing a prominence profile, the implicit prosodic rhythm directly affects accent assignment.
Many educational technology proponents support the Technological Pedagogical Content Knowledge (TPACK) model as a way to conceptualize teaching with technology, but recent TPACK research shows a need for empirical studies regarding the development of this knowledge. This proof-of-concept study applies mixed methods to investigate the meta-cognitive awareness produced by teachers who participate in the Graphic Assessment of TPACK Instrument (GATI). This process involves creating graphical representations (circles of differing sizes and the degree of their overlap) that represent what teachers understand to be their current and aspired TPACK. This study documented teachers' explanations during a think-aloud procedure as they created their GATI figures. The in-depth data from two German teachers who participated in the process captured the details of their experience and demonstrated the potential of the GATI to support teachers in reflecting on their professional knowledge and in determining their own professional development activities. These findings will be informative to future pilot studies involving the larger design of the GATI process, to better understand the role of teachers' meta-conceptual awareness, and to better ascertain how the GATI might be used to support professional development on a larger scale.
Between-school variation in students' achievement, motivation, affect, and learning strategies
(2017)
To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these parameters for U.S. samples and for achievement as the outcome. This paper and the online supplementary materials provide design parameters for 81 countries in three broad outcome categories (achievement, affect and motivation, and learning strategies) for domain-general and domain-specific (mathematics, reading, and science) measures. Sociodemographic characteristics were used as covariates. Data from representative samples of 15-year-old students stemmed from five cycles of the Programme for International Student Assessment (PISA; total number of students/schools: 1,905,147/70,098). Between-school differences as well as the amount of variance explained at L1 and L2 varied widely across countries and educational outcomes, demonstrating the limited generalizability of design parameters across these dimensions. The use of the design parameters to plan group-randomized trials is illustrated.
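As a sketch of how such design parameters are typically estimated, the following code fits a two-level model with statsmodels and derives the between-school variance share (the intraclass correlation) and the L1 variance explained by a covariate. The simulated data, variable names, and effect sizes are assumptions for illustration, not PISA values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: achievement scores nested in schools,
# with a student-level SES covariate.
rng = np.random.default_rng(1)
n_schools, n_students = 50, 30
school = np.repeat(np.arange(n_schools), n_students)
u = rng.normal(0, 0.5, n_schools)[school]          # school effect (L2)
ses = rng.normal(0, 1, n_schools * n_students)     # student covariate (L1)
y = 500 + 20 * ses + 30 * u + rng.normal(0, 40, len(ses))
df = pd.DataFrame({"y": y, "ses": ses, "school": school})

# Unconditional model: between-school variance share (intraclass correlation).
m0 = smf.mixedlm("y ~ 1", df, groups="school").fit()
tau2, sigma2 = float(m0.cov_re.iloc[0, 0]), m0.scale
icc = tau2 / (tau2 + sigma2)

# Conditional model: how much L1 residual variance the covariate explains.
m1 = smf.mixedlm("y ~ ses", df, groups="school").fit()
r2_l1 = 1 - m1.scale / sigma2
print(f"ICC = {icc:.3f}, L1 variance explained by SES = {r2_l1:.3f}")
```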
This study investigates the comprehension of wh-questions in individuals with aphasia (IWA) speaking Turkish, a non-wh-movement language, and German, a wh-movement language. We examined six German-speaking and 11 Turkish-speaking IWA using picture-pointing tasks. Findings from our experiments show that the Turkish IWA responded more accurately to both object who and object which questions than to subject questions, while the German IWA performed better for subject which questions than in all other conditions. Using random forest models, a machine learning technique used in tree-structured classification, on the individual data revealed that both the Turkish and German IWA’s response accuracy is largely predicted by the presence of overt and unambiguous case marking. We discuss our results with regard to different theoretical approaches to the comprehension of wh-questions in aphasia.
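A minimal sketch of the random-forest approach mentioned above, using scikit-learn on simulated trial-level data; the features, effect structure, and values are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical item-level data: each row is one trial from one individual
# with aphasia; features encode question type and case-marking properties.
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),   # question type: 0 = who, 1 = which
    rng.integers(0, 2, n),   # role: 0 = subject, 1 = object
    rng.integers(0, 2, n),   # overt, unambiguous case marking present?
])
# Toy outcome: response accuracy driven mainly by the case-marking feature.
y = (X[:, 2] | (rng.random(n) < 0.3)).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in zip(["question type", "role", "case marking"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```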
The starting point of this contribution is the potential risk to health and performance from the combination of elite sporting careers with the pursuit of education. In European sport science and politics, structural measures to promote dual careers in elite sports have been discussed increasingly of late. In addition to organisational measures, there are calls for educational-psychological intervention programmes supporting the successful management of dual careers at the individual level. This paper presents an appropriate intervention programme and its evaluation: stress-resistance training for elite athletes (SRT-EA). It comprises 10 units, each lasting 90 minutes. It is intended for athletes and aims to improve their resistance to chronic stress. The evaluation was carried out in a quasi-experimental design, with three points of measurement (baseline, immediately after, and three months after) and two non-randomised groups: an intervention group (n = 128) and an untreated control group (n = 117). Participants were between 13 and 20 years of age (53.5% male) and represented various Olympic sports. Outcome variables were assessed with questionnaires. Significant short- and mid-term intervention effects were explored. The intervention increased stress-related knowledge, general self-efficacy, and stress sensitivity. Chronic stress level, stress symptoms, and stress reactivity were reduced. In line with the intention of the intervention, the results showed short- and mid-term, small to medium-sized effects. Accordingly, separate measurements at the end of the intervention and three months later showed mostly positive subjective experiences. Thus, the results reinforce the hope that educational-psychological stress-management interventions can support dual careers.
The aim of our study was to examine the extent to which linguistic approaches to sentence comprehension deficits in aphasia can account for differential impairment patterns in the comprehension of wh-questions in bilingual persons with aphasia (PWA). We investigated the comprehension of subject and object wh-questions in both Turkish, a wh-in-situ language, and German, a wh-fronting language, in two bilingual PWA using a sentence-to-picture matching task. Both PWA showed differential impairment patterns in their two languages. SK, an early bilingual PWA, had particular difficulty comprehending subject which-questions in Turkish but performed normally across all conditions in German. CT, a late bilingual PWA, performed more poorly for object which-questions in German than in all other conditions, whilst in Turkish his accuracy was at chance level across all conditions. We conclude that the observed patterns of selective cross-linguistic impairments cannot solely be attributed either to difficulty with wh-movement or to problems with the integration of discourse-level information. Instead, our results suggest that differences between our PWA's individual bilingualism profiles (e.g. onset of bilingualism, premorbid language dominance) considerably affected the nature and extent of their impairments.
Using behavioral observation for the longitudinal study of anger regulation in middle childhood
(2017)
Assessing anger regulation via self-reports is fraught with problems, especially among children. Behavioral observation provides an ecologically valid alternative for measuring anger regulation. The present study uses data from two waves of a longitudinal study to present a behavioral observation approach for measuring anger regulation in middle childhood. At T1, 599 children from Germany (6–10 years old) were observed during an anger eliciting task, and the use of anger regulation strategies was coded. At T2, 3 years later, the observation was repeated with an age-appropriate version of the same task. Partial metric measurement invariance over time demonstrated the structural equivalence of the two versions. Maladaptive anger regulation between the two time points showed moderate stability. Validity was established by showing correlations with aggressive behavior, peer problems, and conduct problems (concurrent and predictive criterion validity). The study presents an ecologically valid and economical approach to assessing anger regulation strategies in anger-eliciting situations.
Schools are a major context for academic and socio-emotional development, but also an important acculturative context. This is notably the case in adolescence, which is a critical period for the development of a social and ethnic identity, as well as moral reasoning and intergroup attitudes. How schools approach cultural diversity issues is therefore likely to affect these developmental and acculturative processes and adaptation outcomes. In the present article, the manifestation and effects of the most prominent approaches to cultural diversity, namely those guided by a perspective of equality and inclusion, and those guided by a perspective of cultural pluralism, are reviewed and compared in the context of multi-ethnic schools. The aim is to explore when and how the potential of cultural diversity can best flourish, enhancing the academic and socio-emotional development of culturally diverse students.
Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI: one group consistently produced significantly more errors of CLI on both tasks than the other (thereby replicating our findings from a bilingual picture naming task). This striking group difference in language control ability can only be explained by differences in cognitive control, not in language proficiency or language mode.
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria, and 66,497 controls were matched (1:1) to HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after the index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
Changes in free symptom attributions in hypochondriasis after cognitive therapy and exposure therapy
(2016)
Background: Cognitive-behavioural therapy can change dysfunctional symptom attributions in patients with hypochondriasis. Past research has used forced-choice answer formats, such as questionnaires, to assess these misattributions; however, with this approach, idiosyncratic attributions cannot be assessed. Free associations are an important complement to existing approaches that assess symptom attributions. Aims: With this study, we contribute to the current literature by using an open-response instrument to investigate changes in freely associated attributions after exposure therapy (ET) and cognitive therapy (CT) compared with a wait list (WL). Method: The current study is a re-examination of a formerly published randomized controlled trial (Weck, Neng, Richtberg, Jakob and Stangier, 2015) that investigated the effectiveness of CT and ET. Seventy-three patients with hypochondriasis were randomly assigned to CT, ET or a WL, and completed a 12-week treatment (or waiting period). Before and after the treatment or waiting period, patients completed an Attribution task in which they had to spontaneously attribute nine common bodily sensations to possible causes in an open-response format. Results: Compared with the WL, both CT and ET reduced the frequency of somatic attributions regarding severe diseases (CT: Hedges's g = 1.12; ET: Hedges's g = 1.03) and increased the frequency of normalizing attributions (CT: Hedges's g = 1.17; ET: Hedges's g = 1.24). Only CT changed the attributions regarding moderate diseases (Hedges's g = 0.69). Changes in somatic attributions regarding mild diseases and psychological attributions were not observed. Conclusions: Both CT and ET are effective for treating freely associated misattributions in patients with hypochondriasis. This study supplements research that used a forced-choice assessment.
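The Hedges's g values reported above are standardized mean differences with a small-sample correction; a minimal sketch of the computation (the group summaries below are made up, not the trial's data):

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # correction factor J
    return j * d

# Hypothetical change scores: treatment group vs. wait list.
print(f"g = {hedges_g(4.2, 2.1, 25, 1.8, 2.3, 24):.2f}")
```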
This paper examines phonological phrasing in the Kwa language Akan. Regressive [+ATR] vowel harmony between words (RVH) serves as a hitherto unreported diagnostic of phonological phrasing. In this paper I discuss VP-internal and NP-internal structures, as well as SVO(O) and serial verb constructions. RVH is a general process in Akan grammar, although it is blocked in certain contexts. The analysis of phonological phrasing relies on universal syntax-phonology mapping constraints whereby lexically headed syntactic phrases are mapped onto phonological phrases. Blocking contexts call for a domain-sensitive analysis of RVH assuming recursive prosodic structure which makes reference to maximal and non-maximal phonological phrases. It is proposed (i) that phonological phrase structure is isomorphic to syntactic structure in Akan, and (ii) that the process of RVH is blocked at the edge of a maximal phonological phrase; this is formulated in terms of a domain-sensitive CrispEdge constraint.
Feeling Half-Half?
(2018)
Growing up in multicultural environments, Turkish-heritage individuals in Europe face specific challenges in combining their multiple cultural identities to form a coherent sense of self. Drawing from social identity complexity, this study explores four modes of combining cultural identities and their variation in relational contexts. Problem-centered interviews with Turkish-heritage young adults in Austria revealed the preference for complex, supranational labels, such as multicultural. Furthermore, most participants described varying modes of combining cultural identities over time and across relational contexts. Social exclusion experiences throughout adolescence related to perceived conflict of cultural identities, whereas multicultural peer groups supported perceived compatibility of cultural identities. Findings emphasize the need for complex, multidimensional approaches to study ethnic minorities' combination of cultural identities.
Sensitivity to salience
(2018)
Sentence comprehension is optimised by indicating entities as salient through linguistic (i.e., information-structural) or visual means. We compare how salience of a depicted referent due to a linguistic (i.e., topic status) or visual cue (i.e., a virtual person’s gaze shift) modulates sentence comprehension in German. We investigated processing of sentences with varying word order and pronoun resolution by means of self-paced reading and an antecedent choice task, respectively. Our results show that linguistic as well as visual salience cues immediately speeded up reading times of sentences mentioning the salient referent first. In contrast, for pronoun resolution, linguistic and visual cues modulated antecedent choice preferences less congruently. In sum, our findings speak in favour of a significant impact of linguistic and visual salience cues on sentence comprehension, substantiating that salient information delivered via language as well as the visual environment is integrated in the current mental representation of the discourse.
In a self-paced reading study on German sluicing, Paape (2016) found that reading times were shorter at the ellipsis site when the antecedent was a temporarily ambiguous garden-path structure. As a post-hoc explanation of this finding, Paape assumed that the antecedent's memory representation was reactivated during syntactic reanalysis, making it easier to retrieve. In two eye tracking experiments, we subjected the reactivation hypothesis to further empirical scrutiny. Experiment 1, carried out in French, showed no evidence in favor of the reactivation hypothesis. Instead, results for one out of the three types of garden-path sentences that were tested suggest that subjects sometimes failed to resolve the temporary ambiguity in the antecedent clause, and subsequently failed to resolve the ellipsis. The results of Experiment 2, a conceptual replication of Paape's (2016) original study carried out in German, are compatible with the reactivation hypothesis, but leave open the possibility that the observed speedup for ambiguous antecedents may be due to occasional retrievals of an incorrect structure.
Much research on language control in bilinguals has relied on the interpretation of the costs of switching between two languages. Of the two types of costs that are linked to language control, switching costs are assumed to be transient in nature and modulated by trial-specific manipulations (e.g., by preparation time), while mixing costs are supposed to be more stable and less affected by trial-specific manipulations. The present study investigated the effect of preparation time on switching and mixing costs, revealing that both types of costs can be influenced by trial-specific manipulations.
Rhythm perception is assumed to be guided by a domain-general auditory principle, the Iambic/Trochaic Law, stating that sounds varying in intensity are grouped as strong-weak, and sounds varying in duration are grouped as weak-strong. Recently, Bhatara et al. (2013) showed that rhythmic grouping is influenced by native language experience, French listeners having weaker grouping preferences than German listeners. This study explores whether L2 knowledge and musical experience also affect rhythmic grouping. In a grouping task, French late learners of German listened to sequences of coarticulated syllables varying in either intensity or duration. Data on their language and musical experience were obtained by a questionnaire. Mixed-effect model comparisons showed influences of musical experience as well as L2 input quality and quantity on grouping preferences. These results imply that adult French listeners' sensitivity to rhythm can be enhanced through L2 and musical experience.
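Mixed-effect model comparisons of the kind mentioned above are commonly carried out as likelihood-ratio tests on nested models fitted by maximum likelihood. Below is a sketch with statsmodels on simulated data; for simplicity a continuous grouping-preference score is assumed here, whereas the study's trial-level responses may well have been categorical.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical data: grouping-preference scores per listener, with musical
# experience (years) and L2 input (hours/week) as predictors.
rng = np.random.default_rng(2)
n_subj, n_items = 40, 10
subj = np.repeat(np.arange(n_subj), n_items)
music = rng.normal(5, 3, n_subj)[subj]
l2 = rng.normal(10, 4, n_subj)[subj]
score = 0.1 * music + 0.05 * l2 + rng.normal(0, 1, len(subj))
df = pd.DataFrame({"score": score, "music": music, "l2": l2, "subj": subj})

# Fit nested models by maximum likelihood (reml=False) and compare them
# with a likelihood-ratio test: does musical experience improve the fit?
m_red = smf.mixedlm("score ~ l2", df, groups="subj").fit(reml=False)
m_full = smf.mixedlm("score ~ l2 + music", df, groups="subj").fit(reml=False)
lr = 2 * (m_full.llf - m_red.llf)
p = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```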
Background: Dementia is a psychiatric condition the development of which is associated with numerous aspects of life. Our aim was to estimate dementia risk factors in German primary care patients.
Methods: The case-control study included primary care patients (70-90 years) with first diagnosis of dementia (all-cause) during the index period (01/2010-12/2014) (Disease Analyzer, Germany), and controls without dementia matched (1:1) to cases on the basis of age, sex, type of health insurance, and physician. Practice visit records were used to verify that there had been 10 years of continuous follow-up prior to the index date. Multivariate logistic regression models were fitted with dementia as a dependent variable and the potential predictors.
Results: The mean age for the 11,956 cases and the 11,956 controls was 80.4 (SD: 5.3) years. 39.0% of them were male and 1.9% had private health insurance. In the multivariate regression model, the following variables were linked to a significant extent with an increased risk of dementia: diabetes (OR: 1.17; 95% CI: 1.10-1.24), lipid metabolism (1.07; 1.00-1.14), stroke incl. TIA (1.68; 1.57-1.80), Parkinson's disease (PD) (1.89; 1.64-2.19), intracranial injury (1.30; 1.00-1.70), coronary heart disease (1.06; 1.00-1.13), mild cognitive impairment (MCI) (2.12; 1.82-2.48), and mental and behavioral disorders due to alcohol use (1.96; 1.50-2.57). The use of statins (OR: 0.94; 0.90-0.99), proton-pump inhibitors (PPI) (0.93; 0.90-0.97), and antihypertensive drugs (0.96; 0.94-0.99) was associated with a decreased risk of developing dementia.
Conclusions: Risk factors for dementia found in this study are consistent with the literature. Nevertheless, the associations between statin, PPI and antihypertensive drug use, and decreased risk of dementia need further investigations.
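As an illustration of how such adjusted odds ratios are typically obtained, the sketch below fits a logistic regression with statsmodels and exponentiates the coefficients; the predictors, effect sizes, and data are simulated assumptions, not the Disease Analyzer data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-control data: dementia status with two example
# predictors (all values and effects here are made up).
rng = np.random.default_rng(3)
n = 2000
diabetes = rng.integers(0, 2, n)
statin = rng.integers(0, 2, n)
lin = -0.5 + 0.16 * diabetes - 0.06 * statin        # true log-odds
dementia = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)
df = pd.DataFrame({"dementia": dementia, "diabetes": diabetes,
                   "statin": statin})

fit = smf.logit("dementia ~ diabetes + statin", df).fit(disp=0)
ors = np.exp(fit.params)          # odds ratios
ci = np.exp(fit.conf_int())       # 95% CIs on the OR scale
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```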
Background: Given the well-established association between perceived stress and quality of life (QoL) in dementia patients and their partners, our goal was to identify whether relationship quality and dyadic coping would operate as mediators between perceived stress and QoL.
Methods: 82 dyads of dementia patients and their spousal caregivers were included in a cross-sectional assessment from a prospective study. QoL was assessed with the Quality of Life in Alzheimer's Disease scale (QoL-AD) for dementia patients and the WHO Quality of Life-BREF for spousal caregivers. Perceived stress was measured with the Perceived Stress Scale (PSS-14). Both partners were assessed with the Dyadic Coping Inventory (DCI). Analyses of correlation as well as regression models including mediator analyses were performed.
Results: We found negative correlations between stress and QoL in both partners (QoL-AD: r = -0.62; p < 0.001; WHO-QOL Overall: r = -0.27; p = 0.02). Spousal caregivers had a significantly lower DCI total score than dementia patients (p < 0.001). Dyadic coping was a significant mediator of the relationship between stress and QoL in spousal caregivers (z = 0.28; p = 0.02), but not in dementia patients. Likewise, relationship quality significantly mediated the relationship between stress and QoL in caregivers only (z = -2.41; p = 0.02).
Conclusions: This study identified dyadic coping as a mediator of the relationship between stress and QoL in (caregiving) partners of dementia patients. In patients, however, we found a direct negative effect of stress on QoL. The findings suggest the importance of stress-reducing and dyadic interventions for dementia patients and their partners, respectively.
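Mediation effects of the kind reported above (z values with p values) are often assessed with a Sobel test on the indirect path; a minimal sketch with hypothetical path coefficients, not the study's estimates:

```python
import numpy as np
from scipy import stats

def sobel_z(a, se_a, b, se_b):
    """Sobel test for an indirect effect a*b in a simple mediation model,
    e.g. stress -> dyadic coping (path a) -> quality of life (path b)."""
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    return z, 2 * stats.norm.sf(abs(z))   # two-sided p value

# Hypothetical path coefficients and standard errors.
z, p = sobel_z(a=-0.30, se_a=0.10, b=0.45, se_b=0.12)
print(f"z = {z:.2f}, p = {p:.3f}")
```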
Previous research informs us about facilitators of employees' promotive voice. Yet little is known about what determines whether a specific idea for constructive change brought up by an employee will be approved or rejected by a supervisor. Drawing on interactionist theories of motivation and personality, we propose that a supervisor will be least likely to support an idea when it threatens the supervisor's power motive, and when it is perceived to serve the employee's own striving for power. The prosocial versus egoistic intentions attributed to the idea presenter are proposed to mediate the latter effect. We conducted three scenario-based studies in which supervisors evaluated fictitious ideas voiced by employees that – if implemented – would have power-related consequences for them as supervisors. Results show that the higher a supervisor's explicit power motive was, the less likely they were to support a power-threatening idea (Study 1, N = 60). Moreover, idea support was less likely when the idea was proposed by an employee who was described as high (rather than low) on power motivation (Study 2, N = 79); attributed prosocial intentions mediated this effect. Study 3 (N = 260) replicates these results.
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated; particularly those with an egoistic focus, such as victim justice, rejection, and provocation sensitivity, hostile attributions and trait anger as well as those with an altruistic focus, such as observer justice, perpetrator justice, and moral disgust sensitivity. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
Background: Deception can distort psychological tests on socially sensitive topics. Understanding the cerebral processes that are involved in such faking can be useful in the detection and prevention of deception. Previous research shows that faking a brief implicit association test (BIAT) evokes a characteristic ERP response. It is not yet known whether temporarily available self-control resources moderate this response. We randomly assigned 22 participants (15 females, 24.23 ± 2.91 years old) to a counterbalanced repeated-measurements design. Participants first completed a Brief-IAT (BIAT) on doping attitudes as a baseline measure and were then instructed to fake a negative doping attitude both when self-control resources were depleted and when they were not. Cerebral activity during BIAT performance was assessed using high-density EEG.
Results: Compared to the baseline BIAT, event-related potentials showed a first interaction at the parietal P1, while significant post hoc differences were found only at the later occurring late positive potential. Here, significantly decreased amplitudes were recorded for 'normal' faking, but not in the depletion condition. In source space, enhanced activity was found for 'normal' faking in the bilateral temporoparietal junction. Behaviorally, participants were successful in faking the BIAT in both conditions.
Conclusions: Results indicate that temporarily available self-control resources do not affect overt faking success on a BIAT. However, differences were found on an electrophysiological level. This indicates that while self-control resources play a negligible role in deliberate test faking at the phenotypical level, the underlying cerebral processes are markedly different.
Aim: We aimed to identify patient characteristics and comorbidities that correlate with the initial exercise capacity of cardiac rehabilitation (CR) patients and to study the significance of patient characteristics, comorbidities and training methods for training achievements and final fitness of CR patients.
Methods: We studied 557 consecutive patients (51.7 ± 6.9 years; 87.9% men) admitted to a three-week in-patient CR. Cardiopulmonary exercise testing (CPX) was performed at discharge. Exercise capacity (watts) at entry, gain in training volume and final physical fitness (assessed by peak O2 utilization, VO2peak) were analysed using analysis of covariance (ANCOVA) models.
Results: Mean training intensity was 90.7 ± 9.7% of maximum heart rate (81% continuous/19% interval training, 64% additional strength training). A total of 12.2 ± 2.6 bicycle exercise training sessions were performed. An average increase in training volume of more than 100% was achieved (difference between end and beginning of CR: 784 ± 623 watts·min). In the multivariate model, the gain in training volume was significantly associated with smoking, age and exercise capacity at entry of CR. The physical fitness level achieved at discharge from CR, as assessed by VO2peak, was mainly dependent on age, but also on various factors related to training, namely exercise capacity at entry, increase of training volume and training method.
Conclusion: CR patients were trained in line with current guidelines at moderate-to-high intensity and reached a considerable increase in their training volume. The physical fitness level achieved at discharge from CR depended on various factors associated with training, which supports the recommendation that CR should be offered to all cardiac patients.
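An ANCOVA of the kind described in the Methods can be sketched with statsmodels: a linear model of the gain in training volume on a categorical factor plus continuous covariates, followed by a Type II ANOVA table. All values below are simulated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical CR data: gain in training volume (watts*min) modelled from
# age, smoking status and exercise capacity at entry.
rng = np.random.default_rng(4)
n = 200
age = rng.normal(52, 7, n)
smoking = rng.integers(0, 2, n)
entry_watts = rng.normal(110, 25, n)
gain = 1200 - 8 * age - 90 * smoking - 2.5 * entry_watts \
       + rng.normal(0, 150, n)
df = pd.DataFrame({"gain": gain, "age": age,
                   "smoking": smoking, "entry_watts": entry_watts})

# ANCOVA-style model: categorical factor plus continuous covariates.
model = smf.ols("gain ~ C(smoking) + age + entry_watts", df).fit()
print(anova_lm(model, typ=2))
```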
Editorial
(2016)
Background: It has previously been shown that conditioning activities consisting of repetitive hops have the potential to induce better drop jump (DJ) performance in recreationally active individuals. In the present pilot study, we investigated whether repetitive conditioning hops can also increase reactive jump and sprint performance in sprint-trained elite athletes competing at an international level.
Methods: Jump and sprint performances of 5 athletes were randomly assessed under 2 conditions. The control condition (CON) comprised 8 DJs and 4 trials of 30-m sprints. The intervention condition (HOP) consisted of 10 maximal repetitive two-legged hops that were conducted 10 s prior to each single DJ and sprint trial. DJ performance was analyzed using a one-dimensional ground reaction force plate. Step length (SL), contact time (CT), and sprint time (ST) during the 30-m sprints were recorded using an opto-electronic measurement system.
Results: Following the conditioning activity, DJ height and external DJ peak power were both significantly increased by 11% compared to the control condition. All other variables did not show any significant differences between HOP and CON.
Conclusions: In the present pilot study, we were able to demonstrate large improvements in DJ performance even in sprint-trained elite athletes following a conditioning activity consisting of maximal two-legged repetitive hops. This strengthens the hypothesis that plyometric conditioning exercises can induce performance enhancements in elite athletes that are even greater than those observed in recreationally active athletes. In addition, it appears that the transfer of these effects to other stretch-shortening cycle activities is limited, as we did not observe any changes in sprint performance following the plyometric conditioning activity.
The genesis of chronic pain is explained by a biopsychosocial model. It hypothesizes an interdependency between environmental and genetic factors provoking aberrant long-term changes in biological and psychological regulatory systems. Physiological effects of psychological and physical stressors may play a crucial role in these maladaptive processes. Specifically, long-term demands on the stress response system may moderate central pain processing and influence descending serotonergic and noradrenergic signals from the brainstem, regulating nociceptive processing at the spinal level. However, the underlying mechanisms of this pathophysiological interplay still remain unclear. This paper aims to shed light on possible pathways between physical (exercise) and psychological stress and the potential neurobiological consequences in the genesis and treatment of chronic pain, highlighting evolving concepts and promising research directions in the treatment of chronic pain. Two treatment forms (exercise and mindfulness-based stress reduction as exemplary therapies), their interaction, and the dose-response relationship will be discussed in more detail, which might pave the way to a better understanding of alterations in the pain matrix and help to develop future prevention and therapeutic concepts.
Objective: To investigate associations between socioeconomic status (SES) indicators (education, job position, income, multidimensional index) and the genesis of chronic low back pain (CLBP).
Design: Longitudinal field study (baseline and 6-month follow-up).
Setting: Four medical clinics across Germany.
Participants: 352 people were included according to the following criteria: (1) between 18 and 65 years of age, (2) intermittent pain and (3) an understanding of the study and the ability to answer a questionnaire without help. Exclusion criteria were: (1) pregnancy, (2) inability to stand upright, (3) inability to give sick leave information, (4) signs of serious spinal pathology, (5) acute pain in the past 7 days or (6) an incomplete SES indicators questionnaire.
Outcome measures: Subjective intensity and disability of CLBP.
Results: Analysis showed that job position was the best single predictor of CLBP intensity, followed by a multidimensional index. Education and income had no significant association with intensity. Subjective disability was best predicted by job position, succeeded by the multidimensional index and education, while income again had no significant association.
Conclusion: The results showed that SES indicators have associations of differing strength with the genesis of CLBP and should therefore not be used interchangeably. Job position was found to be the single most important indicator. These results could be helpful in the planning of back pain care programmes, but in general, more research on the relationship between SES and health outcomes is needed.
The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) contained sample sizes <30 subjects. Nearly one third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria of their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or nonexistent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43–1.00). Content validity was addressed in all included studies, criterion validity (only its concurrent aspect) in approximately half of the studies, with correlation coefficients ranging from r = −0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test. In 28% of the included studies, insufficient information or a complete lack of information was provided in the respective field of test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.
Findings on the perceptual reorganization of lexical tones are mixed. Some studies report good tone discrimination abilities for all tested age groups, others report decreased or enhanced discrimination with increasing age, and still others report U-shaped developmental curves. Since prior studies have used a wide range of contrasts and experimental procedures, it is unclear how specific task requirements interact with discrimination abilities at different ages. In the present work, we tested German and Cantonese adults on their discrimination of Cantonese lexical tones, as well as German-learning infants between 6 and 18 months of age on their discrimination of two specific Cantonese tones using two different types of experimental procedures. The adult experiment showed that German native speakers can discriminate between lexical tones, but native Cantonese speakers show significantly better performance. The results from German-learning infants suggest that 6- and 18-month-olds discriminate tones, while 9-month-olds do not, supporting a U-shaped developmental curve. Furthermore, our results revealed an effect of methodology, with good discrimination performance at 6 months after habituation but not after familiarization. These results support three main conclusions. First, habituation can be a more sensitive procedure for measuring infants' discrimination than familiarization. Second, the previous finding of a U-shaped curve in the discrimination of lexical tones is further supported. Third, discrimination abilities at 18 months appear to reflect mature perceptual sensitivity to lexical tones, since German adults also discriminated the lexical tones with high accuracy.
Infants start learning the prosodic properties of their native language before 12 months, as shown by the emergence of a trochaic bias in English-learning infants between 6 and 9 months (Jusczyk et al., 1993) and in German-learning infants between 4 and 6 months (Höhle et al., 2009, 2014), while French-learning infants do not show a bias at 6 months (Höhle et al., 2009). This language-specific emergence of a trochaic bias is consistent with the fact that English and German are languages with trochaic predominance in their lexicons, while French is a language with phrase-final lengthening but no lexical stress. We explored the emergence of a trochaic bias in bilingual French/German infants to examine whether their developmental trajectory is similar to that of monolingual infants and whether the relative amount of exposure to the two languages affects the emergence of the bias. Accordingly, we replicated Höhle et al. (2009) with 24 bilingual 6-month-olds learning French and German simultaneously. All infants had been exposed to both languages for 30% to 70% of the time from birth. Using the Headturn Preference Procedure, infants were presented with two lists of stimuli, one consisting of several occurrences of the pseudoword /GAba/ with word-initial stress (trochaic pattern), the other of several occurrences of the pseudoword /gaBA/ with word-final stress (iambic pattern). The stimuli were recorded by a female native speaker of German. Results revealed that these French/German bilingual 6-month-olds have a trochaic bias (as evidenced by a preference for listening to the trochaic pattern). Hence, their listening preference is comparable to that of monolingual German-learning 6-month-olds, but differs from that of monolingual French-learning 6-month-olds, who did not show any preference (Höhle et al., 2009). Moreover, the size of the trochaic bias in the bilingual infants was not correlated with their amount of exposure to German. The present results thus establish that the development of a trochaic bias in simultaneous bilinguals is not delayed compared to monolingual German-learning infants (Höhle et al., 2009) and is largely independent of the amount of exposure to German relative to French.
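For readers unfamiliar with preference measures: in a headturn preference experiment, the dependent variable is looking (listening) time per stimulus list, summarized per infant and compared within subjects. The following Python sketch illustrates that arithmetic with invented looking times; neither the values nor the analysis are taken from the study.

import numpy as np
from scipy import stats

# Illustrative per-infant mean looking times (seconds) for each stress
# pattern; these numbers are invented for the sketch.
trochaic = np.array([8.4, 7.9, 9.1, 8.8, 7.2, 9.5, 8.0, 8.6])
iambic = np.array([6.9, 7.1, 7.8, 7.0, 6.5, 8.2, 7.3, 7.0])

# Paired comparison within infants: a positive mean difference indicates
# a preference for the trochaic pattern.
t, p = stats.ttest_rel(trochaic, iambic)
preference = (trochaic - iambic).mean()
print(f"mean trochaic advantage = {preference:.2f} s, t = {t:.2f}, p = {p:.4f}")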
Drugs as instruments
(2016)
Neuroenhancement (NE) is the non-medical use of psychoactive substances to produce a subjective enhancement in psychological functioning and experience. So far, however, empirical investigations of individuals' motivation for NE have been hampered by a lack of theoretical foundation. This study aimed to apply drug instrumentalization theory to user motivation for NE. We argue that NE should be defined and analyzed from a behavioral perspective rather than in terms of the characteristics of the substances used for NE. In the empirical study, we explored user behavior by analyzing relationships between drug options (use of over-the-counter products, prescription drugs, or illicit drugs) and postulated drug instrumentalization goals (e.g., improved cognitive performance, counteracting fatigue, improved social interaction). Questionnaire data from 1438 university students were subjected to exploratory and confirmatory factor analysis to address the question of whether analyses of drug instrumentalization should assume that users aim at a certain goal and choose their drug accordingly, or whether NE behavior is more strongly rooted in the decision to try or use a certain drug option. We used factor mixture modeling to explore whether users could be separated into qualitatively different groups defined by a shared "goal × drug option" configuration. Our results indicate, first, that individuals' decisions about NE are ultimately based on their personal attitude toward drug options (e.g., willingness to use an over-the-counter product but not to abuse prescription drugs) rather than motivated by the desire to achieve a specific goal (e.g., fighting tiredness) for which different drug options might be tried. Second, the data analyses suggested two qualitatively different classes of users. Both predominantly used over-the-counter products, but "neuroenhancers" might be characterized by a higher propensity to instrumentalize over-the-counter products for virtually all investigated goals, whereas "fatigue-fighters" might be inclined to use over-the-counter products exclusively to fight fatigue. We believe that psychological investigations like these are essential, especially for designing programs to prevent risky behavior.
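The exploratory step of such an analysis can be illustrated in a few lines. The following Python sketch runs an exploratory factor analysis with scikit-learn on simulated questionnaire items; the two-factor structure, loadings, and sample size are invented for the illustration and do not reproduce the study's factor mixture models.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated questionnaire: 300 respondents x 6 items. Items 0-2 load on
# one latent factor, items 3-5 on another (purely illustrative structure).
latent = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0.1], [0.7, 0.0], [0.9, 0.2],
                     [0.1, 0.8], [0.0, 0.7], [0.2, 0.9]])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(300, 6))

# Extract two rotated factors and inspect the estimated loadings;
# the recovered pattern should separate the two item clusters.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_, 2))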
Due to maturation of the postural control system and secular declines in motor performance, adolescents experience deficits in postural control during standing and walking while concurrently performing cognitive interference tasks. Adequately designed balance training programs may help to counteract these deficits. While the general effectiveness of youth balance training is well documented, there is hardly any information available on the specific effects of single-task (ST) versus dual-task (DT) balance training. Therefore, the objectives of this study were (i) to examine static/dynamic balance performance under ST and DT conditions in adolescents and (ii) to study the effects of ST versus DT balance training on static/dynamic balance under ST and DT conditions in adolescents. Twenty-eight healthy girls and boys aged 12–13 years were randomly assigned to either 8 weeks of ST or DT balance training. Before and after training, postural sway and spatio-temporal gait parameters were recorded under ST (standing/walking only) and DT conditions (standing/walking while concurrently performing an arithmetic task). At baseline, significantly slower gait speed (p < 0.001, d = 5.1), shorter stride length (p < 0.001, d = 4.8), and longer stride time (p < 0.001, d = 3.8) were found for DT compared to ST walking, but not for standing. Training resulted in significant pre–post decreases in DT costs for gait velocity (p < 0.001, d = 3.1), stride length (-45%, p < 0.001, d = 2.4), and stride time (-44%, p < 0.01, d = 1.9). Training did not induce any significant changes (p > 0.05, d = 0–0.1) in DT costs for any parameter of secondary task performance during standing and walking. Training produced significant pre–post increases (p = 0.001; d = 1.47) in secondary task performance while sitting; this increase was significantly greater for the ST training group (p = 0.04; d = 0.81). For standing, no significant changes were found over time irrespective of the experimental group. We conclude that adolescents showed impaired DT compared to ST walking, but not standing. ST and DT balance training resulted in significant and similar changes in DT costs during walking. Thus, there appears to be no preference for either ST or DT balance training in adolescents.
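Dual-task costs are conventionally expressed as the relative performance decrement from the ST to the DT condition. The following Python sketch shows that arithmetic; the formula is the standard convention in the gait literature (assumed here, not quoted from the study), and the example values are invented.

def dual_task_cost(st: float, dt: float, higher_is_better: bool = True) -> float:
    """Relative dual-task cost in percent.

    For parameters where higher values are better (e.g., gait velocity,
    stride length): cost = (ST - DT) / ST * 100. For parameters where
    lower values are better (e.g., stride time), the sign is flipped so
    that positive values always indicate a dual-task decrement.
    """
    cost = (st - dt) / st * 100.0
    return cost if higher_is_better else -cost

# Illustrative values, not the study's data:
print(dual_task_cost(1.30, 1.10))                          # gait velocity (m/s)
print(dual_task_cost(1.05, 1.18, higher_is_better=False))  # stride time (s)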
Concrete-operational thinking represents an important aspect of cognitive development. A promising approach to promoting these skills is the instruction of strategies. The construction of such instructional programs requires insights into the mental operations involved in problem-solving. In the present paper, we address the question of the extent to which isolated and combined mental operations (strategies) differ in their effect on the correct solution of concrete-operational tasks. To this end, a cross-sectional design was applied. The use of mental operations was measured via thinking-aloud reports from first- and second-graders (N = 80) while they solved tasks requiring concrete-operational thinking. Concrete-operational thinking was assessed using the TEKO subscales conservation of numbers, classification, and sequences. The verbal reports were transcribed and coded with regard to the mental operations applied per task. Data analyses focused on the task level, resulting in analyses of N = 240 tasks per subscale. Differences were observed regarding the contribution of isolated and combined mental operations (strategies) to correct solutions. The results indicate the necessity of selecting and integrating appropriate mental operations as strategies. They offer insights into the mental operations involved in solving concrete-operational tasks and contribute to the construction of instructional programs.
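One way to quantify how isolated versus combined mental operations contribute to correct solutions at the task level (not necessarily the analysis used in this paper) is a logistic regression on the coded operations. The following Python sketch uses simulated codings; the operations, effect sizes, and data are invented for the illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated task-level coding: for each of 240 tasks, binary indicators
# of whether hypothetical operations A and B were applied, plus their
# combination (both applied together, i.e., a combined strategy).
ops = rng.integers(0, 2, size=(240, 2))
combined = (ops[:, 0] & ops[:, 1]).reshape(-1, 1)
X = np.hstack([ops, combined])

# Simulated outcome: in this toy model, the combined strategy raises the
# odds of a correct solution more than either operation in isolation.
logits = -1.0 + 0.4 * ops[:, 0] + 0.3 * ops[:, 1] + 1.5 * combined.ravel()
correct = rng.random(240) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, correct)
print(np.round(model.coef_, 2))  # weights for op A, op B, and the combination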