Although the observation and assessment of psychotherapeutic competences are central to training, supervision, patient care, quality control, and life-long practice, structured instruments are used only occasionally. In the current study, an observation-based tool for the Assessment of Core CBT Skills (ACCS) was translated into German and adapted, and its psychometric properties were evaluated in a pilot study. Competence of therapists-in-training was assessed in a random sample of n = 30 videos of cognitive behavioural therapy sessions with patients diagnosed with hypochondriasis. Two of three raters independently assessed the competences demonstrated across entire active treatment sessions (n = 60). In our sample, internal consistency was excellent, and interrater reliability was good. Convergent validity (Cognitive Therapy Scale) and discriminant validity (Helping Alliance Questionnaire) were within the expected ranges. The ACCS total score did not significantly predict the reduction of symptoms of hypochondriasis, and a one-factorial structure of the instrument was found. By providing multiple opportunities for feedback, self-reflection, and supervision, the ACCS may complement current tools for the assessment of psychotherapeutic competences and provide important support for competence-based training and supervision.
Assessments of psychotherapeutic competencies play a crucial role in research and training. However, research on the reliability and validity of such assessments is sparse. This study aimed to provide an overview of the current evidence and to estimate the average interrater reliability (IRR) of psychotherapeutic competence ratings. A systematic review was conducted, and 20 studies reported in 32 publications were collected. These 20 studies were included in a narrative synthesis, and 20 coefficients were entered into the meta-analysis. Most primary studies referred to cognitive-behavioral therapies and the treatment of depression, used the Cognitive Therapy Scale, based ratings on videos, and trained the raters. Our meta-analysis revealed a pooled ICC of 0.82, but at the same time considerable heterogeneity. The evidence map highlighted a variety of variables related to competence assessments. Further aspects influencing the reliability of competence ratings, as well as possible sources of the considerable heterogeneity, are discussed in detail.
Objective: Despite increasing research on psychotherapy preferences, the preferences of psychotherapy trainees are largely unknown. Moreover, differences in preferences between trainees and their patients could (a) hinder symptom improvement and therapy success for patients and (b) represent significant obstacles in the early career and development of future therapists. Method: We compared the preferences of n = 466 psychotherapy trainees to those of n = 969 laypersons using the Cooper-Norcross Inventory of Preferences. Moreover, we compared preferences between trainees in cognitive-behavioural therapy (CBT) and psychodynamic trainees. Results: We found significant differences between both samples in 13 of 18 items and three of four subscales. Psychotherapy trainees preferred less therapist directiveness (d = 0.58), more emotional intensity (d = 0.74), and more focused challenge (d = 0.35) than laypeople. CBT trainees preferred more therapist directiveness (d = 2.00), less emotional intensity (d = 0.51), more present orientation (d = 0.76), and more focused challenge (d = 0.33) than trainees in psychodynamic/psychoanalytic therapy. Conclusion: Overall, the results underline the importance of implementing preference assessment and discussion during psychotherapy training. Moreover, therapists of different orientations appear to cover a broad range of patient preferences, which may help patients choose the right fit.
Background:
Many authors regard counseling self-efficacy (CSE) as important in therapist development and training. The purpose of this study was to examine the factor structure, reliability, and validity of the German version of the Counselor Activity Self-Efficacy Scales-Revised (CASES-R).
Method:
The sample consisted of 670 German psychotherapy trainees, who completed an online survey. We examined the factor structure by applying exploratory and confirmatory factor analysis to the instrument as a whole.
Results:
A bifactor-exploratory structural equation modeling model with one general and five specific factors provided the best fit to the data. Omega hierarchical coefficients indicated high reliability for the general factor, acceptable reliability for the Action Skills-Revised (AS-R) factor, and insufficient reliability for the remaining factors. The CASES-R scales yielded significant correlations with related measures, but also with therapeutic orientations.
Conclusion:
We found support for the reliability and validity of the German CASES-R. However, the subdomains (except AS-R) should be interpreted with caution, and we do not recommend the CASES-R for comparisons between psychotherapeutic orientations.