Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. From these accounts, we hypothesized that younger infants would need additional agency cues such as a salient action effect to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action presented either with or without an additional salient action effect (Exp. 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action presented with a salient action effect (Exp. 3). The 6-month-olds showed tracking gaze behavior, and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. However, the 7-month-olds showed predictive gaze behavior in the action-effect condition, but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with a mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective for the observation of human hands.
Action effects have been stated to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement and a salient action effect) or with only a salient action effect, but infants showed tracking gaze when the claw showed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared to purely kinematic cues, seem to be especially important for infants' online processing of goal-directed actions.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent. Other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds’ and adults’ goal anticipation for a grasping-action performed by a mechanical claw. With an eyetracker, we measured anticipations in two conditions, where the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present, but reactive when no agency cues were presented. Adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults’ predictions did not change across trials in the agency condition, but decelerated in the mechanical condition. Thus, agency cues and own action experience are important for infants’ and adults’ online processing of goal-directed actions by non-human agents.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and own action experience.
We assessed intra-individual variability of response times (RT) and single-trial P3 amplitudes following targets in healthy adults during a Flanker/NO-GO task. RT variability and variability of the neural responses were coupled at the faster frequencies examined (0.07-0.17 Hz) at Pz, the site of the target-P3 maximum, despite non-significant associations for overall variability (standard deviation, SD). Frequency-specific patterns of variability in the single-trial P3 may help to understand the neurophysiology of RT variability and to evaluate explanatory models of attention-allocation deficits beyond intra-individual variability summary indices such as the SD.
The aim of the present study was to investigate the test-retest reliability of the olfactory detection threshold subtest of the Sniffin' Sticks test battery when administered repeatedly at 4 time points. The detection threshold test was repeatedly conducted in 64 healthy subjects. In the first testing session, the threshold test was completed 3 times (T(1) = 0 min, T(2) = 35 min, and T(3) = 105 min), representing short-term testing. A fourth threshold test was conducted in a second testing session (T(4) = 35.1 days after the first testing session), representing long-term testing. The average olfactory detection threshold scores for n-butanol did not differ significantly across the 4 time points. The test-retest reliabilities (Pearson's r) between the 4 time points of threshold testing were in the range of 0.43-0.85 (P < 0.01). These results support the notion that the olfactory detection threshold test is a highly reliable method for repeated olfactory testing, even if the test is repeated more than once per day and over a long-term period. It is concluded that the olfactory detection threshold test of the Sniffin' Sticks is suitable for repeated testing during experimental or clinical studies.
Applied to the nasal mucosa in low concentrations, nicotine vapor evokes odorous sensations (mediated by the olfactory system) whereas at higher concentrations nicotine vapor additionally produces burning and stinging sensations in the nose (mediated by the trigeminal system). The objective of this study was to determine whether intranasal stimulation with suprathreshold concentrations of S(-)-nicotine vapor causes brain activation in olfactory cortical areas or if trigeminal cortical areas are also activated. Individual olfactory detection thresholds for S(-)-nicotine were determined in 19 healthy occasional smokers using a computer-controlled air-dilution olfactometer. Functional magnetic resonance images were acquired using a 1.5T MR scanner with applications of nicotine in concentrations at or just above the individual's olfactory detection threshold. Subjects reliably perceived the stimuli as being odorous. Accordingly, activation of brain areas known to be involved in processing of olfactory stimuli was identified. Although most of the subjects never or only rarely observed a burning or painful sensation in the nose, brain areas associated with the processing of painful stimuli were activated in all subjects. This indicates that the olfactory and trigeminal systems are activated during perception of nicotine and it is not possible to completely separate olfactory from trigeminal effects by lowering the concentration of the applied nicotine. In conclusion, even at low concentrations that do not consistently lead to painful sensations, intranasally applied nicotine activates both the olfactory and the trigeminal system.
Cultural generality versus specificity of media violence effects on aggression was examined in seven countries (Australia, China, Croatia, Germany, Japan, Romania, the United States). Participants reported aggressive behaviors, media use habits, and several other known risk and protective factors for aggression. Across nations, exposure to violent screen media was positively associated with aggression. This effect was partially mediated by aggressive cognitions and empathy. The media violence effect on aggression remained significant even after statistically controlling a number of relevant risk and protective factors (e.g., abusive parenting, peer delinquency), and was similar in magnitude to effects of other risk factors. In support of the cumulative risk model, joint effects of different risk factors on aggressive behavior in each culture were larger than effects of any individual risk factor.
We uniquely introduce convex production costs into a cartel model involving spatial price discrimination. We demonstrate that greater convexity improves cartel stability and that for sufficient convexity first best locations will be adopted. We show that allowing locations to vary over the game reduces cartel stability but that greater convexity continues to improve that stability. Moreover, when the degree of convexity does not support the first best collusive locations, other collusive locations exist that require less stability and these may either increase or decrease social welfare relative to competition. Critically, these locations that require less stability are more dispersed in sharp contrast to the known result assuming linear production costs.
The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.
We measured memory span for assembly instructions involving objects with handles oriented to the left or right side. Right-handed participants remembered more instructions when objects' handles were spatially congruent with the hand used in forthcoming assembly actions. No such affordance-based memory benefit was found for left-handed participants. These results are discussed in terms of motor simulation as an embodied rehearsal mechanism.
Analysis of physicians' probability estimates of a medical outcome based on a sequence of events
(2022)
IMPORTANCE
The probability of a conjunction of 2 independent events is the product of the probabilities of the 2 components and therefore cannot exceed the probability of either component; violation of this basic law is called the conjunction fallacy. A common medical decision-making scenario involves estimating the probability of a final outcome resulting from a sequence of independent events; however, little is known about physicians' ability to accurately estimate the overall probability of success in these situations.
OBJECTIVE
To ascertain whether physicians are able to correctly estimate the overall probability of a medical outcome resulting from 2 independent events.
DESIGN, SETTING, AND PARTICIPANTS
This survey study consisted of 3 separate substudies, in which 215 physicians were asked via internet-based survey to estimate the probability of success of each of 2 components of a diagnostic or prognostic sequence as well as the overall probability of success of the 2-step sequence. Substudy 1 was performed from April 2 to 4, 2021; substudy 2 from November 2 to 11, 2021; and substudy 3 from May 13 to 19, 2021. All physicians were board certified or board eligible in the primary specialty germane to the substudy (ie, obstetrics and gynecology for substudies 1 and 3 and pulmonology for substudy 2), were recruited from a commercial survey service, and volunteered to participate in the study.
EXPOSURES
Case scenarios presented in an online survey.
MAIN OUTCOMES AND MEASURES
Respondents were asked to provide their demographic information in addition to 3 probability estimates. The first substudy included a scenario describing a brow presentation discovered during labor; the 2 conjuncts were the probabilities that the brow presentation would resolve and that the delivery would be vaginal. The second substudy involved a diagnostic evaluation of an incidentally discovered pulmonary nodule; the 2 conjuncts were the probabilities that the patient had a malignant condition and that a technically successful transthoracic needle biopsy would reveal a malignant condition. The third substudy included a modification of the first substudy in an attempt to debias the conjunction fallacy prevalent in the first substudy. Respondents' own probability estimates of the individual events were used to calculate the mathematically correct conjunctive probability.
RESULTS
Among 215 respondents, the mean (SD) age was 54.0 (9.5) years; 142 respondents (66.0%) were male. Data on race and ethnicity were not collected. A total of 168 physicians (78.1%) estimated the probability of the 2-step sequence to be greater than the probability of at least 1 of the 2 component events. Compared with the product of their 2 estimated components, respondents overestimated the combined probability by 12.8% (95% CI, 9.6%-16.1%; P < .001) in substudy 1, 19.8% (95% CI, 16.6%-23.0%; P < .001) in substudy 2, and 18.0% (95% CI, 13.4%-22.5%; P < .001) in substudy 3, results that were mathematically incoherent (ie, formally illogical and mathematically incorrect).
CONCLUSIONS AND RELEVANCE
In this survey study of 215 physicians, respondents consistently overestimated the combined probability of 2 events compared with the probability calculated from their own estimates of the individual events. This biased estimation, consistent with the conjunction fallacy, may have substantial implications for diagnostic and prognostic decision-making.
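The probability rule this study tests can be sketched in a few lines. The following is a minimal illustration, not the study's analysis code, and the physician estimates in it are hypothetical numbers chosen for the example: for independent events, the combined probability is the product of the components and can never exceed either one.

```python
def conjunction_probability(p_a: float, p_b: float) -> float:
    """Probability that two independent events both occur."""
    return p_a * p_b


def is_conjunction_fallacy(p_a: float, p_b: float, estimate: float) -> bool:
    """An overall estimate commits the conjunction fallacy if it exceeds
    the probability of at least one of the component events."""
    return estimate > min(p_a, p_b)


# Hypothetical component estimates (not from the study): 70% chance the
# first step succeeds, 60% chance the second step succeeds given the first.
p_step1, p_step2 = 0.70, 0.60

# The mathematically coherent combined estimate is the product.
coherent = conjunction_probability(p_step1, p_step2)  # approximately 0.42

# An overall estimate of 0.75 exceeds both components: a conjunction fallacy.
print(is_conjunction_fallacy(p_step1, p_step2, 0.75))      # True
print(is_conjunction_fallacy(p_step1, p_step2, coherent))  # False
```

As in the study's analysis, each respondent's own component estimates yield the mathematically correct conjunctive probability against which the stated overall estimate can be compared.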
We examined face memory deficits in patients with idiopathic Parkinson's disease (IPD) with specific regard to the moderating role of sex and the different memory processes involved. We tested short- and long-term face recognition memory in 18 nonclinical participants and 18 IPD patients matched for sex, education, and age. We varied the duration of item presentation (1, 5, 10 s), the time of testing (immediately, 1 hr, 24 hrs), and the possibility to re-encode items. In accordance with earlier studies, we report face memory deficits in IPD. Moreover, our findings indicate that sex and encoding conditions may be important moderator variables. In contrast to healthy individuals, IPD patients cannot benefit from increasing duration of presentation. Furthermore, our results suggest that IPD leads to face memory deficits in women only.
The aim of our study was to examine the extent to which linguistic approaches to sentence comprehension deficits in aphasia can account for differential impairment patterns in the comprehension of wh-questions in bilingual persons with aphasia (PWA). We investigated the comprehension of subject and object wh-questions in both Turkish, a wh-in-situ language, and German, a wh-fronting language, in two bilingual PWA using a sentence-to-picture matching task. Both PWA showed differential impairment patterns in their two languages. SK, an early bilingual PWA, had particular difficulty comprehending subject which-questions in Turkish but performed normally across all conditions in German. CT, a late bilingual PWA, performed more poorly for object which-questions in German than in all other conditions, whilst in Turkish his accuracy was at chance level across all conditions. We conclude that the observed patterns of selective cross-linguistic impairments cannot solely be attributed either to difficulty with wh-movement or to problems with the integration of discourse-level information. Instead, our results suggest that differences between our PWA's individual bilingualism profiles (e.g. onset of bilingualism, premorbid language dominance) considerably affected the nature and extent of their impairments.
The main goal of our target article was to provide concrete recommendations for improving the replicability of research findings. Most of the comments focus on this point. In addition, a few comments were concerned with the distinction between replicability and generalizability and the role of theory in replication. We address all comments within the conceptual structure of the target article and hope to convince readers that replication in psychological science amounts to much more than hitting the lottery twice.
Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology towards concrete recommendations for improvement. We focus on research practices but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations. The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward.
Objective
Leaders differ in their personalities from non-leaders. However, when do these differences emerge? Are leaders "born to be leaders" or does their personality change in preparation for a leadership role and due to increasing leadership experience?
Method
Using data from the German Socio-Economic Panel Study, we examined personality differences between leaders (N = 2683 leaders, women: n = 967; 36.04%) and non-leaders (N = 33,663) as well as personality changes before and after becoming a leader.
Results
Already in the years before starting a leadership position, leaders-to-be were more extraverted, open, emotionally stable, conscientious, and willing to take risks; felt that they had greater control; and trusted others more than non-leaders did. Moreover, personality changed in emergent leaders: while approaching a leadership position, leaders-to-be (especially men) gradually became more extraverted, open, and willing to take risks, and felt that they had more control over their lives. After becoming a leader, they became less extraverted, less willing to take risks, and less conscientious, but gained self-esteem.
Conclusions
Our findings suggest that people are not simply "born to be leaders" but that their personalities change considerably in preparation for a leadership role and due to leadership experience. Some changes are transient, but others last for a long time.
Studies show relations between executive function (EF), Theory of Mind (ToM), and conduct-problem (CP) symptoms. However, many studies have involved cross-sectional data, small clinical samples, pre-school children, and/or did not consider potential mediation effects. The present study examined the longitudinal relations between EF, ToM abilities, and CP symptoms in a population-based sample of 1,657 children between 6 and 11 years (T1: M = 8.3 years, T2: M = 9.1 years; 51.9% girls). We assessed EF skills and ToM abilities via computerized tasks at first measurement (T1), CP symptoms were rated via parent questionnaires at T1 and approximately 1 year later (T2). Structural-equation models showed a negative relation between T1 EF and T2 CP symptoms even when controlling for attention-deficit hyperactivity disorder (ADHD) symptoms and other variables. This relation was fully mediated by T1 ToM abilities. The study shows how children's abilities to control their thoughts and behaviors and to understand others' mental states interact in the development of CP symptoms.
There is robust evidence showing a link between executive function (EF) and theory of mind (ToM) in 3- to 5-year-olds. However, it is unclear whether this relationship extends to middle childhood. In addition, there has been much discussion about the nature of this relationship. Whereas some authors claim that ToM is needed for EF, others argue that ToM requires EF. To date, however, studies examining the longitudinal relationship between distinct subcomponents of EF [i.e., attention shifting, working memory (WM) updating, inhibition] and ToM in middle childhood are rare. The present study examined (1) the relationship between three EF subcomponents (attention shifting, WM updating, inhibition) and ToM in middle childhood, and (2) the longitudinal reciprocal relationships between the EF subcomponents and ToM across a 1-year period. EF and ToM measures were assessed experimentally in a sample of 1,657 children (aged 6-11 years) at time point one (t1) and 1 year later at time point two (t2). Results showed that the concurrent relationships between all three EF subcomponents and ToM pertained in middle childhood at t1 and t2, respectively, even when age, gender, and fluid intelligence were partialled out. Moreover, cross-lagged structural equation modeling (again controlling for age, gender, and fluid intelligence, as well as for the earlier levels of the target variables) revealed partial support for the view that early ToM predicts later EF, but stronger evidence for the assumption that early EF predicts later ToM. The latter was found for attention shifting and WM updating, but not for inhibition. This reveals the importance of studying the exact interplay of ToM and EF across childhood development, especially with regard to different EF subcomponents. Most likely, understanding others' mental states at different levels of perspective-taking requires specific EF subcomponents, suggesting developmental change in the relations between EF and ToM across childhood.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based. Thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, but few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating-scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after receiving an in-depth role-script compared with SPs who receive only basic information on the patient case. To test this assumption, a randomised controlled study design was implemented, and the hypothesis was confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on the patient's nonverbal behaviour and feelings, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
Background:
Under the new psychotherapy law in Germany, standardized patients (SPs) are to become a standard component in psychotherapy training, even though little is known about their authenticity.
Objective:
The present pilot study explored whether, following an exhaustive two-day SP training, psychotherapy trainees can distinguish SPs from real patients.
Methods:
Twenty-eight psychotherapy trainees (M = 28.54 years of age, SD = 3.19) participated as blind raters. They evaluated six video-recorded therapy segments of trained SPs and real patients using the Authenticity of Patient Demonstrations Scale.
Results:
The authenticity scores of real patients and SPs did not differ (p = .43). The descriptive results indicated that the highest authenticity score was given to an SP. Further, the real patients did not differ significantly from the SPs concerning perceived impairment (p = .33) and the likelihood of being a real patient (p = .52).
Conclusions:
The current results suggest that psychotherapy trainees were unable to distinguish the SPs from real patients. We therefore strongly recommend thorough SP training before application. Limitations and future research directions are discussed.
Public Significance Statement: This study demonstrates that simulated patients (SPs) can authentically portray a depressive case. The results provide preliminary evidence of psychometrically sound properties of the rating scale, which contributes to distinguishing between authentic and unauthentic SPs and may thus foster SPs' dissemination into evidence-based training.
For training purposes, simulated patients (SPs), that is, healthy people portraying a disorder, are increasingly disseminating into clinical psychology and psychotherapy. In the current study, we developed an observer-based rating instrument for the evaluation of SP authenticity (namely, it not being possible to distinguish them from real patients) so as to foster their use in evidence-based training. We applied a multistep inductive approach to develop the Authenticity of Patient Demonstrations (APD) scale. Ninety-seven independent psychotherapy trainees (77.32% female; mean age 31.49 years, SD = 5.17) evaluated the authenticity of 2 independent SPs, each of whom portrayed a depressive patient. The APD demonstrated good internal consistency (Cronbach's alpha = .83) and a strong correlation (r = .82) with an established tool for assessing SP performance in medical contexts. The APD scale distinguished significantly between an authentic and an unauthentic SP (d = 2.35). Preliminary evidence for the psychometric properties of the APD indicates that it could be a viable tool for recruiting, training, and evaluating the authenticity of SPs. Strengths, limitations, and future directions are also discussed in detail.
Double Jeopardy
(2019)
The present study investigates whether secondary traumatization (i.e., family history of Holocaust survival and secondary exposure to captivity) is implicated in subjective age. Women exposed to different levels of secondary traumatization (N = 177) were assessed. Analyses of variance (ANOVAs) revealed that a Holocaust background and husband's captivity had a marginally significant positive effect on age appearance. Women with a Holocaust background whose husbands were held captive reported older interest age, indicating double jeopardy for older subjective age when two sources of secondary traumatization are present. A similar trend existed for behavior age. Possible explanations for these complex findings of risk and resilience are discussed.
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
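The transformation from head-centered to image-centered gaze coordinates described above can be sketched as a planar homography applied to each gaze sample. This is an illustrative assumption about the pipeline, not the authors' actual implementation; the function name and the idea of estimating H from screen-corner markers are hypothetical.

```python
import numpy as np

def head_to_image(gaze_xy, H):
    """Map head-centered gaze samples into image coordinates via a
    planar homography H (3x3), e.g. one estimated from screen-corner
    markers detected in the scene-camera video (illustrative sketch)."""
    pts = np.column_stack([gaze_xy, np.ones(len(gaze_xy))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                    # de-homogenize

# With the identity homography the coordinates are unchanged.
gaze = np.array([[0.2, 0.8], [1.0, 2.0]])
same = head_to_image(gaze, np.eye(3))

# A pure translation homography shifts every sample by (5, -3).
T = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
shifted = head_to_image(gaze, T)
```

In practice the scene-camera image is distorted by perspective, so the full 3x3 homography (rather than a simple affine map) is needed per video frame.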
Neurofeedback treatment has been demonstrated to reduce inattention, impulsivity and hyperactivity in children with attention deficit/hyperactivity disorder (ADHD). However, previous studies did not adequately control confounding variables or did not employ a randomized reinforcer-controlled design. This study addresses those methodological shortcomings by comparing the effects of the following two matched biofeedback training variants on the primary symptoms of ADHD: EEG neurofeedback (NF) aiming at theta/beta ratio reduction and EMG biofeedback (BF) aiming at forehead muscle relaxation. Thirty-five children with ADHD (26 boys, 9 girls; 6-14 years old) were randomly assigned to either the therapy group (NF; n = 18) or the control group (BF; n = 17). Treatment for both groups consisted of 30 sessions. Pre- and post-treatment assessment consisted of psychophysiological measures, behavioural rating scales completed by parents and teachers, as well as psychometric measures. Training effectively reduced theta/beta ratios and EMG levels in the NF and BF groups, respectively. Parents reported significant reductions in primary ADHD symptoms, and inattention improvements in the NF group were higher compared to the control intervention (BF, dcorr = -.94). NF training also improved attention and reaction times on the psychometric measures. The results indicate that NF effectively reduced inattention symptoms on parent rating scales and reaction time in neuropsychological tests. However, regarding hyperactivity and impulsivity symptoms, the results imply that non-specific factors, such as behavioural contingencies, self-efficacy, structured learning environment and feed-forward processes, may also contribute to the positive behavioural effects induced by neurofeedback training.
An area of increasing interest amongst teachers and researchers is the availability of tools for the design and implementation of literacy interventions with Spanish speaking children. The present systematic literature review contributes to this need by summarizing available findings on evidence-based literacy interventions (EBI) for children from first to third year of primary school. Our results are based on 20 EBI that aimed at improving at least one of the critical components mentioned by the NRP (2000): phonological awareness, phonics, fluency, vocabulary and comprehension. As 90% of the studies were completed with English-speaking children, we critically discussed the applicability of this evidence to the specific context of Spanish-speaking countries. Although many of the general characteristics of the EBI completed with English speaking children could also guide interventions in Spanish, it remains crucial to take into account structural differences between the orthographies of both languages. Moreover, we identified transversal strategies and implementation techniques that due to their universal character could also be useful for early literacy interventions in Spanish.
Early numeracy is one of the strongest predictors for later success in school mathematics (e.g., Duncan et al., 2007). The main goal of first grade mathematics teachers should therefore be to provide learning opportunities that enable all students to develop sound early numeracy skills. Developmental models, or learning progressions, can describe how early numerical understanding typically develops. Assessments that are aligned to empirically validated learning progressions can help teachers better understand their students' learning and target instruction accordingly. To date, there have been no progression-based instruments made available for German teachers to monitor their students' progress in the domain of early numeracy. This dissertation contributes to the design of such an instrument. The first study analysed the suitability of early numeracy assessments currently used in German primary schools at school entry to identify students' individual starting points for subsequent progress monitoring. The second study described the development of progression-based items and investigated the items with regard to main test quality criteria, such as reliability, validity, and test fairness, to find a suitable item pool to build targeted tests. The third study described the construction of the progress monitoring measure, referred to as the learning progress assessment (LPA). The study investigated the extent to which the LPA was able to monitor students' individual learning progress in early numeracy over time. The results of the first study indicated that current school entry assessments were not able to provide meaningful information about the students' initial learning status. Thus, the MARKO-D test (Ricken, Fritz, & Balzer, 2013) was used to determine the students' initial numerical understanding in the other two studies, because it has been shown to be an effective measure of conceptual numerical understanding (Fritz, Ehlert, & Leutner, 2018).
Both studies provided promising evidence for the quality of the LPA and its ability to detect changes in numerical understanding over the course of first grade. The studies of this dissertation can be considered an important step in the process of designing an empirically validated instrument that supports teachers in monitoring their students' early numeracy development and adjusting their teaching accordingly to enhance school achievement.
Fluid intelligence belongs to that cluster of intellectual abilities evincing aging loss. To examine further the range of intellectual reserve available to aging individuals and the question of replicability in a new cultural and laboratory setting, 204 healthy older adults (mean age = 72 years; range = 60-86) participated in a short-term longitudinal training study. For experimental subjects, 10 sessions consisted of cognitive training involving two subability tests (Figural Relations, Induction) of fluid intelligence. The pattern of outcomes replicates and expands on earlier studies. Older adults have the reserve to evince substantial increases in levels of performance in fluid intelligence tests. Transfer of training, however, is narrow in scope. Training also increases accuracy of performance and the ability to solve more difficult test items. Difficulty level was estimated in a separate study, with a comparable sample of N = 112 elderly adults. Future research is suggested to examine whether intellectual reserve extends to near-maximum levels of performance.
Earlier testing-the-limits research on age differences in cognitive plasticity of a memory skill was extended by 18 additional assessment and training sessions to explore whether older adults were able to catch up with additional practice and improved training conditions. The focus was on the method of loci, which requires mental imagination to encode and retrieve lists of words from memory in serial order. Of the original 37 subjects, 35 (16 young, ranging from 20 to 30 years of age, and 19 older adults, ranging from 66 to 80 years of age) participated in the follow-up study. Older adults showed sizable performance deficits when compared with young adults and tested for limits of reserve capacity. The negative age difference was substantial, resistant to extensive practice, and applied to all subjects studied. The primary origin for this negative age difference may be a loss in the production and use of mental imagination for operations of the mind.
On the locus of training gains in research on the plasticity of fluid intelligence in old age
(1988)
Cognitive training research has shown that many older adults have a substantial reserve capacity in fluid intelligence. Little is known, however, about the locus of plasticity. Two studies were conducted to examine whether training gains in fluid abilities are critically dependent on experimenter-guided training and/or whether older adults can achieve similar improvements by themselves on the basis of cognitive skills already available in their repertoire. Several comparisons were made: (a) between test performances after trainer-guided training in ability-specific cognitive skills and after self-guided retest practice (without feedback), (b) between performances under speeded and power conditions of assessment, (c) between performances on easy and difficult items, and (d) between the relative numbers of correct and wrong answers. Results suggest that a large share of the training improvement shown by the elderly can plausibly be explained as the result of the activation and practice of cognitive skills already available in their repertoire. The results also have implications for educational practice, pointing to the appropriateness of strategies of self-directed learning for many elderly adults.
Cognitive research on the plasticity of fluid intelligence has demonstrated that older adults benefit markedly from guided practice in cognitive skills and problem-solving strategies. We examined to what degree older adults are capable by themselves of achieving similar practice gains, focusing on the fluid ability of figural relations. A sample of 72 healthy older adults was assigned randomly to three conditions: control, tutor-guided training, self-guided training. Training time and training materials were held constant for the two training conditions. Posttraining performances were analyzed using a transfer of training paradigm in terms of three indicators: correct responses, accuracy, and level of item difficulty. The training programs were effective and produced a significant but narrow band of within-ability transfer. However, there was no difference between the two training groups. Older adults were shown to be capable of producing gains by themselves that were comparable to those obtained following tutor-guided training in the nature of test-relevant cognitive skills.
In the number-to-position methodology, a number is presented on each trial and the observer places it on a straight line in a position that corresponds to its felt subjective magnitude. In the novel modification introduced in this study, the two-numbers-to-two-positions method, a pair of numbers rather than a single number is presented on each trial and the observer places them in appropriate positions on the same line. Responses in this method indicate not only the subjective magnitude of each single number but, simultaneously, provide a direct estimation of their subjective numerical distance. The results of four experiments provide strong evidence for a linear representation of numbers and, commensurately, for the linear representation of numerical distances. We attribute earlier results that indicate a logarithmic representation to the ordered nature of numbers and to the task used and not to a truly non-linear underlying representation.
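The contrast between linear and logarithmic number representations tested in these experiments can be illustrated by fitting both response models to number-line placements and comparing their residuals. The data below are invented for illustration only; they are not from the study.

```python
import numpy as np

# Hypothetical number-to-position responses on a 0-100 line
# (invented data that happen to be nearly linear).
numbers = np.array([2.0, 5.0, 18.0, 34.0, 56.0, 78.0, 92.0])
positions = np.array([3.0, 6.0, 17.0, 35.0, 55.0, 77.0, 90.0])

def fit_rss(x, y):
    """Least-squares fit y = a*x + b; return residual sum of squares."""
    a, b = np.polyfit(x, y, 1)
    return float(np.sum((y - (a * x + b)) ** 2))

rss_linear = fit_rss(numbers, positions)        # position ~ number
rss_log = fit_rss(np.log(numbers), positions)   # position ~ log(number)
# A linear underlying representation predicts rss_linear << rss_log.
```

The same model comparison extends directly to the two-numbers-to-two-positions method, where the placed distance between the pair can be regressed on the arithmetic versus logarithmic difference of the two numbers.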
Values are assumed to be relatively stable during adulthood. Yet, little research has examined value stability and change, and there are no studies on the structure of value change. On the basis of S. H. Schwartz's (1992) value theory, the authors propose that the structure of intraindividual value change mirrors the circumplexlike structure of values so that conflicting values change in opposite directions and compatible values change in the same direction. Four longitudinal studies, varying in life contexts, time gaps, populations, countries, languages, and value measures, supported the proposed structure of intraindividual value change. An increase in the importance of any one value is accompanied by slight increases in the importance of compatible values and by decreases in the importance of conflicting values. Thus, intraindividual changes in values are not chaotic, but occur in a way that maintains Schwartz's value structure. Furthermore, the greater the extent of life-changing events, the greater the value change found, whereas age was only a marginal negative predictor of value change when life events were taken into account. Implications for the structure of personality change are discussed.
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
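The core idea of modeling fixations as an inhomogeneous spatial Poisson process can be sketched with the standard thinning algorithm: candidate points are drawn from a homogeneous process and retained in proportion to a local intensity (e.g., a saliency map). The function name and the toy center-bias intensity below are illustrative assumptions, not part of the tutorial's own code.

```python
import numpy as np

def sample_fixations(intensity, lam_max, width=1.0, height=1.0, seed=None):
    """Draw points from an inhomogeneous spatial Poisson process by
    thinning: sample candidates from a homogeneous process at rate
    lam_max, then keep each with probability intensity/lam_max.
    intensity(x, y) must be vectorized and bounded above by lam_max."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam_max * width * height)  # homogeneous candidate count
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    keep = rng.uniform(0.0, lam_max, n) < intensity(xs, ys)
    return xs[keep], ys[keep]

# Toy intensity map: a central peak mimicking the central fixation bias.
center_bias = lambda x, y: 200.0 * np.exp(
    -((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)

xs, ys = sample_fixations(center_bias, lam_max=200.0, seed=1)
```

Fitting runs in the opposite direction: given observed fixations, the intensity function is estimated (e.g., as a log-linear function of image features), which is what lets feature predictability vary from image to image.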
Correlation between burnout syndrome and psychological and psychosomatic symptoms among teachers
(2006)
Objectives: Psychosomatic disorders and symptoms that correlate with the so-called burnout syndrome turned out to be the main cause of increasing rates of premature retirement of school teachers. The aim of this study was to evaluate the relation between occupational burden and psychological strain of teachers who are still in work. Methods: A sample of 408 teachers at ten grammar schools (Am.: high school; German: Gymnasium) in south-western Germany was evaluated. To determine the styles of coping with occupational burden we used the measure of coping capacity questionnaire (MECCA). To analyse the psychopathological and psychosomatic symptom load we applied the SCL-90-R questionnaire. Results: According to the MECCA questionnaire, 32.5% of the sample suffered from burnout (type B), 17.7% suffered severe strain (type A), 35.9% showed an unambitious (type S) and 13.8% showed a healthy-ambitious coping style (type G). Burnout was significantly higher among women, divorced teachers and teachers working part-time. As part of the MECCA, teachers were asked to rate what they regarded as the strongest factor resulting in occupational burden. Teachers indicated that, besides high numbers of pupils in one class, they regarded destructive and aggressive behaviour of pupils as the primary stress factor. According to the SCL-90-R, 20% of the sample showed a severe degree (defined as > 70 points in the SCL-90-R GSI) of psychological and psychosomatic symptoms. MECCA type B (burnout) correlated significantly with high psychological and psychosomatic symptom load according to the SCL-90-R. Conclusions: In school teachers, burnout syndrome, a construct that derived from occupational psychology and occupational medicine, is significantly correlated with psychological and psychosomatic symptoms. Teachers rate destructive and aggressive behaviour of pupils as the primary stress factor.
To examine whether the dopamine receptor D4 gene (DRD4) exon III VNTR moderates the risk of infants with regulatory disorders for developing attention-deficit/hyperactivity disorder (ADHD) later in childhood. In a prospective longitudinal study of children at risk for later psychopathology, 300 participants were assessed for regulatory problems in infancy, DRD4 genotype, and ADHD symptoms and diagnoses from childhood to adolescence. To examine a potential moderating effect on ADHD measures, linear and logistic regressions were computed. Models were fit for the main effects of the DRD4 genotype (presence or absence of the 7r allele) and regulatory problems (presence or absence), with the addition of the interaction term. All models were controlled for sex, family adversity, and obstetric risk status. In children without the DRD4-7r allele, a history of regulatory problems in infancy was unrelated to later ADHD. But in children with regulatory problems in infancy, the additional presence of the DRD4-7r allele increased the risk for ADHD in childhood. The DRD4 genotype seems to moderate the association between regulatory problems in infancy and later ADHD. A replication study is needed before further conclusions can be drawn, however.
Objective: To demonstrate that children homozygous for the 10-repeat allele of the common dopamine transporter (DAT1) polymorphism who were exposed to maternal prenatal smoke exhibited significantly higher hyperactivity-impulsivity than children without these environmental or genetic risks. Study design: We performed a prospective longitudinal study from birth into early adulthood monitoring the long-term outcome of early risk factors. Maternal prenatal smoking was determined during a standardized interview with the mother when the child was 3 months old. At age 15 years, 305 adolescents participated in genotyping for the DAT1 40 base pair variable number of tandem repeats polymorphism and assessment of inattention, hyperactivity-impulsivity, and oppositional defiant/conduct disorder symptoms with the Kiddie-SADS-Present and Lifetime Version. Results: There was no bivariate association between DAT1 genotype, prenatal smoke exposure and symptoms of attention deficit hyperactivity disorder. However, a significant interaction between DAT1 genotype and prenatal smoke exposure emerged (P = .012), indicating that males with prenatal smoke exposure who were homozygous for the DAT1 10r allele had higher hyperactivity-impulsivity than males from all other groups. In females, no significant main effects of DAT1 genotype or prenatal smoke exposure or interaction effects on any symptoms were evident (all P > .25). Conclusions: This study provides further evidence for the multifactorial nature of attention deficit hyperactivity disorder and the importance of studying both genetic and environmental factors and their interaction.
The Body Appreciation Scale-2 (BAS-2) is the most current measure of body appreciation, a central facet of positive body image. This work aimed to examine the factor structure and psychometric properties of a German version. In Study 1 (N = 659; M-age = 27.19, SD = 8.57), exploratory factor analyses (EFA) revealed that the German BAS-2 has a one-dimensional factor structure in women and men, showing cross-gender factor similarity. In Study 2 (N = 472; M-age = 30.08, SD = 12.35), confirmatory factor analysis (CFA) further supported the original scale's one-dimensional factor structure after freeing correlated errors. The German BAS-2 also showed partial scalar invariance across gender, with women and men not differing significantly in latent mean scores. As predicted, we found convergent relationships with measures of self-esteem, intuitive eating, and variables associated with negative body image (i.e., weight and shape concerns, drive for thinness). Correlations with BMI were small and in an inverse direction. Incremental validity was demonstrated by predicting self-esteem and intuitive eating over and above measures of negative body image. Additionally, the German BAS-2 showed internal consistency and 2-week test-retest reliability. Overall, our results suggest that the German BAS-2 is a psychometrically sound instrument.
"BreaThink"
(2021)
Cognition is shaped by signals from outside and within the body. Following recent evidence of interoceptive signals modulating higher-level cognition, we examined whether breathing changes the production and perception of quantities. In Experiment 1, 22 adults verbally produced on average larger random numbers after inhaling than after exhaling. In Experiment 2, 24 further adults estimated the numerosity of dot patterns that were briefly shown after either inhaling or exhaling. Again, we obtained on average larger responses following inhalation than exhalation. These converging results extend models of situated cognition according to which higher-level cognition is sensitive to transient interoceptive states.
Cross-education has been extensively investigated with adults. Adult studies report asymmetrical cross-education adaptations predominately after dominant limb training. The objective of the study was to examine unilateral leg press (LP) training of the dominant or nondominant leg on contralateral and ipsilateral strength and balance measures. Forty-two youth (10-13 years) were placed (random allocation) into a dominant (n = 15) or nondominant (n = 14) leg press training group or nontraining control (n = 13). Experimental groups trained 3 times per week for 8 weeks and were tested pre-/post-training for ipsilateral and contralateral 1-repetition maximum (RM) horizontal LP, maximum voluntary isometric contraction (MVIC) of knee extensors (KE) and flexors (KF), countermovement jump (CMJ), triple hop test (THT), MVIC strength of elbow flexors (EF) and handgrip, as well as the stork and Y balance tests. Both dominant and nondominant LP training significantly (p < 0.05) increased both ipsilateral and contralateral lower body strength (LP 1RM (dominant: 59.6%-81.8%; nondominant: 59.5%-96.3%), KE MVIC (dominant: 12.4%-18.3%; nondominant: 8.6%-18.6%), KF MVIC (dominant: 7.9%-22.3%; nondominant: nonsignificant-3.8%), and power (CMJ: dominant: 11.1%-18.1%; nondominant: 7.7%-16.6%)). The exception was that nondominant LP training demonstrated a nonsignificant change with the contralateral KF MVIC. Other significant improvements were with nondominant LP training on ipsilateral EF 1RM (6.2%) and THT (9.6%). There were no significant changes with EF and handgrip MVIC. The contralateral leg stork balance test was impaired following dominant LP training. KF MVIC exhibited the only significant relative post-training to pretraining (post-test/pre-test) ratio differences between dominant versus nondominant LP cross-education training effects. 
In conclusion, children exhibit symmetrical cross-education or global training adaptations with unilateral training of dominant or nondominant upper leg.
Brain activation stability is crucial to understanding attention lapses. EEG methods could provide excellent markers to assess neuronal response variability with respect to temporal variability (intertrial coherence) and spatial variability (topographic consistency), as well as variations in activation intensity (low-frequency variability of single-trial global field power).
We calculated intertrial coherence, topographic consistency and low frequency amplitude variability during target P300 in a continuous performance test in 263 15-year-olds from a cohort with psychosocial and biological risk factors.
Topographic consistency and low frequency amplitude variability predicted reaction time fluctuations (RTSD) in a linear model. Higher RTSD was only associated with higher psychosocial adversity in the presence of the homozygous 6R-10R dopamine transporter haplotype.
We propose that topographic variability of single trial P300 reflects noise as well as variability in evoked cortical activation patterns. Dopaminergic neuromodulation interacted with environmental and biological risk factors to predict behavioural reaction time variability.
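Intertrial coherence, one of the EEG variability markers used above, is the magnitude of the across-trial mean of unit-length phase vectors at a given frequency: 1 for perfectly phase-locked trials, near 0 for random phases. The single-frequency sinusoid projection below is a simplifying assumption (toolboxes typically use wavelet or multitaper decompositions), and the function name is illustrative.

```python
import numpy as np

def intertrial_coherence(trials, sfreq, freq):
    """Intertrial phase coherence at one frequency: the magnitude of
    the across-trial mean of unit phase vectors (minimal sketch)."""
    n_times = trials.shape[1]
    t = np.arange(n_times) / sfreq
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ basis            # one complex amplitude per trial
    phases = coeffs / np.abs(coeffs)   # unit-length phase vectors
    return float(np.abs(phases.mean()))

sfreq, n_times = 100.0, 100
t = np.arange(n_times) / sfreq
rng = np.random.default_rng(7)

# Perfectly phase-locked 10 Hz trials: coherence should be 1.
locked = np.tile(np.sin(2 * np.pi * 10 * t), (20, 1))
itc_locked = intertrial_coherence(locked, sfreq, 10.0)

# Trials with random phase: coherence shrinks toward 0 as trials add up.
jittered = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi, (200, 1)))
itc_jittered = intertrial_coherence(jittered, sfreq, 10.0)
```

Topographic consistency is computed analogously across electrodes rather than across time, comparing the spatial pattern of single-trial maps against the average map.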
Background: Dopamine plays an important role in orienting, response anticipation and movement evaluation. Thus, we examined the influence of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of motor processing in a contingent negative variation (CNV) task.
Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as motor postimperative negative variation were assessed. Adolescents were genotyped for the COMT Val(158) Met and two DAT1 polymorphisms (variable number tandem repeats in the 3'-untranslated region and in intron 8).
Results: The results revealed a significant interaction between COMT and DAT1, indicating that COMT exerted stronger effects on lateralized motor post-processing (centro-parietal motor postimperative negative variation) in homozygous carriers of a DAT1 haplotype increasing DAT1 expression. Source analysis showed that the time interval 500-1000 ms after the motor response was specifically affected in contrast to preceding movement anticipation and programming stages, which were not altered.
Conclusions: Motor slow negative waves allow the genomic imaging of dopamine inactivation effects on cortical motor post-processing during response evaluation. This is the first report to point towards epistatic effects in the motor system during response evaluation, i.e. during the post-processing of an already executed movement rather than during movement programming.
Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task.
Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed.
Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced.
Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.
Measures of gender identity have almost exclusively relied on positive aspects of masculinity and femininity, although conceptually the self-concept is not limited to positive attributes. A theoretical argument is made for considering negative attributes of gender identity, followed by five studies developing the Positive-Negative Sex-Role Inventory (PN-SRI) as a new measure of gender identity. Study 1 demonstrated that many of the attributes of a German version of the Bem Sex-Role Inventory are no longer considered to differ in desirability for men and women. For the PN-SRI, Study 2 elicited attributes characterizing men and women in today's society, for which ratings of typicality and desirability as well as self-ratings by men and women were obtained in Study 3. Study 4 examined the reliability and factorial structure of the four subscales of positive and negative masculinity and femininity and demonstrated the construct and discriminant validity of the PN-SRI by showing that the negative masculinity and femininity scales were unique predictors of select validation constructs. Study 5 showed that the new instrument explained variance in the validation constructs beyond earlier measures of gender identity. Key message: Even in the construction of negative aspects of gender identity, individuals prefer gender-congruent attributes. Negative masculinity and femininity make a unique contribution to understanding gender-related differences in psychological outcome variables.
Although teen dating violence (TDV) is internationally recognized as a serious threat to adolescents' health and well-being, almost no data is available for Slovenian youth. Hence, the purpose of this study was to examine the prevalence and predictors of TDV among Slovenian adolescents for the first time. Using data from the SPMAD study (Study of Parental Monitoring and Adolescent Delinquency), 330 high school students were asked about physical TDV victimization and perpetration as well as about their dating history, relationship conflicts, peers' antisocial behavior, and informal social control by family and school. A substantial number of female and male adolescents reported victimization (16.7% of female and 12.7% of male respondents) and perpetration (21.1% of female and 6.0% of male respondents). Furthermore, the results revealed that lower age at the first relationship, relationship conflicts, and school informal social control were associated with victimization, whereas being female, relationship conflicts, having antisocial peers, and family informal social control were linked to perpetration. Implications of the study findings were discussed.
Research in legal decision making has demonstrated the tendency to blame the victim and exonerate the perpetrator of sexual assault. This study examined the hypothesis of a special leniency bias in rape cases by comparing them to cases of robbery. N = 288 participants received descriptions of rape and robbery of a female victim by a male perpetrator and made ratings of victim and perpetrator blame. Case scenarios varied with respect to the prior relationship (strangers, acquaintances, ex-partners) and coercive strategy (force vs. exploiting victim intoxication). More blame was attributed to the victim and less blame was attributed to the perpetrator for rape than for robbery. Information about a prior relationship between victim and perpetrator increased ratings of victim blame and decreased perceptions of perpetrator blame in the rape cases, but not in the robbery cases. The findings support the notion of a special leniency bias in sexual assault cases.
Infants start learning the prosodic properties of their native language before 12 months, as shown by the emergence of a trochaic bias in English-learning infants between 6 and 9 months (Jusczyk et al., 1993), and in German-learning infants between 4 and 6 months (Hohle et al., 2009, 2014), while French-learning infants do not show a bias at 6 months (Hohle et al., 2009). This language-specific emergence of a trochaic bias is supported by the fact that English and German are languages with trochaic predominance in their lexicons, while French is a language with phrase-final lengthening but lacking lexical stress. We explored the emergence of a trochaic bias in bilingual French/German infants, to study whether the developmental trajectory would be similar to monolingual infants and whether the relative amount of exposure to the two languages has an impact on the emergence of the bias. Accordingly, we replicated Hohle et al. (2009) with 24 bilingual 6-month-olds learning French and German simultaneously. All infants had been exposed to both languages for 30 to 70% of the time from birth. Using the Head Preference Procedure, infants were presented with two lists of stimuli, one made up of several occurrences of the pseudoword /GAba/ with word-initial stress (trochaic pattern), the second one made up of several occurrences of the pseudoword /gaBA/ with word-final stress (iambic pattern). The stimuli were recorded by a native German female speaker. Results revealed that these French/German bilingual 6-month-olds have a trochaic bias (as evidenced by a preference to listen to the trochaic pattern). Hence, their listening preference is comparable to that of monolingual German-learning 6-month-olds, but differs from that of monolingual French-learning 6-month-olds who did not show any preference (Hohle et al., 2009). Moreover, the size of the trochaic bias in the bilingual infants was not correlated with their amount of exposure to German.
The present results thus establish that the development of a trochaic bias in simultaneous bilinguals is not delayed compared to monolingual German-learning infants (Hohle et al., 2009) and is rather independent of the amount of exposure to German relative to French.
From fantasy to reality
(2022)
Aggression-related sexual fantasies (ASF) have been related to various forms of harmful sexual behavior in both sex offender and community samples. However, more research is needed to fully understand this relation, particularly whether ASF are associated with harmful sexual behavior beyond hostile sexism against women and a sexual preference for violence and sexual violence. In the present study, N = 428 participants (61.9% women) between 18 and 83 years of age (M = 28.17, SD = 9.7) reported their ASF and hostile sexism. They rated their sexual arousal by erotic, violent, and sexually violent pictures as a direct measure of sexual preference. Response latencies between stimulus presentation and arousal ratings were used as an indirect measure of sexual preference. ASF and the directly and indirectly assessed sexual preference for violent and sexually violent stimuli were positively correlated. They were unrelated to hostile sexism against women. ASF showed the strongest associations with self-reported sexually sadistic behavior and presumably non-consensual sexual sadism beyond these preferences and hostile sexism in the total group and separately among men and women. The findings indicate that ASF and sexual preference are not equivalent constructs and further underscore the potential relevance of ASF for harmful sexual behavior.
Background: Clock genes govern circadian rhythms and shape the effect of alcohol use on the physiological system. Exposure to severe negative life events is related to both heavy drinking and disturbed circadian rhythmicity. The aim of this study was 1) to extend previous findings suggesting an association of a haplotype-tagging single nucleotide polymorphism of the PER2 gene with drinking patterns, and 2) to examine a possible role for an interaction of this gene with life stress in hazardous drinking.
Methods: Data were collected as part of an epidemiological cohort study on the outcome of early risk factors followed since birth. At age 19 years, 268 young adults (126 males, 142 females) were genotyped for PER2 rs56013859 and were administered a 45-day alcohol timeline follow-back interview and the Alcohol Use Disorders Identification Test (AUDIT). Life stress was assessed as the number of severe negative life events during the past four years reported in a questionnaire and validated by interview.
Results: Individuals with the minor G allele of rs56013859 were found to be less engaged in alcohol use, drinking on only 72% of the days compared to homozygotes for the major A allele. Moreover, among regular drinkers, a gene x environment interaction emerged (p = .020). While no effects of genotype appeared under conditions of low stress, carriers of the G allele exhibited less hazardous drinking than those homozygous for the A allele when exposed to high stress.
Conclusions: These findings may suggest a role of the circadian rhythm gene PER2 in both the drinking patterns of young adults and in moderating the impact of severe life stress on hazardous drinking in experienced alcohol users. However, in light of the likely burden of multiple tests, the nature of the measures used and the nominal evidence of interaction, replication is needed before drawing firm conclusions.
Background:
Recent evidence from animal experiments and studies in humans suggests that early age at first drink (AFD) may lead to higher stress-induced drinking. The present study aimed to extend these findings by examining whether AFD interacted with stressful life events (SLE) and/or with daily hassles in their impact on drinking patterns among young adults.
Method:
In 306 participants of an epidemiological cohort study, AFD was assessed together with SLE during the past 3 years, daily hassles in the last month, and drinking behavior at age 22. Two outcome variables were derived, reflecting different aspects of alcohol use: the amount of alcohol consumed in the last month and the drinking frequency, indicated by the number of drinking days in the last month.
Results:
Linear regression models revealed an interaction effect between the continuous measures of AFD and SLE on the amount of alcohol consumed. The earlier young adults had their first alcoholic drink and the higher the levels of SLE they were exposed to, the disproportionately more alcohol they consumed. Drinking frequency was not affected by an interaction of these variables, while daily hassles and their interaction with AFD were unrelated to drinking behavior.
Conclusions:
These findings highlight the importance of early age at drinking onset as a risk factor for later heavy drinking under high load of SLE. Prevention programs should aim to raise age at first contact with alcohol. Additionally, support in stressful life situations and the acquisition of effective coping strategies might prevent heavy drinking in those with earlier drinking onset.
Background: Early alcohol use is one of the strongest predictors of later alcohol use disorders, with early use usually taking place during puberty. Many researchers have suggested drinking during puberty as a potential biological basis of the age at first drink (AFD) effect. However, the influence of the pubertal phase at alcohol use initiation on subsequent drinking in later life has not been examined so far.
Methods: Pubertal stage at first drink (PSFD) was determined in N = 283 young adults (131 males, 152 females) from an epidemiological cohort study. At ages 19, 22, and 23 years, drinking behavior (number of drinking days, amount of alcohol consumed, hazardous drinking) was assessed using interview and questionnaire methods. Additionally, an animal study examined the effects of pubertal or adult ethanol (EtOH) exposure on voluntary EtOH consumption in later life in 20 male Wistar rats.
Results: PSFD predicted drinking behavior in humans in early adulthood, indicating that individuals who had their first drink during puberty displayed elevated drinking levels compared to those with postpubertal drinking onset. These findings were corroborated by the animal study, in which rats that received free access to alcohol during the pubertal period were found to consume more alcohol as adults, compared to the control animals that first came into contact with alcohol during adulthood.
Conclusions: The results point to a significant role of stage of pubertal development at first contact with alcohol for the development of later drinking habits. Possible biological mechanisms and implications for prevention are discussed.
Interaction between CRHR1 gene and stressful life events predicts adolescent heavy alcohol use
(2007)
Background: Recent animal research suggests that alterations in the corticotropin releasing hormone receptor 1 (CRHR1) may lead to heavy alcohol use following repeated stress. The aim of this study was to examine interactions between two haplotype-tagging single nucleotide polymorphisms (SNPs) covering the CRHR1 gene and adverse life events on heavy drinking in adolescents. Methods: Data were available from the Mannheim Study of Children at Risk, an ongoing cohort study of the long-term outcome of early risk factors followed since birth. At age 15 years, 280 participants (135 males, 145 females) completed a self-report questionnaire measuring alcohol use and were genotyped for two SNPs (rs242938, rs1876831) of CRHR1. Assessment of negative life events over the past three years was obtained by a standardized interview with the parents. Results: Adolescents homozygous for the C allele of rs1876831 drank higher maximum amounts of alcohol per occasion and had greater lifetime rates of heavy drinking in relation to negative life events than individuals carrying the T allele. No gene x environment interactions were found for regular drinking or between rs242938 and stressful life events. Conclusions: These findings provide the first evidence in humans that the CRHR1 gene interacts with exposure to stressful life events to predict heavy alcohol use in adolescents.
Several lines of evidence have implicated the mesolimbic dopamine reward pathway in altered brain function resulting from exposure to early adversity. The present study examined the impact of early life adversity on different stages of neuronal reward processing later in life and their association with a related behavioral phenotype, i.e. attention deficit/hyperactivity disorder (ADHD). 162 healthy young adults (mean age = 24.4 years; 58% female) from an epidemiological cohort study followed since birth participated in a simultaneous EEG-fMRI study using a monetary incentive delay task. Early life adversity according to an early family adversity index (EFA) and lifetime ADHD symptoms were assessed using standardized parent interviews conducted at the offspring's age of 3 months and between 2 and 15 years, respectively. fMRI region-of-interest analysis revealed a significant effect of EFA during reward anticipation in reward-related areas (i.e. ventral striatum, putamen, thalamus), indicating decreased activation when EFA increased. EEG analysis demonstrated a similar effect for the contingent negative variation (CNV), with the CNV decreasing with the level of EFA. In contrast, during reward delivery, activation of the bilateral insula, right pallidum and bilateral putamen increased with EFA. There was a significant association of lifetime ADHD symptoms with lower activation in the left ventral striatum during reward anticipation and higher activation in the right insula during reward delivery. The present findings indicate a differential long-term impact of early life adversity on reward processing, implicating hyporesponsiveness during reward anticipation and hyperresponsiveness when receiving a reward. Moreover, a similar activation pattern related to lifetime ADHD suggests that the impact of early life stress on ADHD may possibly be mediated by a dysfunctional reward pathway.
Although there is ample evidence linking insecure attachment styles and intimate partner violence (IPV), little is known about the psychological processes underlying this association, especially from the victim’s perspective. The present study examined how attachment styles relate to the experience of sexual and psychological abuse, directly or indirectly through destructive conflict resolution strategies, both self-reported and attributed to their opposite-sex romantic partner. In an online survey, 216 Spanish undergraduates completed measures of adult attachment style, engagement and withdrawal conflict resolution styles shown by self and partner, and victimization by an intimate partner in the form of sexual coercion and psychological abuse. As predicted, anxious and avoidant attachment styles were directly related to both forms of victimization. Also, an indirect path from anxious attachment to IPV victimization was detected via destructive conflict resolution strategies. Specifically, anxiously attached participants reported a higher use of conflict engagement by themselves and by their partners. In addition, engagement reported by the self and perceived in the partner was linked to an increased probability of experiencing sexual coercion and psychological abuse. Avoidant attachment was linked to higher withdrawal in conflict situations, but the paths from withdrawal to perceived partner engagement, sexual coercion, and psychological abuse were non-significant. No gender differences in the associations were found. The discussion highlights the role of anxious attachment in understanding escalating patterns of destructive conflict resolution strategies, which may increase the vulnerability to IPV victimization.
Is bad intent negligible?
(2018)
The hostile attribution bias (HAB) is a well-established risk factor for aggression. It is considered part of the suspicious mindset that may cause highly victim-justice sensitive individuals to behave uncooperatively. Thus, links of victim justice sensitivity (JS) with negative behavior, such as aggression, may be better explained by HAB. The present study tested this hypothesis in N=279 German adolescents who rated their JS, HAB, and physical, relational, verbal, reactive, and proactive aggression. Victim JS predicted physical, relational, verbal, reactive, and proactive aggression when HAB was controlled. HAB only predicted physical and proactive aggression. There were no moderator effects. Injustice seems an important reason for aggression irrespective of whether or not it is intentionally caused, particularly among those high in victim JS. Thus, victim JS should be considered a potentially important risk factor for aggression and receive more attention from research on aggression and from preventive efforts.
Individuals differ in their tendency to perceive injustice and in their responses towards these perceptions. Those high in justice sensitivity tend to show intense negative affective, cognitive, and behavioral responses towards injustice that in part also depend on the perspective from which injustice is perceived. The present research project showed that inter-individual differences in justice sensitivity may already be measured and observed in childhood and adolescence and that early adolescence seems an important age-range and developmental stage for the stabilization of these differences. Furthermore, the different justice sensitivity perspectives were related to different forms of externalizing (aggression, ADHD, bullying) and internalizing problem behavior (depressive symptoms) both in children and adolescents as well as in adults in cross-sectional studies. Particularly victim sensitivity may apparently constitute an important risk factor for a broad range of both externalizing and internalizing maladaptive behaviors and mental health problems as shown in those studies using longitudinal data. Regarding aggressive behavior, victim justice sensitivity may even constitute a risk factor above and beyond other important and well-established risk factors for aggression and similar sensitivity constructs that had previously been linked to this kind of behavior. In contrast, observer and perpetrator sensitivity (perpetrator sensitivity in particular) tended to show negative links with externalizing problem behavior and instead predicted prosocial behavior in children and adolescents. However, there were also isolated positive relations of perpetrator sensitivity with emotional problems as well as of observer sensitivity with reactive aggression and depressive symptoms.
Taken together, the findings from the present research show that justice sensitivity forms in childhood at the latest and that it may have important, long-term influences on pro- and antisocial behavior and mental health. Thus, justice sensitivity requires more attention in research on the prevention and intervention of mental health problems and antisocial behavior, such as aggression.
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutions of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutions of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Objective:
Rejection sensitivity and justice sensitivity are personality traits that are characterized by frequent perceptions and intense adverse responses to negative social cues. Whereas there is good evidence for associations between rejection sensitivity, justice sensitivity, and internalizing problems, no longitudinal studies have investigated their association with eating disorder (ED) pathology so far. Thus, the present study examined longitudinal relations between rejection sensitivity, justice sensitivity, and ED pathology.
Method:
Participants (N = 769) reported on their rejection sensitivity, justice sensitivity, and ED pathology at 9-19 (T1), 11-21 (T2), and 14-22 years of age (T3).
Results:
Latent cross-lagged models showed longitudinal associations of ED pathology with anxious rejection sensitivity and with observer and victim justice sensitivity. T1 and T2 ED pathology predicted higher T2 and T3 anxious rejection sensitivity, respectively. In turn, T2 anxious rejection sensitivity predicted more T3 ED pathology. T1 observer justice sensitivity predicted more T2 ED pathology, which predicted higher T3 observer justice sensitivity. Furthermore, T1 ED pathology predicted higher T2 victim justice sensitivity.
Discussion:
Rejection sensitivity, particularly anxious rejection sensitivity, and justice sensitivity may be involved in the maintenance or worsening of ED pathology and should be considered by future research and in the prevention and treatment of ED pathology. Also, mental health problems may increase rejection sensitivity and justice sensitivity traits in the long term.
Recent research provides evidence that aggressive sexual fantasies predict aggressive sexual behavior in the general population. However, sexual fantasies, including fantasies about the infliction of pain and humiliation, should be frequent and often consensually acted upon among individuals with sadomasochistic likings. The question arises whether sexual fantasies with aggressive content still predict presumably non-consensual aggressive sexual behavior in individuals with sadomasochistic likings, given that BDSM encounters are generally considered consensual. To investigate this question, we conducted a questionnaire survey of sexual fantasies, assessing the frequency of seventy sexual fantasies involving non-aggressive, masochistic, and aggressive acts. Our sample (N = 182) contained 99 respondents who self-identified as sadist, masochist, or switcher; 44 reported no such identification. For respondents reporting BDSM identification, we replicated a factor structure for sexual fantasies similar to that previously found in the general population, including three factors reflecting fantasies about increasingly severe aggressive sexual acts. Fantasies about injuring a partner and/or using weapons and fantasies about sexual coercion predicted presumably non-consensual sexual behavior independently of other risk factors for aggressive sexual behavior and irrespective of BDSM identification. Hence, severely aggressive sexual fantasies may predispose to presumably non-consensual sexual behavior in individuals both with and without BDSM identification.
Background: Aggression-related sexual fantasies (ASF) are considered an important risk factor for sexual aggression, but empirical knowledge is limited, in part because previous research has been based on predominantly male, North-American college samples and limited numbers of questions. Aim: The present study aimed to foster the knowledge about the frequency and correlates of ASF, while including a large sample of women and a broad range of ASF. Method: A convenience sample of N = 664 participants from Germany, including 508 (77%) women and 156 (23%) men with a median age of 25 (21-27) years, answered an online questionnaire. Participants were mainly recruited via social networks (online and in person) and were mainly students. We examined the frequencies of (aggression-related) sexual fantasies and their expected factor structure (factors reflecting affective, experimental, masochistic, and aggression-related contents) via exploratory factor analysis. We investigated potential correlates (e.g., psychopathic traits, attitudes towards sexual fantasies) as predictors of ASF using multiple regression analyses. Finally, we examined whether ASF would positively predict sexual aggression beyond other pertinent risk factors using multiple regression analysis. Outcomes: The participants rated the frequency of a broad set of 56 aggression-related and other sexual fantasies, attitudes towards sexual fantasies, the Big Five (i.e., broad personality dimensions including neuroticism and extraversion), sexual aggression, and other risk factors for sexual aggression. Results: All participants reported non-aggression-related sexual fantasies, and 77% reported at least one ASF in their lives. Being male, frequent sexual fantasies, psychopathic traits, and negative attitudes towards sexual fantasies predicted more frequent ASF.
ASF were the strongest predictor of sexual aggression beyond other risk factors, including general aggression, psychopathic traits, rape myth acceptance, and violent pornography consumption. Clinical Translation: ASF may be an important risk factor for sexual aggression and should be more strongly considered in prevention and intervention efforts. Strengths and Limitations: The strengths of the present study include using a large item pool and a large sample with a large proportion of women in order to examine ASF as a predictor of sexual aggression beyond important control variables. Its weaknesses include the reliance on cross-sectional data, which preclude causal inferences, and not consistently distinguishing between consensual and non-consensual acts. Conclusion: ASF are a frequent phenomenon even in the general population and among women and show strong associations with sexual aggression. Thus, they require more attention from research on sexual aggression and its prevention.
Individuals differ in their sensitivity toward injustice. Justice-sensitive persons perceive injustice more frequently and show stronger responses to it. Justice sensitivity has been studied predominantly in adults; little is known about its development in childhood and adolescence and its connection to prosocial behavior and emotional and behavioral problems. This study evaluates a version of the justice sensitivity inventory for children and adolescents (JSI-CA5) in 1472 9- to 17-year olds. Items and scales showed good psychometric properties and correlations with prosocial behavior and conduct problems similar to findings in adults, supporting the reliability and validity of the scale. We found individual differences in justice sensitivity as a function of age and gender. Furthermore, justice sensitivity predicted emotional and behavioral problems in children and adolescents over a 1- to 2-year period. Justice sensitivity perspectives can therefore be considered as risk and/or protective factors for mental health in childhood and adolescence.
Justice sensitivity captures individual differences in the frequency with which injustice is perceived and the intensity of emotional, cognitive, and behavioral reactions to it. Persons with ADHD have been reported to show high justice sensitivity, and a recent study provided evidence for this notion in an adult sample. In 1,235 German 10- to 19-year olds, we measured ADHD symptoms, justice sensitivity from the victim, observer, and perpetrator perspective, the frequency of perceptions of injustice, anxious and angry rejection sensitivity, depressive symptoms, conduct problems, and self-esteem. Participants with ADHD symptoms reported significantly higher victim justice sensitivity, more perceptions of injustice, and higher anxious and angry rejection sensitivity, but significantly lower perpetrator justice sensitivity than controls. In latent path analyses, justice sensitivity as well as rejection sensitivity partially mediated the link between ADHD symptoms and comorbid problems when considered simultaneously. Thus, both justice sensitivity and rejection sensitivity may contribute to explaining the emergence and maintenance of problems typically associated with ADHD symptoms, and should therefore be considered in ADHD therapy.
Individual differences in justice sensitivity and rejection sensitivity have been linked to differences in aggressive behavior in adults. However, there is little research studying this association in children and adolescents and considering the two constructs in combination. We assessed justice sensitivity from the victim, observer, and perpetrator perspective as well as anxious and angry rejection sensitivity and linked both constructs to different forms (physical, relational), and functions (proactive, reactive) of self-reported aggression and to teacher- and parent-rated aggression in N=1,489 9- to 19-year olds in Germany. Victim sensitivity and both angry and anxious rejection sensitivity showed positive correlations with all forms and functions of aggression. Angry rejection sensitivity also correlated positively with teacher-rated aggression. Perpetrator sensitivity was negatively correlated with all aggression measures, and observer sensitivity also correlated negatively with all aggression measures except for a positive correlation with reactive aggression. Path models considering the sensitivity facets in combination and controlling for age and gender showed that higher victim justice sensitivity predicted higher aggression on all measures. Higher perpetrator sensitivity predicted lower physical, relational, proactive, and reactive aggression. Higher observer sensitivity predicted lower teacher-rated aggression. Angry rejection sensitivity predicted higher proactive and reactive aggression, whereas anxious rejection sensitivity did not make an additional contribution to the prediction of aggression. The findings are discussed in terms of social information processing models of aggression in childhood and adolescence. Aggr. Behav. 41:353-368, 2015. (c) 2014 Wiley Periodicals, Inc.
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated; particularly those with an egoistic focus, such as victim justice, rejection, and provocation sensitivity, hostile attributions and trait anger as well as those with an altruistic focus, such as observer justice, perpetrator justice, and moral disgust sensitivity. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
Depressive symptoms have been related to anxious rejection sensitivity, but little is known about relations with angry rejection sensitivity and justice sensitivity. We measured rejection sensitivity, justice sensitivity, and depressive symptoms in 1,665 9- to 21-year-olds at two measurement points. Participants with high T1 levels of depressive symptoms reported higher anxious and angry rejection sensitivity and higher justice sensitivity than controls at T1 and T2. T1 rejection sensitivity, but not justice sensitivity, predicted T2 depressive symptoms; high victim justice sensitivity, however, added to the stabilization of depressive symptoms. T1 depressive symptoms positively predicted T2 anxious and angry rejection sensitivity and victim justice sensitivity. Hence, sensitivity toward negative social cues may be both a cause and a consequence of depressive symptoms and requires consideration in the cognitive-behavioral treatment of depression.
Research indicates that offenders follow individual pathways towards school attacks and that offender profiles are inconsistent. Thus, several authors have classified offenders according to mental disorders, motives, or the number/kinds of victims. We assumed differences between single- and multiple-victim offenders (those intending to kill one victim vs. more than one). In qualitative and quantitative analyses of data from qualitative content analyses of case files on seven school attacks in Germany, we found differences between the offender groups in the seriousness, patterns, characteristics, and classes of leaking (announcements of offences), in offence-related behaviour, and in offence characteristics. There were only minor differences in risk factors. Our research thus adds to the understanding of school attacks and leaking. Differences between offender groups require consideration in the planning of effective preventive approaches.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately - a difficult task, because knowledge about leaking is sparse. The present study, therefore, examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment after the information session and six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several aspects (e.g. reported more perceived courses of action or improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
School shooters are often described as narcissistic, but empirical evidence is scant. To provide more reliable and detailed information, we conducted an exploratory study, analyzing police investigation files on seven school shootings in Germany, looking for symptoms of narcissistic personality disorder as defined by the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) in witnesses' and offenders' reports and expert psychological evaluations. Three out of four offenders who had been treated for mental disorders prior to the offenses displayed detached symptoms of narcissism, but none was diagnosed with narcissistic personality disorder. Of the other three, two displayed narcissistic traits. In one case, the number of symptoms would have justified a diagnosis of narcissistic personality disorder. Offenders showed low and high self-esteem and a range of other mental disorders. Thus, narcissism is not a common characteristic of school shooters, but possibly more frequent than in the general population. This should be considered in developing adequate preventive and intervention measures.
Background: Dementia is a psychiatric condition the development of which is associated with numerous aspects of life. Our aim was to estimate dementia risk factors in German primary care patients.
Methods: The case-control study included primary care patients (70-90 years) with first diagnosis of dementia (all-cause) during the index period (01/2010-12/2014) (Disease Analyzer, Germany), and controls without dementia matched (1:1) to cases on the basis of age, sex, type of health insurance, and physician. Practice visit records were used to verify that there had been 10 years of continuous follow-up prior to the index date. Multivariate logistic regression models were fitted with dementia as a dependent variable and the potential predictors.
Results: The mean age for the 11,956 cases and the 11,956 controls was 80.4 (SD: 5.3) years; 39.0% were male and 1.9% had private health insurance. In the multivariate regression model, the following variables were significantly associated with an increased risk of dementia: diabetes (OR: 1.17; 95% CI: 1.10-1.24), lipid metabolism disorders (1.07; 1.00-1.14), stroke incl. TIA (1.68; 1.57-1.80), Parkinson's disease (PD) (1.89; 1.64-2.19), intracranial injury (1.30; 1.00-1.70), coronary heart disease (1.06; 1.00-1.13), mild cognitive impairment (MCI) (2.12; 1.82-2.48), and mental and behavioral disorders due to alcohol use (1.96; 1.50-2.57). The use of statins (OR: 0.94; 0.90-0.99), proton-pump inhibitors (PPIs) (0.93; 0.90-0.97), and antihypertensive drugs (0.96; 0.94-0.99) was associated with a decreased risk of developing dementia.
Conclusions: The risk factors for dementia found in this study are consistent with the literature. Nevertheless, the associations between statin, PPI, and antihypertensive drug use and a decreased risk of dementia need further investigation.
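The odds ratios with 95% confidence intervals reported above follow directly from a logistic model's fitted log-odds coefficients. A minimal illustrative sketch (the coefficient and standard error below are back-calculated examples, not the study's actual model output):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, lower, upper) for a log-odds coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Example: a coefficient of ~0.157 on the log-odds scale with SE ~0.031
# corresponds to an OR of ~1.17 (95% CI: 1.10-1.24), matching the
# magnitude reported above for diabetes.
or_, lo, hi = odds_ratio_ci(0.157, 0.031)
```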
There is a longstanding and widely held misconception about the relative remoteness of abstract concepts from concrete experiences. This review examines the current evidence for external influences and internal constraints on the processing, representation, and use of abstract concepts, like truth, friendship, and number. We highlight the theoretical benefit of distinguishing between grounded and embodied cognition and then ask which roles perception, action, language, and social interaction play in acquiring, representing, and using abstract concepts. By reviewing several studies, we show that abstract concepts are, contrary to the accepted definition, not detached from perception and action. Focussing on magnitude-related concepts, we also discuss evidence for cultural influences on abstract knowledge and explore how internal processes such as inner speech, metacognition, and inner bodily signals (interoception) influence the acquisition and retrieval of abstract knowledge. Finally, we discuss some methodological developments. Specifically, we focus on the importance of studies that investigate the time course of conceptual processing, and we argue that, because of the paramount role of sociality for abstract concepts, new methods are necessary to study concepts in interactive situations. We conclude that bodily, linguistic, and social constraints provide important theoretical limitations for our theories of conceptual knowledge.
This special issue, "Concrete constraints of abstract concepts", addresses the role of concrete determinants, both external and internal to the human body, in the acquisition, processing, and use of abstract concepts, while also presenting readers with an overview of the methods used to assess their representation.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences that contain verbs with past time reference, which question words and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated; and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam Sentence Corpus
(2008)
The surprisal of a word on a probabilistic grammar constitutes a promising complexity metric for human sentence comprehension difficulty. Using two different grammar types, surprisal is shown to have an effect on fixation durations and regression probabilities in a sample of German readers’ eye movements, the Potsdam Sentence Corpus. A linear mixed-effects model was used to quantify the effect of surprisal while taking into account unigram and bigram frequency, word length, and empirically derived word predictability; both the so-called “early” and “late” measures of processing difficulty showed an effect of surprisal. Surprisal is also shown to have a small but statistically non-significant effect on empirically derived predictability itself. This work thus demonstrates the importance of including parsing costs as a predictor of comprehension difficulty in models of reading, and suggests that a simple identification of early measures with syntactic parsing costs, and of late measures with the durations of post-syntactic events, may be difficult to uphold.
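The surprisal metric discussed above has a simple definition: the surprisal of a word is the negative log probability of that word given its preceding context. A minimal sketch (a toy probability value stands in for the grammar-derived prefix probabilities the paper actually uses):

```python
import math

def surprisal(prob):
    """Surprisal in bits of an event with conditional probability `prob`:
    -log2 P(word | context). Low-probability words carry high surprisal."""
    return -math.log2(prob)

# A highly predictable continuation carries little surprisal...
low = surprisal(0.5)    # 1 bit
# ...while an unexpected continuation carries much more.
high = surprisal(0.01)
```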
Suboptimal post-operative improvements in functional capacity are often observed after minimally invasive aortic valve replacement (mini-AVR). It remains to be studied how AVR affects cardiopulmonary and skeletal muscle function during exercise, to explain these clinical observations and to provide a basis for improved/tailored post-operative rehabilitation. Twenty-two patients with severe aortic stenosis (AS) (aortic valve area (AVA) < 1.0 cm(2)) were preoperatively compared to 22 healthy controls during submaximal constant-workload endurance-type exercise for oxygen uptake (V-O2), carbon dioxide output (V-CO2), respiratory gas exchange ratio, expiratory volume (V-E), ventilatory equivalents for O-2 (V-E/V-O2) and CO2 (V-E/V-CO2), respiratory rate (RR), tidal volume (V-t), heart rate (HR), oxygen pulse (V-O2/HR), blood lactate, Borg ratings of perceived exertion (RPE), and exercise-onset V-O2 kinetics. These exercise tests were repeated at 5 and 21 days after AVR surgery (n = 14), along with echocardiographic examinations. The respiratory exchange ratio and ventilatory equivalents (V-E/V-O2 and V-E/V-CO2) were significantly elevated, V-O2 and V-O2/HR were significantly lowered, and exercise-onset V-O2 kinetics were significantly slower in AS patients vs. healthy controls (P < 0.05). Although the AVA was restored by mini-AVR in AS patients, V-E/V-O2 and V-E/V-CO2 worsened further within 5 days after surgery, accompanied by elevations in Borg RPE, V-E, and RR, and lowered V-t. At 21 days after mini-AVR, exercise-onset V-O2 kinetics slowed further significantly (P < 0.05). A decline in pulmonary function was observed early after mini-AVR surgery, which was followed by a decline in skeletal muscle function in the subsequent weeks of recovery. Therefore, a tailored rehabilitation programme should include training modalities for the respiratory and peripheral muscular systems.
Keeping the breath in mind
(2021)
Scientific interest in the brain and body interactions has been surging in recent years. One fundamental yet underexplored aspect of brain and body interactions is the link between the respiratory and the nervous systems. In this article, we give an overview of the emerging literature on how respiration modulates neural, cognitive and emotional processes. Moreover, we present a perspective linking respiration to the free-energy principle. We frame volitional modulation of the breath as an active inference mechanism in which sensory evidence is recontextualized to alter interoceptive models. We further propose that respiration-entrained gamma oscillations may reflect the propagation of prediction errors from the sensory level up to cortical regions in order to alter higher level predictions. Accordingly, controlled breathing emerges as an easily accessible tool for emotional, cognitive, and physiological regulation.
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally reduces balance performance in healthy adults. However, interactions between cognitive resource reduction, aging, and balance task complexity remain elusive.
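A pooled effect such as the SMDwm reported above is conventionally obtained with inverse-variance weighting under a random-effects model. A hedged sketch of the standard DerSimonian-Laird procedure (the effect sizes and variances below are hypothetical examples, not the seven studies actually analysed):

```python
def dl_pooled_effect(effects, variances):
    """Pool standardized mean differences with DerSimonian-Laird
    random-effects weights (1 / (within-study variance + tau^2))."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fe_mean = sum(wi * d for wi, d in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic around the fixed-effect mean
    q = sum(wi * (d - fe_mean) ** 2 for wi, d in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    return sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)

# Hypothetical per-study SMDs and variances for illustration only:
pooled = dl_pooled_effect([-0.2, -0.4, -0.6], [0.04, 0.05, 0.04])
```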