Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
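One of the replication criteria above — whether the original effect size falls inside the 95% confidence interval of the replication effect size — can be sketched in a few lines. This is an illustrative sketch only, assuming effects expressed as correlation coefficients and a standard Fisher z-transform interval; the function names and numbers are not taken from the study.

```python
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96) -> tuple:
    """95% confidence interval for a correlation via the Fisher z-transform."""
    z = math.atanh(r)                 # transform r to z-space
    se = 1.0 / math.sqrt(n - 3)       # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to r-space

def original_in_replication_ci(r_orig: float, r_rep: float, n_rep: int) -> bool:
    """Replication criterion: does the original effect size fall inside
    the replication's 95% CI? (Hypothetical helper, not the study's code.)"""
    lo, hi = fisher_ci(r_rep, n_rep)
    return lo <= r_orig <= hi

# Illustrative numbers: original r = .40, replication r = .20 with n = 100
print(original_in_replication_ci(0.40, 0.20, 100))  # → False: effect declined
```

With a smaller decline (e.g., original r = .25), the same check returns True, which is why this criterion classifies more studies as "replicated" than a simple significance test does.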
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. From these accounts, we hypothesized that younger infants would need additional agency cues such as a salient action effect to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action presented either with or without an additional salient action effect (Exp. 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action presented with a salient action effect (Exp. 3). The 6-month-olds showed tracking gaze behavior, and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. However, the 7-month-olds showed predictive gaze behavior in the action-effect condition, but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with a mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective for the observation of human hands.
Action effects have been stated to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement and a salient action effect) or with only a salient action effect, but infants showed tracking gaze when the claw showed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared to purely kinematic cues, seem to be especially important for infants' online processing of goal-directed actions.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent. Other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds’ and adults’ goal anticipation for a grasping action performed by a mechanical claw. With an eye-tracker, we measured anticipations in two conditions, where the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present, but reactive when no agency cues were presented. Adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults’ predictions did not change across trials in the agency condition, but decelerated in the mechanical condition. Thus, agency cues and own action experience are important for infants’ and adults’ online processing of goal-directed actions by non-human agents.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and own action experience.
We assessed intra-individual variability of response times (RT) and single-trial P3 amplitudes following targets in healthy adults during a Flanker/NO-GO task. RT variability and variability of the neural responses were coupled at the faster frequencies examined (0.07-0.17 Hz) at Pz, the site of the target-P3 maximum, despite non-significant associations for overall variability (standard deviation, SD). Frequency-specific patterns of variability in the single-trial P3 may help to understand the neurophysiology of RT variability and its explanatory models of attention allocation deficits beyond intra-individual variability summary indices such as SD.
The color red has been implicated in a variety of social processes, including those involving mating. While previous research suggests that women sometimes wear red strategically to increase their attractiveness, the replicability of this literature has been questioned. The current research is a reasonably powered conceptual replication designed to strengthen this literature by testing whether women are more inclined to display the color red 1) during fertile (as compared with less fertile) days of the menstrual cycle, and 2) when expecting to interact with an attractive man (as compared with a less attractive man and with a control condition). Analyses controlled for a number of theoretically relevant covariates (relationship status, age, the current weather). Only the latter hypothesis received mixed support (mainly among women on hormonal birth control), whereas results concerning the former hypothesis did not reach significance. Women (N = 281) displayed more red when expecting to interact with an attractive man; findings did not support the prediction that women would increase their display of red on fertile days of the cycle. Findings thus suggested only mixed replicability for the link between the color red and psychological processes involving romantic attraction. They also illustrate the importance of further investigating the boundary conditions of color effects on everyday social processes.
The aim of the present study was to investigate the test-retest reliability of the olfactory detection threshold subtest of the Sniffin' Sticks test battery when administered repeatedly at 4 time points. The detection threshold test was repeatedly conducted in 64 healthy subjects. On the first testing session, the threshold test was accomplished 3 times (T(1) = 0 min, T(2) = 35 min, and T(3) = 105 min), representing short-term testing. A fourth threshold test was conducted on a second testing session (T(4) = 35.1 days after the first testing session), representing long-term testing. The average scores for olfactory detection threshold for n-butanol did not differ significantly across the 4 points of time. The test-retest reliabilities (Pearson's r) between the 4 time points of threshold testing were in a range of 0.43-0.85 (P < 0.01). These results support the notion that the olfactory detection threshold test is a highly reliable method for repeated olfactory testing, even if the test is repeated more than once per day and over a long-term period. It is concluded that the olfactory detection threshold test of the Sniffin' Sticks is suitable for repeated testing during experimental or clinical studies.
Applied to the nasal mucosa in low concentrations, nicotine vapor evokes odorous sensations (mediated by the olfactory system) whereas at higher concentrations nicotine vapor additionally produces burning and stinging sensations in the nose (mediated by the trigeminal system). The objective of this study was to determine whether intranasal stimulation with suprathreshold concentrations of S(-)-nicotine vapor causes brain activation in olfactory cortical areas or if trigeminal cortical areas are also activated. Individual olfactory detection thresholds for S(-)-nicotine were determined in 19 healthy occasional smokers using a computer-controlled air-dilution olfactometer. Functional magnetic resonance images were acquired using a 1.5T MR scanner with applications of nicotine in concentrations at or just above the individual's olfactory detection threshold. Subjects reliably perceived the stimuli as being odorous. Accordingly, activation of brain areas known to be involved in processing of olfactory stimuli was identified. Although most of the subjects never or only rarely observed a burning or painful sensation in the nose, brain areas associated with the processing of painful stimuli were activated in all subjects. This indicates that the olfactory and trigeminal systems are activated during perception of nicotine and it is not possible to completely separate olfactory from trigeminal effects by lowering the concentration of the applied nicotine. In conclusion, even at low concentrations that do not consistently lead to painful sensations, intranasally applied nicotine activates both the olfactory and the trigeminal system.
Cultural generality versus specificity of media violence effects on aggression was examined in seven countries (Australia, China, Croatia, Germany, Japan, Romania, the United States). Participants reported aggressive behaviors, media use habits, and several other known risk and protective factors for aggression. Across nations, exposure to violent screen media was positively associated with aggression. This effect was partially mediated by aggressive cognitions and empathy. The media violence effect on aggression remained significant even after statistically controlling a number of relevant risk and protective factors (e.g., abusive parenting, peer delinquency), and was similar in magnitude to effects of other risk factors. In support of the cumulative risk model, joint effects of different risk factors on aggressive behavior in each culture were larger than effects of any individual risk factor.
We uniquely introduce convex production costs into a cartel model involving spatial price discrimination. We demonstrate that greater convexity improves cartel stability and that for sufficient convexity first best locations will be adopted. We show that allowing locations to vary over the game reduces cartel stability but that greater convexity continues to improve that stability. Moreover, when the degree of convexity does not support the first best collusive locations, other collusive locations exist that require less stability and these may either increase or decrease social welfare relative to competition. Critically, these locations that require less stability are more dispersed in sharp contrast to the known result assuming linear production costs.
The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in the word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 was at least four letters long. We discuss implications for models of eye-movement control in reading.
We measured memory span for assembly instructions involving objects with handles oriented to the left or right side. Right-handed participants remembered more instructions when objects' handles were spatially congruent with the hand used in forthcoming assembly actions. No such affordance-based memory benefit was found for left-handed participants. These results are discussed in terms of motor simulation as an embodied rehearsal mechanism.
Analysis of physicians' probability estimates of a medical outcome based on a sequence of events
(2022)
IMPORTANCE
The probability of a conjunction of 2 independent events is the product of the probabilities of the 2 components and therefore cannot exceed the probability of either component; violation of this basic law is called the conjunction fallacy. A common medical decision-making scenario involves estimating the probability of a final outcome resulting from a sequence of independent events; however, little is known about physicians' ability to accurately estimate the overall probability of success in these situations.
OBJECTIVE
To ascertain whether physicians are able to correctly estimate the overall probability of a medical outcome resulting from 2 independent events.
DESIGN, SETTING, AND PARTICIPANTS
This survey study consisted of 3 separate substudies, in which 215 physicians were asked via internet-based survey to estimate the probability of success of each of 2 components of a diagnostic or prognostic sequence as well as the overall probability of success of the 2-step sequence. Substudy 1 was performed from April 2 to 4, 2021, substudy 2 from November 2 to 11, 2021, and substudy 3 from May 13 to 19, 2021. All physicians were board certified or board eligible in the primary specialty germane to the substudy (ie, obstetrics and gynecology for substudies 1 and 3 and pulmonology for substudy 2), were recruited from a commercial survey service, and volunteered to participate in the study.
EXPOSURES
Case scenarios presented in an online survey.
MAIN OUTCOMES AND MEASURES
Respondents were asked to provide their demographic information in addition to 3 probability estimates. The first substudy included a scenario describing a brow presentation discovered during labor; the 2 conjuncts were the probabilities that the brow presentation would resolve and that the delivery would be vaginal. The second substudy involved a diagnostic evaluation of an incidentally discovered pulmonary nodule; the 2 conjuncts were the probabilities that the patient had a malignant condition and that a technically successful transthoracic needle biopsy would reveal a malignant condition. The third substudy included a modification of the first substudy in an attempt to debias the conjunction fallacy prevalent in the first substudy. Respondents' own probability estimates of the individual events were used to calculate the mathematically correct conjunctive probability.
RESULTS
Among 215 respondents, the mean (SD) age was 54.0 (9.5) years; 142 respondents (66.0%) were male. Data on race and ethnicity were not collected. A total of 168 physicians (78.1%) estimated the probability of the 2-step sequence to be greater than the probability of at least 1 of the 2 component events. Compared with the product of their 2 estimated components, respondents overestimated the combined probability by 12.8% (95% CI, 9.6%-16.1%; P < .001) in substudy 1, 19.8% (95% CI, 16.6%-23.0%; P < .001) in substudy 2, and 18.0% (95% CI, 13.4%-22.5%; P < .001) in substudy 3, results that were mathematically incoherent (ie, formally illogical and mathematically incorrect).
CONCLUSIONS AND RELEVANCE
In this survey study of 215 physicians, respondents consistently overestimated the combined probability of 2 events compared with the probability calculated from their own estimates of the individual events. This biased estimation, consistent with the conjunction fallacy, may have substantial implications for diagnostic and prognostic decision-making.
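The probability rule at issue above is simple arithmetic: for independent events, P(A and B) = P(A) x P(B), and a conjunction can never be more probable than its least probable component. A minimal sketch of the coherence check, with illustrative numbers and a hypothetical function name (neither taken from the study):

```python
def conjunction_check(p_a: float, p_b: float, p_combined: float) -> dict:
    """Check a stated estimate of P(A and B) against the conjunction rule.

    For independent events the coherent combined estimate is p_a * p_b,
    and any estimate above min(p_a, p_b) commits the conjunction fallacy.
    """
    product = p_a * p_b
    return {
        "implied_product": product,                        # coherent answer
        "overestimate": p_combined - product,              # positive => too high
        "conjunction_fallacy": p_combined > min(p_a, p_b), # formally incoherent
    }

# Illustrative scenario: a physician rates each of two independent steps
# at 60% but the two-step sequence at 70%.
result = conjunction_check(0.6, 0.6, 0.7)
# implied_product ≈ 0.36; since 0.7 > 0.6, the estimate is incoherent.
```

The study's key comparison is the "overestimate" term: respondents' combined estimates exceeded the product of their own component estimates by 12.8% to 19.8% across substudies.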
We examined face memory deficits in patients with idiopathic Parkinson's disease (IPD) with specific regard to the moderating role of sex and the different memory processes involved. We tested short- and long-term face recognition memory in 18 nonclinical participants and 18 IPD patients matched for sex, education, and age. We varied the duration of item presentation (1, 5, or 10 s), the time of testing (immediately, 1 hr, 24 hrs), and the possibility to re-encode items. In accordance with earlier studies, we report face memory deficits in IPD. Moreover, our findings indicate that sex and encoding conditions may be important moderator variables. In contrast to healthy individuals, IPD patients cannot gain from increasing duration of presentation. Furthermore, our results suggest that IPD leads to face memory deficits in women only.
The aim of our study was to examine the extent to which linguistic approaches to sentence comprehension deficits in aphasia can account for differential impairment patterns in the comprehension of wh-questions in bilingual persons with aphasia (PWA). We investigated the comprehension of subject and object wh-questions in both Turkish, a wh-in-situ language, and German, a wh-fronting language, in two bilingual PWA using a sentence-to-picture matching task. Both PWA showed differential impairment patterns in their two languages. SK, an early bilingual PWA, had particular difficulty comprehending subject which-questions in Turkish but performed normally across all conditions in German. CT, a late bilingual PWA, performed more poorly for object which-questions in German than in all other conditions, whilst in Turkish his accuracy was at chance level across all conditions. We conclude that the observed patterns of selective cross-linguistic impairments cannot solely be attributed either to difficulty with wh-movement or to problems with the integration of discourse-level information. Instead, our results suggest that differences between our PWA’s individual bilingualism profiles (e.g. onset of bilingualism, premorbid language dominance) considerably affected the nature and extent of their impairments.
Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology towards concrete recommendations for improvement. We focus on research practices but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations. The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward.
Objective
Leaders differ in their personalities from non-leaders. However, when do these differences emerge? Are leaders "born to be leaders" or does their personality change in preparation for a leadership role and due to increasing leadership experience?
Method
Using data from the German Socio-Economic Panel Study, we examined personality differences between leaders (N = 2,683 leaders, women: n = 967; 36.04%) and non-leaders (N = 33,663) as well as personality changes before and after becoming a leader.
Results
Already in the years before starting a leadership position, leaders-to-be were more extraverted, open, emotionally stable, conscientious, and willing to take risks, felt that they had greater control, and trusted others more than non-leaders did. Moreover, personality changed in emergent leaders: While approaching a leadership position, leaders-to-be (especially men) became gradually more extraverted, open, and willing to take risks, and felt that they had more control over their lives. After becoming a leader, they became less extraverted, less willing to take risks, and less conscientious but gained self-esteem.
Conclusions
Our findings suggest that people are not simply "born to be leaders" but that their personalities change considerably in preparation for a leadership role and due to leadership experience. Some changes are transient, but others last for a long time.
Studies show relations between executive function (EF), Theory of Mind (ToM), and conduct-problem (CP) symptoms. However, many studies have involved cross-sectional data, small clinical samples, pre-school children, and/or did not consider potential mediation effects. The present study examined the longitudinal relations between EF, ToM abilities, and CP symptoms in a population-based sample of 1,657 children between 6 and 11 years (T1: M = 8.3 years, T2: M = 9.1 years; 51.9% girls). We assessed EF skills and ToM abilities via computerized tasks at first measurement (T1), CP symptoms were rated via parent questionnaires at T1 and approximately 1 year later (T2). Structural-equation models showed a negative relation between T1 EF and T2 CP symptoms even when controlling for attention-deficit hyperactivity disorder (ADHD) symptoms and other variables. This relation was fully mediated by T1 ToM abilities. The study shows how children's abilities to control their thoughts and behaviors and to understand others' mental states interact in the development of CP symptoms.
There is robust evidence showing a link between executive function (EF) and theory of mind (ToM) in 3- to 5-year-olds. However, it is unclear whether this relationship extends to middle childhood. In addition, there has been much discussion about the nature of this relationship. Whereas some authors claim that ToM is needed for EF, others argue that ToM requires EF. To date, however, studies examining the longitudinal relationship between distinct subcomponents of EF [i.e., attention shifting, working memory (WM) updating, inhibition] and ToM in middle childhood are rare. The present study examined (1) the relationship between three EF subcomponents (attention shifting, WM updating, inhibition) and ToM in middle childhood, and (2) the longitudinal reciprocal relationships between the EF subcomponents and ToM across a 1-year period. EF and ToM measures were assessed experimentally in a sample of 1,657 children (aged 6-11 years) at time point one (t1) and 1 year later at time point two (t2). Results showed that the concurrent relationships between all three EF subcomponents and ToM pertained in middle childhood at t1 and t2, respectively, even when age, gender, and fluid intelligence were partialled out. Moreover, cross-lagged structural equation modeling (again, controlling for age, gender, and fluid intelligence, as well as for the earlier levels of the target variables) revealed partial support for the view that early ToM predicts later EF, but stronger evidence for the assumption that early EF predicts later ToM. The latter was found for attention shifting and WM updating, but not for inhibition. This reveals the importance of studying the exact interplay of ToM and EF across childhood development, especially with regard to different EF subcomponents. Most likely, understanding others' mental states at different levels of perspective-taking requires specific EF subcomponents, suggesting developmental change in the relations between EF and ToM across childhood.
Background:
Under the new psychotherapy law in Germany, standardized patients (SPs) are to become a standard component in psychotherapy training, even though little is known about their authenticity.
Objective:
The present pilot study explored whether, following an exhaustive two-day SP training, psychotherapy trainees can distinguish SPs from real patients.
Methods:
Twenty-eight psychotherapy trainees (M = 28.54 years of age, SD = 3.19) participated as blind raters. They evaluated six video-recorded therapy segments of trained SPs and real patients using the Authenticity of Patient Demonstrations Scale.
Results:
The authenticity scores of real patients and SPs did not differ (p = .43). The descriptive results indicated that the highest score of authenticity was given to an SP. Further, the real patients did not differ significantly from the SPs concerning perceived impairment (p = .33) and the likelihood of being a real patient (p = .52).
Conclusions:
The current results suggest that psychotherapy trainees were unable to distinguish the SPs from real patients. We therefore strongly recommend incorporating SP training before application. Limitations and future research directions are discussed.
Public Significance Statement: This study demonstrates that simulated patients (SPs) can authentically portray a depressive case. The results provide preliminary evidence of psychometrically sound properties of the rating scale, which contributes to distinguishing between authentic and unauthentic SPs and may thus foster SPs' dissemination into evidence-based training.

For training purposes, simulated patients (SPs), that is, healthy people portraying a disorder, are increasingly disseminating into clinical psychology and psychotherapy. In the current study, we developed an observer-based rating instrument for the evaluation of SP authenticity (namely, that they cannot be distinguished from real patients) so as to foster their use in evidence-based training. We applied a multistep inductive approach to develop the Authenticity of Patient Demonstrations (APD) scale. Ninety-seven independent psychotherapy trainees, 77.32% female, mean age of 31.49 (SD = 5.17) years, evaluated the authenticity of 2 independent SPs, each of whom portrayed a depressive patient. The APD demonstrated good internal consistency (Cronbach's alpha = .83) and a strong correlation (r = .82) with an established tool for assessing SP performance in medical contexts. The APD scale distinguished significantly between an authentic and an unauthentic SP (d = 2.35). Preliminary evidence for the psychometric properties of the APD indicates that it could be a viable tool for recruiting, training, and evaluating the authenticity of SPs. Strengths, limitations, and future directions are also discussed in detail.
Double Jeopardy
(2019)
The present study investigates whether secondary traumatization (i.e., family history of Holocaust survival and secondary exposure to captivity) is implicated in subjective age. Women exposed to different levels of secondary traumatization (N = 177) were assessed. Analyses of variance (ANOVAs) revealed that a Holocaust background and husband's captivity had a marginally significant positive effect on age appearance. Women with a Holocaust background whose husbands were held captive reported older interest age, indicating double jeopardy for older subjective age when two sources of secondary traumatization are present. A similar trend existed for behavior age. Possible explanations for these complex findings of risk and resilience are discussed.
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-word scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
Neurofeedback treatment has been demonstrated to reduce inattention, impulsivity and hyperactivity in children with attention deficit/hyperactivity disorder (ADHD). However, previous studies did not adequately control confounding variables or did not employ a randomized reinforcer-controlled design. This study addresses those methodological shortcomings by comparing the effects of the following two matched biofeedback training variants on the primary symptoms of ADHD: EEG neurofeedback (NF) aiming at theta/beta ratio reduction and EMG biofeedback (BF) aiming at forehead muscle relaxation. Thirty-five children with ADHD (26 boys, 9 girls; 6-14 years old) were randomly assigned to either the therapy group (NF; n = 18) or the control group (BF; n = 17). Treatment for both groups consisted of 30 sessions. Pre- and post-treatment assessment consisted of psychophysiological measures, behavioural rating scales completed by parents and teachers, as well as psychometric measures. Training effectively reduced theta/beta ratios and EMG levels in the NF and BF groups, respectively. Parents reported significant reductions in primary ADHD symptoms, and inattention improvements in the NF group were higher compared to the control intervention (BF, dcorr = -.94). NF training also improved attention and reaction times on the psychometric measures. The results indicate that NF effectively reduced inattention symptoms on parent rating scales and reaction time in neuropsychological tests. However, regarding hyperactivity and impulsivity symptoms, the results imply that non-specific factors, such as behavioural contingencies, self-efficacy, structured learning environment and feed-forward processes, may also contribute to the positive behavioural effects induced by neurofeedback training.
An area of increasing interest amongst teachers and researchers is the availability of tools for the design and implementation of literacy interventions with Spanish-speaking children. The present systematic literature review contributes to this need by summarizing available findings on evidence-based literacy interventions (EBI) for children from the first to third year of primary school. Our results are based on 20 EBI that aimed at improving at least one of the critical components mentioned by the NRP (2000): phonological awareness, phonics, fluency, vocabulary and comprehension. As 90% of the studies were conducted with English-speaking children, we critically discuss the applicability of this evidence to the specific context of Spanish-speaking countries. Although many of the general characteristics of the EBI conducted with English-speaking children could also guide interventions in Spanish, it remains crucial to take into account structural differences between the orthographies of both languages. Moreover, we identified transversal strategies and implementation techniques that, due to their universal character, could also be useful for early literacy interventions in Spanish. (c) 2018 Fundacion Universitaria Konrad Lorenz. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
In the number-to-position methodology, a number is presented on each trial and the observer places it on a straight line in a position that corresponds to its felt subjective magnitude. In the novel modification introduced in this study, the two-numbers-to-two-positions method, a pair of numbers rather than a single number is presented on each trial and the observer places them in appropriate positions on the same line. Responses in this method indicate not only the subjective magnitude of each single number but, simultaneously, provide a direct estimation of their subjective numerical distance. The results of four experiments provide strong evidence for a linear representation of numbers and, commensurately, for the linear representation of numerical distances. We attribute earlier results that indicate a logarithmic representation to the ordered nature of numbers and to the task used and not to a truly non-linear underlying representation.
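The linear-versus-logarithmic contrast at issue can be made concrete with a quick model comparison. The data below are invented for illustration, and `np.polyfit` merely stands in for whatever estimation procedure the authors actually used.

```python
import numpy as np

# Invented example data: numbers presented on a 0-100 number line and the
# positions at which a (hypothetical) adult observer placed them.
numbers = np.array([2, 5, 10, 20, 40, 60, 80, 95], dtype=float)
placements = np.array([2.2, 4.8, 10.5, 19.0, 41.0, 58.5, 81.0, 94.0])

def r_squared(x, y):
    """R^2 of the least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1 - resid.var() / y.var()

# A linear representation predicts placements proportional to the number itself;
# a logarithmic representation predicts placements proportional to log(number).
r2_linear = r_squared(numbers, placements)
r2_log = r_squared(np.log(numbers), placements)
print(round(r2_linear, 3), round(r2_log, 3))
```

For adult data like the pattern sketched here, the linear predictor fits markedly better, which is the kind of evidence the abstract summarizes.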
Values are assumed to be relatively stable during adulthood. Yet, little research has examined value stability and change, and there are no studies on the structure of value change. On the basis of S. H. Schwartz's (1992) value theory, the authors propose that the structure of intraindividual value change mirrors the circumplexlike structure of values so that conflicting values change in opposite directions and compatible values change in the same direction. Four longitudinal studies, varying in life contexts, time gaps, populations, countries, languages, and value measures, supported the proposed structure of intraindividual value change. An increase in the importance of any one value is accompanied by slight increases in the importance of compatible values and by decreases in the importance of conflicting values. Thus, intraindividual changes in values are not chaotic, but occur in a way that maintains Schwartz's value structure. Furthermore, the greater the extent of life-changing events, the greater the value change found, whereas age was only a marginal negative predictor of value change when life events were taken into account. Implications for the structure of personality change are discussed.
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated there. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
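A homogeneous spatial Poisson process, the simplest model in this framework, can be simulated and its intensity recovered in a few lines. This sketch is ours, not the tutorial's code; the window size and intensity value are arbitrary choices.

```python
import numpy as np

# Minimal sketch: simulate a homogeneous spatial Poisson process on an
# "image" window and recover its intensity (expected points per unit area).
rng = np.random.default_rng(0)

def simulate_poisson(lam, width, height, rng):
    """Homogeneous Poisson process: Poisson-distributed count,
    then independent uniform locations within the window."""
    n = rng.poisson(lam * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

width, height, lam = 1.0, 1.0, 500.0
points = simulate_poisson(lam, width, height, rng)
lam_hat = len(points) / (width * height)  # maximum-likelihood intensity estimate
print(len(points), lam_hat)
```

Relating fixations to image properties then amounts to letting the intensity vary over space as a function of local image features, rather than staying constant as in this homogeneous case.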
Correlation between burnout syndrome and psychological and psychosomatic symptoms among teachers
(2006)
Objectives: Psychosomatic disorders and symptoms that correlate with the so-called burnout syndrome turned out to be the main cause of increasing rates of premature retirement of school teachers. The aim of this study was to evaluate the relation between occupational burden and psychological strain of teachers who are still in work. Methods: A sample of 408 teachers at ten grammar schools (Am.: high school; German: Gymnasium) in south-western Germany was evaluated. To determine the styles of coping with occupational burden we used the measure of coping capacity questionnaire (MECCA). To analyse the psychopathological and psychosomatic symptom load we applied the SCL-90-R questionnaire. Results: According to the MECCA questionnaire, 32.5% of the sample suffered from burnout (type B), 17.7% suffered severe strain (type A), 35.9% showed an unambitious (type S) and 13.8% showed a healthy-ambitious coping style (type G). Burnout was significantly higher among women, divorced teachers and teachers working part-time. As part of the MECCA, teachers were asked to rate what they regarded as the strongest factor resulting in occupational burden. Teachers indicated that, besides high numbers of pupils in one class, they regarded destructive and aggressive behaviour of pupils as the primary stress factor. According to the SCL-90-R, 20% of the sample showed a severe degree (defined as > 70 points in the SCL-90-R GSI) of psychological and psychosomatic symptoms. MECCA type B (burnout) correlated significantly with high psychological and psychosomatic symptom load according to the SCL-90-R. Conclusions: In school teachers, burnout syndrome, a construct derived from occupational psychology and occupational medicine, is significantly correlated with psychological and psychosomatic symptoms. Teachers rate destructive and aggressive behaviour of pupils as the primary stress factor.
To examine whether the dopamine receptor D4 gene (DRD4) exon III VNTR moderates the risk of infants with regulatory disorders for developing attention-deficit/hyperactivity disorder (ADHD) later in childhood. In a prospective longitudinal study of children at risk for later psychopathology, 300 participants were assessed for regulatory problems in infancy, DRD4 genotype, and ADHD symptoms and diagnoses from childhood to adolescence. To examine a potential moderating effect on ADHD measures, linear and logistic regressions were computed. Models were fit for the main effects of the DRD4 genotype (presence or absence of the 7r allele) and regulatory problems (presence or absence), with the addition of the interaction term. All models were controlled for sex, family adversity, and obstetric risk status. In children without the DRD4-7r allele, a history of regulatory problems in infancy was unrelated to later ADHD. But in children with regulatory problems in infancy, the additional presence of the DRD4-7r allele increased the risk for ADHD in childhood. The DRD4 genotype seems to moderate the association between regulatory problems in infancy and later ADHD. A replication study is needed before further conclusions can be drawn, however.
Objective: To demonstrate that children homozygous for the 10-repeat allele of the common dopamine transporter (DAT1) polymorphism who were exposed to maternal prenatal smoke exhibited significantly higher hyperactivity-impulsivity than children without these environmental or genetic risks. Study design: We performed a prospective longitudinal study from birth into early adulthood monitoring the long-term outcome of early risk factors. Maternal prenatal smoking was determined during a standardized interview with the mother when the child was 3 months old. At age 15 years, 305 adolescents participated in genotyping for the DAT1 40 base pair variable number of tandem repeats polymorphism and assessment of inattention, hyperactivity-impulsivity, and oppositional defiant/conduct disorder symptoms with the Kiddie-Sads-Present and Lifetime Version. Results: There was no bivariate association between DAT1 genotype, prenatal smoke exposure and symptoms of attention deficit hyperactivity disorder. However, a significant interaction between DAT1 genotype and prenatal smoke exposure emerged (P = .012), indicating that males with prenatal smoke exposure who were homozygous for the DAT1 10r allele had higher hyperactivity-impulsivity than males from all other groups. In females, no significant main effects of DAT1 genotype or prenatal smoke exposure or interaction effects on any symptoms were evident (all P > .25). Conclusions: This study provides further evidence for the multifactorial nature of attention deficit hyperactivity disorder and the importance of studying both genetic and environmental factors and their interaction.
The Body Appreciation Scale-2 (BAS-2) is the most current measure of body appreciation, a central facet of positive body image. This work aimed to examine the factor structure and psychometric properties of a German version. In Study 1 (N = 659; mean age = 27.19 years, SD = 8.57), exploratory factor analyses (EFA) revealed that the German BAS-2 has a one-dimensional factor structure in women and men, showing cross-gender factor similarity. In Study 2 (N = 472; mean age = 30.08 years, SD = 12.35), confirmatory factor analysis (CFA) further supported the original scale's one-dimensional factor structure after freeing correlated errors. The German BAS-2 also showed partial scalar invariance across gender, with women and men not differing significantly in latent mean scores. As predicted, we found convergent relationships with measures of self-esteem, intuitive eating, and variables associated with negative body image (i.e., weight and shape concerns, drive for thinness). Correlations with BMI were small and in an inverse direction. Incremental validity was demonstrated by predicting self-esteem and intuitive eating over and above measures of negative body image. Additionally, the German BAS-2 showed internal consistency and 2-week test-retest reliability. Overall, our results suggest that the German BAS-2 is a psychometrically sound instrument.
"BreaThink"
(2021)
Cognition is shaped by signals from outside and within the body. Following recent evidence of interoceptive signals modulating higher-level cognition, we examined whether breathing changes the production and perception of quantities. In Experiment 1, 22 adults verbally produced on average larger random numbers after inhaling than after exhaling. In Experiment 2, 24 further adults estimated the numerosity of dot patterns that were briefly shown after either inhaling or exhaling. Again, we obtained on average larger responses following inhalation than exhalation. These converging results extend models of situated cognition according to which higher-level cognition is sensitive to transient interoceptive states.
Cross-education has been extensively investigated with adults. Adult studies report asymmetrical cross-education adaptations predominately after dominant limb training. The objective of the study was to examine unilateral leg press (LP) training of the dominant or nondominant leg on contralateral and ipsilateral strength and balance measures. Forty-two youth (10-13 years) were placed (random allocation) into a dominant (n = 15) or nondominant (n = 14) leg press training group or nontraining control (n = 13). Experimental groups trained 3 times per week for 8 weeks and were tested pre-/post-training for ipsilateral and contralateral 1-repetition maximum (RM) horizontal LP, maximum voluntary isometric contraction (MVIC) of knee extensors (KE) and flexors (KF), countermovement jump (CMJ), triple hop test (THT), MVIC strength of elbow flexors (EF) and handgrip, as well as the stork and Y balance tests. Both dominant and nondominant LP training significantly (p < 0.05) increased both ipsilateral and contralateral lower body strength (LP 1RM (dominant: 59.6%-81.8%; nondominant: 59.5%-96.3%), KE MVIC (dominant: 12.4%-18.3%; nondominant: 8.6%-18.6%), KF MVIC (dominant: 7.9%-22.3%; nondominant: nonsignificant-3.8%), and power (CMJ: dominant: 11.1%-18.1%; nondominant: 7.7%-16.6%)). The exception was that nondominant LP training demonstrated a nonsignificant change with the contralateral KF MVIC. Other significant improvements were with nondominant LP training on ipsilateral EF 1RM (6.2%) and THT (9.6%). There were no significant changes with EF and handgrip MVIC. The contralateral leg stork balance test was impaired following dominant LP training. KF MVIC exhibited the only significant relative post-training to pretraining (post-test/pre-test) ratio differences between dominant versus nondominant LP cross-education training effects. 
In conclusion, children exhibit symmetrical cross-education or global training adaptations with unilateral training of dominant or nondominant upper leg.
Brain activation stability is crucial to understanding attention lapses. EEG methods could provide excellent markers to assess neuronal response variability with respect to temporal (intertrial coherence) and spatial variability (topographic consistency) as well as variations in activation intensity (low frequency variability of single trial global field power).
We calculated intertrial coherence, topographic consistency and low frequency amplitude variability during target P300 in a continuous performance test in 263 15-year-olds from a cohort with psychosocial and biological risk factors.
Topographic consistency and low frequency amplitude variability predicted reaction time fluctuations (RTSD) in a linear model. Higher RTSD was only associated with higher psychosocial adversity in the presence of the homozygous 6R-10R dopamine transporter haplotype.
We propose that topographic variability of single trial P300 reflects noise as well as variability in evoked cortical activation patterns. Dopaminergic neuromodulation interacted with environmental and biological risk factors to predict behavioural reaction time variability.
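One of the variability measures used above, global field power, has a compact definition: the spatial standard deviation of the average-referenced topography at each time point. The sketch below uses random numbers in place of real EEG; the array shapes and the 64-channel montage are assumptions, not the study's data.

```python
import numpy as np

# Stand-in for single-trial EEG voltages: channels x time points (microvolts).
rng = np.random.default_rng(1)
n_channels, n_times = 64, 200
eeg = rng.normal(size=(n_channels, n_times))

def global_field_power(data):
    """GFP(t): standard deviation across channels of the
    average-referenced topography at each time point."""
    avg_ref = data - data.mean(axis=0, keepdims=True)  # re-reference to average
    return avg_ref.std(axis=0)

gfp = global_field_power(eeg)
print(gfp.shape)
```

Tracking how GFP fluctuates across single trials is one way to quantify the low-frequency variability in activation intensity that the abstract refers to.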
Background: Dopamine plays an important role in orienting, response anticipation and movement evaluation. Thus, we examined the influence of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of motor processing in a contingent negative variation (CNV) task.
Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as motor postimperative negative variation were assessed. Adolescents were genotyped for the COMT Val(158) Met and two DAT1 polymorphisms (variable number tandem repeats in the 3'-untranslated region and in intron 8).
Results: The results revealed a significant interaction between COMT and DAT1, indicating that COMT exerted stronger effects on lateralized motor post-processing (centro-parietal motor postimperative negative variation) in homozygous carriers of a DAT1 haplotype increasing DAT1 expression. Source analysis showed that the time interval 500-1000 ms after the motor response was specifically affected in contrast to preceding movement anticipation and programming stages, which were not altered.
Conclusions: Motor slow negative waves allow the genomic imaging of dopamine inactivation effects on cortical motor post-processing during response evaluation. This is the first report to point towards epistatic effects in the motor system during response evaluation, i.e. during the post-processing of an already executed movement rather than during movement programming.
Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task.
Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed.
Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced.
Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.
Measures of gender identity have almost exclusively relied on positive aspects of masculinity and femininity, although conceptually the self-concept is not limited to positive attributes. A theoretical argument is made for considering negative attributes of gender identity, followed by five studies developing the Positive-Negative Sex-Role Inventory (PN-SRI) as a new measure of gender identity. Study 1 demonstrated that many of the attributes of a German version of the Bem Sex-Role Inventory are no longer considered to differ in desirability for men and women. For the PN-SRI, Study 2 elicited attributes characterizing men and women in today's society, for which ratings of typicality and desirability as well as self-ratings by men and women were obtained in Study 3. Study 4 examined the reliability and factorial structure of the four subscales of positive and negative masculinity and femininity and demonstrated the construct and discriminant validity of the PN-SRI by showing that the negative masculinity and femininity scales were unique predictors of select validation constructs. Study 5 showed that the new instrument explained variance in the validation constructs beyond earlier measures of gender identity. Key message: Even in the construction of negative aspects of gender identity, individuals prefer gender-congruent attributes. Negative masculinity and femininity make a unique contribution to understanding gender-related differences in psychological outcome variables.
Although teen dating violence (TDV) is internationally recognized as a serious threat to adolescents' health and well-being, almost no data is available for Slovenian youth. Hence, the purpose of this study was to examine the prevalence and predictors of TDV among Slovenian adolescents for the first time. Using data from the SPMAD study (Study of Parental Monitoring and Adolescent Delinquency), 330 high school students were asked about physical TDV victimization and perpetration as well as about their dating history, relationship conflicts, peers' antisocial behavior, and informal social control by family and school. A substantial number of female and male adolescents reported victimization (16.7% of female and 12.7% of male respondents) and perpetration (21.1% of female and 6.0% of male respondents). Furthermore, the results revealed that lower age at the first relationship, relationship conflicts, and school informal social control were associated with victimization, whereas being female, relationship conflicts, having antisocial peers, and family informal social control were linked to perpetration. Implications of the study findings are discussed.
Research in legal decision making has demonstrated the tendency to blame the victim and exonerate the perpetrator of sexual assault. This study examined the hypothesis of a special leniency bias in rape cases by comparing them to cases of robbery. N = 288 participants received descriptions of rape and robbery of a female victim by a male perpetrator and made ratings of victim and perpetrator blame. Case scenarios varied with respect to the prior relationship (strangers, acquaintances, ex-partners) and coercive strategy (force vs. exploiting victim intoxication). More blame was attributed to the victim and less blame was attributed to the perpetrator for rape than for robbery. Information about a prior relationship between victim and perpetrator increased ratings of victim blame and decreased perceptions of perpetrator blame in the rape cases, but not in the robbery cases. The findings support the notion of a special leniency bias in sexual assault cases.
From fantasy to reality
(2022)
Aggression-related sexual fantasies (ASF) have been related to various forms of harmful sexual behavior in both sex offender and community samples. However, more research is needed to fully understand this relation, particularly whether ASF is associated with harmful sexual behavior beyond hostile sexism against women and a sexual preference for violence and sexual violence. In the present study, N = 428 participants (61.9% women) between 18 and 83 years of age (M = 28.17, SD = 9.7) reported their ASF and hostile sexism. They rated their sexual arousal by erotic, violent, and sexually violent pictures as a direct measure of sexual preference. Response latencies between stimulus presentation and arousal ratings were used as an indirect measure of sexual preference. ASF and the directly and indirectly assessed sexual preference for violent and sexually violent stimuli were positively correlated. They were unrelated to hostile sexism against women. ASF showed the strongest associations with self-reported sexually sadistic behavior and presumably non-consensual sexual sadism beyond these preferences and hostile sexism in the total group and separately among men and women. The findings indicate that ASF and sexual preference are not equivalent constructs and further underscore the potential relevance of ASF for harmful sexual behavior.
Background: Clock genes govern circadian rhythms and shape the effect of alcohol use on the physiological system. Exposure to severe negative life events is related to both heavy drinking and disturbed circadian rhythmicity. The aim of this study was 1) to extend previous findings suggesting an association of a haplotype tagging single nucleotide polymorphism of PER2 gene with drinking patterns, and 2) to examine a possible role for an interaction of this gene with life stress in hazardous drinking.
Methods: Data were collected as part of an epidemiological cohort study on the outcome of early risk factors followed since birth. At age 19 years, 268 young adults (126 males, 142 females) were genotyped for PER2 rs56013859 and were administered a 45-day alcohol timeline follow-back interview and the Alcohol Use Disorders Identification Test (AUDIT). Life stress was assessed as the number of severe negative life events during the past four years reported in a questionnaire and validated by interview.
Results: Individuals with the minor G allele of rs56013859 were found to be less engaged in alcohol use, drinking at only 72% of the days compared to homozygotes for the major A allele. Moreover, among regular drinkers, a gene x environment interaction emerged (p = .020). While no effects of genotype appeared under conditions of low stress, carriers of the G allele exhibited less hazardous drinking than those homozygous for the A allele when exposed to high stress.
Conclusions: These findings may suggest a role of the circadian rhythm gene PER2 in both the drinking patterns of young adults and in moderating the impact of severe life stress on hazardous drinking in experienced alcohol users. However, in light of the likely burden of multiple tests, the nature of the measures used and the nominal evidence of interaction, replication is needed before drawing firm conclusions.
Background:
Recent evidence from animal experiments and studies in humans suggests that early age at first drink (AFD) may lead to higher stress-induced drinking. The present study aimed to extend these findings by examining whether AFD interacted with stressful life events (SLE) and/or with daily hassles regarding the impact on drinking patterns among young adults.
Method:
In 306 participants of an epidemiological cohort study, AFD was assessed together with SLE during the past 3 years, daily hassles in the last month, and drinking behavior at age 22. As outcome variables, 2 variables were derived, reflecting different aspects of alcohol use: the amount of alcohol consumed in the last month and the drinking frequency, indicated by the number of drinking days in the last month.
Results:
Linear regression models revealed an interaction effect between the continuous measures of AFD and SLE on the amount of alcohol consumed. The earlier young adults had their first alcoholic drink and the higher the levels of SLE they were exposed to, the disproportionately more alcohol they consumed. Drinking frequency was not affected by an interaction of these variables, while daily hassles and their interaction with AFD were unrelated to drinking behavior.
Conclusions:
These findings highlight the importance of early age at drinking onset as a risk factor for later heavy drinking under high load of SLE. Prevention programs should aim to raise age at first contact with alcohol. Additionally, support in stressful life situations and the acquisition of effective coping strategies might prevent heavy drinking in those with earlier drinking onset.
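The AFD x SLE interaction reported in the Results can be illustrated with an ordinary-least-squares regression on simulated data. The coefficients and distributions below are invented, and the design matrix is only a sketch of the kind of model the study describes.

```python
import numpy as np

# Illustration only: simulated data with a negative AFD x SLE interaction,
# i.e. stress raises consumption more the earlier the first drink occurred.
rng = np.random.default_rng(2)
n = 306
afd = rng.uniform(12, 20, n)            # age at first drink in years (invented)
sle = rng.poisson(2, n).astype(float)   # count of stressful life events (invented)
amount = 60 - 2.0 * afd + 10.0 * sle - 0.5 * afd * sle + rng.normal(0, 1, n)

# Design matrix: intercept, both main effects, and the interaction term.
X = np.column_stack([np.ones(n), afd, sle, afd * sle])
beta, *_ = np.linalg.lstsq(X, amount, rcond=None)
print(np.round(beta, 2))  # order: intercept, AFD, SLE, AFD*SLE
```

A negative interaction coefficient is what "disproportionately more alcohol with earlier AFD and higher SLE" translates to in such a model: the slope on SLE grows as AFD shrinks.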
Background: Early alcohol use is one of the strongest predictors of later alcohol use disorders, with early use usually taking place during puberty. Many researchers have suggested drinking during puberty as a potential biological basis of the age at first drink (AFD) effect. However, the influence of the pubertal phase at alcohol use initiation on subsequent drinking in later life has not been examined so far.
Methods: Pubertal stage at first drink (PSFD) was determined in N = 283 young adults (131 males, 152 females) from an epidemiological cohort study. At ages 19, 22, and 23 years, drinking behavior (number of drinking days, amount of alcohol consumed, hazardous drinking) was assessed using interview and questionnaire methods. Additionally, an animal study examined the effects of pubertal or adult ethanol (EtOH) exposure on voluntary EtOH consumption in later life in 20 male Wistar rats.
Results: PSFD predicted drinking behavior in humans in early adulthood, indicating that individuals who had their first drink during puberty displayed elevated drinking levels compared to those with postpubertal drinking onset. These findings were corroborated by the animal study, in which rats that received free access to alcohol during the pubertal period were found to consume more alcohol as adults, compared to the control animals that first came into contact with alcohol during adulthood.
Conclusions: The results point to a significant role of the stage of pubertal development at first contact with alcohol for the development of later drinking habits. Possible biological mechanisms and implications for prevention are discussed.
Interaction between CRHR1 gene and stressful life events predicts adolescent heavy alcohol use
(2007)
Background: Recent animal research suggests that alterations in the corticotropin releasing hormone receptor 1 (CRHR1) may lead to heavy alcohol use following repeated stress. The aim of this study was to examine interactions between two haplotype-tagging single nucleotide polymorphisms (SNPs) covering the CRHR1 gene and adverse life events on heavy drinking in adolescents. Methods: Data were available from the Mannheim Study of Children at Risk, an ongoing cohort study of the long-term outcome of early risk factors followed since birth. At age 15 years, 280 participants (135 males, 145 females) completed a self-report questionnaire measuring alcohol use and were genotyped for two SNPs (rs242938, rs1876831) of CRHR1. Assessment of negative life events over the past three years was obtained by a standardized interview with the parents. Results: Adolescents homozygous for the C allele of rs1876831 drank higher maximum amounts of alcohol per occasion and had greater lifetime rates of heavy drinking in relation to negative life events than individuals carrying the T allele. No gene x environment interactions were found for regular drinking or between rs242938 and stressful life events. Conclusions: These findings provide the first evidence in humans that the CRHR1 gene interacts with exposure to stressful life events to predict heavy alcohol use in adolescents.
Several lines of evidence have implicated the mesolimbic dopamine reward pathway in altered brain function resulting from exposure to early adversity. The present study examined the impact of early life adversity on different stages of neuronal reward processing later in life and their association with a related behavioral phenotype, i.e. attention deficit/hyperactivity disorder (ADHD). 162 healthy young adults (mean age = 24.4 years; 58% female) from an epidemiological cohort study followed since birth participated in a simultaneous EEG-fMRI study using a monetary incentive delay task. Early life adversity according to an early family adversity index (EFA) and lifetime ADHD symptoms were assessed using standardized parent interviews conducted at the offspring's age of 3 months and between 2 and 15 years, respectively. fMRI region-of-interest analysis revealed a significant effect of EFA during reward anticipation in reward-related areas (i.e. ventral striatum, putamen, thalamus), indicating decreased activation when EFA increased. EEG analysis demonstrated a similar effect for the contingent negative variation (CNV), with the CNV decreasing with the level of EFA. In contrast, during reward delivery, activation of the bilateral insula, right pallidum and bilateral putamen increased with EFA. There was a significant association of lifetime ADHD symptoms with lower activation in the left ventral striatum during reward anticipation and higher activation in the right insula during reward delivery. The present findings indicate a differential long-term impact of early life adversity on reward processing, implicating hyporesponsiveness during reward anticipation and hyperresponsiveness when receiving a reward. Moreover, a similar activation pattern related to lifetime ADHD suggests that the impact of early life stress on ADHD may possibly be mediated by a dysfunctional reward pathway.
Although there is ample evidence linking insecure attachment styles and intimate partner violence (IPV), little is known about the psychological processes underlying this association, especially from the victim’s perspective. The present study examined how attachment styles relate to the experience of sexual and psychological abuse, directly or indirectly through destructive conflict resolution strategies, both self-reported and attributed to their opposite-sex romantic partner. In an online survey, 216 Spanish undergraduates completed measures of adult attachment style, engagement and withdrawal conflict resolution styles shown by self and partner, and victimization by an intimate partner in the form of sexual coercion and psychological abuse. As predicted, anxious and avoidant attachment styles were directly related to both forms of victimization. Also, an indirect path from anxious attachment to IPV victimization was detected via destructive conflict resolution strategies. Specifically, anxiously attached participants reported a higher use of conflict engagement by themselves and by their partners. In addition, engagement reported by the self and perceived in the partner was linked to an increased probability of experiencing sexual coercion and psychological abuse. Avoidant attachment was linked to higher withdrawal in conflict situations, but the paths from withdrawal to perceived partner engagement, sexual coercion, and psychological abuse were non-significant. No gender differences in the associations were found. The discussion highlights the role of anxious attachment in understanding escalating patterns of destructive conflict resolution strategies, which may increase the vulnerability to IPV victimization.
Is bad intent negligible?
(2018)
The hostile attribution bias (HAB) is a well-established risk factor for aggression. It is considered part of the suspicious mindset that may cause highly victim-justice-sensitive individuals to behave uncooperatively. Thus, links of victim justice sensitivity (JS) with negative behavior, such as aggression, may be better explained by HAB. The present study tested this hypothesis in N = 279 German adolescents who rated their JS, HAB, and physical, relational, verbal, reactive, and proactive aggression. Victim JS predicted physical, relational, verbal, reactive, and proactive aggression when HAB was controlled. HAB only predicted physical and proactive aggression. There were no moderator effects. Injustice seems an important reason for aggression irrespective of whether or not it is intentionally caused, particularly among those high in victim JS. Thus, victim JS should be considered a potentially important risk factor for aggression and receive more attention in aggression research and prevention efforts.
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutes of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutes of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Objective:
Rejection sensitivity and justice sensitivity are personality traits that are characterized by frequent perceptions and intense adverse responses to negative social cues. Whereas there is good evidence for associations between rejection sensitivity, justice sensitivity, and internalizing problems, no longitudinal studies have investigated their association with eating disorder (ED) pathology so far. Thus, the present study examined longitudinal relations between rejection sensitivity, justice sensitivity, and ED pathology.
Method:
Participants (N = 769) reported on their rejection sensitivity, justice sensitivity, and ED pathology at 9-19 (T1), 11-21 (T2), and 14-22 years of age (T3).
Results:
Latent cross-lagged models showed longitudinal associations between ED pathology and anxious rejection sensitivity, observer and victim justice sensitivity. T1 and T2 ED pathology predicted higher T2 and T3 anxious rejection sensitivity, respectively. In turn, T2 anxious rejection sensitivity predicted more T3 ED pathology. T1 observer justice sensitivity predicted more T2 ED pathology, which predicted higher T3 observer justice sensitivity. Furthermore, T1 ED pathology predicted higher T2 victim justice sensitivity.
Discussion:
Rejection sensitivity (particularly anxious rejection sensitivity) and justice sensitivity may be involved in the maintenance or worsening of ED pathology and should be considered by future research and in prevention and treatment of ED pathology. Also, mental health problems may increase rejection sensitivity and justice sensitivity traits in the long term.
Recent research provides evidence that aggressive sexual fantasies predict aggressive sexual behavior in the general population. However, sexual fantasies, including fantasies about the infliction of pain and humiliation, should be frequent and often consensually acted upon among individuals with sadomasochistic likings. The question arises whether sexual fantasies with aggressive content still predict presumably non-consensual aggressive sexual behavior in individuals with sadomasochistic likings, given that BDSM encounters are generally considered consensual. To investigate this question, we conducted a questionnaire survey of sexual fantasies, assessing the frequency of seventy sexual fantasies involving non-aggressive, masochistic, and aggressive acts. Our sample (N = 182) contained 99 respondents who self-identified as sadist, masochist, or switcher; 44 reported no such identification. For respondents reporting BDSM identification, we replicated a factor structure for sexual fantasies similar to that previously found in the general population, including three factors reflecting fantasies about increasingly severe aggressive sexual acts. Fantasies about injuring a partner and/or using weapons and fantasies about sexual coercion predicted presumably non-consensual sexual behavior independently of other risk factors for aggressive sexual behavior and irrespective of BDSM identification. Hence, severely aggressive sexual fantasies may predispose to presumably non-consensual sexual behavior in individuals both with and without BDSM identification.
Background: Aggression-related sexual fantasies (ASF) are considered an important risk factor for sexual aggression, but empirical knowledge is limited, in part because previous research has been based on predominantly male, North American college samples and limited numbers of questions.
Aim: The present study aimed to extend knowledge about the frequency and correlates of ASF by including a large sample of women and a broad range of ASF.
Method: A convenience sample of N = 664 participants from Germany, including 508 (77%) women and 156 (23%) men with a median age of 25 (21-27) years, answered an online questionnaire. Participants were mainly students and were recruited chiefly via social networks (online and in person). We examined the frequencies of (aggression-related) sexual fantasies and their expected factor structure (factors reflecting affective, experimental, masochistic, and aggression-related contents) via exploratory factor analysis. We investigated potential correlates (e.g., psychopathic traits, attitudes towards sexual fantasies) as predictors of ASF using multiple regression analyses. Finally, we examined whether ASF would positively predict sexual aggression beyond other pertinent risk factors using multiple regression analysis.
Outcomes: The participants rated the frequency of a broad set of 56 aggression-related and other sexual fantasies, attitudes towards sexual fantasies, the Big Five (i.e., broad personality dimensions including neuroticism and extraversion), sexual aggression, and other risk factors for sexual aggression.
Results: All participants reported non-aggression-related sexual fantasies, and 77% reported at least one ASF in their lives. Being male, frequent sexual fantasies, psychopathic traits, and negative attitudes towards sexual fantasies predicted more frequent ASF. ASF were the strongest predictor of sexual aggression beyond other risk factors, including general aggression, psychopathic traits, rape myth acceptance, and violent pornography consumption.
Clinical Translation: ASF may be an important risk factor for sexual aggression and should be considered more strongly in prevention and intervention efforts.
Strengths and Limitations: The strengths of the present study include a large item pool and a large sample with a high proportion of women, allowing ASF to be examined as a predictor of sexual aggression beyond important control variables. Its weaknesses include the reliance on cross-sectional data, which precludes causal inferences, and not consistently distinguishing between consensual and non-consensual acts.
Conclusion: ASF are a frequent phenomenon even in the general population and among women and show strong associations with sexual aggression. Thus, they require more attention in research on sexual aggression and its prevention.
Individuals differ in their sensitivity toward injustice. Justice-sensitive persons perceive injustice more frequently and show stronger responses to it. Justice sensitivity has been studied predominantly in adults; little is known about its development in childhood and adolescence and its connection to prosocial behavior and emotional and behavioral problems. This study evaluates a version of the justice sensitivity inventory for children and adolescents (JSI-CA5) in 1,472 9- to 17-year-olds. Items and scales showed good psychometric properties and correlations with prosocial behavior and conduct problems similar to findings in adults, supporting the reliability and validity of the scale. We found individual differences in justice sensitivity as a function of age and gender. Furthermore, justice sensitivity predicted emotional and behavioral problems in children and adolescents over a 1- to 2-year period. Justice sensitivity perspectives can therefore be considered risk and/or protective factors for mental health in childhood and adolescence.
Justice sensitivity captures individual differences in the frequency with which injustice is perceived and the intensity of emotional, cognitive, and behavioral reactions to it. Persons with ADHD have been reported to show high justice sensitivity, and a recent study provided evidence for this notion in an adult sample. In 1,235 German 10- to 19-year-olds, we measured ADHD symptoms, justice sensitivity from the victim, observer, and perpetrator perspective, the frequency of perceptions of injustice, anxious and angry rejection sensitivity, depressive symptoms, conduct problems, and self-esteem. Participants with ADHD symptoms reported significantly higher victim justice sensitivity, more perceptions of injustice, and higher anxious and angry rejection sensitivity, but significantly lower perpetrator justice sensitivity than controls. In latent path analyses, justice sensitivity as well as rejection sensitivity partially mediated the link between ADHD symptoms and comorbid problems when considered simultaneously. Thus, both justice sensitivity and rejection sensitivity may contribute to explaining the emergence and maintenance of problems typically associated with ADHD symptoms, and should therefore be considered in ADHD therapy.
Anger, indignation, guilt, rumination, victim compensation, and perpetrator punishment are considered the primary responses associated with justice sensitivity (JS). However, injustice and high JS may predispose to further responses. We had N = 293 adults rate their JS, 17 potential responses toward 12 unjust scenarios from the victim's, observer's, beneficiary's, and perpetrator's perspectives, and several control variables. Unjust situations generally elicited many affective, cognitive, and behavioral responses. JS generally predisposed to strong affective responses toward injustice, including sadness, pity, disappointment, and helplessness. It impaired trivialization, victim blaming, and justification, which may otherwise help people cope with injustice, and it predisposed to conflict solutions and victim compensation. Victim and beneficiary JS in particular had stronger effects in unjust situations from the corresponding perspective. These findings add to a better understanding of the main and interaction effects of unjust situations from different perspectives and the JS facets, of differences between the JS facets, and of the links between JS and behavior and well-being.
Individual differences in justice sensitivity and rejection sensitivity have been linked to differences in aggressive behavior in adults. However, there is little research studying this association in children and adolescents and considering the two constructs in combination. We assessed justice sensitivity from the victim, observer, and perpetrator perspective as well as anxious and angry rejection sensitivity and linked both constructs to different forms (physical, relational) and functions (proactive, reactive) of self-reported aggression and to teacher- and parent-rated aggression in N = 1,489 9- to 19-year-olds in Germany. Victim sensitivity and both angry and anxious rejection sensitivity showed positive correlations with all forms and functions of aggression. Angry rejection sensitivity also correlated positively with teacher-rated aggression. Perpetrator sensitivity was negatively correlated with all aggression measures, and observer sensitivity also correlated negatively with all aggression measures except for a positive correlation with reactive aggression. Path models considering the sensitivity facets in combination and controlling for age and gender showed that higher victim justice sensitivity predicted higher aggression on all measures. Higher perpetrator sensitivity predicted lower physical, relational, proactive, and reactive aggression. Higher observer sensitivity predicted lower teacher-rated aggression. Angry rejection sensitivity predicted higher proactive and reactive aggression, whereas anxious rejection sensitivity did not make an additional contribution to the prediction of aggression. The findings are discussed in terms of social information processing models of aggression in childhood and adolescence. Aggr. Behav. 41:353-368, 2015. (c) 2014 Wiley Periodicals, Inc.
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated; particularly those with an egoistic focus, such as victim justice, rejection, and provocation sensitivity, hostile attributions and trait anger as well as those with an altruistic focus, such as observer justice, perpetrator justice, and moral disgust sensitivity. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
Depressive symptoms have been related to anxious rejection sensitivity, but little is known about relations with angry rejection sensitivity and justice sensitivity. We measured rejection sensitivity, justice sensitivity, and depressive symptoms in 1,665 9- to 21-year-olds at two points of measurement. Participants with high T1 levels of depressive symptoms reported higher anxious and angry rejection sensitivity and higher justice sensitivity than controls at T1 and T2. T1 rejection sensitivity, but not justice sensitivity, predicted T2 depressive symptoms; high victim justice sensitivity, however, added to the stabilization of depressive symptoms. T1 depressive symptoms positively predicted T2 anxious and angry rejection sensitivity and victim justice sensitivity. Hence, sensitivity toward negative social cues may be both cause and consequence of depressive symptoms and requires consideration in cognitive-behavioral treatment of depression.
Research indicates individual pathways towards school attacks and inconsistent offender profiles. Thus, several authors have classified offenders according to mental disorders, motives, or number/kinds of victims. We assumed differences between single and multiple victim offenders (intending to kill one or more than one victim). In qualitative and quantitative analyses of data from qualitative content analyses of case files on seven school attacks in Germany, we found differences between the offender groups in seriousness, patterns, characteristics, and classes of leaking (announcements of offences), offence-related behaviour, and offence characteristics. There were only minor differences in risk factors. Our research thus adds to the understanding of school attacks and leaking. Differences between offender groups require consideration in the planning of effective preventive approaches.
School shooters are often described as narcissistic, but empirical evidence is scant. To provide more reliable and detailed information, we conducted an exploratory study, analyzing police investigation files on seven school shootings in Germany, looking for symptoms of narcissistic personality disorder as defined by the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) in witnesses' and offenders' reports and expert psychological evaluations. Three out of four offenders who had been treated for mental disorders prior to the offenses displayed detached symptoms of narcissism, but none was diagnosed with narcissistic personality disorder. Of the other three, two displayed narcissistic traits. In one case, the number of symptoms would have justified a diagnosis of narcissistic personality disorder. Offenders showed low and high self-esteem and a range of other mental disorders. Thus, narcissism is not a common characteristic of school shooters, but possibly more frequent than in the general population. This should be considered in developing adequate preventive and intervention measures.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately, a difficult task because knowledge about leaking is sparse. The present study, therefore, examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment after the information session and six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several aspects (e.g. reported more perceived courses of action or improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
There is a longstanding and widely held misconception about the relative remoteness of abstract concepts from concrete experiences. This review examines the current evidence for external influences and internal constraints on the processing, representation, and use of abstract concepts, like truth, friendship, and number. We highlight the theoretical benefit of distinguishing between grounded and embodied cognition and then ask which roles perception, action, language, and social interaction play in acquiring, representing, and using abstract concepts. By reviewing several studies, we show that, contrary to the accepted definition, abstract concepts are not detached from perception and action. Focussing on magnitude-related concepts, we also discuss evidence for cultural influences on abstract knowledge and explore how internal processes such as inner speech, metacognition, and inner bodily signals (interoception) influence the acquisition and retrieval of abstract knowledge. Finally, we discuss some methodological developments. Specifically, we focus on the importance of studies that investigate the time course of conceptual processing, and we argue that, because of the paramount role of sociality for abstract concepts, new methods are necessary to study concepts in interactive situations. We conclude that bodily, linguistic, and social constraints provide important theoretical limitations for our theories of conceptual knowledge.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that the problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences containing verbs with past time reference, which question words, and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated; and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
Parsing costs as predictors of reading difficulty : an evaluation using the Potsdam Sentence Corpus
(2008)
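The surprisal metric evaluated in the Potsdam Sentence Corpus study is standardly defined as the negative log-probability of a word given its preceding context (the change in probability mass across syntactic analyses). A minimal sketch of that definition, using hypothetical conditional probabilities rather than the paper's actual dependency parsers:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal of a word in bits: -log2 P(word | context)."""
    return -math.log2(prob)

# Illustrative values only; a real model would supply P(word | context)
# from an incremental parser or language model.
sentence = [("Der", 0.20), ("Hund", 0.05), ("bellt", 0.50)]

for word, p in sentence:
    print(f"{word}: {surprisal(p):.2f} bits")
```

Higher surprisal (less expected words) is the quantity hypothesized to correlate with longer fixation durations.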
Suboptimal post-operative improvements in functional capacity are often observed after minimally invasive aortic valve replacement (mini-AVR). It remains to be studied how AVR affects cardiopulmonary and skeletal muscle function during exercise, to explain these clinical observations and to provide a basis for improved/tailored post-operative rehabilitation. Twenty-two patients with severe aortic stenosis (AS) (aortic valve area (AVA) < 1.0 cm²) were preoperatively compared to 22 healthy controls during submaximal constant-workload endurance-type exercise for oxygen uptake (VO2), carbon dioxide output (VCO2), respiratory gas exchange ratio, expiratory volume (VE), ventilatory equivalents for O2 (VE/VO2) and CO2 (VE/VCO2), respiratory rate (RR), tidal volume (Vt), heart rate (HR), oxygen pulse (VO2/HR), blood lactate, Borg ratings of perceived exertion (RPE), and exercise-onset VO2 kinetics. These exercise tests were repeated at 5 and 21 days after AVR surgery (n = 14), along with echocardiographic examinations. The respiratory exchange ratio and ventilatory equivalents (VE/VO2 and VE/VCO2) were significantly elevated, VO2 and VO2/HR were significantly lowered, and exercise-onset VO2 kinetics were significantly slower in AS patients vs. healthy controls (P < 0.05). Although the AVA was restored by mini-AVR in AS patients, VE/VO2 and VE/VCO2 worsened significantly further within 5 days after surgery, accompanied by elevations in Borg RPE, VE, and RR, and lowered Vt. At 21 days after mini-AVR, exercise-onset VO2 kinetics slowed significantly further (P < 0.05). A decline in pulmonary function was observed early after mini-AVR surgery, which was followed by a decline in skeletal muscle function in the subsequent weeks of recovery. Therefore, a tailored rehabilitation programme should include training modalities for the respiratory and peripheral muscular systems.
Keeping the breath in mind
(2021)
Scientific interest in the brain and body interactions has been surging in recent years. One fundamental yet underexplored aspect of brain and body interactions is the link between the respiratory and the nervous systems. In this article, we give an overview of the emerging literature on how respiration modulates neural, cognitive and emotional processes. Moreover, we present a perspective linking respiration to the free-energy principle. We frame volitional modulation of the breath as an active inference mechanism in which sensory evidence is recontextualized to alter interoceptive models. We further propose that respiration-entrained gamma oscillations may reflect the propagation of prediction errors from the sensory level up to cortical regions in order to alter higher level predictions. Accordingly, controlled breathing emerges as an easily accessible tool for emotional, cognitive, and physiological regulation.
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally reduces balance performance in healthy adults. However, interactions between cognitive resource reduction, aging, and balance task complexity remain elusive.
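The pooled effect reported above (SMDwm = −0.38) comes from a random effects meta-analysis. The abstract does not name its estimator, so as a hedged illustration, a common choice is the DerSimonian-Laird approach, sketched below with hypothetical study effects and variances:

```python
def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.

    effects: per-study standardized mean differences (SMDs)
    variances: per-study sampling variances
    """
    k = len(effects)
    w = [1.0 / v for v in variances]            # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Heterogeneity: Q statistic and between-study variance tau^2
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # truncated at zero
    # Re-weight with tau^2 added to each study's variance
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)

# Hypothetical inputs (illustrative, not the studies in this meta-analysis):
print(pool_random_effects([-0.4, -0.3, -0.5], [0.04, 0.05, 0.06]))
```

When between-study heterogeneity (tau²) is zero, the estimate reduces to the fixed-effect inverse-variance average.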
This article introduces a new theory, the Affective-Reflective Theory (ART) of physical inactivity and exercise. ART aims to explain and predict behavior in situations in which people either remain in a state of physical inactivity or initiate action (exercise). It is a dual-process model and assumes that exercise-related stimuli trigger automatic associations and a resulting automatic affective valuation of exercise (type-1 process). The automatic affective valuation forms the basis for the reflective evaluation (type-2 process), which can follow if self-control resources are available. The automatic affective valuation is connected with an action impulse, whereas the reflective evaluation can result in action plans. The two processes, in constant interaction, direct the individual towards or away from changing behavior. The ART of physical inactivity and exercise predicts that, when there is an affective-reflective discrepancy and self-control resources are low, behavior is more likely to be governed by the affective type-1 process. This introductory article explains the underlying concepts and main theoretical roots from which the ART of physical inactivity and exercise was developed (field theory, affective responses to exercise, automatic evaluation, evaluation-behavior link, dual-process theorizing). We also summarize the empirical tests that have been conducted to refine the theory in its present form.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. The picture-based doping-BIAT showed a low error rate (7%) and satisfactory internal consistency (r_tt = .66).
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet, and all test material is available open source. It might be implemented, for example, as a new effect measure in the evaluation of prevention programs.
Objectives: Today, the doping attitudes of athletes can be measured either by asking athletes directly or with indirect attitude measurement procedures such as the implicit association test (IAT). Indirect measures may be helpful, for example, when the psychological effects of doping prevention programs are to be evaluated. In the present study, we analyzed and compared the measurement properties of two recently published IATs.
Design: The IATs "doping substance vs. tea blend" and "doping substance vs. legal nutritional supplement" were presented to two randomly assigned independent samples of 102 athletes (44 male, 58 female; mean age 23.6 years) from different sports. Both IATs were complemented by a control IAT "word vs. non-word".
Methods: To test central measurement properties of both IATs, we examined the distributions of measured values, the correlations with the control IAT, reliability, and error rates.
Results: The results pointed to a rather negative doping attitude in most athletes. Because in the "doping vs. supplement" IAT both the error rates (12%) and the adaptational learning effects across test blocks were substantial (η² = .22), indicating that participants had difficulty assigning the word stimuli correctly to the respective categories, we see slight advantages for the "doping vs. tea" IAT (e.g., satisfactory internal scale consistency, Cronbach's α = .78, among athletes reporting regular participation in competitions).
Conclusion: The less satisfactory measurement properties of the "doping vs. supplement" IAT may be explained by the fact that the boundary between (legal) supplements and (illegal) doping substances has shifted from time to time, so that athletes were not sure whether the substances were legal.
Personality is a relevant predictor of important life outcomes across the entire lifespan. Although previous studies have suggested the comparability of the measurement of the Big Five personality traits across adulthood, the generalizability to childhood is largely unknown. The present study investigated the structure of the Big Five personality traits assessed with the Big Five Inventory-SOEP Version (BFI-S; SOEP = Socio-Economic Panel) across a broad age range spanning 11-84 years. We used two samples of N = 1,090 children (52% female, M_age = 11.87) and N = 18,789 adults (53% female, M_age = 51.09), estimating a multigroup confirmatory factor analysis (CFA) across four age groups (late childhood: 11-14 years; early adulthood: 17-30 years; middle adulthood: 31-60 years; late adulthood: 61-84 years). Our results indicated the comparability of the personality trait metric in terms of general factor structure, loading patterns, and the majority of intercepts across all age groups. Therefore, the findings suggest both a reliable assessment of the Big Five personality traits with the BFI-S even in late childhood and a vastly comparable metric across age groups.
Although many behavioral studies have investigated the effect of processing fluency on subsequent recognition memory, little research has examined the neural mechanism of this phenomenon. The present study aimed to explore the electrophysiological correlates of the effects of processing fluency on subsequent recognition memory by using an event-related potential (ERP) approach. The masked repetition priming paradigm was used to manipulate processing fluency in the study phase, and the R/K paradigm was utilized to investigate which recognition memory process (familiarity or recollection) was affected by processing fluency in the test phase. Converging behavioral and ERP results indicated that increased processing fluency impaired subsequent recollection. Results from the analysis of ERP priming effects in the study phase indicated that increased perceptual processing fluency of object features, reflected by the N/P 190 priming effect, can hinder encoding activities, reflected by the LPC priming effect, which leads to worse subsequent recollection based recognition memory. These results support the idea that processing fluency can influence subsequent recognition memory and provide a potential neural mechanism underlying this effect. However, further studies are needed to examine whether processing fluency can affect subsequent familiarity.
Assessing individual differences in achievement motivation with the Implicit Association Test
(2004)
The authors examined the validity of an Implicit Association Test (Greenwald, McGhee, & Schwartz, 1998) for assessing individual differences in achievement tendencies. Eighty-eight students completed an IAT and explicit self-ratings of achievement orientation, and were then administered a mental concentration test that they performed either in the presence or in the absence of achievement-related feedback. Implicit and explicit measures of achievement orientation were uncorrelated. Under feedback, the IAT uniquely predicted students' test performance but failed to predict their self-reported task enjoyment. Conversely, explicit self-ratings were unrelated to test performance but uniquely related to subjective accounts of task enjoyment. Without feedback, individual differences in both performance and enjoyment were independent of differences in either of the two achievement orientation measures.
The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during reading of sentences. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more strongly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus.
There is converging evidence suggesting a particular susceptibility to the addictive properties of nicotine among adolescents. The aim of the current study was to prospectively ascertain the relationship between age at first cigarette and initial smoking experiences, and to examine the combined effects of these characteristics of adolescent smoking behavior on adult smoking. It was hypothesized that the association between earlier age at first cigarette and later development of nicotine dependence may, at least in part, be attributable to differences in experiencing pleasurable early smoking sensations. Data were drawn from the participants of the Mannheim Study of Children at Risk, an ongoing epidemiological cohort study from birth to adulthood. Structured interviews at ages 15, 19, and 22 years were conducted to assess the age at first cigarette, early smoking experiences and current smoking behavior in 213 young adults. In addition, the participants completed the Fagerstrom Test for Nicotine Dependence. Adolescents who smoked their first cigarette at an earlier age reported more pleasurable sensations from the cigarette, and they were more likely to be regular smokers at age 22. The age at first cigarette also predicted the number of cigarettes smoked and dependence at age 22. Thus, both the age of first cigarette and the pleasure experienced from the cigarette independently predicted aspects of smoking at age 22.
Recent studies have emphasized an important role for neurotrophins, such as brain-derived neurotrophic factor (BDNF), in regulating the plasticity of neural circuits involved in the pathophysiology of stress-related diseases. The aim of the present study was to examine the interplay of the BDNF Val66Met and the serotonin transporter promoter (5-HTTLPR) polymorphisms in moderating the impact of early-life adversity on BDNF plasma concentration and depressive symptoms. Participants were taken from an epidemiological cohort study following the long-term outcome of early risk factors from birth into young adulthood. In 259 individuals (119 males, 140 females), genotyped for the BDNF Val66Met and the 5-HTTLPR polymorphisms, plasma BDNF was assessed at the age of 19 years. In addition, participants completed the Beck Depression Inventory (BDI). Early adversity was determined according to a family adversity index assessed at 3 months of age. Results indicated that individuals homozygous for both the BDNF Val and the 5-HTTLPR L allele showed significantly reduced BDNF levels following exposure to high adversity. In contrast, BDNF levels appeared to be unaffected by early psychosocial adversity in carriers of the BDNF Met or the 5-HTTLPR S allele. While the former group appeared to be most susceptible to depressive symptoms, the impact of early adversity was less pronounced in the latter group. This is the first preliminary evidence indicating that early-life adverse experiences may have lasting sequelae for plasma BDNF levels in humans, highlighting that the susceptibility to this effect is moderated by BDNF Val66Met and 5-HTTLPR genotype.
Enhanced endocannabinoid signaling has been implicated in typically adolescent behavioral features such as increased risk-taking, impulsivity and novelty seeking. Research investigating the impact of genetic variants in the cannabinoid receptor 1 gene (CNR1) and of early rearing conditions has demonstrated that both factors contribute to the prediction of impulsivity-related phenotypes. The present study aimed to test the hypothesis of an interaction of the two most studied CNR1 polymorphisms rs806379 and rs1049353 with early psychosocial adversity in terms of affecting impulsivity in 15-year-olds from an epidemiological cohort sample followed since birth. In 323 adolescents (170 girls, 153 boys), problems of impulse control and novelty seeking were assessed using parent-report and self-report, respectively. Exposure to early psychosocial adversity was determined in a parent interview conducted at the age of 3 months. The results indicated that impulsivity increased following exposure to early psychosocial adversity, with this increase being dependent on CNR1 genotype. In contrast, while individuals exposed to early adversity scored higher on novelty seeking, no significant impact of genotype or the interaction thereof was detected. This is the first evidence to suggest that the interaction of CNR1 gene variants with the experience of early life adversity may play a role in determining adolescent impulsive behavior. However, given that the reported findings are obtained in a high-risk community sample, results are restricted in terms of interpretation and generalization. Future research is needed to replicate these findings and to identify the mediating mechanisms underlying this effect.
Recent research suggests an important role of FKBP5, a glucocorticoid receptor regulating co-chaperone, in the development of stress-related diseases such as depression and anxiety disorders. The present study aimed to replicate and extend previous evidence indicating that FKBP5 polymorphisms moderate hypothalamus-pituitary-adrenal (HPA) function by examining whether FKBP5 rs1360780 genotype and different measures of childhood adversity interact to predict stress-induced cortisol secretion. At age 19 years, 195 young adults (90 males, 105 females) participating in an epidemiological cohort study completed the Trier Social Stress Test (TSST) to assess cortisol stress responsiveness and were genotyped for the FKBP5 rs1360780. Childhood adversity was assessed using the Childhood Trauma Questionnaire (CTQ) and by a standardized parent interview yielding an index of family adversity. A significant interaction between genotype and childhood adversity on cortisol response to stress was demonstrated for exposure to childhood maltreatment as assessed by retrospective self-report (CTQ), but not for prospectively ascertained objective family adversity. Severity of childhood maltreatment was significantly associated with attenuated cortisol levels among carriers of the rs1360780 CC genotype, while no such effect emerged in carriers of the T allele. These findings point towards the functional involvement of FKBP5 in long-term alterations of neuroendocrine stress regulation related to childhood maltreatment, which have been suggested to represent a premorbid risk or resilience factor in the context of stress-related disorders.
Objective: To examine prospectively whether early parental child-rearing behavior is a predictor of cardiometabolic outcome in young adulthood when other potential risk factors are controlled. Metabolic factors associated with increased risk for cardiovascular disease have been found to vary, depending on lifestyle as well as genetic predisposition. Moreover, there is evidence suggesting that environmental conditions, such as stress in pre- and postnatal life, may have a sustained impact on an individual's metabolic risk profile. Methods: Participants were drawn from a prospective, epidemiological, cohort study followed up from birth into young adulthood. Parent interviews and behavioral observations at the age of 3 months were conducted to assess child-rearing practices and mother-infant interaction in the home setting and in the laboratory. In 279 participants, anthropometric characteristics, low-density lipoprotein and high-density lipoprotein cholesterol, apolipoproteins, and triglycerides were recorded at age 19 years. In addition, structured interviews were administered to the young adults to assess indicators of current lifestyle and education. Results: Adverse early-life interaction experiences were significantly associated with lower levels of high-density lipoprotein cholesterol and apolipoprotein A1 in young adulthood. Current lifestyle variables and level of education did not account for this effect, although habitual smoking and alcohol consumption also contributed significantly to cardiometabolic outcomes. Conclusions: These findings suggest that early parental child-rearing behavior may predict health outcome in later life through its impact on metabolic parameters in adulthood.
Stress is known to induce cigarette craving in smokers, but the underlying mechanisms are largely unknown. We investigated how dependence severity, smoking habits and stress-induced cortisol secretion are associated with increased cigarette craving after a standardised laboratory stressor. One hundred and six healthy participants (50 men, age 18-19 years) underwent a standardised public speaking stress task. In all, 35 smoked daily (DS), 13 smoked occasionally (OS), and 58 never smoked (NS). Smoking was unrestricted until 2 h before stress onset. Plasma cortisol was measured before and up to 95 min after the stressor. All current smokers rated the intensity of cigarette craving immediately before and immediately after the stressor using the Brief Questionnaire of Smoking Urges (BQSU). Cortisol levels significantly increased in response to stress in all groups. The magnitude of this stress response was significantly lower in DS compared with OS and NS but did not differ between OS and NS. Baseline BQSU scores were significantly higher in DS than OS. BQSU scores increased significantly during the stress period and were positively correlated with the cortisol response in the DS but were unrelated to their nicotine dependence scores. In OS, no change in cigarette craving could be observed. In daily smokers, cigarette craving is increased after compared with before stress exposure and is related to the magnitude of the cortisol stress response rather than to the severity of nicotine dependence. This result supports, but does not prove, the concept that hypothalamus-pituitary-adrenal stimulation is one of the mechanisms by which stress can elicit cigarette craving.
There is ample evidence that the early initiation of alcohol use is a risk factor for the development of later alcohol-related problems. The purpose of the current study was to examine whether this association can be explained by indicators of a common underlying susceptibility or whether age at drinking onset may be considered an independent predictor of later drinking behavior, suggesting a potential causal relationship. Participants were drawn from a prospective cohort study of the long-term outcomes of early risk factors followed up from birth onwards. Structured interviews were administered to 304 participants to assess age at first drink and current drinking behavior. Data on risk factors, including early family adversity, parental alcohol use, childhood psychopathology and stressful life events, were repeatedly collected during childhood using standardized parent interviews. In addition, information on genotype was considered. Results confirmed previous work demonstrating that hazardous alcohol consumption is related to early-adolescent drinking onset. A younger age at first drink was significantly predicted by 5-HTTLPR genotype and the degree of preceding externalizing symptoms, and both factors were related to increased consumption or harmful alcohol use at age 19. However, even after controlling for these potential explanatory factors, earlier age at drinking onset remained a strong predictor of heavy alcohol consumption in young adulthood. The present longitudinal study adds to the current literature by indicating that the association between early drinking onset and adult hazardous drinking cannot solely be attributed to the shared genetic and psychopathological risk factors examined in this study.