On the interaction of depth of processing and the word predictability effect in sentence reading
(2008)
Although there is ample evidence linking insecure attachment styles and intimate partner violence (IPV), little is known about the psychological processes underlying this association, especially from the victim’s perspective. The present study examined how attachment styles relate to the experience of sexual and psychological abuse, directly or indirectly through destructive conflict resolution strategies, both self-reported and attributed to their opposite-sex romantic partner. In an online survey, 216 Spanish undergraduates completed measures of adult attachment style, engagement and withdrawal conflict resolution styles shown by self and partner, and victimization by an intimate partner in the form of sexual coercion and psychological abuse. As predicted, anxious and avoidant attachment styles were directly related to both forms of victimization. Also, an indirect path from anxious attachment to IPV victimization was detected via destructive conflict resolution strategies. Specifically, anxiously attached participants reported a higher use of conflict engagement by themselves and by their partners. In addition, engagement reported by the self and perceived in the partner was linked to an increased probability of experiencing sexual coercion and psychological abuse. Avoidant attachment was linked to higher withdrawal in conflict situations, but the paths from withdrawal to perceived partner engagement, sexual coercion, and psychological abuse were non-significant. No gender differences in the associations were found. The discussion highlights the role of anxious attachment in understanding escalating patterns of destructive conflict resolution strategies, which may increase the vulnerability to IPV victimization.
Is bad intent negligible?
(2018)
The hostile attribution bias (HAB) is a well-established risk factor for aggression. It is considered part of the suspicious mindset that may cause highly victim-justice sensitive individuals to behave uncooperatively. Thus, links of victim justice sensitivity (JS) with negative behavior, such as aggression, may be better explained by HAB. The present study tested this hypothesis in N=279 German adolescents who rated their JS, HAB, and physical, relational, verbal, reactive, and proactive aggression. Victim JS predicted physical, relational, verbal, reactive, and proactive aggression when HAB was controlled. HAB only predicted physical and proactive aggression. There were no moderator effects. Injustice seems an important reason for aggression irrespective of whether or not it is intentionally caused, particularly among those high in victim JS. Thus, victim JS should be considered a potentially important risk factor for aggression and receive more attention in research on aggression and in preventive efforts.
Individuals differ in their tendency to perceive injustice and in their responses towards these perceptions. Those high in justice sensitivity tend to show intense negative affective, cognitive, and behavioral responses towards injustice that in part also depend on the perspective from which injustice is perceived. The present research project showed that inter-individual differences in justice sensitivity may already be measured and observed in childhood and adolescence and that early adolescence seems an important age range and developmental stage for the stabilization of these differences. Furthermore, the different justice sensitivity perspectives were related to different forms of externalizing (aggression, ADHD, bullying) and internalizing problem behavior (depressive symptoms) both in children and adolescents as well as in adults in cross-sectional studies. Victim sensitivity in particular appears to constitute an important risk factor for a broad range of both externalizing and internalizing maladaptive behaviors and mental health problems, as shown in those studies using longitudinal data. Regarding aggressive behavior, victim justice sensitivity may even constitute a risk factor above and beyond other important and well-established risk factors for aggression and similar sensitivity constructs that had previously been linked to this kind of behavior. In contrast, observer and perpetrator sensitivity (perpetrator sensitivity in particular) tended to show negative links with externalizing problem behavior and instead predicted prosocial behavior in children and adolescents. However, there were also isolated positive relations of perpetrator sensitivity with emotional problems and of observer sensitivity with reactive aggression and depressive symptoms. Taken together, the findings from the present research show that justice sensitivity forms in childhood at the latest and that it may have important, long-term influences on pro- and antisocial behavior and mental health. Thus, justice sensitivity requires more attention in research on the prevention of, and intervention in, mental health problems and antisocial behavior, such as aggression.
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutions of higher education (e.g., universities). It is therefore questionable whether the term "school attack" should be limited to institutions of general education or could be extended to institutions of higher education. The scientific literature is divided on whether to distinguish or unify these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutions of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders' age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Objective:
Rejection sensitivity and justice sensitivity are personality traits that are characterized by frequent perceptions and intense adverse responses to negative social cues. Whereas there is good evidence for associations between rejection sensitivity, justice sensitivity, and internalizing problems, no longitudinal studies have investigated their association with eating disorder (ED) pathology so far. Thus, the present study examined longitudinal relations between rejection sensitivity, justice sensitivity, and ED pathology.
Method:
Participants (N = 769) reported on their rejection sensitivity, justice sensitivity, and ED pathology at 9-19 (T1), 11-21 (T2), and 14-22 years of age (T3).
Results:
Latent cross-lagged models showed longitudinal associations between ED pathology and anxious rejection sensitivity, observer and victim justice sensitivity. T1 and T2 ED pathology predicted higher T2 and T3 anxious rejection sensitivity, respectively. In turn, T2 anxious rejection sensitivity predicted more T3 ED pathology. T1 observer justice sensitivity predicted more T2 ED pathology, which predicted higher T3 observer justice sensitivity. Furthermore, T1 ED pathology predicted higher T2 victim justice sensitivity.
Discussion:
Rejection sensitivity (particularly anxious rejection sensitivity) and justice sensitivity may be involved in the maintenance or worsening of ED pathology and should be considered by future research and in the prevention and treatment of ED pathology. Also, mental health problems may increase rejection sensitivity and justice sensitivity traits in the long term.
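For orientation, the latent cross-lagged models referred to above relate each construct at a later wave to both constructs at the earlier wave. The two-variable, two-wave form below (written in LaTeX) is a generic illustration, not the exact three-wave latent specification estimated in the study:

\begin{aligned}
\mathrm{ED}_{T2} &= a_{1}\,\mathrm{ED}_{T1} + c_{1}\,\mathrm{RS}_{T1} + e_{1}\\
\mathrm{RS}_{T2} &= a_{2}\,\mathrm{RS}_{T1} + c_{2}\,\mathrm{ED}_{T1} + e_{2}
\end{aligned}

The a coefficients are autoregressive (stability) paths; the c coefficients are the cross-lagged paths whose sign and significance carry the prospective claims reported in the Results.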
Recent research provides evidence that aggressive sexual fantasies predict aggressive sexual behavior in the general population. However, sexual fantasies, including fantasies about the infliction of pain and humiliation, should be frequent and often consensually acted upon among individuals with sadomasochistic likings. The question arises whether sexual fantasies with aggressive content still predict presumably non-consensual aggressive sexual behavior in individuals with sadomasochistic likings, given that BDSM encounters are generally considered consensual. To investigate this question, we conducted a questionnaire survey of sexual fantasies, assessing the frequency of seventy sexual fantasies involving non-aggressive, masochistic, and aggressive acts. Our sample (N = 182) contained 99 respondents who self-identified as sadist, masochist, or switcher; 44 reported no such identification. For respondents reporting BDSM identification, we replicated a factor structure for sexual fantasies similar to that previously found in the general population, including three factors reflecting fantasies about increasingly severe aggressive sexual acts. Fantasies about injuring a partner and/or using weapons and fantasies about sexual coercion predicted presumably non-consensual sexual behavior independently of other risk factors for aggressive sexual behavior and irrespective of BDSM identification. Hence, severely aggressive sexual fantasies may predispose to presumably non-consensual sexual behavior in individuals both with and without BDSM identification.
Background: Aggression-related sexual fantasies (ASF) are considered an important risk factor for sexual aggression, but empirical knowledge is limited, in part because previous research has been based on predominantly male, North-American college samples and limited numbers of questions.
Aim: The present study aimed to extend knowledge about the frequency and correlates of ASF using a large sample of women and a broad range of ASF.
Method: A convenience sample of N = 664 participants from Germany, including 508 (77%) women and 156 (23%) men with a median age of 25 (21-27) years, answered an online questionnaire. Participants were mainly recruited via social networks (online and in person) and were mainly students. We examined the frequencies of (aggression-related) sexual fantasies and their expected factor structure (factors reflecting affective, experimental, masochistic, and aggression-related contents) via exploratory factor analysis. We investigated potential correlates (e.g., psychopathic traits, attitudes towards sexual fantasies) as predictors of ASF using multiple regression analyses. Finally, we examined whether ASF positively predict sexual aggression beyond other pertinent risk factors using multiple regression analysis.
Outcomes: The participants rated the frequency of a broad set of 56 aggression-related and other sexual fantasies, attitudes towards sexual fantasies, the Big Five (i.e., broad personality dimensions including neuroticism and extraversion), sexual aggression, and other risk factors for sexual aggression.
Results: All participants reported non-aggression-related sexual fantasies, and 77% reported at least one ASF in their lives. Being male, frequent sexual fantasies, psychopathic traits, and negative attitudes towards sexual fantasies predicted more frequent ASF. ASF were the strongest predictor of sexual aggression beyond other risk factors, including general aggression, psychopathic traits, rape myth acceptance, and violent pornography consumption.
Clinical Translation: ASF may be an important risk factor for sexual aggression and should be considered more strongly in prevention and intervention efforts.
Strengths and Limitations: The strengths of the present study include the use of a large item pool and a large sample with a large proportion of women in order to examine ASF as a predictor of sexual aggression beyond important control variables. Its weaknesses include the reliance on cross-sectional data, which precludes causal inferences, and the lack of a consistent distinction between consensual and non-consensual acts.
Conclusion: ASF are a frequent phenomenon even in the general population and among women and show strong associations with sexual aggression. Thus, they require more attention in research on sexual aggression and its prevention.
Individuals differ in their sensitivity toward injustice. Justice-sensitive persons perceive injustice more frequently and show stronger responses to it. Justice sensitivity has been studied predominantly in adults; little is known about its development in childhood and adolescence and its connection to prosocial behavior and emotional and behavioral problems. This study evaluates a version of the justice sensitivity inventory for children and adolescents (JSI-CA5) in 1,472 9- to 17-year-olds. Items and scales showed good psychometric properties and correlations with prosocial behavior and conduct problems similar to findings in adults, supporting the reliability and validity of the scale. We found individual differences in justice sensitivity as a function of age and gender. Furthermore, justice sensitivity predicted emotional and behavioral problems in children and adolescents over a 1- to 2-year period. Justice sensitivity perspectives can therefore be considered as risk and/or protective factors for mental health in childhood and adolescence.
Justice sensitivity captures individual differences in the frequency with which injustice is perceived and the intensity of emotional, cognitive, and behavioral reactions to it. Persons with ADHD have been reported to show high justice sensitivity, and a recent study provided evidence for this notion in an adult sample. In 1,235 German 10- to 19-year-olds, we measured ADHD symptoms, justice sensitivity from the victim, observer, and perpetrator perspective, the frequency of perceptions of injustice, anxious and angry rejection sensitivity, depressive symptoms, conduct problems, and self-esteem. Participants with ADHD symptoms reported significantly higher victim justice sensitivity, more perceptions of injustice, and higher anxious and angry rejection sensitivity, but significantly lower perpetrator justice sensitivity than controls. In latent path analyses, justice sensitivity as well as rejection sensitivity partially mediated the link between ADHD symptoms and comorbid problems when considered simultaneously. Thus, both justice sensitivity and rejection sensitivity may contribute to explaining the emergence and maintenance of problems typically associated with ADHD symptoms, and should therefore be considered in ADHD therapy.
Individual differences in justice sensitivity and rejection sensitivity have been linked to differences in aggressive behavior in adults. However, there is little research studying this association in children and adolescents and considering the two constructs in combination. We assessed justice sensitivity from the victim, observer, and perpetrator perspective as well as anxious and angry rejection sensitivity and linked both constructs to different forms (physical, relational) and functions (proactive, reactive) of self-reported aggression and to teacher- and parent-rated aggression in N = 1,489 9- to 19-year-olds in Germany. Victim sensitivity and both angry and anxious rejection sensitivity showed positive correlations with all forms and functions of aggression. Angry rejection sensitivity also correlated positively with teacher-rated aggression. Perpetrator sensitivity was negatively correlated with all aggression measures, and observer sensitivity also correlated negatively with all aggression measures except for a positive correlation with reactive aggression. Path models considering the sensitivity facets in combination and controlling for age and gender showed that higher victim justice sensitivity predicted higher aggression on all measures. Higher perpetrator sensitivity predicted lower physical, relational, proactive, and reactive aggression. Higher observer sensitivity predicted lower teacher-rated aggression. Angry rejection sensitivity predicted higher proactive and reactive aggression, whereas anxious rejection sensitivity did not make an additional contribution to the prediction of aggression. The findings are discussed in terms of social information processing models of aggression in childhood and adolescence.
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated; particularly those with an egoistic focus, such as victim justice, rejection, and provocation sensitivity, hostile attributions and trait anger as well as those with an altruistic focus, such as observer justice, perpetrator justice, and moral disgust sensitivity. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
Depressive symptoms have been related to anxious rejection sensitivity, but little is known about relations with angry rejection sensitivity and justice sensitivity. We measured rejection sensitivity, justice sensitivity, and depressive symptoms in 1,665 9- to 21-year-olds at two points of measurement. Participants with high T1 levels of depressive symptoms reported higher anxious and angry rejection sensitivity and higher justice sensitivity than controls at T1 and T2. T1 rejection sensitivity, but not justice sensitivity, predicted T2 depressive symptoms; high victim justice sensitivity, however, added to the stabilization of depressive symptoms. T1 depressive symptoms positively predicted T2 anxious and angry rejection sensitivity and victim justice sensitivity. Hence, sensitivity toward negative social cues may be cause and consequence of depressive symptoms and requires consideration in cognitive-behavioral treatment of depression.
Research indicates individual pathways towards school attacks and inconsistent offender profiles. Thus, several authors have classified offenders according to mental disorders, motives, or number/kinds of victims. We assumed differences between single and multiple victim offenders (intending to kill one or more than one victim). In qualitative and quantitative analyses of data from qualitative content analyses of case files on seven school attacks in Germany, we found differences between the offender groups in seriousness, patterns, characteristics, and classes of leaking (announcements of offences), offence-related behaviour, and offence characteristics. There were only minor differences in risk factors. Our research thus adds to the understanding of school attacks and leaking. Differences between offender groups require consideration in the planning of effective preventive approaches.
School shooters are often described as narcissistic, but empirical evidence is scant. To provide more reliable and detailed information, we conducted an exploratory study, analyzing police investigation files on seven school shootings in Germany, looking for symptoms of narcissistic personality disorder as defined by the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) in witnesses' and offenders' reports and expert psychological evaluations. Three out of four offenders who had been treated for mental disorders prior to the offenses displayed isolated symptoms of narcissism, but none was diagnosed with narcissistic personality disorder. Of the other three, two displayed narcissistic traits. In one case, the number of symptoms would have justified a diagnosis of narcissistic personality disorder. Offenders showed low and high self-esteem and a range of other mental disorders. Thus, narcissism is not a common characteristic of school shooters, but possibly more frequent than in the general population. This should be considered in developing adequate preventive and intervention measures.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately - a difficult task, because knowledge about leaking is sparse. The present study, therefore, examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment after the information session and six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several aspects (e.g. reported more perceived courses of action or improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
Database-based epidemiological study on the care of patients with dementia
(2016)
Background:
Dementia is defined by the World Health Organization as a syndrome, usually of a chronic or progressive nature, caused by a variety of brain diseases that affect memory, thinking, behavior, and the ability to perform everyday activities. Worldwide, 47.5 million people suffer from dementia, and this number is expected to triple by 2050.
The present studies analyzed, first, the factors associated with the risk of developing dementia and, second, the persistence of antidepressant treatment of depressive states and of antipsychotic treatment of behavioral disturbances in patients with dementia.
Procedure:
All three studies are based on data from the IMS Disease Analyzer database.
The Disease Analyzer data are generated monthly directly from the practice computers via standardized interfaces. The data are encrypted before transmission and correspond to the patient record in scope and level of detail.
Risk factors for a dementia diagnosis
Method:
A total of 11,956 patients with a first diagnosis (index date) of dementia (ICD-10: F01, F03, G30) between January 2010 and December 2014 were included in the study. 11,956 control patients (without dementia) were matched to the cases by age, sex, type of health insurance, and physician. In both groups, practice records were used to ensure that patients had been observed continuously for 10 years before the index date. In total, 23,912 individuals were considered.
Several conditions potentially associated with dementia were determined on the basis of general practitioners' diagnoses (ICD-10 codes): diabetes (E10-E14), hypertension (I10), obesity (E66), hyperlipidemia (E78), stroke (including transient ischemic attack, TIA) (I63, I64, G45), Parkinson's disease (G20, G21), intracranial injury (S06), coronary heart disease (I20-I25), mild cognitive impairment (F06), and mental and behavioral disorders due to alcohol use (F10). The use of several drug classes, such as statins, proton pump inhibitors, and antihypertensives (including diuretics, beta blockers, calcium channel blockers, ACE inhibitors, and angiotensin II antagonists), was also assessed.
Results:
The mean age of the 11,956 dementia patients and the 11,956 controls was 80.4 (SD 5.3) years; 39.0% were male. In the multivariate regression model, the following variables were significantly associated with an increased risk of dementia: mild cognitive impairment (MCI) (OR: 2.12), mental and behavioral disorders due to alcohol use (1.96), Parkinson's disease (PD) (1.89), stroke (1.68), intracranial injury (1.30), diabetes (1.17), lipid metabolism disorders (1.07), and coronary heart disease (1.06). The use of antihypertensives (0.96), statins (0.94), and proton pump inhibitors (PPI) (0.93) was associated with a decreased risk of developing dementia.
Conclusion:
The risk factors for dementia identified here are consistent with the literature. Nevertheless, the associations between the use of statins, PPIs, and antihypertensives and a decreased risk of dementia require further investigation.
Persistence of antidepressant treatment in patients with dementia
Method:
Patients were selected if they had received a dementia diagnosis (ICD-10: F01, F03, G30) and a first prescription of a tricyclic antidepressant, a selective serotonin reuptake inhibitor (SSRI), or a serotonin-noradrenaline reuptake inhibitor (SSNRI) between January 2004 and December 2013. Selected patients were followed for up to two years after the index date; the last follow-up date was 31 December 2014. In total, 12,281 patients were available for the persistence analysis. The primary outcome was the rate of discontinuation of antidepressant treatment within six months of the index date, with discontinuation defined as a period of 90 days without this therapy.
Demographic data comprised age, type of health insurance (private or statutory), type of practice (neurologist or general practitioner), and practice region (eastern or western Germany). The following dementia diagnoses were considered: Alzheimer's disease (G30), vascular dementia (F01), and unspecified dementia (F03). The Charlson comorbidity index was used as a general marker of comorbidity. In addition, the following diagnoses were included as comorbidities: depression (ICD-10: F32-F33), delirium (F05), type 2 diabetes mellitus (E11, E14), hypertension (I10), coronary heart disease (I24, I25), stroke (I63, I64), myocardial infarction (I21-I23), and heart failure (I50).
Results:
After six months of follow-up, 52.7% of the 12,281 dementia patients treated with antidepressants had discontinued their medication. Multivariate regression analyses showed a significantly lower risk of treatment discontinuation in patients treated with SSRIs or SSNRIs compared with patients taking tricyclic antidepressants, and a significantly lower risk of discontinuation in younger patients.
Conclusion:
The results indicate insufficient persistence of antidepressant treatment in dementia patients under everyday conditions. Improvement is needed to ensure the treatment recommended in the guidelines.
Persistence of antipsychotic treatment in patients with dementia
Method:
This study included patients aged 60 years or older with dementia of any origin who received a first antipsychotic prescription (ATC: N05A) from German psychiatrists between January 2009 and December 2013 (index date). The follow-up period ended in October 2015. Dementia was assessed on the basis of the ICD-10 codes for vascular dementia (F01), unspecified dementia (F03), and Alzheimer's disease (G30).
The primary outcome was the treatment persistence rate over a period of more than six months after the index date. Persistence was estimated as time on therapy without discontinuation, with discontinuation defined as at least 180 days without antipsychotic therapy. All patients were followed for up to two years from their index date.
Concurrent conditions were determined on the basis of diagnoses (ICD-10 codes) of depression (F32, F33), Parkinson's disease (G20), mental disorders due to known physiological conditions (F06), and personality and behavioral disorders due to physiological conditions (F07). Demographic data comprised age, sex, and type of health insurance (private/statutory).
Results:
A total of 12,979 dementia patients with a mean age of 82 years (52.1% living in nursing homes) were analyzed in this study. After two years of follow-up, 54.8%, 57.2%, 61.1%, and 65.4% of patients aged 60-69, 70-79, 80-89, and 90-99 years, respectively, had received antipsychotic prescriptions (p < 0.001). 82.6% of patients living in nursing homes and 76.2% of patients in home care continued their treatment for longer than six months; after two years, the proportions were 63.9% (nursing homes) and 55.0% (home care) (p < 0.001).
Conclusion:
The study shows that the proportion of dementia patients treated with antipsychotics is very high. Further studies, including qualitative research, are needed to understand and explain the reasons for this prescribing behavior.
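To illustrate how a discontinuation rule of the kind used in the two persistence analyses above (a gap of 90 or 180 days without therapy) can be operationalized on prescription records, here is a minimal Python sketch. The table layout, column names, and example dates are hypothetical and do not reproduce the Disease Analyzer data model; a full analysis would also account for prescribed supply duration and censoring at the end of follow-up.

import pandas as pd

# Hypothetical prescription records: one row per dispensed prescription.
rx = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "rx_date": pd.to_datetime(["2010-01-10", "2010-02-15", "2010-08-01",
                               "2011-03-01", "2011-04-05"]),
})

GAP_DAYS = 90  # discontinuation threshold used in the antidepressant analysis

def discontinued(dates, gap_days=GAP_DAYS):
    """True if any gap between consecutive prescriptions exceeds gap_days."""
    gaps = dates.sort_values().diff().dt.days.dropna()
    return bool((gaps > gap_days).any())

# Patient 1 has a gap of about 170 days and counts as discontinued; patient 2 does not.
print(rx.groupby("patient_id")["rx_date"].apply(discontinued))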
Background: Dementia is a psychiatric condition the development of which is associated with numerous aspects of life. Our aim was to estimate dementia risk factors in German primary care patients.
Methods: The case-control study included primary care patients (70-90 years) with first diagnosis of dementia (all-cause) during the index period (01/2010-12/2014) (Disease Analyzer, Germany), and controls without dementia matched (1:1) to cases on the basis of age, sex, type of health insurance, and physician. Practice visit records were used to verify that there had been 10 years of continuous follow-up prior to the index date. Multivariate logistic regression models were fitted with dementia as a dependent variable and the potential predictors.
Results: The mean age of the 11,956 cases and the 11,956 controls was 80.4 (SD: 5.3) years; 39.0% were male and 1.9% had private health insurance. In the multivariate regression model, the following variables were significantly associated with an increased risk of dementia: diabetes (OR: 1.17; 95% CI: 1.10-1.24), lipid metabolism disorders (1.07; 1.00-1.14), stroke incl. TIA (1.68; 1.57-1.80), Parkinson's disease (PD) (1.89; 1.64-2.19), intracranial injury (1.30; 1.00-1.70), coronary heart disease (1.06; 1.00-1.13), mild cognitive impairment (MCI) (2.12; 1.82-2.48), and mental and behavioral disorders due to alcohol use (1.96; 1.50-2.57). The use of statins (OR: 0.94; 0.90-0.99), proton-pump inhibitors (PPI) (0.93; 0.90-0.97), and antihypertensive drugs (0.96; 0.94-0.99) was associated with a decreased risk of developing dementia.
Conclusions: Risk factors for dementia found in this study are consistent with the literature. Nevertheless, the associations between statin, PPI and antihypertensive drug use, and decreased risk of dementia need further investigations.
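The odds ratios (OR) and confidence intervals reported above are the exponentiated coefficients of a multivariate logistic regression. The following Python sketch shows the general pattern on simulated data; the variable names and data are placeholders rather than the actual study extract, and with 1:1 matched cases and controls a conditional logistic model would be a natural alternative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
# Simulated analysis table: one row per case or control (all variables binary here).
df = pd.DataFrame({
    "dementia": rng.integers(0, 2, n),   # outcome: 1 = case, 0 = matched control
    "mci":      rng.integers(0, 2, n),   # mild cognitive impairment
    "stroke":   rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "statin":   rng.integers(0, 2, n),   # statin use
})

fit = smf.logit("dementia ~ mci + stroke + diabetes + statin", data=df).fit(disp=False)

# Exponentiated coefficients give odds ratios with 95% confidence intervals.
table = pd.concat([np.exp(fit.params).rename("OR"), np.exp(fit.conf_int())], axis=1)
print(table)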
This special issue, "Concrete constraints of abstract concepts", addresses the role of concrete determinants, both external and internal to the human body, in acquisition, processing and use of abstract concepts while at the same time presenting to the readers an overview of methods used to assess their representation.
There is a longstanding and widely held misconception about the relative remoteness of abstract concepts from concrete experiences. This review examines the current evidence for external influences and internal constraints on the processing, representation, and use of abstract concepts, like truth, friendship, and number. We highlight the theoretical benefit of distinguishing between grounded and embodied cognition and then ask what roles perception, action, language, and social interaction play in acquiring, representing, and using abstract concepts. By reviewing several studies, we show that they are, contrary to the accepted definition, not detached from perception and action. Focussing on magnitude-related concepts, we also discuss evidence for cultural influences on abstract knowledge and explore how internal processes such as inner speech, metacognition, and inner bodily signals (interoception) influence the acquisition and retrieval of abstract knowledge. Finally, we discuss some methodological developments. Specifically, we focus on the importance of studies that investigate the time course of conceptual processing and we argue that, because of the paramount role of sociality for abstract concepts, new methods are necessary to study concepts in interactive situations. We conclude that bodily, linguistic, and social constraints provide important theoretical limitations for our theories of conceptual knowledge.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences that contain verbs with past time reference, which question words and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers' eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated; and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
Parsing costs as predictors of reading difficulty : an evaluation using the Potsdam Sentence Corpus
(2008)
The surprisal of a word on a probabilistic grammar constitutes a promising complexity metric for human sentence comprehension difficulty. Using two different grammar types, surprisal is shown to have an effect on fixation durations and regression probabilities in a sample of German readers’ eye movements, the Potsdam Sentence Corpus. A linear mixed-effects model was used to quantify the effect of surprisal while taking into account unigram and bigram frequency, word length, and empirically-derived word predictability; the so-called “early” and “late” measures of processing difficulty both showed an effect of surprisal. Surprisal is also shown to have a small but statistically non-significant effect on empirically-derived predictability itself. This work thus demonstrates the importance of including parsing costs as a predictor of comprehension difficulty in models of reading, and suggests that a simple identification of syntactic parsing costs with early measures and late measures with durations of post-syntactic events may be difficult to uphold.
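The surprisal metric at issue in the abstracts above has a standard definition that may help readers place the results: it is the negative log-probability of the next word given its context, or equivalently the drop in the prefix probability assigned by the grammar as the word is integrated. The formulation below follows the usual Hale/Levy convention and is not quoted from the papers themselves.

s(w_{k+1}) = -\log_2 P_G\left(w_{k+1} \mid w_1 \ldots w_k\right)
           = \log_2 P_G\left(w_1 \ldots w_k\right) - \log_2 P_G\left(w_1 \ldots w_{k+1}\right)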
Suboptimal post-operative improvements in functional capacity are often observed after minimally invasive aortic valve replacement (mini-AVR). It remains to be studied how AVR affects cardiopulmonary and skeletal muscle function during exercise, to explain these clinical observations and to provide a basis for improved/tailored post-operative rehabilitation. Twenty-two patients with severe aortic stenosis (AS) (aortic valve area (AVA) < 1.0 cm2) were preoperatively compared to 22 healthy controls during submaximal constant-workload endurance-type exercise for oxygen uptake (VO2), carbon dioxide output (VCO2), respiratory gas exchange ratio, expiratory volume (VE), ventilatory equivalents for O2 (VE/VO2) and CO2 (VE/VCO2), respiratory rate (RR), tidal volume (Vt), heart rate (HR), oxygen pulse (VO2/HR), blood lactate, Borg ratings of perceived exertion (RPE), and exercise-onset VO2 kinetics. These exercise tests were repeated at 5 and 21 days after AVR surgery (n = 14), along with echocardiographic examinations. The respiratory exchange ratio and ventilatory equivalents (VE/VO2 and VE/VCO2) were significantly elevated, VO2 and VO2/HR were significantly lowered, and exercise-onset VO2 kinetics were significantly slower in AS patients vs. healthy controls (P < 0.05). Although the AVA was restored by mini-AVR in AS patients, VE/VO2 and VE/VCO2 worsened further significantly within 5 days after surgery, accompanied by elevations in Borg RPE, VE, and RR, and a lowered Vt. At 21 days after mini-AVR, exercise-onset VO2 kinetics slowed further significantly (P < 0.05). A decline in pulmonary function was observed early after mini-AVR surgery, which was followed by a decline in skeletal muscle function in the subsequent weeks of recovery. Therefore, a tailored rehabilitation programme should include training modalities for the respiratory and peripheral muscular systems.
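Exercise-onset VO2 kinetics, reported above as slowed in the patients, are conventionally quantified by fitting a mono-exponential function to oxygen uptake after the onset of constant-workload exercise. The equation below is the standard textbook model, given for orientation rather than taken from the study:

\dot{V}O_2(t) = \dot{V}O_{2,\text{baseline}} + A\left(1 - e^{-(t-\text{TD})/\tau}\right)

Here A is the amplitude of the response, TD the time delay, and \tau the time constant; slower kinetics correspond to a larger \tau.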
Keeping the breath in mind
(2021)
Scientific interest in brain-body interactions has been surging in recent years. One fundamental yet underexplored aspect of these interactions is the link between the respiratory and the nervous systems. In this article, we give an overview of the emerging literature on how respiration modulates neural, cognitive, and emotional processes. Moreover, we present a perspective linking respiration to the free-energy principle. We frame volitional modulation of the breath as an active inference mechanism in which sensory evidence is recontextualized to alter interoceptive models. We further propose that respiration-entrained gamma oscillations may reflect the propagation of prediction errors from the sensory level up to cortical regions in order to alter higher-level predictions. Accordingly, controlled breathing emerges as an easily accessible tool for emotional, cognitive, and physiological regulation.
On the development of mental abilities in upper secondary school : Potsdam longitudinal study
(1992)
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally reduces balance in healthy adults. However, interactions between cognitive resource reduction, aging and balance task complexity remain elusive.
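For context, the weighted-mean effect size (SMDwm) in a random-effects meta-analysis of this kind is typically an inverse-variance weighted mean of the study-level standardized mean differences. The generic formulas below are given for orientation and are not quoted from the review:

d_i = \frac{\bar{x}_{1i} - \bar{x}_{2i}}{s_{\text{pooled},i}}, \qquad
w_i = \frac{1}{v_i + \hat{\tau}^2}, \qquad
\text{SMD}_{wm} = \frac{\sum_i w_i d_i}{\sum_i w_i}

where v_i is the sampling variance of study i and \hat{\tau}^2 the estimated between-study variance.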
This article introduces a new theory, the Affective-Reflective Theory (ART) of physical inactivity and exercise. ART aims to explain and predict behavior in situations in which people either remain in a state of physical inactivity or initiate action (exercise). It is a dual-process model and assumes that exercise-related stimuli trigger automatic associations and a resulting automatic affective valuation of exercise (type-1 process). The automatic affective valuation forms the basis for the reflective evaluation (type-2 process), which can follow if self-control resources are available. The automatic affective valuation is connected with an action impulse, whereas the reflective evaluation can result in action plans. The two processes, in constant interaction, direct the individual towards or away from changing behavior. The ART of physical inactivity and exercise predicts that, when there is an affective-reflective discrepancy and self-control resources are low, behavior is more likely to be governed by the affective type-1 process. This introductory article explains the underlying concepts and main theoretical roots from which the ART of physical inactivity and exercise was developed (field theory, affective responses to exercise, automatic evaluation, evaluation-behavior link, dual-process theorizing). We also summarize the empirical tests that have been conducted to refine the theory in its present form.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative of a highly doping-prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (r_tt = .66) for the picture-based doping-BIAT.
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.
Objectives: Today, the doping attitudes of athletes can be measured either by asking athletes directly or with indirect attitude measurement procedures such as the implicit association test (IAT). Indirect measures may be helpful, for example, when the psychological effects of doping prevention programs are to be evaluated. In the present study we analyzed and compared the measurement properties of two recently published IATs.
Design: The IATs "doping substance vs. tea blend" and "doping substance vs. legal nutritional supplement" were presented to two randomly assigned independent samples of 102 athletes (44 male, 58 female; mean age 23.6 years) from different sports. Both IATs were complemented by a control IAT "word vs. non-word".
Methods: In order to test central measurement properties of both IATs, distributions of measured values, correlations with the control IAT, reliability analyses, and analyses of error rates were performed.
Results: Results pointed to a rather negative doping attitude in most athletes. In the "doping vs. supplement" IAT in particular, error rates (12%) and adaptational learning effects across test blocks were substantial (η² = .22), indicating that participants had difficulty assigning the word stimuli correctly to the respective category. We therefore see slight advantages for the "doping vs. tea" IAT (e.g., satisfactory internal scale consistency, Cronbach's α = .78, among athletes who reported being regularly involved in competitions).
Conclusion: The less satisfactory measurement properties of the "doping vs. supplement" IAT can possibly be explained by the fact that the boundaries between (legal) supplements and (illegal) doping substances have been shifted from time to time so that athletes were not sure whether substances were legal or not.
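Both abstracts above evaluate implicit association tests; the attitude estimate in an IAT is commonly expressed as a D-score, i.e. the latency difference between the two combined blocks divided by their pooled standard deviation. The Python sketch below is a deliberately simplified illustration of that idea and not the scoring algorithm used by the authors (which additionally handles error trials, outlier latencies, and practice blocks).

import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Simplified IAT D-score: mean latency difference over the pooled SD."""
    pooled_sd = np.concatenate([rt_compatible, rt_incompatible]).std(ddof=1)
    return (np.mean(rt_incompatible) - np.mean(rt_compatible)) / pooled_sd

# Hypothetical reaction times (ms): slower responses in the incompatible block.
rng = np.random.default_rng(0)
compatible = rng.normal(700, 120, 60)
incompatible = rng.normal(820, 140, 60)
print(round(iat_d_score(compatible, incompatible), 2))  # positive D = stronger association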
Drugs as instruments
(2016)
Neuroenhancement (NE) is the non-medical use of psychoactive substances to produce a subjective enhancement in psychological functioning and experience. So far, however, empirical investigations of individuals' motivation for NE have been hampered by a lack of theoretical foundation. This study aimed to apply drug instrumentalization theory to user motivation for NE. We argue that NE should be defined and analyzed from a behavioral perspective rather than in terms of the characteristics of substances used for NE. In the empirical study we explored user behavior by analyzing relationships between drug options (use of over-the-counter products, prescription drugs, or illicit drugs) and postulated drug instrumentalization goals (e.g., improved cognitive performance, counteracting fatigue, improved social interaction). Questionnaire data from 1,438 university students were subjected to exploratory and confirmatory factor analysis to address the question of whether the analysis of drug instrumentalization should be based on the assumption that users aim to achieve a certain goal and choose their drug accordingly, or whether NE behavior is more strongly rooted in a decision to try or use a certain drug option. We used factor mixture modeling to explore whether users could be separated into qualitatively different groups defined by a shared "goal × drug option" configuration. Our results indicate, first, that individuals' decisions about NE are ultimately based on their personal attitude to drug options (e.g., willingness to use an over-the-counter product but not to abuse prescription drugs) rather than motivated by the desire to achieve a specific goal (e.g., fighting tiredness) for which different drug options might be tried. Second, data analyses suggested two qualitatively different classes of users. Both predominantly used over-the-counter products, but "neuroenhancers" might be characterized by a higher propensity to instrumentalize over-the-counter products for virtually all investigated goals, whereas "fatigue-fighters" might be inclined to use over-the-counter products exclusively to fight fatigue. We believe that psychological investigations like these are essential, especially for designing programs to prevent risky behavior.
Personality is a relevant predictor of important life outcomes across the entire lifespan. Although previous studies have suggested the comparability of the measurement of the Big Five personality traits across adulthood, the generalizability to childhood is largely unknown. The present study investigated the structure of the Big Five personality traits assessed with the Big Five Inventory-SOEP Version (BFI-S; SOEP = Socio-Economic Panel) across a broad age range spanning 11-84 years. We used two samples of N = 1,090 children (52% female, mean age = 11.87 years) and N = 18,789 adults (53% female, mean age = 51.09 years), estimating a multigroup CFA across four age groups (late childhood: 11-14 years; early adulthood: 17-30 years; middle adulthood: 31-60 years; late adulthood: 61-84 years). Our results indicated the comparability of the personality trait metric in terms of the general factor structure, loading patterns, and the majority of intercepts across all age groups. The findings therefore suggest both a reliable assessment of the Big Five personality traits with the BFI-S even in late childhood and a largely comparable metric across age groups.
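The comparability tests described above rest on a multigroup confirmatory factor model in which factor loadings and item intercepts are successively constrained to equality across age groups. The generic measurement equation is shown below for orientation; it is not the authors' exact model specification.

x_{ijg} = \tau_{jg} + \lambda_{jg}\,\eta_{ig} + \varepsilon_{ijg}

where x_{ijg} is the response of person i to item j in age group g. Metric invariance imposes \lambda_{jg} = \lambda_j, and scalar invariance additionally imposes \tau_{jg} = \tau_j across groups; the reported "comparable metric" corresponds to these constraints holding for most items.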
Although many behavioral studies have investigated the effect of processing fluency on subsequent recognition memory, little research has examined the neural mechanism of this phenomenon. The present study aimed to explore the electrophysiological correlates of the effects of processing fluency on subsequent recognition memory by using an event-related potential (ERP) approach. The masked repetition priming paradigm was used to manipulate processing fluency in the study phase, and the R/K paradigm was utilized to investigate which recognition memory process (familiarity or recollection) was affected by processing fluency in the test phase. Converging behavioral and ERP results indicated that increased processing fluency impaired subsequent recollection. Results from the analysis of ERP priming effects in the study phase indicated that increased perceptual processing fluency of object features, reflected by the N/P 190 priming effect, can hinder encoding activities, reflected by the LPC priming effect, which leads to worse subsequent recollection based recognition memory. These results support the idea that processing fluency can influence subsequent recognition memory and provide a potential neural mechanism underlying this effect. However, further studies are needed to examine whether processing fluency can affect subsequent familiarity.
Self-regulated learning
(2018)
Instructional psychology
(2000)
Implicit versus explicit achievement striving: Findings on the independence of two motivational systems
(2002)
Motivation and personality: From the analysis of subsystems to the analysis of their interactions
(1999)
Assessing individual differences in achievement motivation with the Implicit Association Test
(2004)
The authors examined the validity of an Implicit Association Test (Greenwald, McGhee, & Schwartz, 1998) for assessing individual differences in achievement tendencies. Eighty-eight students completed an IAT and explicit self-ratings of achievement orientation, and were then administered a mental concentration test that they performed either in the presence or in the absence of achievement-related feedback. Implicit and explicit measures of achievement orientation were uncorrelated. Under feedback, the IAT uniquely predicted students' test performance but failed to predict their self-reported task enjoyment. Conversely, explicit self-ratings were unrelated to test performance but uniquely related to subjective accounts of task enjoyment. Without feedback, individual differences in both performance and enjoyment were independent of differences in either of the two achievement orientation measures.
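A hedged illustration of the moderation pattern described above (implicit measure predicting performance only under feedback, explicit measure predicting enjoyment), using simulated placeholder data and ordinary least squares with an interaction term; none of the variable names or coefficients come from the original study.

```python
# Illustrative moderation analysis with hypothetical, simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 88
df = pd.DataFrame({
    "iat": rng.normal(size=n),          # implicit achievement-orientation score
    "explicit": rng.normal(size=n),     # explicit self-rating
    "feedback": rng.integers(0, 2, n),  # 0 = no feedback, 1 = feedback condition
})
# Simulated outcomes built to mimic the reported dissociation.
df["performance"] = 0.5 * df["iat"] * df["feedback"] + rng.normal(size=n)
df["enjoyment"] = 0.4 * df["explicit"] + rng.normal(size=n)

# The iat:feedback interaction tests whether the implicit measure predicts
# performance only when feedback is given.
print(smf.ols("performance ~ iat * feedback", data=df).fit().summary())
print(smf.ols("enjoyment ~ explicit + iat", data=df).fit().summary())
```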
Self-regulated learning
(2001)
The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during sentence reading. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more strongly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of the attentional focus.
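A toy sketch of the moving-window logic assumed here: text outside a gaze-contingent window around the fixated letter is masked. The window size and mask character are illustrative, not the parameters used in the study.

```python
# Minimal sketch of a gaze-contingent moving window for reading research.
def moving_window(sentence: str, fixation_index: int, span: int, mask: str = "x") -> str:
    """Return the sentence with everything outside the window masked (spaces kept)."""
    out = []
    for i, ch in enumerate(sentence):
        inside = abs(i - fixation_index) <= span
        out.append(ch if (inside or ch == " ") else mask)
    return "".join(out)

sentence = "Dysphoric readers were tested with gaze-contingent windows."
# Only the letters within +/- 6 positions of the fixated letter remain visible.
print(moving_window(sentence, fixation_index=10, span=6))
```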
There is converging evidence suggesting a particular susceptibility to the addictive properties of nicotine among adolescents. The aim of the current study was to prospectively ascertain the relationship between age at first cigarette and initial smoking experiences, and to examine the combined effects of these characteristics of adolescent smoking behavior on adult smoking. It was hypothesized that the association between earlier age at first cigarette and later development of nicotine dependence may, at least in part, be attributable to differences in experiencing pleasurable early smoking sensations. Data were drawn from the participants of the Mannheim Study of Children at Risk, an ongoing epidemiological cohort study from birth to adulthood. Structured interviews at ages 15, 19, and 22 years were conducted to assess the age at first cigarette, early smoking experiences and current smoking behavior in 213 young adults. In addition, the participants completed the Fagerström Test for Nicotine Dependence. Adolescents who smoked their first cigarette at an earlier age reported more pleasurable sensations from the cigarette, and they were more likely to be regular smokers at age 22. The age at first cigarette also predicted the number of cigarettes smoked and dependence at age 22. Thus, both the age of first cigarette and the pleasure experienced from the cigarette independently predicted aspects of smoking at age 22.
Recent studies have emphasized an important role for neurotrophins, such as brain-derived neurotrophic factor (BDNF), in regulating the plasticity of neural circuits involved in the pathophysiology of stress-related diseases. The aim of the present study was to examine the interplay of the BDNF Val66Met and the serotonin transporter promoter (5-HTTLPR) polymorphisms in moderating the impact of early-life adversity on BDNF plasma concentration and depressive symptoms. Participants were taken from an epidemiological cohort study following the long-term outcome of early risk factors from birth into young adulthood. In 259 individuals (119 males, 140 females), genotyped for the BDNF Val66Met and the 5-HTTLPR polymorphisms, plasma BDNF was assessed at the age of 19 years. In addition, participants completed the Beck Depression Inventory (BDI). Early adversity was determined according to a family adversity index assessed at 3 months of age. Results indicated that individuals homozygous for both the BDNF Val and the 5-HTTLPR L allele showed significantly reduced BDNF levels following exposure to high adversity. In contrast, BDNF levels appeared to be unaffected by early psychosocial adversity in carriers of the BDNF Met or the 5-HTTLPR S allele. While the former group appeared to be most susceptible to depressive symptoms, the impact of early adversity was less pronounced in the latter group. This is the first preliminary evidence indicating that early-life adverse experiences may have lasting sequelae for plasma BDNF levels in humans, highlighting that the susceptibility to this effect is moderated by BDNF Val66Met and 5-HTTLPR genotype.
Enhanced endocannabinoid signaling has been implicated in typically adolescent behavioral features such as increased risk-taking, impulsivity and novelty seeking. Research investigating the impact of genetic variants in the cannabinoid receptor 1 gene (CNR1) and of early rearing conditions has demonstrated that both factors contribute to the prediction of impulsivity-related phenotypes. The present study aimed to test the hypothesis that the two most studied CNR1 polymorphisms, rs806379 and rs1049353, interact with early psychosocial adversity in affecting impulsivity in 15-year-olds from an epidemiological cohort sample followed since birth. In 323 adolescents (170 girls, 153 boys), problems of impulse control and novelty seeking were assessed using parent-report and self-report, respectively. Exposure to early psychosocial adversity was determined in a parent interview conducted at the age of 3 months. The results indicated that impulsivity increased following exposure to early psychosocial adversity, with this increase being dependent on CNR1 genotype. In contrast, while individuals exposed to early adversity scored higher on novelty seeking, no significant impact of genotype or the interaction thereof was detected. This is the first evidence to suggest that the interaction of CNR1 gene variants with the experience of early life adversity may play a role in determining adolescent impulsive behavior. However, given that the reported findings were obtained in a high-risk community sample, results are restricted in terms of interpretation and generalization. Future research is needed to replicate these findings and to identify the mediating mechanisms underlying this effect.
Recent research suggests an important role of FKBP5, a glucocorticoid receptor-regulating co-chaperone, in the development of stress-related diseases such as depression and anxiety disorders. The present study aimed to replicate and extend previous evidence indicating that FKBP5 polymorphisms moderate hypothalamus-pituitary-adrenal (HPA) function by examining whether FKBP5 rs1360780 genotype and different measures of childhood adversity interact to predict stress-induced cortisol secretion. At age 19 years, 195 young adults (90 males, 105 females) participating in an epidemiological cohort study completed the Trier Social Stress Test (TSST) to assess cortisol stress responsiveness and were genotyped for the FKBP5 rs1360780 polymorphism. Childhood adversity was assessed using the Childhood Trauma Questionnaire (CTQ) and by a standardized parent interview yielding an index of family adversity. A significant interaction between genotype and childhood adversity on cortisol response to stress was demonstrated for exposure to childhood maltreatment as assessed by retrospective self-report (CTQ), but not for prospectively ascertained objective family adversity. Severity of childhood maltreatment was significantly associated with attenuated cortisol levels among carriers of the rs1360780 CC genotype, while no such effect emerged in carriers of the T allele. These findings point towards the functional involvement of FKBP5 in long-term alterations of neuroendocrine stress regulation related to childhood maltreatment, which have been suggested to represent a premorbid risk or resilience factor in the context of stress-related disorders.
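As an illustration of the gene-by-environment test described above, the following hedged sketch regresses a cortisol outcome on maltreatment severity, genotype, and their interaction; the column names (cortisol_auc, ctq, genotype) are hypothetical placeholders, not the study's variables.

```python
# Schematic gene x environment interaction test with hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

def fit_gxe(df: pd.DataFrame):
    """df columns (hypothetical): cortisol_auc, ctq (maltreatment severity),
    genotype coded as 'CC' vs. 'T_carrier'."""
    model = smf.ols("cortisol_auc ~ ctq * C(genotype)", data=df).fit()
    # The ctq:C(genotype) interaction coefficient captures whether the
    # maltreatment slope differs between CC homozygotes and T-allele carriers.
    return model

# model = fit_gxe(df); print(model.summary())
```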
Objective: To examine prospectively whether early parental child-rearing behavior is a predictor of cardiometabolic outcome in young adulthood when other potential risk factors are controlled. Metabolic factors associated with increased risk for cardiovascular disease have been found to vary, depending on lifestyle as well as genetic predisposition. Moreover, there is evidence suggesting that environmental conditions, such as stress in pre- and postnatal life, may have a sustained impact on an individual's metabolic risk profile. Methods: Participants were drawn from a prospective, epidemiological cohort study followed up from birth into young adulthood. Parent interviews and behavioral observations at the age of 3 months were conducted to assess child-rearing practices and mother-infant interaction in the home setting and in the laboratory. In 279 participants, anthropometric characteristics, low-density lipoprotein and high-density lipoprotein cholesterol, apolipoproteins, and triglycerides were recorded at age 19 years. In addition, structured interviews were administered to the young adults to assess indicators of current lifestyle and education. Results: Adverse early-life interaction experiences were significantly associated with lower levels of high-density lipoprotein cholesterol and apolipoprotein A1 in young adulthood. Current lifestyle variables and level of education did not account for this effect, although habitual smoking and alcohol consumption also contributed significantly to cardiometabolic outcomes. Conclusions: These findings suggest that early parental child-rearing behavior may predict health outcome in later life through its impact on metabolic parameters in adulthood.
Stress is known to induce cigarette craving in smokers, but the underlying mechanisms are largely unknown. We investigated how dependence severity, smoking habits and stress-induced cortisol secretion are associated with increased cigarette craving after a standardised laboratory stressor. One hundred and six healthy participants (50 men, age 18-19 years) underwent a standardised public speaking stress task. In all, 35 smoked daily (DS), 13 smoked occasionally (OS), and 58 never smoked (NS). Smoking was unrestricted until 2 h before stress onset. Plasma cortisol was measured before and up to 95 min after the stressor. All current smokers rated intensity of cigarette craving immediately before and immediately after the stressor using the Brief Questionnaire of Smoking Urges (BQSU). Cortisol levels significantly increased in response to stress in all groups. The magnitude of this stress response was significantly lower in DS compared with OS and NS but did not differ between OS and NS. Baseline BQSU scores were significantly higher in DS than OS. BQSU scores increased significantly during the stress period and were positively correlated with the cortisol response in the DS but were unrelated to their nicotine dependence scores. In OS, no change in cigarette craving could be observed. In daily smokers, cigarette craving is thus higher after stress exposure than before and is related to the magnitude of the cortisol stress response rather than to the severity of nicotine dependence. This result supports, but does not prove, the concept that hypothalamus-pituitary-adrenal stimulation is one of the mechanisms by which stress can elicit cigarette craving.
There is ample evidence that the early initiation of alcohol use is a risk factor for the development of later alcohol-related problems. The purpose of the current study was to examine whether this association can be explained by indicators of a common underlying susceptibility or whether age at drinking onset may be considered an independent predictor of later drinking behavior, suggesting a potential causal relationship. Participants were drawn from a prospective cohort study of the long-term outcomes of early risk factors followed up from birth onwards. Structured interviews were administered to 304 participants to assess age at first drink and current drinking behavior. Data on risk factors, including early family adversity, parental alcohol use, childhood psychopathology and stressful life events, were repeatedly collected during childhood using standardized parent interviews. In addition, information on genotype was considered. Results confirmed previous work demonstrating that hazardous alcohol consumption is related to early-adolescent drinking onset. A younger age at first drink was significantly predicted by 5-HTTLPR genotype and the degree of preceding externalizing symptoms, and both factors were related to increased consumption or harmful alcohol use at age 19. However, even after controlling for these potential explanatory factors, earlier age at drinking onset remained a strong predictor of heavy alcohol consumption in young adulthood. The present longitudinal study adds to the current literature by indicating that the association between early drinking onset and adult hazardous drinking cannot be attributed solely to the shared genetic and psychopathological risk factors examined in this study.
Background: Recent animal and human studies indicate that exposure to alcohol during early adolescence increases the risk for heavy alcohol use in response to stress. The purpose of this study was to examine whether this effect may be the consequence of a higher susceptibility to develop "drinking to cope" motives among early initiators. Methods: Data from 320 participants were collected as part of the Mannheim Study of Children at Risk, an ongoing epidemiological cohort study. Structured interviews at ages 15 and 19 were used to assess age at first alcohol experience and drunkenness. The young adults completed questionnaires to obtain information about the occurrence of stressful life events during the past 4 years and current drinking habits. In addition, alcohol use under conditions of negative states was assessed with the Inventory of Drinking Situations. Results: The probability of young adults' alcohol use in situations characterized by unpleasant emotions was significantly increased the earlier they had initiated the use of alcohol, even when controlling for current drinking habits and stressful life events. Similar results were obtained for the age at first drunkenness. Conclusions: The findings strengthen the hypothesis that alcohol experiences during early adolescence facilitate drinking to regulate negative affect as an adverse coping strategy, which may represent the starting point of a vicious circle comprising drinking to relieve stress and increased stress as a consequence of drinking.
Considerable evidence suggests that genetic factors combine with environmental influences to impact on the development of aggressive behavior. A genetic variant that has repeatedly been reported to render individuals more sensitive to the presence of adverse experiences, including stress exposure during fetal life, is the seven-repeat allele of the dopamine D4 receptor (DRD4) gene.
The present investigation concentrated on the interplay of prenatal maternal stress and DRD4 genotype in predicting self-reported aggression in young adults. As disruption of the hypothalamic-pituitary-adrenal system has been discussed as a pathophysiological pathway to aggression, cortisol stress reactivity was additionally examined.
As part of an epidemiological cohort study, prenatal maternal stress was assessed by maternal interview 3 months after childbirth. Between the ages of 19 and 23 years, 298 offspring (140 males, 158 females) completed the Young Adult Self-Report to measure aggressive behavior and were genotyped for the DRD4 gene. At 19 years, 219 participants additionally underwent the Trier Social Stress Test to determine cortisol reactivity.
Extending earlier findings with respect to childhood antisocial behavior, the results revealed that, under conditions of higher prenatal maternal stress, carriers of the DRD4 seven-repeat allele displayed more aggression in adulthood (p = 0.032). Moreover, the same conditions which seemed to promote aggression were found to predict attenuated cortisol secretion (p = 0.028).
This is the first study to indicate a long-term impact of prenatal stress exposure on the cortisol stress response depending on DRD4 genotype.
When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features. This experiment examined the hypothesis that neutral contextual features of a virtual environment become associated with aggressive meaning and acquire the function of primes for aggressive cognitions. Seventy-six participants were assigned to one of two violent video game conditions that varied in context (ship vs. city environment) or a control condition. Afterwards, they completed a Lexical Decision Task to measure the accessibility of aggressive cognitions in which they were primed either with ship-related or city-related words. As predicted, participants who had played the violent game in the ship environment had shorter reaction times for aggressive words following the ship primes than the city primes, whereas participants in the city condition responded faster to the aggressive words following the city primes compared to the ship primes. No parallel effect was observed for the non-aggressive targets. The findings indicate that the associations between violent and neutral cognitions learned during violent game play facilitate the accessibility of aggressive cognitions.
This article investigated how the development of deviant behavior in adolescence is influenced by the variability of deviant behavior in the peer group. Based on the social information-processing (SIP) model, we predicted that peer groups with a low variability of deviant behavior (providing normative information that is easy to process) should have a main effect on the development of adolescents' deviant behavior over time, whereas peer groups in which deviant behavior is more variable (i.e., more difficult to process) should primarily impact the deviant behavior of initially nondeviant classroom members. These hypotheses were largely supported in a multilevel analysis using self-reports of deviant behavior in a sample of 16,891 adolescents in 1,308 classes assessed at two data waves about 1 year apart. The results demonstrate the advantages of studying cross-level interactions to clarify the impact of the peer environment on the development of deviant behavior in adolescence.
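The cross-level interaction described above could, in principle, be tested with a mixed model along the following lines; this is a schematic sketch with hypothetical column names, not the authors' analysis code.

```python
# Schematic multilevel model: students nested in classrooms, cross-level
# interaction between classroom-level variability and individual initial deviance.
import pandas as pd
import statsmodels.formula.api as smf

def fit_crosslevel(df: pd.DataFrame):
    """df columns (hypothetical): deviance_t2, deviance_t1 (student level),
    class_mean_t1, class_sd_t1 (classroom level), class_id."""
    model = smf.mixedlm(
        "deviance_t2 ~ deviance_t1 * class_sd_t1 + class_mean_t1",
        data=df,
        groups=df["class_id"],   # random intercept per classroom
    ).fit()
    # The deviance_t1:class_sd_t1 term tests whether the effect of initial deviance
    # depends on how variable deviant behavior is within the classroom.
    return model

# model = fit_crosslevel(df); print(model.summary())
```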
The Girls Set the Tone: Gendered Classroom Norms and the Development of Aggression in Adolescence
(2015)
In a four-wave longitudinal study with N = 1,321 adolescents in Germany, we examined the impact of class-level normative beliefs about aggression on aggressive norms and behavior at the individual level over the course of 3 years. At each data wave, participants indicated their normative acceptance of aggressive behavior and provided self-reports of physical and relational aggression. Multilevel analyses revealed significant cross-level interactions between class-level and individual-level normative beliefs at T1 on individual differences in physical aggression at T2, and the indirect interactive effects were significant up to T4. Normative approval of aggression at the class level, especially girls' normative beliefs, defined the boundary conditions for the expression of individual differences in aggressive norms and their impact on physically and relationally aggressive behavior for both girls and boys. The findings demonstrate the moderating effect of social norms on the pathways from individual normative beliefs to aggressive behavior in adolescence.
Objective: To examine the quality of reporting on suicides and suicide attempts in German-language youth magazines and to assess changes in suicide rates among adolescents in Austria after the publication of such reports. Method: Suicide reports from five major German-language youth magazines were analyzed by qualitative content analysis with respect to gender, reported motives, methods of suicide and suicide attempts, positive and negative portrayals, attributions of blame, and deviations from media guidelines on suicide reporting. Suicide rates in the 2 weeks before and after the publication of suicide reports were compared. Results: 59 reports were identified. Suicide was covered most frequently in the magazine Bravo, with a slight overrepresentation of female suicides and an overall underrepresentation of suicide attempts. Consistent with the epidemiology of suicidal behavior, jumping from a height was described most often for girls and hanging most often for boys. Regarding the reported motives, important factors such as psychiatric disorders were rarely mentioned. Whereas female suicide victims were often portrayed positively, male suicide victims were more often portrayed negatively. Implicit blame was predominantly attributed to the parents. There was no indication of a Werther effect following the reports. Conclusions: The substantial divergence between the epidemiology of adolescent suicidality and the media portrayals currently prevailing in German-speaking countries highlights important starting points for prevention and public education efforts.
Individual differences in demographics, personality, and other related beliefs are associated with coronavirus disease 2019 (COVID-19) threat beliefs. However, the relative contributions of these different types of individual differences to COVID-19 threat beliefs are not known. In this study, a total of 1,700 participants in Croatia (68% female; age 18-86 years) completed a survey that included questions about COVID-19 risks; questions about related beliefs, including vaccination beliefs, trust in the health system, trust in scientists, and trust in the political system; the HEXACO-60 personality inventory; and demographic questions about gender, age, chronic diseases, and region. We used hierarchical regression analyses to examine the proportion of variance explained by demographics, personality, and other related beliefs. All three types of individual differences explained a part of the variance in COVID-19 threat beliefs, with related beliefs explaining the largest part. Personality facets explained a slightly larger amount of variance than personality factors. These results have implications for communication about COVID-19.
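A hedged sketch of a blockwise (hierarchical) regression of the kind described above, entering demographics, personality, and related beliefs in successive steps and reading off the increase in explained variance at each step; all predictor names are hypothetical placeholders.

```python
# Schematic hierarchical regression: compare R^2 across nested predictor blocks.
import pandas as pd
import statsmodels.formula.api as smf

blocks = [
    "threat ~ age + C(gender) + C(chronic_disease)",                       # demographics
    "threat ~ age + C(gender) + C(chronic_disease) + honesty + emotionality"
    " + extraversion + agreeableness + conscientiousness + openness",      # + personality
    "threat ~ age + C(gender) + C(chronic_disease) + honesty + emotionality"
    " + extraversion + agreeableness + conscientiousness + openness"
    " + vaccine_beliefs + trust_science + trust_health + trust_politics",  # + beliefs
]

def delta_r2(df: pd.DataFrame):
    """Return R^2 of the first block and the R^2 increment of each later block."""
    r2 = [smf.ols(f, data=df).fit().rsquared for f in blocks]
    return [r2[0]] + [b - a for a, b in zip(r2, r2[1:])]

# print(delta_r2(df))  # variance explained by each successive block
```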
The aim of this doctoral thesis was the development and evaluation of a skills-based primary prevention program (Mainzer Schultraining zur Essstörungsprävention, MaiStep) for partial and full-syndrome eating disorders. Effectiveness was assessed via a primary outcome (reduction of existing eating disorder symptoms) and a secondary outcome (associated psychopathology) 3 and 12 months after the training. The randomized controlled trial comprised two intervention groups and one active control group. 1,654 adolescents (female/male: 781/873; mean age: 13.1 ± 0.7; BMI: 20.0 ± 3.5) were recruited at randomly selected schools in Rhineland-Palatinate. The development of the prevention program was based on a systematic literature review of 63 scientific studies on the prevention of eating disorders in childhood and adolescence. One intervention group was led by psychologists and a second by teachers. The substance abuse and stress prevention program delivered in the active control group was led by teachers. At the 3-month follow-up, MaiStep showed no significant effects compared with the active control group. After 12 months, however, multiple significant effects emerged between the intervention groups and the active control group. For the primary outcome, significantly fewer adolescents with partial anorexia nervosa (χ²(2) = 8.74, p = .01) and/or partial bulimia nervosa (χ²(2) = 7.25, p = .02) were found in the intervention groups. For the secondary outcomes, significant changes emerged between the intervention groups and the active control group on the Eating Disorder Inventory (EDI-2) subscales drive for thinness (F(2, 355) = 3.94, p = .02) and perfectionism (F(2, 355) = 4.19, p = .01) as well as on the Body Image Avoidance Questionnaire (BIAQ) (F(2, 525) = 18.79, p = .01). MaiStep can thus be regarded as a successful program for reducing partial eating disorders in the 13- to 15-year-old age group. Despite different mechanisms of action, teachers proved just as successful as psychologists in delivering the program.
Trial registration: MaiStep is registered in the German Clinical Trials Register (DRKS00005050).
Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing concerns the roles of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection, respectively). Yet, rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene perception research has mainly focused on the factors determining fixation locations. The present thesis aimed to increase knowledge of how central and peripheral vision contribute to spatial and, in particular, to temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model of how foveal and peripheral processing might be coordinated for the control of fixation duration. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adapted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration. Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias toward avoiding spatial-frequency-filtered scene regions as saccade targets. We demonstrate that the observed saccade amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision for gaze control during scene viewing, particularly for the control of fixation duration. Our results entail new implications for computational models and for experimental research on scene perception.
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.
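To make the manipulation concrete, the sketch below blends an unfiltered central region with a low-pass filtered periphery around a given gaze position; the radius and blur strength are illustrative values only, not those used in the experiments.

```python
# Rough sketch of a gaze-contingent peripheral low-pass filter for a grayscale scene.
import numpy as np
from scipy.ndimage import gaussian_filter

def peripheral_lowpass(scene: np.ndarray, gaze_xy: tuple[int, int],
                       radius: float = 80.0, sigma: float = 4.0) -> np.ndarray:
    """Blend an unfiltered center with a low-pass filtered (blurred) periphery."""
    blurred = gaussian_filter(scene.astype(float), sigma=sigma)
    ys, xs = np.indices(scene.shape)
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    center_mask = (dist <= radius).astype(float)   # 1 inside the central region
    return center_mask * scene + (1.0 - center_mask) * blurred

# Example with a random "scene"; in the experiments the filter would be updated
# gaze-contingently on every display refresh based on the current eye position.
scene = np.random.rand(600, 800)
filtered = peripheral_lowpass(scene, gaze_xy=(400, 300))
```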
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.