The human language processing mechanism assigns a structure to the incoming materials as they unfold. There is evidence that the parser prefers some attachment types over others; however, theories of sentence processing are still in dispute over the stage at which each source of information contributes to the parsing system. The present study aims to identify the nature of initial parsing decisions during sentence processing by manipulating attachment type and verbs' argument structure. To this end, we designed a self-paced reading task using globally ambiguous constructions in Dutch. The structures included double locative prepositional phrases (PPs) where the first PP could attach either to the verb (high attachment) or to the noun preceding it (low attachment). To disambiguate the structures, we presented a visual context in the form of short animation clips prior to each reading task. Furthermore, we manipulated the argument structure of the sentences using 2- and 3-argument verbs. The results showed that parsing decisions were influenced by contextual cues depending on the argument structure of the verb. That is, the visual context overcame the preference for high attachment only in the case of 2-argument verbs, while this preference persisted in structures including 3-argument verbs, as reflected in longer reading times for the low attachment interpretations. These findings can be taken as evidence that our language processing system actively integrates information from linguistic and non-linguistic sources from the initial stages of analysis to build up meaning. We discuss our findings in light of serial and parallel models of sentence processing.
Background & aims:
This study aimed to describe the association of healthy eating literacy (HEL) with energy, nutrient, and food consumption among young women with normal or lean body weight at a Japanese university, taking their resident status into account.
Methods:
Cross-sectional data from the Ochanomizu Health Study were used in this study. Participants answered a self-administered, two-part, anonymous survey in 2018 and 2019.
A total of 203 female undergraduate students with lean and normal body mass index (BMI) were included in the analysis. Single and stepwise multiple linear regression analysis was used to examine the association of HEL and resident status with healthy food consumption, such as vegetables, fish, and shellfish.
The independent variables were HEL and resident status, and the covariates were age, BMI, and total metabolic equivalents.
Results:
The median (25th, 75th percentile) age, BMI, and total HEL score were 20 (19, 21) years, 20.2 (18.9, 21.3) kg/m², and 18 (16, 20), respectively.
Resident status and HEL were independently associated with vegetables, fish, and shellfish intake.
Participants who had higher total HEL scores and lived in their family home consumed significantly more vegetables (b = 0.17 and −0.34, p < 0.05) and fish and shellfish (b = 0.24 and −0.28, p < 0.001).
Conclusion:
This study provides an insight into the association between HEL and dietary consumption in young women with normal and lean BMI.
Anger, indignation, guilt, rumination, victim compensation, and perpetrator punishment are considered primary responses associated with justice sensitivity (JS).
However, injustice and high JS may predispose to further responses.
We had N = 293 adults rate their JS, 17 potential responses toward 12 unjust scenarios from the victim's, observer's, beneficiary's, and perpetrator's perspectives, and several control variables.
Unjust situations generally elicited many affective, cognitive, and behavioral responses. JS generally predisposed to strong affective responses to injustice, including sadness, pity, disappointment, and helplessness. It impaired trivialization, victim-blaming, and justification, which may otherwise help people cope with injustice.
It predisposed to conflict solutions and victim compensation. Particularly victim and beneficiary JS had stronger effects in unjust situations from the corresponding perspective.
These findings contribute to a better understanding of the main and interaction effects of unjust situations experienced from different perspectives and of the JS facets, of differences between the JS facets, and of the links between JS, behavior, and well-being.
The ‘social brain’, consisting of areas sensitive to social information, supposedly gates the mechanisms involved in human language learning. Early preverbal interactions are guided by ostensive signals, such as gaze patterns, which are coordinated across body, brain, and environment. However, little is known about how the infant brain processes social gaze in naturalistic interactions and how this relates to infant language development. During free-play of 9-month-olds with their mothers, we recorded hemodynamic cortical activity of ‘social brain’ areas (prefrontal cortex, temporo-parietal junctions) via fNIRS, and micro-coded mother’s and infant’s social gaze. Infants’ speech processing was assessed with a word segmentation task. Using joint recurrence quantification analysis, we examined the connection between infants’ ‘social brain’ activity and the temporal dynamics of social gaze at intrapersonal (i.e., infant’s coordination, maternal coordination) and interpersonal (i.e., dyadic coupling) levels. Regression modeling revealed that intrapersonal dynamics in maternal social gaze (but not infant’s coordination or dyadic coupling) coordinated significantly with infant’s cortical activity. Moreover, recurrence quantification analysis revealed that intrapersonal maternal social gaze dynamics (in terms of entropy) were the best predictor of infants’ word segmentation. The findings support the importance of social interaction in language development, particularly highlighting maternal social gaze dynamics.
Previous studies suggest that associations between numbers and space are mediated by shifts of visuospatial attention along the horizontal axis. In this study, we investigated the effect of vertical shifts of overt attention, induced by optokinetic stimulation (OKS) and monitored through eye-tracking, in two tasks requiring explicit (number comparison) or implicit (parity judgment) processing of number magnitude. Participants were exposed to black-and-white stripes (OKS) that moved vertically (upward or downward) or remained static (control condition). During the OKS, participants were asked to verbally classify auditory one-digit numbers as larger/smaller than 5 (comparison task; Exp. 1) or as odd/even (parity task; Exp. 2). OKS modulated response times in both experiments. In Exp.1, upward attentional displacement decreased the Magnitude effect (slower responses for large numbers) and increased the Distance effect (slower responses for numbers close to the reference). In Exp.2, we observed a complex interaction between parity, magnitude, and OKS, indicating that downward attentional displacement slowed down responses for large odd numbers. Moreover, eye tracking analyses revealed an influence of number processing on eye movements both in Exp. 1, with eye gaze shifting downwards during the processing of small numbers as compared to large ones; and in Exp. 2, with leftward shifts after large even numbers (6,8) and rightward shifts after large odd numbers (7,9). These results provide evidence of bidirectional links between number and space and extend them to the vertical dimension. Moreover, they document the influence of visuo-spatial attention on processing of numerical magnitude, numerical distance, and parity. Together, our findings are in line with grounded and embodied accounts of numerical cognition.
Rhythmicity characterizes both interpersonal synchrony and spoken language. Emotions and language are forms of interpersonal communication, which interact with each other throughout development. We investigated whether and how emotional synchrony between mothers and their 9-month-old infants relates to infants' word segmentation as an early marker of language development. Twenty-six 9-month-old infants and their German-speaking mothers took part in the study. To measure emotional synchrony, we coded positive, neutral and negative emotional expressions of the mothers and their infants during a free play session. We then calculated the degree to which the mothers' and their infants' matching emotional expressions followed a predictable pattern. To measure word segmentation, we familiarized infants with auditory text passages and tested how long they looked at the screen while listening to familiar versus novel words. We found that higher levels of predictability (i.e., low entropy) during mother-infant interaction are associated with infants' word segmentation performance. These findings suggest that individual differences in word segmentation relate to the complexity and predictability of emotional expressions during mother-infant interactions.
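The predictability measure described in this abstract can be illustrated with Shannon entropy over a sequence of coded dyadic states. This is a minimal sketch, not the authors' actual coding scheme or analysis pipeline; the state labels and example sequences are hypothetical.

```python
from collections import Counter
from math import log2

def sequence_entropy(states):
    """Shannon entropy (in bits) of a sequence of coded states.
    Lower entropy means the interaction pattern is more predictable."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical mother-infant emotion pairings coded per time bin:
predictable = ["pos-pos"] * 8 + ["neu-neu"] * 2
variable = ["pos-pos", "neu-neg", "pos-neu", "neg-neg", "neu-neu",
            "pos-neg", "neg-pos", "neu-pos", "pos-pos", "neg-neu"]

print(sequence_entropy(predictable) < sequence_entropy(variable))  # True
```

A dyad that spends most of the session in one matched state yields low entropy; a dyad whose expression pairings change from bin to bin yields high entropy.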
This pre-registered study examined the prevalence and correlates of sexual aggression in a sample of 530 Iranians (322 women, 208 men) with a behaviorally specific questionnaire distinguishing between different coercive strategies, victim-perpetrator relationships, and sexual acts. Significantly more women (63.0%) than men (51.0%) experienced at least one incident of sexual aggression victimization since the age of 15 years, and significantly more men (37.0%) than women (13.4%) reported at least one incident of sexual aggression perpetration. In women and men, the experience of child sexual abuse predicted sexual victimization and sexual aggression perpetration after the age of 15 years, both directly and indirectly through higher engagement in risky sexual behavior. Greater endorsement of hostile masculinity among men explained additional variance in the prediction of sexual aggression perpetration. This research is a first step towards documenting and explaining high rates of sexual aggression victimization and perpetration among Iranian women and men, providing important information for sex education as well as for the prevention of sexual aggression. However, to achieve these goals, we highlight the need for systematic actions in all educational, social, and legal sectors of Iranian society.
The development of interface technologies is driven by the goal of making interaction more positive through natural action-control mappings.
In Virtual Reality (VR), the entire body is potentially involved for interaction, using such mappings with a maximum of degrees of freedom. The downside is the increase in interaction complexity, which can dramatically influence interface design.
Common interface design guidelines lack a cognitive perspective on detailed aspects of interaction patterns, although such a perspective can help make this complexity controllable and, thus, make interaction behavior predictable.
In the present study, the distinction between grounding, embodiment, and situatedness (the GES framework) is applied to organize aspects of interactions and to compare them with each other.
In two experiments, zooming into or out of emotional pictures through changes of arm span was examined in VR. Such an interaction has qualitatively different aspects: i) perceptual aspects caused by zooming are fundamental for human behavior (Grounding: closer objects appear bigger), and ii) aspects of gestures correspond to the physical characteristics of the agents (Embodiment: a small distance between the hands can signal "little" or, in contrast, "creating more detail").
The GES framework sets aspects of Grounding against aspects of Embodiment, thus allowing prediction of human behavior regarding these qualitatively different aspects.
For the zooming procedure, the study shows that Grounding can overrule Embodiment in interaction design.
Thus, we propose GES as a cognitive framework that can help to inform interaction guidelines for user interface design in VR.
Purpose
Rehabilitation professionals are faced with judging and describing the social-medicine status of their patients. They must know the core concepts of acute unfitness for work, psychological capacities, and long-term work capacity.
Acquiring and applying this knowledge requires training. The research question is whether and to what extent medical professionals' and students' knowledge changes after social medicine training.
Methods
This quasi-experimental study was carried out in the real-life context of social medicine training. Psychology students (n = 42), physicians/psychotherapists (i.e., state-licensed health professionals) (n = 44), and medical assistant professionals (n = 29) were trained. Their social medicine knowledge was measured before and after training with a 10-minute, expert-approved, content-valid knowledge questionnaire.
Three free-text questions had to be answered on the essential aspects of present and prognostic work ability and psychological capacities.
Answers were rated for correctness by two experts. Paired t tests and analyses of variance were calculated for group comparisons.
Results
All groups improved their social medicine knowledge from the pre- to the post-test. The students started with the lowest level of knowledge in the pre-test.
After training, 69% of the physicians/psychotherapists and 56.8% of the medical assistant professionals, but only 7% of the students, obtained maximum scores for naming psychological capacities.
Conclusions
Social medicine knowledge increased after a training course consisting of eight lessons. The increase was greater for medical assistant professionals and physicians/psychotherapists than for students. Social medicine training must be adjusted to the trainee groups' knowledge levels.
The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate.
It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension.
The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time.
Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention.
We reanalyzed EEG and behavioral data from a sentence processing task by Sassenhagen and Bornkessel-Schlesewsky [The P600 as a correlate of ventral attention network reorientation. Cortex, 66, A3-A20, 2015], which included sentences with morphosyntactic and semantic violations.
Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information.
To quantify the varying degrees of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task.
We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability.
These results thus suggest that the P600 component is sensitive to sustained attention whereas the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes reflect more automatic aspects of sentence comprehension.
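The sustained-attention measure in the abstract above, a moving reaction time coefficient of variation, can be sketched as follows. This is an illustrative implementation only; the window size and example values are assumptions, not parameters from the study.

```python
import numpy as np

def moving_rt_cv(rts, window=20):
    """Moving coefficient of variation (sample SD / mean) over a sliding
    window of reaction times. Higher values indicate more variable
    responding, taken here as a proxy for lower sustained attention."""
    rts = np.asarray(rts, dtype=float)
    cvs = []
    for i in range(len(rts) - window + 1):
        win = rts[i:i + window]
        cvs.append(win.std(ddof=1) / win.mean())
    return np.array(cvs)

# Example: steady responding yields a lower CV than erratic responding.
stable = moving_rt_cv([500, 510, 495, 505, 500, 498, 502, 507, 493, 501],
                      window=5)
erratic = moving_rt_cv([400, 900, 350, 800, 450, 950, 300, 850, 500, 700],
                       window=5)
print(stable.mean() < erratic.mean())  # True
```

Periods with a low CV would correspond to the high-sustained-attention segments in which the P600 amplitude was found to be larger.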