Recent event-related potential (ERP) data showed that neutral objects encoded in emotional background pictures were better remembered than objects encoded in neutral contexts, when recognition memory was tested one week later. In the present study, we investigated whether this long-term memory advantage for items is also associated with correct memory for contextual source details. Furthermore, we were interested in the possibly dissociable contribution of familiarity and recollection processes (using a Remember/Know procedure). The results revealed that item memory performance was mainly driven by the subjective experience of familiarity, irrespective of whether the objects were previously encoded in emotional or neutral contexts. Correct source memory for the associated background picture, however, was driven by recollection and enhanced when the content was emotional. In ERPs, correctly recognized old objects evoked frontal ERP Old/New effects (300-500 ms), irrespective of context category. As in our previous study (Ventura-Bort et al., 2016b), retrieval for objects from emotional contexts was associated with larger parietal Old/New differences (600-800 ms), indicating stronger involvement of recollection. Thus, the results suggest a stronger contribution of recollection-based retrieval to item and contextual background source memory for neutral information associated with an emotional event.
During social interactions, individuals rapidly and automatically judge others’ trustworthiness on the basis of subtle facial cues. To investigate the behavioral and neural correlates of these judgments, we conducted 2 studies: 1 study for the construction and evaluation of a set of natural faces differing in trustworthiness (Study 1: n = 30) and another study for the investigation of event-related potentials (ERPs) in response to this set of natural faces (Study 2: n = 30). Participants of both studies provided highly reliable and nearly identical trustworthiness ratings for the selected faces, supporting the notion that the discrimination of trustworthy and untrustworthy faces depends on distinct facial cues. These cues appear to be processed in an automatic and bottom-up-driven fashion because the free viewing of these faces was sufficient to elicit trustworthiness-related differences in late positive potentials (LPPs) as indicated by larger amplitudes to untrustworthy as compared with trustworthy faces. Taken together, these findings suggest that natural faces contain distinct cues that are automatically and rapidly processed to facilitate the discrimination of untrustworthy and trustworthy faces across various contexts, presumably by enhancing the elaborative processing of untrustworthy as compared with trustworthy faces.
One of the ongoing debates about visual consciousness is whether it should be considered an all-or-none or a graded phenomenon. While there is increasing evidence for the existence of graded states of conscious awareness based on paradigms such as visual masking, only limited and mixed evidence is available for the attentional blink paradigm, specifically with regard to electrophysiological measures. The all-or-none pattern reported in some attentional blink studies might therefore have originated from specifics of the experimental design, suggesting the need to examine the generalizability of results. In the present event-related potential (ERP) study (N = 32), visual awareness of T2 face targets was assessed via subjective visibility ratings on a perceptual awareness scale in combination with ERPs time-locked to T2 onset (components P1, N1, N2, and P3). Furthermore, a classification task preceding the visibility ratings allowed us to track task performance. The behavioral results indicate a graded rather than an all-or-none pattern of visual awareness. Corresponding graded differences in the N1, N2, and P3 components were observed for the comparison of visibility levels. These findings suggest that conscious perception during the attentional blink can occur in a graded fashion.
Previous cross-modal priming studies showed that lexical decisions to words after a pronoun were facilitated when these words were semantically related to the pronoun's antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extended to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than automatic spreading activation between semantically related words.