Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions, depend on the concurrent task. Forty healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: Auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g., the left posterior parietal cortex, PPC) exhibited a task-dependent response to both sound and action features. We propose that these regions are "multimodal", rather than "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated for by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories", which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose that conceptual processing relies on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, but also incorporates the task dependency of conceptually related brain activity and connectivity, thereby resolving several open issues concerning the neural basis of conceptual knowledge retrieval.
Making sense of the world
(2020)
For human infants, the first years after birth are a period of intense exploration: getting to understand their own competencies in interaction with a complex physical and social environment. In contemporary neuroscience, the predictive-processing framework has been proposed as a general working principle of the human brain: the optimization of predictions about the consequences of one's own actions and about sensory inputs from the environment. However, the predictive-processing framework has rarely been applied to infancy research. We argue that a predictive-processing framework may provide a unifying perspective on several phenomena of infant development and learning that may seem unrelated at first sight. These phenomena include statistical learning principles, infants' motor and proprioceptive learning, and infants' basic understanding of their physical and social environment. We discuss how a predictive-processing perspective can advance the understanding of infants' early learning processes in theory, research, and application.