"BreaThink"
(2021)
Cognition is shaped by signals from outside and within the body. Following recent evidence of interoceptive signals modulating higher-level cognition, we examined whether breathing changes the production and perception of quantities. In Experiment 1, 22 adults verbally produced on average larger random numbers after inhaling than after exhaling. In Experiment 2, 24 further adults estimated the numerosity of dot patterns that were briefly shown after either inhaling or exhaling. Again, we obtained on average larger responses following inhalation than exhalation. These converging results extend models of situated cognition according to which higher-level cognition is sensitive to transient interoceptive states.
Interoception is an often neglected but crucial aspect of the human minimal self. In this perspective, we extend the embodiment account of interoceptive inference to explain the development of the minimal self in humans. To do so, we first provide a comparative overview of the central accounts addressing the link between interoception and the minimal self. Grounding our arguments on the embodiment framework, we propose a bidirectional relationship between motor and interoceptive states, which jointly contribute to the development of the minimal self. We present empirical findings on interoception in development and discuss the role of interoception in the development of the minimal self. Moreover, we make theoretical predictions that can be tested in future experiments. Our goal is to provide a comprehensive view on the mechanisms underlying the minimal self by explaining the role of interoception in the development of the minimal self.
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward, the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention.
Finger-based representation of numbers is a high-level cognitive strategy to assist numerical and arithmetic processing in children and adults. It is unclear whether this paradigm builds on simple perceptual features or comprises several attributes through embodiment. Here we describe the development and initial testing of an experimental setup to study embodiment during a finger-based numerical task using Virtual Reality (VR) and a low-cost tactile stimulator that is easy to build. Using VR allows us to create new ways to study finger-based numerical representation using a virtual hand that can be manipulated in ways our hand cannot, such as decoupling tactile and visual stimuli. The goal is to present a new methodology that can allow researchers to study embodiment through this new approach, maybe shedding new light on the cognitive strategy behind the finger-based representation of numbers. In this case, a critical methodological requirement is delivering precisely targeted sensory stimuli to specific effectors while simultaneously recording their behavior and engaging the participant in a simulated experience. We tested the device's capability by stimulating users in different experimental configurations. Results indicate that our device delivers reliable tactile stimulation to all fingers of a participant's hand without losing motion tracking quality during an ongoing task. This is reflected by an accuracy of over 95% in participants detecting stimulation of a single finger or multiple fingers in sequential stimulation as indicated by experiments with sixteen participants. We discuss possible application scenarios, explain how to apply our methodology to study the embodiment of finger-based numerical representations and other high-level cognitive functions, and discuss potential further developments of the device based on the data obtained in our testing.
Commentary
(2020)
Commentary
(2015)
Commentary
(2020)
Editorial: Reaching to Grasp Cognition: Analyzing Motor Behavior to Investigate Social Interactions
(2018)
Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others' emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e., from sad to happy, or from happy to sad). Observers held a pen in their own mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions, we replicated a congruency effect: happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.
Positive objects or actions are associated with physical highness, whereas negative objects or actions are related to physical lowness. Previous research suggests that the metaphorical connection ("good is up" or "bad is down") between spatial experience and evaluation of objects is grounded in actual experience with the body. Prior studies investigated effects of spatial metaphors with respect to verticality of either static objects or self-performed actions. By presenting videos of object placements, the current three experiments combined vertically-located stimuli with observation of vertically-directed actions. As expected, participants' ratings of emotionally-neutral objects were systematically influenced by the observed vertical positioning; that is, ratings were more positive for objects that were observed being placed up as compared to down. Moreover, effects were slightly more pronounced for "bad is down," because only the observed downward action, but not the upward one, led to different ratings as compared to a medium-positioned action. Last, some ratings were even affected by observing only the upward/downward action, without seeing the final vertical placement of the object. Thus, both a combination of observing a vertically-directed action and seeing a vertically-located object, and observing a vertically-directed action alone, affected participants' evaluation of the emotional valence of the involved object. The present findings expand the relevance of spatial metaphors to action observation, thereby giving new impetus to embodied-cognition research.