Peripersonal space is the space surrounding our body, where multisensory integration of stimuli and action execution take place. The size of peripersonal space is flexible and subject to change by various personal and situational factors. The dynamic representation of our peripersonal space modulates our spatial behaviors towards other individuals. During the COVID-19 pandemic, this spatial behavior was modified by two further factors: social distancing and wearing a face mask. Evidence from offline and online studies on the impact of a face mask on pro-social behavior is mixed. In an attempt to clarify the role of face masks as pro-social or anti-social signals, 235 observers participated in the present online study. They watched pictures of two models standing at three different distances from each other (50, 90 and 150 cm), who were either wearing a face mask or not, and who were either interacting by initiating a handshake or just standing still. The observers’ task was to classify the models by gender. Our results show that observers react fastest, and therefore show the least avoidance, at the two shorter distances (50 and 90 cm), but only when the models wear a face mask and do not interact. Thus, our results document both pro- and anti-social consequences of face masks as a result of the complex interplay between social distancing and interactive behavior. Practical implications of these findings are discussed.
The Human Takes It All
(2020)
Background: The increasing involvement of social robots in human lives raises the question as to how humans perceive social robots. Little is known about human perception of synthesized voices.
Aim: To investigate which synthesized voice parameters predict the speaker's eeriness and voice likability; to determine if individual listener characteristics (e.g., personality, attitude toward robots, age) influence synthesized voice evaluations; and to explore which paralinguistic features subjectively distinguish humans from robots/artificial agents.
Methods: 95 adults (62 females) listened to randomly presented audio clips of three categories: synthesized (Watson, IBM), humanoid (robot Sophia, Hanson Robotics), and human voices (five clips per category). Voices were rated on intelligibility, prosody, trustworthiness, confidence, enthusiasm, pleasantness, human-likeness, likability, and naturalness. Speakers were rated on appeal, credibility, human-likeness, and eeriness. Participants' personality traits, attitudes toward robots, and demographics were obtained.
Results: The human voice and human speaker characteristics received reliably higher scores on all dimensions except for eeriness. Synthesized voice ratings were positively related to participants' agreeableness and neuroticism. Females rated synthesized voices more positively on most dimensions. Surprisingly, interest in social robots and attitudes toward robots played almost no role in voice evaluation. Contrary to uncanny-valley expectations, the higher the human-likeness ratings of both the voice and the speaker characteristics, the less eerie they seemed to participants. Moreover, the more human-like the speaker's voice, the more it was liked by the participants; this latter point was only applicable to one of the synthesized voices. Finally, pleasantness and trustworthiness of the synthesized voice predicted the likability of the speaker's voice. Qualitative content analysis identified intonation, sound, emotion, and imageability/embodiment as diagnostic features.
Discussion: Humans clearly prefer human voices, but manipulating diagnostic speech features might increase acceptance of synthesized voices and thereby support human-robot interaction. There is limited evidence that human-likeness of a voice is negatively linked to the perceived eeriness of the speaker.
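The final regression result in the abstract above (pleasantness and trustworthiness of the synthesized voice predicting the likability of the speaker's voice) can be sketched as an ordinary least-squares multiple regression. The data below are synthetic and the 7-point rating scales and coefficients are illustrative assumptions, not the study's values:

```python
import numpy as np

# Synthetic illustration (not the study's data): predict voice likability
# from pleasantness and trustworthiness ratings via ordinary least squares.
rng = np.random.default_rng(0)
n = 95  # sample size matches the abstract; the ratings themselves are simulated

pleasantness = rng.uniform(1, 7, n)      # 7-point rating scales (assumed)
trustworthiness = rng.uniform(1, 7, n)
# Simulate likability as a noisy linear combination of the two predictors.
likability = 0.5 * pleasantness + 0.3 * trustworthiness + rng.normal(0, 0.5, n)

# Design matrix with an intercept column; solve for the coefficients.
X = np.column_stack([np.ones(n), pleasantness, trustworthiness])
beta, *_ = np.linalg.lstsq(X, likability, rcond=None)

# Proportion of variance explained by the two predictors.
pred = X @ beta
ss_res = np.sum((likability - pred) ** 2)
ss_tot = np.sum((likability - likability.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(np.round(beta, 2), round(r_squared, 2))
```

With enough observations, the fitted coefficients recover the simulated weights, mirroring how the study would read off which rating dimensions predict likability.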
The pathophysiology of Parkinson’s disease (PD) is still not fully understood. Several investigations show changed oscillatory behaviour of brain circuits or changes in the variability of, e.g., gait parameters in PD. The aim of this study was to investigate whether the motor output differs between PD patients and healthy controls. To this end, patients without tremor were investigated in the medication-off state while performing a special bilateral isometric motor task. Force and accelerations (ACC) were recorded, as well as the mechanomyography (MMG) of the biceps brachii, the brachioradialis, and the pectoralis major muscles, using piezoelectric sensors during the bilateral motor task at 60% of the maximal isometric contraction. The frequency, a specific power ratio, the amplitude variation, and the slope of amplitudes were analysed. The results indicate that the oscillatory behaviour of motor output in PD patients without tremor deviates from that of controls: the 95% confidence intervals of the power ratio and of the amplitude variation of all signals are disjoint between PD patients and controls and show significant differences in group comparisons (power ratio: p = 0.000–0.004, r = 0.441–0.579; amplitude variation: p = 0.000–0.001, r = 0.37–0.67). The mean frequency shows a significant difference for ACC (p = 0.009, r = 0.43), but not for MMG. It remains open whether this muscular output reflects changes in brain circuits and whether the results are reproducible and specific for PD.
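The group comparisons above pair p-values with an effect size r. One common convention for nonparametric group comparisons (an assumption here; the paper may define r differently) derives r from the Mann-Whitney Z statistic as r = |Z|/√N. A numpy-only sketch on synthetic data:

```python
import numpy as np

def mann_whitney_effect_r(group_a, group_b):
    """Effect size r = |Z| / sqrt(N) from a Mann-Whitney U comparison,
    using the normal approximation for Z (assumes continuous data, no ties)."""
    data = np.concatenate([group_a, group_b])
    n1, n2 = len(group_a), len(group_b)
    # 1-based ranks of the pooled sample.
    ranks = np.empty(data.size)
    ranks[np.argsort(data)] = np.arange(1, data.size + 1)
    # U statistic for group_a from its rank sum.
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return abs(z) / np.sqrt(n1 + n2)

# Synthetic illustration (not the study's recordings): an amplitude-variation
# measure for hypothetical PD patients vs. controls.
rng = np.random.default_rng(1)
pd_group = rng.normal(1.0, 0.2, 30)
control_group = rng.normal(0.6, 0.2, 30)
r = mann_whitney_effect_r(pd_group, control_group)
print(round(r, 2))
```

Clearly separated groups yield r values in the "large effect" range, comparable in magnitude to the r = 0.37–0.67 the abstract reports.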
In daily life, we automatically form impressions of other individuals on the basis of subtle facial features that convey trustworthiness. Because these face-based judgements influence current and future social interactions, we investigated how the perceived trustworthiness of faces affects long-term memory, using event-related potentials (ERPs). In the current study, participants incidentally viewed 60 neutral faces differing in trustworthiness and, one week later, performed a surprise recognition memory task in which the same old faces were presented intermixed with novel ones. We found that after one week untrustworthy faces were better recognized than trustworthy faces, and that untrustworthy faces prompted enhanced early (350–550 ms) frontal ERP old/new differences (larger positivity for correctly remembered old faces compared to novel ones) during recognition. Our findings point toward an enhanced long-lasting, likely familiarity-based, memory for untrustworthy faces. Even though trust judgements about a person are not necessarily accurate, fast access to memories predicting potential harm may be important to guide social behaviour in daily life.
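The ERP old/new effect described above is conventionally quantified as the difference in mean amplitude between correctly remembered old items and new items within the reported time window. A self-contained sketch on simulated single-trial data (sampling rate, epoch limits, and the injected effect size are illustrative assumptions, not the study's recordings):

```python
import numpy as np

# Simulated frontal-channel epochs: 60 trials per condition, -200 to 800 ms.
sfreq = 500                               # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)   # time axis in seconds
n_trials = 60

rng = np.random.default_rng(2)
old_epochs = rng.normal(0.0, 1.0, (n_trials, times.size))  # "old face" trials
new_epochs = rng.normal(0.0, 1.0, (n_trials, times.size))  # "new face" trials

# Inject a larger positivity for old faces in the 350-550 ms window,
# mimicking the early frontal old/new effect (arbitrary 2 µV effect).
window = (times >= 0.35) & (times <= 0.55)
old_epochs[:, window] += 2.0

# The old/new effect: mean amplitude difference inside the window.
old_new_effect = old_epochs[:, window].mean() - new_epochs[:, window].mean()
print(round(old_new_effect, 2))
```

Averaging over trials and the time window recovers the injected positivity, which is the quantity that would be submitted to statistical comparison across conditions.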