In two experiments, we compared the dynamics of corticospinal excitability when processing visually or linguistically presented tool-oriented hand actions in native speakers and sequential bilinguals. In a third experiment, we used the same procedure to test non-motor, low-level stimuli, i.e., scrambled images and pseudo-words.
Stimuli were presented in sequence: pictures (a tool followed by a tool-oriented hand action, or their scrambled counterparts) and words (a tool noun followed by a tool-action verb, or pseudo-words). Experiment 1 presented German linguistic stimuli to native speakers, while Experiment 2 presented English stimuli to non-native speakers. Experiment 3 tested Italian native speakers. Single-pulse transcranial magnetic stimulation (spTMS) was applied to the left motor cortex at five different timings: baseline, 200 ms after tool/noun onset, and 150, 350, and 500 ms after hand/verb onset, with motor-evoked potentials (MEPs) recorded from the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles.
We report strong similarities in the dynamics of corticospinal excitability across the visual and linguistic modalities. MEP suppression started as early as 150 ms and lasted for the duration of stimulus presentation (500 ms). Moreover, we show that this modulation is absent for stimuli with no motor content. Overall, our study supports the notion of a core, overarching system of action semantics shared by different modalities.
The Human Takes It All
(2020)
Background: The increasing involvement of social robots in human lives raises the question of how humans perceive social robots. Little is known about human perception of synthesized voices.
Aim: To investigate which synthesized voice parameters predict the speaker's eeriness and voice likability; to determine if individual listener characteristics (e.g., personality, attitude toward robots, age) influence synthesized voice evaluations; and to explore which paralinguistic features subjectively distinguish humans from robots/artificial agents.
Methods: 95 adults (62 females) listened to randomly presented audio-clips of three categories: synthesized (Watson, IBM), humanoid (robot Sophia, Hanson Robotics), and human voices (five clips/category). Voices were rated on intelligibility, prosody, trustworthiness, confidence, enthusiasm, pleasantness, human-likeness, likability, and naturalness. Speakers were rated on appeal, credibility, human-likeness, and eeriness. Participants' personality traits, attitudes to robots, and demographics were obtained.
Results: The human voice and human speaker characteristics received reliably higher scores on all dimensions except eeriness. Synthesized voice ratings were positively related to participants' agreeableness and neuroticism. Females rated synthesized voices more positively on most dimensions. Surprisingly, interest in social robots and attitudes toward robots played almost no role in voice evaluation. Contrary to the expectations of an uncanny valley, higher ratings of human-likeness for both the voice and the speaker characteristics were associated with lower perceived eeriness. Moreover, the more human-like a speaker's voice was, the more it was liked by the participants; however, this latter finding applied to only one of the synthesized voices. Finally, pleasantness and trustworthiness of the synthesized voice predicted the likability of the speaker's voice. Qualitative content analysis identified intonation, sound, emotion, and imageability/embodiment as diagnostic features.
Discussion: Humans clearly prefer human voices, but manipulating diagnostic speech features might increase acceptance of synthesized voices and thereby support human-robot interaction. There is limited evidence that human-likeness of a voice is negatively linked to the perceived eeriness of the speaker.