Using Expressive Avatars to Increase Emotion Recognition: A Pilot Study
N. Hube, K. Vidackovic, and M. Sedlmair. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–7. New York, NY, USA, Association for Computing Machinery, (Apr 28, 2022)
DOI: 10.1145/3491101.3519822
Abstract
Virtual avatars are widely used for collaboration in virtual environments. Yet, these avatars often lack the expressiveness needed to convey a state of mind. Prior work has demonstrated that emotions and animated lip movement can be determined effectively by analyzing mere audio tracks of spoken words. To provide this information on a virtual avatar, we created a natural audio data set consisting of 17 audio files from which we then extracted the underlying emotion and lip movement. For a pilot study, we developed a prototypical system that extracts these visual parameters and maps them onto a virtual avatar while playing the corresponding audio file. We tested the system with 5 participants in two conditions: (i) only an audio file was played while the participant saw the virtual avatar; (ii) in addition to the audio file, the extracted facial visual parameters were displayed on the virtual avatar. Our results suggest that additional visual parameters in the avatar's face help to determine emotions. We conclude with a brief discussion of the outcomes and their implications for future work.
@inproceedings{Hube2022,
abstract = {Virtual avatars are widely used for collaboration in virtual environments. Yet, these avatars often lack the expressiveness needed to convey a state of mind. Prior work has demonstrated that emotions and animated lip movement can be determined effectively by analyzing mere audio tracks of spoken words. To provide this information on a virtual avatar, we created a natural audio data set consisting of 17 audio files from which we then extracted the underlying emotion and lip movement. For a pilot study, we developed a prototypical system that extracts these visual parameters and maps them onto a virtual avatar while playing the corresponding audio file. We tested the system with 5 participants in two conditions: (i) only an audio file was played while the participant saw the virtual avatar; (ii) in addition to the audio file, the extracted facial visual parameters were displayed on the virtual avatar. Our results suggest that additional visual parameters in the avatar's face help to determine emotions. We conclude with a brief discussion of the outcomes and their implications for future work.},
address = {New York, NY, USA},
author = {Hube, Natalie and Vidackovic, Kresimir and Sedlmair, Michael},
booktitle = {Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems},
day = 28,
doi = {10.1145/3491101.3519822},
isbn = {9781450391566},
keywords = {EXC2075 PN7 PN7-1(II)Sedlmair PN7-1.3 curated},
location = {New Orleans, LA, USA},
month = apr,
pages = {1--7},
publisher = {Association for Computing Machinery},
series = {CHI EA '22},
title = {Using Expressive Avatars to Increase Emotion Recognition: A Pilot Study},
url = {https://doi.org/10.1145/3491101.3519822},
year = 2022
}