Code for Training and Testing CNVVE. R. Hedeshy, R. Menges, and S. Staab. Software, (2024).
DOI: 10.18419/darus-3896
Abstract
This dataset consists of the files used for training and testing on the CNVVE dataset, which comprises 950 audio samples encompassing six distinct classes of voice expressions. These expressions were collected from 42 generous individuals who donated their voice recordings for the study. By making the dataset publicly accessible, we hope to facilitate further research and development of computational methods for non-verbal voice-based interactions. For more information, please check the paper or feel free to contact the authors with any inquiries.
Related to: CNVVE: Dataset and Benchmark for Classifying Non-verbal Voice Expressions. R. Hedeshy, R. Menges, and S. Staab. Interspeech 2023, August 20-24, 2023. Dublin, Ireland, (2023). doi: 10.21437/Interspeech.2023-201.
%0 Generic
%1 hedeshy2024training
%A Hedeshy, Ramin
%A Menges, Raphael
%A Staab, Steffen
%D 2024
%K darus ubs_10005 ubs_20008 ubs_30082 ubs_40488 unibibliografie
%R 10.18419/darus-3896
%T Code for Training and Testing CNVVE
%X This dataset consists of the files used for training and testing on the CNVVE dataset, which comprises 950 audio samples encompassing six distinct classes of voice expressions. These expressions were collected from 42 generous individuals who donated their voice recordings for the study. By making the dataset publicly accessible, we hope to facilitate further research and development of computational methods for non-verbal voice-based interactions. For more information, please check the paper or feel free to contact the authors with any inquiries.
@misc{hedeshy2024training,
abstract = {This dataset consists of the files used for training and testing on the CNVVE dataset, which comprises 950 audio samples encompassing six distinct classes of voice expressions. These expressions were collected from 42 generous individuals who donated their voice recordings for the study. By making the dataset publicly accessible, we hope to facilitate further research and development of computational methods for non-verbal voice-based interactions. For more information, please check the paper or feel free to contact the authors with any inquiries.},
added-at = {2024-02-19T15:14:11.000+0100},
affiliation = {Hedeshy, Ramin/Universität Stuttgart, Menges, Raphael/Semanux, Staab, Steffen/Universität Stuttgart},
author = {Hedeshy, Ramin and Menges, Raphael and Staab, Steffen},
biburl = {https://puma.ub.uni-stuttgart.de/bibtex/2f4167e1a68cfba051a9a75f3bcc49ca7/unibiblio},
doi = {10.18419/darus-3896},
howpublished = {Software},
interhash = {13fd3f9153ee910aca1601c5b2fc004c},
intrahash = {f4167e1a68cfba051a9a75f3bcc49ca7},
keywords = {darus ubs_10005 ubs_20008 ubs_30082 ubs_40488 unibibliografie},
note = {Related to: CNVVE: Dataset and Benchmark for Classifying Non-verbal Voice Expressions. R. Hedeshy, R. Menges, and S. Staab. Interspeech 2023, August 20-24, 2023. Dublin, Ireland, (2023). doi: 10.21437/Interspeech.2023-201},
orcid-numbers = {Hedeshy, Ramin/0000-0001-5854-4033, Menges, Raphael/0000-0002-2112-7065, Staab, Steffen/0000-0002-0780-4154},
timestamp = {2024-02-19T15:14:11.000+0100},
title = {Code for Training and Testing CNVVE},
year = 2024
}