Data for "VisRecall: Quantifying Information Visualisation Recallability via Question Answering"
Y. Wang and A. Bulling. Software, (2022).
DOI: 10.18419/darus-2826
Abstract
Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset consisting of 200 information visualisations annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types, related to titles, filtering information, finding extrema, retrieving values, and understanding visualisations. It aims to make fundamental contributions towards a new generation of methods to assist designers in optimising information visualisations. This dataset contains the stimuli and collected participant data of VisRecall. The structure of the dataset is described in the README file. If you are interested in the related code of the publication, please refer to the code repository listed under Metadata for Research Software.
Related to: Y. Wang, C. Jiao, M. Bâce and A. Bulling, "VisRecall: Quantifying Information Visualisation Recallability via Question Answering," in IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, pp. 4995-5005, 1 Dec. 2022. doi: 10.1109/TVCG.2022.3198163
%0 Generic
%1 wang2022visrecall
%A Wang, Yao
%A Bulling, Andreas
%D 2022
%K darus mult ubs_10005 ubs_10018 ubs_20008 ubs_20024 ubs_30086 ubs_30200 ubs_40336 unibibliografie
%R 10.18419/darus-2826
%T Data for "VisRecall: Quantifying Information Visualisation Recallability via Question Answering"
%X Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset consisting of 200 information visualisations annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types, related to titles, filtering information, finding extrema, retrieving values, and understanding visualisations. It aims to make fundamental contributions towards a new generation of methods to assist designers in optimising information visualisations. This dataset contains the stimuli and collected participant data of VisRecall. The structure of the dataset is described in the README file. If you are interested in the related code of the publication, please refer to the code repository listed under Metadata for Research Software.
@misc{wang2022visrecall,
abstract = {Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset consisting of 200 information visualisations annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types, related to titles, filtering information, finding extrema, retrieving values, and understanding visualisations. It aims to make fundamental contributions towards a new generation of methods to assist designers in optimising information visualisations. This dataset contains the stimuli and collected participant data of VisRecall. The structure of the dataset is described in the README file. If you are interested in the related code of the publication, please refer to the code repository listed under Metadata for Research Software.},
added-at = {2022-07-01T11:31:44.000+0200},
affiliation = {Wang, Yao/Universität Stuttgart, Bulling, Andreas/Universität Stuttgart},
author = {Wang, Yao and Bulling, Andreas},
biburl = {https://puma.ub.uni-stuttgart.de/bibtex/2d9d5069ef6e44975c8875b61715dfbd4/unibiblio},
doi = {10.18419/darus-2826},
howpublished = {Software},
interhash = {6bef77ff4a77f67ba21c51981a0d4798},
intrahash = {d9d5069ef6e44975c8875b61715dfbd4},
keywords = {darus mult ubs_10005 ubs_10018 ubs_20008 ubs_20024 ubs_30086 ubs_30200 ubs_40336 unibibliografie},
note = {Related to: Y. Wang, C. Jiao, M. Bâce and A. Bulling, "VisRecall: Quantifying Information Visualisation Recallability via Question Answering," in IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, pp. 4995-5005, 1 Dec. 2022. doi: 10.1109/TVCG.2022.3198163},
orcid-numbers = {Wang, Yao/0000-0002-3633-8623, Bulling, Andreas/0000-0001-6317-7303},
timestamp = {2024-03-25T15:15:35.000+0100},
title = {Data for "VisRecall: Quantifying Information Visualisation Recallability via Question Answering"},
year = 2022
}