Data for "VisRecall: Quantifying Information Visualisation Recallability via Question Answering"

Y. Wang, C. Jiao, M. Bâce, and A. Bulling. Software, 2022. Related to: Y. Wang, C. Jiao, M. Bâce and A. Bulling, "VisRecall: Quantifying Information Visualisation Recallability via Question Answering," in IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, pp. 4995-5005, 1 Dec. 2022. doi: 10.1109/TVCG.2022.3198163.
DOI: 10.18419/darus-2826

Abstract

Despite its importance for assessing the effectiveness of communicating information visually, the fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset of 200 information visualisations annotated with human recallability scores crowd-sourced from 305 participants, obtained from 1,000 questions of five types: recalling titles, filtering information, finding extrema, retrieving values, and understanding visualisations. VisRecall aims to make fundamental contributions towards a new generation of methods that assist designers in optimising information visualisations.

This dataset contains the stimuli and the collected participant data of VisRecall. The structure of the dataset is described in the README file. If you are interested in the code related to the publication, please refer to the code repository linked in Metadata for Research Software.
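As an illustration of how per-visualisation recallability could be aggregated from question-answering responses, here is a minimal Python sketch. It computes a naive mean-accuracy score per visualisation and question type; the file name and record fields (image, question_type, correct) are hypothetical, not the dataset's actual layout or the paper's scoring method, so consult the README for the real structure.

    import json
    from collections import defaultdict
    from pathlib import Path

    # Hypothetical layout: one JSON record per answered question, e.g.
    # {"image": "vis_042.png", "question_type": "retrieve-value", "correct": true}
    # See the dataset README for the actual files and field names.
    DATA_FILE = Path("visrecall_responses.json")  # placeholder path

    def recallability_scores(records):
        """Aggregate answer correctness into a simple mean-accuracy
        score per visualisation, split by question type."""
        # image -> question type -> [num correct, num answered]
        totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))
        for r in records:
            entry = totals[r["image"]][r["question_type"]]
            entry[0] += int(r["correct"])
            entry[1] += 1
        return {
            img: {qtype: c / t for qtype, (c, t) in per_type.items()}
            for img, per_type in totals.items()
        }

    if __name__ == "__main__":
        records = json.loads(DATA_FILE.read_text())
        for img, per_type in sorted(recallability_scores(records).items()):
            print(img, per_type)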
