Dataset for "How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis"
M. Koch, N. Pathmanathan, D. Weiskopf, and K. Kurzhals. Dataset, (2024).
DOI: 10.18419/darus-4141
Abstract
This dataset was recorded in an AR environment comprising three physical and three virtual scene objects. Four participants were instructed to gaze at the six objects from different depth levels (50 cm, 150 cm, 300 cm) in two orders (left-to-right, right-to-left). There are seven trials per condition, which equals 42 recordings in total. More details can be found in the README.md.
Related to: Maurice Koch, Nelusa Pathmanathan, Daniel Weiskopf, and Kuno Kurzhals. 2024. How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis. In 2024 Symposium on Eye Tracking Research and Applications (ETRA ’24), June 04–07, 2024, Glasgow, United Kingdom. ACM, New York, NY, USA, 7 pages. doi: 10.1145/3649902.3653349
%0 Generic
%1 koch2024dataset
%A Koch, Maurice
%A Pathmanathan, Nelusa
%A Weiskopf, Daniel
%A Kurzhals, Kuno
%D 2024
%K darus mult ubs_10005 ubs_10017 ubs_20008 ubs_20035 ubs_30086 ubs_40132 unibibliografie
%R 10.18419/darus-4141
%T Dataset for "How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis"
%X This dataset was recorded in an AR environment comprising three physical and three virtual scene objects. Four participants were instructed to gaze at the six objects from different depth levels (50 cm, 150 cm, 300 cm) in two orders (left-to-right, right-to-left). There are seven trials per condition, which equals 42 recordings in total. More details can be found in the README.md.
@misc{koch2024dataset,
abstract = {This dataset was recorded in an AR environment comprising three physical and three virtual scene objects. Four participants were instructed to gaze at the six objects from different depth levels (50 cm, 150 cm, 300 cm) in two orders (left-to-right, right-to-left). There are seven trials per condition, which equals 42 recordings in total. More details can be found in the README.md.},
added-at = {2024-05-27T11:53:42.000+0200},
affiliation = {Koch, Maurice/Universität Stuttgart, Pathmanathan, Nelusa/Universität Stuttgart, Weiskopf, Daniel/Universität Stuttgart, Kurzhals, Kuno/Universität Stuttgart},
author = {Koch, Maurice and Pathmanathan, Nelusa and Weiskopf, Daniel and Kurzhals, Kuno},
biburl = {https://puma.ub.uni-stuttgart.de/bibtex/2010726deafdaa998088d3b79ac1dc7d9/unibiblio},
doi = {10.18419/darus-4141},
howpublished = {Dataset},
interhash = {a13e787f9a0621206adbca661b22f140},
intrahash = {010726deafdaa998088d3b79ac1dc7d9},
keywords = {darus mult ubs_10005 ubs_10017 ubs_20008 ubs_20035 ubs_30086 ubs_40132 unibibliografie},
note = {Related to: Maurice Koch, Nelusa Pathmanathan, Daniel Weiskopf, and Kuno Kurzhals. 2024. How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis. In 2024 Symposium on Eye Tracking Research and Applications (ETRA ’24), June 04–07, 2024, Glasgow, United Kingdom. ACM, New York, NY, USA, 7 pages. doi: 10.1145/3649902.3653349},
orcid-numbers = {Koch, Maurice/0000-0003-0469-8971, Pathmanathan, Nelusa/0000-0002-6848-8554, Weiskopf, Daniel/0000-0003-1174-1026, Kurzhals, Kuno/0000-0003-4919-4582},
timestamp = {2024-05-27T11:53:42.000+0200},
title = {Dataset for "How Deep Is Your Gaze? Leveraging Distance in Image-Based Gaze Analysis"},
year = 2024
}