PUMA publications for /tag/sfbtrr161 visus:netzelrf from:mueller
Feed: https://puma.ub.uni-stuttgart.de/tag/sfbtrr161%20visus:netzelrf%20from:mueller (last updated 2024-03-29T13:05:02+01:00)

Interactive Scanpath-oriented Annotation of Fixations
R. Netzel, M. Burch, and D. Weiskopf.
Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), pages 183-187, 2016.
https://puma.ub.uni-stuttgart.de/bibtex/2b5f7e1519c494f1845273a8deb09e0cb/mueller
Abstract: In this short paper, we present a lightweight application for the interactive annotation of eye tracking data for both static and dynamic stimuli. The main functionality is the annotation of fixations, taking into account the scanpath and the stimulus. Our visual interface allows the annotator to work through a sequence of fixations while showing the context of the scanpath in the form of previous and subsequent fixations. The context of the stimulus is included as a visual overlay. Our application supports automatic initial labeling according to areas of interest (AOIs), but does not depend on AOIs. The software is easily configurable, supports user-defined annotation schemes, and fits into existing workflows of eye tracking experiments and their evaluation by providing import and export functionality for data files.
The Challenges of Designing Metro Maps
M. Burch, R. Woods, R. Netzel, and D. Weiskopf.
Proceedings of the Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), Volume 2: IVAPP, SciTePress, 2016.
https://puma.ub.uni-stuttgart.de/bibtex/2878e54e53540f8b5cb973e55d068ba23/mueller
Abstract: Metro maps can be regarded as a particular kind of information visualization. The goal is to produce readable and effective map designs. In this paper, we combine the expertise of design experts and visualization researchers to achieve this goal. Aesthetics should play a major role, as the designer's intention is to make the maps attractive to the human viewer so that the designs are used as efficiently as possible. The designs should invoke accurate actions by the user; in the case of a metro map, the user would be making journeys. We provide two views on metro map designs: one from a designer's point of view and one from a visualization expert's point of view. The focus of this work is to find a combination of both worlds from which both the designer and the visualizer can benefit. To reach this goal, we first describe the designer's work when designing metro maps; we then look at how a visualizer measures performance from an end-user perspective by tracking people's eyes as they answer a route-finding task with the previously designed maps.
Generative Data Models for Validation and Evaluation of Visualization Techniques
C. Schulz, A. Nocaj, M. El-Assady, S. Frey, M. Hlawatsch, M. Hund, G. Karch, R. Netzel, C. Schätzle, M. Butt, and 4 other author(s).
Proceedings of the Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV), pages 112-124, ACM, 2016.
https://puma.ub.uni-stuttgart.de/bibtex/281f59bfcd0ec63db13bfc6ce23757e2b/mueller
Abstract: We argue that there is a need for substantially more research on the use of generative data models in the validation and evaluation of visualization techniques. For example, user studies will require the display of representative and unconfounded visual stimuli, while algorithms will need functional coverage and assessable benchmarks. However, data is often collected in a semi-automatic fashion or entirely hand-picked, which obscures the view of generality, impairs availability, and potentially violates privacy. Some sub-domains of visualization use synthetic data in the sense of generative data models, whereas others work with real-world-based data sets and simulations. Depending on the visualization domain, many generative data models are "side projects" created as part of the ad-hoc validation of a technique paper and are thus neither reusable nor general-purpose. We review existing work on popular data collections and generative data models in visualization to discuss the opportunities and consequences for technique validation, evaluation, and experiment design. We distill handling and future directions, and discuss how we can engineer generative data models and how visualization research could benefit from more and better use of generative data models.
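The kind of generative data model argued for above can be illustrated with a minimal sketch (an illustration under assumed parameters, not code from the paper): a parameterized generator that samples 2D clustered points with known ground-truth labels, so that stimuli for a user study or an algorithm benchmark can be varied systematically and reproduced, instead of being hand-picked.

```python
import random

def clustered_points(n_clusters=3, n_per_cluster=100, spread=0.05, seed=42):
    """Generative data model: 2D Gaussian clusters with known ground truth.

    Returns a list of (x, y, cluster_id) tuples. Because the generator's
    parameters (cluster count, spread, seed) are explicit, the resulting
    stimuli are unconfounded, reproducible, and systematically variable.
    """
    rng = random.Random(seed)
    points = []
    for c in range(n_clusters):
        cx, cy = rng.random(), rng.random()  # cluster center in [0, 1)^2
        for _ in range(n_per_cluster):
            points.append((rng.gauss(cx, spread), rng.gauss(cy, spread), c))
    return points

points = clustered_points()  # 300 labeled points from 3 clusters
```

Because the ground truth is known by construction, a technique's output (e.g., perceived cluster count in a scatterplot study) can be scored against it directly.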
Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data
R. Netzel and D. Weiskopf.
Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), pages 21-25, 2016.
https://puma.ub.uni-stuttgart.de/bibtex/2b081a6756fadaf1bfe7beedc7b004add/mueller
Abstract: Attention maps, often in the form of heatmaps, are a common visualization approach for obtaining an overview of the spatial distribution of gaze data from eye tracking experiments. However, attention maps are not designed to let us easily analyze the temporal information of gaze data: they either ignore temporal information completely by aggregating over time, or they use animation to build a sequence of attention maps. To overcome this issue, we introduce Hilbert attention maps: a 2D static visualization of the spatiotemporal distribution of gaze points. The visualization is based on the projection of the 2D spatial domain onto a space-filling Hilbert curve, which is used as one axis of our new attention map; the other axis represents time. We visualize Hilbert attention maps either as dot displays or heatmaps. This 2D visualization works for data from individual participants or large groups of participants, supports static and dynamic stimuli alike, and does not require any preprocessing or definition of areas of interest. We demonstrate how our visualization allows analysts to identify spatiotemporal patterns of visual reading behavior, including attentional synchrony and smooth pursuit.
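The spatial projection at the core of this technique can be sketched with the classic iterative conversion from a 2D cell coordinate to its distance along a Hilbert curve (the standard algorithm, not code from the paper); binning gaze points by this index gives the spatial axis of the attention map:

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) on an n x n grid (n a power of two) to its distance
    along the Hilbert curve. Nearby cells in 2D tend to get nearby indices,
    which is what makes the 1D spatial axis of the attention map readable."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so lower-order bits index the sub-cell
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# On a 2x2 grid the curve visits (0,0), (0,1), (1,1), (1,0) in order:
order = [hilbert_index(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]]
# order == [0, 1, 2, 3]
```

Gaze coordinates would first be quantized to an n × n grid (n assumed to be a power of two); a 2D histogram over (Hilbert index, time bin) then yields the heatmap variant described above.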
Multi-Similarity Matrices of Eye Movement Data
A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller.
Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS), pages 26-30, 2016.
https://puma.ub.uni-stuttgart.de/bibtex/2854ae7053fc5b43be4b344a5ef7f2182/mueller
Weiskopf</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Klaus Mueller" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/4"><span itemprop="name">K. Mueller</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS)</span>, </em></span><em>page <span itemprop="pagination">26-30</span>. </em>(<em><span>2016<meta content="2016" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:34:23 CEST 2020Proceedings of the Symposium on Eye Tracking and Visualization (ETVIS)26-30Multi-Similarity Matrices of Eye Movement Data2016B01 visus:weiskopf sfbtrr161 2016 from:mueller vis(us) visus:burchml visus:netzelrf visus Multi-Similarity Matrices of Eye Movement DataVisualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plantshttps://puma.ub.uni-stuttgart.de/bibtex/2c9736615b908e611da90065e63ae4f23/muellermueller2020-10-09T12:34:20+02:002017 B01 from:mueller sfbtrr161 vis(us) visus visus:burchml visus:netzelrf visus:rodrigns visus:weiskopf <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Nils Rodrigues" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/0"><span itemprop="name">N. Rodrigues</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/1"><span itemprop="name">R. 
Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Kazi Riaz Ullah" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/2"><span itemprop="name">K. Ullah</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/3"><span itemprop="name">M. Burch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Alexander Schultz" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/4"><span itemprop="name">A. Schultz</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Bruno Burger" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/5"><span itemprop="name">B. Burger</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/6"><span itemprop="name">D. Weiskopf</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI)</span>, </em></span><em>page <span itemprop="pagination">37-44</span>. 
</em>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:34:20 CEST 2020Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI)37-44Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants20172017 B01 from:mueller sfbtrr161 vis(us) visus visus:burchml visus:netzelrf visus:rodrigns visus:weiskopf Visualizing time series data with a spatial context is a problem that arises increasingly often, since small and lightweight GPS devices allow us to enrich the time series data with position information. One example is the visualization of the energy output of power plants. We present a web-based application that aims to provide information about the energy production of a specified region, along with location information about the power plants. The application is intended to be used as a solid data basis for political discussions, nudging, and storytelling about the German energy transition to renewables, called "Energiewende". It was therefore designed to be intuitive, easy to use, and to provide information for a broad spectrum of users who do not need any domain-specific knowledge. Users are able to select different categories of power plants and look up their positions on an overview map. Glyphs indicate their exact positions, and a selection mechanism allows users to compare the power output on different time scales using stacked area charts or ThemeRivers. 
As an evaluation of the application, we have collected web access statistics and conducted an online survey with respect to the intuitiveness, usability, and informativeness.Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power PlantsVisualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plantshttps://puma.ub.uni-stuttgart.de/bibtex/2c9736615b908e611da90065e63ae4f23/visusvisus2020-10-09T12:34:20+02:00B01 visus:weiskopf sfbtrr161 2017 from:mueller vis(us) visus:burchml visus:netzelrf visus:rodrigns visus <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Nils Rodrigues" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/0"><span itemprop="name">N. Rodrigues</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/1"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Kazi Riaz Ullah" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/2"><span itemprop="name">K. Ullah</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/3"><span itemprop="name">M. Burch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Alexander Schultz" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/4"><span itemprop="name">A. 
Schultz</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Bruno Burger" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/5"><span itemprop="name">B. Burger</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/6"><span itemprop="name">D. Weiskopf</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI)</span>, </em></span><em>page <span itemprop="pagination">37-44</span>. </em>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:34:20 CEST 2020Proceedings of the ACM Symposium on Visual Information Communication and Interaction (VINCI)37-44Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants2017B01 visus:weiskopf sfbtrr161 2017 from:mueller vis(us) visus:burchml visus:netzelrf visus:rodrigns visus Visualizing time series data with a spatial context is a problem that arises increasingly often, since small and lightweight GPS devices allow us to enrich the time series data with position information. One example is the visualization of the energy output of power plants. We present a web-based application that aims to provide information about the energy production of a specified region, along with location information about the power plants. The application is intended to be used as a solid data basis for political discussions, nudging, and storytelling about the German energy transition to renewables, called "Energiewende". 
It was therefore designed to be intuitive, easy to use, and to provide information for a broad spectrum of users who do not need any domain-specific knowledge. Users are able to select different categories of power plants and look up their positions on an overview map. Glyphs indicate their exact positions, and a selection mechanism allows users to compare the power output on different time scales using stacked area charts or ThemeRivers. As an evaluation of the application, we have collected web access statistics and conducted an online survey with respect to the intuitiveness, usability, and informativeness.Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power PlantsSpectral Visualization Sharpening.https://puma.ub.uni-stuttgart.de/bibtex/2e700867e6efea9fd5ba03159c1d2edda/visusvisus2020-10-09T12:34:19+02:00from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 from:mueller visus:netzelrf 2020 visus <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Liang Zhou" itemprop="url" href="/person/1f50df07fa55824833772729dad70522c/author/0"><span itemprop="name">L. Zhou</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/1f50df07fa55824833772729dad70522c/author/1"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/1f50df07fa55824833772729dad70522c/author/2"><span itemprop="name">D. Weiskopf</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Chris R. Johnson" itemprop="url" href="/person/1f50df07fa55824833772729dad70522c/author/3"><span itemprop="name">C. Johnson</span></a></span></span>. 
</span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the ACM Symposium on Applied Perception (SAP)</span>, </em></span><em>page <span itemprop="pagination">18:1-18:9</span>. </em><em><span itemprop="publisher">ACM</span>, </em>(<em><span>2019<meta content="2019" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:34:19 CEST 2020Proceedings of the ACM Symposium on Applied Perception (SAP)conf/apgv/201918:1-18:9Spectral Visualization Sharpening.2019from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 from:mueller visus:netzelrf 2020 visus In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.Spectral Visualization Sharpening.User Performance and Reading Strategies for Metro Maps: An Eye Tracking Studyhttps://puma.ub.uni-stuttgart.de/bibtex/275dbc1a909d0dde0cb1844de3434db6c/visusvisus2020-10-09T12:31:48+02:00from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 2016 from:mueller visus:netzelrf visus <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/1d85bdd5274d59a7e6338b948eeae9b12/author/0"><span itemprop="name">R. 
Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/1d85bdd5274d59a7e6338b948eeae9b12/author/1"><span itemprop="name">M. Burch</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/1d85bdd5274d59a7e6338b948eeae9b12/author/2"><span itemprop="name">D. Weiskopf</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/PublicationIssue" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="journal">Spatial Cognition and Computation, Special Issue: Eye Tracking for Spatial Research</span>, </em> </span>(<em><span>2016<meta content="2016" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:31:48 CEST 2020Spatial Cognition and Computation, Special Issue: Eye Tracking for Spatial ResearchUser Performance and Reading Strategies for Metro Maps: An Eye Tracking Study2016from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 2016 from:mueller visus:netzelrf visus We conducted a controlled empirical eye tracking study with 40 participants using schematic metro maps. The study focused on two aspects: determining different reading strategies and assessing user performance. We considered the following factors: color encoding (color vs. gray-scale), map complexity (three levels), and task difficulty (three levels). There was one type of task: find a route from a start to a target location and state the number of transfers that have to be performed. To identify reading strategies, we annotated fixations of scanpaths, computed a transition matrix of each annotated scanpath, and used these matrices as input to cluster scanpaths into groups of similar behavior. 
We show how these reading strategies relate to the geodesic structure of the scanpaths' fixations projected onto the geodesic line that connects start and target locations. The analysis of the eye tracking data is complemented by statistical inference working on two eye tracking metrics (average fixation duration and saccade length). User performance was evaluated with a statistical analysis of task correctness and completion time. Our study shows that the design factors have a significant impact on user task performance. Also, we were able to identify typical reading strategies like directly finding a path from start to target location. Often, participants check the correctness of their result multiple times by moving back and forth between start and target. Our findings also indicate that the choice of reading strategies does not depend on whether color or gray-scale encoding is used.User Performance and Reading Strategies for Metro Maps: An Eye Tracking StudyComparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinateshttps://puma.ub.uni-stuttgart.de/bibtex/2a0ffef079c694b449e8648b45bd85c84/visusvisus2020-10-09T12:31:48+02:00from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 2017 from:mueller visus:netzelrf visus <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/0"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Jenny Vuong" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/1"><span itemprop="name">J. 
Vuong</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Ulrich Engelke" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/2"><span itemprop="name">U. Engelke</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Seán I. O'Donoghue" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/3"><span itemprop="name">S. O'Donoghue</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/4"><span itemprop="name">D. Weiskopf</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Julian Heinrich" itemprop="url" href="/person/1bc867ff88364ed0033badf3d2c376c95/author/5"><span itemprop="name">J. Heinrich</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/PublicationIssue" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="journal">Visual Informatics</span>, </em> <em><span itemtype="http://schema.org/PublicationVolume" itemscope="itemscope" itemprop="isPartOf"><span itemprop="volumeNumber">1 </span></span>(<span itemprop="issueNumber">2</span>):
<span itemprop="pagination">118-131</span></em> </span>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Fri Oct 09 12:31:48 CEST 2020Visual Informatics2118-131Comparative Eye-tracking Evaluation of Scatterplots and Parallel Coordinates12017from:leonkokkoliadis B01 visus:weiskopf sfbtrr161 2017 from:mueller visus:netzelrf visus We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots significantly outperform parallel coordinates in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further shows significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with an increasing number of dimensions. 
Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants’ gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants’ attention is biased toward the center of the whole plot for parallel coordinates and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.Comparative Eye-tracking Evaluation of Scatterplots and Parallel CoordinatesVisualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plantshttps://puma.ub.uni-stuttgart.de/bibtex/2561bf03113363936090ae6a3bab13b72/visusvisus2017-10-25T15:55:14+02:00visus:netzelrf visus sfbtrr161 visus:weiskopf visus:rodrigns vis(us) from:mueller 2017 visus:burchml <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Nils Rodrigues" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/0"><span itemprop="name">N. Rodrigues</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/1"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Kazi Riaz Ullah" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/2"><span itemprop="name">K. Ullah</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/3"><span itemprop="name">M. 
Burch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Alexander Schultz" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/4"><span itemprop="name">A. Schultz</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Bruno Burger" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/5"><span itemprop="name">B. Burger</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/19e55149294152097fec60fdbb208a309/author/6"><span itemprop="name">D. Weiskopf</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">VINCI 2017</span>, </em></span>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Wed Oct 25 15:55:14 CEST 2017VINCI 2017Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power Plants2017visus:netzelrf visus sfbtrr161 visus:weiskopf visus:rodrigns vis(us) from:mueller 2017 visus:burchml Visualization of Time Series Data with Spatial Context: Communicating the Energy Production of Power PlantsAn Evaluation of Visual Search Support in Mapshttps://puma.ub.uni-stuttgart.de/bibtex/2c0ebd53ce086aee00f5564383a5469c1/visusvisus2017-10-25T15:54:46+02:00visus:netzelrf visus visus:hlawatml sfbtrr161 visus:weiskopf vis(us) from:mueller 2017 visus:burchml <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/0"><span itemprop="name">R. 
Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Marcel Hlawatsch" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/1"><span itemprop="name">M. Hlawatsch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/2"><span itemprop="name">M. Burch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Sanjeev Balakrishnan" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/3"><span itemprop="name">S. Balakrishnan</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Hansjörg Schmauder" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/4"><span itemprop="name">H. Schmauder</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/10ee4a8cf349f5543cf59b0abf9edc725/author/5"><span itemprop="name">D. Weiskopf</span></a></span></span>. 
</span><span class="additional-entrytype-information"><span itemtype="http://schema.org/PublicationIssue" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="journal">IEEE Transactions on Visualization and Computer Graphics</span>, </em> </span>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Wed Oct 25 15:54:46 CEST 2017 IEEE Transactions on Visualization and Computer Graphics1An Evaluation of Visual Search Support in Maps232017visus:netzelrf visus visus:hlawatml sfbtrr161 visus:weiskopf vis(us) from:mueller 2017 visus:burchml An Evaluation of Visual Search Support in MapsMulti-Similarity Matrices of Eye Movement Datahttps://puma.ub.uni-stuttgart.de/bibtex/215e2826802b8ab432ffe647713ebbea9/visusvisus2017-10-25T15:54:35+02:00visus:netzelrf visus visus:weiskopf vis(us) sfbtrr161 from:mueller 2016 visus:burchml <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Ayush Kumar" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/0"><span itemprop="name">A. Kumar</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/1"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Burch" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/2"><span itemprop="name">M. Burch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/3"><span itemprop="name">D. 
Weiskopf</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Klaus Mueller" itemprop="url" href="/person/1d1bc7d4226a574b8b396f325f8738563/author/4"><span itemprop="name">K. Mueller</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"></span>(<em><span>2016<meta content="2016" itemprop="datePublished"/></span></em>)</span>Wed Oct 25 15:54:35 CEST 2017Multi-Similarity Matrices of Eye Movement Data2016visus:netzelrf visus visus:weiskopf vis(us) sfbtrr161 from:mueller 2016 visus:burchml Multi-Similarity Matrices of Eye Movement DataHilbert Attention Maps for Visualizing Spatiotemporal Gaze Datahttps://puma.ub.uni-stuttgart.de/bibtex/29330d125cbd2c10555b75272665013f8/visusvisus2017-10-25T15:54:34+02:00visus:netzelrf visus visus:weiskopf vis(us) sfbtrr161 from:mueller 2016 <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/10e03ce36db7577bee2399c65e7595b50/author/0"><span itemprop="name">R. Netzel</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/10e03ce36db7577bee2399c65e7595b50/author/1"><span itemprop="name">D. Weiskopf</span></a></span></span>. 
</span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"></span>(<em><span>2016<meta content="2016" itemprop="datePublished"/></span></em>)</span>Wed Oct 25 15:54:34 CEST 2017Hilbert Attention Maps for Visualizing Spatiotemporal Gaze Data2016visus:netzelrf visus visus:weiskopf vis(us) sfbtrr161 from:mueller 2016 Hilbert Attention Maps for Visualizing Spatiotemporal Gaze DataGenerative Data Models for Validation and Evaluation of Visualization Techniqueshttps://puma.ub.uni-stuttgart.de/bibtex/298a31a496d32e9909c7474a47902f349/visusvisus2017-10-25T15:54:32+02:00visus:netzelrf visus visus:hlawatml sfbtrr161 vis-ertl sfbtrr75 visus:weiskopf visus:karchgz visus:schulzch vis(us) visus:ertl from:mueller 2016 vis-gis visus:freysn <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Christoph Schulz" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/0"><span itemprop="name">C. Schulz</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Arlind Nocaj" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/1"><span itemprop="name">A. Nocaj</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Mennatallah El-Assady" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/2"><span itemprop="name">M. El-Assady</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Steffen Frey" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/3"><span itemprop="name">S. 
Frey</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Marcel Hlawatsch" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/4"><span itemprop="name">M. Hlawatsch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Hund" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/5"><span itemprop="name">M. Hund</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Grzegorz Karol Karch" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/6"><span itemprop="name">G. Karch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Rudolf Netzel" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/7"><span itemprop="name">R. Netzel</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Christin Schätzle" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/8"><span itemprop="name">C. Schätzle</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Miriam Butt" itemprop="url" href="/person/144fc915ec2c38e311bc349ec6601324b/author/9"><span itemprop="name">M. Butt</span></a></span></span> and 4 other author(s). 
</span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">BELIV '16: Beyond Time And Errors: Novel Evaluation Methods For Visualization</span>, </em></span>(<em><span>2016<meta content="2016" itemprop="datePublished"/></span></em>)</span>Wed Oct 25 15:54:32 CEST 2017BELIV '16: Beyond Time And Errors: Novel Evaluation Methods For VisualizationGenerative Data Models for Validation and Evaluation of Visualization Techniques2016visus:netzelrf visus visus:hlawatml sfbtrr161 vis-ertl sfbtrr75 visus:weiskopf visus:karchgz visus:schulzch vis(us) visus:ertl from:mueller 2016 vis-gis visus:freysn Generative Data Models for Validation and Evaluation of Visualization Techniques