PUMA publications for /tag/c02
https://puma.ub.uni-stuttgart.de/tag/c02
PUMA RSS feed for /tag/c02, last updated 2024-03-28.

D. Weiskopf, M. Burch, L. Chuang, B. Fischer, and A. Schmidt (Eds.). Eye Tracking and Visualization: Foundations, Techniques, and Applications. Springer, Berlin, Heidelberg, 2016.
Tags: A01, B01, C02, C03, sfbtrr161
Abstract: This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
N. Flad, J. Ditz, A. Schmidt, H. Bülthoff, and L. Chuang. Data-Driven Approaches to Unrestricted Gaze-Tracking Benefit from Saccade Filtering. In Proceedings of the Second Workshop on Eye Tracking and Visualization (ETVIS), pages 1-5. IEEE, 2016.
Tags: C02, C03, sfbtrr161
Abstract: Unrestricted gaze tracking that allows for head and body movements can enable us to understand interactive gaze behavior with large-scale visualizations. Approaches that support this, by simultaneously recording eye and user movements, can be based on either geometric or data-driven regression models. A data-driven approach can be implemented more flexibly, but its performance can suffer with poor-quality training data. In this paper, we introduce a pre-processing procedure that removes training data from periods when the gaze is not fixating the presented target stimuli. Our procedure is based on a velocity-based filter for rapid eye movements (i.e., saccades). Our results show that this additional procedure improved the accuracy of our unrestricted gaze-tracking model by as much as 56%. Future improvements to data-driven approaches for unrestricted gaze tracking are proposed to allow for more complex dynamic visualizations.
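The velocity-based saccade filter mentioned in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the 30 deg/s threshold, the input format (gaze angles in degrees with timestamps in seconds), and the function name are assumptions for illustration only.

    import numpy as np

    def saccade_mask(t, gaze_deg, velocity_threshold=30.0):
        """Flag samples whose angular velocity exceeds a saccade threshold (assumed value).

        t: timestamps in seconds, shape (n,)
        gaze_deg: gaze angles in degrees, shape (n, 2)
        Returns a boolean array; True marks samples to drop from the training set.
        """
        dt = np.maximum(np.diff(t), 1e-9)                          # guard against zero time steps
        step = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)   # angular distance per sample
        velocity = step / dt                                       # deg/s
        fast = np.zeros(len(t), dtype=bool)
        fast[1:] = velocity > velocity_threshold
        # In practice one might also widen the mask around each saccade onset/offset.
        return fast

    # Usage: keep only fixation samples when building the regression training set, e.g.
    # mask = saccade_mask(t, gaze_deg); X_train, y_train = features[~mask], targets[~mask]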
T. Dingler, R. Rzayev, A. Sahami Shirazi, and N. Henze. Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), pages 419:1-419:12. ACM, 2018.
Tags: C02, C04, sfbtrr161
Abstract: In the era of ubiquitous computing, people expect applications to work across different devices. To provide a seamless user experience, it is therefore crucial that interfaces and interactions are consistent across different device types. In this paper, we present a method to create gesture sets that are consistent and easily transferable. Our proposed method entails (1) gesture elicitation on each device type, (2) consolidation of a unified gesture set, and (3) a final validation by calculating a transferability score. We tested our approach by eliciting a set of user-defined gestures for reading with Rapid Serial Visual Presentation (RSVP) of text on three device types: phone, watch, and glasses. We present the resulting unified gesture set for RSVP reading and show the feasibility of our method for eliciting gesture sets that are consistent across device types with different form factors.
L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze. MAGIC-Pointing on Large High-Resolution Displays. In Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), pages 1706-1712. ACM, 2016.
Tags: C02, C04, sfbtrr161
Abstract: Display space in offices has constantly increased over the last decades. We believe that this trend will continue and ultimately result in the use of wall-sized displays in the future office. One of the most challenging tasks while interacting with large high-resolution displays is target acquisition. The most important challenges reported in previous work are the long distances that need to be traveled with the pointer, while still enabling precise selection, and seeking the pointer on the large display. In this paper, we investigate whether MAGIC-Pointing, controlling the pointer through eye gaze, can help overcome both challenges. We implemented MAGIC-Pointing for a 2.85 m x 1.13 m display. Using this system, we conducted a target selection study. The results show that using MAGIC-Pointing to select targets on wall-sized displays significantly decreases task completion time and also decreases the users' task load. We therefore argue that MAGIC-Pointing can help make interaction with wall-sized displays usable.
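MAGIC pointing, as used above, warps the cursor toward the current gaze position so that the mouse only has to cover the final, precise part of the movement. Below is a minimal sketch of that warping rule; the 200-pixel warp threshold and the class and method names are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass
    class MagicPointer:
        """Warp the cursor to the gaze point when a fixation lands far away from it."""
        warp_threshold_px: float = 200.0   # assumed; tune per display size and tracker accuracy
        cursor: tuple = (0.0, 0.0)

        def on_fixation(self, gaze_x: float, gaze_y: float) -> tuple:
            dx = gaze_x - self.cursor[0]
            dy = gaze_y - self.cursor[1]
            if (dx * dx + dy * dy) ** 0.5 > self.warp_threshold_px:
                # Coarse positioning by gaze; the mouse then does the fine adjustment.
                self.cursor = (gaze_x, gaze_y)
            return self.cursor

        def on_mouse_move(self, dx: float, dy: float) -> tuple:
            self.cursor = (self.cursor[0] + dx, self.cursor[1] + dy)
            return self.cursor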
L. Lischke, J. Grüninger, K. Klouche, A. Schmidt, P. Slusallek, and G. Jacucci. Interaction Techniques for Wall-Sized Screens. In Proceedings of the International Conference on Interactive Tabletops & Surfaces (ITS), pages 501-504. 2015.
Tags: C02, sfbtrr161
Abstract: Large screen displays are part of many future visions, such as i-LAND, which describes a possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration, and organizing work in office environments. With increasing computational power and falling display prices, wall-sized screens are currently making the step out of research labs and specific settings into office environments and private life. Today, there is no standard set of interaction techniques for wall-sized displays, and it is even unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.
J. Karolus, H. Schuff, T. Kosch, P. W. Wozniak, and A. Schmidt. EMGuitar: Assisting Guitar Playing with Electromyography. In Proceedings of the Designing Interactive Systems Conference (DIS), pages 651-655. ACM, 2018.
Tags: C02, sfbtrr161
Abstract: Mastering fine motor tasks, such as playing the guitar, takes years of time-consuming practice. Commonly, expensive guidance by experts is essential for adjusting the training program to the student's proficiency. In our work, we showcase the suitability of electromyography (EMG) for detecting fine-grained hand and finger postures in an exemplary guitar tutoring scenario. We present EMGuitar, an interactive guitar tutoring system that assists students by reporting on playing correctness and adjusts playback tempi automatically. We report person-dependent classification, utilizing a ring of electrodes around the forearm, with an F1 score of up to 0.89 on recorded calibration data. Furthermore, our system was received well, neither diminishing ease of use nor being disruptive for the participants. Based on the received comments, we identified the need for detailed playing-accuracy feedback down to individual chords, for which we suggest an adapted visualization and an algorithmic approach.
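The abstract reports person-dependent posture classification from forearm EMG with an F1 score of up to 0.89, but it does not spell out the feature set or classifier. The following is only a generic illustration of such a pipeline (windowed RMS features plus a linear SVM); the window length, channel count, synthetic data, and model choice are all assumptions, not the paper's method.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def rms_features(emg, window=200):
        """Root-mean-square per channel over non-overlapping windows.
        emg: array of shape (n_samples, n_channels)."""
        n_windows = emg.shape[0] // window
        trimmed = emg[: n_windows * window].reshape(n_windows, window, emg.shape[1])
        return np.sqrt((trimmed ** 2).mean(axis=1))

    # Hypothetical calibration recording: 8 forearm channels, one posture label per window.
    emg = np.random.randn(20000, 8)
    labels = np.repeat(np.arange(10), 10)          # 10 postures, 10 windows each

    X = rms_features(emg)                          # -> (100, 8)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, stratify=labels, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("F1 (macro):", f1_score(y_test, clf.predict(X_test), average="macro"))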
L. Lischke, P. Knierim, and H. Klinke. Mid-Air Gestures for Window Management on Large Displays. In Mensch und Computer 2015 – Tagungsband (MuC), pages 439-442. De Gruyter, 2015.
Tags: C02, sfbtrr161
Abstract: We can observe a continuing trend toward larger screens with higher resolutions and greater pixel density. With advances in hardware and software technology, wall-sized displays for daily office work are already on the horizon. We assume that there will be no hard paradigm change in interaction techniques in the near future; new concepts for wall-sized displays will therefore be included in existing products. Designing interaction concepts for wall-sized displays in an office environment is a challenging task, and designing appropriate input techniques is the most crucial part. Moving the mouse pointer from one corner to another over a long distance is cumbersome, yet pointing with a mouse is precise and commonplace. We propose using mid-air gestures to complement mouse and keyboard input on large displays. In particular, we designed a gesture set for manipulating regular windows.
J. Karolus, P. W. Woźniak, and L. Chuang. Towards Using Gaze Properties to Detect Language Proficiency. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI), pages 118:1-118:6. ACM, New York, NY, USA, 2016.
Tags: C02, sfbtrr161
Abstract: Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants. We found that fixation duration is sufficient to ascertain whether a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly to the user's gaze.
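The finding above, that fixation duration alone can separate proficient from non-proficient readers, suggests a very simple detector. The sketch below only illustrates that idea; the 250 ms cutoff, the function name, and the calibration assumption are not values or code from the study.

    import numpy as np

    def seems_proficient(fixation_durations_ms, cutoff_ms=250.0):
        """Guess language proficiency from fixations recorded while reading one stimulus.

        Shorter mean fixations are taken as a sign of higher proficiency.
        cutoff_ms would need to be calibrated per user (assumed value here).
        """
        return float(np.mean(fixation_durations_ms)) < cutoff_ms

    # Example: fixations while reading a question in a familiar vs. an unfamiliar language.
    print(seems_proficient([180, 210, 195, 230]))   # True  -> likely proficient
    print(seems_proficient([320, 410, 290, 380]))   # False -> likely not proficient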
L. Lischke, S. Mayer, K. Wolf, N. Henze, H. Reiterer, and A. Schmidt. Screen Arrangements and Interaction Areas for Large Display Work Places. In Proceedings of the ACM International Symposium on Pervasive Displays (PerDis), pages 228-234. ACM, 2016.
Tags: C02, C03, sfbtrr161
Abstract: The size and resolution of computer screens are constantly increasing, and individual screens can easily be combined into wall-sized displays. This enables computer displays that are folded, straight, bow-shaped, or even spread out. As the possibilities for arranging the screens are manifold, it is unclear which arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study in which we asked 16 participants to arrange a large screen setup and to create layouts of multiple common application windows. Based on the results, we provide a classification of screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.

T. Kosch, M. Funk, A. Schmidt, and L. Chuang. Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly. Proceedings of the ACM on Human-Computer Interaction (ACMHCI), 2, 11:1-11:20, 2018.
Tags: C02, C03, C06, sfbtrr161
Abstract: Manual assembly in production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, this results in frequent changes of assembly instructions that have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits in reducing mental workload have previously been justified with self-perceived ratings; however, there is no evidence from objective measures that mental workload is reduced by in-situ assistance. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load and show that changes in this EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims about cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment of cognitive workload to provide insights into the mental demand placed by assistive systems.
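The workload measure above is built on the power in an individually determined EEG frequency band. The abstract does not fix that band, so the sketch below only shows how band power is commonly estimated from a single EEG channel with Welch's method; the 4-13 Hz range, sampling rate, and window length are assumptions, not the study's parameters.

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs=250.0, band=(4.0, 13.0), nperseg=512):
        """Integrated spectral power of one EEG channel within a frequency band.

        eeg: 1-D signal in microvolts; fs: sampling rate in Hz.
        band: (low, high) in Hz -- an assumed theta/alpha range here, whereas the
        study determines the informative band individually per participant.
        """
        freqs, psd = welch(eeg, fs=fs, nperseg=nperseg)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[in_band], freqs[in_band])

    # Comparing two conditions (hypothetical recordings of equal length):
    # load_paper = band_power(eeg_paper); load_insitu = band_power(eeg_insitu)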
J. Karolus, P. W. Wozniak, L. Chuang, and A. Schmidt. Robust Gaze Features for Enabling Language Proficiency Awareness. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), pages 2998-3010. ACM, 2017.
Tags: C02, C06, sfbtrr161
Abstract: We are often confronted with information interfaces designed in an unfamiliar language, especially in an increasingly globalized world where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) in which participants were presented with questions in multiple languages while their gaze behavior was recorded. We identified fixation and blink durations as effective indicators of the participants' language proficiency. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data.