ETRA '21 Adjunct: ACM Symposium on Eye Tracking Research and Applications

SESSION: ETRA Demo & Video Track

GazeHelp: Exploring Practical Gaze-assisted Interactions for Graphic Design Tools

This system development project introduces GazeHelp, an Adobe Photoshop plugin exploring the practical application of multimodal gaze-assisted interaction in current graphic design activities. It implements three core features: QuickTool, a gaze-triggered popup that allows the user to select their next tool with gaze; X-Ray, which creates a small non-destructive window at the gaze point, cutting through an artboard’s layers to expose an element on a selected underlying layer; and Privacy Shield, which dims and blocks the current artboard from view when the user looks away from the display. These features harness, respectively, the speed, gaze-contingent observational nature, and presence-implying strengths of gaze, and each is customisable to the user’s preferences. The accompanying GazeHelpServer, complete with an intuitive GUI, can also be used flexibly by other programs and plugins for further development.

Implementing Eye-Tracking for Persona Analytics

Investigating users’ engagement with interactive persona systems can yield crucial insights for the design of such systems. Using eye-tracking, researchers can address the scarcity of behavioral user studies, even during times when physical user studies are difficult or impossible to carry out. In this research, we implement a webcam-based eye-tracking module into an interactive persona system, facilitating remote user studies. Findings from the implementation can show what information users pay attention to in persona profiles.

Automatic Recognition and Augmentation of Attended Objects in Real-time using Eye Tracking and a Head-mounted Display

Scanning and processing visual stimuli in a scene is essential for the human brain to make situation-aware decisions. Adding the ability to observe the scanning behavior and scene processing to intelligent mobile user interfaces can facilitate a new class of cognition-aware user interfaces. As a first step in this direction, we implement an augmented reality (AR) system that classifies objects at the user’s point of regard, detects visual attention to them, and augments the real objects with virtual labels that stick to the objects in real-time. We use a head-mounted AR device (Microsoft HoloLens 2) with integrated eye tracking capabilities and a front-facing camera for implementing our prototype.
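The abstract does not specify how visual attention to a classified object is detected; a common criterion in gaze interaction is dwell time. The sketch below is an illustrative Python version of that idea, with hypothetical names and a 300 ms threshold chosen only for illustration, not the authors' implementation:

```python
def detect_attended_object(gaze_samples, dwell_ms=300):
    """Yield an object label once gaze has dwelt on it continuously for dwell_ms.

    gaze_samples: iterable of (timestamp_ms, label) pairs, where label is the
    classified object at the point of regard, or None if no object is hit.
    """
    current, since = None, None
    for t, label in gaze_samples:
        if label != current:                 # gaze moved to a different object
            current, since = label, t
        elif label is not None and t - since >= dwell_ms:
            yield t, label                   # attention detected: e.g. show AR label
            since = float("inf")             # report each dwell episode only once
```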

SESSION: ETRA Doctoral Symposium

Eye Tracking Calibration on Mobile Devices

Eye tracking has been widely used in psychology, human-computer interaction, and many other fields. Recently, eye tracking based on off-the-shelf cameras has produced promising results compared to traditional eye tracking devices. This presents an opportunity to introduce eye tracking on mobile devices. However, eye tracking on mobile devices faces many challenges, including partial occlusion of the face and an unstable, changing distance between the face and the camera. This research project aims to obtain stable and accurate calibration of front-camera-based eye tracking in dynamic contexts through the construction of real-world eye-movement datasets, the introduction of novel context-awareness models, and improved gaze estimation methods that can be adapted to partial faces.

The influence of clutter on search-based learning, long-term memory, and memory-guided attention in real-world scenes: an eye-movement research protocol

Previous studies have shown that visual clutter degrades visual search performance. This performance decrement is also reflected in several eye movement metrics, such as mean fixation duration, scan path length, and first saccade latency. However, whether, and if so how, visual clutter might impact other cognitive processes that are important for adaptive functioning, such as learning, long-term memory, and attention, remains poorly understood. Here, we present the rationale for a three-stage experimental paradigm combined with eye-tracking to better understand the effects of visual clutter in real-world scenes on cognition and eye movement behavior. We also present preliminary behavioral findings on this topic from our lab and discuss areas with significant potential for future research.

Climate change overlooked. The role of attitudes and mood regulation in visual attention to global warming

Why, in the face of climate catastrophe, do people still seem to underestimate the weight of the threat without taking adequate action to fight global warming? Among many reasons for this, the current study aims to dive into people’s cognitive abilities and explore the barriers located at the individual level, using an eye-tracking methodology. Previous findings indicate that a pro-environmental attitude does not necessarily lead to pro-environmental behavior. What may stand in the way is ignorance that can be mediated by other factors. This study will examine whether visual distraction from images depicting the impacts of climate change is mediated by mood regulation and environmental concern. This will help to fit educational and information materials to specific viewers, which may result in more pro-environmental behaviors in the future.

Gaze and Heart Rate Synchronization in Computer-Mediated Collaboration

Computer-mediated collaboration has become an integral part of our everyday functioning. Despite decreased non-verbal communication and face-to-face contact with collaboration partners, people have learned how to work together remotely. The consequences of decreased non-verbal signals, such as gaze communication, in remote collaboration have, however, not been fully investigated. In a series of three experiments, we propose solutions to enhance the quality of remote collaboration. The present paper focuses on examining the relation between gaze and heart reaction during face-to-face and remote collaboration.

SESSION: ActivEye: Challenges in large scale eye-tracking for active participants

Solving Parallax Error for 3D Eye Tracking

Head-mounted eye-trackers allow for unrestricted behavior in the natural environment, but have calibration issues that compromise accuracy and usability. A well-known problem is that gaze measurements suffer from parallax error due to the offset between the scene camera origin and the eye position. Compensating for this error requires two pieces of data: the pose of the scene camera in head coordinates, and the three-dimensional coordinates of the fixation point in head coordinates. We implemented a method that allows for effective and accurate eye-tracking in the three-dimensional environment. Our approach consists of a calibration procedure that calibrates the eye-tracker while also computing the eyes’ pose in the reference frame of the scene camera, and a custom stereoscopic scene camera that provides the three-dimensional coordinates of the fixation point. The resulting gaze data are free from parallax error, allowing accurate and effective use of the eye-tracker in the natural environment.
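As a rough illustration of the geometry involved, the sketch below (hypothetical names, not the authors' implementation) re-expresses a 3D fixation point measured by a stereoscopic scene camera as a gaze direction from the eye, which removes the parallax caused by the camera/eye offset, assuming the calibration has provided the scene camera's pose in the eye frame:

```python
import numpy as np

def correct_parallax(fixation_cam, R_cam_to_eye, t_cam_to_eye):
    """Re-express a 3D fixation point (scene-camera coordinates) as a
    gaze direction from the eye, removing camera/eye offset parallax.

    fixation_cam : (3,) 3D fixation point from the stereo scene camera [m]
    R_cam_to_eye : (3, 3) rotation from scene-camera frame to eye frame
    t_cam_to_eye : (3,) position of the scene-camera origin in the eye frame [m]
    """
    # Transform the fixation point into the eye-centred frame.
    fixation_eye = R_cam_to_eye @ fixation_cam + t_cam_to_eye
    # The parallax-free gaze direction is the unit vector from the eye
    # (origin of its own frame) to the fixated 3D point.
    return fixation_eye / np.linalg.norm(fixation_eye)
```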

Integrating High Fidelity Eye, Head and World Tracking in a Wearable Device

A challenge in mobile eye tracking is balancing the quality of the data collected with the subject’s ability to move freely and naturally through their environment. This challenge is exacerbated when an experiment requires multiple data streams to be recorded simultaneously and in high fidelity. Given these constraints, previous devices have had limited spatial and temporal resolution, as well as compression artifacts. To address this, we have designed a wearable device capable of recording a subject’s body, head, and eye positions simultaneously with RGB and depth data from the subject’s visual environment, measured at high spatial and temporal resolution. The sensors include a binocular eye tracker, an RGB-D scene camera, a high-frame-rate scene camera, and two visual odometry sensors, which we synchronize and record from at a total incoming data rate of over 700 MB/s. All sensors are operated by a mini-PC optimized for fast data collection and powered by a small battery pack. The headset weighs only 1.4 kg and the remainder of the system just 3.9 kg, which can be worn comfortably by the subject in a small backpack, allowing full mobility.

Fixational stability as a measure for the recovery of visual function in amblyopia

People with amblyopia have been shown to have decreased fixational stability, particularly those with strabismic amblyopia. Fixational stability and visual acuity have been shown to be tightly correlated across multiple studies, suggesting a relationship between acuity and oculomotor stability. Reduced visual acuity is the sine qua non of amblyopia, and recovery is measured by the improvement in visual acuity. Here we ask whether fixational stability can be used as an objective marker for the recovery of visual function in amblyopia. We tracked children’s fixational stability over time during patching treatment and found that fixational stability changed alongside improvements in visual acuity. This suggests that fixational stability can be used as an objective measure for monitoring treatment in amblyopia and other disorders.
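The abstract does not state how fixational stability is quantified; a widely used measure is the bivariate contour ellipse area (BCEA), shown below as an illustrative Python sketch (the choice of metric is an assumption, not necessarily the authors'):

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2) of horizontal/vertical fixation
    positions; smaller values indicate more stable fixation."""
    k = -np.log(1.0 - p)                      # scaling for the chosen coverage p
    sx, sy = np.std(x_deg), np.std(y_deg)     # spread along each axis
    rho = np.corrcoef(x_deg, y_deg)[0, 1]     # correlation between axes
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
```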

Tracking Active Observers in 3D Visuo-Cognitive Tasks

Most past and present research in computer vision involves passively observed data. Humans, however, are active observers in real life; they explore, search, and select what to look at and how. In this work, we present PESAO, a psychophysical experimental setup for active visual observation in a 3D world. The goal was to design PESAO for various active perception tasks with human subjects (active observers), while tracking both head and gaze.

Algorithmic gaze classification for mobile eye-tracking

Mobile eye tracking traditionally requires gaze to be coded manually. We introduce an open-source Python package (GazeClassify) that algorithmically annotates mobile eye tracking data for the study of human interactions. Instead of manually identifying objects and determining whether gaze is directed towards an area of interest, computer vision algorithms are used to identify and segment human bodies. To validate the algorithm, mobile eye tracking data from short combat sport sequences were analyzed and the algorithm’s performance was compared against three manual raters. The algorithm showed substantial reliability relative to the manual raters in annotating which area of interest gaze was closest to. However, it was more conservative than the manual raters in classifying whether gaze was directed towards an object of interest. The algorithmic approach represents a viable and promising means of automating gaze classification for mobile eye tracking.
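This is not the GazeClassify API itself, but the decision rule the abstract describes (nearest segmented area of interest, plus a threshold for "directed towards") can be sketched as follows, with illustrative names and a hypothetical hit radius:

```python
import numpy as np

def classify_gaze(gaze_px, aoi_masks, hit_radius=30):
    """Assign a gaze sample to the nearest segmented area of interest.

    gaze_px   : (x, y) gaze position in scene-video pixel coordinates
    aoi_masks : dict mapping AOI name -> boolean segmentation mask (H, W)
    hit_radius: max pixel distance for gaze to count as *on* the AOI
    """
    gx, gy = gaze_px
    distances = {}
    for name, mask in aoi_masks.items():
        ys, xs = np.nonzero(mask)                     # pixels belonging to the AOI
        if xs.size == 0:
            continue
        distances[name] = np.min(np.hypot(xs - gx, ys - gy))
    if not distances:
        return None, False
    nearest = min(distances, key=distances.get)
    return nearest, distances[nearest] <= hit_radius  # (closest AOI, on-target?)
```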

Sub-centimeter 3D gaze vector accuracy on real-world tasks: an investigation of eye and motion capture calibration routines

Measuring where people look in real-world tasks has never been easier, but analyzing the resulting data remains laborious. One solution integrates head-mounted eye tracking with motion capture, but no best practice exists regarding what calibration data to collect. Here, we compared four ~1 min calibration routines used to train linear regression gaze vector models and examined how the coordinate system, the eye data used, and the location of fixation changed gaze vector accuracy on three trial types: calibration, validation (static fixations to task-relevant locations), and task (naturally occurring fixations during object interaction). Impressively, predicted gaze vectors show ~1 cm of error when looking straight ahead toward objects during natural arms-length interaction. This result was achieved by predicting fixations in a spherical coordinate frame from the best monocular data and, surprisingly, depends little on the calibration routine.
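As a rough sketch of what such a linear regression gaze vector model in a spherical coordinate frame can look like (data shapes, feature choices, and function names are illustrative, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_spherical_gaze_model(eye_features, gaze_targets_xyz):
    """Fit a linear regression from eye features to gaze direction in a
    spherical (azimuth, elevation) frame.

    eye_features     : (N, F) monocular eye data per calibration sample
    gaze_targets_xyz : (N, 3) known 3D fixation targets in the head frame [m]
    """
    x, y, z = gaze_targets_xyz.T
    azimuth = np.arctan2(x, z)                    # left/right angle
    elevation = np.arctan2(y, np.hypot(x, z))     # up/down angle
    targets_sph = np.column_stack([azimuth, elevation])
    return LinearRegression().fit(eye_features, targets_sph)

# model = fit_spherical_gaze_model(calib_features, calib_targets)
# predicted_az_el = model.predict(task_features)
```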

Ergonomic Design Development of the Visual Experience Database Headset

Head-mounted devices allow recording of eye movements, head movements, and scene video outside of the traditional laboratory setting. A key challenge for recording comprehensive first-person stimuli and behavior outside the lab is the form factor of the head-mounted assembly. It should be mounted stably on the head to minimize slippage and maximize accuracy of the data; it should be as unobtrusive and comfortable as possible to allow for natural behaviors and enable longer duration recordings; and it should be able to fit a diverse user population. Here, we survey preliminary design iterations of the Visual Experience Database headset, an assembly consisting of the Pupil Core eye tracker, the Intel RealSense™ T265 (T265) tracking camera, and the FLIR Chameleon™3 (FLIR) world camera. Strengths and weaknesses of each iteration are explored and documented with the goal of informing future ergonomic design efforts for similar head-mounted systems.

VEDBViz: The Visual Experience Database Visualization and Interaction Tool

Mobile, simultaneous tracking of both the head and eyes is typically achieved through integration of separate head and eye tracking systems because off-the-shelf solutions do not yet exist. Similarly, joint visualization and analysis of head and eye movement data is not possible with standard software packages because these were designed to support either head or eye tracking in isolation. Thus, there is a need for software that supports joint analysis of head and eye data to characterize and investigate topics including head-eye coordination and reconstruction of how the eye is moving in space. To address this need, we have begun developing VEDBViz which supports simultaneous graphing and animation of head and eye movement data recorded with the Intel RealSense T265 and Pupil Core, respectively. We describe current functionality as well as features and applications that are still in development.

Eye, Robot: Calibration Challenges and Potential Solutions for Wearable Eye Tracking in Individuals with Eccentric Fixation

Loss of the central retina, including the fovea, can lead to a loss of visual acuity and oculomotor deficits, and thus have profound effects on day-to-day tasks. Recent advances in head-mounted, 3D eye tracking have allowed researchers to extend studies in this population to a broader set of daily tasks and more naturalistic behaviors and settings. However, decreases in fixational stability, multiple fixational loci and their uncertain role as oculomotor references, as well as eccentric fixation all provide additional challenges for calibration and collection of eye movement data. Here we quantify reductions in calibration accuracy relative to fixation eccentricity, and suggest a robotic calibration and validation tool that will allow for future developments of calibration and tracking algorithms designed with this population in mind.

Post-processing integration and semi-automated analysis of eye-tracking and motion-capture data obtained in immersive virtual reality environments to measure visuomotor integration

Mobile eye-tracking and motion-capture techniques yield rich, precisely quantifiable data that can inform our understanding of the relationship between visual and motor processes during task performance. However, these systems are rarely used in combination, in part because of the significant time and human resources required for post-processing and analysis. Recent advances in computer vision have opened the door for more efficient processing and analysis solutions. We developed a post-processing pipeline to integrate mobile eye-tracking and full-body motion-capture data. These systems were used simultaneously to measure visuomotor integration in an immersive virtual environment. Our approach enables calculation of a 3D gaze vector that can be mapped to the participant's body position and objects in the virtual environment using a uniform coordinate system. This approach is generalizable to other configurations, and enables more efficient analysis of eye, head, and body movements together during visuomotor tasks administered in controlled, repeatable environments.
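One common way to realize such a uniform coordinate system is to rotate the eye-in-head gaze direction into the motion-capture world frame using the tracked head pose, and then relate the resulting gaze ray to object positions. The sketch below is a hedged illustration under those assumptions, with hypothetical names, not the authors' exact pipeline:

```python
import numpy as np

def gaze_to_world(gaze_dir_head, head_R_world, head_t_world):
    """Map an eye-in-head gaze direction into the motion-capture world frame.

    gaze_dir_head : (3,) unit gaze direction in the head/eye-tracker frame
    head_R_world  : (3, 3) head orientation from motion capture (head -> world)
    head_t_world  : (3,) head position in world coordinates [m]
    """
    gaze_dir_world = head_R_world @ gaze_dir_head    # rotate into world frame
    gaze_origin_world = head_t_world                  # eye origin ~ tracked head point
    return gaze_origin_world, gaze_dir_world

def distance_to_gaze_ray(origin, direction, target):
    """Distance from a world-space target (e.g., a virtual object) to the gaze ray."""
    v = target - origin
    t = max(np.dot(v, direction), 0.0)                # project target onto the ray
    return np.linalg.norm(v - t * direction)
```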

Pupil Tracking Under Direct Sunlight

Pupil tracking in a bright outdoor environment is challenging due to low eye image quality and reduced pupil size in response to bright light. In this study we present research toward robust outdoor pupil tracking without the need for shading the eyes. We first investigate the effect of camera post-processing settings in order to find values that enhance image quality for pupil tracking under direct, oblique, and overcast sunlight illumination. We then test the performance of state-of-the-art pupil tracking techniques under these extreme real-world outdoor lighting conditions. Our results suggest that a key goal should be maintaining the contrast between iris and pupil to support accurate estimation of pupil position, regardless of overall eye image quality.
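The abstract identifies iris-pupil contrast as the quantity to preserve; one simple way to quantify it is Michelson contrast over segmented pupil and iris regions. The sketch below is illustrative only; the masks and the choice of metric are assumptions, not the authors' method:

```python
import numpy as np

def pupil_iris_contrast(eye_gray, pupil_mask, iris_mask):
    """Michelson contrast between pupil and iris regions of a grayscale eye image.

    eye_gray   : (H, W) grayscale eye image
    pupil_mask : (H, W) boolean mask of pupil pixels
    iris_mask  : (H, W) boolean mask of iris pixels (excluding the pupil)
    """
    pupil_mean = eye_gray[pupil_mask].mean()
    iris_mean = eye_gray[iris_mask].mean()
    return abs(iris_mean - pupil_mean) / (iris_mean + pupil_mean)
```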

Characterizing the Performance of Deep Neural Networks for Eye-Tracking

Deep neural networks (DNNs) provide powerful tools to identify and track features of interest, and have recently come into use for eye-tracking. Here, we test the ability of a DNN to predict keypoints localizing the eyelid and pupil under the types of challenging image variability that occur in mobile eye-tracking. We simulate varying degrees of perturbation for five common sources of image variation in mobile eye-tracking: rotations, blur, exposure, reflection, and compression artifacts. To compare the relative performance decrease across domains in a common space of image variation, we used features derived from a DNN (ResNet50) to compute the distance of each perturbed video from the videos used to train our DNN. We found that increasing cosine distance from the training distribution was associated with monotonic decreases in model performance in all domains. These results suggest ways to optimize the selection of diverse images for model training.
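The comparison described here rests on embedding each video in a common feature space and measuring cosine distance to the training set. A minimal sketch of that computation, assuming ResNet50 features from torchvision's ImageNet weights and standard preprocessing (the layer choice and preprocessing are assumptions, not necessarily the authors' setup):

```python
import numpy as np
import torch
from torchvision import models, transforms

# ResNet50 with the classifier head removed, used as a generic feature extractor.
resnet = models.resnet50(weights="IMAGENET1K_V1")
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(frame_rgb):
    """2048-d feature vector for a single RGB frame (H, W, 3 uint8 array)."""
    x = preprocess(frame_rgb).unsqueeze(0)
    return resnet(x).squeeze(0).numpy()

def cosine_distance(a, b):
    """Cosine distance between two feature vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```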

Noise in the Machine: Sources of Physical and Computation Error in Eye Tracking with Pupil Core Wearable Eye Tracker: Wearable Eye Tracker Noise in Natural Motion Experiments

Developments in wearable eye tracking devices make them an attractive solution for studies of eye movements during naturalistic head/body motion. However, before these systems’ potential can be fully realized, a thorough assessment of potential sources of error is needed. In this study, we examine three possible sources of error for the Pupil Core eye tracking goggles: camera motion during head/body motion, the choice of calibration marker configuration, and eye movement estimation. In our data, we find that up to 36% of reported eye motion may be attributable to camera movement, that the choice of an appropriate calibration routine is essential for minimizing error, and that a secondary calibration remapping eye position can reduce the eye position errors estimated from the eye tracker.
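The abstract does not describe the secondary calibration in detail; one common form for such a remapping is a low-order polynomial regression fitted from reported gaze positions to known validation-marker positions. A hedged sketch with illustrative names:

```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_remap(reported_gaze, true_targets, degree=2):
    """Fit a secondary calibration that remaps reported gaze positions onto
    known validation-marker positions (both N x 2 arrays, e.g. in degrees)."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(reported_gaze, true_targets)
    return model

# remap = fit_remap(reported, targets)
# corrected = remap.predict(new_gaze)   # apply to subsequently recorded gaze
```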

SESSION: COGAIN Symposium

Gaze Interactive and Attention Aware Low Vision Aids as Future Smart Glasses

We present a working paper on integrating eye tracking with mixed and augmented reality for the benefit of low vision aids. We outline the current state of the art and relevant research, and point to the further research and development required to adapt such aids to the individual user, environment, and current task. We outline key technical challenges and possible solutions, including calibration, dealing with varying eye data quality, and measuring and adapting image processing to low vision within current technical limitations, and we outline an experimental approach to designing data-driven solutions using machine learning and artificial intelligence.