Purpose: The purpose of this study was to assess the feasibility of detecting relative afferent pupillary defects (RAPDs) using a commercial virtual reality headset equipped with an eye tracker.
Methods: In this cross-sectional study, we compare the new computerized RAPD test with the clinical standard, the swinging flashlight test. Eighty-two participants, including 20 healthy volunteers, aged 10 to 88 years, were enrolled in this study. We present a bright/dark stimulus alternating between the eyes every 3 seconds in a virtual reality headset and simultaneously record changes in pupil size. To determine the presence of an RAPD, we developed an algorithm that analyzes these pupil size differences. As the gold standard for assessing both the automated and the manual measurement, a post hoc impression is formed from all available clinical information. Using confusion matrices, the accuracy of the manual clinical evaluation and of the computerized method is then compared against this gold standard.
Results: We found that the computerized method detected RAPDs with a sensitivity of 90.2% and an accuracy of 84.4%, as compared to the post hoc impression. This was not significantly different from the clinical evaluation, which had a sensitivity of 89.1% and an accuracy of 88.3%.
Conclusions: The presented approach offers an accurate, easy-to-use, and fast way to measure an RAPD. In contrast to today's clinical practice, the measurements are quantitative and objective.
Translational Relevance: Computerized testing for relative afferent pupillary defects (RAPDs) using a VR headset with eye tracking achieves performance non-inferior to that of senior neuro-ophthalmologists.
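The alternating bright/dark protocol lends itself to a simple decision rule. The sketch below is a minimal illustration, not the study's algorithm: the asymmetry threshold, function names, and sample values are all assumptions. It scores an RAPD from per-eye pupil constriction during the bright phases and derives the confusion-matrix summaries (sensitivity, accuracy) used to compare the automated test against the post hoc impression.

```python
# Hypothetical sketch of an RAPD decision rule for the alternating
# bright/dark protocol described above; the 0.25 asymmetry threshold
# and all names are illustrative assumptions, not the study's method.

def constriction(baseline_mm, lit_mm):
    """Relative pupil constriction while that eye is illuminated."""
    return (baseline_mm - lit_mm) / baseline_mm

def rapd_present(right, left, threshold=0.25):
    """Flag an RAPD when one eye constricts much less than the other.

    `right`/`left` are (baseline_mm, lit_mm) pairs averaged over the
    3-second bright phases for the respective eye.
    """
    c_r = constriction(*right)
    c_l = constriction(*left)
    denom = max(c_r, c_l)
    return denom > 0 and abs(c_r - c_l) / denom > threshold

def sensitivity_accuracy(tp, fp, fn, tn):
    """Confusion-matrix summaries used to compare test and clinician."""
    return tp / (tp + fn), (tp + tn) / (tp + fp + fn + tn)

# A right-eye defect: the right pupil barely constricts under light.
print(rapd_present(right=(6.0, 5.6), left=(6.0, 4.0)))  # True
```

A symmetric pair of responses (both eyes constricting by a similar fraction) would fall below the threshold and be classified as normal.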
Selection by progressive refinement allows the accurate acquisition of targets with small visual sizes while keeping the required precision of the task low. Using the eyes as a means to perform 3D selections is naturally hindered by the low accuracy of eye movements. To account for this low accuracy, we propose to use the concept of progressive refinement to allow accurate 3D selection. We designed a novel eye tracking selection technique with progressive refinement, Eye-controlled Sphere-casting refined by QUAD-menu (EyeSQUAD). We propose an approximation method to stabilize the calculated point-of-regard and a space partitioning method to improve computation. We evaluated the performance of EyeSQUAD in comparison to two previous selection techniques, ray-casting and SQUAD, under different target size and distractor density conditions. Results show that EyeSQUAD outperforms previous eye tracking-based selection techniques, is more accurate than and achieves similar selection speed to ray-casting, and is less accurate and slower than SQUAD. We discuss implications of designing eye tracking-based progressive refinement interaction techniques and provide a potential solution for multimodal user interfaces with eye tracking.
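The core of progressive refinement can be shown in a few lines. The sketch below is an illustration in the spirit of SQUAD/EyeSQUAD, not the paper's implementation: all objects hit by a coarse sphere-cast are split into four groups, and each low-precision pick (e.g. a glance at one quadrant of a QUAD menu) keeps only that group until a single target remains. The grouping rule and names are assumptions.

```python
# Illustrative sketch of selection by progressive refinement: each
# refinement step quarters the candidate set, so n candidates need
# about log4(n) low-precision picks instead of one high-precision one.
import math

def refine(candidates, pick_group):
    """Repeatedly quarter the candidates; `pick_group` simulates the
    user glancing at the quadrant that still contains the target."""
    steps = 0
    while len(candidates) > 1:
        quarter = math.ceil(len(candidates) / 4)
        groups = [candidates[i:i + quarter]
                  for i in range(0, len(candidates), quarter)]
        candidates = pick_group(groups)
        steps += 1
    return candidates[0], steps

# The simulated user always picks the group containing target id 57.
target = 57
picked, steps = refine(list(range(64)),
                       lambda gs: next(g for g in gs if target in g))
print(picked, steps)  # 57 reached in 3 refinements (log4 of 64)
```

This is why the technique tolerates low eye-tracking accuracy: each step only asks the user to distinguish four large menu quadrants, never to fixate a small target directly.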
This work introduces a FOVE Head Mounted Display (HMD) as a means of tracking eye movement in an immersive VR environment to determine where VR users are looking as they complete a pre-defined task. The aim is to determine its applicability to research into user distraction when designing human/machine interfaces. A virtual flight simulator environment is presented as a test case. The work has shown that FOVE eye tracking technology can record vector-based data to enable post-experimental analysis of where a user is looking in a given scenario. Levels of distraction can be determined from the differences between where users should be focusing in a given flight phase and what they are actually looking at. The work also demonstrates how CAD translation between the design and VR environments is key to the provision of a more realistic VR environment. Polygon-based models are more efficient for dynamic imaging in the VR environment than the NURBS-based models which form the basis for geometry constructed in CAD packages.
The recent popularity of consumer-grade virtual reality devices, such as the Oculus Rift, HTC Vive, and FOVE virtual reality headset, has enabled household users to experience highly immersive virtual environments. We take advantage of the commercial availability of these devices to provide a novel virtual reality-based driving training approach designed to help individuals improve their driving habits in common scenarios. Our approach first identifies improper driving habits of a user as he drives in a virtual city. It then synthesizes a pertinent training program to help improve the user's driving skills based on the discovered improper habits. To apply our approach, a user first goes through a pre-evaluation test from which his driving habits are analyzed. The analysis results drive an optimization that synthesizes a training program: a personalized route which includes different traffic events. When the user drives along this route via a driving controller and an eye-tracking virtual reality headset, the traffic events he encounters help him to improve his driving habits. To validate the effectiveness of our approach, we conducted a user study to compare our virtual reality-based driving training with other training methods. The results show that participants trained by our approach perform better on average than those trained by other methods in terms of evaluation score and response time, and their improvement is more persistent.
Virtual reality (VR) programs using head-mounted displays (HMD) give older adults the opportunity for unrestricted spatial exploration with greater movement. We conducted a single-blind randomized controlled trial to determine the beneficial effects of a non-goal-directed VR traveling program using an HMD on older adults living in an assisted living facility that also caters for people with dementia. Twenty-four participants, with an average age of 88.5 years, were randomly assigned to the VR program group or the control group. The VR group participated in three weekly VR travel sessions of 30 minutes each, for four weeks. The results showed that the VR group experienced improvement not only in simple visuospatial abilities but also in tasks involving executive function. There was also improvement in vertical and horizontal cervical spine range of motion. However, cervical range of motion in lateral bending was worse in the VR group than in the control group, suggesting a possible effect of the weight of the HMD. Training in wide visual exploration that improves cervical spine range of motion may provide an orientation opportunity to avoid falls among older adults.
We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user’s eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person’s residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
In this paper, we propose a specific gaze movement detection algorithm, which is necessary for implementing a gaze movement input interface using an HMD's built-in eye tracking system. Most input devices used in current virtual reality and augmented reality are hand-held controllers, hand gestures, head tracking, and voice input, despite the head-mounted form factor of the HMD. Therefore, in order to use eye movements as a hands-free input modality, we consider a gaze input interface that does not depend on the measurement accuracy of the eye tracking device. The proposed method assumes eye movement input, as distinct from the gaze position input typically implemented with an eye tracking system. Specifically, by using reciprocating eye movement in an oblique direction as an input channel, it aims to realize an input method that neither blocks the view with a screen display nor hinders the acquisition of other gaze meta-information. Moreover, the proposed algorithm was implemented in an HMD, and the detection accuracy of the round-trip eye movement was evaluated experimentally. Averaged across 5 subjects, the detection accuracy was 90%. The results show that the method is accurate enough to develop an input interface using eye movement.
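The oblique reciprocating gesture described above can be detected with very little machinery. The sketch below is a minimal illustration under stated assumptions, not the paper's algorithm: it projects gaze samples onto a 45-degree diagonal axis and counts direction reversals that cross an amplitude threshold; the axis, threshold, and reversal count are all assumptions.

```python
# Minimal sketch (not the paper's algorithm) of detecting a reciprocating
# gaze movement along an oblique axis: project gaze samples onto the
# 45-degree diagonal and count threshold-crossing direction reversals.
# The axis vector, amplitude threshold, and reversal count are assumed.

def is_reciprocation(samples, axis=(0.707, 0.707), amplitude=0.5, reversals=2):
    """True if the projected gaze swings past +/-amplitude at least
    `reversals` times, i.e. the eye travels out and back on the diagonal."""
    proj = [x * axis[0] + y * axis[1] for x, y in samples]
    crossings, state = 0, 0  # state: -1 at low extreme, +1 at high extreme
    for p in proj:
        if p > amplitude and state != 1:
            crossings += state != 0  # count a swing from the other extreme
            state = 1
        elif p < -amplitude and state != -1:
            crossings += state != 0
            state = -1
    return crossings >= reversals

# Gaze moves up-right, back down-left, and up-right again: detected.
swing = [(0, 0), (1, 1), (-1, -1), (1, 1)]
print(is_reciprocation(swing))  # True
```

Because only large back-and-forth excursions count, ordinary fixations and single glances along the diagonal are ignored, which is what makes the channel robust to tracker inaccuracy.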
We present a study comparing selection performance between three eye/head interaction techniques using the recently released FOVE head-mounted display (HMD). The FOVE offers an integrated eye tracker, which we use as an alternative to potentially fatiguing and uncomfortable head-based selection used with other commercial devices. Our experiment was modelled after the ISO 9241-9 reciprocal selection task, with targets presented at varying depths in a custom virtual environment. We compared eye-based selection and head-based selection (i.e., gaze direction) in isolation, and a third condition which used both eye-tracking and head-tracking at once. Results indicate that eye-only selection offered the worst performance in terms of error rate, selection times, and throughput. Head-only selection offered significantly better performance.
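Throughput, the summary measure reported for the ISO 9241-9 task above, is conventionally computed from effective width and effective distance. The sketch below shows the standard formulation (We = 4.133 × SD of selection endpoints, IDe = log2(De/We + 1), TP = IDe/MT); the sample data and function name are illustrative, not taken from the study.

```python
# Throughput for an ISO 9241-9 reciprocal selection task, using the
# common effective-measure formulation. Sample values are illustrative.
import math
import statistics

def throughput(distances, endpoints_x, movement_times):
    """Throughput (bits/s) over a sequence of selections.

    distances: target distance per trial; endpoints_x: signed selection
    offsets along the movement axis; movement_times: seconds per trial.
    """
    we = 4.133 * statistics.stdev(endpoints_x)   # effective width
    de = statistics.mean(distances)              # effective distance
    ide = math.log2(de / we + 1)                 # effective index of difficulty
    return ide / statistics.mean(movement_times)

tp = throughput(
    distances=[8.0, 8.0, 8.0, 8.0],
    endpoints_x=[0.1, -0.2, 0.3, -0.1],
    movement_times=[0.9, 1.1, 1.0, 1.0],
)
print(round(tp, 2))  # bits per second for this toy trial sequence
```

Because We grows with endpoint scatter, an inaccurate pointing technique (such as eye-only selection here) is penalized in throughput even when its raw movement times are competitive.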
We present two experiments evaluating the effectiveness of the eye as a controller for travel in virtual reality (VR). We used the FOVE head-mounted display (HMD), which includes an eye tracker. The first experiment compared seven different travel techniques to control movement direction while flying through target rings. The second experiment involved travel on a terrain: moving to waypoints while avoiding obstacles with three travel techniques. Results of the first experiment indicate that performance of the eye tracker with head-tracking was close to head motion alone, and better than eye-tracking alone. The second experiment revealed that completion times of all three techniques were very close. Overall, eye-based travel suffered from calibration issues and yielded much higher cybersickness than head-based approaches.