Pupillometry is an emerging, noninvasive technique that measures pupil reactivity and provides deeper insights into ophthalmic and neurologic function. Extended reality (XR) technology has also emerged with powerful clinical capabilities in various medical specialties, particularly in neuro-ophthalmology functional testing. This article explores the use of XR technology in pupillometry. XR encompasses various immersive technologies, including virtual reality, augmented reality, and mixed reality. By integrating eye-tracking technology into these systems, precise measurements of ocular movements, including pupil dilation and constriction, can be obtained. We propose the term “XR-Pupillometry” to describe this novel approach. Our manuscript discusses the potential benefits and limitations of XR-Pupillometry and highlights its applications in various fields, including psychology, neuroscience, space research, and health care. We also provide an overview of existing devices and software available for XR-Pupillometry. As this technology continues to evolve, it has the potential to revolutionize the way we understand human behavior and emotions.
In order to explore presenters' social presence, this study analyzed which elements of English presentation videos affected viewers' attention. Brain and eye movements of two university students were measured while they watched two presentation videos, and an interview was conducted to help interpret those data. The analysis showed an increase in cerebral blood flow in situations where the presenter expressed opinions, along with the importance of elaborating the structure of a presentation and of prosody. Furthermore, the eye-tracking and interview data revealed that gestures positively affected the presenters' social presence. However, when opinions were being stated, gaze was directed to the text rather than the presenter's face, suggesting that looking at the presenter's face did not necessarily lead to a meaningful improvement in social presence.
Aim
This study aimed to verify the usability of our newly developed virtual reality-based cognitive function examination (VR-E) to differentiate mild cognitive impairment (MCI) from normal cognition and mild dementia.
Method
The subjects of analysis were 71 people (26 males and 45 females, aged 59 to 94 years), including 31 with normal cognitive function, 26 with MCI, and 14 with mild dementia, classified according to the Clinical Dementia Rating (CDR). The total score and each cognitive domain score (memory, judgment, spatial cognition, calculation, language function) of the VR-E were compared among the CDR 0, CDR 0.5, and CDR 1 groups. In addition, for CDR 0 vs. CDR 0.5 and CDR 0.5 vs. CDR 1, the areas under the receiver operating characteristic (ROC) curves were examined using the total score and each cognitive domain score of the VR-E.
Results
There were significant differences among the three CDR groups in the VR-E total score as well as in each VR-E cognitive domain score. In the ROC analysis, the AUC for the VR-E total score was 0.71 between CDR 0 and 0.5, and 0.92 between CDR 0.5 and 1. For the individual VR-E cognitive domains, the AUC between CDR 0 and 0.5 was 0.63 for memory, 0.79 for judgment, 0.70 for spatial cognition, 0.62 for calculation, and 0.57 for language. Between CDR 0.5 and 1, the AUC values were 0.81 for memory, 0.75 for judgment, 0.82 for spatial cognition, 0.88 for calculation, and 0.86 for language.
Conclusion
The results suggest that the VR-E is useful for differentiating mild cognitive impairment from normal cognition and mild dementia.
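As a reading aid for the ROC analysis described in the abstract above, the sketch below shows how an AUC contrasting two CDR groups could be computed. It is a minimal illustration under stated assumptions, not the authors' pipeline: the VR-E scores are invented placeholders, and scikit-learn's roc_auc_score is assumed for the computation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical VR-E total scores for two CDR groups (placeholder values,
# not the study's data); lower scores are assumed to indicate impairment.
vre_cdr0 = np.array([88, 92, 85, 90, 79, 94, 87])   # CDR 0 (normal cognition)
vre_cdr05 = np.array([70, 82, 65, 75, 80, 68, 73])  # CDR 0.5 (MCI)

# Binary labels: 1 = MCI (CDR 0.5), 0 = normal cognition (CDR 0).
labels = np.concatenate([np.zeros(len(vre_cdr0)), np.ones(len(vre_cdr05))])
scores = np.concatenate([vre_cdr0, vre_cdr05])

# Negate the scores so that higher values point toward the impaired class,
# which is the orientation roc_auc_score expects.
auc = roc_auc_score(labels, -scores)
print(f"AUC for CDR 0 vs CDR 0.5: {auc:.2f}")
```

An AUC of 0.5 would mean the VR-E score separates the two groups no better than chance, while 1.0 would mean perfect separation, which is how the 0.71 and 0.92 values above should be read.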
Objective: To document (1) oculomotor (OM) and vestibulo-ocular (VO) function in children with concussion who were symptomatic at the time of assessment and to compare it with that in children with concussion who were clinically recovered (asymptomatic) and in children with no concussive injury, and (2) the extent to which OM and VO function relates to postconcussion symptom severity in injured children.
Setting: Participants were recruited from a concussion clinic or the community.
Participants: A total of 108 youth with concussion (72 symptomatic; 36 recovered) and 79 healthy youth (aged 9-18 years). Youth with concussion were included if they were aged 9 to 18 years, had no previous concussion within the last 12 months, were within 90 days of injury, and had no known visual disorders or learning disabilities.
Study design: A prospective cross-sectional study.
Main measures: All participants were tested for OM and VO function with a commercial virtual reality (VR) eye-tracking system (Neuroflex®, Montreal, Québec, Canada). Postconcussion symptoms in the concussion group were scored with the Post-Concussion Symptom Inventory.
Results: There was a significant group effect for vergence during smooth pursuit (F(2,176) = 10.90; P < .05), mean latency during saccades (F(2,171) = 5.99; P = .003), and mean response delay during antisaccades (F(2,177) = 9.07; P < .05), where children with symptomatic concussion showed poorer performance than clinically recovered and healthy children. Similar results were found in VO for average vestibulo-ocular reflex gain in the horizontal leftward (F(2,168) = 7; P = .001) and rightward directions (F(2,163) = 13.08; P < .05) and the vertical upward (F(2,147) = 7.60; P = .001) and downward directions (F(2,144) = 13.70; P < .05). Mean saccade error was positively correlated with total Post-Concussion Symptom Inventory scores in younger clinically recovered children.
Conclusion: VR eye tracking may be an effective tool for identifying OM and VO deficits in the subacute phase (<90 days) postconcussion.
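The group effects reported above come from F-tests across the three groups. A minimal sketch of that kind of comparison, assuming SciPy's f_oneway and entirely invented latency values (not the study's data), might look like this:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(seed=0)

# Hypothetical mean saccade latencies in ms for the three groups
# (placeholder distributions only, not the study's measurements).
symptomatic = rng.normal(260, 30, 40)  # symptomatic concussion
recovered = rng.normal(235, 30, 40)    # clinically recovered
healthy = rng.normal(230, 30, 40)      # no concussive injury

# One-way ANOVA across the three groups, analogous in form to the
# F(2, n) tests reported in the abstract above.
f_stat, p_value = f_oneway(symptomatic, recovered, healthy)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
```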
With improved portability and affordability, eye tracking devices have facilitated an expanding range of cycling experiments aimed at understanding cycling behavior and potential risks. Given the complexity of cyclists’ visual behavior and gaze measurements, we provide a comprehensive review with three key focuses: 1) the adoption and interpretation of various gaze metrics derived from cycling experiments, 2) a summary of the findings of those experiments, and 3) identifying areas for future research. A systematic review of three databases yielded thirty-five articles that met our inclusion criteria. Our review results show that cycling experiments with eye tracking allow analysis of the viewpoint of the cyclist and reactions to the built environment, road conditions, navigation behavior, and mental workload and/or stress levels. Our review suggests substantial variation in research objectives and the consequent selection of eye-tracking devices, experimental design, and which gaze metrics are used and interpreted. A variety of general gaze metrics and gaze measurements related to Areas of Interest (AOI) are applied to infer cyclists’ mental workload/stress levels and attention allocation, respectively. The diversity of gaze metrics reported in the literature makes cross-study comparisons difficult. Areas for future research, especially potential integration with computer vision, are also discussed.
Head-gaze interaction is an integral mode of interaction in virtual reality (VR) applications, demonstrating high precision in fine manipulation tasks but low efficiency in large-scale object movements. To enhance the efficiency of head-gaze interaction, this study adjusted the control-display gain to compensate for the weaknesses of head-gaze interaction in a long-distance object-positioning task. We investigated the effect of the control-display gain of head-gaze interaction on movement time (MT) using a cohort of participants (n = 24) to perform experiments. The results showed that the MT first decreased as the gain increased from 1 to 1.5 and then increased afterwards. Further analysis showed that a high gain improved the interaction efficiency in the ballistic phase but reduced the interaction efficiency in the corrective phase. To obtain higher interaction efficiency, we designed a dual-gain mode which set different gains in the ballistic and corrective phases. Evaluated using an additional experimental cohort (n = 24), our results showed that the dual-gain mode was more efficient than the mono-gain mode. Moreover, the dual-gain mode with optimal gains did not induce a more serious perception of inconsistency, confusion, nonacceptance, and motion sickness, while it had a tendency to reduce the total workload compared to the interaction with normal gain. Our findings provide potentially valuable design insights and guidance contributing to improving the efficiency of head-gaze interaction in virtual spaces.
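The dual-gain mode described above lends itself to a compact sketch. The function below is a hypothetical reconstruction, not the authors' implementation: the angular-speed threshold used to split the ballistic phase from the corrective phase, and the gain values themselves, are assumptions for illustration.

```python
def dual_gain_displacement(head_delta_deg: float,
                           head_speed_deg_s: float,
                           ballistic_gain: float = 1.5,
                           corrective_gain: float = 1.0,
                           speed_threshold_deg_s: float = 20.0) -> float:
    """Map a head-rotation increment to a cursor displacement.

    A fast head movement is treated as the ballistic phase and amplified;
    a slow movement is treated as the corrective phase and kept near 1:1.
    The threshold and gain values are illustrative assumptions.
    """
    if head_speed_deg_s >= speed_threshold_deg_s:
        return head_delta_deg * ballistic_gain   # ballistic: favor speed
    return head_delta_deg * corrective_gain      # corrective: favor precision

# Example: a 2-degree head rotation during a fast sweep vs. a slow
# final adjustment toward the target.
print(dual_gain_displacement(2.0, head_speed_deg_s=45.0))  # 3.0 (amplified)
print(dual_gain_displacement(2.0, head_speed_deg_s=5.0))   # 2.0 (near 1:1)
```

In practice the phase boundary could be detected from head angular speed, as assumed here, or from proximity to the target; the abstract itself only states that different gains were applied in the two phases.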
A person with impaired emotion recognition is not able to correctly identify facial expressions represented by other individuals. The aim of the present study is to assess eye gaze and facial emotion recognition in a healthy population using dynamic avatars in immersive virtual reality (IVR). For the first time, the viewing of each area of interest of the face in IVR is studied by gender and age. This work in healthy people is conducted to assess the future usefulness of IVR in patients with deficits in the recognition of facial expressions. Seventy-four healthy volunteers participated in the study. The materials used were a laptop computer, a game controller, and a head-mounted display. Dynamic virtual faces randomly representing the six basic emotions plus a neutral expression were used as stimuli. After the virtual human represented an emotion, a response panel was displayed with the seven possible options. Besides storing the hits and misses, the software program internally divided the faces into different areas of interest (AOIs) and recorded how long participants looked at each AOI. As regards the overall accuracy of the participants’ responses, hits decreased from the youngest to the middle-aged and older adults. Also, all three groups spent the highest percentage of time looking at the eyes, but younger adults had the highest percentage. It is also noteworthy that attention to the face compared to the background decreased with age. Moreover, the hits between women and men were remarkably similar and, in fact, there were no statistically significant differences between them. In general, men paid more attention to the eyes than women, but women paid more attention to the forehead and mouth. In contrast to previous work, our study indicates that there are no differences between men and women in facial emotion recognition. Moreover, in line with previous work, the percentage of face viewing time for younger adults is higher than for older adults. However, contrary to earlier studies, older adults look more at the eyes than at the mouth. Consistent with other studies, the eyes are the AOI with the highest percentage of viewing time. For men, the most viewed AOI is the eyes for all emotions, in both hits and misses. Women look more at the eyes for all emotions, except for joy, fear, and anger on hits. On misses, they look more at the eyes for almost all emotions except surprise and fear.
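The AOI dwell-time measure described in this abstract can be sketched in a few lines. This is a hypothetical reconstruction rather than the study's software: the AOI rectangles, the 90 Hz sampling rate, and the gaze coordinates are invented for illustration.

```python
from collections import defaultdict

# Hypothetical AOI rectangles in normalized face coordinates:
# (x_min, y_min, x_max, y_max). Values are illustrative only.
AOIS = {
    "eyes":     (0.25, 0.35, 0.75, 0.50),
    "forehead": (0.20, 0.05, 0.80, 0.30),
    "mouth":    (0.35, 0.65, 0.65, 0.80),
}

SAMPLE_DT = 1.0 / 90.0  # assumed 90 Hz gaze sampling interval in seconds

def aoi_dwell_times(gaze_samples):
    """Accumulate looking time per AOI from (x, y) gaze samples."""
    dwell = defaultdict(float)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += SAMPLE_DT
                break  # each sample counts toward at most one AOI
    return dict(dwell)

# Example: three samples landing on the eyes, one on the mouth.
samples = [(0.5, 0.4), (0.6, 0.45), (0.4, 0.42), (0.5, 0.7)]
print(aoi_dwell_times(samples))
```

Dividing each accumulated dwell time by the total trial duration would give the percentage-of-viewing-time figures that the abstract compares across age and gender groups.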
Hand and gaze pointers are effective visual communication cues for remote collaboration. In this paper, we studied the effect of combining them when two basic visual cues (hand gestures and sketches) are available. We conducted a user study with 24 participants, performing two tasks (Tangram and Origami) under four experimental conditions with two independent variables (added hand and gaze pointers) when hand gesture and sketch cues were available as the baseline. The results demonstrated that the added hand and gaze pointer cues improved the co-presence and reduced the task load for remote experts. Additionally, the added gaze pointer cue was effective in fostering behavioral interdependence and message understanding. Participants most preferred the condition of combining all cues, followed by the condition with added gaze pointer, which resulted in quicker task completion and more accurate communication between collaborators. These findings suggest that adding hand and gaze pointer cues can significantly enhance the user experience in remote collaboration.
Virtual reality (VR) has evolved substantially beyond its initial remit of gaming and entertainment, catalyzed by advancements such as improved screen resolutions and more accessible devices. Among various interaction techniques introduced to VR, eye-tracking stands out as a pivotal development. It not only augments immersion but offers a nuanced insight into user behavior and attention. This precision in capturing gaze direction has made eye-tracking instrumental for applications far beyond mere interaction, influencing areas like medical diagnostics, neuroscientific research, educational interventions, and architectural design, to name a few. Though eye-tracking’s integration into VR has been acknowledged in prior reviews, its true depth, spanning the intricacies of its deployment to its broader ramifications across diverse sectors, has been sparsely explored. This survey undertakes that endeavor, offering a comprehensive overview of eye-tracking’s state of the art within the VR landscape. We delve into its technological nuances, its pivotal role in modern VR applications, and its transformative impact on domains ranging from medicine and neuroscience to marketing and education. Through this exploration, we aim to present a cohesive understanding of the current capabilities, challenges, and future potential of eye-tracking in VR, underscoring its significance and the novelty of our contribution.