Rotation gains in Virtual Reality (VR) enable the exploration of Virtual Environments (VEs) that are wider than the physical workspace available in VR setups. The perception of these gains has therefore been explored under multiple experimental conditions in order to improve redirected navigation techniques. While most studies consider rotations in which participants rotate at the pace they desire but without translational motion, little is known about the potential impact of translational and rotational motion on the perception of rotation gains. In this paper, we estimated the influence of these motions and compared the perceptual thresholds of rotation gains through a user study (n = 14), in which participants had to perform virtual rotation tasks at a constant rotation speed. Participants had to determine whether their virtual rotation speed was faster or slower than their real one. We varied the translational optical flow (static or forward motion), the rotational speed (20, 30, or 40 deg/s), and the rotational gain (from 0.5 to 1.5). The main results are that rotation gains are less perceivable at lower rotation speeds and that translational motion makes detection more difficult at lower rotation speeds. Furthermore, the paper provides insights into users' gaze and body motion behaviour when exposed to rotation gains. These results contribute to the understanding of the perception of rotation gains in VEs and are discussed with a view to improving the implementation of rotation gains in redirection techniques.
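To make the gain manipulation concrete, the sketch below (Python, with hypothetical variable names and a per-frame update assumed; not the study's actual implementation) shows how a rotation gain scales the user's physical yaw change before it is applied to the virtual camera.

```python
def apply_rotation_gain(prev_real_yaw_deg, real_yaw_deg, virtual_yaw_deg, gain):
    """Map a physical head-yaw change onto the virtual camera using a rotation gain.

    A gain of 1.0 reproduces the real rotation exactly; gains in the 0.5-1.5
    range studied here attenuate or amplify the virtual rotation.
    """
    real_delta = real_yaw_deg - prev_real_yaw_deg   # physical yaw change this frame
    return virtual_yaw_deg + gain * real_delta      # updated virtual yaw

# Example: a 10 deg physical turn under a gain of 1.2 becomes a 12 deg virtual turn.
print(apply_rotation_gain(0.0, 10.0, 0.0, 1.2))  # -> 12.0
```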
Although virtual reality has been gaining in popularity, users continue to report discomfort during and after use of VR applications, and many experience symptoms associated with motion sickness. To mitigate this problem, dynamic field-of-view restriction is a common technique that has been widely implemented in commercial VR games. Although artificially reducing the field-of-view during movement can improve comfort, the standard restrictor is typically implemented using a symmetric circular mask that blocks imagery in the periphery of the visual field. This reduces users' visibility of the virtual environment and can negatively impact their subjective experience. In this paper, we propose and evaluate a novel asymmetric field-of-view restrictor that maintains visibility of the ground plane during movement. We conducted a remote user study that sampled from the population of VR headset owners. The experiment used a within-subjects design that compared the ground-visible restrictor, the traditional symmetric restrictor, and a control condition without FOV restriction. Participation required navigating through a complex maze-like environment using a controller during three separate virtual reality sessions conducted at least 24 hours apart. Results showed that ground-visible FOV restriction offers benefits for user comfort, postural stability, and subjective sense of presence. Additionally, we found no evidence of drawbacks to maintaining visibility of the ground plane during FOV restriction, suggesting that the proposed technique is superior for experienced users compared to the widely used symmetric restrictor.
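As an illustration of how the two restrictors differ, the sketch below is a simplified per-pixel visibility test (assumed coordinate convention, names, and thresholds; not the paper's implementation): both variants hide imagery beyond an angular limit, while the ground-visible variant additionally keeps rays that point down toward the floor.

```python
import numpy as np

def restrictor_visibility(view_dir, fov_limit_deg, ground_visible):
    """Return True if a pixel's view ray remains visible under FOV restriction.

    view_dir is a normalized direction in head space (x right, y up, z forward),
    an assumed convention. The symmetric restrictor hides everything beyond the
    angular limit; the ground-visible variant also keeps downward-pointing rays
    so the floor stays in view during movement.
    """
    eccentricity = np.degrees(np.arccos(np.clip(view_dir[2], -1.0, 1.0)))
    inside_fov = eccentricity <= fov_limit_deg / 2.0
    points_at_ground = view_dir[1] < -0.2   # hypothetical downward threshold
    return inside_fov or (ground_visible and points_at_ground)
```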
In this article we investigate the potential of using visuo-haptic illusions in a Virtual Reality (VR) environment to learn motor skills that can be applied in a real environment. We report on an empirical study in which 20 participants performed a multi-object pick-and-place task. The results show that although users do not perform the same motion trajectories in the virtual and real environments, skills acquired in VR augmented with visuo-haptic illusions can be successfully reused in a real environment: skill transfer was high (78.5%), similar to that obtained in an optimal real training environment (82.4%). Finally, participants did not notice the illusion and were enthusiastic about the VR environment. Our findings invite designers and researchers to consider visuo-haptic illusions to help operators learn motor skills in a cost-effective environment.
We present Adaptic, a novel “hybrid” active/passive haptic device that can change shape to act as a proxy for a range of virtual objects in VR. We use Adaptic with haptic retargeting to redirect the user's hand, providing haptic feedback for several virtual objects within arm's reach using only a single prop. To evaluate the effectiveness of Adaptic with haptic retargeting, we conducted a within-subjects experiment employing a docking task to compare Adaptic to non-matching proxy objects (i.e., Styrofoam balls) and matching shape props. In our study, Adaptic sat on a desk in front of the user and changed shape between grasps to provide matching tactile feedback for various virtual objects placed in different virtual locations. Results indicate that the illusion was convincing: users felt they were manipulating several virtual objects in different virtual locations with a single Adaptic device. Docking performance (completion time and accuracy) with Adaptic was comparable to that of props without haptic retargeting.
The ability to co-opt everyday surfaces for touch interactivity has been an area of HCI research for several decades. Ideally, a sensor operating in a device (such as a smart speaker) would be able to provide a whole room with touch sensing capabilities. Such a system could allow for software-defined light switches on walls, gestural input on countertops, and, in general, more digitally flexible environments. While advances in depth sensors and computer vision have led to step-function improvements in the past, progress has slowed in recent years. We surveyed the literature and found that the very best ad hoc touch sensing systems are able to operate at ranges up to around 1.5 m. This limited range means that sensors must be carefully positioned in an environment to enable specific surfaces for interaction. In this research, we set ourselves the goal of doubling the sensing range of the current state-of-the-art system. To achieve this goal, we leveraged an interesting finger “denting” phenomenon and adopted a marginal-gains philosophy when developing our full stack. Put together, these many small improvements compound and yield a significant stride in performance. At 3 m range, our system offers a spatial accuracy of 0.98 cm with a touch segmentation accuracy of 96.1%, in line with prior systems operating at less than half the range. While more work remains to be done to achieve true room-scale ubiquity, we believe our system constitutes a useful advance over prior work.
We present a method to track a smartphone in VR using a fiducial marker displayed on the screen. Using the WebRTC transmission protocol, we capture smartphone touchscreen input and the screen contents, copying them to a virtual representation in VR. We present two Fitts' law experiments assessing the performance of selecting targets displayed on the virtual smartphone screen using this method. The first compares direct vs. indirect input (i.e., the virtual smartphone co-located with the physical smartphone, or not), and reveals no difference in performance due to input indirection. The second experiment assesses the influence of input scaling, i.e., decoupling the virtual cursor from the actual finger position on the smartphone screen so as to provide a larger virtual tactile surface. Results indicate a small effect for extreme scale factors. We discuss implications for the use of smartphones as input devices in VR.
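For context, Fitts' law experiments typically quantify target difficulty as ID = log2(D/W + 1) for movement distance D and target width W. The input-scaling manipulation from the second experiment can be sketched as follows (Python, with hypothetical names and a centre-anchored mapping assumed rather than taken from the paper): the virtual cursor is offset from the physical touch point by a scale factor, so a scale above 1.0 turns the small physical screen into a larger virtual tactile surface.

```python
def scaled_cursor_position(touch_xy, screen_center_xy, scale):
    """Map a physical touch point to a virtual cursor position with input scaling.

    scale = 1.0 keeps the virtual cursor co-located with the finger; larger
    values let small finger motions cover a larger virtual tactile surface.
    The centre-anchored mapping and names are illustrative assumptions.
    """
    dx = touch_xy[0] - screen_center_xy[0]
    dy = touch_xy[1] - screen_center_xy[1]
    return (screen_center_xy[0] + scale * dx,
            screen_center_xy[1] + scale * dy)

# Example: with scale 2.0, a touch 10 px right of centre maps 20 px right.
print(scaled_cursor_position((170, 300), (160, 300), 2.0))  # -> (180, 300)
```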
An increasing number of consumer-oriented head-mounted displays (HMDs) for augmented and virtual reality (AR/VR) are capable of finger and hand tracking. We report on the accuracy of off-the-shelf VR and AR HMDs when used for touch-based tasks such as pointing or drawing. Specifically, we report on the finger tracking accuracy of the VR head-mounted displays Oculus Quest and Vive Pro and of the Leap Motion controller when attached to a VR HMD, as well as the finger tracking accuracy of the AR head-mounted displays Microsoft HoloLens 2 and Magic Leap One. We present the results of two experiments in which we compare the accuracy for absolute and relative pointing tasks using both human participants and a robot. The results suggest that the Vive Pro has lower spatial accuracy than the Oculus Quest and Leap Motion, and that the Microsoft HoloLens 2 provides higher spatial accuracy than the Magic Leap One. These findings can serve as decision support for researchers and practitioners in choosing which systems to use in the future.
While virtual reality games have shown great potential for rehabilitation, research on creative virtual therapy is still growing. Given the many possibilities for therapeutic interventions in VR, we can create activities that appropriately balance the intensity of the therapeutic intervention with enjoyment and entertainment. In this paper, we propose a creative line art drawing game in an immersive VR environment as a tool for enjoyable upper extremity physical therapy. To examine the validity of the proposed virtual therapy system, we conducted a human-subjects experiment with a mixed design varying the drawing content (Easy vs. Hard; a between-subjects factor) and the user's position (Seated vs. Standing; a within-subjects factor). Our results with 16 non-clinical participants (8 female) show that the change of drawing content objectively influenced drawing performance, e.g., completion time and number of mistakes, while participants did not subjectively perceive a difference in the difficulty level between the contents. Interestingly, participants reported more enjoyment from drawing the Hard content than the Easy content, and more substantial body stretches in the Seated setting than in the Standing setting. Here, we present the main effects of the study factors and the correlations among the objective and subjective measures, while discussing implications of the findings in the context of enjoyable and customizable physical therapy using the creative VR drawing game.
Large, high-resolution displays (LHRDs) are known to be well suited to supporting visual analysis in ways that surpass desktop/laptop displays. This paper describes our work on SageXR, an initial effort to determine the extent to which physical displays can be replaced by head-mounted displays (HMDs) to support the visual analysis process from start to finish. Findings from studies of veteran LHRD users showed that: HMD workspaces have future potential for providing the benefits of an LHRD; all participants used at least twice as much virtual space as physical space, indicating that, given the opportunity, users desire and are able to make meaningful use of more working space; the virtual viewing area used by a single participant was equivalent to that of physical LHRDs; and while participants tended to surround themselves with data, most actually wanted a virtualized LHRD.
Virtual Reality (VR) remote collaboration is becoming more and more relevant in a wide range of scenarios, such as remote assistance or group work. One way to enhance the user experience is to use haptic props that make virtual objects graspable. However, physical objects are only present in one location and cannot be manipulated directly by remote users. We explore different strategies for handling ownership of virtual objects enhanced by haptic props, in particular two strategies: SingleOwnership and SharedOwnership. SingleOwnership restricts virtual objects to local haptic props, while SharedOwnership allows collaborators to take over ownership of virtual objects using local haptic props. We study both strategies in a collaborative puzzle task with regard to their influence on performance and user behavior. Our findings show that SingleOwnership increases communication and, enhanced with virtual instructions, results in higher task completion times. SharedOwnership is less reliant on verbal communication and faster, but there is less social interaction between the collaborators.
Exploring the design space of configurations for objects in virtual scenes is a challenge within virtual reality authoring tools due to the lack of visualization capabilities, non-destructive operations, suggestions, and flexibility. This work introduces Attribute Spaces, tools for visualizing and manipulating object attributes in virtual reality during 3D content generation. Attribute Spaces enable designers to systematically explore design spaces by supporting rapid comparisons between design alternatives and offering design suggestions. Custom combinations of attributes can be grouped and manipulated simultaneously for several objects. The grouping supports the creation of custom operation combinations that can be used as tools to edit multiple attributes, as well as snapshots of promising design decisions for later review. In an evaluation of Attribute Spaces by 3D design experts, our approach was found to enhance users' understanding of their design space exploration progress and showed promise for integration into existing 3D workflows.
Visual cues are essential in computer-mediated communication. They are especially important when communication happens in a collaboration scenario that requires focusing several users' attention on a specific object among other similar ones. This paper explores the effect of visual cues on pointing tasks in co-located Augmented Reality (AR) collaboration. A user study (N = 32, 16 pairs) was conducted to compare two types of visual cues: Pointing Line (PL) and Moving Track (MT). Both are head-based visual techniques. Through a series of collaborative pointing tasks on objects with different states (static and dynamic) and density levels (low, medium and high), the results showed that PL was better in task performance and usability, but MT was rated higher on social presence and user preference. Based on our results, design implications are provided for pointing tasks in co-located AR collaboration.
Manipulating objects in immersive virtual environments has been studied for decades, but is still considered a challenging problem. Current methods for manipulating distant objects still pose issues such as a lack of precision and accuracy, and only a limited set of transformations is supported. In this paper, we propose three metaphors: a near-field widget-based metaphor and two near-field metaphors with scaled replica, one unimanual and the other bimanual. The widget-based metaphor is an extension of Widgets [20], but supports translation, scaling (anchored scaling), and rotation with multi-level DOF (degree-of-freedom) separation. The near-field metaphors with scaled replica can take advantage of the finer motion control and sharper vision available in arm-reach manipulations, and thus increase manipulation precision. Moreover, manipulating the replica via the bounding box's primitives makes it possible to support translation, scaling (anchored scaling), and rotation with multi-level DOF separation, and also leads to an intuitive interface. The support of multi-level DOF separation may increase manipulation precision and offer more manipulation flexibility as well. We conducted a between-subjects empirical study with 51 participants to compare the three metaphors in terms of effectiveness measures, user experience, and user impression. The findings revealed that the unimanual metaphor with scaled replica (UMSR) yielded the highest efficiency. The widget-based metaphor was slower, and the bimanual metaphor with scaled replica (BMSR) yielded lower movement economy and, in one case, was less accurate. However, subjective impressions were most favorable for the bimanual metaphor with scaled replica (BMSR). Thus, there may be some discrepancy between perceived comfort and user performance inherent in near-field bimanual scaled-replica interactions.
Future augmented reality (AR) glasses may provide pervasive and continuous access to everyday information. However, it remains unclear how to address the issue of virtual information overlaying and occluding real-world objects and information that are of interest to users. One approach is to keep virtual information sources inactive until they are explicitly requested, so that the real world remains visible. In this research, we explored the design of interaction techniques with which users can activate virtual information sources in AR. We studied this issue in the context of Glanceable AR, in which virtual information resides at the periphery of the user’s view. We proposed five techniques and evaluated them in both sitting and walking scenarios. Our results demonstrate the usability, user preference, and social acceptance of each technique, as well as design recommendations to achieve optimal performance. Our findings can inform the design of lightweight techniques to activate virtual information displays in future everyday AR interfaces.
We evaluated a system (VizSpace) that extends the conventional touch table interface by decoupling the display and placing it above the touch surface to create an interaction volume beneath the display. This physically situated setup enables touch and hand interactions beneath the display, allowing users to reach inside with their hands and interact in a 3D virtual workspace. This paper presents an empirical investigation of the performance and usability of such a system. Participants were asked to perform an object translation task comparing 2D drag with 3D grasp interaction techniques while also varying the height of the interaction volume. Results suggest that participants completed the task faster and more accurately with the 2D drag interaction mode than with the 3D grasp interaction mode. More importantly, the size of the interaction volume did not affect task performance, but system usability and subjective perception were ranked lower for the smallest interaction volume, which had the clearest viewing area. Results suggest that a larger interaction area behind the screen is more important than a clearer viewing area.
Exploring unfamiliar devices and interfaces through trial and error can be challenging and frustrating. Existing video tutorials require frequent context switching between the device showing the tutorial and the device being used. While augmented reality (AR) has been adopted to create user manuals, many are inflexible for diverse tasks, and usually require programming and AR development experience. We present TutorialLens, a system for authoring interactive AR tutorials through narration and demonstration. To use TutorialLens, authors demonstrate tasks step-by-step while verbally explaining what they are doing. TutorialLens automatically detects and records 3D finger positions and guides authors to capture important changes of the device. Using the created tutorials, TutorialLens then provides AR visual guidance and feedback for novice device users to complete the demonstrated tasks. TutorialLens is automated, friendly to users without AR development experience, and applicable to a variety of devices and tasks.
Smart devices and Internet of Things (IoT) technologies are replacing or being incorporated into traditional devices at a growing pace. The use of digital interfaces to interact with these devices has become a common occurrence in homes, work spaces, and various industries around the world. The most common interfaces for these connected devices focus on mobile apps or voice control via intelligent virtual assistants. However, with augmented reality (AR) becoming more popular and accessible among consumers, there are new opportunities for spatial user interfaces to seamlessly bridge the gap between digital and physical affordances.
In this paper, we present a human-subject study evaluating and comparing four user interfaces for smart connected environments: gaze input, hand gestures, voice input, and a mobile app. We assessed participants’ user experience, usability, task load, completion time, and preferences. Our results show multiple trade-offs between these interfaces across these measures. In particular, we found that gaze input shows great potential for future use cases, while both gaze input and hand gestures suffer from limited familiarity among users, compared to voice input and mobile apps.
In this work we explore a concept system that alters the user's virtual eye movements without their awareness, and investigate whether this can affect social attention among others. Our concept augments the real eye movements with subtle redirected gazes toward other people, occurring at intervals so as to remain unnoticed. We present a user study with groups of people conversing on a topic and measure the level of visual attention among users. Compared to a baseline of natural eye movements, we find that the method indeed affected the overall attention in the group, but in unexpected ways. Our work points to a new way to exploit the inherent role of eyes in social virtual reality.
Multimodal data offers great opportunities for research and interaction in virtual/augmented reality (VR/AR) experiences when measuring human behavior. However, it is challenging to collect, coordinate, and synchronize high volumes of data while preserving high frame rates and quality. Lab Streaming Layer (LSL) is an open-source framework that allows various types of multimodal data to be collected synchronously. In this work, we push the boundaries of the LSL framework by introducing an open-Source MultimOdal framewOrk for Tracking Hardware (SMOOTH), an LSL-based middleware. Built on top of the LSL framework, SMOOTH supports real-time data collection and streaming from VR/AR hardware that LSL does not currently support, such as the Microsoft Azure Kinect and Windows Mixed Reality headsets and controllers. We also conducted a preliminary qualitative evaluation to understand the effectiveness of SMOOTH and how well it performed on the task of collecting synchronized image, depth, infrared, audio, and 3D motion tracking data.
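As a point of reference for the kind of streams involved, the snippet below is a minimal pylsl sketch of publishing one time-stamped pose stream through LSL; the stream name, channel layout, and sampling rate are illustrative assumptions and not part of SMOOTH's actual interface.

```python
# Minimal LSL publishing sketch using pylsl; names and channel layout are assumed.
from pylsl import StreamInfo, StreamOutlet, local_clock

# Declare a 7-channel pose stream (x, y, z position + quaternion) at 90 Hz.
info = StreamInfo(name="HeadsetPose", type="MoCap", channel_count=7,
                  nominal_srate=90, channel_format="float32",
                  source_id="hmd_pose_demo")
outlet = StreamOutlet(info)

def push_pose(position, rotation_quat):
    """Push one pose sample with an LSL timestamp for later synchronization."""
    outlet.push_sample(list(position) + list(rotation_quat), local_clock())
```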
Teleporting is a popular interface for locomotion through virtual environments (VEs). However, teleporting can cause disorientation. Spatial boundaries, such as room walls, are effective cues for reducing disorientation. This experiment explored the characteristics that make a boundary effective. All boundaries tested reduced disorientation, and boundaries representing navigational barriers (e.g., a fence) were no more effective than those defined only by texture changes (e.g., flooring transition). The findings indicate that boundaries need not be navigational barriers to reduce disorientation, giving VE designers greater flexibility in the spatial cues to include.
Floor vibration, a type of whole-body tactile stimulation, could mitigate cybersickness during virtual reality (VR) exposure. This study aims to further investigate its effects on cybersickness, as well as on presence and emotional arousal, by introducing floor vibration as a proxy for representing different virtual ground surfaces. For the investigation, a realistic walking-on-the-beach scenario was implemented, and floor vibrations were introduced in synchrony with the user's footsteps. Three conditions were designed based on the same scenario with different floor vibrations. A user study involving 26 participants found no significant difference in presence or cybersickness across the three conditions, but the introduction of floor vibration (regardless of the vibration type) had a mixed impact on emotional arousal, as measured by changes in pupil size and skin conductance. Also, participants generally preferred the matched vibration most.
We present a remote longitudinal experiment to assess the effectiveness of a common motion sickness conditioning technique (MSCT), the Puma method, on cybersickness in VR. Our goal was to evaluate the benefits of conditioning techniques as an alternative to visual cybersickness reduction methods (e.g., viewpoint restriction) or habituation approaches, which “train” the user to become acclimatized to cybersickness. We compared three techniques - habituation, the Puma method conditioning exercise, and a placebo (Tai Chi) - in a cybersickness-inducing navigation task over 10 sessions. Preliminary results indicate promising effects.
With recent advances in augmented reality (AR) and computer vision, it has become possible to magnify objects in real time in a user's field of view. AR object magnification can serve different purposes, such as enhancing human visual capabilities with the BigHead technique, which up-scales human heads to communicate important facial cues over longer distances. For this purpose, we created a prototype with a 4K camera mounted on a HoloLens 2. In this demo, we present the BigHead technique and a proof-of-concept AR testbed to magnify heads in real time. Further, we describe how hand gestures are detected to control the scale and position of the magnified head. We discuss the technique and implementation, and propose future research directions.
In this study, we propose a method for integrating a questionnaire into a VR experience. The results of a comparative experiment show that our proposed method can alleviate the effect of the transition from the VR experience to the questionnaire and maintain presence in VR until the measurement.
Smartphone applications that allow users to enjoy playing musical instruments have emerged, opening up numerous related opportunities. However, it is difficult for deaf and hard of hearing (DHH) people to use these apps because of limited access to auditory information. When using real instruments, DHH people typically feel the music through the vibrations transmitted by the instruments or through the movements of their body, which is not possible when playing with these apps. We introduce “smartphone drum,” a smartphone application that presents a drum-like vibrotactile sensation when the user makes a drumming motion in the air, holding their smartphone like a drumstick. We implemented an early prototype and received feedback from six DHH participants. We discuss the technical implementation and the future of new vibration-based instruments.
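One plausible way to trigger the vibrotactile feedback, sketched below in platform-agnostic Python with an assumed acceleration threshold (the actual app would rely on the phone's motion and vibration APIs, and its real detection logic is not described here), is to treat a sharp peak in acceleration along the swing axis as a drum strike.

```python
def detect_strike(accel_history, threshold=15.0):
    """Return True when the latest samples mark a drum 'strike' (assumed heuristic).

    accel_history: recent acceleration magnitudes (m/s^2) along the swing axis.
    A strike is a local peak exceeding the threshold, mimicking the sudden
    deceleration at the end of an in-air drumming motion.
    """
    if len(accel_history) < 3:
        return False
    prev, peak, curr = accel_history[-3:]
    return peak > threshold and peak > prev and peak > curr
```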
Educational VR may help students by being more engaging or improving retention compared to traditional learning methods. However, a student can get distracted in a VR environment due to stress, mind-wandering, unwanted noise, external alerts, etc. Student eye gaze can be useful for detecting these distractions. We explore deep-learning-based approaches to detect distraction from gaze data. We designed an educational VR environment and trained three deep learning models (CNN, LSTM, and CNN-LSTM) to gauge a student's distraction level from gaze data, using both supervised and unsupervised learning methods. Our results show that supervised learning provided better test accuracy than unsupervised learning methods.
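For illustration, the following is a minimal PyTorch sketch of the LSTM variant, with assumed layer sizes and gaze features (not the authors' architecture), mapping a window of gaze samples to a distraction prediction.

```python
# Minimal LSTM distraction classifier sketch; layer sizes and features are assumed.
import torch
import torch.nn as nn

class GazeLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features) gaze window
        _, (h_n, _) = self.lstm(x)    # final hidden state summarizes the window
        return self.head(h_n[-1])     # logits: distracted vs. attentive

# Example: a batch of 8 windows, each 90 samples of (gaze_x, gaze_y, pupil_l, pupil_r).
logits = GazeLSTM()(torch.randn(8, 90, 4))
```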
Commercial VR systems typically include a headset and two motion controllers. From this setup, we have access to the user's head and hands, but lack information about other parts of the user's body without additional equipment. Accurate positions of other body parts, such as the waist, would expand the user's interaction space. In this paper, we describe our efforts at using machine learning to predict the position and rotation of the user's waist using only the headset and two motion controllers, with an additional tracker at the waist used for training.
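As a rough illustration of the learning problem only (not the model described in the paper), a small feed-forward regressor could map the three tracked poses to a waist pose; the input/output layout and layer sizes below are assumptions.

```python
# Sketch of a waist-pose regressor in PyTorch; architecture and I/O layout are assumed.
import torch.nn as nn

waist_regressor = nn.Sequential(
    nn.Linear(21, 128),   # 3 tracked devices x (3-D position + 4-D quaternion) = 21 inputs
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, 7),    # predicted waist position (3) + rotation quaternion (4)
)
```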
In this paper, we discuss our implementation of a gesture-based 3-dimensional typing system in virtual reality. Rather than the conventional point-and-click keyboard commonly found in immersive technologies, we explore the use of unique controller gestures to enter specific keys. To map these gestures and movements to their respective keys, we use machine learning techniques rather than naive hard-coded implementations. The result of the trained model is a text input system that adapts to the user's gestures, rather than forcing the user to conform to the system's definition of a gesture. Our goal is to work toward a viable alternative to standard virtual reality keyboards and typing systems.
We introduce the option for users to select their preferred driving velocity in the context of autonomous driving and evaluate whether this increases user acceptance. While the actual driving style does not differ, adjustments are made to the visualisation and sound of the interior of an autonomous vehicle simulator. These adjustments mimic the preference for a faster or slower driving style selected by the user. The experimental results show (1) that the perception of control and safety can increase when introducing customisation options and different feedback modalities and (2) that users perceive a change in driving style simply due to visual and auditory modifications, even though the vehicle's actual driving does not change.
Peripersonal Equipment Slots are locations for placing equipment that are egocentric to the user but do not reside within the user's personal space. Instead, these slots reside within arm's reach of the user, a region known as peripersonal space. In this paper, we present our initial approach and results from our attempt to realize peripersonal equipment slots through the use of machine learning.