We present DualVib, a compact handheld device that simulates the haptic sensation of manipulating dynamic mass, i.e., mass that produces haptic feedback as the user’s hand moves (e.g., shaking a jar and feeling coins rattling inside). Unlike other devices that require actual displacement of weight, DualVib dispenses with heavy and bulky mechanical structures and instead uses four vibration actuators. DualVib simulates a dynamic mass by simultaneously delivering two types of haptic feedback to the user’s hand: (1) pseudo-force feedback created by asymmetric vibrations, which renders the kinesthetic force arising from the moving mass; and (2) texture feedback through acoustic vibrations, which renders the object’s surface vibrations correlated with the mass’s material properties. In a user study, we found that DualVib allowed users to distinguish dynamic masses more effectively than either pseudo-force or texture feedback alone. We also report qualitative feedback from users who experienced five virtual reality applications with our device.
Grounded haptic devices can provide a variety of forces but have limited working volumes. Wearable haptic devices operate over a large volume but are relatively restricted in the types of stimuli they can generate. We propose the concept of docking haptics, in which different types of haptic devices are dynamically docked at run time. This creates a hybrid system, where the potential feedback depends on the user’s location. We show a prototype docking haptic workspace, combining a grounded six degree-of-freedom force feedback arm with a hand exoskeleton. We are able to create the sensation of weight on the hand when it is within reach of the grounded device, but away from the grounded device, hand-referenced force feedback is still available. A user study demonstrates that users can successfully discriminate weight when using docking haptics, but not with the exoskeleton alone. Such hybrid systems would be able to change configuration further, for example docking two grounded devices to a hand in order to deliver twice the force, or extend the working volume. We suggest that the docking haptics concept can thus extend the practical utility of haptics in user interfaces.
Accurate sketching in virtual 3D environments is challenging due to aspects like limited depth perception or the absence of physical support. To address this issue, we propose VRSketchPen – a pen that uses two haptic modalities to support virtual sketching without constraining user actions: (1) pneumatic force feedback to simulate the contact pressure of the pen against virtual surfaces and (2) vibrotactile feedback to mimic textures while moving the pen over virtual surfaces. To evaluate VRSketchPen, we conducted a lab experiment with 20 participants to compare (1) pneumatic, (2) vibrotactile and (3) a combination of both with (4) snapping and no assistance for flat and curved surfaces in a 3D virtual environment. Our findings show that pneumatic feedback, vibrotactile feedback, and their combination significantly improve 2D shape accuracy and reduce depth errors for flat and curved surfaces. Qualitative results indicate that users find that the addition of unconstraining haptic feedback significantly improves convenience, confidence and user experience.
Avatars in virtual reality (VR) can have body structures that differ from the physical self. Game designers, for example, often stylize virtual characters by reducing the number of fingers. Previous work found that the sensation of presence in VR depends on avatar realism and the number of limbs. However, it is currently unknown how the removal of individual fingers affects the VR experience, body perception, and how fingers are used instead. In a study with 24 participants, we investigate the effects of missing fingers and avatar realism on presence, phantom pain perception, and finger usage. Our results show that missing index fingers in particular decrease presence, elicit the highest phantom pain ratings, and significantly change hand interaction behavior. We found that relative usage of the thumb and index fingers, in contrast to middle, ring, and little finger usage, was higher with abstract hands than with realistic ones – even when the fingers were missing. We assume that dominant fingers are the first to be integrated into one’s own body schema when an avatar does not resemble one’s own appearance. We discuss cognitive mechanisms in experiencing virtual limb loss.
Indigenous people (IP) living in remote areas, at the margins of mainstream society, are often the last to experience emerging technologies, and even less often to shape those experiences. It could be argued that technology exposure and experience are necessary for IP to gain agency in making informed decisions on the rejection or appropriation of novel technologies. In this paper, VR is introduced to a remote San community within a broader community-based research collaboration considering political and ethical perspectives of technology inclusion. The intent was to familiarise the community with the technology through the development and playthrough of a game, to explore future opportunities for joint co-design of VR applications, while gauging the barriers to VR operating outside of its intended setting. The community members expressed their excitement about the experience and the desire to re-create traditional San games in VR. The paper reflects on the community experiences, the setup and use of VR in remote settings, and the choices made to facilitate the familiarisation of emerging technology.
Quantum computing (QC) is an intrinsically complex yet exciting discipline with increasing practical relevance. A deep understanding of QC requires the integration of knowledge across numerous technical fields, such as physics, computing and mathematics. This work aims to investigate how immersive Virtual Reality (VR) compares to a desktop environment (‘web-applet’) as an educational tool to help teach individuals QC fundamentals. We developed two interactive learning tutorials, one utilising the ‘Bloch sphere’ visualisation to represent a single-qubit system, and the other exploring multi-qubit systems through the lens of ‘quantum entanglement’. We evaluate the effectiveness of each medium to teach QC fundamentals in a user study with 24 participants. We find that the Bloch sphere visualisation was better suited to VR than to a desktop environment. Our results also indicate that mathematics literacy is an important factor in facilitating greater learning, with this effect being notably more pronounced when using VR. However, VR did not significantly improve learning in a multi-qubit context. Our work provides valuable insights that contribute to the emerging field of Quantum HCI (QHCI) and VR for education.
We empirically explore fundamental requirements for achieving VR in-air typing by observing the unconstrained eyes-free in-air typing of touch typists. We show that unconstrained typing movements differ substantively from previously observed constrained in-air typing movements and introduce a novel binary categorization of typing strategies: typists who use finger movements alone (FINGER) and those who combine finger movement with gross hand movement (HAND). We examine properties of finger kinematics, correlated movement of fingers, interrelation in consecutive key-strokes, and 3D distribution of key-stroke movements. We report that, compared to constrained typing, unconstrained typing generates shorter (49 mm) and faster (764 mm/s) key-strokes with a high correlation of finger movement and that the HAND strategy group exhibits more dynamic key-strokes. We discuss how these findings can inform the design of future in-air typing systems.
Virtual reality (VR) technologies have become more affordable and accessible in recent years. This is opening up new methods and opportunities in the field of digital learning. VR can offer new forms of interactive learning and working, especially for subjects from the STEM (science, technology, engineering, and mathematics) area. In this context, we investigate the potential and application of VR for computer science education through a systematic literature review. We focus on the identification of factors such as learning objectives, technologies used, interaction characteristics, and the challenges and advantages of using fully immersive VR for computer science education.
Teleportation is a navigation technique widely used in virtual reality applications using head-mounted displays. Basic teleportation usually moves a user’s viewpoint to a new destination in the virtual environment without taking into account the physical space surrounding them. However, considering the user’s real workspace is crucial for preventing them from reaching its limits and thus for managing direct access to multiple virtual objects. In this paper, we propose to display a virtual representation of the user’s real workspace before the teleportation, and compare manual and automatic techniques for positioning such a virtual workspace. For manual positioning, the user adjusts the position and orientation of their future virtual workspace. A first controlled experiment compared exocentric and egocentric manipulation techniques with different virtual workspace representations, including or not an avatar at the user’s future destination. Although exocentric and egocentric techniques result in a similar level of performance, representations with an avatar help the user better understand how they will land after teleportation. For automatic positioning, the user selects their future virtual workspace among relevant options generated at runtime. A second controlled experiment shows that the manual technique selected from the first experiment and the automatic technique are more efficient than basic teleportation. In addition, the manual technique seems to be more suitable for crowded scenes than the automatic one.
Interactions with the physical environment, such as passive haptic feedback, have been previously shown to provide richer and more immersive virtual reality experiences. A strict correspondence between the virtual and real world coordinate systems is a staple requirement for physical interaction. However, many of the commonly employed VR locomotion techniques allow for, or even require, this relationship to change as the experience progresses. The outcome is that experience designers frequently have to choose between flexible locomotion or physical interactivity, as the two are often mutually exclusive. To address this limitation, this paper introduces reactive environmental alignment, a novel framework that leverages redirected walking techniques to achieve a desired configuration of the virtual and real world coordinate systems. This approach can transition the system from a misaligned state to an aligned state, thereby enabling the user to interact with physical proxy objects or passive haptic surfaces. Simulation-based experiments demonstrate the effectiveness of reactive alignment and provide insight into the mechanics and potential applications of the proposed algorithm. In the future, reactive environmental alignment can enhance the interactivity of virtual reality systems and inform new research vectors that combine redirected walking and passive haptics.
Virtual environments with a wide range of scales are becoming commonplace in Virtual Reality applications. Methods to control locomotion parameters can help users explore such environments more easily. For multi-scale virtual environments, point-and-teleport locomotion with a well-designed distance control method can enable mid-air teleportation, which makes it competitive with flying interfaces. Yet, automatic distance control for point-and-teleport has not been studied in such environments. We present a new method to automatically control the distance for point-and-teleport. In our first user study, we used a solar system environment to compare three methods: automatic distance control for point-and-teleport, manual distance control for point-and-teleport, and automatic speed control for flying. Results showed that automatic control significantly reduces overshoot compared with manual control for point-and-teleport, but the discontinuous nature of teleportation made users prefer flying with automatic speed control. We conducted a second study to compare automatic-speed-controlled flying and two versions of our teleportation method with automatic distance control, one incorporating optical flow cues. We found that point-and-teleport with optical flow cues and automatic distance control was more accurate than flying with automatic speed control, and both were equally preferred to point-and-teleport without the cues.
We investigate the effects of different ways of visualizing the avatar’s virtual gait in the context of Walk-in-Place (WIP) navigation in a virtual environment (VE). In Study 1, participants navigated through a VE using the WIP method while inhabiting an avatar. We varied the visualization of the avatar’s leg motion while performing the WIP gesture: (1) Fixed Body: the legs stood still; (2) Pre-recorded Animation: the legs moved at a fixed, predetermined pace (plausible, but generally not matching the user’s); (3) Synchronized Motion: the legs moved in synchrony with those of the user. Our results indicated that the sense of presence and body ownership improved significantly when the leg motion was rendered in synchrony with that of the user (Synchronized Motion). In addition, we developed a deep neural network (DNN) that predicted the user’s leg postures from head position tracking alone, eliminating the need for any external sensors. We carried out Study 2 to assess the effects of different gait visualizations under two new factors: (1) virtual gait seen directly by the user looking down, or indirectly via one’s shadow (i.e., no need to look down); and (2) playing a pre-recorded animation, or a pre-recorded animation whose playback speed was adjusted to match the pace of the user’s actual leg motion as predicted by the DNN. The results of Study 2 showed that a virtual gait temporally synchronized with that of the user greatly improved the sense of body ownership, whether it was witnessed directly or indirectly through the shadow. However, the effect of virtual gait on presence was less marked when indirectly observed. We discuss our findings and the implications for representing avatar locomotion in immersive virtual environments.
Image-based 3D reconstructions and their visualization in virtual reality promise novel opportunities to explore and analyze 3D reconstructions of real objects, buildings and places. However, the faithfulness of the presented data is not always obvious and, in most cases, a 3D reconstruction cannot be compared directly to its corresponding real world instance. However, in case of reconstruction methods based on structure from motion (SFM), a large number of raw photos is available. This motivated us to develop a novel interaction technique for the visual comparison of details of 3D models with projections of the corresponding image sections, e.g. in order to rapidly verify the authenticity of perceived features. The results of a formal user study (n=18) demonstrate the general usability of such visual provenance information as well as benefits of the comparison in vicinity of the features in question over a separate image gallery. Further observations informed our iterative design process and led to the development of an improved interactive visualization. Our final implementation provides a spatial and content-related overview while retaining the efficiency of the original approach.
Technological advances are enabling a new class of augmented reality (AR) applications that use bodies as substrates for input and output. In contrast to sensing and augmenting objects, body-based AR applications track people around the user and layer information on them. However, prototyping such applications is complex, time-consuming, and cumbersome, due to a lack of easily accessible tooling and infrastructure. We present Body LayARs, a toolkit for fast development of body-based AR prototypes. Instead of directly programming for a device, Body LayARs provides an extensible graphical programming environment with a device-independent runtime abstraction. We focus on face-based experiences for headset AR, and show how Body LayARs makes a range of body-based AR applications fast and easy to prototype.
Patients with single ventricle heart defect undergo Fontan surgery to reroute the blood flow from the lower body to the lungs by connecting the inferior vena cava to the pulmonary artery using a vascular graft. Since each patient has a unique anatomical structure and blood flow dynamics, the graft design is a critical factor in maximizing the long-term survival rate of Fontan patients. Currently, designing and evaluating grafts involves computer-aided design (CAD) and computational fluid dynamics (CFD) skills. CAD incorporates numerous design tools but lacks depth perception, surgical features, and design parameters for creating vascular grafts while visualizing and modifying patient anatomies. These limitations may lead to long lead times, inconsistent workflows, and surgically infeasible graft designs. In this paper, we introduce CorFix, a novel virtual reality vascular graft modeling software that addresses these challenges. CorFix includes several visualization features for performing diagnostics as well as surgical features with design guidelines for creating patient-specific tube-shaped grafts in 3D. The designed vascular graft can be exported as a 3D model, which can be used for computational fluid dynamics analysis and 3D printing. The patient-specific vascular graft designs in CorFix were compared to an engineering CAD software, SolidWorks (Dassault Systèmes, Vélizy-Villacoublay, France), by 8 participants. Although all participants had received only a single 10-minute tutorial on CorFix, CorFix had a higher success rate and 3.4 times faster performance than CAD in designing surgically feasible grafts. CorFix also scored higher in usability and lower in perceived workload than CAD. CorFix may enable medical doctors without a 3D modeling background to design patient-specific grafts.
XR (Virtual, Augmented and Mixed Reality) technologies are growing in prominence. However, they are increasingly being used in sectors and in situations that can result in harms. As such, this paper argues the need for auditability to become a key consideration of XR systems. Auditability entails capturing information of a system’s operation to enable oversight, inspection or investigation. Things can and will go wrong, and information that helps unpack situations of failure or harm, and that enables accountability and recourse, will be crucial to XR’s adoption and acceptance. In drawing attention to the urgent need for auditability, we illustrate some risks associated with XR technology and their audit implications, and present some initial findings from a survey with developers indicating the current ‘haphazard’ approach towards such concerns. We also highlight some challenges and considerations of XR audit in practice, as well as areas of future work for taking this important area of research forward.
To allow users to perform real walking in a virtual environment larger than the physical space, redirected walking (RDW) techniques can be employed. Users do not notice this manipulation, and immersion remains intact, when RDW is applied within certain thresholds. Although many studies on RDW detection thresholds exist, in none of them were users performing an additional task during the threshold identification process. The existing thresholds may therefore be only conservative estimates, and the potential of RDW may not be fully utilized.
In this paper, we present an experiment to investigate the effect of cognitive load on curvature RDW thresholds. The cognitive load was imposed using a dual task of serial seven subtraction. Results showed that gender and cognitive load have significant effects on curvature RDW thresholds. More specifically, men are on average more sensitive to RDW than women, and being engaged in a dual task increases users’ RDW thresholds.
Understanding the effects of environmental features such as visual realism on spatial memory can inform a human-centered design of virtual environments. This paper investigates the effects of visual realism on object location memory in virtual reality, taking account of individual differences, gaze, and locomotion. Participants freely explored two environments which varied in visual realism, and then recalled the locations of objects by returning the misplaced objects to their original locations. Overall, we did not find a significant relationship between visual realism and object location memory. We found, however, that individual differences such as spatial ability and gender accounted for more variance than visual realism. Gaze and locomotion analysis suggests that participants exhibited longer gaze duration and more clustered movement patterns in the low realism condition. Preliminary inspection further found that locomotion hotspots coincided with objects that showed a significant gaze time difference between high and low visual realism levels. These results suggest that high visual realism still provides positive spatial learning affordances, but the effects are more intricate.
A relatively recent application area for Virtual Reality (VR) systems is sports training and user performance assessment. One of these applications is eye-hand coordination training systems (EHCTSs). Previous research identified that VR-based training systems have great potential for EHCTSs. While previous work investigated 3D targets on a 2D plane, here we aim to study full 3D movements and extend the application of throughput analysis to EHCTSs. We conducted two user studies to investigate how user performance is affected by different target arrangements, feedback conditions, and handedness in VR-based EHCTSs. In the first study, we explored handedness as well as vertical and horizontal target arrangements, and showed that user performance increases with the dominant hand and a vertical target plane. In the second study, we investigated different combinations of visual and haptic feedback and how they affect user performance with different target and cursor sizes. Results illustrate that haptic feedback did not increase user performance when added to visual feedback. Our results inform the creation of better EHCTSs with mid-air VR systems.
Many user interfaces involve attention shifts between primary and secondary tasks, e.g., when changing a mode in a menu, which distracts the user from their main task. In this work, we investigate how eye gaze input affords exploiting these attention shifts to enhance the interaction with handheld menus. We assess three techniques for menu selection: dwell time, gaze button, and cursor. Each represents a different multimodal balance between gaze and manual input. We present a user study that compares the techniques against two manual baselines (dunk brush, pointer) in a compound colour selection and line drawing task. We show that user performance with the gaze techniques is comparable to pointer-based menu selection, with less physical effort. Furthermore, we provide an analysis of the trade-off as each technique strives for a unique balance between temporal, manual, and visual interaction properties. Our research points to new opportunities for integrating multimodal gaze in menus and bimanual interfaces in 3D environments.
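To make the dwell-time technique concrete, the following is a minimal sketch of a per-frame dwell selector; the 0.8 s threshold, the class name, and the re-arming behaviour are illustrative assumptions rather than the study’s actual implementation.

```python
# Minimal sketch of a dwell-time menu selection loop (not the paper's exact
# implementation); the class name and the 0.8 s threshold are assumptions.
import time

DWELL_THRESHOLD_S = 0.8          # assumed dwell duration before a selection fires


class DwellSelector:
    """Fires a selection once gaze has rested on the same menu item long enough."""

    def __init__(self, threshold_s=DWELL_THRESHOLD_S):
        self.threshold_s = threshold_s
        self._current_item = None
        self._gaze_start = None

    def update(self, hovered_item, now=None):
        """Call once per frame with the menu item currently hit by the gaze ray
        (or None). Returns the item to select, or None."""
        now = time.monotonic() if now is None else now
        if hovered_item != self._current_item:
            # Gaze moved to a different item (or off the menu): restart the timer.
            self._current_item = hovered_item
            self._gaze_start = now
            return None
        if hovered_item is not None and now - self._gaze_start >= self.threshold_s:
            self._gaze_start = now   # re-arm so the item is not selected every frame
            return hovered_item
        return None
```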
We present a new pipeline to enable head-motion parallax in omnidirectional stereo (ODS) panorama video rendering using a neural depth decoder. While recent ODS panorama cameras record short-baseline horizontal stereo parallax to offer the impression of binocular depth, they do not support the necessary translational degrees-of-freedom (DoF) to also provide for head-motion parallax in virtual reality (VR) applications.
To overcome this limitation, we propose a pipeline that enhances the classical ODS panorama format with 6 DoF free-viewpoint rendering by decomposing the scene into a multi-layer mesh representation. Given a spherical stereo panorama video, we use the horizontal disparity to store explicit depth information for both eyes in a simple neural decoder architecture. While this approach produces reasonable results for individual frames, video rendering usually suffers from temporal depth inconsistencies. Thus, we perform successive optimization to improve temporal consistency by fine-tuning our depth decoder for both temporal and spatial smoothness.
Using a consumer-grade ODS camera, we evaluate our approach on a number of real-world scene recordings and demonstrate the versatility and robustness of the proposed pipeline.
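As a rough illustration of the temporal and spatial smoothness terms used above when fine-tuning the depth decoder, the sketch below shows one plausible formulation; the loss weights, function names, and the simple frame-difference temporal term are assumptions, not the paper’s implementation.

```python
# Hedged sketch of smoothness terms for fine-tuning a depth decoder; weights
# and the exact formulation are illustrative assumptions.
import torch


def spatial_smoothness(depth):
    """Penalise large depth gradients within a frame. depth: (B, 1, H, W)."""
    dx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs().mean()
    dy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs().mean()
    return dx + dy


def temporal_consistency(depth_t, depth_tm1):
    """Penalise frame-to-frame depth changes (assumes a mostly static scene or
    pre-warped frames)."""
    return (depth_t - depth_tm1).abs().mean()


def finetune_loss(depth_t, depth_tm1, data_term, w_spatial=0.1, w_temporal=0.5):
    return data_term + w_spatial * spatial_smoothness(depth_t) \
                     + w_temporal * temporal_consistency(depth_t, depth_tm1)
```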
We present a complete end-to-end pipeline for generating dynamically relightable virtual objects captured using a single handheld consumer-grade RGB-D camera. The proposed system plausibly replicates the geometry, texture, illumination, and surface reflectance properties of non-Lambertian objects, making them suitable for integration within virtual reality scenes that contain arbitrary illumination. First, the geometry of the target object is reconstructed from depth images captured using a handheld camera. To get nearly drift-free texture maps of the virtual object, a set of selected images from the original color stream is used for camera pose optimization. Our approach further separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions during capture and reflectance properties of the virtual object are subsequently estimated from the computed specular maps. By combining these parameters with the diffuse texture, the reconstructed model can then be rendered in real-time virtual reality scenes that plausibly replicate real world illumination at the point of capture. Furthermore, these objects can interact with arbitrary virtual lights that vary in direction, intensity, and color.
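In its simplest form, the diffuse/specular separation via low-rank decomposition could look like the sketch below, which keeps the low-rank (view-independent) part of per-texel observations as diffuse and the positive residual as specular; the rank-1 choice and the assumption that texels are already registered across views are ours, and the paper’s actual decomposition may be a more robust variant.

```python
# Rough illustration of a low-rank diffuse/specular split (assumes texels are
# already registered across views and stacked as rows of `obs`).
import numpy as np


def separate_diffuse_specular(obs, rank=1):
    """obs: (num_texels, num_views) intensities of each texel across views.
    Returns (diffuse, specular) with the same shape."""
    u, s, vt = np.linalg.svd(obs, full_matrices=False)
    s[rank:] = 0.0                                 # keep only the low-rank part
    diffuse = u @ np.diag(s) @ vt                  # view-independent component
    specular = np.clip(obs - diffuse, 0.0, None)   # view-dependent highlights
    return diffuse, specular
```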
Virtual realities (VR) are becoming an integral part of product development across many industries, for example to assess aesthetics and usability of new features in the automotive industry. The evaluation is typically recorded by having study participants fill out questionnaires after they have left the virtual environment. In this paper, we investigate how questionnaires can best be embedded within the virtual environment and compare how VR questionnaires differ from classical post-test evaluations regarding preference, presence, and questionnaire completion time.
In the first study (N = 11), experts rated four design concepts of questionnaires embedded in VR, of which two were designed as extradiegetic and two as intradiegetic user interfaces. We show that intradiegetic UIs have a significantly higher perceived user experience and presence while the usability remains similar. Intradiegetic UIs are preferred by the majority.
Based on these findings, we compared intradiegetic VR questionnaires with paper-based evaluations in a follow-up study (N = 24). 67% of the participants preferred the evaluation in VR, even though it takes significantly longer. We found no effect on presence.
Virtual reality (VR) has successfully been applied as a complement to conventional treatment for mental disorders. However, it has not been clearly shown how immersive media such as VR can directly affect one’s physiological parameters associated with the state of mindfulness. We sought to assess how being subjected to differently designed VR content can affect and modulate one’s anxiety, both psychologically and, more importantly, physiologically. We empirically tested the comparative effects of two polarizing VR content types: (1) “calm/soothing” content and (2) “disturbing” content. Twenty-five adults participated, and their mental state, anxiety level and physiological signals were measured before and after experiencing the respective VR content type. The experiment found a statistically significant effect of content type on the changes in these measures and confirmed that the “calm” content helped users self-regulate toward lower heart rate and blood pressure and a more stable GSR, while the “disturbing” content had the opposite effect. We applied this result to calm down and stabilize the vital signs of patients during actual coronary angiography and catheterization procedures. We were able to observe the same effect, with positive comments from the patients and the operating team.
Understanding how and why users reveal information about themselves in online social spaces, and what they perceive as privacy online, is a central research agenda in HCI. Drawing on 30 in-depth interviews, in this paper we focus on what type of information users disclose, to whom they reveal information, and the concerns they had regarding self-disclosure in social Virtual Reality (VR) - where multiple users can interact with one another through VR head-mounted displays in 3D virtual spaces. Our findings show that, overall, users felt comfortable disclosing their emotions, personal experiences, and personal information in social VR. However, they also acknowledged that disclosing personal information in social VR was an inevitable trade-off: giving up biometric information in order to better use the system. We contribute to existing literature on self-disclosure and privacy online by focusing on social VR as an emerging novel online social space. We also explicate implications for designing and developing future social VR applications.
Current solutions for creating co-located Mixed Reality (MR) experiences typically rely on platform-specific synchronisation of spatial anchors or Simultaneous Localisation and Mapping (SLAM) data across clients, often coupled to cloud services. This introduces significant costs (in development and deployment), constraints (with interoperability across platforms often limited), and privacy concerns. For practitioners, support is needed for creating platform-agnostic co-located MR experiences. This paper explores the utility of aligned SLAM solutions by 1) surveying approaches toward aligning disparate device coordinate spaces, formalizing their theoretical accuracy and limitations; 2) providing skeleton implementations for audience-based, small-scale and large-scale co-location using said alignment approaches; and 3) detailing how we can assess the accuracy and safety of 6DoF/SLAM tracking solutions for any arbitrary device and dynamic environment, without the need for an expensive ground-truth optical tracking system, by using trilateration and a $30 laser distance meter. Through this, we hope to further democratise the creation of cross-platform co-located MR experiences.
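As an illustration of the trilateration step mentioned in point 3), the following is a minimal least-squares sketch under the assumption that the laser distance meter provides ranges to a handful of known reference points; the function name and the example coordinates are hypothetical.

```python
# Sketch of least-squares trilateration from measured distances to known
# reference points; the exact procedure used in the paper is not specified here.
import numpy as np


def trilaterate(anchors, distances):
    """anchors: (N, 3) known reference positions; distances: (N,) measured ranges.
    Returns the estimated 3D position (requires N >= 4 non-coplanar anchors)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    # Linearise ||x - p_i||^2 = d_i^2 against the first anchor.
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x


# Example (hypothetical room corners and ranges):
# trilaterate([[0, 0, 0], [4, 0, 0], [0, 3, 0], [0, 0, 2.5]], [2.9, 2.3, 2.1, 2.4])
```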
Virtual reality (VR) allows embodying any possible avatar. Known as the Proteus effect, avatars can change users’ behavior and attitudes. Previous work found that embodying Albert Einstein can increase cognitive task performance. The behavioral confirmation paradigm, however, predicts that our behavior is also affected by others’ perception of us. Therefore, we investigated cognitive performance in collaborative VR when self-perception and external perception of one’s own avatar differ. 32 male participants performed a Tower of London task in pairs. One participant embodied Einstein or a young adult while the other perceived the participant as Einstein or a young adult. We show that the perception by others affects cognitive performance. The Einstein avatar also decreased the perceived workload. Results imply that avatars’ appearance to both the user and others must be considered when designing for cognitively demanding tasks.
Motion tracking technologies and avatars in virtual reality (VR) showing the movements of the own body enable high levels of presence and a strong illusion of body ownership (IBO) – key features of immersive systems and gaming experiences in virtual environments. Previous work suggests using software-based algorithms that can not only compensate for system latency but also predict future movements of the user to increase input performance. However, the effects of movement prediction in VR on input performance are largely unknown. In this paper, we investigate neural network-based predictions of full-body avatar movements in two scenarios: In the first study, we used a standardized 2D Fitts’ law task to examine the information throughput in VR. In the second study, we utilized a full-body VR game to determine the users’ performance. We found that neither performance nor subjective measures in the standardized 2D Fitts’ law task benefited from the predicted avatar movements. In an immersive gaming scenario, however, the perceived accuracy of the own body location improved. Presence and body assessments remained more stable and were higher than during the Fitts’ task. We conclude that machine-learning-based predictions could be used to compensate for system-related latency, but participants only benefit subjectively under certain conditions.
This paper introduces an automated 3D-reconstruction method for generating high-quality virtual humans from monocular smartphone cameras. The input to our approach is two video clips, one capturing the whole body and the other providing detailed close-ups of the head and face. Optical flow analysis and sharpness estimation select individual frames, from which two dense point clouds for the body and head are computed using multi-view reconstruction. Automatically detected landmarks guide the fitting of a virtual human body template to these point clouds, thereby reconstructing the geometry. A graph-cut stitching approach reconstructs a detailed texture. Our results are compared to existing low-cost monocular approaches as well as to expensive multi-camera scan rigs. We achieve visually convincing reconstructions that are almost on par with complex camera rigs while surpassing similar low-cost approaches. The generated high-quality avatars are ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity.
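For the sharpness-estimation part of the frame selection, a variance-of-Laplacian focus measure is one common choice; the sketch below assumes that measure and an illustrative threshold, and omits the optical-flow spacing criterion, so it should be read as a plausible stand-in rather than the authors’ pipeline.

```python
# Sketch of sharpness-based frame selection (variance of the Laplacian as a
# focus measure); threshold and subsampling step are illustrative assumptions.
import cv2


def select_sharp_frames(video_path, sharpness_threshold=100.0, step=5):
    """Return indices of frames whose focus measure exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    selected, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                       # subsample to limit cost
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            focus = cv2.Laplacian(gray, cv2.CV_64F).var()
            if focus > sharpness_threshold:
                selected.append(idx)
        idx += 1
    cap.release()
    return selected
```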
We present user study results on virtual body contact experience in a two-user VR scenario, in which participants performed different touches with a research assistant. The interaction evoked different emotional reactions in perceived relaxation, happiness, desire, anxiety, disgust, and fear. Congruent to physical social touch, the evaluation of virtual body contact was modulated by intimacy, touch direction, and sex. Further, individual comfort with interpersonal touch was positively associated with perceived relaxation and happiness. We discuss the results regarding implications for follow-up studies and infer implications for the use of social touch in social VR applications.
Embedding virtual humans in educational settings enables the transfer of the established concepts of learning by observation and imitation of experts to extended reality scenarios. Whilst various presentation concepts of virtual humans for learning have been investigated in sports and rehabilitation, little is known regarding industrial use cases. In prior work on manual assembly, Lampen et al. [21] show that three-dimensional (3D) registered virtual humans can provide assistance as effective as state-of-the-art HMD-based AR approaches. We extend this work by conducting a comparative user study (N=30) to verify the implementation costs of assistive behavior features and 3D registration. The results reveal that the basic concept of a 3D registered virtual human is limited and comparable to a two-dimensional screen-aligned presentation. However, by incorporating additional assistive behaviors, the 3D assistance concept is enhanced and shows significant advantages in terms of cognitive savings and reduced errors. Thus, it can be concluded that this presentation concept is valuable in situations where time is less crucial, e.g. in learning scenarios or during complex tasks.
In this paper, we applied the concept of diminished reality to remove a content-irrelevant pedestrian (i.e., a real object) in the context of handheld augmented reality (AR). We prepared three view conditions: in the Transparent (TP) condition, we removed the pedestrian entirely; in the Semi-transparent (STP) condition, the pedestrian became semi-transparent; lastly, in the Default (DF) condition, the pedestrian appeared as is. We conducted a user study to compare the effects of the three conditions on users’ engagement and perception of a virtual pet in the AR content. Our findings revealed that users felt less distracted from the AR content in the TP and STP conditions, compared to the DF condition. Furthermore, in the TP condition, users perceived the virtual pet as more life-like and its behavior as more plausible, and felt a higher spatial presence in the real environment.
Research in sociology shows that effective conversation relates to people’s spatial and orientational relationship, namely the proxemics (distance, eye contact, synchrony) and the F-formation (orientation and arrangement). In this work, we introduce novel conversational paradigms that affect the conventional F-formation by introducing the concept of multi-directional conversation. Multiplex Vision is a head-mounted device capable of providing a 360° field of view (FOV) and facilitating multi-directional multi-user interaction, thereby providing novel ways for people to interact with each other. We propose three possible new forms of interaction enabled by our prototype: one-to-one, one-to-many, and many-to-many. To facilitate them, we manipulate two key variables: the viewing parameter and the display parameter. To gather feedback on our system, we conducted a study to understand information transfer between the various modes, as well as a user study on how the different proposed paradigms affect conversation. Finally, we discuss present and future use cases that can benefit from our system.
As commodity virtual reality (VR) systems become more common, they are rapidly gaining popularity for entertainment, education, and training purposes. VR utilizes headsets that come into contact with, or into close proximity to, the user’s eyes, nose, and forehead. In this study, the potential for these headsets to become contaminated with bacteria was analyzed. To the best of our knowledge, this study is the first to address the potential for microorganisms to be transmitted via VR headsets. The data discussed herein were collected roughly one year prior to the outbreak of the COVID-19 pandemic in the United States. We feel it is important to be clear that this study focuses exclusively on bacteria, as opposed to viruses like those responsible for the present pandemic.
The nosepieces and foreheads of two HTC Vive headsets were sampled over the course of a seven-week period in a VR software development course. Serial dilutions were performed, and samples were plated on various culture media. Following incubation, counts of bacteria were determined. DNA was extracted from bacterial colonies and the 16S rRNA gene was sequenced to identify bacterial contaminants present on the headsets. Chief among these contaminants was Staphylococcus aureus. The results of these tests indicated that the Staphylococcus aureus strains isolated from the headsets possessed high levels of antibiotic resistance. Other notable bacterial isolates included Moraxella osloensis, the bacterium responsible for foul odors in laundry, and Micrococcus luteus, a commensal bacterial species capable of causing opportunistic infections. Other bacterial isolates were detected in variable amounts throughout the trial.
We present a method for dynamic projection mapping on deformable, stretchable and elastic materials (e.g. cloth) using a time-of-flight (ToF) depth camera (e.g. Azure Kinect or Pico-Flexx) that comes equipped with an IR camera. We use Bezier surfaces to model the projection surface without explicitly modeling the deformation. We devise an efficient tracking method that tracks the boundary of the surface material using the IR/depth camera. This achieves realistic mapping even in the interior of the surface, with simple markers (e.g. black dots or squares) or without markers entirely, such that the projection appears to be printed on the material. The surface representation is updated in real time using GPU-based computations. Further, we show that the speed of these updates is limited only by the camera frame rate, so the method can be adapted to higher-speed cameras as well. This technique can be used to project onto several stretchable moving materials to change their appearance.
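For reference, evaluating a point on a bicubic Bezier patch (the kind of surface model used above) can be sketched as follows; the patch degree and the NumPy-based formulation are illustrative assumptions, not the paper’s GPU implementation.

```python
# Minimal bicubic Bezier patch evaluation from a 4x4 grid of control points.
import numpy as np


def bernstein3(t):
    """Cubic Bernstein basis values at parameter t in [0, 1]."""
    return np.array([(1 - t)**3,
                     3 * t * (1 - t)**2,
                     3 * t**2 * (1 - t),
                     t**3])


def bezier_patch_point(ctrl, u, v):
    """ctrl: (4, 4, 3) control points; returns the 3D surface point at (u, v)."""
    bu, bv = bernstein3(u), bernstein3(v)
    return np.einsum('i,j,ijk->k', bu, bv, ctrl)
```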
We present AffectivelyVR, a personalized real-time emotion recognition system in Virtual Reality (VR) that enables an emotion-adaptive virtual environment. We used off-the-shelf Electroencephalogram (EEG) and Galvanic Skin Response (GSR) physiological sensors to train user-specific machine learning models while exposing users to affective 360° VR videos. Since emotions are largely dependent on interpersonal experiences and expressed in different ways for different people, we personalize the model instead of generalizing it. By doing this, we achieved an emotion recognition rate of 96.5% using the personalized KNN algorithm, and 83.7% using the generalized SVM algorithm.
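A per-user training loop of the kind described above might look like the following scikit-learn sketch; the feature extraction is assumed to have already produced windowed EEG/GSR feature vectors, and the neighbour count and cross-validation setup are illustrative choices rather than the authors’ exact configuration.

```python
# Illustrative per-user KNN training (scikit-learn); k=5 and cv=5 are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def train_personalized_model(features, labels, k=5):
    """features: (n_windows, n_features) EEG/GSR features from ONE user;
    labels: (n_windows,) emotion labels collected during the 360-degree videos."""
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    scores = cross_val_score(model, features, labels, cv=5)
    model.fit(features, labels)
    return model, float(np.mean(scores))
```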
As a promising alternative to VR head-mounted displays, current autostereoscopic displays and light field displays require high GPU consumption for multiple-view rendering and cannot display different scenes for multiple viewers with motion parallax. Building upon prior work demonstrating how GPU utilization can be reduced by only rendering visible views, we propose an innovative approach that renders only the visible views of different scenes towards multiple viewers’ eyes. Moreover, we found that the number of visible views decreases as the viewing distance increases. Thus, a dynamic approach can be taken to adjust the number of rendered views according to viewers’ distance from the display. This approach can be easily adapted to off-the-shelf light field displays to display different 3D scenes for at least two viewers according to their head positions, with reduced GPU costs.
In recent years, the use of virtual reality (VR) attractions in amusement facilities has been increasing with the spread of head-mounted displays (HMDs) and the growth of VR content. VR attractions require less physical space than conventional attractions. However, because adjusting an HMD for each customer takes considerable time, only a limited number of customers can experience an attraction. In order to provide VR content with a long subjective duration within a short actual time, this study presents VR content on an HMD that can evoke fear. Fear is one of the factors that influence psychological time. We investigate whether VR content that evokes fear influences time estimation.
The purpose of this research is to change weight perception by utilizing the latent impressions that a person has acquired from visual information. Recently, illusions caused by changes in visual information, exemplified by the rubber hand illusion and the Proteus effect, have been reported. In this research, we change one’s self-awareness through the appearance of one’s own arm using AR technology. We conduct an experiment to verify that this change in self-awareness influences the perceived weight of a grasped physical object.
This project introduces GeospatialVR, an open-source collaborative virtual reality framework to dynamically create 3D real-world environments that can be accessed via desktop and mobile devices as well as virtual and augmented reality headsets. The framework can generate realistic simulations of desired locations entailing the terrain, elevation model, infrastructures (e.g. buildings, roads, bridges), dynamic visualizations (e.g. water and fire simulation), and information layers (e.g. disaster damages and extent, sensor readings, surveillance data, occupancy, traffic, weather). The framework incorporates multiuser support to allow stakeholders to remotely work on the same VR environment, thus presenting the potential to be utilized as a virtual incident command platform or meeting room. To demonstrate the framework's usability and benefits, several case studies have been developed for flooding, wildfire, transportation, and active shooter response.
In this project, we created an expandable framework that allows infinite walking in virtual reality within a closed play area. A saccade is a rapid eye movement with a unique property: during a saccade the eye temporarily gathers reduced information – saccadic suppression. We leverage this suppression to redirect the user’s walking towards the center of the play area by rotating the virtual world around the camera’s location. Modern VR hardware such as the Vive Pro Eye provides a reasonable sample rate for eye movement measurements. Using a self-developed VR testing environment with corresponding pre- and post-experience questions, we could already show an improvement in understanding among a set of participants and assessed general motion and VR sickness parameters. We identified a maximum saccade rotation angle, which served as the basis for further testing. We found that our framework and its default settings successfully enable saccadic redirection with only marginal discomfort for users.
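A simplified sketch of the saccade-triggered world rotation described above follows; the velocity threshold for detecting a saccade, the per-saccade rotation cap, and the steering rule toward the play-area centre are illustrative assumptions, not the project’s tuned parameters.

```python
# Simplified saccadic-redirection sketch; threshold and rotation step are
# illustrative values, not the framework's actual settings.
import numpy as np

SACCADE_VELOCITY_DEG_S = 180.0   # assumed detection threshold
MAX_ROTATION_PER_SACCADE = 0.5   # assumed injected yaw per detected saccade (deg)


def angular_velocity_deg_s(gaze_prev, gaze_now, dt):
    """Angular speed of the gaze direction (unit vectors) between two samples."""
    cos_a = np.clip(np.dot(gaze_prev, gaze_now), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)) / dt


def redirection_yaw(gaze_prev, gaze_now, dt, user_heading_deg, to_center_deg):
    """Yaw (deg) to apply to the virtual world around the camera this frame:
    non-zero only while a saccade visually suppresses the change."""
    if angular_velocity_deg_s(gaze_prev, gaze_now, dt) < SACCADE_VELOCITY_DEG_S:
        return 0.0
    # Rotate so the user's heading drifts toward the play-area centre.
    error = (to_center_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    return float(np.clip(error, -MAX_ROTATION_PER_SACCADE, MAX_ROTATION_PER_SACCADE))
```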
Navigation and selection are critical in very large virtual environments, such as a model of a whole city. In practice, many VR applications require both of these modalities to work together. We compare different combinations of two navigation and two selection methods in VR on selection tasks involving distant targets in a user study. The aim of our work is to discover the trade-off between navigation and selection techniques and to identify which combination leads to better interaction performance in large virtual environments. The results showed that users could complete the task faster with the fly/drive method and traveled less, compared to the teleportation method. Additionally, raycasting exhibited better performance in terms of completion time and distance traveled; however, it significantly increased the error rate for the selection of targets.
VR remote learning is an environmentally friendly approach to anatomy learning compared with paper-based materials, physical models, or prepared specimens. However, VR devices can be costly pieces of equipment, and inexperienced users can experience unwanted symptoms such as motion sickness. To address these problems, an asymmetrical system has been developed to connect an experienced head-mounted-display user (the lecturer) with light field display users (the students) through the Internet. The scenes at the lecturer’s and students’ ends are adjusted to match the corresponding display technologies.
In this work, a Mixed Reality (MR) system is evaluated to assess whether it can be used efficiently in teleoperation tasks that require accurate control of the robot end-effector. The robot and its local environment are captured using multiple RGB-D cameras, and a remote user controls the robot arm motion through Virtual Reality (VR) controllers. The captured data is streamed through the network and reconstructed in 3D, allowing the remote user to monitor the state of execution in real time through a VR headset. We compared our method with two other interfaces: i) teleoperation in pure VR, with the robot model rendered with the real joint states, and ii) teleoperation in MR, with the rendered model of the robot superimposed on the actual point cloud data. Preliminary results indicate that the virtual robot visualization is better than the pure point cloud for accurate teleoperation of a robot arm.
Head-Mounted Display (HMD)-based Virtual Reality (VR) has shown promising results in training and education. We present ATOM, an HMD-VR interface to educate students about atoms, atomic structures and historical research experiments conducted in understanding atomic structures. ATOM is designed to complement classroom learning for grade 9 students through an interactive and practice-based learning experience. Preliminary evaluation with 10 students revealed higher interest, increased engagement and playfulness. The students also pointed out a few difficult user interactions.
Image inpainting allows for filling masked areas of an image with synthesized content that is indistinguishable from its environment. We present a video inpainting pipeline that enables users to “erase” physical objects in their environment using a mobile device. The pipeline includes an augmented reality application and an on-device conditional adversarial model for generating the inpainted textures. Users are able to interactively remove clutter in their physical space in realtime. The pipeline preserves frame to frame coherence, even with camera movements, using the Google ARCore SDK.
Thanks to the ubiquity of devices capable of recording and playing back video, the number of video files is growing at a rapid rate. Most of us now have video recordings of major events in our lives. However, to date these videos have been captured mainly in 2D and are mostly used for screen-based replay. Currently, there is no way to watch them in more immersive environments such as on a VR headset. They are simply not optimized for playback on stereoscopic displays, let alone tracked Virtual Reality devices.
In this work, we present CasualVRVideos, a first approach that works towards solving these issues by extracting spatial information from video footage recorded in 2D, so that it can later be played back in VR displays to increase the immersion. We focus in particular on the challenging scenario when the camera itself is not moving.
This paper introduces a method for computing the difficulty of selection tasks in virtual environments using pointing metaphors by operationalizing an established human motor behavior model. In contrast to previous work, the difficulty is calculated automatically at run-time for arbitrary environments. We present and provide the implementation of our method within Unity 3D. The difficulty is computed based on a contextual analysis of spatial boundary conditions, i.e., target object size and shape, distance to the user, and occlusion. We believe our method will enable developers to build adaptive systems that automatically equip the user with the most appropriate selection technique according to the context. Further, it provides a standard metric to better evaluate and compare different selection techniques.
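The abstract does not name the motor behavior model, but assuming a Fitts’-law-style index of difficulty, a run-time computation over angular target size could be sketched as follows; the angular formulation and the parameter names are our assumptions, not the paper’s actual operationalization.

```python
# Hedged sketch of a Fitts'-law-style index of difficulty for ray pointing;
# angular width is used because difficulty depends on distance as well as size.
import math


def index_of_difficulty(target_size_m, target_distance_m, pointer_travel_deg):
    """Shannon formulation ID = log2(A / W + 1), with amplitude A as the angular
    travel of the pointer and width W as the target's angular size."""
    angular_width_deg = 2.0 * math.degrees(
        math.atan2(target_size_m / 2.0, target_distance_m))
    return math.log2(pointer_travel_deg / angular_width_deg + 1.0)
```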
Persistent Postural-Perceptual Dizziness (PPPD) has variable levels of severity and triggers. Hence, the use of an e-diary to capture triggers could be useful for both the patient and the treating clinician. Virtual reality (VR) is not new to the health sciences. This paper proposes a strategy whereby, using immersive VR environments at home, the technology could help the user identify baseline symptoms and record them in a built-in virtual-reality-based contextual diary (e-diary), with the added ability to alter the virtual environments to assess triggers, habituation to triggers, and treatment improvements. We discuss the types of VR designs that could be useful for incorporating a PPPD e-diary from the perspective of a VR designer. We also consider the development of the virtual reality environment that could be paired with e-diary responses.
Prototyping Augmented Reality (AR) applications for smart environments is still a difficult task. Therefore, we propose a pipeline to help designers and developers create AR applications for monitoring and controlling indoor environments equipped with connected objects. This pipeline starts with the capture (geometry and objects) of the real environment with an AR device. Then, it proposes a Virtual Reality (VR) tool to configure augmentations in this captured environment. This tool includes a feature to simulate AR devices to help anticipate the application’s rendering on real devices. The created application can then be seamlessly deployed on various AR devices, including smartphones, tablets and glasses.
The possible interactions with Mobile Augmented Reality applications today are largely limited to on-screen gestures and spatial movement. There is an opportunity to design new interaction methods that address common issues and go beyond the screen. Through this project, we explore the idea of using a second phone as a controller for mobile AR experiences. We develop prototypes that demonstrate the use of a second phone controller for tasks such as pointing, selecting, and drawing in 3D space. We use these prototypes and insights from initial remote evaluations to discuss the benefits and drawbacks of such an interaction method. We conclude by outlining opportunities for future research on Dual Phone AR for multiple usage configurations, and in collaborative settings.
We study experiences of students attending classes remotely from home using a social VR platform, considering both desktop-based and headset-based viewing of remote lectures. Ratings varied widely. Headset viewing produced higher presence overall. Strong negative correlations between headset simulator sickness symptoms and overall experience ratings, and some other ratings, suggest that the headset experience was much better for comfortable users than for others. Reduced sickness symptoms, and no similar correlations, were found for desktop viewing. Desktop viewing appears to be a good alternative for students not comfortable with headsets. Future VR systems are expected to provide more stable and comfortable visuals, providing benefits to more users.
It is said, “Beauty is only skin-deep”. However, looking good, beautiful, or handsome can boost one’s self-confidence, promote a positive outlook, and help one contribute positively during official communications or social interactions. Laughter and smiles provide positive feedback during face-to-face communication. On the other hand, looking sideways, making a face, showing anger or disgust, being tight-lipped, or moving the face upward or downward instead of keeping it upright can convey negative communicative cues or indifference in communication. The blind and visually impaired (BVI) miss subtle non-verbal gestures, facial expressions, or prosodic features of speech during social interaction. The inability to interpret visual, non-verbal cues impedes communication, which can lead to awkward moments, possibly resulting in social avoidance or isolation. In this paper, we discuss the concept of haptic selfies, a dynamic tangible interface that could make the BVI aware of their own and others’ appearance. This helps the BVI to understand and imagine the inner and outer beauty of a person. Haptic selfies can promote enhanced social interaction and integration into the mainstream.
There is a growing need for social interaction in Virtual Reality (VR). Current social VR applications enable human-agent or interpersonal communication, usually by means of visual and audio cues. Touch, which is also an essential method for affective communication, has not received as much attention. To address this, we introduce HexTouch, a forearm-mounted robot that performs touch behaviors in sync with the behaviors of a companion agent, to complement visual and auditory feedback in virtual reality. The robot consists of four robotic tactors driven by servo motors, which render specific tactile patterns to communicate primary emotions (fear, happiness, disgust, anger, and sympathy). We demonstrate HexTouch through a VR game with physical-virtual agent interactions that facilitate the player-companion relationship and increase the immersion of the VR experience. The player will receive affective haptic cues while collaborating with the agent to complete the mission in the game. The multisensory system for affective communication also has the potential to enhance sociality in the virtual world.
Augmented Reality glasses usually implement inside-out tracking. In driving scenarios, or for glasses with limited computational capabilities, an outside-in tracking approach is required. However, to the best of our knowledge, no public dataset exists that collects images of users wearing AR glasses. To address this problem, we present HMDPose, an infrared trinocular dataset of four different AR head-mounted displays captured in a car. It contains sequences of 14 subjects captured by three different cameras running at 60 FPS each, adding up to more than 3,000,000 labeled images in total. We provide ground-truth 6DoF poses captured by a submillimeter-accurate marker-based tracker. We make HMDPose publicly available for non-profit, academic use and non-commercial benchmarking at ags.cs.uni-kl.de/datasets/hmdpose/.
Redirected Walking (RDW) allows users to perform real walking in virtual worlds that are larger than the available physical space. Many RDW algorithms rely on predicting users’ possible paths in the virtual environment (VE) to calculate where users should be redirected to. This prediction can be obtained from the structure of the VE, from where users look, or from existing path models. In this work, we examine users’ walking behavior in the presence of a virtual agent acting as a tour guide. Results showed that users significantly changed their speed to match the agent’s walking speed. Furthermore, users also tended to adapt their trajectories to the agent’s path.
Single crystal structure determination is the foremost method for determining atomic structures – from minerals to viruses. However, it is a complex process in which errors not only lead to flawed structures but may also hinder structure solution entirely. Many of these errors can be recognized by visualizing the measured diffraction data in reciprocal space. Here, we present an immersive tool to support such an analysis. We aim to supplement this traditionally 2D, desktop-based investigation of 3D diffraction data with the strengths of immersive visualization, especially depth perception and spatially tracked input devices.
A novel strain of coronavirus has spread in the past months to the point of becoming a pandemic of massive proportions. In order to mitigate the spread of this disease, many different policies have been adopted, ranging from strict national lockdowns in some countries to milder government measures; one common aspect is that they mostly rely on keeping distance between individuals. The aim of this work is to provide a means of visualizing the impact of social distancing in an immersive environment by making use of virtual reality technology. To this end, we create a virtual environment that resembles a university setting (based on the University of Derby) and populate it with a number of AI agents. We assume that the minimum social distance is 2 meters. The main contribution of this work is twofold: the multi-disciplinary approach that results from visualizing social distancing in an effort to mitigate the spread of COVID-19, and the digital twin application in which users can navigate the virtual environment whilst receiving visual feedback when in proximity to other agents. We named our application SoDAlVR, which stands for Social Distancing Algorithm in Virtual Reality.
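A minimal sketch of the proximity feedback loop described above, assuming simple ground-plane coordinates and the 2 m threshold; the class names and callback structure are illustrative, not the SoDAlVR implementation.

```python
# Per-frame social-distance check between the user and the AI agents.
from dataclasses import dataclass
import math

SOCIAL_DISTANCE_M = 2.0  # minimum social distance assumed in the work

@dataclass
class Agent:
    x: float
    z: float  # ground-plane coordinates in metres

def too_close(user: Agent, other: Agent, threshold: float = SOCIAL_DISTANCE_M) -> bool:
    """Return True when the ground-plane distance falls below the threshold."""
    return math.hypot(user.x - other.x, user.z - other.z) < threshold

def proximity_violations(user: Agent, agents: list[Agent]) -> list[Agent]:
    """Collect every AI agent that currently violates the social distance."""
    return [a for a in agents if too_close(user, a)]

# Usage each frame: highlight the violators, e.g. with a warning overlay.
violators = proximity_violations(Agent(0.0, 0.0), [Agent(1.2, 0.5), Agent(5.0, 5.0)])
```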
Presence in virtual reality (VR) is typically assessed through questionnaires administered in the real world after leaving an immersive experience. Previous research suggests that questionnaires in VR reduce biases caused by the real-world setup. However, it remains unclear whether presence questionnaires still provide valid results when subjects are surveyed while the construct is being perceived. In a user study with 36 participants, two standardized presence questionnaires (IPQ, SUSa) were completed either in the real lab, in a virtual lab scene, or in the actual scene after a virtual gaming experience. Our results show inconsistencies between the measurements and that the main scores, as well as the subscales, of the presence measures are significantly affected by the subjects’ environment. As presence questionnaires have been designed to be answered after an immersive experience, we recommend revising these tools for measuring presence in VR.
We introduce a system to assign navigation tasks to a self-moving robot using an Augmented Reality (AR) application running on a smartphone. The system relies on a robot controller and a central server hosted on a PC. The user points at a target location in the phone camera view and the robot moves accordingly. The robot and the phone are independently located in 3D space thanks to registration methods running on the server; hence they need to be neither spatially registered to each other nor in direct line of sight.
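One way to turn a tap in the phone camera view into a navigation goal is to cast the tap ray into the shared world frame and intersect it with the floor plane. The sketch below assumes known camera intrinsics, a y-up world frame, and a floor at y = 0; the paper’s actual registration pipeline runs on the central server and is not reproduced here.

```python
# Illustrative tap-to-ground-target computation via ray/plane intersection.
import numpy as np

def tap_to_ground_target(tap_px, intrinsics, cam_to_world):
    """tap_px: (u, v) pixel; intrinsics: 3x3 K matrix; cam_to_world: 4x4 pose."""
    u, v = tap_px
    ray_cam = np.linalg.inv(intrinsics) @ np.array([u, v, 1.0])  # ray in camera frame
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    ray_world = R @ ray_cam                                      # rotate into world frame
    if abs(ray_world[1]) < 1e-6:
        return None                                              # ray parallel to the floor
    s = -t[1] / ray_world[1]                                     # solve (t + s * ray).y == 0
    return t + s * ray_world if s > 0 else None                  # 3D goal sent to the robot
```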
We visualize life-size sequential photographs of sports activities in a mixed reality (MR) environment. Wearing a video-see-through head-mounted display, an observer records the motions of a player using a handheld camera. Our system then places billboards in the three-dimensional MR space, on which the sequential photographs of the player’s motion are presented at life-size. In a user study, we found that the observers perceived the size of the motions more accurately than when viewing sequential photographs on a monitor display.
In tennis training, beginner players can fail to return the ball when it moves faster than they can react. In this paper, we propose a new training process, mediated-timescale learning (MTL), that manipulates the incoming ball’s motion. The ball first moves in slow motion, allowing players more time to react and develop skills. The ball then moves faster, challenging players with improved skills. To evaluate MTL, we implemented it in a virtual reality (VR) tennis training system. We piloted the MTL implementations (N = 12) to study players’ physical enjoyment. We then conducted an efficacy study (N = 8) to evaluate MTL’s training effects on players’ real-world performance. We found that, in comparison to real-world training, five participants improved more in hitting the sweet spot after training with MTL.
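A hedged sketch of a mediated-timescale schedule: the ball starts in slow motion and is sped up over successive trials. The specific scale values and the linear progression are illustrative assumptions, not the study’s actual parameters.

```python
# Linearly progress the time scale from slow motion toward (or beyond) real time.
def timescale_for_trial(trial: int, n_trials: int,
                        start_scale: float = 0.5, end_scale: float = 1.2) -> float:
    """Interpolate the time scale across trials; values < 1 slow the ball down."""
    t = min(max(trial / max(n_trials - 1, 1), 0.0), 1.0)
    return start_scale + t * (end_scale - start_scale)

def scaled_ball_velocity(real_velocity, trial, n_trials):
    """Scale the incoming ball's velocity vector by the current time scale."""
    k = timescale_for_trial(trial, n_trials)
    return [k * v for v in real_velocity]

# Example: trial 0 of 10 -> 0.5x ball speed; final trial -> 1.2x ball speed.
```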
The perception of time is closely related to our well-being. Psychopathological conditions such as depression, schizophrenia, and autism are often linked to a disturbed sense of time. In this paper, we present a novel framework called Metachron, which is intended to support research on time perception and manipulation in Virtual Reality (VR). Our system allows the systematic modification of events in real time along three main event axes: i) velocity, ii) synchronicity, and iii) density. Our future work will investigate the influence of each dimension on the passage of time (varying the velocity of time flow) and the structure of time (varying the synchronicity of events), which should provide insights for the design of VR diagnostic and therapeutic tools.
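To make the three event axes concrete, the sketch below shows one plausible way a stream of timed events could be modified along velocity, synchronicity, and density. The parameter semantics are assumptions inferred from the abstract, not Metachron’s actual API.

```python
# Hypothetical event-stream modifier along the three axes named above.
import random

def modify_events(event_times, velocity=1.0, synchronicity=1.0, density=1.0, seed=0):
    """velocity scales time flow, synchronicity (< 1) jitters event onsets,
    density (< 1) randomly drops events to thin out the stream."""
    rng = random.Random(seed)
    out = []
    for t in event_times:
        if rng.random() > density:                     # drop events when density < 1
            continue
        jitter = (1.0 - synchronicity) * rng.uniform(-0.5, 0.5)
        out.append((t + jitter) / velocity)            # velocity > 1 compresses time
    return sorted(out)
```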
We present Molecular MR Multiplayer – an interactive, collaborative solution for material design on edge devices, on flat screens, and in Mixed Reality (MR) [3]. The application provides a gaming-like remote collaboration experience for researchers, helping them explore and design chemical compounds. Both single-user and multi-user modes are implemented. The multi-user mode allows mixed collaborative sessions between local and distant users. A concept of digital overlays over the physical world, known as the Metaverse, is developed.
Humans have sharp central vision but low peripheral visual acuity. Prior work has taken advantage of this phenomenon in two ways: foveated rendering (FR) reduces the computational workload of rendering by producing lower visual quality for peripheral regions, and foveated video encoding (FVE) reduces the bitrate of streamed video through heavier compression of peripheral regions. Remote rendering systems require both rendering and video encoding, and the two techniques can be combined to reduce both computing and bandwidth consumption. We report early results from such a combination with remote VR rendering. The results highlight that FR alone causes a large bitrate overhead when combined with normal video encoding, but combining it with FVE mitigates this overhead.
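The core idea of combining FR and FVE is that a single eccentricity-based quality falloff can drive both the rendering resolution scale and the encoder quality. The falloff shape, angular thresholds, and QP range in the sketch below are assumptions for illustration only.

```python
# One falloff function feeding both foveated rendering and foveated encoding.
def quality_falloff(eccentricity_deg: float, fovea_deg: float = 10.0,
                    periphery_deg: float = 60.0) -> float:
    """1.0 inside the fovea, decaying linearly to 0.0 at the far periphery."""
    if eccentricity_deg <= fovea_deg:
        return 1.0
    if eccentricity_deg >= periphery_deg:
        return 0.0
    return 1.0 - (eccentricity_deg - fovea_deg) / (periphery_deg - fovea_deg)

def fr_resolution_scale(ecc: float) -> float:
    """FR: render peripheral regions at a fraction of full resolution."""
    return 0.25 + 0.75 * quality_falloff(ecc)

def fve_qp_offset(ecc: float, max_offset: int = 12) -> int:
    """FVE: raise the quantization parameter (coarser compression) in the periphery."""
    return round(max_offset * (1.0 - quality_falloff(ecc)))
```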
When interacting with virtual objects, users face a fundamental problem: digital information has no physical properties, so the senses involved (vision and touch) do not match well, which negatively affects the interactive experience. In this paper, we propose to give such virtual objects physical affordances that support tangible feedback via mechanical movement. In addition, we developed an interaction prototype system. The system changes the configuration of physical supports, held in place by an electromagnet, to adapt to the shape of different virtual objects, and efficiently provides natural and consistent interactions when placing physical objects onto virtual objects in Augmented Reality (AR) scenarios.
To enable natural walking in small virtual reality (VR) tracking spaces while preventing head-mounted display (HMD) cables from getting twisted, we developed Portals With A Twist (PorTwist), a redirected walking method based on a portal metaphor. We compared PorTwist with a teleportation method in a 2 m × 2.7 m tracking space in a within-subjects user study (N = 34). PorTwist resulted in significantly longer natural walking distances and could prevent HMD cables from getting twisted, while providing levels of perceived presence and simulator sickness comparable to teleportation. We further identified potential to improve usability in the future.
In this work, we propose a ray-casting-based three-dimensional (3D) pointing and dragging interface for naked-eye stereoscopic displays. When a user holds a stylus and points it at the display, the proposed system shows a ray that extends from the stylus into the virtual space of the display. This ray can be used to interact with objects in the virtual space. In a user study, we found that the proposed method allows users to perform 3D pointing with smaller hand movements than a hand-capture-based interface. We also propose a model that extends Fitts’s law and predicts the time required for a 3D pointing task well.
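For reference, the model described above extends Fitts’s law; the abstract does not give the extension’s exact terms, so only the standard Shannon formulation is reproduced here.

```latex
% MT: movement time, A: distance to the target, W: target width,
% a, b: empirically fitted constants.
MT = a + b \log_2\!\left(\frac{A}{W} + 1\right)
```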
We explore a method to reduce motion sickness and allow people to use virtual reality while riding in vehicles. We put forth a usage scenario in which the VR content is based on constant road navigation, so that the actual motion can enhance the VR experience. The method starts with a virtual scene and objects arranged around an infinitely straight road. The motion of the vehicle is sensed by a GPS and IMU module, and the virtual scene is navigated according to the vehicle motion, with its pathway distorted such that the virtual motion produces an optical flow pattern nearly identical to the actual one. This is intended to align the user’s visual and vestibular senses and reduce the effects of vection and motion sickness. We ran a pilot experiment to validate our approach, comparing before-and-after sickness levels with the VR content (1) not aligned to the motion of the vehicle and (2) aligned by our method. Our preliminary results show that sickness was reduced significantly (though not yet to a negligible level) with our approach.
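A minimal sketch, assuming the vehicle’s forward speed (from GPS) and yaw rate (from the IMU) drive the virtual camera along the infinitely straight road so that the virtual optical flow roughly matches the real motion. Class and variable names are illustrative, not the authors’ implementation.

```python
# Integrate sensed vehicle motion into a virtual camera pose on the straight road.
import math

class StraightRoadCamera:
    def __init__(self):
        self.road_distance_m = 0.0   # progress along the straight virtual road
        self.lateral_offset_m = 0.0  # lateral drift accumulated from vehicle yaw
        self.heading_rad = 0.0       # current heading relative to the road axis

    def update(self, speed_mps: float, yaw_rate_rps: float, dt: float):
        """Dead-reckon the virtual camera from the sensed speed and yaw rate."""
        self.heading_rad += yaw_rate_rps * dt
        self.road_distance_m += speed_mps * dt * math.cos(self.heading_rad)
        self.lateral_offset_m += speed_mps * dt * math.sin(self.heading_rad)
        # The virtual pathway is distorted elsewhere so that this motion yields
        # an optical flow pattern close to the real one.
        return self.road_distance_m, self.lateral_offset_m, self.heading_rad
```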
We propose a lightweight, smart, haptic-bracelet-based stimulation system for VR applications. This wireless system, equipped with vibrotactile tactors and a Peltier actuator, generates different haptic and thermal levels for different materials touched in a VR environment.
Scrum is a well-developed and widely utilized agile project management framework that requires extensive training and hands-on experience to master. The latter is not always possible, e.g., during the recent lockdown due to COVID-19. Thus, we propose the creation of a multi-user collaborative virtual simulation of a Scrum sprint that can provide an immersive training experience to remote trainees. Herein, we discuss the design considerations, the elements of the virtual learning environment, and the development process for the platform.
Currently, the most popular text input method in VR is Controller Pointing (CP). While this method is easy and intuitive to use, it requires users to have steady hands, and overlaying the keyboard onto the virtual scene occludes part of the scene. In this work, we propose two new text input methods, Sector Input and T9VR, that utilize a circular keyboard attached to the HTC Vive controller. A preliminary study with 24 subjects was conducted to explore the potential of the proposed methods in comparison with CP. While CP performed significantly better than the proposed methods in terms of objective typing speed and error rate, T9VR matched CP on a number of subjective measures.
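A hedged sketch of how a trackpad touch could be mapped to one sector of a circular keyboard. The number of sectors and the T9-style key grouping are illustrative assumptions, not the paper’s exact layout.

```python
# Map a Vive trackpad touch position to a sector of the circular keyboard.
import math

def touch_to_sector(touch_x: float, touch_y: float, n_sectors: int = 8) -> int:
    """Map a trackpad touch (-1..1, -1..1) to a sector index, counter-clockwise from +x."""
    angle = math.atan2(touch_y, touch_x) % (2 * math.pi)
    return min(int(angle / (2 * math.pi / n_sectors)), n_sectors - 1)

# Example T9-style grouping: each sector holds several letters, disambiguated later.
SECTOR_KEYS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
letters = SECTOR_KEYS[touch_to_sector(0.7, 0.7)]
```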
In this work, we describe the use of a digital docent: a 3D avatar, presented using virtual and augmented reality, as a means for providing interactive storytelling experiences at a living history museum. To allow flexibility depending on the user’s location and access to technology, the app is designed to provide a common experience supporting a variety of different delivery modalities including AR devices, mobile AR, and VR on the Web.
An important goal of glyph-based data visualization is to detect anomalies in data sets or to manage complex search tasks by guiding the user’s attention to regions of interest. Previous studies suggest that the user’s attention is drawn to regions where different perspective projections meet. We describe a method for the perspective distortion of glyphs within a virtual scene, while the rest of the scene is projected according to the common computer graphics camera model. The result is a multi-perspective image that directs the user’s attention. The perspective distortion is only applied while the user focuses on an irrelevant area of the data visualization; as soon as the gaze moves towards the relevant glyph, the distortion is gradually removed. To this end, we evaluate eye-tracking data in our prototype implementation.
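The gaze-contingent removal of the distortion can be expressed as a blending weight that fades out as the gaze approaches the relevant glyph. The angular thresholds and the linear ramp below are assumptions for illustration, not the evaluated parameters.

```python
# Weight for blending a glyph's multi-perspective projection with the standard one.
def distortion_weight(gaze_to_glyph_deg: float,
                      inner_deg: float = 5.0, outer_deg: float = 20.0) -> float:
    """0.0 when the gaze is on the relevant glyph, 1.0 when the gaze is far away."""
    if gaze_to_glyph_deg <= inner_deg:
        return 0.0
    if gaze_to_glyph_deg >= outer_deg:
        return 1.0
    return (gaze_to_glyph_deg - inner_deg) / (outer_deg - inner_deg)

# Per frame: w = distortion_weight(angle_between(gaze_direction, glyph_direction)),
# then interpolate between the standard and the distorted projection with w.
```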
A conceptual space-sharing broadcasting service has been proposed that uses augmented reality/virtual reality (AR/VR) with a head-mounted display. In this concept, virtual performers are displayed at their real-life sizes, and the user experiences proximity to them. Family and friends living apart can also be displayed in this manner, and an individual can communicate with them in real time; this enables both the individual and their peer to enjoy the broadcast media together. In this study, we implemented this concept in a style suitable for daily use and confirmed its effect on the viewing experience. We developed a prototype environment for watching AR/VR mixed content together with a person in a distant place, which is expected to become a popular viewing style for future broadcast media. The distant individual is displayed as a live-action 3D point cloud, and both verbal and nonverbal communication with them are enabled. A demonstration showed that the system conveys a sense of the distant person’s presence and provides the feeling of sharing the same experience among all its users.
Volumetric capture is a technique that allows the creation of “holographic” recordings of actors, sets, and props. The technique can be used to create immersive stories that sometimes reflect aspects of reality better than realistic 3D models do. For example, volumetric captures of actors do not seem to cause the uncanny valley effect. In this paper, we provide an overview of volumetric capture technology and its application to narrative filmmaking.
Virtual reality (VR) has evolved into a trending technology that has proven its worth in various application domains. VR is especially helpful for the training of complex tasks in special environments. In the area of logistics, warehouse management involves a complex workflow that consists of different order picking activities. Due to this complex workflow, stock discrepancies and misplaced wares are typical problems that often occur. To overcome this problem, we have developed a VR training application that integrates an existing warehouse management system and trains typical order picking processes. In our VR training demo, we simulate a real warehouse that is supplied with real stocks and real orders.