Advances in the past century have resulted in unprecedented access to empowering technology, with user interfaces that typically provide clear distinction and separation between environments, technology and people.
The progress in recent decades indicates, however, inevitable developments where sensing, display, actuation and computation will seek to integrate more intimately with matter, humans and machines. This talk will explore some of the radical new challenges and opportunities that these advancements imply for next-generation interfaces.
We propose, implement and evaluate the use of a smartphone application for real-time six-degrees-of-freedom user input. We show that our app-based approach achieves high accuracy and goes head-to-head with expensive externally tracked controllers. The strength of our application is that it is simple to implement and is highly accessible --- requiring only an off-the-shelf smartphone, without any external trackers, markers, or wearables. Due to its inside-out tracking and its automatic remapping algorithm, users can comfortably perform subtle 3D inputs everywhere (world-scale), without any spatial or postural limitations. For example, they can interact while standing, sitting or while having their hands down by their sides. Finally, we also show its use in a wide range of applications for 2D and 3D object manipulation, thereby demonstrating its suitability for diverse real-world scenarios.
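As a rough illustration of how phone-based 6-DoF input of this kind can be obtained (an illustrative sketch, not the authors' implementation), the following Unity script assumes an AR session is running on the phone so that the main camera transform carries the device's inside-out-tracked world pose, and streams that pose over UDP to a host application. The host address, port, and packet format are assumptions.

```csharp
// Hypothetical sketch: stream the phone's inside-out-tracked pose (position +
// rotation) to a host application once per frame. Assumes an AR session is
// running so Camera.main's transform reflects the device pose in world space.
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class PhonePoseStreamer : MonoBehaviour
{
    [SerializeField] private string hostAddress = "192.168.0.10"; // hypothetical
    [SerializeField] private int hostPort = 9000;                 // hypothetical
    private UdpClient udp;

    void Start()
    {
        udp = new UdpClient();
    }

    void Update()
    {
        Transform cam = Camera.main.transform;   // device pose from inside-out tracking
        Vector3 p = cam.position;
        Quaternion q = cam.rotation;

        // Simple CSV packet: px,py,pz,qx,qy,qz,qw (a real system would use a
        // binary protocol and handle remapping/clutching on the host side).
        string packet = string.Format("{0},{1},{2},{3},{4},{5},{6}",
            p.x, p.y, p.z, q.x, q.y, q.z, q.w);
        byte[] bytes = Encoding.UTF8.GetBytes(packet);
        udp.Send(bytes, bytes.Length, hostAddress, hostPort);
    }

    void OnDestroy()
    {
        if (udp != null) udp.Close();
    }
}
```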
With the spread of HMD-based VR experiences, many approaches have been proposed to improve the experience by providing tactile information to the fingertips, but these suffer from problems such as being difficult to attach and detach and hindering the free movement of the fingers. To address these issues, we developed Haptopus, which embeds a tactile display in the HMD and presents tactile sensations associated with the fingers to the face. In this paper, we conducted a preliminary investigation to determine the best suction pressure and compared Haptopus with conventional tactile presentation approaches. The results confirm that Haptopus improves the quality of the VR experience.
This work investigated how a tracked, real golf club, used for high-fidelity passive haptic feedback in virtual reality, affected performance relative to using tracked controllers for a golf putting task. The primary hypothesis evaluated in this work was that overall accuracy would be improved through various inertial advantages in swinging a real club as well as additional alignment and comfort advantages from placing the putter on the floor. We also expected higher user preference for the technique and correlation with putting performance in the real environment. To evaluate these prospective advantages, a user study with a cross-over design was conducted with 20 participants from the local population. Results confirmed performance advantages as well as preference for the tracked golf club over the controller, but we were not able to confirm a correlation with real-world putting. Future work will investigate means to strengthen this aspect, while evaluating new research opportunities presented by study findings.
Besides sketching in mid-air, Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects, such as a lampshade for a lamp socket. We categorize guidance types of real objects into flat, concave, and convex surfaces, edges, and surface markings. We studied how accurately these guides let users draw 3D shapes attached to physical vs. virtual objects in AR. Results show that tracing physical objects is 48% more accurate, and can be performed in a similar time compared to virtual objects. Guides on physical objects further improve accuracy especially in the vertical direction. Our findings provide initial metrics when designing AR sketching systems.
The presence of a third dimension makes accurate drawing in virtual reality (VR) more challenging than 2D drawing. These challenges include higher demands on spatial cognition and motor skills, as well as the potential for mistakes caused by depth perception errors. We present Multiplanes, a VR drawing system that supports both the flexibility of freehand drawing and the ability to draw accurate shapes in 3D by affording both planar and beautified drawing. The system was designed to address the above-mentioned challenges. Multiplanes generates snapping planes and beautification trigger points based on previous and current strokes and the current controller pose. Based on geometrical relationships to previous strokes, beautification trigger points serve to guide the user to reach specific positions in space. The system also beautifies users' strokes based on the most probable intended shape while they are being drawn. With Multiplanes, in contrast to other systems, users do not need to manually activate such guides, allowing them to focus on the creative process.
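To make the idea of snapping planes concrete, here is a minimal sketch (our illustration, not the Multiplanes implementation): each new stroke point is projected onto the nearest candidate plane whenever the controller tip is within a snapping threshold, and is otherwise kept as a freehand point. The candidate planes and the threshold value are assumptions.

```csharp
// Illustrative sketch of planar snapping for a VR drawing tool (not the
// Multiplanes source). Candidate planes would be generated from previous
// strokes and the current controller pose; here they are simply passed in.
using System.Collections.Generic;
using UnityEngine;

public static class PlanarSnapping
{
    // Returns the stroke point to record: the projection onto the closest
    // candidate plane if the controller tip is within snapDistance of it,
    // otherwise the raw tip position (freehand drawing).
    public static Vector3 SnapPoint(Vector3 tipPosition,
                                    IReadOnlyList<Plane> candidatePlanes,
                                    float snapDistance = 0.02f) // 2 cm, assumed
    {
        float bestDistance = float.MaxValue;
        Vector3 bestPoint = tipPosition;

        foreach (Plane plane in candidatePlanes)
        {
            float d = Mathf.Abs(plane.GetDistanceToPoint(tipPosition));
            if (d < bestDistance && d <= snapDistance)
            {
                bestDistance = d;
                bestPoint = plane.ClosestPointOnPlane(tipPosition);
            }
        }
        return bestPoint;
    }
}
```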
In this paper we present the design, implementation, and evaluation of IMRCE, our immersive mixed reality collaborative environment toolkit. IMRCE is a lightweight, flexible, and robust Unity toolkit that allows designers and researchers to rapidly prototype mixed reality-mixed presence (MR-MP) environments that connect physical spaces, virtual spaces, and devices. IMRCE helps collaborators maintain group awareness of the shared collaborative environment by providing visual cues such as position indicators and virtual hands. At the same time IMRCE provides flexibility in how physical and virtual spaces are mapped, allowing work environments to be optimised for each collaborator while maintaining a sense of integration. The main contribution of the toolkit is its encapsulation of these features, allowing rapid development of MR-MP systems. We demonstrate IMRCE's features by linking a physical environment with tabletop and wall displays to a virtual replica augmented with support for direct 3D manipulation of shared work objects. We also conducted a comparative evaluation of IMRCE against a standard set of Unity libraries with complementary features with 10 developers, and found significant reductions in time taken, total LOC, errors, and requests for assistance.
When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from estimation in the physical world due to technical limitations, such as a narrow field of view and the low fidelity of the virtual body compared to one's real body.
In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized hand using personalized characteristics and a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task that was based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence, but also supported participants in correctly estimating the size of a virtual object in the proximity of their hand.
Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another's behavior, can positively affect social interactions. In virtual environments, user behavior can be replicated on avatars, and agent behaviors can be artificially constructed. By combining both, hybrid avatar-agent technologies aim at actively mediating virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the "Mimicry Injector", that injects artificial mimicry into real-time virtual interactions. In an evaluation study, two participants were embodied in a Virtual Reality (VR) simulation and had to perform a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.
This paper presents a comparative study between two popular AR systems during a collocated collaborative task. The goal of the study is to start a body of knowledge that describes the effects of different AR approaches on users' experience and performance; i.e., to look at AR not as a single entity with uniform characteristics. Pairs of participants interacted with a game of Match Pairs in both hand-held and projected AR conditions, and their engagement, preference, task completion time, and number of game moves were recorded. Participants were also video-recorded during play for additional insights. No significant differences were found in users' self-reported engagement, and 56.25% of participants described a preference for the hand-held experience. On the other hand, participants completed the task significantly faster in the projected condition, despite having performed more game moves (card flips). We conclude the paper by discussing the effect of these two AR prototypes on participants' communication strategies, and how to design hand-held interfaces that could elicit the benefits of projected AR.
In this paper, we present a comparative evaluation of three different approaches to improving users' spatial awareness in virtual reality environments, and consequently their user experience and productivity. Using a scientific visualization task, we tested how well 21 participants could navigate an immersive virtual environment. Our results suggest that landmarks, a 3D minimap, and waypoint navigation all contribute to improved spatial orientation, while the macroscopic view of the environment provided by the 3D minimap has the greatest positive impact. Participants also preferred the 3D minimap over the other techniques by a wide margin, in terms of both usability and immersion.
Several transition techniques (TTs) exist for Virtual Reality (VR) that allow users to travel to a new target location in the vicinity of their current position. To overcome a greater distance, or even move to a different Virtual Environment (VE), other TTs are required that allow for an immediate, quick, and believable change of location. Such TTs are especially relevant for VR user studies and storytelling in VR, yet their effect on experienced presence, illusion of virtual body ownership (IVBO), and naturalness, as well as their efficiency, is largely unexplored. In this paper, we thus identify and compare three metaphors for transitioning between VEs with respect to those qualities: an in-VR head-mounted display metaphor, a turn-around metaphor, and a simulated blink metaphor. Surprisingly, the results show that the tested metaphors did not affect experienced presence and IVBO. This is especially important for researchers and game designers who want to build more natural VEs.
Spatial user interfaces that help people navigate often focus on turn-by-turn instructions, ignoring how they might support the incidental learning of spatial knowledge. Drawing on theories and findings from the area of spatial cognition, this paper aims to understand how turn-by-turn instructions and relative location updates can support the incidental learning of spatial (route and survey) knowledge. We conducted a user study in which people used map-based and video-based spatial interfaces to navigate to different locations in an indoor environment, using turn-by-turn directions and relative location updates. Consistent with the existing literature, we found that providing only turn-by-turn directions was generally not as effective as relative location updates in helping people acquire spatial knowledge; map-based interfaces were generally better for the incidental learning of survey knowledge, while video-based interfaces were better for route knowledge. Our results suggest that relative location updates encourage active processing of spatial information, which enables better incidental learning of spatial knowledge. We discuss the implications of our results for design trade-offs in navigation interfaces that facilitate the learning of spatial knowledge.
"Hands-free" pointing techniques used in mid-air gesture interaction require precise motor control and dexterity. Although being applied in a growing number of interaction contexts over the past few years, this input method can be challenging for older users (60+ years old) who experience natural decline in pointing abilities due to natural ageing process. We report the findings of a target acquisition experiment in which older adults had to perform "point-and-select" gestures in mid-air. The experiment investigated the effect of 6 feedback conditions on pointing and selection performance of older users. Our findings suggest that the bimodal combination of Visual and Audio feedback lead to faster target selection times for older adults, but did not lead to making less errors. Furthermore, target location on screen was found to play a more important role in both selection time and accuracy of point-and-select tasks than feedback type.
Object selection in a head-mounted display system has been studied extensively. Although most previous work indicates that users perform better when selecting with minimal offset added to the cursor, it is often not possible to directly select objects that are out of arm's reach. Thus, it is not clear whether offset-based techniques result in improved overall performance. Moreover, due to differences in arm and shoulder muscle requirements between a hand-held device and a motion-capture device, selection performance may be affected by factors related to the ergonomics of the input device. To explore these uncertainties, we conducted a user study evaluating the effects of four virtual cursor offset techniques on 3D object selection performance, using Fitts' model and the ISO 9241-9 standard, while comparing two input devices in a head-mounted display. The results show that selection with No Offset is most efficient when the target is within reach. When the target is out of reach, Linear Offset outperforms Fixed-Length Offset and Go-Go Offset in movement time, error rate, and effective throughput, as well as in subjective preference. Overall, the Razer Hydra controller provided better and more stable selection performance than the Leap Motion.
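For reference, the sketch below shows one plausible reading of the offset mappings compared in the study; the exact parameter values are not given in the abstract, so the constants here (and the classic Go-Go formulation after Poupyrev et al.) are assumptions.

```csharp
// Illustrative cursor-offset mappings for 3D selection (parameter values are
// assumptions, not those of the study). handDistance is the physical distance
// of the hand from the body origin; each function returns the distance at
// which the virtual cursor is placed along the same direction.
public static class CursorOffsets
{
    // No Offset: cursor sits exactly at the hand.
    public static float NoOffset(float handDistance) => handDistance;

    // Fixed-Length Offset: cursor is pushed out by a constant amount.
    public static float FixedLengthOffset(float handDistance, float offset = 0.3f)
        => handDistance + offset;

    // Linear Offset: cursor distance grows proportionally with hand distance.
    public static float LinearOffset(float handDistance, float gain = 2.0f)
        => handDistance * gain;

    // Go-Go-style offset (after Poupyrev et al.): one-to-one within a
    // threshold D, then growing quadratically beyond it.
    public static float GoGoOffset(float handDistance, float D = 0.4f, float k = 6.0f)
        => handDistance < D
            ? handDistance
            : handDistance + k * (handDistance - D) * (handDistance - D);
}
```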
We present two experiments evaluating the effectiveness of the eye as a controller for travel in virtual reality (VR). We used the FOVE head-mounted display (HMD), which includes an eye tracker. The first experiment compared seven different travel techniques to control movement direction while flying through target rings. The second experiment involved travel on a terrain: moving to waypoints while avoiding obstacles with three travel techniques. Results of the first experiment indicate that performance of the eye tracker with head-tracking was close to head motion alone, and better than eye-tracking alone. The second experiment revealed that completion times of all three techniques were very close. Overall, eye-based travel suffered from calibration issues and yielded much higher cybersickness than head-based approaches.
Situated tangible robot programming allows programmers to reference parts of the workspace relevant to the task by indicating objects, locations, and regions of interest using tangible blocks. While it takes advantage of situatedness compared to traditional text-based and visual programming tools, it does not allow programmers to inspect what the robot detects in the workspace, nor to understand any programming or execution errors that may arise. In this work we propose to use a projector mounted on the robot to provide such functionality. This allows us to provide an interactive situated tangible programming experience, taking advantage of situatedness, both in user input and system output, to reference parts of the robot workspace. We describe an implementation and evaluation of this approach, highlighting its differences from traditional robot programming.
Spatial user interfaces such as wearable fitness trackers are widely used to monitor and improve athletic performance. However, most fitness tracker interfaces require bimanual interactions, which significantly impact the user's gait and pace. This paper evaluates a one-handed thumb-to-ring gesture interface for quickly accessing information without interfering with physical activity, such as running. In a pilot study, a minimal gesture set was selected, favouring gestures that can be executed reflexively to minimize distraction and cognitive load. The evaluation revealed that, among the selected gestures, tap, swipe-down, and swipe-left were the easiest to use. Interestingly, being in motion did not have a significant effect on ease of use or execution time; however, interacting in motion was subjectively rated as more demanding. Finally, the gesture set was evaluated in real-world applications, with users performing a running exercise while controlling a lap timer, a distance counter, and a music player.
Numerous methods have been proposed for presenting tactile sensations from objects in virtual environments. In particular, wearable tactile displays for the fingers, such as fingertip-type and glove-type displays, have been intensely studied. However, the weight and size of these devices typically hinder the free movement of the fingers, especially in a multi-finger scenario. To cope with this issue, we have proposed a method of presenting the haptic sensation of the fingertip, including the direction of force, to the forearm. In this study, we extended the method to three fingertips (thumb, index finger, and middle finger) and three locations on the forearm using a five-bar linkage mechanism. We tested whether all of the tactile information presented by the device could be discriminated, and confirmed a discrimination accuracy of about 90%. We then conducted an experiment presenting grasping forces in a virtual environment, confirming that our device improved the realism of the experience compared with conditions with no haptic cues or with vibration cues only.
Smartwatches support spatial user input, most notably for the continuous tracking of physical activity and relevant health parameters. Additionally, smartwatches are experiencing greater social acceptability, even among the elderly. While the step count is an essential parameter for calculating the user's spatial activity, current detection algorithms are insufficient for counting steps when using a rollator, a common walking aid for elderly people. In a pilot study conducted with eight different wrist-worn smart devices, an overall recognition rate of only ~10% was achieved. This is because the characteristic motions used by step counting algorithms are poorly reflected at the user's wrist when pushing a rollator. The same issue arises in other activities such as pushing a pram, a bike, or a shopping cart. This paper therefore introduces an improved step counting algorithm for wrist-worn accelerometers. The new algorithm was first evaluated in a controlled study and achieved promising results, with an overall recognition rate of ~85%. As a follow-up, a preliminary field study with randomly selected elderly people using rollators resulted in similar detection rates of ~83%. We expect this research to contribute to greater step counting precision in smart wearable technology.
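To illustrate why wrist-based step counting breaks down with a rollator, the sketch below shows a naive peak-detection counter of the kind the abstract alludes to (a generic baseline for illustration only, not the improved algorithm): it counts peaks in the acceleration magnitude, which are largely absent when the wrist rests on a rollator handle. Threshold and timing values are assumptions.

```csharp
// Generic baseline step counter over wrist accelerometer samples (illustrative
// only; not the improved algorithm described above). Counts a step whenever
// the acceleration magnitude crosses a threshold above 1 g, with a refractory
// period to avoid double counting. With a rollator, the arm swing that
// produces these peaks is mostly missing, so this style of counter undercounts.
using System;
using System.Collections.Generic;

public static class NaiveStepCounter
{
    public static int CountSteps(IEnumerable<(double ax, double ay, double az, double t)> samples,
                                 double peakThreshold = 11.5,   // m/s^2, assumed
                                 double minStepInterval = 0.3)  // seconds, assumed
    {
        int steps = 0;
        double lastStepTime = double.NegativeInfinity;

        foreach (var s in samples)
        {
            double magnitude = Math.Sqrt(s.ax * s.ax + s.ay * s.ay + s.az * s.az);
            if (magnitude > peakThreshold && s.t - lastStepTime >= minStepInterval)
            {
                steps++;
                lastStepTime = s.t;
            }
        }
        return steps;
    }
}
```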
Extended reality (XR) technology challenges practitioners to rethink methods of representation in art, a topic our laboratory has also been working on [2]. In this demonstration, we present Air Maestros (AM), a multi-user audiovisual experience in mixed reality (MR) space using Microsoft HoloLens. The purpose of AM is to extend an ordinary music sequencer into three-dimensional (3D) space and a multi-user setting. Users place 3D note objects in the MR space and, with a certain gesture, shoot a glowing ball at them. When a shot hits a 3D note object, audiovisual effects appear at the object's spatial position.
We developed a cubic keyboard to exploit the three-dimensional (3D) space of virtual reality (VR) environments. The user enters a word by drawing a stroke with the controller. The keyboard consists of 27 keys arranged in a 3 x 3 x 3 (vertical, horizontal, and depth) 3D array; all 26 letters of the alphabet are assigned to 26 keys, and the center key is blank. The user moves the controller to the key of a letter of the word and then selects that key by slowing the controller's movement.
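A minimal sketch of how such a 3 x 3 x 3 layout and slow-down selection might be implemented follows (our illustration; the letter-to-cell assignment, cell size, and speed threshold are assumptions, since the abstract does not specify them).

```csharp
// Illustrative cubic keyboard: 27 cells in a 3 x 3 x 3 grid centred on the
// keyboard origin, 26 letters plus a blank centre cell. A key is "selected"
// when the controller's speed drops below a threshold while inside a cell.
// Letter layout, cell size, and speed threshold are assumptions.
using UnityEngine;

public class CubicKeyboard : MonoBehaviour
{
    public float cellSize = 0.08f;      // metres per cell (assumed)
    public float selectSpeed = 0.05f;   // m/s; slower than this selects (assumed)

    private const string Letters = "abcdefghijklm nopqrstuvwxyz"; // ' ' = blank centre cell
    private Vector3 previousTip;
    private float previousTime;
    private bool hasPrevious;

    // Maps a controller-tip position (in keyboard-local space) to a letter,
    // or '\0' if the tip is outside the 3 x 3 x 3 volume.
    public char LetterAt(Vector3 localTip)
    {
        int x = Mathf.FloorToInt(localTip.x / cellSize + 1.5f);
        int y = Mathf.FloorToInt(localTip.y / cellSize + 1.5f);
        int z = Mathf.FloorToInt(localTip.z / cellSize + 1.5f);
        if (x < 0 || x > 2 || y < 0 || y > 2 || z < 0 || z > 2) return '\0';
        return Letters[x + 3 * y + 9 * z];
    }

    // Call once per frame with the current controller-tip position.
    public char UpdateSelection(Vector3 localTip)
    {
        float speed = float.MaxValue;
        if (hasPrevious && Time.time > previousTime)
            speed = (localTip - previousTip).magnitude / (Time.time - previousTime);
        previousTip = localTip;
        previousTime = Time.time;
        hasPrevious = true;

        char key = LetterAt(localTip);
        // A real implementation would debounce so one dwell selects only once.
        return (key != '\0' && key != ' ' && speed < selectSpeed) ? key : '\0';
    }
}
```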
Cinematic Virtual Reality (CVR) has been increasing in popularity over the last few years. During our research on user attention in CVR, we encountered many analytic demands and documented potentially useful features. This led us to develop an analysis tool for omnidirectional movies: the CVR-Analyzer.
As the most common writing material in our daily life, paper is an important carrier of traditional painting, and it also offers a more comfortable physical feel than electronic screens. In this study, we designed a shadow-art device for human--computer interaction called MagicPAPER, which is based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make MagicPAPER more interesting, we developed thirteen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper.
We demonstrate a simple technique that allows tangible objects to track their own position on a surface using an off-the-shelf optical mouse sensor. In addition to measuring the (relative) movement of the device, the sensor also allows capturing a low-resolution raw image of the surface. This makes it possible to detect the absolute position of the device via marker patterns at known positions. Knowing the absolute position may either be used to trigger actions or as a known reference point for tracking the device. This demo allows users to explore and evaluate affordances and applications of such tangibles.
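One way an absolute position could be recovered from the sensor's low-resolution raw frame is sketched below (illustrative only; the frame size, marker patterns, and matching threshold are assumptions, not details of the demo): the frame is compared against a small set of known binary marker templates, and a sufficiently good match yields that marker's known surface position.

```csharp
// Illustrative absolute-position lookup from a low-resolution raw frame of an
// optical mouse sensor (e.g. ~18 x 18 grayscale pixels). Known binary marker
// templates printed at known surface positions are matched against the frame;
// a sufficiently good match gives the tangible's absolute position. Frame
// size, templates, and threshold are assumptions.
using System.Collections.Generic;

public record Marker(bool[,] Pattern, double SurfaceX, double SurfaceY);

public static class MarkerLocator
{
    // Returns the surface position of the best-matching marker, or null if no
    // marker matches well enough (the tangible then falls back to relative
    // motion tracking from the same sensor).
    public static (double x, double y)? Locate(byte[,] frame,
                                               IEnumerable<Marker> markers,
                                               double minMatchRatio = 0.9)
    {
        int h = frame.GetLength(0), w = frame.GetLength(1);
        (double x, double y)? best = null;
        double bestRatio = minMatchRatio;

        foreach (Marker m in markers)
        {
            // Keep the sketch simple: templates are the same size as the frame.
            if (m.Pattern.GetLength(0) != h || m.Pattern.GetLength(1) != w) continue;

            int agree = 0;
            for (int r = 0; r < h; r++)
                for (int c = 0; c < w; c++)
                {
                    bool dark = frame[r, c] < 128;   // crude binarisation
                    if (dark == m.Pattern[r, c]) agree++;
                }

            double ratio = (double)agree / (h * w);
            if (ratio > bestRatio) { bestRatio = ratio; best = (m.SurfaceX, m.SurfaceY); }
        }
        return best;
    }
}
```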
In this demo, we present Slackliner, an interactive slackline training assistant that features life-size projection, skeleton tracking, and real-time feedback. As in other sports, proper training leads to faster skill acquisition and lessens the risk of injury. We chose a set of exercises from the slackline literature and implemented an interactive trainer that guides the user through the exercises, giving feedback on whether they were executed correctly. Additionally, a post-analysis provides the trainee with more detailed feedback about their performance. The results from a study comparing the interactive slackline training system to a classic approach using a personal trainer indicate that it can serve as an enjoyable and effective alternative to classic training methods (see [1] for more details). The contribution of the present demo is to showcase how whole-body gestures can be used in interactive sports training systems. The design and implementation of the system inform many potential applications, ranging from rehabilitation to fitness gyms and home use.
RealityAlert is a hardware device that we designed to alert immersive virtual environment (IVE) users to potential collisions with real-world (RW) objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We define a sensor-actuator mapping that is minimally obtrusive in normal use but alerts efficiently in risky situations.
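One plausible form of such a sensor-actuator mapping is sketched below (illustrative; the sensor count, distance thresholds, and intensity curve are assumptions, not the RealityAlert design): each distance sensor drives the vibro-tactile actuator on the corresponding side of the face cushion, staying silent beyond a safe distance and ramping up as an obstacle approaches.

```csharp
// Illustrative distance-to-vibration mapping for HMD-mounted proximity
// sensors and face-cushion actuators (threshold values are assumptions).
// Each sensor i drives actuator i on the same side of the cushion.
public static class CollisionAlertMapping
{
    public const float SafeDistance = 1.0f;     // metres: no vibration beyond this
    public const float CriticalDistance = 0.3f; // metres: full vibration at or below this

    // Maps a measured distance (metres) to a vibration intensity in [0, 1].
    public static float Intensity(float distanceMeters)
    {
        if (distanceMeters >= SafeDistance) return 0f;      // unobtrusive in normal use
        if (distanceMeters <= CriticalDistance) return 1f;  // strong alert when very close
        // Linear ramp between critical and safe distance.
        return 1f - (distanceMeters - CriticalDistance) / (SafeDistance - CriticalDistance);
    }

    // Applies the mapping to all sensors; intensities[i] would be written to
    // the drive level of actuator i by the firmware.
    public static void Map(float[] sensorDistances, float[] intensities)
    {
        for (int i = 0; i < sensorDistances.Length; i++)
            intensities[i] = Intensity(sensorDistances[i]);
    }
}
```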
Immersive technologies have been touted as empathetic mediums. This capability has yet to be fully explored through machine learning integration. Our demo seeks to explore proxemics in mixed-reality (MR) human-human interactions.
The author developed a system in which spatial features can be manipulated in real time by identifying emotions from unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One, the first commercial spatial computing head-mounted (virtual retinal) display, is used as the interactive interface.
A novel spatial user interface visualization element is prototyped that leverages the affordances of mixed reality by introducing both a spatial and an affective component to interfaces.
We propose the idea of a powerful mobile eye tracking platform that enables whole new ways of explicit and implicit human-machine interactions in complex industrial settings. The system is based on two hardware components (NVIDIA Jetson TX2, Pupil labs eye tracker) and a message-oriented framework for real-time processing [1]. The design is described and potential use cases are sketched.
In this paper, we propose a framework that uses deep learning on a server to recognize street signboards captured with mobile devices. The proposed framework enables users to determine the types of shops at their location. Our experimental results show that the proposed framework recognized signboards with 86% accuracy within 1 second.
Watching omnidirectional movies via head mounted displays puts the viewer inside the scene. In this way, the viewer enjoys an immersive movie experience. However, due to the free choice of field-of-view, it is possible to miss details which are important for the story. On the other hand, the additional space component gives the filmmakers new opportunities to construct stories. To support filmmakers and viewers, we introduce the concept of a 'spaceline' (named in analogy to the traditional 'timeline') which connects movie sequences via interactive regions. We developed a spaceline editor that allows researchers and filmmakers to define such regions as well as indicators for visualising regions inside and outside the current field-of-view.
This paper describes the development and implementation of "Virtual Campus", a prototype that brings together a set of interfaces and interaction techniques (VR, AR, mobile apps, 3D) to offer alternatives to traditionally used systems (web, desktop applications, etc.) for the spatial management of the Universidad Católica de Pereira campus, including the reservation of classrooms, objects, and zones, and security, among others.
This paper proposes a pointing method named Bring Your Own Pointer (BYOP). BYOP lets additional participants join a shared-display collaboration and allows users to point at the display simultaneously using their own smartphones. A sticker application was developed to demonstrate BYOP.
Current Virtual Reality (VR) devices have limited fields of view (FOV). A limited FOV amplifies the problem of objects receding from view. Previous work has proposed different techniques to visualize the position of out-of-view objects; however, these techniques do not allow users to identify those objects. In this work, we compare three different ways of identifying out-of-view objects. Our user study shows that participants prefer to have the identification always visible.
According to graphology, people's emotional states can be detected from their handwriting. Unlike writing on paper, which can be analysed through its on-surface properties, spatial interaction-based handwriting is entirely in-air. Consequently, the techniques used in graphology to reveal the emotions of the writer are not directly transferable to spatial interaction. The purpose of our research is to propose a 3D handwriting system with emotional capabilities.
For our study, we retained eight basic emotions represented by a large spectrum of coordinates in Russell's valence-arousal model: afraid, angry, disgusted, happy, sad, surprised, amorous, and serious. We used the Leap Motion sensor (https://www.leapmotion.com) to capture hand motion, and C# with the Unity 3D game engine (https://unity3d.com) for the 3D rendering of the handwritten characters. With our system, users can write freely with their fingers in the air and immerse themselves in their handwriting by wearing a virtual reality headset.
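As a rough sketch of the capture-and-render loop (our illustration; GetIndexTipPosition() stands in for the Leap Motion API call, and the sampling distance is an assumption): the index fingertip position is sampled each frame and appended to a Unity LineRenderer while the user is writing.

```csharp
// Illustrative in-air handwriting capture in Unity. GetIndexTipPosition() is a
// hypothetical wrapper around the Leap Motion API that returns the index
// fingertip position in world space; isWriting would be toggled by a pinch or
// similar gesture. The sampling distance is an assumption.
using System.Collections.Generic;
using UnityEngine;

public class AirHandwriting : MonoBehaviour
{
    public LineRenderer strokeRenderer;     // one renderer per stroke
    public bool isWriting;                  // set by a writing gesture
    public float minPointDistance = 0.003f; // metres between recorded points (assumed)

    private readonly List<Vector3> points = new List<Vector3>();

    void Update()
    {
        if (!isWriting) return;

        Vector3 tip = GetIndexTipPosition(); // hypothetical Leap Motion wrapper
        if (points.Count == 0 ||
            Vector3.Distance(points[points.Count - 1], tip) >= minPointDistance)
        {
            points.Add(tip);
            strokeRenderer.positionCount = points.Count;
            strokeRenderer.SetPositions(points.ToArray());
        }
    }

    private Vector3 GetIndexTipPosition()
    {
        // Placeholder: a real implementation would query the Leap Motion frame
        // for the extended index finger's tip position and convert it to Unity
        // world coordinates.
        return Vector3.zero;
    }
}
```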
We aim to create a rendering model that can be universally applied to any handwriting and any alphabet: our choice of parameters is inspired by both Latin typography and Chinese calligraphy, characterised by its four elementary writing instruments: the brush, the ink, the brush-stand and the ink-stone. The final parameter selection process was carried out by immersing ourselves in our own in-air handwriting and through numerous trials.
The five rendering parameters we chose are: (1) weight determined by the radius of the rendered stroke; (2) smoothness determined by the minimum length of one stroke segment; (3) tip of stroke determined by the ratio of the radius to the writing speed; (4) ink density determined by the opacity of the rendering material; and (5) ink dryness determined by the texture of the rendering material, which can be coarse or smooth.
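The sketch below shows one way these five parameters could be held per emotion and applied to a Unity stroke (an illustration of the parameterisation, not the authors' code; the property names and the mapping onto LineRenderer and material properties are assumptions).

```csharp
// Illustrative container for the five rendering parameters and a mapping onto
// a Unity LineRenderer. The concrete mappings (e.g. which material property
// encodes ink dryness) are assumptions made for the sake of the example.
using UnityEngine;

[System.Serializable]
public class EmotionStrokeStyle
{
    public float weight = 0.01f;       // (1) radius of the rendered stroke, metres
    public float smoothness = 0.005f;  // (2) minimum length of one stroke segment, metres
    public float tipRatio = 1.0f;      // (3) ratio of radius to writing speed at the tip
    public float inkDensity = 1.0f;    // (4) opacity of the rendering material, 0..1
    public Texture inkTexture;         // (5) coarse or smooth texture = ink dryness

    public void ApplyTo(LineRenderer stroke)
    {
        stroke.startWidth = weight * 2f;            // diameter from radius
        stroke.endWidth = weight * 2f * tipRatio;   // thinner or thicker tip (assumed mapping)

        Material mat = stroke.material;
        Color c = mat.color;
        c.a = inkDensity;                 // ink density rendered as opacity
        mat.color = c;
        mat.mainTexture = inkTexture;     // ink dryness rendered as texture
        // smoothness would be applied when sampling stroke points: discard
        // points closer than `smoothness` to the previous one.
    }
}
```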
Having implemented the 3D handwriting system and empirically determined five rendering parameters, we designed a survey to gather opinions on which rendering parameter values are most effective at conveying the intended emotions. For each parameter, we created three handwriting samples by varying the value of the parameter; to avoid a combinatorial explosion in the number of samples, each parameter was varied independently of the others. The formula we used to calculate the optimal value R of a parameter is:

R = (q1·P1 + q2·P2 + q3·P3) / Q

where i = 1, 2, 3 indexes the value of the parameter used in each sample, Q is the total number of respondents (64 on average), qi is the number of respondents who chose sample i, and Pi denotes the parameter value used in sample i. Applying the R values to the 3D handwriting system in Unity, we obtain the eight emotional styles.
We calculated the Euclidean distances between each pair of emotions using both their 2D coordinates (x, y) in Russell's valence-arousal emotion model and their 5-dimensional vectors of normalised parameter values. Across all pairs of emotions, there is a positive correlation (R = 0.41) between the two distances. This is an interesting result, which seems to support the parameter values chosen for the model.
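For concreteness, this distance-and-correlation analysis can be reproduced as follows (a sketch under the stated assumptions: the valence-arousal coordinates and normalised parameter vectors are supplied by the caller, and the correlation is a standard Pearson coefficient over all emotion pairs).

```csharp
// Sketch of the distance correlation analysis: Euclidean distances between
// emotion pairs in 2D valence-arousal space and in the 5D space of normalised
// rendering parameters, then the Pearson correlation between the two distance
// lists. Input coordinates are assumed to be provided by the caller.
using System;
using System.Collections.Generic;
using System.Linq;

public static class EmotionDistanceAnalysis
{
    static double Euclidean(double[] a, double[] b) =>
        Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());

    static double Pearson(IList<double> xs, IList<double> ys)
    {
        double mx = xs.Average(), my = ys.Average();
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < xs.Count; i++)
        {
            cov += (xs[i] - mx) * (ys[i] - my);
            vx += (xs[i] - mx) * (xs[i] - mx);
            vy += (ys[i] - my) * (ys[i] - my);
        }
        return cov / Math.Sqrt(vx * vy);
    }

    // russell[e] = (valence, arousal); parameters[e] = 5 normalised values.
    public static double Correlate(IReadOnlyDictionary<string, double[]> russell,
                                   IReadOnlyDictionary<string, double[]> parameters)
    {
        var emotions = russell.Keys.ToList();
        var d2 = new List<double>();   // distances in valence-arousal space
        var d5 = new List<double>();   // distances in parameter space
        for (int i = 0; i < emotions.Count; i++)
            for (int j = i + 1; j < emotions.Count; j++)
            {
                d2.Add(Euclidean(russell[emotions[i]], russell[emotions[j]]));
                d5.Add(Euclidean(parameters[emotions[i]], parameters[emotions[j]]));
            }
        return Pearson(d2, d5);        // reported as R = 0.41 in the study
    }
}
```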
We then conducted another survey (42 respondents) to evaluate the emotional capabilities of our rendering model. Handwriting samples in both Chinese and English were produced for each of the 8 emotions, making a total of 16 samples. The four most reliably recognised emotions were afraid, sad, serious, and angry; binomial tests with a 95% confidence interval showed that for these four emotions the respondents' choices differed significantly from random chance. We note that these four emotions all have negative or neutral valence in Russell's valence-arousal model. The emotion afraid was particularly well recognised. The emotion happy was well recognised but was also often confused with serious. The least correctly identified emotions were disgusted, amorous, and surprised. Selecting one emotion among eight by observing a single word sample is a difficult exercise, but the results are encouraging.
In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that a classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.
Interactive displays are increasingly embedded into the architecture we inhabit. We designed Doodle Daydream, an LED grid display with a mobile interface, which acts similarly to a shared sketch pad. Users contribute their own custom drawings that immediately play back on the display, fostering moment-to-moment playful interactions. This project builds on related work by designing a collaborative display to support calming yet playful interactions in an office setting.
This paper explores the possibilities of Virtual Reality (VR) as a tool for iterative design and development prototyping in the fashion industry. The resulting system was evaluated using two qualitative test protocols. Our results highlight how professional (fashion) designers view VR and what their expectations are.
We present results of a preliminary study on our planned system for the detection of obstacles in the physical environment (PE) by means of an RGB-D sensor and their unobtrusive signalling using metaphors within the virtual environment (VE).
We describe a flying broom application inspired by the Harry Potter world that we developed for the Destiny-class CyberCANOE - a surround screen hybrid reality environment (HRE). This application uses a broom shaped tangible controller that allows the player to use their body to steer. Our intention with this work is to encourage users to fully engage with the 320° surround view the Destiny environment offers.
In this poster, we present a novel three-dimensional (3D) interaction technique, Altered Muscle Mapping (AMM), that re-maps muscle movements of the hands/arms to the fingers/wrists. We implemented an initial design of AMM as a 3D selection technique in which finger movements translate a virtual cursor (in three degrees of freedom) for selection. This may preserve the performance benefits of direct manipulation while reducing physical fatigue. We designed an initial set of mapping variations, and results from an initial pilot study provide first performance insights into these mapping configurations. AMM has potential for direct hand interaction in virtual and augmented reality, and for users with a limited range of motion.
The popular concepts of Virtual Reality (VR) and Augmented Reality (AR) arose from our ability to interact with objects and environments that appear to be real, but are not. One of the most powerful aspects of these paradigms is the ability of virtual entities to embody a richness of behavior and appearance that we perceive as compatible with reality, and yet unconstrained by reality. The freedom to be or do almost anything helps to reinforce the notion that such virtual entities are inherently distinct from the real world---as if they were magical. This independent magical status is reinforced by the typical need for the use of "magic glasses" (head-worn displays) and "magic wands" (spatial interaction devices) that are ceremoniously bestowed on a chosen few. For those individuals, the experience is inherently egocentric in nature---the sights and sounds effectively emanate from the magic glasses, not the real world, and unlike the magic we are accustomed to from cinema, the virtual entities are unable to affect the real world.
This separation of real and virtual is also inherent in our related conceptual frameworks, such as Milgram's Virtuality Continuum, where the real and virtual are explicitly distinguished and mixed. While these frameworks are indeed conceptual, we often feel the need to position our systems and research somewhere in the continuum, further reinforcing the notion that real and virtual are distinct. The very structures of our professional societies, our research communities, our journals, and our conferences tend to solidify the evolutionary separation of the virtual from the real.
However, independent forces are emerging that could reshape our notions of what is real and virtual, and transform our sense of what it means to interact with technology. First, even within the VR/AR communities, as the appearance and behavioral realism of virtual entities improves, virtual experiences will become more real. Second, as domains such as artificial intelligence, robotics, and the Internet of Things (IoT) mature and permeate throughout our lives, experiences with real things will become more virtual. The convergence of these various domains has the potential to transform the egocentric magical nature of VR/AR into more pervasive allocentric magical experiences and interfaces that interact with and can affect the real world. This transformation will blur traditional technological boundaries such that experiences will no longer be distinguished as real or virtual, and our sense for what is natural will evolve to include what we once remember as cinematic magic.