The ability to select or customize characters in educational applications and games has been shown to influence factors related to learning, such as transfer, self-efficacy, and motivation. Most previous conclusions about the perception of virtual characters and the effect of character assignment in interactive applications have been reached through short, single-task experiments. To investigate the longer-term effects of assigning versus customizing characters, and to explore perceptions of personal character appearance, we conduct a study in which sixth- and seventh-grade students are introduced to programming concepts with the software VEnvI (Virtual Environment Interactions) in seven one-hour sessions over two weeks. In VEnvI, students create performances for virtual characters by assembling blocks. Using a between-subjects design, in which some students can alter their character while others are not given that possibility, we examine how the presence or absence of character choice affects learning.
We hypothesize that students achieve higher learning outcomes when they can choose and customize their character's appearance than when they are assigned a character. We confirm this hypothesis for one category of learning (Remember and Understand) and give insights into students' relationships with their characters.
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person's movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight while they pushed, lifted, and threw objects. Based on a set of 67 markers, we estimated both the kinematics of the actions and each performer's individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. In a virtual reality environment, observers rated the perceived weight or thrown distance of the objects. They were also asked to explicitly discriminate between consistent and hybrid stimuli. Observers were unable to make this discrimination, but hybridization of shape and motion influenced their judgements of action outcome in systematic ways. Inconsistencies between shape and motion were assimilated into an altered perception of the action outcome.
In recent years, much research and media attention has been devoted to virtual reality environments. In this paper, we investigate whether there are differences in how characters are perceived in immersive virtual reality as opposed to more common, screen-based environments. We were particularly interested in whether the spatial and immersive components play an important part in the perception of interactive, game-like settings, where characters can either be controlled (avatars) or observed (agents). We focus on subjective reports of perceived realism, affinity, co-presence, and agency. Since the appearance of a character is an important component of affinity, we introduced changes in render style across three levels of realism to test whether appearance further influences perception in relation to the control condition and platform. Furthermore, we adapted a behavioural method (a proximity task) as a novel approach to establishing whether behavioural changes could be recorded under the introduced conditions, and compared those values with the participants' subjective reports. The conclusions are of value for character design specific to platform and character control.
Immersive displays allow the presentation of rich video content over a wide field of view. We present a method to boost the visual importance of a selected, possibly invisible, scene part in a cluttered virtual environment. This desirable feature makes it possible to unobtrusively guide a user's gaze to any location within the immersive 360° surrounding. Our method is based on subtle gaze direction, which in previous work did not account for head rotations. To cover the full 360° environment and wide field of view, we contribute an approach for dynamic stimulus positioning and shape variation based on eccentricity, compensating for visibility differences across the visual field. Our approach is calibrated in a perceptual study using a head-mounted display with binocular eye tracking. An additional study validates the method within an immersive visual search task.
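To make the eccentricity-based variation concrete, the sketch below scales a stimulus with retinal eccentricity so that it stays roughly equally detectable across the visual field; the linear scaling rule and all parameter values are illustrative assumptions, not the calibrated values from the perceptual study.

```python
# Hypothetical eccentricity-dependent scaling for a gaze-guiding stimulus.
# Stimulus size grows with eccentricity to compensate for reduced peripheral
# acuity, loosely following inverse cortical magnification: size ~ (e + E2).
E2 = 2.0          # assumed "doubling" eccentricity in degrees (illustrative)
BASE_SIZE = 0.5   # assumed foveal stimulus diameter in degrees (illustrative)

def stimulus_size(eccentricity_deg):
    """Return a stimulus diameter (in degrees) intended to keep the
    stimulus roughly equally detectable across the visual field."""
    return BASE_SIZE * (eccentricity_deg + E2) / E2

for ecc in (0.0, 10.0, 30.0, 60.0):
    print(f"{ecc:5.1f} deg eccentricity -> {stimulus_size(ecc):.2f} deg diameter")
```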
Visual attention studies in computer vision have focused on developing computational attention systems that detect salient regions in images for adults. Consequently, age differences in scene-viewing behavior have rarely been considered. This study quantitatively analyzed age-related differences in gaze landings during scene viewing for three age groups: children, adults, and the elderly. An interesting observation from our analysis is that whereas child observers focus more on the scene foreground, i.e., locations that are near, elderly observers tend to explore the scene background, i.e., locations farther away in the scene. Building on this result, we propose a framework to quantitatively measure the depth-bias tendency across age groups. Further, we quantify the impact of age on exploratory behavior, central-bias tendency, and agreement between explored regions within and across the age groups. Experimental results show that children exhibit the lowest level of exploratory behavior but the highest central-bias tendency among the age groups. Further, agreement scores reveal that adults had the least agreement with each other in explored regions. We then leveraged these results to develop a more accurate age-adapted saliency model that outperforms existing saliency models that do not consider age.
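One way such a depth-bias score could be computed, assuming a per-scene depth map and recorded fixation coordinates, is sketched below; the function and toy data are illustrative and are not the paper's framework.

```python
import numpy as np

def depth_bias(fixations, depth_map):
    """Illustrative depth-bias score: mean normalized depth at fixated
    pixels minus the scene's mean normalized depth. Positive values
    indicate a background (far) bias, negative values a foreground
    (near) bias. `fixations` is an iterable of (row, col) coordinates."""
    d = (depth_map - depth_map.min()) / (np.ptp(depth_map) + 1e-9)
    fixated = np.array([d[r, c] for r, c in fixations])
    return fixated.mean() - d.mean()

# Toy example: a left-to-right depth ramp with fixations on the far side.
depth = np.tile(np.linspace(0.0, 10.0, 100), (100, 1))
fixations = [(50, 90), (40, 85), (60, 95)]
print(depth_bias(fixations, depth))  # positive, i.e., a background bias
```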
This paper investigates the influence of stereoscopic vs. non-stereoscopic display in large-screen virtual environments on an everyday perception-action task: crossing traffic-filled roadways as a pedestrian. The task for participants was to physically cross a virtual road with continuous traffic, without getting hit by a car, in a CAVE-like virtual environment. Half of the participants performed the task with stereoscopic display and half with non-stereoscopic display. We found that stereoscopic display had little impact on the size of the gaps participants crossed or on the timing of their crossing motion relative to the gap, with the exception of a small difference in crossing speed. These results are important for validating the use of non-stereoscopic image displays in ground vehicle simulation and for supporting the use of non-stereoscopic displays for multi-viewpoint rendering in co-occupied virtual environments.
Locomotion through large virtual environments is currently unsupported in smartphone-powered virtual reality headsets, particularly within the confines of limited physical space. While motion controllers are a workaround for this issue, they exhibit known problems: they occupy the subject's hands, and they lead to poor navigation performance. In this paper, we investigate three hands-free methods for navigating large virtual environments. The first method is resetting, a reorientation technique that provides both translational and rotational body-based cues. The other two methods are walking-in-place techniques that use only rotational cues. In the first walking-in-place technique, we use the inertial measurement unit (IMU) of the smartphone embedded in a Samsung Gear VR to detect when subjects are stepping. The second technique uses the Kinect's skeletal tracking for step detection. We measure the survey component of spatial knowledge to assess the three navigation conditions. Our metrics examine how well subjects gather and retain information from their environment, as well as how well they integrate it into a single model. We find that resetting leads to the strongest acquisition of survey knowledge, which we believe is due to the vestibular cues this method provides.
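For illustration, a minimal peak-based step detector over raw accelerometer samples might look like the sketch below; the threshold, sampling rate, and refractory period are assumed values, not the detector actually used with the Gear VR's IMU.

```python
import numpy as np

def detect_steps(accel, fs=50.0, threshold=1.5, refractory_s=0.3):
    """Count a step at each acceleration-magnitude peak above `threshold`
    (in g), enforcing a refractory period to suppress double counts.
    `accel` is an (N, 3) array of accelerometer samples in g."""
    mag = np.linalg.norm(accel, axis=1)
    refractory = int(refractory_s * fs)
    steps, last = [], -refractory
    for i in range(1, len(mag) - 1):
        if (mag[i] > threshold and mag[i] >= mag[i - 1]
                and mag[i] > mag[i + 1] and i - last >= refractory):
            steps.append(i)
            last = i
    return steps

# Toy signal: 1 g gravity baseline with two spikes roughly 0.5 s apart.
t = np.arange(0, 2, 1 / 50.0)
a = np.ones((len(t), 3)) * [0.0, 0.0, 1.0]
a[25, 2] = a[50, 2] = 2.0
print(detect_steps(a))  # two detected steps
```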
Immersive virtual reality (VR) technology has the potential to play an important role in the conceptual design process in architecture, if we can ensure that sketch-like structures can afford an accurate egocentric appreciation of the scale of the interior space of a preliminary building model. Historically, it has been found that people tend to perceive egocentric distances in head-mounted display (HMD)-based virtual environments as shorter than equivalent distances in the real world. Previous research has shown that in such cases, reducing the quality of the computer graphics does not make the situation significantly worse. However, other research has found that breaking the illusion of reality in a compellingly photorealistic VR experience can have a significant negative impact on distance perception accuracy.
In this paper, we investigate the impact of "graphical realism" on distance perception accuracy in VR from a novel perspective. Rather than starting with a virtual 3D model and varying its surface texture, we start with a live view of the real world, presented through a custom-designed video/optical-see-through HMD, and apply image processing to the video stream to remove details. This approach offers the potential to explore the relationship between visual and experiential realism in a more nuanced manner than has previously been done. In a within-subjects experiment across three different real-world hallway environments, we asked people to perform blind walking to make distance estimates under three different viewing conditions: a real-world view through the HMD; closely registered camera views presented via the HMD; and Sobel-filtered versions of the camera views, resulting in a sketch-like, non-photorealistic (NPR) appearance. We found: 1) significant amounts of distance underestimation in all three conditions, most likely due to the heavy backpack computer that participants wore to power the HMD and cameras/graphics; 2) a small but statistically significant difference in the amount of underestimation between the real-world and camera/NPR viewing conditions, but no significant difference between the camera and NPR conditions. There was no significant difference between participants' ratings of visual and experiential realism in the real-world and camera conditions, but in the NPR condition participants' ratings of experiential realism were significantly higher than their ratings of visual realism. These results confirm the notion that experiential realism is only partially dependent on visual realism, and that degradation of visual realism, independent of experiential realism, does not significantly impact distance perception accuracy in VR.
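A minimal sketch of the kind of Sobel-based NPR filtering described, written here with OpenCV, follows; the blur kernel, gradient parameters, and inversion to dark-on-white lines are our own illustrative choices rather than the study's exact filter chain.

```python
import cv2
import numpy as np

def sobel_npr(frame_bgr):
    """Turn a camera frame into a sketch-like (NPR) image by keeping
    only the Sobel edge magnitude (illustrative parameters)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    mag = cv2.magnitude(gx, gy)
    mag = np.clip(mag / (mag.max() + 1e-6) * 255, 0, 255).astype(np.uint8)
    return 255 - mag  # dark lines on white, like a pencil sketch
```

In a setup like the one described, such a filter would run per frame on the live camera stream before display in the HMD.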
Complexity is a key factor influencing the aesthetic judgment of artworks. Using the paintings of the well-known artist Wu Guanzhong as examples, we provide quantified methods to gauge three visual attributes that influence the complexity of paintings: color richness, stroke thickness, and white space. Through regression analysis, our research validates the influence of these visual attributes on perceived complexity and distinguishes the complexity measurements for abstract and representational paintings. Specifically, all three attributes influence the complexity of abstract paintings; in contrast, only white space influences that of representational paintings.
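As an illustration of the kind of regression involved, the sketch below fits perceived complexity on the three attributes with ordinary least squares; the data are synthetic stand-ins, and the coefficients bear no relation to the paper's findings.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the three measured attributes and the
# perceived-complexity ratings (hypothetical data, illustrative only).
rng = np.random.default_rng(0)
n = 60
color_richness = rng.uniform(0, 1, n)
stroke_thickness = rng.uniform(0, 1, n)
white_space = rng.uniform(0, 1, n)
complexity = (0.5 * color_richness + 0.2 * stroke_thickness
              - 0.3 * white_space + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack(
    [color_richness, stroke_thickness, white_space]))
model = sm.OLS(complexity, X).fit()
print(model.params)  # fitted intercept and per-attribute weights
```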
When rendering complex scenes with path-tracing methods, long processing times are required to compute a sufficient number of samples for high-quality results. In this paper, we propose a new method for priority sampling in path tracing that exploits limitations of the human visual system by recognizing whether an error is perceivable. We use the stationary wavelet transform to efficiently calculate noise contrasts in the image based on the standard error of the mean. We then use the contrast sensitivity function (CSF) and contrast masking of the human visual system to detect whether an error is perceivable at any given pixel of the output image. Errors that cannot be detected by a human observer are ignored in further sampling steps, reducing the number of samples computed while producing the same perceived quality. This approach leads to a drastic reduction in the total number of samples required, and therefore in total rendering time.
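A highly simplified sketch of the core idea follows, assuming per-pixel running sums of the samples and a constant visibility threshold in place of the full CSF and contrast-masking model; all names and parameters are ours, not the paper's.

```python
import numpy as np
import pywt

def perceivable_error_mask(sample_sum, sample_sq_sum, n, threshold=0.01):
    """Estimate per-pixel noise from the standard error of the mean (SEM),
    localize it with a one-level stationary wavelet transform, and flag
    pixels whose noise contrast exceeds a constant visibility threshold.
    Image dimensions must be even for `pywt.swt2` at level 1."""
    mean = sample_sum / n
    var = np.maximum(sample_sq_sum / n - mean ** 2, 0.0)
    sem = np.sqrt(var / n)
    (_, (ch, cv, cd)), = pywt.swt2(sem, "haar", level=1)
    band_energy = np.sqrt(ch ** 2 + cv ** 2 + cd ** 2)
    noise_contrast = band_energy / (np.abs(mean) + 1e-6)
    return noise_contrast > threshold  # True where more samples are needed
```

Pixels where the mask is False could then be excluded from further sampling, which is the mechanism the abstract describes.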
A large variation in the haptic just noticeable difference (JND) for stiffness is reported in the literature, but no underlying model that explains this variation has been found, limiting the practical use of the stiffness JND in the evaluation of control loading systems (CLS). To this end, we investigated the cause of this variation in terms of the human strategy for stiffness discrimination, using two experiments in which a configurable manipulator generated an elastic force proportional to its angular displacement (deflection). In the first experiment, the stiffness JND was measured for three stiffness levels, and an invariant Weber fraction was obtained. We found that for stiffness discrimination, subjects reproduced the same amount of manipulator deflection and used the difference in the terminal forces as the indication of the stiffness difference. We demonstrated that the stiffness Weber fraction and the force Weber fraction could be related by a systematic bias in the deflection reproduction, which was caused by the difference in manipulator stiffness. A second experiment with two conditions was conducted to verify this model. In one condition, we measured the stiffness JND while asking subjects to move the manipulator to a target angular displacement. This eliminated the bias in the deflection reproduction and resulted in a stiffness Weber fraction that equaled the force Weber fraction. In the other condition, the stiffness JND was measured without the deflection target, and a bias in deflection reproduction was again observed. This bias related the measurements for the two conditions through the formulation obtained from the first experiment. This suggests that the accuracy of reproducing the manipulator position for stiffness discrimination, which may be susceptible to the experimental setting, can explain the variation of stiffness JND in the literature. Suggestions are given for CLS evaluation and for applications requiring precise manipulator motion control.
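To see, at first order, why a deflection-reproduction bias can link the two Weber fractions (an illustrative derivation, not the paper's exact formulation): with an elastic force $F = K\theta$, relative changes satisfy

$$\frac{\Delta F}{F} = \frac{\Delta K}{K} + \frac{\Delta \theta}{\theta},$$

so if subjects reproduce the same deflection for both stimuli ($\Delta\theta = 0$), the measured stiffness Weber fraction $\Delta K / K$ equals the force Weber fraction $\Delta F / F$; a stiffness-dependent bias in the reproduced deflection ($\Delta\theta \neq 0$) shifts the measured stiffness JND away from the force JND, consistent with the two conditions of the second experiment.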
When users perform multiple competing tasks at the same time, e.g., when driving, assistant systems can provide cues to direct attention towards required information. However, poorly designed cues interrupt or annoy users and degrade their performance. We therefore aim to identify cues that are not missed and that trigger a quick reaction without changing primary-task performance. We conducted a dual-task experiment in an anechoic chamber with LED-based stimuli that either faded in or turned on abruptly and were placed in the periphery or in front of the subject. Additionally, a white-noise sound was triggered in a third of the trials. The primary task was to react to visual stimuli on a screen in front of the subject. We observed significant effects on response times in the screen task when sound was added. Furthermore, participants responded faster to LED stimuli that faded in.
Understanding 3D shapes through cross-sections is a mental task that appears in both 3D volume segmentation and solid modeling. As with other shape-understanding tasks, such as paper folding, performance on this task varies across the population and can be improved through training and practice. Our long-term goal is to create training tools for 3D volume segmentation. To this end, we have modified (and evaluated) an existing cross-section performance measure in the context of our intended application. Our primary adaptations were 1) to use 3D stimuli (instead of 2D) to more accurately capture the real-world application and 2) to evaluate performance on 3D biological shapes relative to the 3D geometric shapes used in the previous study. Our findings are: 1) participants showed the same pattern of errors as in the original study, but their overall performance improved when they could see the objects rotating in 3D; 2) inferring cross-sections of biological shapes is more challenging than inferring those of purely geometric shapes.