In this paper, we describe AiRound, an optical system that displays mid-air images viewable from any direction. Mid-air images are touchable floating images formed by retroreflective transmissive optical elements, seamlessly connecting the virtual world to real space without special equipment. However, they suffer from three problems: a limited range of observation, the visibility of the light source from outside, and aesthetically displeasing stray light. The proposed system combines view control films and micromirror array plates, rotating these components at high speed to form a mid-air image that can be observed from 360 degrees.
This demo proposes AirPolygon, a soft, bendable, and transparent pneumatic control film. AirPolygon distinguishes itself with its ease of fabrication, high transparency for direct LCD attachment, and lightweight, deflatable design for portability. We designed the system to prevent air leakage while capitalizing on each layer’s material properties. Demonstrated applications include a 3D haptic LCD monitor film, a lightweight game controller, a tactile video conferencing system, a tactile pulse meter, and a transparent switch for glass surfaces, illustrating AirPolygon’s broad utility.
In perspective drawing, designers express 3D shapes by drawing auxiliary lines that construct surfaces and drawing design curves on them. However, drawing auxiliary lines can be challenging, and too many of them can make the drawing difficult to understand. To address these issues, we present a novel 3D sketching system that allows the user to quickly and easily create instant auxiliary lines and instant sketch surfaces for drawing desired 3D curves with fluent bimanual touch and pen interactions. We will produce concept sketches using our system to showcase its potential usefulness.
Designing non-humanoid avatars, whose body structures differ from humans', so that humans can intuitively manipulate them requires designing both the body structure and the motion mapping, which is complex, time-consuming, and demands special design skills. To address this problem, we propose AvatarForge, which allows users to design non-humanoid avatars by editing body structure and motion mapping in real time. In this system, users interact with a node-based interface in a virtual environment to edit their own bodies while visualizing the changes. This system aims to reduce the difficulty of designing non-humanoid avatars, expedite the prototyping of new bodies, and enable personalized avatars that match individual preferences and physical abilities. It contributes to the advancement of non-humanoid body utilization in fields such as CG animation, inclusive design, and human augmentation.
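To make the motion-mapping idea concrete, here is a minimal sketch of the kind of mapping such a node graph might edit; the MappingNode structure, all joint names, and the wing example are hypothetical illustrations, not AvatarForge's implementation.

```python
class MappingNode:
    """One node in a motion-mapping graph: routes and transforms a tracked signal."""
    def __init__(self, source, transform):
        self.source = source        # name of a tracked human input channel
        self.transform = transform  # callable applied to the tracked value

def apply_mapping(graph, tracked):
    """Evaluate every node against one frame of tracking data.

    graph:   {avatar_joint: MappingNode}
    tracked: {human_channel: value} from the VR tracking system
    """
    return {joint: node.transform(tracked[node.source])
            for joint, node in graph.items()}

# Example: drive a four-winged avatar from two arms, with the arm swing
# mirrored and amplified onto the rear wings. Editing the graph (adding
# nodes, changing sources or transforms) re-maps the body in real time.
graph = {
    "front_left_wing":  MappingNode("left_arm_pitch",  lambda v: v),
    "front_right_wing": MappingNode("right_arm_pitch", lambda v: v),
    "rear_left_wing":   MappingNode("left_arm_pitch",  lambda v: -1.5 * v),
    "rear_right_wing":  MappingNode("right_arm_pitch", lambda v: -1.5 * v),
}
pose = apply_mapping(graph, {"left_arm_pitch": 0.2, "right_arm_pitch": -0.1})
print(pose)
```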
We present novel web applications that blend AI and video communication, offering an interactive way to engage with videos, transcending time and distance constraints. Known as Time Offset Interaction Application (TOIA), the platform transforms passive video viewing into dynamic conversations using personal recordings or YouTube clips. Our work expands TOIA with two unique experiences: 1) "The Elephant in the Room" project, stimulating dialogues on sensitive topics like childbirth, sex, and death; 2) Support for multilingual interactions, fostering a global collection of diverse personal narratives. Through AI and interactive media, we aim to broaden the horizons for sharing human experiences and perspectives.
The presentation of brightness is essential in virtual reality. In this paper, we propose a system that can induce the sensation of dazzle by presenting a realistic afterimage and a pseudo dazzle reflex sensation with visuo-haptic feedback. Our system is expected to present realistic environments and emotional performance settings.
Some studies have induced the sense of being in two locations by inducing body ownership over two bodies. However, the resulting feeling of being in two locations was weak. In our previous study, we induced body ownership over a split avatar in advance to induce the sensation of being at two locations. As a result, although participants' self-location extended to the right, the split avatar was perceived as a single body and they did not feel as if they were in two places. Our demonstration expands our previous work by allowing participants to dynamically manipulate the position of the split body. We attempt to present an experience in which the split body is perceived as one body even though it is split, or in which the participants feel as if they are in two locations as two independent bodies. We also changed the tracking system from OptiTrack to Lighthouse.
We demonstrate a mid-air thermo-tactile feedback system based on an ultrasound haptic display. Our proof-of-concept system consists of an open-top chamber, heat modules, and an ultrasound display. It directs heated airflow toward the focused pressure point produced by the ultrasound display, delivering thermal and tactile cues in mid-air simultaneously. We present the system with three virtual environments (CampFire, Water Fountain, and Kitchen) to showcase the rich user experiences enabled by integrating thermal and tactile feedback.
ForceField measures contact forces across any surface within a room-scale environment without attaching additional devices to bodies or objects. It acquires a 3D model through a depth sensor and ground pressure through a floor sensor, then reconstructs contact forces between unsensed surfaces, such as a hand pushing a desk or people pushing against each other, from the force and geometry information. Through this unobtrusive environmental measurement approach, ForceField opens up the possibility of highly physical spatial interactions in domains such as healthcare, sports, and entertainment.
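As a rough illustration of the reconstruction idea in its simplest single-contact, quasi-static case: the floor sensor's reading plus gravity reveals the hidden contact force, while the depth-sensed geometry tells where it is applied. The sketch below is ours, not ForceField's pipeline; names, frames, and numbers are illustrative.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z-up world frame

def reconstruct_contact_force(ground_reaction_n, body_mass_kg):
    """Reconstruct the force a user exerts on an unsensed surface.

    Quasi-static balance: floor reaction + gravity + reaction from the
    other contact = 0, so the residual of the first two equals the force
    the body applies to that surface. Where it is applied comes from the
    depth-sensed geometry (e.g. the hand-desk contact point).
    """
    weight = body_mass_kg * GRAVITY
    return ground_reaction_n + weight  # force the user exerts on the surface

# Example: a 60 kg user pressing down on a desk unloads the floor sensor.
push = reconstruct_contact_force(np.array([5.0, 0.0, 520.0]), 60.0)
print(push)  # ~[5, 0, -68.6] N: a downward, slightly forward push of ~69 N
```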
We propose a non-contact method to present the tactile sensation of soft fur texture using ultrasound haptic feedback and pseudo-haptics. By responsively adjusting haptic and visual feedback according to user interaction, our approach effectively simulates a realistic fur stroking experience.
The hanger reflex is a phenomenon in which a strong sense of rotational force is perceived when one wears a wire hanger on the head. It is caused by the lateral skin stretch generated by the hanger's pressure on the head, and similar phenomena have been observed at the wrist, ankle, knee, and elbow. To date, however, the hanger reflex has been applied to only one body part at a time, not to multiple parts simultaneously. In this study, we propose the HangerBody system, which applies the hanger reflex to multiple parts of the body simultaneously in order to expand its range of applications. In this paper, we introduce the system configuration and examples of applying the system to VR games, sports training, rehabilitation, and remote assistance.
Racket sports in virtual spaces are rapidly gaining popularity. However, approximating the real-world experience, particularly the moment of hitting the ball, requires tactile feedback. Current feedback is delivered primarily through vibration and audio, and providing rich tactile feedback typically requires large, heavy devices. This research proposes a method for more realistic tactile feedback that presents force sensations through fingertip deformation using a small device. This novel device has the potential to enable immersive VR racket-sports experiences and to aid training for racket sports.
Virtual reality (VR) and augmented reality (AR) with eye tracking and hand tracking are widely used in entertainment, gaming, design, and training. However, most VR and AR interaction methods are limited in their interactable range and do not fully support direct manipulation of VR and AR objects. This paper proposes a novel technique called Hitchhiking Hands, which allows the user to switch among multiple hand avatars by gazing at them, enabling natural and direct interaction with VR/AR objects ranging from nearby to remote.
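One plausible way to realize gaze-triggered switching is dwell-time selection; the sketch below assumes that design (the abstract only says avatars are switched by gazing), and all names and the 0.6 s threshold are hypothetical.

```python
import time

DWELL_SECONDS = 0.6  # hypothetical dwell time before a gazed avatar is adopted

class GazeAvatarSwitcher:
    """Switch the active hand avatar once the user's gaze dwells on another."""

    def __init__(self, avatars):
        self.avatars = avatars        # hand avatars placed from near to far
        self.active = avatars[0]
        self._candidate = None
        self._dwell_start = 0.0

    def update(self, gazed_avatar):
        """Call once per frame with the avatar hit by the gaze ray (or None)."""
        if gazed_avatar is None or gazed_avatar is self.active:
            self._candidate = None            # gaze left all inactive avatars
        elif gazed_avatar is not self._candidate:
            self._candidate = gazed_avatar    # start timing a new candidate
            self._dwell_start = time.monotonic()
        elif time.monotonic() - self._dwell_start >= DWELL_SECONDS:
            self.active = gazed_avatar        # control "hitchhikes" over
            self._candidate = None
        return self.active                    # route hand tracking to this avatar
```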
We introduce a motor-skill-transfer technology using electrical muscle stimulation (EMS) for acquiring piano playing skills. While expert pianists use the coordination of multiple muscles, such as fingers and arms, novices are less aware of muscle coordination and tend to only move their fingers. Our EMS-based system encourages them to use their arms as well as their fingers. Based on the analysis of experts’ muscle coordination, our system applies EMS to the novices’ forearms and shoulders. With this system, novices should be able to improve their motor skills, such as playing octave tremolos by using wrist rotation to reduce fatigue and playing C major scales more smoothly by coordinating forearm and shoulder muscles to execute the thumb-under technique.
Our goal has been to create an environment in which anyone can take advantage of tactile technology. Content that incorporates tactile sensations in addition to visual and auditory ones invigorates people's interaction with it. However, creating high-quality content requires complicated sensing and display setups, and ease of use is lost. In this paper, we propose a platform for easily developing multi-channel vibrotactile content by utilizing existing workflows. With a 4-channel fingertip-mounted vibrotactile device as the display, the platform realizes a high-resolution experience with little burden on the user. We demonstrate sensor-based pre-recorded and real-time multi-channel tactile content and show the concept of near-future tactile technology, including use cases.
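If the actuators are driven as ordinary audio channels, which is one way to reuse existing audio workflows, authoring a 4-channel tactile clip looks like authoring sound. The sketch below assumes the device enumerates as a 4-channel audio interface and uses the third-party sounddevice library; none of this is confirmed by the abstract.

```python
import numpy as np
import sounddevice as sd  # assumes the 4 actuators appear as one audio device

RATE = 48000

def render_tap(channel, freq_hz=200.0, length_s=0.05):
    """Author a short vibrotactile 'tap' on one of four fingertip channels,
    exactly as one would author a short clip in an audio workflow."""
    t = np.arange(int(length_s * RATE)) / RATE
    burst = np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)  # smooth burst
    clip = np.zeros((t.size, 4), dtype=np.float32)
    clip[:, channel] = burst
    return clip

# Sweep a tap across the four fingertip actuators, one after another.
for ch in range(4):
    sd.play(render_tap(ch), RATE, blocking=True)
```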
"Phantom Walls" is a novel technique that establishes continuous spatial perception without vision. By creating an auditory environment where obstacles emit sounds, users can perceive and navigate around visually imperceptible "Phantom" obstacles by listening to the generated "soundscapes". This method allows individuals to perceive and avoid these obstacles while walking without vision.
“Synced Drift” provides a new inclusive sport experience that allows two individuals to play a drifting race together while sharing a single body. Synced Drift leverages a system consisting of a large-diameter omni-wheel mechanism enabling omnidirectional movement, two sensing seats that detect the user's center-of-gravity shift, a mobile vehicle, and a viewpoint-sharing system. The vehicle is driven by proportional control of the body movements acquired by the two sensing seats, one placed on the vehicle and one remote; its speed and direction are adjusted by harmonizing the inputs from the two users. This system allows people who were previously unable to participate in sports due to physical limitations, for example people with spinal cord injuries or those who have had a stroke, to participate equally, regardless of individual impairments. On top of this system, we designed and implemented the rules of the sport. In this demonstration, the system explores the possibility of people coming together, racing, and competing as one, transcending physical characteristics and physical locations.
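A minimal sketch of what the proportional, harmonized control might look like: average the two seats' center-of-gravity shifts into one velocity command, so agreement speeds the vehicle up and opposition stalls it. The gain, blending weight, and speed limit are illustrative assumptions, not the demo's tuned values.

```python
import numpy as np

GAIN = 3.0       # proportional gain: CoG shift (m) -> speed (m/s); illustrative
MAX_SPEED = 1.5  # hypothetical vehicle speed limit (m/s)

def blend_drive_command(cog_vehicle, cog_remote, weight=0.5):
    """Blend two riders' center-of-gravity shifts into one drive command.

    Each input is a 2D (forward, lateral) CoG displacement from a sensing
    seat. The command is proportional to their weighted average, so the
    vehicle is fastest when both riders lean the same way and stalls when
    they oppose each other.
    """
    combined = weight * np.asarray(cog_vehicle) + (1.0 - weight) * np.asarray(cog_remote)
    velocity = GAIN * combined
    speed = float(np.linalg.norm(velocity))
    if speed > MAX_SPEED:
        velocity *= MAX_SPEED / speed  # clamp to the vehicle's speed limit
    return velocity  # (forward, lateral) command for the omni-wheel drive

# Both riders lean forward, slightly disagreeing on direction.
print(blend_drive_command([0.10, 0.02], [0.08, -0.03]))
```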
We present “TableMorph,” a novel method that combines redirection techniques with encountered-type haptic displays. In this method, concave- and convex-shaped tables carried by mobile robots move through the real environment according to the relative positions of the user and the virtual tables in the virtual environment. By using redirection techniques that visually alter, through the head-mounted display, the position and orientation of the user's virtual hand and the shape of the virtual tables relative to their real counterparts, the system can make users feel, both visually and haptically, that a table has a shape very different from the real one. In the demonstration at SIGGRAPH Asia 2023 E-Tech, participants can try virtual mazes as an application of the proposed method. As users move around a virtual maze partitioned by tables, the arrangement of the tables in the real environment changes, allowing the presentation of various mazes four times larger than the physical space.
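Hand redirection of this kind is commonly implemented as body warping: an offset between the virtual and real contact points is blended in as the real hand approaches the real surface, so virtual and real contact coincide. The sketch below shows that general technique with illustrative parameters; it is not TableMorph's exact formulation.

```python
import numpy as np

def redirect_hand(real_hand, real_contact, virtual_contact, warp_radius=0.3):
    """Warp the rendered hand so virtual and real table contact coincide.

    The offset between the virtual and the real contact point is blended
    in as the real hand approaches the real surface (full offset at the
    surface, none beyond warp_radius), keeping the redirection subtle.
    """
    offset = np.asarray(virtual_contact) - np.asarray(real_contact)
    dist = np.linalg.norm(np.asarray(real_hand) - np.asarray(real_contact))
    alpha = float(np.clip(1.0 - dist / warp_radius, 0.0, 1.0))
    return np.asarray(real_hand) + alpha * offset

# Example: the real table edge is straight, but the virtual maze wall is
# 5 cm further away; the rendered hand drifts toward it during the reach.
print(redirect_hand([0.0, 0.1, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.05]))
```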
We introduce the Zoetop, an innovative variation of the zoetrope that opens up new opportunities for design. At the core of the Zoetop is a small, inexpensive electronic device that can be embedded in any naturally spinning object, such as hand-spun tops, car wheels, bouncing balls, or windmills. The device uses an inertial sensor to track its own instantaneous axis of rotation and angular speed. It then produces a constant number of flashes per revolution, regardless of speed variations, corresponding to the number of frames in the animation. As a consequence, animations can be created without strobe-light synchronization, delicate mechanical setups including driving motors, or any form of calibration. As such, this “kinaesthetic-aware” computational zoetrope offers new opportunities for deployment in the wild, augmenting hand-spun toys, wind-powered chimes, propellers, or fans with animated graphics. We believe the Zoetop can inspire designers and the DIY community to come up with new ideas centered on this age-old yet still fascinating Victorian-era optical gadget.
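The control loop reduces to integrating the gyro's angular speed into an angle and strobing every 1/N revolution. A minimal sketch, assuming hypothetical read_gyro_rad_s() and fire_flash() hardware hooks and a 12-frame animation:

```python
import math
import time

FRAMES_PER_REV = 12  # hypothetical frame count of the printed animation

def flash_loop(read_gyro_rad_s, fire_flash):
    """Fire FRAMES_PER_REV equally spaced flashes per revolution,
    whatever the spin speed, using only the onboard gyroscope.

    read_gyro_rad_s() -> angular speed about the spin axis (rad/s)
    fire_flash()      -> trigger one LED strobe pulse
    """
    step = 2.0 * math.pi / FRAMES_PER_REV  # angle between flashes (rad)
    phase = 0.0                            # rotation since the last flash
    last = time.monotonic()
    while True:
        now = time.monotonic()
        phase += read_gyro_rad_s() * (now - last)  # integrate gyro to angle
        last = now
        if phase >= step:
            fire_flash()   # show the next frame of the animation
            phase -= step  # keep the remainder to avoid drift
```

Because the trigger is angular rather than temporal, the animation stays frame-locked as the top spins down, with no calibration or external strobe synchronization.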
For Deaf people to use natural language interfaces, technologies must understand sign language input and respond in the same language. We developed a prototype smart home assistant that utilises gesture-based controls, improving accessibility and convenience for Deaf users. The prototype features Zelda, an interactive signing avatar that provides responses in Auslan (Australian Sign Language), enabling effective two-way communication. Our live demonstration includes gesture recognition and sign production.
Although various haptic devices have been proposed, most are limited to laboratory or indoor use because they are wired to external equipment for power supply and control. This can be a critical issue, particularly in freely moving situations. In this study, we propose a self-contained haptic device called Waylet for park-scale interactions. Waylet provides translational and rotational pseudo-forces via asymmetric vibrations. To show the feasibility of our concept, we demonstrate an intuitive haptic navigation system with haptic rendering in a park-scale mixed-reality environment.
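Asymmetric vibration for pseudo-force typically pairs a brief strong pulse in one direction with a long weak return, so the signal averages to zero while the skin perceives a net directional pull. The waveform generator below sketches that general technique with illustrative parameters; it is not Waylet's actual drive signal.

```python
import numpy as np

def asymmetric_wave(freq_hz=40.0, sharpness=4.0, duration_s=1.0, rate=8000):
    """Asymmetric vibration signal for pseudo-force rendering (illustrative).

    Each cycle is a short, strong negative pulse followed by a long, weak
    positive return; the skin rectifies this asymmetry into a perceived
    directional force even though the actuator stays centered on average.
    """
    t = np.arange(int(duration_s * rate)) / rate
    phase = (t * freq_hz) % 1.0                           # cycle position in [0, 1)
    pulse = -np.sin(np.pi * phase * sharpness)            # strong pulse, first 1/sharpness
    ret = np.sin(np.pi * (phase * sharpness - 1.0)
                 / (sharpness - 1.0)) / sharpness         # weak return, remainder
    wave = np.where(phase < 1.0 / sharpness, pulse, ret)
    wave -= wave.mean()  # remove residual DC so the actuator does not drift
    return wave          # drive a voice-coil actuator; negate to flip direction
```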
Wind displays reproduce the cutaneous stimuli of wind by blowing actual airflow, making virtual content more immersive. However, reproducing the sensation of a strong wind across the whole body, as might be desired in educational or entertainment scenarios, typically requires large and cumbersome fans. WearSway is a compact, lightweight wearable haptic device that simulates the swaying of clothes in the wind, expressing strong wind without large equipment. The device achieves this by using motors and strings to sway the clothing and a small fan to provide the wind sensation. While most existing wind displays stimulate the skin directly, WearSway introduces a novel approach by leveraging the movement of clothing to deliver a strong-wind simulation.