During the pandemic, wearables such as face masks and face shields have become broadly adopted. These solutions reduce the risk of infection, but they do not eliminate infectious agents from surfaces and objects the person may touch. Regular disinfection of hands and frequently touched surfaces therefore remains a critical factor in preventing the spread of infectious diseases ranging from the common cold and flu to SARS and COVID-19. Frequent disinfection, however, demands a high degree of discipline and adds cognitive and physical effort, whether through repeated hand washing or the use of a pocket sanitizer. We present an open-source, wearable sanitizer that provides just-in-time, automatic dispensing of alcohol onto the wearer's hand or nearby objects using sensors and programmable cues. We systematically explore the design space, aiming to create a device that not only integrates seamlessly with the user's body and behavior but also frees their physical and mental faculties for other tasks.
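To make the sensing-and-dispensing behavior concrete, the following Python sketch shows one plausible control loop: a proximity reading below a threshold triggers a short spray, followed by a cooldown. The read_proximity_cm and set_pump helpers, the threshold, and the timing constants are illustrative assumptions, not the published firmware.

```python
import time

# Hypothetical hardware helpers; the actual device's sensor and pump drivers may differ.
def read_proximity_cm() -> float:
    """Distance (cm) reported by a proximity sensor aimed past the wearer's palm."""
    return 100.0  # placeholder: replace with the real sensor driver

def set_pump(on: bool) -> None:
    """Switch the micro-pump that atomizes the alcohol."""
    pass  # placeholder: replace with the real pump driver

TRIGGER_CM = 8.0   # assumed: dispense when a hand or object is closer than this
SPRAY_S = 0.3      # assumed: duration of one spray burst
COOLDOWN_S = 2.0   # assumed: minimum time between bursts

def dispense_loop() -> None:
    last_spray = 0.0
    while True:
        if read_proximity_cm() < TRIGGER_CM and time.time() - last_spray > COOLDOWN_S:
            set_pump(True)
            time.sleep(SPRAY_S)
            set_pump(False)
            last_spray = time.time()
        time.sleep(0.02)  # poll at ~50 Hz
```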
Personal protective equipment, particularly face masks, has become increasingly common with the rise of global health issues such as fine-dust storms and pandemics. Face masks, however, also degrade speech intelligibility by occluding visual cues such as lip motion and facial expressions. In this paper, we propose MAScreen, a wearable LED display in the shape of a mask that senses lip motion and speech and provides real-time visual feedback of the mouth behind the mask.
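As a rough illustration of driving a mouth display from the wearer's speech, the sketch below maps short-term audio energy to the openness of a mouth pattern on a small LED matrix. The 8x16 matrix size, the gain, and the use of speech energy alone (rather than the paper's lip-motion sensing) are simplifying assumptions.

```python
import numpy as np

ROWS, COLS = 8, 16  # assumed LED matrix resolution

def mouth_frame(audio_chunk: np.ndarray) -> np.ndarray:
    """Map short-term speech energy to a simple open/closed mouth pattern.

    audio_chunk: mono samples in [-1, 1]. Returns a binary LED frame.
    """
    rms = float(np.sqrt(np.mean(audio_chunk ** 2)))
    openness = int(np.clip(rms * 40, 1, ROWS // 2))  # assumed gain of 40
    frame = np.zeros((ROWS, COLS), dtype=np.uint8)
    mid = ROWS // 2
    frame[mid - openness:mid + openness, 2:COLS - 2] = 1  # horizontal mouth band
    return frame

# Example: a loud chunk opens the mouth wider than a quiet one.
quiet = mouth_frame(0.01 * np.random.randn(512))
loud = mouth_frame(0.2 * np.random.randn(512))
```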
We present ZoomTouch, a breakthrough technology for multi-user, real-time control of a robot from Zoom via DNN-based gesture recognition. Users can hold a video conference in the digital world while simultaneously performing dexterous manipulations of tangible objects through a remote robot. As an example scenario, we propose a remote robotic COVID-19 testing laboratory that substitutes for a medical assistant working in protective gear in close proximity to infected cells and considerably reduces the time needed to receive test results. The proposed technology suggests a new type of reality in which multiple users can jointly interact with a remote object, e.g., designing a new building or cooking together in a robotic kitchen, and discuss and modify the results at the same time.
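A minimal sketch of the gesture-to-robot path, assuming a hypothetical gesture classifier and robot API (classify, robot.gripper, robot.move_relative); the actual DNN and robot interface used by ZoomTouch are not reproduced here.

```python
import numpy as np

# Assumed mapping from gesture labels to robot commands (step size in meters).
GESTURE_TO_COMMAND = {
    "pinch": ("gripper", "close"),
    "open_palm": ("gripper", "open"),
    "point_left": ("move", (-0.05, 0.0, 0.0)),
    "point_right": ("move", (0.05, 0.0, 0.0)),
}

def control_step(frame: np.ndarray, classify, robot) -> None:
    """One control tick: classify the shared-video frame and forward a robot command.

    classify: callable returning a gesture label (stand-in for the DNN).
    robot: object exposing gripper(state) and move_relative(dx, dy, dz) (assumed API).
    """
    label = classify(frame)
    command = GESTURE_TO_COMMAND.get(label)
    if command is None:
        return  # unrecognized gesture: do nothing this tick
    kind, arg = command
    if kind == "gripper":
        robot.gripper(arg)
    elif kind == "move":
        robot.move_relative(*arg)
```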
Dual Body was developed as a telexistence/telepresence system in which the user does not need to continuously operate an avatar robot but can still passively perceive feedback sensations when the robot performs actions. The system recognizes the user's speech commands, and the robot performs the task cooperatively. By combining passive sensation feedback with robot cooperation, the proposed system greatly reduces the perception of latency and the feeling of fatigue, which increases the quality of experience and task efficiency. In the demo experience, participants command the robot from individual rooms via a URL and RoomID, and they perceive sound and visual feedback from the robot as it travels, such as images or landscapes of the Tokyo Metropolitan University campus.
We present ElaStick, a handheld variable-stiffness controller capable of simulating the kinesthetic sensation of deformable and flexible objects when swung or shaken. ElaStick can render gradual changes of stiffness along two independent axes over a wide continuous range. Two trackers on the controller enable closed-loop feedback that accurately maps the device's deformations to the visuals of a virtual reality application.
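One way to turn the two tracker poses into a deformation estimate for the rendered object is sketched below: the tip tracker's forward vector is expressed in the handle tracker's frame and decomposed into two bending angles. The frame conventions and axis choices are assumptions for illustration, not ElaStick's actual mapping.

```python
import numpy as np

def bend_components(R_base: np.ndarray, tip_forward: np.ndarray) -> tuple[float, float]:
    """Estimate bending about the handle's two lateral axes (radians).

    R_base: 3x3 rotation matrix of the handle tracker (columns = local x, y, z axes).
    tip_forward: unit forward vector of the tip tracker in world coordinates.
    """
    local = R_base.T @ tip_forward                    # tip direction in handle coordinates
    bend_x = float(np.arctan2(local[1], local[2]))    # bend about local x axis
    bend_y = float(np.arctan2(local[0], local[2]))    # bend about local y axis
    return bend_x, bend_y

# The two angles can then drive the deflection of the rendered flexible object,
# e.g., proportionally, so the visuals track the physical deformation each frame.
```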
A midair physical prop is a promising tool for facilitating intuitive human-computer interaction in three-dimensional (3D) space. In such systems, it is challenging to guarantee safety and comfort during interaction with real objects in high-speed 3D motion. In this paper, we propose a balloon interface, a midair physical prop that affords direct single-handed manipulation in a safe manner. The system uses a spherical helium-filled balloon controlled by surrounding ultrasound phased-array transducers as a physical prop. It is safe to collide with even when it is moving fast because it has an elastic body and no mechanical parts. Fast switching of the driving transducer units and closed-loop control based on a high-speed measurement system enable 3D control of an object large enough to afford a spherical grasp, e.g., a balloon 10 cm in diameter.
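The closed-loop control could, for example, look like the following sketch: a PD law computes the desired force on the balloon, and the phased-array unit whose acoustic push direction best aligns with that force is selected for the next driving interval. The gains and the selection rule are illustrative, not the system's actual controller.

```python
import numpy as np

def control_tick(measured_pos, measured_vel, target_pos, units, kp=2.0, kd=0.8):
    """One closed-loop update for the balloon position.

    units: list of dicts with a 'position' 3-vector for each phased-array unit.
    Returns the selected unit and the desired force vector.
    """
    measured_pos = np.asarray(measured_pos, dtype=float)
    measured_vel = np.asarray(measured_vel, dtype=float)
    desired_force = kp * (np.asarray(target_pos, dtype=float) - measured_pos) - kd * measured_vel

    # Acoustic radiation pressure pushes the balloon away from the array, so pick the
    # unit whose push direction (unit -> balloon) is most aligned with the desired force.
    best, best_score = None, -np.inf
    for unit in units:
        push_dir = measured_pos - np.asarray(unit["position"], dtype=float)
        push_dir = push_dir / np.linalg.norm(push_dir)
        score = float(push_dir @ desired_force)
        if score > best_score:
            best, best_score = unit, score
    return best, desired_force
```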
Visuo-haptic augmented reality (AR) systems that present visual and haptic sensations in a spatially and temporally consistent manner have the potential to improve the performance of AR applications. Conventional systems, however, suffer from issues such as enclosing the user's view with a display, restricting the workspace to a limited flat area, or altering the presented visual information. In this paper, we propose “HaptoMapping,” a novel projection-based AR system that can present consistent visuo-haptic sensations on a non-planar physical surface without requiring users to wear any visual display and while preserving the quality of the visual information. We implemented a prototype of HaptoMapping consisting of a projection system and a wearable haptic device, and we introduce three application scenarios in daily scenes.
We propose a forearm-mounted robot that performs touches complementing the behaviors of a companion agent in virtual reality (VR). The robot consists of a series of tactors driven by servo motors that render specific tactile patterns to communicate primary emotions (fear, happiness, disgust, anger, and sympathy) and other notification cues. We showcase this through a VR game with physical-virtual agent interactions that strengthen the player-companion relationship and increase user immersion in specific scenarios. The player collaborates with the agent to complete a mission while receiving affective haptic cues with the potential to enhance sociality in the virtual world.
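A hedged sketch of how emotion-specific tactile patterns might be encoded and replayed on the servo-driven tactor row; the pattern tables, tactor indices, and set_servo driver are hypothetical placeholders rather than the device's real firmware.

```python
import time

# Assumed tactile patterns: lists of (tactor_index, angle_deg, dwell_s) steps per emotion.
PATTERNS = {
    "happiness": [(0, 40, 0.08), (1, 40, 0.08), (2, 40, 0.08), (3, 40, 0.08)],  # quick sweep up the arm
    "fear":      [(3, 60, 0.05), (2, 60, 0.05), (3, 60, 0.05), (2, 60, 0.05)],  # rapid alternation
    "sympathy":  [(1, 25, 0.40), (2, 25, 0.40)],                                # slow, gentle presses
}

def play_pattern(emotion: str, set_servo) -> None:
    """Render one affective cue on the forearm-mounted tactor row.

    set_servo(index, angle_deg): hypothetical driver that presses one tactor into the skin.
    """
    for index, angle, dwell in PATTERNS.get(emotion, []):
        set_servo(index, angle)   # press
        time.sleep(dwell)
        set_servo(index, 0)       # retract
```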
To enhance the user's experience of VR content, we propose KABUTO, a head-mounted haptic display designed to induce upper-body movement by applying kinesthetic feedback to the head. KABUTO provides impact and resistance using flywheels and brakes in response to various head movements, as extensive head movements lead to dynamic movement throughout the upper body. We have also designed an application that lets the user become a rhinoceros beetle: the user can feel the weight of the swinging horn or the impact of the horn flinging an object. In demonstrations, we observed that KABUTO makes users move their upper bodies vigorously.
Conventional swept volumetric displays can provide accurate physical cues for depth perception. However, the quality of texture reproduction is limited because these displays rely on high-speed projectors with low bit depth and low resolution. In this study, to address this limitation while retaining the advantages of swept volumetric displays, we design a new swept volumetric three-dimensional (3D) display that uses physical materials as screens. The physical materials directly reproduce textures on the displayed 3D surface. Furthermore, our system achieves hidden-surface removal based on real-time viewpoint tracking.
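Hidden-surface removal with a tracked viewpoint can be sketched as a coarse z-buffer test over samples of the displayed surface, as below; the angular binning, the point-sampled surface, and the epsilon are illustrative simplifications of however the actual system performs the visibility test.

```python
import numpy as np

def visible_mask(points: np.ndarray, eye: np.ndarray, grid: int = 64, eps: float = 1e-3) -> np.ndarray:
    """Return a boolean mask of surface points visible from the tracked eye position.

    points: (N, 3) samples of the displayed 3D surface. A point is kept only if no other
    sample in the same angular bin is closer to the eye (a coarse z-buffer over directions).
    """
    rel = points - eye
    dist = np.linalg.norm(rel, axis=1)
    dirs = rel / dist[:, None]
    az = np.arctan2(dirs[:, 0], dirs[:, 2])                  # azimuth of each view ray
    el = np.arcsin(np.clip(dirs[:, 1], -1.0, 1.0))           # elevation of each view ray
    u = np.clip(((az + np.pi) / (2 * np.pi) * grid).astype(int), 0, grid - 1)
    v = np.clip(((el + np.pi / 2) / np.pi * grid).astype(int), 0, grid - 1)
    zbuf = np.full((grid, grid), np.inf)
    np.minimum.at(zbuf, (v, u), dist)                        # nearest distance per bin
    return dist <= zbuf[v, u] + eps                          # only the closest samples survive
```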
We present a new method of mapping projections onto dynamic scenes by using multiple high-speed projectors. The proposed method controls the intensity in a pixel-parallel manner for each projector. As each projected image is updated in real time with low latency, adaptive shadow removal can be achieved even in a complicated dynamic scene. Additionally, our pixel-parallel calculation method allows a distributed system configuration so that the number of projectors can be increased by networked connections for high scalability. We demonstrated seamless mapping onto dynamic scenes at 360 fps by using ten cameras and four projectors.
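The pixel-parallel idea can be sketched as a simple per-pixel feedback law run independently by each projector node, as below; the proportional update and gain are illustrative and stand in for the paper's actual intensity-control rule.

```python
import numpy as np

def update_projector(output: np.ndarray, observed: np.ndarray, target: np.ndarray,
                     gain: float = 0.5) -> np.ndarray:
    """Pixel-parallel feedback update for one projector.

    output, observed, target: images in [0, 1] registered to this projector's pixels.
    Each pixel independently raises its intensity where the camera sees less light than
    desired (e.g., because another projector's contribution is occluded) and lowers it
    where the scene is over-lit.
    """
    error = target - observed
    return np.clip(output + gain * error, 0.0, 1.0)

# Because every pixel updates independently, each projector node can run this locally,
# which is what makes the networked, distributed multi-projector configuration possible.
```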
Projection mapping systems for the human face are limited by processing latency and user movement, and the projection area is restricted by the positions of the projectors and cameras. We introduce MaskBot, a real-time projection mapping system guided by a 6-degrees-of-freedom (DoF) collaborative robot. The collaborative robot positions the projector and camera in front of the user's face to increase the projection area and reduce the system's latency. A webcam detects the face orientation and measures the robot-user distance, and based on this information the system modifies the projection size and orientation. MaskBot projects different images onto the user's face, such as face modifications, make-up, and logos. In contrast to existing methods, the presented system is the first to introduce robotic projection mapping. One prospective application is acquiring a dataset of adversarial images to challenge face-detection DNN systems such as Face ID.
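As a rough illustration, the sketch below builds a 2D transform for the projected image from the detected head orientation and the measured robot-user distance: the image is scaled inversely with distance so the projection keeps a roughly constant physical size, and rotated and foreshortened to follow the head pose. The reference distance and the similarity-transform model are assumptions.

```python
import numpy as np

def projection_transform(yaw_deg: float, roll_deg: float, distance_m: float,
                         ref_distance_m: float = 0.5) -> np.ndarray:
    """Build a 3x3 homogeneous transform for the projected face image.

    yaw_deg, roll_deg: head orientation from the webcam face detector.
    distance_m: measured robot-user distance; ref_distance_m is an assumed reference.
    """
    scale = ref_distance_m / max(distance_m, 1e-3)      # keep projected size constant
    theta = np.deg2rad(roll_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    shear = np.cos(np.deg2rad(yaw_deg))                 # foreshorten horizontally as the head turns
    # Composition: scale x by scale*shear, y by scale, then rotate by the head roll.
    return np.array([
        [scale * cos_t * shear, -scale * sin_t, 0.0],
        [scale * sin_t * shear,  scale * cos_t, 0.0],
        [0.0,                    0.0,           1.0],
    ])
```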
Augmenting the surface of the human arm via projection mapping can have a great impact on our daily lives with regard to entertainment, human-computer interaction, and education. However, conventional methods ignore skin deformation and have a high latency from motion to projection, which degrades the user experience. In this paper, we propose a projection mapping system that solves these problems. First, we combine a state-of-the-art parametric deformable surface model with an efficient regression-based method that compensates for skin deformation. The compensation method modifies the texture coordinates using joint-tracking results to achieve high-speed and highly accurate image generation for projection. Second, we develop a high-speed system that reduces the motion-to-projection latency to less than 10 ms. Compared with conventional methods, this system provides more realistic experiences.
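A minimal sketch of the texture-coordinate compensation step, assuming a per-vertex linear regression from joint angles to UV offsets; the paper's actual regression features and model form are not reproduced here.

```python
import numpy as np

def compensate_uv(uv_rest: np.ndarray, joint_angles: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Shift rest-pose texture coordinates by a learned regression on joint angles.

    uv_rest: (V, 2) texture coordinates of the arm mesh in the rest pose.
    joint_angles: (J,) tracked joint angles for the current frame.
    W: (V, 2, J) regression weights fit offline from observed skin deformation (assumed linear model).
    """
    offsets = np.einsum('vij,j->vi', W, joint_angles)   # per-vertex (du, dv) correction
    return uv_rest + offsets

# At runtime only this cheap tensor product runs per frame, which is what keeps the
# image-generation path fast enough for a sub-10 ms motion-to-projection budget.
```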
Sharing virtual reality (VR) experiences between users wearing head-mounted displays (HMD users) and users not wearing HMDs (non-HMD users) is a promising approach to bridging the gap between these users' experiences. Previous studies did not consider the roles of these users or the differences in their attention targets, causing a lack of joint attention in user communication, and previous systems required cumbersome installation in the spaces where VR was experienced. Therefore, this paper proposes “CoVR,” a co-located VR sharing system comprising an HMD with a focus-free projector that projects the HMD user's perspective. We further introduce a design methodology for controlling the perspective of the images displayed to the HMD and non-HMD users, and we discuss three application scenarios in which the additional information provided differs for each user.
We propose CoiLED Display, a flexible and scalable display that transforms ordinary objects in our environment into displays simply by coiling the device around them. CoiLED Display consists of a strip-shaped display unit with a single row of attached LEDs, and after a calibration process it can represent information when wrapped onto a target object. The calibration required to fit each object to the system is achieved by capturing the entire object from multiple angles with an RGB camera, which recognizes the relative positional relationship among the LEDs. The advantage of this approach is that the calibration is simple yet robust, even if the coiled strips are misaligned or overlap each other. We demonstrated a proof-of-concept prototype using 5-mm-wide strips with LEDs mounted at 2-mm intervals, and we discuss various example applications of the proposed system.
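The calibration and rendering steps might be sketched as follows: each LED is located in camera images (here by lighting one LED at a time, an assumed procedure), and at display time the target image is sampled at each LED's calibrated position. capture_with_only is a hypothetical capture helper, and nearest-neighbour sampling is a simplification.

```python
import numpy as np

def locate_leds(capture_with_only, num_leds: int) -> np.ndarray:
    """Calibration sketch: light one LED at a time and find its pixel in the camera image.

    capture_with_only(i): hypothetical helper returning a grayscale frame with only LED i lit.
    Returns (N, 2) normalized image coordinates; repeating this from several camera angles
    and merging the detections recovers the relative layout of the coiled strips.
    """
    uv = np.zeros((num_leds, 2))
    for i in range(num_leds):
        frame = capture_with_only(i)
        r, c = np.unravel_index(np.argmax(frame), frame.shape)   # brightest pixel = LED i
        uv[i] = (c / frame.shape[1], r / frame.shape[0])
    return uv

def colors_for_leds(target: np.ndarray, led_uv: np.ndarray) -> np.ndarray:
    """Assign a color to every LED by sampling the target image at its calibrated position."""
    h, w = target.shape[:2]
    cols = np.clip((led_uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip((led_uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return target[rows, cols]   # (N, 3) per-LED colors, robust to misaligned or overlapping strips
```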
We present the design and implementation of a “Laser Graphics Processing Unit” (LGPU) featuring a re-configurable graphics pipeline capable of minimal-latency interactive feedback without the need for computer communication. This is a novel approach to creating interactive graphics in which a simple program describes the interaction on a vertex. Similar in design to a geometry or fragment shader on a GPU, these programs are uploaded at initialisation and require no input from any external micro-controller while running. The interaction shader takes input from a light sensor and updates the vertex and fragment shaders, an operation that can be parallelised. Once loaded onto our prototype LGPU, the pipeline can create laser graphics that react within 4 ms of interaction and run without input from a computer. The pipeline achieves this low latency because the interaction shader communicates with the geometry and vertex shaders that also run on the LGPU. This enables low-latency displays such as car counters, musical instrument interfaces, and non-touch projected widgets or buttons. In our tests, we achieved a reaction time of 4 ms at a range of up to 15 m.
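Because the real pipeline runs as re-configurable hardware stages on the LGPU, the Python sketch below only mirrors the data flow: an interaction stage reads the light sensor and writes parameters that the vertex and fragment stages consume. The threshold, scaling factor, and colors are arbitrary illustrative choices, not the LGPU's programming model.

```python
import numpy as np

def interaction_shader(sensor_value: float, state: dict) -> dict:
    """Stand-in for the per-vertex interaction program: turn a light-sensor reading
    into parameters for the later stages (threshold is an assumption)."""
    state["touched"] = sensor_value > 0.6
    state["scale"] = 1.3 if state["touched"] else 1.0
    return state

def vertex_shader(vertices: np.ndarray, state: dict) -> np.ndarray:
    """Scale the laser path around its centroid when the widget is 'touched'."""
    center = vertices.mean(axis=0)
    return center + (vertices - center) * state["scale"]

def fragment_shader(state: dict) -> tuple[float, float, float]:
    """Pick the beam color (RGB intensity) based on the interaction state."""
    return (0.0, 1.0, 0.0) if state["touched"] else (1.0, 1.0, 1.0)

# One pipeline tick, as it might run entirely on the LGPU without a host computer:
square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)
state = interaction_shader(sensor_value=0.8, state={})
points, color = vertex_shader(square, state), fragment_shader(state)
```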
Until now, immersive 360° VR panoramas could not be captured both casually and reliably, as state-of-the-art approaches involve time-consuming or expensive capture processes that prevent the casual capture of real-world VR environments. Existing approaches are also often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for casually and reliably capturing high-quality 360° VR panoramas. Our approach requires only a single sweep of a consumer 360° video camera as input, which takes less than 3 seconds with a rotating selfie stick. The captured video is transformed into a hybrid scene representation consisting of a coarse scene-specific proxy geometry and optical flow between consecutive video frames, enabling 5-DoF real-world VR experiences. The large capture radius and 360° field of view significantly expand the range of head motion compared with previous approaches. Among all competing methods, ours is the simplest and faster by an order of magnitude. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes. We will make our code and datasets publicly available.
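A hedged sketch of the viewpoint-dependent frame selection that 5-DoF playback of a circular sweep might use: given the head position, pick the two capture-circle frames that bracket its azimuth and a blend weight for flow-based interpolation. The selection rule and the head-azimuth parameterization are assumptions, not the OmniPhotos implementation.

```python
import numpy as np

def select_frames(head_pos: np.ndarray, camera_angles: np.ndarray):
    """Pick the two capture-circle frames that bracket the current head direction.

    head_pos: head position relative to the circle center (meters).
    camera_angles: (F,) angle of each video frame along the circular sweep (radians).
    Returns two frame indices and a blend weight in [0, 1] (0 -> frame i, 1 -> frame j).
    """
    theta = np.arctan2(head_pos[0], head_pos[2])                  # azimuth of the head position
    diffs = (camera_angles - theta + np.pi) % (2 * np.pi) - np.pi  # wrapped angular differences
    i = int(np.argmin(np.abs(diffs)))                              # nearest frame
    j = (i + 1) % len(camera_angles) if diffs[i] < 0 else (i - 1) % len(camera_angles)
    span = abs(((camera_angles[j] - camera_angles[i]) + np.pi) % (2 * np.pi) - np.pi)
    w = abs(diffs[i]) / max(span, 1e-6)
    return i, j, float(w)

# The selected pair would then be warped toward the novel viewpoint using the stored
# optical flow and the coarse proxy geometry, and blended with weight w (not shown here).
```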