SIGGRAPH '18: ACM SIGGRAPH 2018 Emerging Technologies

A full-color single-chip-DLP projector with an embedded 2400-fps homography warping engine

We demonstrate a 24-bit full-color projector that achieves over-2400-fps motion adaptability to a fast-moving planar surface using single-chip DLP technology, which will be useful for projection-mapping applications in highly dynamic scenes. The projector can be interfaced with a host PC via standard HDMI and USB without imposing a high computational burden on the host.
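
As a rough illustration of what such a warping engine computes per frame, the sketch below applies a 3x3 planar homography to a pixel coordinate; the matrix values and function names are illustrative assumptions, not the projector's actual firmware.

```python
import numpy as np

def warp_point(H, x, y):
    """Map a source pixel (x, y) through a 3x3 homography H.

    The projective divide by w is what lets a single matrix model
    the pose of a tilted planar surface relative to the projector.
    """
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical homography for a slightly rotated, shifted plane.
H = np.array([[0.98, -0.02,  5.0],
              [0.03,  1.01, -2.0],
              [1e-5,  2e-5,  1.0]])
print(warp_point(H, 320.0, 240.0))
```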

Aerial-biped: a new physical expression by the biped robot using a quadrotor

We present a biped robot that can move with greater agility than conventional biped robots. Our robot generates bipedal walking motion automatically using the proposed method: a quadrotor provides balance and propulsion, enabling more agile movement, while a gait is generated interactively and in real time according to the quadrotor's motion using an optimized control policy for the legs. Our system takes the velocity of the quadrotor as input and produces leg motions that drive the velocity of the foot in contact with the ground to zero, thereby generating a bipedal walking motion. The control policy is optimized using reinforcement learning with a physics engine.
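
A minimal sketch of the stance-foot constraint described above: for the supporting foot to have zero world velocity, the hip must swing the leg backward at a rate matching the quadrotor's forward speed. The single pendulum-leg simplification and all names are illustrative assumptions, not the authors' learned controller.

```python
import math

def stance_hip_rate(body_velocity_mps, leg_length_m, hip_angle_rad):
    # foot velocity = body velocity + leg_length * cos(hip_angle) * hip_rate;
    # setting the foot velocity to zero and solving for the hip joint rate:
    return -body_velocity_mps / (leg_length_m * math.cos(hip_angle_rad))

print(stance_hip_rate(0.5, 0.3, 0.2))  # rad/s required at this instant
```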

Autofocals: gaze-contingent eyeglasses for presbyopes

Presbyopia, the loss of accommodation due to the stiffening of the crystalline lens, affects nearly 20% of the population worldwide. Traditional forms of presbyopia correction use fixed focal elements that inherently trade off field of view or stereo vision for a greater range of distances at which the wearer can see clearly, and none of them offers the natural refocusing enjoyed in youth. In this work, we present a new presbyopia correction, dubbed Autofocals, that externally mimics the natural accommodation response by combining data from an eye tracker and a depth sensor to automatically drive focus-tunable lenses. In our testing, wearers generally reported that Autofocals compare favorably with their own current corrective eyewear.
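
A minimal sketch of the control idea, under simplifying assumptions: intersect the tracked gaze with the depth map to obtain the fixation distance, then set the tunable lens to supply the vergence a presbyopic eye can no longer accommodate. The distance_rx parameter (the wearer's distance prescription) and all names are illustrative.

```python
def lens_power_diopters(fixation_distance_m, distance_rx=0.0):
    """Tunable-lens power needed to bring the fixated depth into focus."""
    d = max(fixation_distance_m, 0.1)  # clamp to a plausible near point
    return 1.0 / d + distance_rx       # 1/d diopters plus the prescription

print(lens_power_diopters(0.4))  # reading distance -> +2.5 D
```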

CHICAP: low-cost hand motion capture device using 3D magnetic sensors for manipulation of virtual objects

In this research, we propose a cost-effective three-finger exoskeleton hand-motion-capture device and a physics-engine-based hand-interaction module for an immersive experience in manipulating virtual objects. The developed device provides 12-DOF finger-motion data through a unique bevel-gear structure and six 3D magnetic sensors. It keeps the error in the relative distance between two fingertips below 2 mm and allows the user to reproduce precise hand motion while processing the complex joint data in real time. We synchronize hand motion with a physics-engine-based interaction framework that includes a grasp interpreter and multi-modal feedback in virtual reality to minimize penetration of the hand into an object. The system makes object manipulation feasible for a wide range of tasks in virtual environments.

Coglobe: a co-located multi-person FTVR experience

Fish Tank Virtual Reality (FTVR) creates a compelling 3D illusion for a single person by rendering to their perspective with head tracking. Typically, however, other participants cannot share in the experience: they see a distorted image when they look at the FTVR display, making it difficult to work and play together. To overcome this problem, we have created CoGlobe, a large spherical FTVR display for multiple users. Using CoGlobe, SIGGRAPH attendees will experience the latest advances in FTVR, which supports multiple co-located people working and playing together in a shared space through several multiplayer games and tasks. We have created a competitive two-person 3D Pong game (Figure 1b) in which attendees experience a highly interactive two-person game on the CoGlobe. Onlookers can also watch using a variation of mixed reality with a tracked mobile smartphone; using a smartphone as a second screen registered to the same virtual world enables multiple people to interact together as well. We have also created a cooperative multi-person 3D drone game (Figure 1c) to illustrate cooperation in FTVR, and attendees will see how effective co-located FTVR is when cooperating on a complex 3D mental-rotation task (Figure 1d) and a path-tracing task (Figure 1a). CoGlobe overcomes the limited situational awareness of headset VR while retaining the benefits of cooperative 3D interaction, making it an exciting direction for the next wave of 3D displays for work and fun.

Fairlift: interaction with mid-air images on water surface

FairLift is an interaction system involving mid-air images that are visible to the naked eye under and on a water surface. In this system, the water surface reflects light from micro-mirror array plates, and a mid-air image appears. The system enables a user to interact with the mid-air image by adjusting the position of the image on a light-source display according to the water level measured with an ultrasonic sensor. The contributions of this system are enriching interaction with mid-air images and addressing the limitations of conventional water-display systems.
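
A minimal sketch of the coupling just described, assuming the micro-mirror array plate forms the mid-air image symmetrically to the light-source display: to keep the image pinned to the moving water surface, the source image is shifted by the change in the measured water level. All names and values are illustrative assumptions.

```python
def display_offset_mm(measured_level_mm, reference_level_mm=100.0):
    # Shift the source image by the water-level change so the reflected
    # mid-air image stays anchored to the surface (illustrative model).
    return measured_level_mm - reference_level_mm

print(display_offset_mm(92.5))  # water dropped 7.5 mm -> shift image -7.5 mm
```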

Fusion: full body surrogacy for collaborative communication

Effective communication is a key factor in social and professional contexts that involve sharing the skills and actions of more than one person. This research proposes a novel system for full-body sharing over a remotely operated wearable system, allowing one person to dive into someone else's body. "Fusion" enables body surrogacy by sharing a single point of view between two people, a surrogate and an operator, and it extends the operator's limb mobility and actions through two robotic arms mounted on the surrogate's body. These arms can be used independently of the surrogate's arms in collaborative scenarios, or can be linked to the surrogate's arms for remote assistance and support scenarios. Using Fusion, we realize three levels of bodily driven communication: Direct, Enforced, and Induced. Through this system we demonstrate the possibility of truly embodying and transferring body actions from one person to another, realizing true body communication.

Gum-gum shooting: inducing a sense of arm elongation via forearm skin-stretch and the change in the center of gravity

Many people imagine wielding superhuman abilities like those that appear in games and animation. Among these abilities, we focus on reproducing the experience of an arm stretching beyond the limits of the human body. We propose a method for inducing a sense of arm elongation using a device attached to the forearm, combined with a visual cue that changes the body structure of the user's avatar in the virtual environment. Our device shifts a mass from the elbow toward the wrist while stretching the skin of the forearm in sync with the animation in the virtual environment. We hypothesize that the sensation of the forearm skin being stretched, together with the change in the arm's weight distribution, corresponds to the feeling of the arm being elongated. We therefore combine these two mechanisms in our device, allowing the user to feel a sense of arm stretching.

Hands-free augmented reality for vascular interventions

During a vascular intervention (a type of minimally invasive surgical procedure), physicians maneuver catheters and wires through a patient's blood vessels to reach a desired location in the body. Since the relevant anatomy is typically not directly visible in these procedures, virtual reality and augmented reality systems have been developed to assist in 3D navigation. Because both of a physician's hands may already be occupied, we developed an augmented reality system supporting hands-free interaction techniques that use voice and head tracking to enable the physician to interact with 3D virtual content on a head-worn display while leaving both hands available intraoperatively. We demonstrate how a virtual 3D anatomical model can be rotated and scaled using small head rotations through first-order (rate) control, and can be rigidly coupled to the head for combined translation and rotation through zero-order control. This enables easy manipulation of a model while it stays close to the center of the physician's field of view.
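
A minimal sketch contrasting the two control modes described above (names, gains, and the dead zone are illustrative assumptions, not the authors' implementation): in first-order (rate) control, head yaw offset past a dead zone sets the model's rotation speed, whereas in zero-order control the model pose simply tracks the head pose.

```python
def rate_control_step(model_yaw, head_yaw_offset, dt, gain=2.0, dead_zone=0.05):
    """One first-order control update: head offset drives a rotation rate."""
    if abs(head_yaw_offset) < dead_zone:
        return model_yaw                   # ignore small head motions
    sign = 1.0 if head_yaw_offset > 0 else -1.0
    rate = gain * (head_yaw_offset - sign * dead_zone)
    return model_yaw + rate * dt           # integrate a velocity, not a pose
```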

Hapcube: a tactile actuator providing tangential and normal pseudo-force feedback on a fingertip

We developed a tactile actuator named HapCube that provides tangential and normal pseudo-force feedback on a user's fingertip. The tangential feedback is generated by synthesizing two orthogonal asymmetric vibrations, and it simulates frictional force in any desired tangential direction. The normal feedback simulates the tactile sensations of pressing various types of buttons. In addition, by combining the two types of feedback, the actuator can render frictional force and surface texture simultaneously.
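
A minimal sketch of directional pseudo-force from two orthogonal asymmetric vibrations (the waveform shape and parameters are illustrative assumptions): each axis plays a zero-mean waveform whose brief strong pulse in one direction is perceived while the longer, weaker return stroke is not, and scaling the two axes by the cosine and sine of the target angle steers the net illusory force.

```python
import numpy as np

def asymmetric_waveform(t, freq=40.0):
    # Zero-mean but asymmetric: +1.0 for a quarter period, -1/3 for the rest.
    phase = (t * freq) % 1.0
    return np.where(phase < 0.25, 1.0, -1.0 / 3.0)

def xy_drive(t, direction_rad):
    # Project the waveform onto the two orthogonal actuator axes.
    w = asymmetric_waveform(t)
    return np.cos(direction_rad) * w, np.sin(direction_rad) * w

t = np.linspace(0.0, 0.1, 1000)
x_cmd, y_cmd = xy_drive(t, np.deg2rad(30.0))  # pseudo-force toward 30 degrees
```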

Headlight: egocentric visual augmentation by wearable wide projector

Visual augmentation of the real environment has the potential not only to display information but also to provide a new perception of the physical world. However, currently available mixed-reality technologies cannot provide a sufficient angle of view. We therefore introduce "HeadLight", a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fish-eye wide-conversion lens, a headphone, and a pose tracker. HeadLight provides a projection angle of approximately 105 degrees horizontally and 55 degrees vertically from the user's point of view. In this system, a three-dimensional virtual space consistent with the physical environment is rendered with a virtual camera based on tracking information from the device. By applying an inverse correction for the lens distortion and projecting the rendered image from the projector, HeadLight performs consistent visual augmentation of the real world. With HeadLight, we envision that physical phenomena humans cannot ordinarily perceive will become perceivable through visual augmentation.

Human support robot (HSR)

There has been increasing worldwide interest in mobile manipulators capable of performing physical work in living spaces, driven by population aging and declining birth rates and by the expectation of improving quality of life (QOL). Research and development in intelligent sensing and software that enable advanced recognition, judgment, and motion are essential to realize household work by robots. To accelerate this research, we have developed a compact and safe research platform, the Human Support Robot (HSR), which can be operated in an actual home environment. We expect that overall R&D will accelerate when many researchers adopt a common robot platform, since it enables them to share their research results. In this paper, we introduce the HSR's design and its utilization.

Leviopole: mid-air haptic interactions using multirotor

We present LevioPole, a rod-like device that provides mid-air haptic feedback for full-body interaction in virtual reality, augmented reality, and other daily activities. The device is constructed from two rotor units, built from propellers, motors, speed controllers, batteries, and sensors, allowing portability and ease of use. With one rotor unit at each end of the pole, the rotors generate both rotational and linear forces that can be driven according to the target application. In this paper, we introduce example applications in both VR and the physical environment: embodied gaming with haptic feedback and walking navigation in a specified direction.
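
A minimal sketch of the force decomposition implied above (the geometry and names are illustrative assumptions): with one rotor unit at each end of a pole of length L, equal thrusts yield a net linear force along the thrust axis, while differential thrust yields a torque about the pole's center.

```python
def pole_wrench(f_left_n, f_right_n, pole_length_m=1.0):
    """Net linear force (N) and torque (N*m) from the two rotor thrusts."""
    linear = f_left_n + f_right_n
    torque = (f_right_n - f_left_n) * (pole_length_m / 2.0)
    return linear, torque

print(pole_wrench(2.0, 5.0))  # unequal thrust -> lift plus a rotating cue
```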

Make your own retinal projector: retinal near-eye displays via metamaterials

Retinal projection is required for xR applications that can deliver immersive visual experiences throughout the day. If general-purpose retinal projection methods could be realized at low cost, not only could images be displayed on the retina using less energy, but the weight of the projection unit itself could also be removed from the AR goggles. Several retinal projection methods have been proposed previously. Maxwellian-optics-based retinal projection was proposed in the 1990s [Kollin 1993]; laser scanning [Liao and Tsai 2009] and laser projection using spatial light modulators (SLMs) or holographic optical elements have also been explored [Jang et al. 2017]. In the commercial field, the QD Laser device, with a viewing angle of 26 degrees, is available. However, because the lens and iris of the eyeball sit in front of the retina, a limitation of the human eye, retinal projection proposals are generally fraught with narrow viewing angles and small eyeboxes. Owing to these problems and the resulting difficulty of optical design, retinal projection displays are still a rare commodity.

Real-time non-line-of-sight imaging

Non-line-of-sight (NLOS) imaging aims at recovering the shape of objects hidden outside the direct line of sight of a camera. In this work, we report a new approach for acquiring time-resolved measurements that are suitable for NLOS imaging. The system uses a confocalized single-photon detector and a pulsed laser. As opposed to previously proposed NLOS imaging systems, our setup is very similar to the LIDAR systems used in autonomous vehicles, and it facilitates a closed-form solution of the associated inverse problem, which we derive in this work. The resulting algorithm, dubbed the Light Cone Transform, is three orders of magnitude faster and more memory-efficient than existing methods. We demonstrate experimental results for indoor and outdoor scenes captured and reconstructed with the proposed confocal NLOS imaging system.
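
A heavily simplified sketch of the closed-form flavor of this approach: after the transform's nonlinear resampling of the time axis, reconstruction reduces to a 3D deconvolution, shown here as a Wiener filter in the frequency domain. The kernel psf, the snr parameter, and the omitted resampling step are illustrative placeholders, not the published operator.

```python
import numpy as np

def wiener_deconv3d(measurement, psf, snr=1e2):
    """3D Wiener deconvolution; measurement and psf must share one 3D shape."""
    M = np.fft.fftn(measurement)
    K = np.fft.fftn(np.fft.ifftshift(psf))
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)  # regularized inverse filter
    return np.real(np.fft.ifftn(W * M))
```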

SEER: simulative emotional expression robot

SEER (Simulative Emotional Expression Robot) is an animatronic humanoid robot that generates gaze and emotional facial expressions to improve animacy, lifelikeness, and impressiveness through the integrated design of modeling, mechanism, materials, and computing. The robot can simulate a user's movement, gaze, and facial expressions detected by a camera sensor. This system can be applied to puppetry, telepresence avatars, and interactive automation.

Spherical full-parallax light-field display using ball of fly-eye mirror

We present an optical system design for a 3D display that is spherical, full-parallax, and occlusion-capable, with a wide viewing zone and no head tracking. The proposed system provides a new approach to 3D display and thereby addresses limitations of the conventional light-field display structure; a spherical full-parallax light-field display is otherwise difficult to achieve because the conventional light-field display structure is hard to curve. The key elements of the system are a specially designed ball mirror and a high-speed projector. The ball mirror rotates uniaxially and reflects rays from the projector to various angles, while the projector controls the intensities of these rays. Rays from a virtual object inside the ball mirror are thus reconstructed, and the system acts as a light-field display based on time-division multiplexing. We implemented the ball mirror by 3D printing and metal plating. The prototype successfully displays a 3D image, confirming the feasibility of the system. Our system is thus suitable for displaying 3D images to many viewers simultaneously and can be effectively employed in art or advertising installations.

Steerable application-adaptive near eye displays

The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, leading to arbitrarily shaped static projection-screen surfaces that are adapted to the targeted applications. We identify a computational optical design methodology to generate the various optical components accordingly, leading to small compute and power demands. To this end, we introduce our augmented reality prototype with a moderate form factor and a large field of view. We also show that our prototype promises high resolution through a foveation technique using a moving lens in front of the projection system. We believe our display technique provides a gateway to application-adaptive, easily replicable, customizable, and cost-effective near-eye display designs.

Taste controller: galvanic chin stimulation enhances, inhibits, and creates tastes

Galvanic tongue stimulation (GTS) is a technology for changing and inducing taste sensations with electrical stimulation. Previous studies have shown that cathodal current stimulation induces two types of effects. The first is taste suppression, which weakens tastes induced by electrolytic materials during stimulation. The second is taste enhancement, which makes tastes stronger shortly after the stimulation ends. These effects open up the possibility of emulating taste, ultimately allowing the strength of taste sensations to be controlled freely. Taste emulation has been considered for various applications, such as virtual reality and dieting. However, conventional GTS has several problems: the duration of taste enhancement is too short for use in dieting, it requires attaching electrodes inside the mouth, and it cannot induce taste at the throat, only in the mouth. This study and our associated demonstration therefore introduce approaches to address these problems, allowing taste to be changed voluntarily and the effects to persist for longer periods of time.

Transcalibur: weight moving VR controller for dynamic rendering of 2D shape using haptic shape illusion

We introduce a VR controller with a dynamically moving weight for 2D haptic shape rendering using a haptic shape illusion. This allows users to perceive various shapes in virtual space with a single controller. We designed a device that drives a weight across a 2D planar area to alter the mass properties of the hand-held controller. Our user study showed that the system succeeds in providing shape perception over a wide range.
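
A minimal sketch of why moving the internal weight changes the perceived shape: the controller's rotational inertia about the grip depends on the weight's 2D position, per the parallel-axis theorem. The point-mass simplification and all values are illustrative assumptions.

```python
def inertia_about_grip(base_inertia_kgm2, weight_kg, x_m, y_m):
    """Rotational inertia of controller body plus a point mass at (x, y)."""
    r2 = x_m ** 2 + y_m ** 2
    return base_inertia_kgm2 + weight_kg * r2  # parallel-axis theorem

# Moving a 100 g weight from the grip to 15 cm away raises the felt inertia.
print(inertia_about_grip(0.002, 0.1, 0.15, 0.0))
```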

Transmissive mirror device based near-eye displays with wide field of view

We present a transmissive mirror device (TMD) based near-eye see-through display with a wide viewing angle and high resolution for virtual reality and augmented reality. Over the years, many optical elements, such as transmissive liquid-crystal displays (LCDs), half-mirrors, waveguides, and holographic optical elements (HOEs), have been adopted for near-eye see-through displays. However, it remains difficult for a novice developer to obtain a wide field of view with see-through capability. To accomplish this, we developed a simple see-through display that is easily set up from a combination of an off-the-shelf HMD and a TMD. In the proposed method, we render a "virtual lens" in the air that has the same function as the HMD lens. The TMD makes it possible to shorten the optical length between this virtual lens and the eyeball, so the aerial lens provides a wide viewing angle with see-through capability. We demonstrate a prototype with a diagonal viewing angle of 100 degrees.

Verifocal: a platform for vision correction and accommodation in head-mounted displays

The vergence-accommodation conflict is a fundamental cause of discomfort in today's virtual and augmented reality (VR/AR). We present a novel software platform and hardware for varifocal head-mounted displays (HMDs) that generate consistent accommodation cues and account for the user's prescription. We investigate multiple varifocal optical systems and propose the world's first varifocal mobile HMD based on Alvarez lenses. We also introduce a varifocal rendering pipeline, which corrects for the distortion introduced by the optical focus adjustment, approximates retinal blur, incorporates eye tracking, and leverages rendered content to correct noisy eye-tracking results. We demonstrate the platform running in compact VR headsets and present initial results in video pass-through AR.
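
A minimal sketch of the retinal-blur approximation idea under a thin-lens, small-angle model: the angular size of the blur circle grows with the dioptric distance between where the varifocal optics are focused and where a pixel's content lies. The pupil diameter and names are illustrative assumptions, not the authors' pipeline.

```python
def blur_diameter_rad(pixel_depth_m, focus_depth_m, pupil_m=0.004):
    # Angular blur (radians) ~= pupil diameter * dioptric defocus.
    return pupil_m * abs(1.0 / pixel_depth_m - 1.0 / focus_depth_m)

# Content at 0.5 m while focused at 2.0 m -> 1.5 D of defocus blur.
print(blur_diameter_rad(0.5, 2.0))
```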

VPET: virtual production editing tools

The work on intuitive Virtual Production tools at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film-creation pipelines. The Virtual Production Editing Tools (VPET) started in a former project on Virtual Production funded by the European Union and are published and continuously updated on the open-source software development platform GitHub. We introduce an intuitive workflow in which augmented reality, inside-out tracking, and real-time color keying can be applied on the fly to extend a real movie set with editable virtual extensions in a collaborative setup.

Wind-blaster: a wearable propeller-based prototype that provides ungrounded force-feedback

Ungrounded haptic force feedback is a crucial element for applications that aim to immerse users in virtual environments where mobility is also an important component of the experience, such as virtual reality games. In this paper, we present a novel wearable interface that generates force feedback by spinning two drone propellers mounted on the wrist. The device is interfaced with a game running in Unity and is capable of rendering different haptic stimuli mapped to four weapons. A simple evaluation with users demonstrates the feasibility of the proposed approach.