SIGGRAPH '24: ACM SIGGRAPH 2024 Immersive Pavilion


Acoustic Garden: Exploring Accessibility and Interactive Music with Distance-related Audio Effect Modulation in XR

In AR/XR design, spatial audio and audio-driven narratives are generally considered secondary to visual interfaces and storytelling. Non-visual content production and research for visually impaired users remain underrepresented. Similarly, in the field of music, visual interfaces dominate discussions of gesture-based sound synthesis, leaving intuitive, audience-driven experiences underexplored. Historically, studies in architecture and music have examined spatial and musical sequences, but they largely remained confined to print media representations.

In this project, inspired by the progressive structure of electronic music, we introduce a spatialized sound synthesis method based on distance-related audio effect modulation combined with binaural spatialization. This approach guides users through space without depending on visual indicators, relying instead on auditory cues from a multitude of virtual audio objects. These objects respond to user movements, providing dynamic and immersive musical experiences at walking scale. Interaction with specific audio objects enables users to dictate different musical progressions, which follow a self-similar spatial structure.
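As a concrete illustration, the following minimal sketch maps listener-to-object distance to effect parameters before binaural rendering. The parameter names, curves, and ranges are our own assumptions for illustration, not the project's actual mapping.

```python
import math

def distance_modulation(listener_pos, source_pos, max_dist=20.0):
    """Map listener-to-source distance to audio effect parameters.

    Hypothetical mapping: nearby objects get a brighter low-pass
    filter, a wetter effect mix, and a higher gain; the exact curves
    and ranges are illustrative assumptions.
    """
    d = math.dist(listener_pos, source_pos)
    t = min(d / max_dist, 1.0)                             # normalized distance in [0, 1]
    cutoff_hz = 200.0 + (8000.0 - 200.0) * (1.0 - t) ** 2  # filter opens up close to the object
    wet_mix = 1.0 - t                                      # effect fully wet at the source
    gain = 1.0 / (1.0 + d * d)                             # inverse-square-style attenuation
    return {"cutoff_hz": cutoff_hz, "wet_mix": wet_mix, "gain": gain}

# Example: effect parameters for a listener 5 m from a virtual audio object
print(distance_modulation((0.0, 0.0, 0.0), (3.0, 0.0, 4.0)))
```

The returned parameters would then drive a per-object effect chain whose output is binaurally spatialized.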

Unlike traditional sonification methods that translate data into plain audio, our approach emphasizes the emotional impact of auditory messages and unveils the potential for audience-involved, spatially driven musical narratives. In testing across various platforms, we encountered challenges in sound design, hardware limitations, and cognitive load.

Drosera Obscura

Drosera Obscura is an interactive installation that explores the evolutionary journey of a carnivorous plant thriving in the Pacific Northwest after humanity's departure. We imagine the remnants of human existence (plastics, metals, and machinery) becoming integral elements in the life and ecosystem of this resilient organism. The work incorporates touch, smell, sound, and animatronics to create an immersive experience.

Ellic's Exercise Camp: Engaging Children in Physical Activity Through Virtual Reality Gaming

Incorporating virtual reality (VR) games into children's physical activity routines offers an innovative way to promote exercise in a fun and engaging manner. By immersing children in captivating digital worlds that require physical movement to navigate, VR games transform daily sports activities into exciting adventures, as shown in Figure 1. Set within a vibrant cartoon environment, this game features a virtual exercise camp that captivates children and keeps them engaged and enthusiastic about physical activity. We have crafted several mini-games inspired by various sports, each designed to be both fun and easy to learn. Adorable cartoon characters come to life to guide players through the gameplay, enhancing the interactive experience. This immersive approach entertains children while encouraging them to embrace exercise as a joyful part of their daily routine.

Fate of the Minotaur: A Scalable Location-based VR Experience

In our narrative location-based VR experience "Fate of the Minotaur", players embody human sacrifices from Athens who are sent into the labyrinth of the Minotaur by Minos, King of Crete. The players learn about the tragic family story behind the ancient Greek myth and must pick a side by either killing the Minotaur or sparing the troubled creature's life. In a unique approach, the content can be experienced at different levels of immersive scale, depending on the technical and physical limitations of the presenting venue. From a technical perspective, a novel engine-agnostic, flexible, open-source virtual production framework was used to realize the multiplayer networking of the game. Our non-photorealistic visual approach, inspired by ancient Greek murals and vases, allows us to deliver the experience with a small footprint in energy consumption and required equipment.

FlowZen: Using Hybrid Haptics and Particle Effects for Enhancing Immersive Experience via a Continuous Illusion of Wind

With the advancement of haptic technology, we can create haptic experiences that simulate environmental changes seamlessly, even as the user moves around an immersive virtual reality world. For instance, stationary non-contact tactile devices, such as electric fans or heat lamps, can replicate wind and heat sensations in the virtual environment. While these stationary devices can effectively provide a full-body experience, their impact diminishes quickly with distance. Conversely, portable devices can provide a better experience up close, but the haptic feedback may affect only parts of the body, and the direction of the source is limited. Combining portable and stationary devices for a complementary effect is therefore a promising approach. We present FlowZen, a hybrid haptic system with particle effects that enhances wind and thermal sensations through tactile illusion, integrating handheld haptic devices delivering contact feedback with a stationary device delivering non-contact feedback. In our approach, visual particle effects connect the feedback from the two kinds of haptic devices.
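A minimal sketch of the complementary weighting idea, under the assumption (ours, for illustration) that the stationary fan's perceived intensity falls off roughly linearly with distance and the handheld device fills in the remainder:

```python
def blend_wind_feedback(user_to_fan_dist_m, falloff_m=3.0):
    """Complementary weighting of a stationary fan and handheld haptics.

    Assumed model: the fan's perceived wind decays linearly to zero at
    falloff_m, and the handheld device supplies the complement so the
    combined sensation stays roughly constant as the user moves.
    """
    fan_weight = max(0.0, 1.0 - user_to_fan_dist_m / falloff_m)
    handheld_weight = 1.0 - fan_weight
    return fan_weight, handheld_weight

for d in (0.5, 1.5, 3.0):
    fan, hand = blend_wind_feedback(d)
    print(f"{d} m -> fan {fan:.2f}, handheld {hand:.2f}")
```

In the actual system, the visual particle effects would be synchronized with such weights so the wind appears continuous across the handoff.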

GaussMR: Interactive Gaussian Splatting Sandbox with GPU Particles and Signed Distance Fields

Interactable Live Free-Viewpoint Video with Haptic Feedback

We present a novel interactable free-viewpoint video (FVV) system that generates photo-realistic and editable volumetric content with a high degree of freedom. On the one hand, we propose an enhanced visual-hull-guided neural representation with higher performance, use an easy-to-use sparse multi-camera capture system, and employ a high-performance VH-NeRF for fast generation of photo-realistic FVV results for live streaming. On the other hand, for monocular panorama video input, high-precision depth and a coarse dynamic mesh are computed and used for on-the-fly novel view synthesis of correct perspective views with motion parallax. For better interaction with high realism, dual-channel stereohaptics is implemented and attached to VR headsets to provide haptic feedback. Finally, our FVV solution supports effective compression and transmission of both multi-perspective videos and panorama video with depth data, as well as real-time rendering on consumer-grade hardware. To the best of our knowledge, our work is the first interactable FVV solution with high visual quality and real-time haptic feedback, enabling users to perform intuitive manipulation for immersive experiences in VR/AR applications.
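As one concrete piece of such a pipeline, the following sketch shows the standard geometry of depth-based novel view synthesis for panorama video: lifting an equirectangular pixel to 3D using its depth and reprojecting it for a shifted eye position. This is our illustration of the general technique, not the authors' exact implementation.

```python
import numpy as np

def reproject_panorama_pixel(u, v, depth, eye_offset, width, height):
    """Lift one equirectangular pixel to 3D and view it from a shifted eye.

    Standard depth-based reprojection (illustrative, not the paper's
    exact pipeline): pixel -> longitude/latitude -> 3D point -> new ray.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi            # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi           # latitude in [-pi/2, pi/2]
    point = depth * np.array([np.cos(lat) * np.sin(lon),
                              np.sin(lat),
                              np.cos(lat) * np.cos(lon)])
    rel = point - np.asarray(eye_offset, dtype=float)  # position relative to the new eye
    new_depth = np.linalg.norm(rel)
    d = rel / new_depth
    new_u = (np.arctan2(d[0], d[2]) + np.pi) / (2.0 * np.pi) * width
    new_v = (np.pi / 2.0 - np.arcsin(d[1])) / np.pi * height
    return new_u, new_v, new_depth
```

Applied per pixel, with occlusion handling, this yields perspective views with motion parallax from a single panorama plus depth.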

Metapunch X: Combining Multidisplay and Exertion Interaction for Watching and Playing E-sports in the Multiverse

E-sports events have become increasingly popular. In the Olympic Esports Series, ten titles were included in the official competition; four were exergames, and only one implemented extended reality (XR). However, no games apply encountered-type haptic feedback with exertion interaction. Moreover, these esports games are only broadcast on screen, resulting in a weak sense of presence in the virtual world. In this work, we introduce Metapunch X, an encountered-type haptic feedback esports game that integrates exertion interaction in XR with multi-screen spectating. The game is designed as an asymmetric competition, utilizing an XR head-mounted display (HMD) and a substitutional reality robot to create an esports experience with encountered-type haptic feedback. To enhance the audience's experience, in addition to the third-person-perspective broadcast, we provide 360-degree live streaming on mobile devices and virtual reality (VR) HMDs, allowing audiences to immerse themselves in the virtual environment and experience the competition as if they were personally present at the venue, engaging in a multiverse.

MOFA: Multiplayer Omnipresent Fighting Arena

As Mixed Reality (MR) head-mounted displays (HMDs) become more widely used, the context of collocated MR gaming is shifting from controlled, private environments to spontaneous, public settings in the wild. This shift transforms everyday locations into potential gaming stages, blurring the line between designated play areas and the public sphere. We term this phenomenon Spontaneous Collocated Mixed Reality (SCMR). This transformation necessitates game designs that accommodate both HMD wearers ("Wizards") and non-HMD wearers ("Muggles"). To explore the design space of synchronous social bodily interplay in SCMR, we developed the Multiplayer Omnipresent Fighting Arena (MOFA) framework, which leverages device asymmetry and role diversification to actively engage all participants. Using this framework, we created five game prototypes: "The Duel", "The Dragon", "The Ghost", "The Training", and "The Duck". These games, inspired by scenarios from fantastic fiction such as Harry Potter, Game of Thrones, and Ghostbusters, demonstrate that strategically involving "Muggles" enhances social engagement and acceptance. This helps us speculate about a shared mixed reality future that is more inclusive and entertaining in a ubiquitous and spontaneous manner.

Props and Rocks: Passive Haptic Mixed Reality for Navigating Far-off Worlds

Our project immerses participants in a world that blends virtual and physical realities. Users navigate a divided society, interacting with physical props that provide passive haptic feedback. Employing techniques such as redirected walking, the experience transitions seamlessly between virtual worlds while keeping participants inside the physical play space.
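For readers unfamiliar with redirected walking, the core trick is to scale the user's real head rotation (and sometimes steps) slightly before applying it to the virtual camera; gains close to 1 are hard to notice. The gains and structure below are illustrative assumptions, and the steering logic that chooses gains to keep the user inside the tracked space is omitted.

```python
def redirect(real_yaw_delta_deg, real_step_m, rot_gain=1.15, trans_gain=1.0):
    """Apply redirected-walking gains to real motion before it drives
    the virtual camera. With rot_gain above 1, the virtual world turns
    slightly faster than the user's head, so a straight virtual path
    becomes a gentle real-world curve away from physical walls.
    """
    return real_yaw_delta_deg * rot_gain, real_step_m * trans_gain
```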

Reframe: Recording and Editing Character Motion in Virtual Reality

Creating lifelike 3D character animations is traditionally complex and requires substantial skill and effort. To overcome this challenge, we introduce Reframe, a Virtual Reality animation authoring interface that allows users to record and edit motion. Reframe utilizes tracking technology in Virtual Reality headsets to capture the user’s full-body motion, facial expressions, and hand gestures. To facilitate the editing process, we have developed an immersive motion editing interface that combines spatial and temporal control for character animation. This system extracts keyposes from the 3D character animation and displays them along a timeline, connecting the joints through 3D trajectories to depict the character’s movement. We have created a proof-of-concept prototype that demonstrates how a single user can select and animate multiple characters in a scene. This system offers an interactive experience that explores the possibilities of future immersive animation technologies.
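One common heuristic for the keypose extraction step (our assumption for illustration; the system may use a different criterion) is to pick frames where overall joint motion nearly pauses:

```python
import numpy as np

def extract_keyposes(joint_positions, speed_threshold=0.05):
    """Pick candidate keyframes where mean joint speed dips to a local
    minimum below a threshold.

    joint_positions: array of shape (frames, joints, 3) from recorded
    full-body motion. Threshold units and value are illustrative.
    """
    displacement = np.diff(joint_positions, axis=0)            # per-frame joint displacement
    speed = np.linalg.norm(displacement, axis=2).mean(axis=1)  # mean joint speed per frame
    return [i for i in range(1, len(speed) - 1)
            if speed[i] < speed_threshold
            and speed[i] <= speed[i - 1]
            and speed[i] <= speed[i + 1]]
```

The selected keyposes can then be placed along the timeline and linked by the 3D joint trajectories described above.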

ReVerie

ReVerie is an interactive AI installation that collects and visualizes textual dream data from the artist and the audience, creating a collective dream-reliving experience. Within dream science, ‘dreamwork’ encompasses techniques such as dream analysis, interpretation, and exploration aimed at uncovering insights into the subconscious mind. Central to dreamwork is the concept of re-experiencing dreams: immersing oneself in the recollection of dream memories and emotions. Through a 3D generative diffusion model, the ReVerie system translates whispered descriptions of dream objects into immersive 3D visualizations in real time, facilitating dream re-experiencing and a collective dream fly-through.

The Human-like Non-human Series: Data_Revoc (The Precursor to the Electrotactile Asura Series)

What kind of immersive experience can MR technology, as a form of electrotactile artificial organ, provide?

Revoc is "cover" spelled backward, symbolizing the liberation of electro-tactility. Data.Revoc liberates the audience's perceptual abilities through microcurrents, visualizing the relationship between electrotactile sensations and sound using MR technology.

The Tent: Towards a Future of Spatial Entertainment: Movies you can walk around inside of, with or without a headset.

The dominant form of popular entertainment for the last 125 years, the "rectangle" of film and television, is waning, and the emerging medium of Spatial Entertainment is in its infancy. A misplaced focus on the form factor of Virtual Reality headsets has obscured this fundamental technological advancement while creating a bottleneck that has prevented the general population from experiencing this important new medium.

The Tent is a 22-minute Augmented Reality narrative built for the iPad Pro. It allows the audience to explore a 3D world on their tabletop, experiencing the story from any perspective and getting as close to the actor as they wish. Photogrammetry creates photo-real environments and props, and a live actor captured with volumetric video can convey a more nuanced humanity than a motion-captured or animated actor (Figure 1). Together they are a step towards "movies you can walk around inside of."

Unveiling the Invisible: Interactive Spatial Sensing Transforms Air Flow Measurement

This engaging experience transforms participants into aerodynamicists, enabling them to explore air flow fields firsthand. Utilizing groundbreaking augmented reality visualizations alongside an active learning measurement system, individuals wield a sensor with inside-out tracking, allowing them to navigate and directly uncover the dynamics of air flow. The system generates real-time visualizations of measured air currents, making the usually complex task of evaluating flow fields more accessible. Assisted by a companion artificial intelligence, users are directed to subsequent measurement points, effectively integrating them into the algorithm’s process. This innovative approach not only simplifies the examination of air flow fields but also enriches the investigative experience by making participants active contributors to the exploration.
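One plausible form of the active-learning guidance (our assumption; the demo's companion AI may use a different criterion) is to fit a surrogate model to the measurements so far and direct the user to the point where the model is least certain:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_measurement_point(measured_positions, measured_airspeed, candidate_points):
    """Suggest where to measure next: the candidate with the highest
    predictive uncertainty under a Gaussian process fit to the airflow
    samples gathered so far (a standard active-learning strategy).

    measured_positions: (n, 3) sensor positions; measured_airspeed: (n,)
    scalar readings; candidate_points: (m, 3) reachable locations.
    """
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(measured_positions, measured_airspeed)
    _, std = gp.predict(candidate_points, return_std=True)
    return candidate_points[int(np.argmax(std))]
```

Each new reading refines the model, and the AR overlay can render both the interpolated flow field and the suggested next probe location.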

WaterForm: Altering the Liquid to Generate Multisensory Feedback for Enhancing Immersive Environment

With virtual reality and haptic technologies, we can create compelling, immersive experiences. Researchers have therefore explored integrating different feedback modules to build richer hybrid feedback systems, or using a single module to generate multiple kinds of feedback through different interaction techniques. However, simultaneously providing multiple sensory stimuli in virtual environments presents challenges beyond system integration: ensuring that users can experience multiple stimuli without compromising the overall immersive experience, as in the physical world, remains difficult. We present WaterForm, a liquid-transformation system that uses liquid to generate multisensory feedback, including water splash, water flow, gravity, wind, resistance, buoyant force, mechanical energy, and mist, to enhance the immersive environment. In our demonstration, we developed a VR excursion through a virtual Eastern landscape to explore self-awareness via immersive storytelling.

Windmill

XRweld: An In-Situ Extended Reality Platform for Welding Education

Craft-based manufacturing trades such as metal welding require extensive hands-on training and mentorship to build embodied knowledge, muscle memory, and experiential learning. There is therefore an opportunity to leverage spatial computing platforms to monitor welding behaviors and provide feedback and assistive value through an extended reality (XR) system. Our demo integrates an XR headset and controller into a fully functional welding helmet and torch, allowing users to receive in-situ, real-time feedback while actively welding. The system also lets us explore other aspects of the embodied learning of welding, including the analysis of biometric data, performance analysis, and the inclusion of meditation to further augment the welding educational experience.
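Two classic feedback signals such a system could derive from the tracked torch pose are travel speed and work angle; the computation below is our sketch of one way to produce them, not necessarily XRweld's implementation.

```python
import numpy as np

def welding_metrics(prev_tip, curr_tip, torch_axis, seam_normal, dt):
    """Compute travel speed (m/s) and work angle (degrees) from two
    consecutive tracked torch-tip positions, the torch axis direction,
    and the workpiece seam normal. Both are standard welding-quality
    cues an in-situ display could show in real time.
    """
    travel_speed = np.linalg.norm(np.subtract(curr_tip, prev_tip)) / dt
    cos_angle = np.dot(torch_axis, seam_normal) / (
        np.linalg.norm(torch_axis) * np.linalg.norm(seam_normal))
    work_angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return travel_speed, work_angle_deg
```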

Ink Harmony: An AI- and VR-enhanced System for Calligraphy Education

This work introduces an assistive system designed to deepen engagement with the art of calligraphy and Chinese culture. Users input the text they wish to learn, which an AI framework processes to create a thematic photo. This photo becomes part of a virtual environment that contextualizes the calligraphy experience. Simultaneously, the system drives a robotic arm that guides the user's hand, improving writing technique and engagement with the art form. This combination of AI, virtual reality, and robotic assistance paves the way for interactive learning and appreciation of calligraphy.
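Hand guidance of this kind is often realized as a compliant controller pulling the hand toward the current target point on the stroke path; the PD control law and gains below are generic illustrative assumptions, not the authors' published controller.

```python
import numpy as np

def guidance_force(hand_pos, target_pos, hand_vel, kp=40.0, kd=8.0):
    """PD-style corrective force toward the current target point on the
    stroke path: a spring term pulls the hand to the target, a damping
    term keeps the guidance stable. Gains are illustrative.
    """
    hand_pos, target_pos, hand_vel = map(np.asarray, (hand_pos, target_pos, hand_vel))
    return kp * (target_pos - hand_pos) - kd * hand_vel
```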