WEB3D '24: Proceedings of the 29th International ACM Conference on 3D Web Technology

Full Citation in the ACM Digital Library

SESSION: Rendering

Instanced Rendering of Parameterized 3D Glyphs with Adaptive Level-of-Detail using three.js

This paper contributes an optimized web-based rendering approach and implementation for parameterized meshes used as 3D glyphs for information visualization. The approach is based on geometry instancing by means of instanced rendering in three.js, and further allows for dynamic mesh selection according to a level-of-detail function and data-driven parameterization of the meshes. As an application example, we demonstrate a visualization prototype of a 2.5D information landscape that allows for exploration of source code modules of a software system. To this end, each data point is represented by a 3D glyph selected from a glyph atlas according to its type and level-of-detail. We benchmark the approach against a straightforward baseline implementation using regular three.js meshes by evaluating the overall run-time performance. For this, we use a real-world dataset and synthetic variants with up to 50,000 data points. Compared to the baseline implementation, the proposed approach achieves up to a 3000 % higher median FPS count on laptop and desktop-class hardware and allows for the interactive visualization of up to 1300 % larger datasets. These results suggest that leveraging instanced rendering for 3D glyphs with LOD increases the feasible dataset size for interactive, web-based visualization by an order of magnitude.
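The level-of-detail selection the abstract describes can be sketched in a few lines. The following is a hypothetical, simplified Python illustration, not the authors' three.js implementation: the function name `select_glyph`, the atlas layout, and the distance thresholds are our assumptions.

```python
def select_glyph(glyph_atlas, glyph_type, camera_distance,
                 thresholds=(10.0, 50.0)):
    """Pick a mesh variant for one data point.

    glyph_atlas maps a glyph type to a list of meshes ordered from
    most to least detailed; the LOD tier grows with camera distance.
    """
    # tier 0 below the first threshold, tier len(thresholds) beyond the last
    lod = sum(1 for t in thresholds if camera_distance > t)
    return glyph_atlas[glyph_type][lod]
```

In the actual approach, each (type, LOD) pair would presumably map to one instanced draw call, with per-instance attributes carrying the data-driven parameterization.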

Volumetric Video on the Web: a platform prototype and empirical study

Volumetric video is a promising technology on the rise due to the popularization of eXtended Reality (XR) applications. However, web support for it has only been lightly explored. Developing a platform to stream live volumetric video on the web is a complex challenge, but crucial for enabling future web-based applications. This paper describes and critically reviews a prototype in this direction. In addition, to understand the state of the art in web volumetric video rendering, the paper also presents a thorough empirical study of 720 experiments and more than 5.5 million frames logged using eight different devices under different conditions. Our analysis demonstrates that some devices are ready to support volumetric video streaming in a web browser at up to 15 fps and 300,000 points per frame over a WiFi network. However, the study also reveals that device performance is heterogeneous and sometimes surprising, as more expensive devices equipped with better hardware do not always deliver the best performance.

A Workload Prediction Model for 3D Textured Meshes in a WebGL Context

The vast majority of simplification algorithms are based roughly on the assumption that rendering time is related to the number of primitives, with the aim of reducing memory impact and rendering complexity. In this paper, we define more precisely the links between a 3D object's intrinsic characteristics and rendering time in order to provide a new tool for prediction and to guide these simplification methods. We conduct a large-scale experiment in a WebGL environment on multiple devices to measure the rendering time of a set of photo-reconstructed and textured 3D meshes. The results show the influence of these characteristics on rendering time and reveal that the number of vertices is not the most relevant one. We then trained a predictor capable of predicting the rendering performance of a 3D mesh. This predictor takes as input various characteristics of 3D objects as well as a set of device rendering performance features that we have introduced, and achieves a prediction accuracy of 1.16 ± 0.09 ms on average (19.70 ± 2.44 % relative error). We also provide an analysis of the most important characteristics for the prediction task.

Physically-based Path Tracer using WebGPU and OpenPBR

This work presents a web-based, open-source path tracer for rendering physically-based 3D scenes using WebGPU and the OpenPBR surface shading model. While rasterization has been the dominant real-time rendering technique on the web since WebGL’s introduction in 2011, it struggles with global illumination. This necessitates more complex techniques, often relying on pregenerated artifacts to attain the desired level of visual fidelity. Path tracing inherently addresses these limitations but at the cost of increased rendering time. Our work focuses on industrial applications where highly customizable products are common and real-time performance is not critical. We leverage WebGPU to implement path tracing on the web, integrating the OpenPBR standard for physically-based material representation. The result is a near real-time path tracer capable of rendering high-fidelity 3D scenes directly in web browsers, eliminating the need for pregenerated assets. Our implementation demonstrates the potential of WebGPU for advanced rendering techniques and opens new possibilities for web-based 3D visualization in industrial applications.

EGGS: Edge Guided Gaussian Splatting for Radiance Fields

Gaussian splatting methods are becoming popular. However, their loss function contains only the ℓ1 norm and the structural similarity between the rendered and input images, without considering the edges in these images. It is well known that the edges in an image provide important information. Therefore, in this paper, we propose an Edge Guided Gaussian Splatting (EGGS) method that leverages the edges in the input images. More specifically, we give the edge regions a higher weight than the flat regions. With such edge guidance, the resulting Gaussian particles focus more on the edges instead of the flat regions. Moreover, such edge guidance does not increase the computation cost during the training and rendering stages. The experiments confirm that this simple yet effective edge-weighted loss function improves results by about 1–2 dB on several datasets. By simply using the edge guidance, the proposed method can improve all Gaussian splatting methods in different scenarios, such as human head modeling, 3D building reconstruction, WebGL, etc.
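The general idea of an edge-weighted ℓ1 loss can be illustrated with a minimal pure-Python sketch (not the authors' code): a weight map is built from a finite-difference gradient of the target image and used to weight the per-pixel error, so edge pixels cost more than flat pixels. The weighting factor `lam` is an assumed hyperparameter.

```python
def edge_weight_map(target, lam=2.0):
    # weight = 1 in flat regions, larger where the image gradient is strong
    h, w = len(target), len(target[0])
    weights = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = target[y][x + 1] - target[y][x] if x + 1 < w else 0.0
            gy = target[y + 1][x] - target[y][x] if y + 1 < h else 0.0
            weights[y][x] = 1.0 + lam * (abs(gx) + abs(gy))
    return weights

def edge_weighted_l1(rendered, target, lam=2.0):
    # normalized weighted l1: identical errors cost more on edges
    weights = edge_weight_map(target, lam)
    num = den = 0.0
    for y in range(len(target)):
        for x in range(len(target[0])):
            num += weights[y][x] * abs(rendered[y][x] - target[y][x])
            den += weights[y][x]
    return num / den
```

Because the weights depend only on the fixed input images, they can be precomputed once, which is consistent with the abstract's claim that the guidance adds no training or rendering cost.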

SESSION: VR and Metaverse Applications

Lightweight Web3D Twinning Fire Evacuation Simulation of Metro Station

The emergence of digital twin technology has revolutionized the field of complex fire evacuation simulations in metro stations. However, the high computational demands and the diversity of evacuation scenarios remain significant challenges. This study harnesses the advanced visualization capabilities of Web3D technology, coupled with the real-time accuracy of digital twin technology, to develop an innovative lightweight digital twin algorithm. By refining smoke dispersion models and integrating smoke alarm data, the accuracy of our simulations has been significantly enhanced, aligning more closely with actual conditions. Moreover, this paper takes into account the psychological factors of evacuees and their familiarity with the environment, incorporating these aspects into an advanced crowd evacuation algorithm. Leveraging pedestrian detection, our simulations more accurately reflect real evacuation experiences. Utilizing Web3D technology, we achieve real-time, dynamic visualization of smoke and crowd behaviors, establishing an efficient, hybrid virtual-real framework for large-scale fire evacuation simulations. The methods advanced in this study not only improve the precision and efficiency of fire evacuation simulations but also strengthen the safety measures and emergency preparedness of metro systems.

Evaluation of a Virtual Experience System for Restaurants

In this study, we develop a restaurant simulation system using VR technology, and investigate the effectiveness of the simulated experience, as well as the impact of the degree of immersion. It is difficult to fully capture a restaurant’s atmosphere using existing gourmet guide websites, and failing to convey the dining experience adequately can result in missed opportunities to attract new customers. For this study, we constructed a system that allows users to simulate visiting a restaurant in a virtual space, and analyzed how such immersive and simulated experiences in a virtual, 3D space affect users’ opinions, memories, and understanding of dining establishments. Our results show that users found the proposed virtual restaurant experience system to be more useful than conventional photo-based systems from various perspectives, including understanding the concepts of restaurants, forming positive impressions, memory retention after one week, enjoyability of using the system, and increasing their intention to visit the restaurant. Our results also showed that intention to use the system for personal daily use, information recall about the restaurant after one week, and intention to visit were influenced by the degree of immersion. This research confirms the promising role of experience simulations, and provides a foundation for developing new forms of visitor and gourmet guides.

AOI for automotive industry - a quality assessment approach combining 2D and 3D sensors

Automated Optical Inspection (AOI) systems in industrial settings typically consist of a lighting module, an image capture module, an image processing module, and an anomaly detection module. This paper presents a prototype for quality inspection of cast-iron pieces in the automotive industry, supported by Artificial Intelligence (AI) and Computer Vision (CV) algorithms. Early defect detection allows defective pieces to be routed to the recycling system, contributing to the green transformation of the metallurgical industry. The workflow integrates 2D and 3D camera settings to enhance the defect detection process. In this study, we propose a trained model for segmenting the entire piece, and another model for detecting and classifying defects in 2D images. Then, to complete the detection process, we use the collected 3D data to confirm potential defects on surfaces and to extend the search for defects to the shape of the piece. One contribution is an efficient algorithm for aligning the Computer-Aided Design (CAD) model with the point cloud captured by the 3D camera. This algorithm facilitates the detection of geometric errors in areas where discrepancies between the two models exceed a certain threshold. Elements of Web3D technology are introduced in a Graphical User Interface (GUI) to visualize the 3D scanner capture data (point cloud image) and the results of the 3D model comparisons (CAD and point cloud). The paper discusses the implementation of a YOLOv5 model that shows superior performance in detecting surface defects, with an accuracy of 0.734, a mean Average Precision (mAP) at 0.50 of 0.61, an F1-score of 0.626, and a recall of 0.546. These metrics outperform other object detection models such as SSD MobilenetV2 [Liu et al. 2016; Sandler et al. 2019], Faster R-CNN [Ren et al. 2016], and EfficientDet [Tan et al. 2020] when tested on the same dataset.

VR, AR, gamification and AI towards the next generation of systems supporting cultural heritage: addressing challenges of a museum context

This paper explores the development and integration of a system combining Augmented Reality (AR), Virtual Reality (VR), and gamification within a museum setting to enhance the presentation and interaction with cultural heritage. The technological framework employs AR for dynamic artifact interaction and in situ navigation, while VR capabilities facilitate virtual tours, broadening access for individuals with disabilities or those from distant geographies or socioeconomically disadvantaged backgrounds. Gamification transforms educational content into interactive experiences, fostering deeper engagement and learning. Moreover, aligning with the mission of museum institutions for cultural heritage preservation, a module for digital conservation and reconstruction was developed resorting to photogrammetry-based approaches. This module aims to create a virtual catalog accessible to both experts and the general public. Artificial Intelligence (AI) tools automate tasks such as generating thematic quizzes for gamification and cataloging scanned artifacts. The system aims to improve the interpretative and educational potential of museum exhibits, modernizing visitor engagement while preserving the integrity of physical artifacts and spaces. Its continuous evolution aims to bridge traditional forms of cultural preservation and promotion with contemporary digital interaction techniques, leveraged from cost-effective publicly accessible edge technologies.

Holonomy: A Virtual Reality Exploration of Hyperbolic Geometry

Holonomy is a virtual environment based on the mathematical concept of hyperbolic geometry. Unlike other environments, Holonomy allows users to seamlessly explore an infinite hyperbolic space by physically walking. They use their body as the controller, eliminating the need for teleportation or other artificial VR locomotion methods. This paper discusses the development of Holonomy, highlighting the technical challenges faced and overcome during its creation, including rendering complex hyperbolic environments, populating the space with objects, and implementing algorithms for finding shortest paths in the underlying non-Euclidean geometry. Furthermore, we present a proof-of-concept implementation in the form of a VR navigation game and some preliminary learning outcomes from this implementation.

Towards the Industrial Metaverse: A Game-Based VR Application for Fire Drill and Evacuation Training for Ships and Shipbuilding

This paper details the creation of a novel Virtual Reality-based application for the Industrial Metaverse aimed at shipboard fire emergency training for fire drill and evacuation, aligned with the Safety of Life at Sea (SOLAS) convention requirements. Specifically, the application includes gamified scenarios with different levels (e.g., varying fire intensities in engine rooms and galleys). The paper comprehensively details the VR development while providing a holistic overview and practical guidelines. Thus, it can guide practitioners, developers and future researchers in shaping the next generation of Industrial Metaverse applications for the shipbuilding industry. Moreover, the paper includes the results of a preliminary user evaluation aimed at quantifying user decision-making and risk assessment skills. The presented experimental results provide insights into user performance and allow us to point out different future challenges.

SESSION: Contributions of VR to Health and Rehabilitation

Towards remote rehabilitation with gaming, digital monitor and computer vision technologies

Gamification can be a powerful tool in physical rehabilitation, offering engaging experiences that motivate patients and improve recovery outcomes. By integrating computer graphics and computer vision, we are approaching a scenario where a virtual instructor guides personalized rehabilitation routines. State-of-the-art Computer Vision (CV) technologies assess exercise performance, ensuring patients execute movements correctly. This innovation enables clinic-grade rehabilitation programs to be conducted at home, with remote monitoring by healthcare professionals. The inherent nature of exergames makes the system user-friendly and enjoyable. We are actively developing a system that incorporates these features to empower remote rehabilitation. This paper delves into the ongoing development of our exergame platform. It combines elements and functionalities commonly found in serious games to enhance the remote physical rehabilitation process. The paper outlines the methodology behind creating the exergame, capturing movements, and integrating the system as a whole, with the ultimate goal of establishing an effective tool for remote rehabilitation assistance.

NeuroVerse: Immersive exploration of 3D ultrastructural brain reconstructions for education and collaborative analysis

We introduce NeuroVerse, a framework designed to support the immersive exploration of 3D nanometric-scale reconstructions of structural and ultrastructural neural or glial cellular processes of the central nervous system. Utilizing image stacks acquired through volume electron microscopy, NeuroVerse reconstructs detailed 3D mesh models and integrates absorption signals, enabling deployment within a Metaverse environment. This framework facilitates innovative educational and collaborative analysis experiences, particularly in neuroanatomy and neuroscience. We present a comprehensive methodology, including a pipeline for 3D model creation, segmentation, mesh reconstruction, and heatmap computation, optimized for the Spatial.io ecosystem. Our contributions include the development of a virtual anatomy lab for immersive neuroanatomy education and collaborative sessions focusing on morphology spatial correlation and neuroenergetic absorption models. Preliminary results indicate significant potential for enhancing neuroscience education, improving remote collaboration among scientists, and democratizing access to advanced neuroscientific data and tools.

Exploring Virtual Reality in Exposure Therapy for Sensory Food Aversion

This paper presents research on how to use Virtual Reality with gamified exercises in a therapeutic context with children, focusing on the particular case of warning sensations triggered by sensory properties of food (sensory food aversion). To achieve this goal, we developed a tool featuring several food exposure challenges for patients to use. In the gamified system, the child explores a virtual environment while facing the food they have a problem with. These environments present tasks that resemble typical interactions performed in the real world to develop accommodation. The therapist also has an external interface to control the system from outside. In addition, the system sends data collected during the session for the therapist to analyze. We researched how to keep a child engaged in therapeutic tasks and how a child perceives virtual interaction interfaces. The results suggest our system kept the users engaged. Moreover, the data show a tendency for the users’ results (ease of use, presence, and performance) to remain the same whether using controllers or hand tracking. The preliminary results are encouraging and allow us to apply the current system to a wider audience.

Blood flow visualization from 4D flow Magnetic Resonance Imaging using the ISO X3D standard

4D flow magnetic resonance imaging (MRI) enables the evaluation of blood flow patterns in cardiovascular diseases by resolving the blood flow velocity relative to the three anatomical dimensions and the fourth dimension of time. 4D flow MRI processing solutions highlight the need for high-quality visualisations to enhance the interpretation of the data. In this context, we present an open-source Python-based pipeline for loading and processing 4D flow MRI data in the context of pulmonary hypertension, followed by the publication of interactive and comparative 4D visualizations on the Web using the ISO X3D standard. This opens up the opportunity for larger clinical studies and medical research using this complex imaging modality and highlights the versatility and strength of using the X3D standard for multidimensional medical flow data analysis. Interactive visualization and exploration of both static and dynamic behavior of blood flow helps physicians and researchers to better understand flow patterns. A comparison with similar works using X3D in medical flow visualization (e.g., brain flow) also shows the potential for extending the use of X3D capabilities specifically for 4D-flow-related applications (i.e., blood, water currents, air).

Integrating Psychological Principles and AI Technology into UX Design for Developing a Metaverse that Promotes Healthy Disconnection for Young Kids

The overuse of technology and extended time in 3D virtual worlds by young children and teenagers have raised concerns among parents and teachers about potential negative impacts on their psychological well-being and real-world engagement. This paper reviews earlier studies from educational and psychological perspectives and proposes using psychological principles as a crucial guide for user experience (UX) design in 3D virtual worlds, or the future metaverse. The goal is to create opportunities for healthy disconnection and encourage children to return to real-life activities. The paper also explores leveraging psychological insights and artificial intelligence (AI) technology in designing metaverse experiences. Such an approach allows UX designers and developers of 3D virtual environments to create functionalities that support disconnecting from the virtual realm. This helps mitigate the potential hazards of excessive virtual involvement and encourages a balanced integration of digital and offline pursuits. Such a virtual social environment could be seen as a responsible metaverse.

SESSION: Methodologies, Techniques and Tools

Conceptualizing Interoperable 3D Geospatial Data Visualization with X3D and OGC 3D Tiles

We describe our concept and implementation outline for bridging two popular but disconnected geospatial ecosystems, the Web3D Consortium's X3D and OGC's 3D Tiles, with the goal of achieving synergies for both while improving workflows. There are potentially many benefits to integrating these two complementary geospatial open standards, such as leveraging OGC toolchains for X3D, providing access to rich X3D interactivity for OGC 3D Tiles, or building the foundations for streaming massive 3D geospatial datasets within X3D. We propose to test this idea by implementing a large subset of OGC 3D Tiles 1.1 functionality for X3DOM, an open-source X3D browser, and to show how existing and well-tested X3D features could aid such an implementation. As a result, we sketch a strategy to include OGC 3D Tilesets declaratively in a fully interactive X3D world, with accurate geospatial registration and integration with the X3D Geospatial component.

Prompt Engineering for X3D Object Creation with LLMs

Large Language Models (LLMs) are a new class of knowledge embodied in a computer and trained on massive amounts of human text, image, and video examples. In response to a user prompt, these LLMs can generate generally coherent responses in several kinds of media and languages. Can LLMs write X3D code? In this paper we explore the ability of several leading LLMs to generate valid and sensible code for interactive X3D scenes. We compare the prompt results from three different LLMs to examine the quality of the generated X3D. We set up an experimental framework that uses a within-subjects repeated-measures design to create X3D from text prompts. We vary our prompt strategies and give the LLMs increasingly challenging and increasingly detailed scene requests. We assess the quality of the resulting X3D scenes, including geometry, appearances, animations, and interactions. Our results provide a comparison of different prompt strategies and their outcomes. Such results provide early probes into the limited epistemology and fluency of contemporary LLMs in composing multi-part, animatable 3D objects.

AI-Driven Creativity Unleashed: Exploring the Synergistic Effects of UGC and AIGC in Metaverse Gaming from a User Perspective

The metaverse is a shared, immersive 3D world where individuals engage in virtual reality and exchange their interests, perspectives, and resources. User-Generated Content (UGC) serves as the core driving force in constructing the metaverse. This article concentrates on the synergistic effects of UGC and Artificial Intelligence-Generated Content (AIGC) within metaverse games, exploring how AI technology can unleash user creativity. Through interviews with 80 Chinese metaverse gamers aged 14-24, this study identifies user expectations for metaverse platforms to offer interactive, multi-user collaborative, multi-sensory, and emotional communication, as well as support for media integration, to facilitate the collaborative creation of UGC and AIGC. Based on the interview findings, this article proposes three mechanisms: a multi-user collaborative creation mechanism, an intelligent interactive scene generation mechanism, and a highly controllable AI generation mechanism, aiming to provide guidance and suggestions for the future development of metaverse platforms.

WebGL-based Image Processing through JavaScript Injection

Can we modify existing web-based computer graphics content through JavaScript injection? We study how to hijack the WebGL context of any external website to perform GPU-accelerated image processing and scene modification. This allows client-side modification of 2D and 3D content without access to the web server. We demonstrate how JavaScript can overload an existing WebGL context and present examples such as color replacement, edge detection, image filtering, and complete visual transformations of external websites, as well as vertex and geometry processing and manipulation. We discuss the potential of such an approach and present open-source software for real-time processing using a bookmarklet implementation.

SESSION: Web-based Techniques and Applications

ZLS: An Efficient Lossless Compression for Depth-Video Streaming

This paper introduces an efficient compression scheme—Z-lossless (ZLS)—for video applications with depth streams. ZLS utilizes a novel histogram compression module together with a customized predictor-corrector model and an entropy coder based on adaptive Golomb codes. It achieves a lossless compression ratio of 9:1 for Kinect-like depth data with ultra-lightweight computations, which is ideal for streaming depth data in Metaverse video applications over narrow communication channels and low-bandwidth portable devices. To prove the efficiency and feasibility of ZLS, we present two practical, portable video applications using our implementation of ZLS on embedded platforms (i.e., a handheld Kinect camcorder and a Kinect-equipped hexacopter). ZLS runs at 42 Hz on a single ARM Cortex-A9 core clocked at 1.7 GHz, and outperforms its competitors (such as state-of-the-art lossless general-purpose, image, video and depth-stream compressors) in terms of compression ratio and computational efficiency, as verified by extensive empirical results.
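Golomb codes, the entropy-coding family named in the abstract, are straightforward to demonstrate. Below is a generic textbook Golomb–Rice coder for signed prediction residuals; it is a sketch of the general technique, not the ZLS implementation, and the zigzag sign mapping and fixed parameter k are our assumptions (ZLS adapts its parameters).

```python
def rice_encode(value, k):
    # zigzag-map the signed residual to an unsigned integer
    u = (value << 1) if value >= 0 else ((-value << 1) - 1)
    # unary-coded quotient followed by a k-bit binary remainder
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def rice_decode(bits, k):
    # returns (value, number of bits consumed)
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    u = (q << k) | r
    value = -((u + 1) >> 1) if u & 1 else u >> 1
    return value, q + 1 + k
```

Small residuals from a good predictor-corrector yield short codes, which is why such coders pair well with depth streams, whose values change slowly across most of the frame.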

SpectralSplatsViewer: An Interactive Web-Based Tool for Visualizing Cross-Spectral Gaussian Splats

Spectral rendering accurately simulates light-material interactions by considering the entire light spectrum, unlike traditional rendering methods that use limited color channels like RGB. This technique is particularly valuable in industry for assessing visual quality before production. Moreover, spectral imaging finds extensive applications in fields like agriculture for plant disease detection, cultural heritage preservation, forensic science, environmental monitoring and medical science, among others. Advances in generating novel views from images have been achieved through methods like NeRF and Gaussian splatting, with the latter outperforming others in terms of quality. This paper introduces a web-based viewer built on the Viser framework for visualizing and comparing cross-spectral Gaussian splats from different views and during various training stages. The viewer supports real-time collaboration and comprehensive visual comparison, enhancing user experience in spectral data analysis. We conduct a user study and performance analysis to confirm its effectiveness and usability for different application scenarios, while also proposing potential enhancements for increased functionality.

Progressively Streamed Real-time Preview of Captured Optical Surface Materials on the Web

Real-time delivery of captured optical surface materials has several uses, ranging from commercial showcasing of products, or product configurators, on the web to the dissemination of and research on cultural heritage by providing easy access to collections. In this work we describe two contributions. First, the development of a real-time 3D viewer for captured optical surface materials mapped to 3D geometry on the web, demonstrated using measured Approximate Bidirectional Texturing Function (ABTF) surface materials as an example. Second, we solve the general problem of the initial wait time for the user when viewing 3D renderings coupled with large measured material data by progressively enhancing the material details presented to the user from the start, following one of several proposed methods to define the order in which the individual surface fragments are sent. This reordering prevents the retransmission of fragments and continuously enhances the perceived quality over time, making each step noticeably better without producing visible pop-in effects or other artifacts. The benefit is the ability to visualize captured optical surface materials in their full detail without the need for lossy compression, while retaining interactive loading times and thus improving user experience.
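Choosing a transmission order for surface fragments can be illustrated with one plausible strategy. The sketch below is hypothetical and not necessarily among the paper's proposed orderings: it streams the tiles of a fragment grid centre-outward, on the assumption that detail near the centre of the material is noticed first.

```python
def center_out_order(cols, rows):
    # enumerate all tile coordinates of the fragment grid
    cx, cy = (cols - 1) / 2.0, (rows - 1) / 2.0
    tiles = [(x, y) for y in range(rows) for x in range(cols)]
    # send tiles closest to the centre first; each tile appears exactly
    # once, so no fragment is ever retransmitted
    return sorted(tiles, key=lambda t: (t[0] - cx) ** 2 + (t[1] - cy) ** 2)
```

Any fixed permutation of the tiles shares the key property the abstract relies on: every fragment is sent once, and quality only increases as fragments arrive.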

DeepMaterialInsights: A Web-based Framework Harnessing Deep Learning for Estimation, Visualization, and Export of Material Assets from Images

Accurately replicating the appearance of real-world materials in computer graphics is a complex task due to the intricate interactions between light, reflectance, and geometry. In this paper we address the challenges of material representation, acquisition, and editing by leveraging deep learning algorithms within our framework to enable the visualization and generation of material assets from single or multi-view images, allowing for the estimation of materials from real-world objects. Additionally, a material asset exporter enables the export of materials in widely used formats, facilitating easy editing with common content-creation tools. The proposed framework enables designers to collaborate effectively and seamlessly integrate deep learning-based material estimation models into their design pipelines using traditional content creation tools. An analysis of the performance and memory usage of material assets at various texture resolutions shows that our framework can be used plausibly according to the needs of the end user.

SESSION: Posters

MYRTE: A Lightweight, Flexible Platform to Create 360-Degree Cultural Heritage Virtual Tours for Non-IT Experts

This poster presents MYRTE, an acronym for MY viRtual tour for archiTecture and cultural heritagE, an initiative aimed at developing a DIY virtual tour application based on 360-degree panoramas. It is designed to be used by non-experts for both viewing existing tours and creating/editing new ones, serving purposes of both research and education.

Small Scale Insect Photogrammetry: A Deep Dive into Workflows

The Virginia Tech University Libraries 3D Lab Insect Collection digitized 280 specimens from the Virginia Tech Insect Collection, the largest and oldest in Virginia, through the CLIR-funded Entomo-3D project. This poster proposal aligns with the Web3D conference theme Content and Publishing, particularly on 3D content creation and modeling, by showcasing the comprehensive workflow used to transform physical insect specimens into high-quality 3D models, highlighting each stage from image capture to digital deployment. The presentation will emphasize the intricate processes involved and the improvements planned for future projects. This comparison aims to demonstrate the potential for enhancing model accuracy and workflow efficiency, illustrating the journey from lab specimens to versatile, accessible digital models.

Portable LiDAR Scanners: Precision Mapping at Your Fingertips

Handheld LiDAR devices are revolutionizing 3D data capture, offering unprecedented accessibility and ease of use even for inexperienced users. These compact systems facilitate robust virtual reconstructions by extracting detailed geometric information from real-world environments. Advanced positioning systems in these devices generate highly accurate georeferenced 3D point clouds, enhancing the precision of scanned scenes. Leading manufacturers like FARO Technologies, Leica Geosystems, and Trimble offer diverse models, including the FARO Focus, Leica BLK360, and Trimble X7. These devices are invaluable in fields such as cultural heritage conservation, where they document and preserve historical sites; topographic mapping, providing accurate surveys in challenging terrains; and industrial inspection, ensuring thorough examinations of infrastructure and machinery. Additionally, the visualization of these 3D models on web platforms significantly enhances their accessibility and interaction. Web-based visualization tools allow users to explore, manipulate, and analyze 3D data from any location with an internet connection, democratizing access to detailed spatial information and enabling a broader range of users, from researchers to educators, to engage with complex datasets without the need for specialized software. Interactive features on these platforms, such as zooming, rotating, and measuring, provide an intuitive interface for detailed examination and collaboration. Consequently, the integration of handheld LiDAR technology with web visualization platforms opens new avenues for innovation and collaboration across various disciplines, further revolutionizing traditional methods and expanding the potential applications of 3D data capture.

SESSION: Industrial Use Cases

Convergence of IoT, 3D & XR: Creating the Industrial Metaverse Through Live Factory Visualisation

Industrial companies are challenged by the task of managing and leveraging the tremendous amount of data generated throughout the engineering, production, and operational phases of their manufacturing plants. As technology progresses, the emergence of sophisticated sensors offers unprecedented opportunities to refine and optimize processes, from the drafting table to the factory floor. This data, however, is often not utilized along the entire value chain to predict and identify errors or support service engineers. Adding high-quality 3D data into the scene leads to even more challenges in supporting efficient processes. We present a scalable solution that integrates 3D spatial computing with advanced IoT data processing to enhance data utilization and process optimization across all stages of manufacturing.

Emerging Platforms in 3D for Art and Cultural Heritage: Applications and Advancements in 3D Imaging for Conservation, Research and Engagement

This work introduces Emerging Platforms in 3D for Art and Cultural Heritage, an interdisciplinary approach to advancing ultra-high-resolution 3D imaging for art conservation, research and engagement. In presenting a more intimate and detailed digital display of art objects than what can currently be offered in-person, this approach unlocks a deeper level of public access to art and cultural heritage. Furthermore, this work illustrates the promise and value of advancing 3D imaging in Cultural Heritage and the Arts.

Interactive 3D Geospatial Visualization of the Port of Gulfport using X3D

Versar Global Solutions repurposed photogrammetric and LiDAR data collected at the "Port of Gulfport" location for the US Department of Defense (DOD) and integrated it with existing and publicly available data to develop an interactive 3D representation of the site.

Meeting the Apparel/Footwear Industries' challenges with interoperability, animation chaining and Blender imports