As construction projects increase in complexity, the need for a well-educated workforce in the Architecture, Engineering, and Construction (AEC) industry grows. Furthermore, concerns about retention in engineering programs have highlighted the need to improve students’ educational experiences. Virtual Reality (VR) has emerged as a promising solution, offering immersive and interactive learning experiences that can enhance students’ understanding and retention of complex engineering concepts. This paper explores the effectiveness of a VR system for conducting building inspections and documenting clashes within a building information model as part of an undergraduate civil engineering course. The study compared this VR system to a traditional desktop-based building inspection method, evaluating effects on cognitive load, user experience, and usability. It also examined the relationship between students’ spatial abilities and cognitive load in both systems. Results indicated that VR did not produce higher levels of cognitive load, and users reported lower estimations of perceived effort when using VR. Furthermore, while the level of usability was equivalent across both activities, the VR activity provided a more positive user experience, as measured by the User Experience Questionnaire. These findings suggest that VR offers advantages in perceived effort and user experience during collaborative virtual building inspection activities, demonstrating how the technology can enhance the experience of civil engineering students. Consequently, further adoption and investigation of VR should be a priority in the refinement of educational teaching methods.
We present a novel framework, dubbed PanoVerse, for the automatic creation and presentation of immersive stereoscopic environments from a single indoor panoramic image. Once per 360° shot, a novel data-driven architecture generates a fixed set of panoramic stereo pairs distributed around the current central viewpoint. Once per frame, directly on the HMD, we rapidly fuse the precomputed views to seamlessly cover the exploration workspace. To realize this system, we introduce several novel techniques that combine and extend state-of-the-art data-driven techniques. In particular, we present a gated architecture for panoramic monocular depth estimation and, starting from the re-projection of visible pixels based on predicted depth, we exploit the same gated architecture for inpainting the occluded and disoccluded areas, introducing a mixed GAN with self-supervised loss to evaluate the stereoscopic consistency of the generated images. At interactive rates, we interpolate precomputed panoramas to produce photorealistic stereoscopic views in a lightweight WebXR viewer. The system works on a variety of available VR headsets and can serve as a base component for Metaverse applications. We demonstrate our technology on several indoor scenes from publicly available data.
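The per-frame fusion step lends itself to a simple illustration. The Python sketch below shows one plausible interpolation strategy, blending the two precomputed panoramas nearest to the current head position with inverse-distance weights; the function, names, and blending rule are assumptions for illustration, not the paper’s actual HMD-side algorithm.

```python
import numpy as np

def blend_panoramas(viewpoint, centers, panoramas):
    """Blend the two precomputed panoramas nearest to the current viewpoint.

    viewpoint: (3,) head position; centers: (N, 3) precomputed central
    viewpoints; panoramas: N equirectangular images of shape (H, W, 3).
    """
    d = np.linalg.norm(centers - viewpoint, axis=1)
    i, j = np.argsort(d)[:2]              # two nearest precomputed views
    w = d[j] / (d[i] + d[j] + 1e-8)       # inverse-distance weight for view i
    return w * panoramas[i] + (1.0 - w) * panoramas[j]
```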
User experience is one of the most significant factors that impact users’ opinions about products. This also applies to the user interfaces of household appliances. However, realistic interactive presentation of user interfaces of such appliances in a virtual form remotely over the Internet is challenging. In this paper, we present and compare two different methods of interactive presentation of an oven touchscreen control panel. The panel was precisely recreated as an interactive 3D model, reflecting both its appearance and functionality. The system allows users to interact with the control panel using a mobile device touchscreen or hand movements in a virtual reality environment. We describe an experiment aimed at evaluating and comparing different forms of interaction to determine the benefits and barriers to the presentation of complex products in a virtual form for marketing purposes.
To address the need for an interoperable, cross-platform exchange format for user representations (avatars) in immersive realities, ISO/IEC JTC 1/SC29/WG03 MPEG Systems has standardized a Scene Description framework in ISO/IEC 23090-14 [ISO/IEC 2023]. It serves as a baseline format for user representations, enriching the interactive experience between 3D objects in an immersive scene. This work presents the MPEG Original Reference Geometric Avatar Neutral (Morgan), a humanoid avatar specified as informative content in the MPEG-I Scene Description (MPEG-I SD) standardization group. Morgan is a generic avatar representation that facilitates interactivity and manipulation in immersive realities and is accompanied by a complete body mesh with realistic appearance, a hierarchical skeletal representation, blend shapes, eye globes, a jaw with teeth, and a semantic representation of human body parts.
Social VR (SVR) systems are VR systems with a common subset of features facilitating unstructured social interaction. In the real world, social situations have many purposes, each with a different set of requirements and roles its participants take - creator, moderator, performer, visitor, etc. Yet, common SVR systems typically offer only a single client to users. Even if there are versions for different platforms, there is a one-size-fits-all approach to the user experience. Consequently, users need to employ workarounds or build their own functionality to support specific roles, where this is possible at all. We argue that platforms need to develop more open frameworks that support different processes and user interactions. One way to do this is through appropriate web standards and an open messaging system that allow distributed clients to leverage the strongest features of heterogeneous computing platforms. Supporting asymmetrical capabilities greatly increases the scope of supported virtual social interactions and potential use cases of SVR. We take a qualitative experimental approach to exploring cross-platform support in this way, from a designer’s perspective. We use the open-source SDK Ubiq and create a library that allows building Ubiq Peers using web standards, and thus clients that can operate solely in a web browser or certain JavaScript environments. We validate our approach with six proof-of-concept demonstrators that would be difficult or impossible to achieve in most other SVR systems, and report on what we encountered for the benefit of other SVR designers.
Situational awareness plays a critical role in daily life, enabling individuals to comprehend their surroundings, make informed decisions, and navigate safely. However, individuals with low vision or visual impairments face difficulties in perceiving their real or virtual environment. To address this challenge, we propose a 3D computer vision-based accessibility solution, empowered by object detection and text-to-speech technology. Our application describes the visual content of a Web3D scene from the user’s perspective through auditory channels, thereby enhancing situational awareness for individuals with visual impairments in virtual and physical environments. We conducted a user study with 44 participants to compare a set of algorithms for specific tasks, such as Search or Summarize, and assessed the effectiveness of our captioning algorithms based on user ratings of naturalness, correctness, and satisfaction. Our results indicate positive subjective outcomes in accessibility for both sighted and visually impaired subjects and also reveal significant effects of both the task and the captioning algorithm.
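As a rough illustration of the describe-through-audio idea, the sketch below ranks detected objects by angular proximity to the gaze direction, composes a short caption, and speaks it with an off-the-shelf TTS engine. pyttsx3 is a stand-in, and the function and data layout are hypothetical, not the paper’s pipeline.

```python
import numpy as np
import pyttsx3   # off-the-shelf text-to-speech, standing in for the paper's TTS

def describe_scene(view_dir, objects):
    """Caption the detected objects closest to the user's gaze direction.

    objects: list of (label, unit_direction) pairs produced by the object
    detector; names and data layout are illustrative.
    """
    scored = sorted(objects, key=lambda o: -np.dot(view_dir, o[1]))
    labels = [label for label, _ in scored[:3]]
    return "In front of you: " + ", ".join(labels) + "."

engine = pyttsx3.init()
caption = describe_scene(np.array([0.0, 0.0, -1.0]),
                         [("a chair", np.array([0.1, 0.0, -0.995])),
                          ("a door", np.array([0.71, 0.0, -0.71]))])
engine.say(caption)
engine.runAndWait()
```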
The glTF 2.0 graphics format allows for the API-neutral representation of 3D scenes consisting of one or multiple textured meshes. It is currently adopted as one of two file formats for 3D asset interoperability by the Metaverse Standards Forum. glTF 2.0 has, however, not been designed to be streamable over the network; instead, glTF 2.0 files typically first need to be downloaded fully before their contents can be rendered locally. This can lead to high start-up delays, which in turn can lead to user frustration. This paper therefore contributes a methodology and associated Web-based client, implemented in JavaScript on top of the three.js rendering engine, that allows glTF 2.0 files to be streamed from a content server to the consuming client down to the level of individual glTF bufferViews. This in turn facilitates the progressive client-side rendering of 3D scenes, meaning that scene rendering can commence while the glTF file is still being downloaded. The proposed methodology is conceptually compliant with the HTTP Adaptive Streaming (HAS) paradigm that dominates the contemporary market of over-the-top video streaming. Experimental results show that our methodology is most beneficial when network throughput is limited (e.g., 20 Mbps). In all, our work represents an important step towards making 3D content more quickly accessible to consuming (Web) clients, akin to the way platforms like YouTube have brought universal accessibility to video content.
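The core streaming idea can be sketched independently of the JavaScript client: fetch the small JSON part of the asset first, then pull individual bufferViews with HTTP range requests so rendering can start before the whole binary buffer has arrived. The Python sketch below illustrates this under an assumed file layout and a hypothetical content server; the paper’s actual client is implemented in JavaScript on three.js.

```python
import requests

BASE = "https://example.org/assets"      # hypothetical content server

# Fetch the (small) JSON part of the asset first, so scene structure,
# materials and node hierarchy are known before any geometry arrives.
gltf = requests.get(f"{BASE}/scene.gltf").json()
buffer_uri = f"{BASE}/{gltf['buffers'][0]['uri']}"   # assumes one .bin buffer

# Then pull individual bufferViews with HTTP range requests, handing each
# chunk to the renderer as it arrives so rendering can commence early.
for view in sorted(gltf["bufferViews"], key=lambda v: v.get("byteOffset", 0)):
    start = view.get("byteOffset", 0)
    end = start + view["byteLength"] - 1
    chunk = requests.get(buffer_uri,
                         headers={"Range": f"bytes={start}-{end}"}).content
    # hand off `chunk` to the client-side renderer (three.js in the paper)
```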
Web-based visualization of large-scale 3D city models and urban digital twins is essential for various applications, such as urban planning, infrastructure management, environmental analysis, and public participation. The widely used OGC CityGML standard for storing semantically rich large-scale 3D city models is not well suited for web-based visualization due to its GML encoding. The conversion of CityGML-based 3D city models to web-friendly formats like 3D Tiles and I3S often leads to data loss, because such streaming formats cannot preserve the hierarchical organization and semantic relationships inherent in the CityGML data model. Additionally, inconsistencies may arise during maintenance or updates, primarily because these formats are used in a secondary role rather than as a direct encoding. Therefore, in this paper, we present the concept of using I3S as a direct encoding for CityGML 3.0. The recently released CityGML 3.0 version distinctly separates its conceptual data model from the data encoding. This significant change enables the provision of additional encoding specifications beyond GML, a capability that was not possible in the previous version, CityGML 2.0. This opens a new paradigm in which CityGML payloads can be optimized using existing 3D encodings such as OGC’s Indexed 3D Scene Layers (I3S) standard, designed to support 3D streaming and distribution of large volumes of 3D content through a combination of Level of Detail (LoD) and selection criteria. The advantage of a CityGML I3S encoding is that it combines a semantically rich 3D city model with a bounding volume hierarchy (BVH) optimized for streaming large-scale 3D city models at multiple resolutions: a new capability that enables the consumption of CityGML content, without data conversion, directly in web-enabled systems. To demonstrate this concept, we evaluated the suitability of encoding a CityGML 3.0 indoor model of one of the Stuttgart University of Applied Sciences (HFT Stuttgart) campus buildings using the I3S Building Scene Layer (BSL). I3S BSL is a new enhancement provided in the OGC I3S version 1.3 community standard, enabling the representation of building elements such as roofs, walls, floors, doors, and windows in a hierarchical structure of layers and sub-layers, combined with BVH optimization. The results show that the I3S BSL format successfully preserves the geometry, hierarchical organization, and semantic relationships among CityGML building classes, surpassing the capabilities of generic CityGML conversion-based implementations and therefore providing a promising foundation for an enhancement of the CityGML 3.0 building model using I3S encoding. This concept can be extended in the future to other thematic modules of CityGML 3.0, towards a full I3S encoding of CityGML 3.0.
The introduction of the Digital Twin (DT) has sparked a great deal of interest in the virtual replication of physical assets and processes. However, to fully realize the potential of DTs, companies require robust and scalable front-end solutions. This study proposes harnessing micro-frontend technologies to surmount the limitations of monolithic front-end frameworks, thereby crafting effective presentation layers tailored for industrial companies. By decomposing complex and distributed systems into modular web-based DTs, the proposed framework enables better scalability, synergy, and efficient application development. It also tackles the intricacies introduced by complex multi-vendor elements within DTs and the orchestration of services and virtual environments. Our research focuses on developing an architecture that facilitates seamless connectivity between multiple blocks of a DT, as well as interactivity and immersive experiences. Our study offers insights into our journey of implementing this framework for various industrial use cases, such as training, monitoring, and control, highlighting the benefits, drawbacks, and challenges. Ultimately, this research aims to accelerate the creation of DTs, improve maintainability, and increase efficiency and productivity in industrial environments.
In the current wave of digital industry, Digital Twins play a major role in the virtualization of manufacturing processes. Digital Twins are virtual entities that mirror the behavior of physical entities. They are used to predict or monitor the state of a product or a process. In recent years, virtualization technologies have benefited from cloud computing and web-based services to enhance the capabilities of industrial Digital Twins. These Web Digital Twins enable a greater degree of distribution and collaboration between the users of the Digital Twin. In this article we present a use case of web-based Digital Twins for the design and monitoring of Gerotor pumps. The novelties of the present article are: i) it highlights the advantages that result from applying the Digital Twin methodology to the case of Gerotor pumps, and ii) it describes the implementation of a web-based Digital Twin tool for a Gerotor pump. Our tool allows the user to design a Gerotor pump using a parametric interface, visualize the 3D model of the pump, visualize its expected performance using fast simulation routines, and obtain the optimum design for a set of desired performance parameters. This article focuses on the technological overview of the Digital Twin tool and its web-based architecture, as the simulation and optimization details have been addressed in separate publications.
Digital preservation of Cultural Heritage (CH) sites is crucial to protect them against damage from natural disasters or human activities. Creating 3D models of CH sites has become a popular method of digital preservation thanks to advancements in computer vision and photogrammetry. However, the process is time-consuming, expensive, and typically requires specialized equipment and expertise, posing challenges in resource-limited developing countries. Additionally, the lack of an open repository for 3D models hinders research and public engagement with their heritage. To address these issues, we propose Tirtha, a web platform for crowdsourcing images of CH sites and creating their 3D models. Tirtha utilizes state-of-the-art Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques. It is modular, extensible and cost-effective, allowing for the incorporation of new techniques as photogrammetry advances. Tirtha is accessible through a web interface at https://tirtha.niser.ac.in and can be deployed on-premise or in a cloud environment. In our case studies, we demonstrate the pipeline’s effectiveness by creating 3D models of temples in Odisha, India, using crowdsourced images. These models are available for viewing, interaction, and download on the Tirtha website. Our work aims to provide a dataset of crowdsourced images and 3D reconstructions for research in computer vision, heritage conservation, and related domains. Overall, Tirtha is a step towards democratizing digital preservation, primarily in resource-limited developing countries.
We present an efficient framework for reconstructing real-world terrain models and achieving high-fidelity real-time rendering. In the terrain model reconstruction phase, we utilize Digital Elevation Model (DEM) data or LiDAR point clouds as input. For LiDAR point clouds, we introduce a normal estimation algorithm that incorporates LiDAR scan information. This enables us to generate an oriented point cloud, which serves as the basis for 3D reconstruction. The resulting mesh models preserve landform features such as cliffs and overhangs, leading to highly realistic terrain models. To facilitate efficient rendering, we construct a Hierarchical Level of Detail (HLOD) representation based on the generated terrain models. The HLOD structure establishes a hierarchical relationship across successive LODs. During the rendering stage, we dynamically traverse the hierarchy and select appropriate-resolution tiles based on the viewpoint’s position. This approach ensures smooth LOD transitions and eliminates issues such as LOD popping and mesh cracks through the use of vertex interpolation. We have implemented and tested our framework on a modest desktop machine without a discrete GPU. The HLOD construction implementation achieves a processing speed of 1 million triangles per second per core. During real-time rendering, our approach maintains an interactive frame rate without introducing artifacts. Our pipeline effectively leverages the capabilities of modest graphics hardware, ensuring efficient utilization of available resources.
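The view-dependent tile selection at the heart of such HLOD renderers is typically a recursive descent driven by projected geometric error. The Python sketch below illustrates the general pattern; the field names (`geometric_error`, `bounds`, `children`) are illustrative, and the paper’s traversal may differ in details such as the vertex-interpolated LOD transitions.

```python
def select_tiles(node, camera_pos, threshold, out):
    """Collect tiles whose projected geometric error is below `threshold`.

    A node is rendered at its own resolution if its geometric error, divided
    by the distance to the viewpoint, is small enough; otherwise we descend
    into its higher-resolution children. Field names are illustrative.
    """
    distance = max(node.bounds.distance_to(camera_pos), 1e-6)
    if node.geometric_error / distance <= threshold or not node.children:
        out.append(node)                  # this LOD suffices from here
    else:
        for child in node.children:       # refine across successive LODs
            select_tiles(child, camera_pos, threshold, out)
```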
In the context of Multidisciplinary Design Optimization (MDO), data visualization and data analytics are critical for understanding complex interactions between variables and multi-objective functions in high-dimensional (>3) spaces. Current Visual Analytics (VA) techniques provide powerful interactive tools to analyze general-purpose data in many scientific and business contexts. However, the application of these methods in MDO contexts is less explored. Direct application of existing methods can easily overwhelm the user, mainly due to a) incorrect preprocessing of raw data, b) use of incorrect tools, or c) a combination of both factors. To overcome these challenges, this manuscript explores the application of relevant 2D and 3D visualization techniques in the context of MDO. To achieve this goal, it presents state-of-the-art tools and discusses best practices for data processing. In addition, the tools presented are implemented in a client-server web environment where the heavy work (data preprocessing) is carried out by a Python-based server while the visualization tasks are left to the client. Ongoing work includes the integration and deployment of the presented methods in an interactive visualization framework for the analysis of MDO results.
In the domain of bike route planning for urban environments, the solutions provided by large corporations (e.g., Google Maps, Waze) are not tailored for this particular vehicle or do not reflect the path cost structures that human interactions and agglomerations produce. Bikepath costs relevant in a city, unlike the usual Euclidean or city-block distance functions, relate to safety (in terms of accidents or criminality), slopes, path roughness, time-dependent (i.e., rush hour) costs, etc. To partially overcome these disadvantages, this manuscript presents the implementation of a bike route planning algorithm in an urban environment, which efficiently presents the biker with a low-cost route. At the same time, our application allows flexibility in the degree of usage of dedicated bike routes built by the city. This flexibility reflects city regulations, which may prescribe more or less priority for the usage of dedicated bikepaths. Our algorithm integrates bike dispensers, bike routes, and a variety of costs (in addition to travel length), and finds the suggested routes in a constrained Delaunay graph. The execution of the algorithm is accelerated by the fact that a large part of the travel can be pre-computed when the biker must pick up and return the city-provided bikes at specific dispenser points. Future work is needed in (a) adding more flexible heuristics as the city may decide to prioritize diverse environmental, economic, or transportation goals, and (b) transcending canonical metrics, e.g., by considering non-symmetrical costs (d(p, q) ≠ d(q, p)).
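The route search itself can be sketched as Dijkstra’s algorithm over the constrained Delaunay graph with a composite, regulation-tunable edge cost. The sketch below is illustrative only: the attribute names and weighting scheme are assumptions, and the paper’s implementation additionally exploits pre-computed partial routes between dispenser points.

```python
import heapq

def best_route(graph, source, target, weights):
    """Dijkstra over a constrained Delaunay graph with a composite edge cost.

    graph[u] yields (v, attrs) pairs; attrs carries per-edge factors such as
    length, safety risk, slope, and a dedicated-bikepath flag. `weights` tunes
    their relative priority (e.g., per city regulations). Names illustrative.
    """
    def cost(attrs):
        c = (weights["length"] * attrs["length"]
             + weights["safety"] * attrs["risk"]
             + weights["slope"] * attrs["slope"]
             - weights["bikepath"] * attrs["on_bikepath"])  # reward bikepaths
        return max(c, 0.0)               # Dijkstra needs non-negative costs

    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, attrs in graph[u]:
            nd = d + cost(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target != source and target not in prev:
        return None                      # unreachable
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]
```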
Free-view video lets viewers choose their camera parameters when watching a recorded or live event; they can interactively control the camera view and choose to focus on different parts of the scene. This paper presents a novel client-server architecture for free-view videos of sports. The clients obtain a detailed 3D representation of the players and the game field from the server or a shared repository. The server receives video streams from several cameras around the game field, detects the players, determines the camera with the best view, extracts the poses of each player, and encodes this data with a timestamp into a snapshot, which is streamed to the clients. A client receives a stream of snapshots, applies each pose to the appropriate player’s 3D model (avatar), and renders the scene according to the user’s virtual camera. We have implemented our approach using VIBE [Kocabas et al. 2020] for pose extraction and obtained promising results. We transferred a soccer game into a 3D representation supporting free-view with a low reconstruction error. Our unoptimized implementation is nearly real-time; it runs at about 30 frames/second.
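The snapshot is the central data structure of this architecture. A minimal sketch of a plausible client-side representation and its application to avatars follows; the field names and the `Avatar` interface are hypothetical, though the pose payload corresponds conceptually to the per-player parameters recovered by VIBE.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class PlayerPose:
    player_id: int
    position: Tuple[float, float, float]   # root translation on the field
    rotations: List[float]                  # joint rotations recovered by VIBE

@dataclass
class Snapshot:
    timestamp: float
    poses: List[PlayerPose]                 # one entry per detected player

def apply_snapshot(snapshot: Snapshot, avatars: Dict[int, "Avatar"]) -> None:
    """Drive each player's 3D avatar with the pose from the latest snapshot."""
    for p in snapshot.poses:
        avatar = avatars[p.player_id]       # hypothetical client-side avatar
        avatar.set_root_position(p.position)
        avatar.set_joint_rotations(p.rotations)
```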
This paper presents an update on the French solution for long-term archiving combined with online publication of 3D research data in the humanities. The choice of data organisation, metadata, standards, and infrastructure is in line with the FAIR principles of the semantic web. Since 2017, our consortium, 3D for Humanities, has offered this service through the French National 3D Data Repository for Humanities. This was the start of our collaboration with CINES (Centre Informatique National de l’Enseignement Supérieur), the standard Open Archival Information System (OAIS) infrastructure for research data in France. In 2021, with more than one thousand projects registered, driven by laboratories from all over the country, open questions were posed to the community. In response, we have developed a new metadata schema that allows a more precise description of the research content, greater openness to other non-archaeological humanities fields, and better FAIR compliance. This metadata schema is aligned with standard vocabularies and mapped to the Europeana Data Model (EDM). Among other online features, a 3D viewer has been implemented to meet the needs of researchers and public communication. By design, aLTAG3D, the desktop UI software we developed to help research labs create Submission Information Packages (SIPs), adapts automatically to the new schema through its XSD content.
This work presents the development of DrumsVR, a system for simulating drum percussion using gesture recognition on smartwatches and virtual reality. The objective is to provide an immersive and interactive experience, allowing users to play a virtual drum kit intuitively and engagingly. The system utilizes the MDDTW algorithm to accurately recognize the gestures performed by users, triggering the playback of corresponding drum sounds. During the conducted tests, various drumming gestures were explored, such as hits on the snare drum, cymbals, and bass drum, and the algorithm successfully recognized each gesture with precision. The results demonstrate the feasibility and effectiveness of DrumsVR, providing a precise and synchronized sound response to the executed gestures. Furthermore, we observed that providing information about the start of gesture execution enhances the accuracy and responsiveness of the system. As next steps, we plan to improve the MDDTW algorithm, explore the integration of additional sensors on smartwatches, and enhance the visual experience in virtual reality. DrumsVR represents a promising approach for simulating drum percussion, encouraging creativity and musical expression among users.
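For readers unfamiliar with the recognition core, the sketch below shows a standard multi-dimensional dynamic time warping (DTW) distance, which we assume underlies MDDTW: incoming smartwatch sensor sequences are compared against recorded gesture templates, and the closest template wins. The paper’s MDDTW variant may differ in its details.

```python
import numpy as np

def mddtw_distance(a, b):
    """Multi-dimensional dynamic time warping distance between two gestures.

    a, b: arrays of shape (T, D) holding D-dimensional sensor samples
    (e.g., smartwatch accelerometer axes). Standard DP recurrence.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# classify an incoming gesture against stored templates:
# label = min(templates, key=lambda t: mddtw_distance(gesture, templates[t]))
```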
This work presents the development and usability evaluation of MazeVR, a maze game designed for the Google Cardboard device. The game allows interactions through head movements to control the camera and gestures on the smartwatch to perform additional actions, such as starting and stopping walking, picking up and discarding torches, and clearing the path. Gesture recognition on the smartwatch is performed using a continuous gesture recognition algorithm. The main objective was to create an immersive and challenging experience for players, allowing them to explore the maze and overcome obstacles using only head movements and smartwatch gestures. An evaluation was conducted with volunteer experts, using the SUS questionnaire as a data collection method. The experts considered the method intuitive, easy to learn and use, and highlighted the smooth integration of system functionalities. Overall, the results demonstrated satisfactory usability and a positive user experience.
This paper explores the integration of eXtended Reality (XR) content within X3DOM, a popular framework for displaying 3D content in web browsers. The importance of Web3D and the prevalent use of the X3D file format are discussed. With the deprecation of WebVR and the adoption of WebXR in web browsers, X3DOM has emerged as one of the pioneering adopters of the WebXR APIs. This paper highlights the current capabilities of X3DOM, which enable users to explore 3D scenes on regular screens and seamlessly transition into Virtual Reality (VR) mode. It showcases the use of controllers for navigation and the execution of custom functions within X3D scenes. Additionally, the paper presents a series of developed 3D scenes that demonstrate the effectiveness of X3DOM in rendering VR content, ranging from indoor to outdoor environments, utilizing X3D nodes that display images and videos to create immersive photospheres and rich interactive scenes.
This paper introduces a method to enhance STEM education in primary schools by integrating 3D educational games with traditional teaching tools. Our approach combines two fundamental aspects: technical, which incorporates multi-platform educational games of various genres, and social, fostering interactions between teachers, students, and software developers. This model encourages adaptive learning tailored to student needs and promotes continuous student engagement. By simplifying complex STEM concepts through gamification, our method holds promising potential to improve the quality and effectiveness of STEM education in primary schools.
In the future, museums may undergo a partial transformation where traditional tours are replaced by teleoperated tours, offering innovative museum experiences. This transition is made possible by technological advances in various domains such as Extended Reality (XR), media workflow management, and positioning systems. However, there is still a lack of synergy between these technologies to create comprehensive and seamless experiences that enable professionals to remotely guide visitors or cater to multiple groups and individuals simultaneously. To address this gap, we propose a web-based architecture that combines the following research themes: (i) accurate indoor localization systems for museum visits, with an emphasis on geofencing; (ii) low-latency interactive audio and video workflow management between visitors and guides; (iii) rich AR and VR interactions between visitors and guides, as well as among the visitors themselves.
This paper presents a new method for the digital partial reconstruction of a Roman statue using 3D techniques. In particular, it discusses the animation of the 3D model using rigging and skinning techniques to allow easy modification of the figure. The advantages of this approach over traditional manual methods are explained and the possibilities for re-use of such a digital reconstruction are highlighted.
A multi-platform Virtual Reality (VR) approach is proposed to complement traditional approaches to construction safety training. Visual simulations of a highway construction project were developed and presented through the developed platforms, aiming to give students an immersive experience of actual construction environments. The simulated worksite scenarios included active traffic, multiple worker roles, and heavy equipment, and were rendered at different times of day and under different weather conditions. We used this material in an undergraduate class activity with 50 students. During a session in our visualization lab, students experienced scenarios presenting a day shift, an afternoon shift with adverse weather, and a night shift, and were asked to develop a daily report of their job site observations. The scenarios were presented via the following platforms: TV projection, mobile phone, Head-Mounted Display (HMD), and CAVE projection room. The results demonstrate that the multi-platform immersive experience has the potential to significantly improve the hazard recognition skills of construction students.
Automating the generation of visualizations of complex fluid flow patterns within brain tumors is critical for gaining insights into their movements and behaviors. This study focused on optimizing and automating the processing of 3D volumetric and vector field data sets obtained from DCE-MRI (Dynamic Contrast-Enhanced Magnetic Resonance Imaging) scans. Throughout, it is crucial to maintain performance, preserve data quality and resolution, and provide an accessible platform for biomedical scientists.
In this paper, we present an innovative approach to enhancing fluid flow visualization of brain tumors through scalable visualization techniques. New techniques have been designed, benchmarked, and validated to produce X3D visualizations in Web3D environments using Python and ParaView. The proposed approach not only enhances fluid flow visualization in the context of brain tumor research but also provides a reproducible and transparent framework for future studies with both human and mouse scans.
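The X3D export step can be scripted directly in ParaView’s Python interface. The sketch below shows the general shape of such a script run with `pvpython`; the file names and data source are hypothetical, and the paper’s pipeline presumably adds the flow-specific filters before export.

```python
# Minimal sketch of the ParaView side of the pipeline (run with pvpython).
# File names and filter choices are illustrative, not the paper's exact setup.
from paraview.simple import (OpenDataFile, Show, Render,
                             GetActiveViewOrCreate, ExportView)

data = OpenDataFile("dce_mri_vectors.vti")   # hypothetical DCE-MRI-derived volume
view = GetActiveViewOrCreate("RenderView")
Show(data, view)
Render(view)

# Export the rendered scene as X3D for display in Web3D environments (X3DOM etc.)
ExportView("tumor_flow.x3d", view=view)
```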
CoEditAR is an open-source framework focused on facilitating the creation, sharing, and discovery of WebXR-based, interoperable experiences. To achieve this goal, we are working on a semantic model (and language) that allows the contextual description of the real world and of interactive virtual elements layered on top of it. In this way, end users can automatically find new experiences by simply navigating through a physical location, overcoming the limitations of current URL-based methods and simplifying the transition towards interaction paradigms based on Spatial Computing.
Immersive experiences are mostly created with desktop-based tools using flat displays, mouse, and keyboard; as a result, immersive experiences have a 2D look and interaction: users sit or stand looking forward and interact with elements within reach of their arms, as at a desktop. The objective of this work is to facilitate the creation of room-size immersive experiences that take advantage of tethered head-mounted displays and hand-held controllers, allowing authors to create while walking freely in space with full freedom of body movement. To this end we propose XRStudio, a set of tools for authoring within an extended reality environment, so that authors can walk to move objects in space or manipulate elements with hand and full-body movements as in the real world.
We present a mobile application designed to enhance students’ understanding of directional derivatives and level curves in first-year calculus. The application offers visual tools and gamified learning to provide an engaging educational experience. Using novel technologies, the application is able to take a user’s drawing, generate a corresponding 3D model, and display it to the user. Through this presentation, attendees will gain a comprehensive understanding of the application’s features and the benefits it offers to students in comprehending directional derivatives.
We present four technologies to deliver contactless haptic stimuli for enriching Virtual Reality (VR) experiences. The technologies are electrostatic piloerection, focused light-induced heat, electric plasma, and ultrasound; the user is not required to wear or touch any device. We describe the working principle behind each technology and how these technologies can provide exciting new sensations in VR experiences. Additionally, we showcase a VR demo experience gathering all four remote haptic stimuli along a circuit for users to experiment with these new sensations.
In this paper, we present a pipeline for reconstructing terrain models from LiDAR point cloud data, which aims to preserve landform features and produce realistic models. First, a normal estimation method leverages information from the LiDAR scans to obtain more accurately oriented point clouds. Then, we employ Poisson reconstruction to generate a watertight terrain model based on the oriented point clouds. Compared to terrain models reconstructed from Digital Elevation Models (DEMs), our proposed method achieves a higher level of realism in the generated models.
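A minimal version of this two-stage pipeline can be reproduced with off-the-shelf tooling. The sketch below uses Open3D as a stand-in: the paper’s LiDAR-scan-aware normal estimation is approximated here by Open3D’s generic estimator, and the file names and parameters are illustrative.

```python
import open3d as o3d

# Stand-in for the paper's LiDAR-scan-aware normal estimation: Open3D's
# generic neighborhood-based estimator plus consistent orientation.
pcd = o3d.io.read_point_cloud("terrain_lidar.ply")   # hypothetical input file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# Poisson reconstruction yields a watertight mesh that can preserve
# landform features such as cliffs and overhangs.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
o3d.io.write_triangle_mesh("terrain_mesh.ply", mesh)
```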
Forests are among the most widespread and diverse ecosystems on Earth, providing essential ecosystem services at local and global scales. However, they are facing major challenges due to climate change, economic pressures and human population growth. Digital twins of forests could help address these challenges by enabling comprehensive forest monitoring and supporting management decisions. In this publication, we describe how digital twins differ from other digital tools in the forest domain and explore concepts and technologies that can serve as the basis for implementing forest digital twins. We outline the underlying data model of the digital twins, which includes trees as the core forest elements, as well as their environment. We explain how a wide range of data collection approaches can be combined for comprehensive data collection and how the data can be integrated into a spatio-temporal forest data space. We describe data processing approaches to enrich raw data with semantic information and address how digital twins can support decision making through modeling and simulation. We explain the role of web-based visualization in interacting with forest digital twins. Overall, our concept lays the foundation for the technical implementation of forest digital twins that integrate, process, analyze and visualize forest data from a variety of sources. The implementation of forest digital twins in practice would enrich our understanding of forest ecosystems and enable targeted management of forests and their ecosystem services.
This paper presents concepts and approaches towards a climate- and energy-oriented digital twin for public buildings. The sustainable, resource-efficient operation of these buildings, such as schools and education centers, and the monitoring, control, and optimization of their climate, air, and energy performance pose multiple challenges, in particular coping with the consequences of climate change and changes in the energy economy. In our approach, we consider buildings in which a network of heterogeneous sensors in each spatial unit records key properties such as temperature, humidity, and CO2 concentration, as well as energy consumption and solar energy production. The continuously collected sensor data forms a spatio-temporal data space, which the digital twin uses as a basis for AI-based analyses and simulations. The transfer of time-series data in near real time can be handled by different databases. Analysis techniques focusing on time-series data allow for targeted access to the information and support the identification of exceptional events and recurring patterns, as well as the comparison of energy- and climate-related performance. A prototype of an energy- and climate-oriented digital twin is currently being implemented in a government project in Andalusia, Spain, covering about 430 public buildings.
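As a flavor of the time-series analyses mentioned above, the sketch below flags exceptional events with a simple rolling z-score over a sensor stream; the window size, sampling rate, and threshold are assumptions, and the deployed twin presumably uses richer AI-based models.

```python
import pandas as pd

def exceptional_events(series: pd.Series, window: int = 96, z: float = 3.0):
    """Flag exceptional sensor readings via a rolling z-score.

    Illustrates the kind of analysis the twin could run on per-room CO2,
    temperature, or energy readings. window=96 corresponds to one day at an
    assumed 15-minute sampling rate.
    """
    mean = series.rolling(window, min_periods=window // 2).mean()
    std = series.rolling(window, min_periods=window // 2).std()
    score = (series - mean) / std
    return series[score.abs() > z]        # timestamps of exceptional events
```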
This workshop brings together participants from around the world with the goal of building a strong foundation for an open, interoperable Metaverse using the Web and the Web Standards ecosystem. The workshop will focus on four main topics: 1) The variety of relevant Standards and technology roles in the Metaverse stack, 2) the role of the 3D Web Interoperability Working Group, which has recently been chartered in the Metaverse Standards Forum, 3) scoping what the Metaverse IS NOT, and 4) how Use Cases and Scenarios can help clarify what the Metaverse IS. In this emerging space, perspectives and tradeoffs abound; we hope this workshop will push our understanding and terminology forward and also provide the community with an actionable set of common (yet extensible) referents and goals.
In manufacturing companies, quality reports are often created in hard copy on draft tool printouts. These printouts contain technical drawings and handwritten notes, which leave room for interpretation. In addition, these papers are scanned and uploaded to various systems for sharing with other users. This slow and error-prone process creates a need for action. Ideally, the process could be digitised through a dedicated application for quality inspectors who create reports and for machine programmers who view them. The recording of quality characteristics should take place efficiently and effectively on a virtual model with a fixed set of terms and provide information for optimising component production. For this purpose, a web-based application was developed with neutral file formats like X3D and open internet technologies like X3DOM, as well as with methods of user-centred design. The usability of the application was evaluated with the cooperation partner Premium Aerotec.
For a variety of immersive applications, 360-degree image capture, be it purely visual or mixed, is becoming a standard in acquisition, allowing for fast, efficient, and cost-effective capture from a limited number of static poses. At the same time, panoramic coverage naturally leads to the implementation of immersive and engaging solutions. In this industrial use case, we present the preliminary outcomes of a collaboration between MSheireb Properties and Hamad Bin Khalifa University to advance the state of research towards novel automatic solutions for the generation of Spherical Digital Twins, supporting and speeding up the design, planning, and management processes related to construction development, real-estate management and presentation, and advertising and showcasing for design and development initiatives.
Using “representative” meshes, this presentation will cover the “nitty gritty” (the important aspects and practical details) of editing in X3D, the requirement to write scripts for data outputs from various software programs, and the impact of using both LOA2 and LOA3 skeletons for humanoid modeling.
The apparel industry is moving into the 3D environment without the proper tools. The gaps include scan-data models whose fusing or algorithmic assumptions impact data quality, incomplete rigs for proper humanoid modeling, and a lack of interoperability between CAD programs.
Using X3D and HAnim, some of these issues can be addressed. The HAnim standard defines various Levels of Articulation (LOA) skeletons, which can assist in providing more realistic modeling and animation that better reflect natural human posture.
3D models in the architecture, engineering and construction (AEC) sector are becoming bigger by the day, yet AEC professionals still need to be able to federate and collaborate on such massive 3D scenes over the Internet. In this case study, we present the 3D Repo Infinite Geometry Streaming solution on the 40 Leadenhall Street project, the largest construction project ever to receive planning permission in the City of London. Over 100 GB of raw geometry was processed and stored in the cloud, while all visibility calculations were performed client-side for seamless real-time exploration of the continuous 3D space.
The architecture, engineering and construction (AEC) sector faces one of its biggest global challenges: reducing overall carbon emissions on projects, which carries great environmental responsibility towards the built environment. To address and facilitate the achievement of such sustainability goals while encouraging collaboration across AEC professionals, the 3D Repo platform enables novel data-driven approaches to quickly visualise estimated carbon metrics in 3D, on the web and in real time. It is possible to combine 3D model data with Life Cycle Assessment (LCA) data for embodied carbon measurements to create comparative dashboards and optioneer the most eco-friendly way to build.
The Metaverse [Mystakidis 2022] represents a universe beyond reality, where the boundaries between the physical and digital realms merge seamlessly. It exists as a continuous and enduring multi-user environment, facilitated by the convergence of technologies such as Virtual Reality (VR) and Augmented Reality (AR) that allow for immersive interactions with virtual environments, digital entities, and workers. We apply this concept in a real use case in an electrical substation, bridging MR and VR technologies for multi-user, multi-platform remote collaboration, supported by 5G communications and Edge Computing to manage visualisation and interaction through cloud audio-video streaming. The experience also integrates real data acquisition from an autonomous mobile field robot, BIM models of electrical structures, and complex information systems, adding context to the experience.