Web3D '22: The 27th International Conference on 3D Web Technology

Full Citation in the ACM Digital Library

SESSION: Rendering is Key in 3D Graphics

Hardware-accelerated Rendering of Web-based 3D Scatter Plots with Projected Density Fields and Embedded Controls

3D scatter plots depicting massive data suffer from occlusion, which makes it difficult to get an overview and perceive structure. This paper presents a technique that facilitates the comprehension of heavily occluded 3D scatter plots. Data points are projected onto axial planes, creating x-ray-like 2D views that support the user in analyzing the data’s density and layout. We showcase our open-source web application with a hardware-accelerated rendering component written in WebGL. It allows for interactive exploration, filtering, and navigation with datasets of up to hundreds of thousands of points. The implementation is detailed and discussed with respect to challenges posed by API and performance limitations.
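
The projection step at the heart of the technique can be illustrated with a small sketch. The TypeScript below is a minimal CPU-side illustration of accumulating point densities on one axial plane; the paper's renderer performs the equivalent on the GPU in WebGL (e.g., by splatting points into a texture with additive blending), and all identifiers here are ours, not the authors'.

```typescript
// Minimal CPU-side sketch of a projected density field: points are
// projected onto the XY plane (drop z) and binned into a 2D grid.
// A WebGL implementation would instead splat points into a float
// texture with additive blending. All names are illustrative.

interface Point3 { x: number; y: number; z: number; }

function densityFieldXY(
  points: Point3[],
  resolution: number,        // grid cells per axis
  min: number, max: number,  // data extent, assumed equal on x and y
): Float32Array {
  const grid = new Float32Array(resolution * resolution);
  const scale = resolution / (max - min);
  for (const p of points) {
    // Dropping the z coordinate is the projection onto the XY plane.
    const i = Math.min(resolution - 1, Math.max(0, Math.floor((p.x - min) * scale)));
    const j = Math.min(resolution - 1, Math.max(0, Math.floor((p.y - min) * scale)));
    grid[j * resolution + i] += 1; // each point contributes one count
  }
  // Normalize to [0, 1] so the field can drive opacity or a color map.
  let peak = 0;
  for (const v of grid) peak = Math.max(peak, v);
  if (peak > 0) for (let k = 0; k < grid.length; k++) grid[k] /= peak;
  return grid;
}
```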

InstantXR: Instant XR Environment on the Web Using Hybrid Rendering of Cloud-based NeRF with 3D Assets

For an XR environment to be used for a real-life task, it is crucial that all the content is created and delivered when we want, where we want, and, most importantly, on time. To deliver an XR environment quickly and correctly, the time spent on modeling should be considerably reduced or eliminated. In this paper, we propose a hybrid method that fuses the conventional rendering of 3D assets with Neural Radiance Fields (NeRF) technology, which uses photographs to create and display an instantly generated XR environment in real time, without a modeling process. While NeRF can generate a relatively realistic space without human supervision, it has the disadvantage of high computational complexity. We propose a cloud-based distributed acceleration architecture to reduce computational latency. Furthermore, we implemented an XR streaming structure that can process the input from an XR device in real time. Consequently, our proposed hybrid method for real-time XR generation using NeRF and 3D graphics is usable by lightweight mobile XR clients, such as untethered HMDs. The proposed technology makes it possible to quickly virtualize one location and deliver it to another remote location, thus making virtual sightseeing and remote collaboration more accessible to the public. The implementation of our proposed architecture along with the demo video is available at https://moonsikpark.github.io/instantxr/.
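
Fusing a cloud-rendered NeRF frame with locally rendered 3D assets ultimately comes down to a per-pixel depth comparison. The sketch below is our hedged reconstruction of that compositing step, not the authors' code; the buffer layout, depth convention, and names are assumptions.

```typescript
// Depth-based composition of a cloud-rendered NeRF frame with a locally
// rendered 3D-asset frame: per pixel, the fragment closer to the camera
// wins. In practice this runs in a shader; the loop below is a sketch.
// Both inputs are assumed to share the same camera and depth convention.

interface Frame {
  rgba: Uint8ClampedArray;  // width * height * 4
  depth: Float32Array;      // width * height, smaller = closer
}

function compositeHybrid(
  nerf: Frame,
  assets: Frame,
  width: number,
  height: number,
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(width * height * 4);
  for (let i = 0; i < width * height; i++) {
    // Pick whichever source is nearer to the camera at this pixel.
    const src = assets.depth[i] <= nerf.depth[i] ? assets : nerf;
    out.set(src.rgba.subarray(i * 4, i * 4 + 4), i * 4);
  }
  return out;
}
```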

Terrender: A Web-Based Multi-Resolution Terrain Rendering Framework

Terrain rendering is a fundamental requirement when visualizing 3D geographic data in various research, commercial, or personal applications such as geographic information systems (GIS), 3D maps, simulators, and games. It entails handling large amounts of height and color data as well as high-performance algorithms that can benefit from the parallel rendering power of GPUs. The main challenge is (1) to create a detailed renderable mesh using the fraction of the data that is most relevant to a specific camera position and orientation, and (2) to update this mesh in real time as the camera moves while keeping transition artifacts low. Many algorithms have been proposed for adaptive adjustment of the level of detail (LOD) of large terrains. However, existing web-based terrain rendering frameworks do not use state-of-the-art algorithms. As a result, these frameworks are prone to the classic shortcomings of simpler terrain rendering algorithms, such as discontinuities and limited visibility. We introduce Terrender, a novel open-source web-based framework for rendering high-quality terrains with adaptive LOD. Terrender employs RASTeR, a modern LOD-based terrain rendering algorithm, while running smoothly with limited bandwidth on all common web browsers, even on mobile devices. Finally, we present a thorough analysis of our system’s performance when the camera moves on a predefined trajectory, and we compare its performance and visual quality to another well-known framework.
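
The decision at the heart of any adaptive-LOD terrain scheme such as RASTeR is whether a patch's geometric error, projected to screen space, exceeds a pixel tolerance. The sketch below shows that classic refinement test under our own assumptions; it is not Terrender's or RASTeR's actual code.

```typescript
// Classic screen-space-error refinement test used by adaptive terrain
// LOD schemes: refine a patch when its world-space geometric error,
// projected to the screen, exceeds a pixel tolerance. Illustrative only.

function shouldRefine(
  geometricError: number,   // world-space error of the coarser patch (meters)
  distance: number,         // camera-to-patch distance (meters)
  viewportHeight: number,   // pixels
  fovY: number,             // vertical field of view (radians)
  tolerancePx = 2,          // allowed error on screen (pixels)
): boolean {
  // A perspective projection scales world-space lengths at depth d by
  // viewportHeight / (2 * d * tan(fovY / 2)); apply that to the error.
  const screenError =
    (geometricError * viewportHeight) / (2 * distance * Math.tan(fovY / 2));
  return screenError > tolerancePx;
}
```

Traversing the terrain hierarchy from the root and stopping wherever this test fails yields exactly the "fraction of the data most relevant to the camera" the abstract describes.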

A Framework for Safe Execution of User-Uploaded Algorithms

In recent years, there has been a trend toward open benchmarks aiming for reproducible and comparable benchmarking results. The best reproducibility can be achieved when performing the benchmarks in the same hardware and software environment, which can be offered as a web service. One challenge of such a web service is the integration of new algorithms into the existing benchmarking tool, due to security concerns. In this paper, we present a framework that allows the safe execution of user-uploaded algorithms in such a benchmark-as-a-service web tool. To guarantee security as well as the reproducibility and comparability of the service, we extend an existing system architecture to execute user-uploaded algorithms in a virtualization environment. Our results show that although execution in the virtualization environment is slightly slower, by around 3.7% to 4.7% compared with the native environment, the results are consistent across all scenarios with different algorithms, object shapes, and object complexity. Moreover, we have automated the entire process, from turning a virtual machine on and off, to starting a benchmark with the intended parameters, to notifying the backend server when the benchmark has finished. Our implementation is based on Microsoft Hyper-V, which allows us to benchmark algorithms that use Single Instruction, Multiple Data (SIMD) instruction sets as well as access the Graphics Processing Unit (GPU).
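
As a hedged illustration of the automation described above (VM lifecycle, benchmark kick-off, backend notification), the Node.js/TypeScript sketch below drives Hyper-V through its standard Start-VM/Stop-VM PowerShell cmdlets. The VM name, guest script path, and backend endpoint are hypothetical, and a production setup would additionally need credentials for PowerShell Direct.

```typescript
// Sketch of the benchmark orchestration loop: start a Hyper-V VM, run the
// benchmark with the intended parameters, report completion to the backend,
// and shut the VM down again. Start-VM, Stop-VM, and Invoke-Command -VMName
// (PowerShell Direct) are real cmdlets; paths and endpoints are hypothetical.
import { execFileSync } from "node:child_process";

function powershell(command: string): void {
  execFileSync("powershell.exe", ["-NoProfile", "-Command", command], {
    stdio: "inherit",
  });
}

async function runBenchmark(
  vmName: string,
  params: Record<string, string>,
): Promise<void> {
  powershell(`Start-VM -Name "${vmName}"`);
  try {
    // Invoke the benchmark inside the guest (PowerShell Direct would also
    // require -Credential in practice; omitted here for brevity).
    const args = Object.entries(params).map(([k, v]) => `-${k} ${v}`).join(" ");
    powershell(
      `Invoke-Command -VMName "${vmName}" -ScriptBlock { C:\\bench\\run.exe ${args} }`,
    );
    // Tell the backend server that this run has finished (hypothetical API).
    await fetch("https://backend.example.org/api/benchmark/done", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ vmName, params }),
    });
  } finally {
    powershell(`Stop-VM -Name "${vmName}"`);
  }
}
```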

SESSION: 3D Content Processing, Augmented and Virtual Reality

Flexible Photorealistic VR Training System for Electrical Operators

Virtual reality can be a very effective tool for professional training, especially when training performed in reality would pose a high risk for the trainees. However, efficient use of VR in practical industrial training requires high-quality VR training environments - 3D scenes and interactive scenarios. Because creating such a VR environment is a complex and time-consuming task, existing VR training systems generally do not offer the flexibility that would allow trainers to adjust the environments to changing training contexts and requirements. In industrial training, however, one can often benefit from the componentization and repeatability of the training content to provide such flexibility. In this paper, we propose a method of creating photorealistic virtual reality training scenes with the use of a library of training objects. Designing training objects is a relatively complex process, but once a library of objects is available, a designer can assemble new training scenes from the objects in the library. Training scenarios can then be added using an easy-to-use graphical tool.
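
To make the library-based assembly concrete, here is a small hedged sketch of what composing a scene from reusable training objects could look like. The object schema, field names, and asset format are our assumptions, not the system's actual data model.

```typescript
// Hypothetical shape of a library-based training scene: reusable training
// objects (photorealistic assets plus scenario hooks) are instanced and
// placed to form a new scene without any remodeling.

interface TrainingObject {
  id: string;              // library identifier, e.g. "switchgear-cabinet-A"
  modelUrl: string;        // photorealistic asset (e.g., a glTF file)
  interactions: string[];  // scenario hooks, e.g. "open-door", "toggle-breaker"
}

interface PlacedObject {
  objectId: string;
  position: [number, number, number];
  rotationYDeg: number;
}

interface TrainingScene { name: string; objects: PlacedObject[]; }

function assembleScene(
  library: Map<string, TrainingObject>,
  scene: TrainingScene,
): TrainingObject[] {
  // Resolve each placement against the library; fail fast on unknown ids.
  return scene.objects.map((p) => {
    const obj = library.get(p.objectId);
    if (!obj) throw new Error(`Unknown training object: ${p.objectId}`);
    return obj;
  });
}
```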

Levels of Representation and Data Infrastructures in Entomo-3D: An applied research approach to addressing metadata curation issues to support preservation and access of 3D data

This paper employs an action-based research approach to address the question of how to create a sustainable workflow to support long-term access, interoperability, and reuse of 3D data. This is applied research, stemming from the Entomo-3D collaboration between Virginia Tech University Libraries and the Virginia Tech Department of Entomology to digitize a university insect pollinator collection. The paper describes the infrastructure supporting data management and transformation, as well as new challenges that have emerged from this effort.

Evaluation of simplified 3D CAD data for conveying industrial assembly instructions via Augmented Reality

Augmented Reality (AR) based training is gaining momentum in industrial sectors, particularly in assembly and maintenance. Generally, the media used to create AR assembly instructions include audio, video, images, text, signs, 3D data, and animations. The literature suggests that 3D CAD-based AR instructions spatially registered with the real-world environment are more effective and produce better training results. However, storing, processing, and rendering 3D data can be challenging even for state-of-the-art AR devices like the HoloLens 2, particularly in industrial use. To address these concerns, heavy 3D models can be simplified to a certain extent with minimal impact on the user experience, that is, on the quality of visualization in AR. In the present paper, we evaluate the usability of a set of simplified 3D CAD models used to convey manual assembly information to novice operators. The experiment included 14 participants, six assembly operations, and two sets of 3D CAD models (i.e., original and simplified) and was conducted in a laboratory setting. To simulate a real-world assembly scenario as closely as possible, the components and the corresponding original 3D CAD models were obtained from a real-world industrial setup. Based on subjective evaluations, the present paper confirms that simplified 3D CAD models can replace the original 3D CAD models within AR applications without affecting the user experience.

A new database for image retrieval of camera filmed printed documents

The massive use of phones and their cameras is driving research around augmented reality technologies that can be used in a browser. Indeed, this could allow every physical medium to be turned into an access point for digital information. A specific family of objects used in such scenarios is printed material. Applications augmenting printed material with additional content such as videos, 3D animations, or sound follow the same scenario: the printed material is filmed by the phone camera and the captured image is sent to a server that runs image recognition algorithms to retrieve a similar image in a database. Several technological building blocks compose the pipeline, including image segmentation (usually done on the phone to extract only the pixels corresponding to the printed material) and image recognition (usually performed on the server). New methods and tools are proposed every year to address these tasks; however, a common database for benchmarking them is still lacking. In this paper, we propose such a database and make it publicly available.

URL: https://github.com/Ttibo/A-new-database-for-image-retrieval-of-camera-filmed-printed-documents
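
The client side of the pipeline this database is meant to benchmark can be sketched as follows: capture a camera frame in the browser, segment out the printed document on-device, and send the crop to a recognition server. This is our hedged illustration; the endpoint and the segmentation placeholder are hypothetical.

```typescript
// Sketch of the capture-segment-send pipeline: grab a camera frame,
// crop it to the printed document (segmentation is a placeholder for
// an on-device model), and query a recognition server (hypothetical).

async function captureAndQuery(video: HTMLVideoElement): Promise<unknown> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);

  // Placeholder: an on-device segmentation step would reduce the frame to
  // only the pixels of the printed material before upload.
  const cropped = canvas; // e.g., segmentDocument(canvas) in a real pipeline

  const blob: Blob = await new Promise((resolve) =>
    cropped.toBlob((b) => resolve(b!), "image/jpeg", 0.8),
  );
  // Hypothetical recognition endpoint that matches against the database.
  const response = await fetch("https://example.org/api/retrieve", {
    method: "POST",
    headers: { "Content-Type": "image/jpeg" },
    body: blob,
  });
  return response.json();
}
```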

SESSION: Metaverse Definition and Characteristics

The Keys to an Open, Interoperable Metaverse

The term ‘Metaverse’ has taken on new interest recently, appearing prominently in the marketing materials of a number of large technology companies. Indeed, many have attempted, or are attempting, to co-opt it for their own purposes, which has resulted in a great deal of confusion among producers and consumers in the marketplace. With this paper, the Web3D Consortium seeks to address this confusion by exploring the history of the ‘Metaverse’, providing a workable definition of the term, and offering a vision for its sustainable, cooperative construction into the future. We believe that all the technologies are in place to fulfill the vision of an open, equitable, and ubiquitous information space. What remains are the key issues that have kept the Metaverse from manifesting over the last two decades: poor user experience and poor corporate cooperation.

Defining the Metaverse through the lens of academic scholarship, news articles, and social media

The emergence of the Metaverse has received varied attention from academic scholars, news media, and social media. While the term ‘Metaverse’ has been around since Neal Stephenson’s 1992 novel, Snow Crash, its definition changes depending on who is describing it and when. In this study we analyze various works about the Metaverse, from ACM publications on the topic, to news media reports, to discussions on social media. Using topic modeling techniques and natural language analysis, we show how each community speaks about and defines the Metaverse.

Designing for Social Interactions in a Virtual Art Gallery

The dawn of a new digital world has emerged, with new ways to communicate and collaborate with other people across the globe. Metaverses and Mirror Worlds have broadened our perspectives on the ways we can utilize 3D virtual environments. A Mirror World is a 3D virtual space that depicts a real-life place or environment that people may want to see physically or would like to manipulate to create something new. A perfect example is an art gallery, which gives people an outlet to express themselves through various art forms while socializing and maintaining the human interaction that is needed when physical presence may be difficult.

This project strives to improve user social interactions and make spatial control easier and more fluid in a virtual art gallery, while also incorporating the existing metaphor of permissions and user privileges used in synchronous collaborative environments. We created ways for people to be invited into group chats based on proximity, allowing users to consent to whom they talk to and with whom they share control of the space. We also implemented a way to view the space as a 3D map that highlights pieces of artwork around the space, so that people can teleport to them and view them with ease. To demonstrate this shared viewing and navigation experience, we also focused on incorporating audio and spatial interaction features within the art gallery prototype, which is built from X3D and glTF models, images and audio, and an HTML user interface.
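
A hedged sketch of the consent-based, proximity-triggered invitation logic described above: when another avatar enters a threshold radius, an invitation is issued, and the chat is joined only if the user explicitly accepts. All names and the radius value are illustrative, not the project's actual code.

```typescript
// Proximity-triggered, consent-based group chat invitations: no one is
// pulled into a chat without accepting. Names and values are illustrative.

interface Avatar { id: string; position: [number, number, number]; }

const INVITE_RADIUS = 3; // meters (assumed threshold)

function distance(a: Avatar, b: Avatar): number {
  const [dx, dy, dz] = a.position.map((v, i) => v - b.position[i]);
  return Math.hypot(dx, dy, dz);
}

async function maybeInvite(
  me: Avatar,
  other: Avatar,
  askConsent: (from: string) => Promise<boolean>, // UI prompt; user decides
  joinChat: (a: string, b: string) => void,
): Promise<void> {
  if (distance(me, other) > INVITE_RADIUS) return;
  // Consent first: the invitation only takes effect if accepted.
  if (await askConsent(me.id)) joinChat(me.id, other.id);
}
```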

SESSION: Poster Abstracts

An Open, Multi-Platform Software Architecture for Online Education in the Metaverse

The use of online platforms for education is a vibrant and growing arena, incorporating a variety of software platforms and technologies, including various modalities of extended reality. We present our Enhanced Reality Teaching Concierge, an open networking hub architected to enable efficient and easy connectivity between a wide variety of services or applications and a wide variety of clients, designed to showcase 3D for academic purposes across web technologies, virtual reality, and even virtual worlds. The agnostic nature of the system, paired with an efficient architecture and simple, open protocols, furnishes an ecosystem that can easily be tailored to maximize the innate characteristics of each 3D display environment while sharing common data and control systems, with the ultimate goal of a seamless, expandable, nimble education metaverse.
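
As a hedged sketch of such a hub, the TypeScript below relays JSON messages between heterogeneous clients over WebSockets using a simple channel-based protocol. The message schema is our assumption about one plausible realization, not the Concierge's actual protocol; the server uses the standard "ws" package.

```typescript
// Minimal channel-based relay hub: clients subscribe to named channels,
// and every message published to a channel is forwarded to all of its
// subscribers, regardless of client type (web, VR, virtual world).
import { WebSocketServer, WebSocket } from "ws";

type Message =
  | { type: "subscribe"; channel: string }
  | { type: "publish"; channel: string; payload: unknown };

const channels = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString()) as Message;
    if (msg.type === "subscribe") {
      if (!channels.has(msg.channel)) channels.set(msg.channel, new Set());
      channels.get(msg.channel)!.add(socket);
    } else if (msg.type === "publish") {
      // Fan the payload out to every subscriber of the channel.
      for (const peer of channels.get(msg.channel) ?? []) {
        if (peer.readyState === WebSocket.OPEN) peer.send(JSON.stringify(msg));
      }
    }
  });
  socket.on("close", () => {
    for (const subs of channels.values()) subs.delete(socket);
  });
});
```

Because the hub never interprets payloads, any client that speaks the channel protocol can participate, which is what makes the architecture agnostic to the display environment.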

Challenges in Applying Deep Learning to Augmented Reality for Manufacturing

Augmented Reality (AR) for industry has become a significant research area because of its potential benefits for operators and factories. AR tools could help collect data, create standardized representations of industrial procedures, guide operators in real time during operations, assess factory efficiency, and support personalized training and coaching systems. However, AR is not yet widely deployed in industry, due to several factors: hardware, software, user acceptance, and companies’ constraints. One of the causes we have identified in our factory is the poor user experience when using AR assistance software. We argue that adding computer vision and deep learning (DL) algorithms to AR assistance software could improve the quality of interactions with the user, handle dynamic environments, and facilitate AR adoption. We conducted a preliminary experiment aiming to perform 3D pose estimation of a boiler with MobileNetV2 in an uncontrolled industrial environment. The experiment produced results that are insufficient for direct use, but they allow us to establish a list of challenges and perspectives for future work.

Deep Learning Classification in web3D model geometries: Using X3D models for Machine Learning Classification in Real-Time web applications

In this paper we study the requirements that web3D models, and in particular X3D-formatted models, must meet in order to work efficiently with Deep Learning algorithms. The reason we focus on this particular type of 3D model is that we consider web3D to be part of the future of computer graphics. The introduction of metaverse™ technology indeed confirms that lightweight, interoperable 3D models will be an essential part of many novel services we will see in the near future. Furthermore, the X3D language expresses 3D information in a semantically friendly way, which makes it very useful for future applications. In our research we conclude that lightweight X3D models require some vertex enhancement in order to work with Deep Learning algorithms; however, we suggest algorithms that can be applied to keep the whole process real-time, which is very important for web applications.
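
One standard way to perform the kind of vertex enhancement the abstract alludes to is to upsample a sparse mesh by sampling extra points on its triangle faces before handing the geometry to a point-based classifier. The sketch below shows uniform barycentric sampling; it illustrates the general technique, not the authors' specific algorithms.

```typescript
// Upsampling a lightweight mesh for DL consumption: draw extra points on
// each triangle via barycentric sampling, so a point-based classifier
// sees a dense cloud instead of a handful of X3D vertices. (For a
// surface-uniform density, weight the per-triangle count by area.)

type Vec3 = [number, number, number];

function samplePointOnTriangle(a: Vec3, b: Vec3, c: Vec3): Vec3 {
  // Uniform barycentric sampling: fold (u, v) back into the triangle.
  let u = Math.random();
  let v = Math.random();
  if (u + v > 1) { u = 1 - u; v = 1 - v; }
  const w = 1 - u - v;
  return [
    w * a[0] + u * b[0] + v * c[0],
    w * a[1] + u * b[1] + v * c[1],
    w * a[2] + u * b[2] + v * c[2],
  ];
}

function upsampleMesh(
  vertices: Vec3[],
  triangles: [number, number, number][],
  perTriangle: number,
): Vec3[] {
  const points: Vec3[] = [];
  for (const [i, j, k] of triangles) {
    for (let n = 0; n < perTriangle; n++) {
      points.push(samplePointOnTriangle(vertices[i], vertices[j], vertices[k]));
    }
  }
  return points;
}
```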

Document Segmentation for WebAR application

In recent years, we have witnessed the appearance of consumer applications of Augmented Reality (AR) available natively on smartphones. More recently, these applications have also been implemented in web browsers. Among various AR applications, a simple one consists in detecting a target object filmed by the phone and triggering an event upon detection. The target object can be of any kind, including 3D objects or simpler documents and printed pictures. The underlying process consists in comparing the image captured by the camera with a large-scale remote image database. The goal is then to display new content over the target object while keeping the 3D spatial registration. When the target object is a document (or printed picture), the image captured by the camera contains, in many cases, a lot of useless information (such as the background). It is therefore more efficient to segment the captured image and send to the server only the representation of the target object. In this paper, we propose a deep-learning (DL) based method for fast detection and segmentation of printed documents within natural images. The goal is to provide a light and fast DL model that can be used directly in the web browser, on mobile devices. We designed a compact and fast DL architecture that keeps the same accuracy as the reference architecture while dividing the inference time by 3 and the number of parameters by 10.
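
In-browser inference of this kind is commonly done with TensorFlow.js; the hedged sketch below loads a (hypothetical) segmentation model and runs it on a camera frame. The model URL, input resolution, and output interpretation are assumptions for illustration, not details from the paper.

```typescript
// Sketch of running a compact document-segmentation model in the browser
// with TensorFlow.js. Model URL, input size, and output format (a soft
// per-pixel "document" mask) are assumptions.
import * as tf from "@tensorflow/tfjs";

async function segmentFrame(video: HTMLVideoElement): Promise<tf.Tensor> {
  // Hypothetical URL; the paper's compact architecture would be exported
  // to the TF.js graph-model format and loaded the same way.
  const model = await tf.loadGraphModel("https://example.org/models/docseg/model.json");
  return tf.tidy(() => {
    const frame = tf.browser.fromPixels(video);        // HxWx3, uint8
    const input = tf.image
      .resizeBilinear(frame, [256, 256])               // assumed input size
      .toFloat()
      .div(255)                                        // normalize to [0, 1]
      .expandDims(0);                                  // add batch dimension
    // Output assumed to be a 1x256x256x1 soft mask of document pixels.
    return (model.predict(input) as tf.Tensor).squeeze();
  });
}
```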

Spatial Audio Designer

The Web Audio API is an underutilized technology that provides the potential for rich interactive control over sound generation and rendering. Our team made use of the API in combination with Web3D technologies to create a spatial audio design tool for digital audiovisual creators. Our primary design challenge was creating an interface for visualizing and manipulating sound design in 3D space. We wanted our interface to be learnable and usable for our target user groups: digital music creators, digital audiovisual 3D artists, and physical audiovisual installation artists who wish to develop ideas in a virtual space. From user interviews, we learned that users needed a detailed visual 3D space as a starting point to populate with sound, as well as fine control over the positioning of sound sources. We asked artists of varying technical skill to use the app and re-create a reference scene, and measured how accurate their re-creations were. The prototype web app can be used by digital and physical artists to create novel virtual audiovisual experiences, or to model a physical audiovisual installation to share and test with others. More work needs to be done to add direct spatial controls for sound fields and to make the app more natural to use.
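
The core Web Audio API mechanism behind a tool like this is the PannerNode. The sketch below positions one looping source in 3D with HRTF panning and moves the listener, using only standard API calls; the surrounding 3D-scene plumbing and the tool's own code are omitted.

```typescript
// Minimal Web Audio spatialization: one looping source with HRTF panning,
// positioned in 3D relative to a movable listener. All calls are standard
// Web Audio API; only the scene integration is omitted.
const ctx = new AudioContext();

async function playSpatialSource(url: string, x: number, y: number, z: number): Promise<void> {
  const buffer = await ctx.decodeAudioData(await (await fetch(url)).arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;

  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",      // head-related transfer function panning
    distanceModel: "inverse",  // natural-sounding distance attenuation
    positionX: x, positionY: y, positionZ: z,
    refDistance: 1,
  });

  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Tie the listener to the 3D camera, e.g., once per rendered frame.
function setListenerPosition(x: number, y: number, z: number): void {
  ctx.listener.positionX.value = x;
  ctx.listener.positionY.value = y;
  ctx.listener.positionZ.value = z;
}
```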

Visual Rehabilitation for Learning Disorders in Virtual Reality: Visual Rehabilitation for Learning Disorder in VR

Current dyslexia rehabilitation methods, although efficient, suffer from a lack of adherence among young patients due to their repetitive and arduous tasks. Digital Therapeutics (DT) have grown exponentially in the last decade and could be a stepping stone for dyslexia therapy. Making full use of new technologies, they offer new treatments for various disorders. The advancement and diffusion of Virtual Reality (VR) technologies are a new step in the therapeutic domain, notably for the treatment of neurological disorders. In this paper we propose a hybrid VR interface using eye-tracking (ET) and a Brain-Computer Interface (BCI) with a gamified application for the rehabilitation of dyslexia. This prototype was designed in collaboration with medical professionals to create a gamified set of exercises adapted to 3D for dyslexia rehabilitation. The VR-ET-BCI interface serves as a monitoring device for the patient and a therapy evaluator for the practitioner. As of today, it still lacks the clinical trials needed to show validated results, but an increase in motivation and adherence to therapy is expected.

SESSION: Industrial Use Case Abstracts

Digital Twin and 3D Web-based Use Cases in Industry

Multi-physical modeling combined with data-driven decision making is giving rise to a new paradigm, the "digital twin." The digital twin is a living digital model of a system or physical asset that continuously adapts to operational changes based on real-time data. When properly designed, a digital twin can help predict the future behavior of its corresponding physical counterpart. This paper presents a series of use cases that illustrate the role of a digital twin in different stages of the industrial product lifecycle. The use cases are implemented using 3D web technology for user interfaces and web standards (X3D and glTF) for data exchange between modules. The contribution of this work consists of a set of lessons learnt and some hints on future synergies between digital twin and 3D web technologies.
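
A hedged sketch of the pattern these use cases share: a glTF asset rendered on the web, with a live telemetry stream driving the state of the model. It uses three.js's real GLTFLoader; the asset path, node name, telemetry endpoint, and message format are hypothetical.

```typescript
// Digital-twin pattern: load the asset as glTF, then let real-time
// telemetry adapt the model's state (here: the speed of one named node,
// to be consumed by the render loop). Names and messages are hypothetical.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();

new GLTFLoader().load("/models/pump-station.glb", (gltf) => {
  scene.add(gltf.scene);
  const rotor = gltf.scene.getObjectByName("rotor"); // hypothetical node name

  // Real-time sensor data continuously adapts the digital model to
  // operational changes in its physical counterpart.
  const socket = new WebSocket("wss://twin.example.org/telemetry");
  socket.onmessage = (event) => {
    const { rotorSpeedRadPerSec } = JSON.parse(event.data);
    if (rotor) rotor.userData.speed = rotorSpeedRadPerSec;
  };
});

// In the animation loop, each frame would advance the rotor by
// rotor.userData.speed * deltaTime, keeping the twin in sync.
```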

Industrial Use-Case: AR for Manual Assembly in Industry

Augmented Reality (AR), one of the Industry 4.0 concepts, is an emerging technology with great potential for assisting humans in a wide range of industrial processes. Among various use cases, AR has started to be used as a training tool in manual assembly, by enabling workers to access contextualized digital information overlaid on the physical world. However, very few AR solutions have been adopted so far in industrial sectors, mainly because of technical and acceptability issues, as well as the effort needed to create AR content. In this paper we present an AR training system designed for, and within the framework of, manual assembly production. The proposed approach aims to find the right balance between usability, effectiveness, user acceptance, and authoring effort, to address significant industrial challenges and provide an AR training tool adapted to the shop floor context. We demonstrated the usability and effectiveness of the proposed AR system in multiple experiments and comparatively evaluated authoring effort against the state of the art. Overall, our proposal achieved excellent usability scores and was almost unanimously preferred by the participants in the experiments. Authoring was almost twice as fast as the state of the art while the error rate during training was zero, thereby validating the effectiveness of the system.

Industrial Use-Case: Digital Twin for Autonomous Earthwork in Virtual-Reality

While advances in autonomous robotics and earthwork automation are paving the way for the construction site of tomorrow by means of fleets of autonomous machinery, they also raise the issue of supervising such machines when they get stuck or stop for safety. Whereas traditional supervision approaches require an operator to stay on site and physically reach the paused machine to regain control, our method consists in controlling the machines remotely in VR by leveraging a digital twin of the machine in its environment, fed by LiDAR mapping. Our approach also allows the supervisor to orchestrate the autonomous fleet by means of a high-level "bird's-eye view" interface of the whole site. As an attempt to help design the tools of future earthwork supervisors, our preliminary results demonstrate the feasibility of a VR digital-twin approach for single-machine control, coupled with a 2D fleet-level control interface.

Industrial Use Cases: 3D Connectivity for Digital Twins: Decoupling 3D data utilization from delivery and file formats on an infrastructure level.

With the rapidly growing size of 3D data sets as well as configuration complexity, e.g., of assembly structures, the path to enterprise-wide utilization of 3D product data is already a difficult one. While the 3D Master for engineering & manufacturing is rather established, the availability of 3D data along the full value chain, e.g., as part of the Digital Thread, and the ability to easily relate it to other business information are still a huge challenge. On top of this, Digital Twins introduce the demand to dynamically link 3D data with IIoT, AI, and other business information to visually bring such Digital Twins to life, or even into Mixed Reality applications, adding another level of complexity. Building on established Web technology patterns, we have found that an API-based harmonization over (brownfield) data backends and file formats can help to establish a "single source of truth" as a prerequisite for the required agile utilization of 3D data and highly flexible interconnectivity. Many solutions rely on explicitly exporting and preparing use-case- or application-specific subsets of 3D data before making them available, often heavily limiting their utilization along the way. Instead, a high level of interconnectivity is achievable by combining a virtualization of 3D data and algorithms with a unified addressing scheme that abstracts over file boundaries and formats while enabling dynamic resolution and mapping of the contained 3D data elements to business information on many levels. In our talk, we will present the underlying approach and explain its utilization throughout several industrial use cases in the area of Digital Twins, with a focus on dynamic linkage and enrichment of 3D product data across software, solution, and format boundaries.
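
A hedged sketch of what a unified addressing scheme over heterogeneous 3D backends could look like: a single address format is parsed and dispatched to a format-specific resolver, so consumers never touch file boundaries or formats directly. The scheme, names, and resolver interface are our assumptions, not the talk's actual design.

```typescript
// Sketch of a unified 3D-element addressing scheme: one address format
// ("3d://<backend>/<asset>#<elementPath>") is resolved against pluggable,
// format-specific backends. Scheme and interfaces are illustrative.

interface ElementRef { backend: string; asset: string; elementPath: string; }

interface Resolver {
  // Returns renderable data (e.g., a glTF chunk) for one addressed element.
  resolve(ref: ElementRef): Promise<ArrayBuffer>;
}

const resolvers = new Map<string, Resolver>(); // keyed by backend name

function parseAddress(address: string): ElementRef {
  // e.g. "3d://plm/engine-v6#/block/piston-2"
  const url = new URL(address);
  return {
    backend: url.host,
    asset: url.pathname.replace(/^\//, ""),
    elementPath: url.hash.replace(/^#/, ""),
  };
}

async function resolve3d(address: string): Promise<ArrayBuffer> {
  const ref = parseAddress(address);
  const resolver = resolvers.get(ref.backend);
  if (!resolver) throw new Error(`No resolver registered for backend "${ref.backend}"`);
  return resolver.resolve(ref);
}
```

Because addresses name elements rather than files, the same reference can later be mapped to IIoT readings or other business information without re-exporting any 3D data.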

Turn BIM Models in high-resolution architectural images with a web-based real-time simulation and collaboration platform: A web application developed with Autodesk Forge and V-Ray SDK