Web3D '18: Proceedings of the 23rd International ACM Conference on 3D Web Technology

SESSION: Image and 3D model processing

A compact representation of relightable images for the web

Relightable images have proven to be a valuable tool for the study and analysis of coins, bas-reliefs, paintings, and epigraphy in the Cultural Heritage (CH) field. Reflectance Transformation Imaging (RTI) is the most widespread type of relightable image. An RTI image consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Although web visualization tools for RTI images are available, high fidelity of the relit images still requires a large amount of data to be transmitted. To overcome this limit, we propose a web-friendly compact representation for RTI images which allows very high quality of the rendered images with a relatively small amount of data (on the order of 6-9 standard JPEG color images). The proposed approach is based on a joint interpolation-compression scheme that combines a PCA-based data reduction with a Gaussian Radial Basis Function (RBF) interpolation. The approach can also be adapted to other data interpolation schemes and is not limited to Gaussian RBF. It has been compared with several techniques, demonstrating superior performance in terms of quality/size ratio. Additionally, the rendering part is simple to implement and very efficient in terms of computational cost, which allows real-time rendering even on low-end devices.
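
To make the scheme concrete, here is a minimal JavaScript sketch of how such a renderer might evaluate one pixel: Gaussian RBFs centered at the acquisition light directions interpolate the per-pixel PCA coefficients, which then reconstruct the pixel value. The names and data layout are assumptions for illustration, not the authors' implementation.

    // Hypothetical sketch: relight one pixel for light direction (lx, ly).
    // rbfCenters/rbfWeights come from the RBF fit; pcaBasis/mean from PCA.
    function relightPixel(pcaBasis, mean, rbfCenters, rbfWeights, sigma, lx, ly) {
      const k = pcaBasis.length;                 // number of PCA components
      const coeffs = new Array(k).fill(0);
      for (let i = 0; i < rbfCenters.length; i++) {
        const dx = lx - rbfCenters[i][0];
        const dy = ly - rbfCenters[i][1];
        const phi = Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
        for (let j = 0; j < k; j++) coeffs[j] += rbfWeights[i][j] * phi;
      }
      let value = mean;                          // reconstruct from PCA basis
      for (let j = 0; j < k; j++) value += coeffs[j] * pcaBasis[j];
      return value;
    }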

A service-oriented approach for classifying 3D point clouds by example of office furniture classification

The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can primarily be used to track changes of objects over time for comparison, allowing for routine classification and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a service-oriented methodology.

Mixed reality tool for training on pressure immobilization treatment of snake bite envenomation

Snakebite, one of the most common and catastrophic environmental illnesses, is widely neglected as a public health issue. The rich protein and peptide toxin nature of snake venom makes snake bite envenomation clinically challenging and a scientifically attractive problem. In most cases, the severity of snake bite envenomation mainly depends on the quality of first aid or snake bite management given to the victim prior to hospital treatment. In countries with field management strategies such as the pressure immobilization technique (PIT), including Australia, the number of fatalities due to snake bites is considerably lower than in countries without such precautionary measures. PIT involves wrapping a bandage or a crepe over the bitten area with a standard pressure of 55-70 and 40-70 mm Hg for lower and upper extremities, respectively. This technique delays the absorption rate, or venom spread, inside the body. However, PIT displays a noticeable failure rate due to its sensitivity to the pressure range that must be maintained when wrapping the bandage around the bitten area. Off-the-shelf bandages with visual markers aid in the process of training on PIT. Despite the visual markers on the bandage, human interpretation of these markers differs, which causes discrepancies in applying correct pressure. In this paper, a mixed reality-based virtual reality (VR) training tool for PIT is proposed. The VR application assists in training individuals to self-validate the correctness of the pressure range applied to the bandage. The application provides a passive haptic response and visual feedback on an augmented live stream of the camera to indicate whether the pressure is within the range. Visual feedback is obtained using a feature extraction technique, which adds novelty to the proposed research. Feedback suggests that the VR-based training tool will assist individuals in obtaining real-time feedback on the correctness of the bandage pressure and in further understanding the process of PIT.

Improving mobile MR applications using a cloud-based image segmentation approach with synthetic training data

In this paper, we show how the quality of augmentation in mobile Mixed Reality applications can be improved using a cloud-based image segmentation approach with synthetic training data. Many modern Augmented Reality frameworks are based on visual inertial odometry on mobile devices and therefore have limited access to tracking hardware (e.g., depth sensors). Consequently, tracking still suffers from drift, which makes it difficult to use in scenarios that require higher precision. To improve tracking quality, we propose a cloud tracking approach that uses machine learning based image segmentation to recognize known objects in a real scene, which allows us to estimate a precise camera pose. Augmented Reality applications that utilize our web service can use the resulting camera pose to correct drift from time to time, while still using local tracking between key frames. Moreover, the device's position in the real world when starting the application is usually used as the reference coordinate system. In contrast, our approach provides a well-defined, context-based coordinate system that does not depend on the starting position of the user, which simplifies the authoring of MR applications significantly. We present all steps from web-based initialization through the generation of synthetic training data to usage in production. In addition, we describe the underlying algorithms in detail. Finally, we show a mobile Mixed Reality application which is based on this novel approach and discuss its advantages.
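
A hedged sketch of the drift-correction idea follows: local tracking runs every frame, while key frames are sent to a cloud service that returns a precise camera pose. The endpoint URL and response fields are assumptions for illustration, not the authors' actual web service.

    // Illustrative only: send a key frame to a hypothetical cloud
    // segmentation/pose service and use its pose to correct local drift.
    async function correctDrift(keyFrameBlob, localPose) {
      const response = await fetch('https://example.com/segment-pose', {
        method: 'POST',
        body: keyFrameBlob,               // current camera image
      });
      const { cloudPose } = await response.json();
      // Simplest policy: replace the accumulated local pose when available.
      return cloudPose ?? localPose;
    }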

SESSION: Rendering and simulation

Defeating lag in network-distributed physics simulations: an architecture supporting declarative network physics representation protocols

Current shared worlds for games, simulations, AR, and VR rely on "good-enough", intuitively correct depictions of shared world state. This is inadequate for providing repeatable, verifiable results for decision-making, safety-related or equipment-in-the-loop simulations, or distributed multi-user augmented reality. These require world representations which are physically correct to a designer-defined level of fidelity and produce repeatable, verifiable results. A network-distributed dynamic simulation architecture, as illustrated in Figure 1, is presented, with consistent distributed state and a selective level of physics-based fidelity with known bounds on transient durations when state diverges due to external input. Coherent dynamic state has previously been considered impossible.

Web-based geometric acoustic simulator

Geometric acoustics is one of the most commonly used techniques for exploring what a virtual environment sounds like. These methods allow the user to place a sound source and receiver in a space and compute how the space influences what the receiver hears. These simulation methods are well known, but most are proprietary or difficult to implement. We present a web-based simulator that enables anyone with an internet connection to hear a virtual space quickly and easily. Our simulator is written in C++ and compiled to JavaScript for use in all major web browsers.
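
For flavor, the following hedged JavaScript sketch computes the arrival delay of a first-order reflection with the classic image-source method, a staple of geometric acoustics; the plane representation and names are illustrative and not taken from the simulator's code.

    // Mirror the source across a wall plane { normal: [nx,ny,nz], d } with
    // unit normal; the image source's distance to the receiver gives the
    // first-order reflection delay.
    function imageSourceDelay(source, receiver, plane, speedOfSound = 343) {
      const s = source.x * plane.normal[0] + source.y * plane.normal[1] +
                source.z * plane.normal[2] - plane.d;   // signed distance
      const img = {
        x: source.x - 2 * s * plane.normal[0],
        y: source.y - 2 * s * plane.normal[1],
        z: source.z - 2 * s * plane.normal[2],
      };
      const dist = Math.hypot(img.x - receiver.x, img.y - receiver.y,
                              img.z - receiver.z);
      return dist / speedOfSound;   // seconds until the reflection arrives
    }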

Real-time interactive platform-agnostic volumetric path tracing in WebGL 2.0

Path tracing has become a de facto standard for photo-realistic rendering due to its conceptual and algorithmic simplicity. Over the last few years, it has been successfully applied to the rendering of participating media, although it has not seen widespread adoption. Most implementations are targeted at specific platforms or hardware, which makes them difficult to deploy or extend. However, recent advancements in web technologies enable us to access graphics hardware from a web browser in a platform-agnostic manner. Therefore, in this paper, we present an implementation of a state-of-the-art volumetric path tracer developed in JavaScript using WebGL 2.0. The presented solution supports arbitrary 2D transfer functions and heterogeneous volumetric data; it aims to be interactive, platform-agnostic, and easily extensible, and it runs in real time on both desktop and mobile devices.
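
For readers unfamiliar with path tracing in heterogeneous media, the core free-path sampler is typically Woodcock (delta) tracking. The sketch below shows that standard technique in plain JavaScript for clarity; it is illustrative only and not the paper's WebGL shader code, and densityAt is a hypothetical lookup of the extinction coefficient along the ray.

    // Woodcock (delta) tracking: sample a free-flight distance through a
    // heterogeneous volume bounded by a majorant extinction sigmaMax.
    function sampleFreePath(tMax, sigmaMax, densityAt) {
      let t = 0;
      for (;;) {
        t -= Math.log(1 - Math.random()) / sigmaMax;  // tentative step
        if (t > tMax) return null;                    // exited the volume
        // Accept a real collision with probability sigma(t) / sigmaMax.
        if (Math.random() < densityAt(t) / sigmaMax) return t;
      }
    }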

Rule and reuse based lightweight modeling and real time web3D rendering of forest scenes

How to model and visualize large-scale forest scenes in a realistic and lightweight way remains an important problem in Web3D game and VR applications. This paper presents a novel solution for real-time Web3D visualization of huge forest scenes. The data size of huge forest scenes is reduced to a very low level by repetitively reusing branches, so that scenes can be transmitted over the mobile Internet and rendered in Web3D browsers instantly. At the same time, the visual morphology and shape variety of huge forest scenes can still be preserved with hierarchical rules that are more straightforward than the L-system. From one sample tree, we can extract its rules and grow them randomly to model large-scale virtual forest scenes and render them in a lightweight manner in Web3D browsers. Experimental results show that our proposed method outperforms existing Web3D forest modeling and rendering methods in terms of data volume, shape polymorphism, network bandwidth, and Web rendering, and achieves instant Web3D visualization of huge forests.
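
A hedged sketch of the rule-and-reuse idea: a small library of branch meshes is instanced recursively under simple hierarchical rules, so only references and transforms are stored instead of full tree geometry. All names and parameters below are illustrative, not the paper's actual rules.

    // Grow a tree description by reusing branch meshes from a shared
    // library; each node stores a mesh reference plus per-child transforms.
    function growTree(depth, branchLibrary, rng) {
      const branch = branchLibrary[Math.floor(rng() * branchLibrary.length)];
      const node = { mesh: branch.id, children: [] };  // reference, not a copy
      if (depth > 0) {
        const count = 2 + Math.floor(rng() * 3);       // 2-4 child branches
        for (let i = 0; i < count; i++) {
          node.children.push({
            rotation: [rng() * 40 - 20, rng() * 360, 0],  // random pose (deg)
            scale: 0.6 + rng() * 0.2,                     // shrink per level
            child: growTree(depth - 1, branchLibrary, rng),
          });
        }
      }
      return node;
    }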

SESSION: Collaborative environments and perception

Screen space 3D diff: a fast and reliable method for real-time 3D differencing on the web

We introduce Screen Space 3D Diff, an interactive method for fast and reliable visual differencing on the web. The method is based on per-pixel properties of 3D scenes projected into 2D space, such as depth, color, normals, UV coordinates, and texture. The central idea is that if two objects project into the same screen space with identical properties, they can be assumed to be identical; otherwise, they are different. In comparison to previous works that require large computational overheads in 3D space, our approach is significantly easier to implement and determines disparities faster, because screen space methods scale with resolution rather than scene complexity. They also lend themselves to massive parallelization on modern GPU hardware, and have the added advantage of being accurate at pixel level regardless of magnification. The combination of these benefits allows for instant real-time differencing with minimal disruption to the rendering pipeline, even in web browsers. We demonstrate two different implementations of this method: one in a desktop application and one in the Unity 3D game engine on the web. The performance of the proposed method across a range of devices is presented, and the practical benefits of our approach are further evaluated in a user study of twenty professionals with several large-scale models that are typical of the Building Information Modelling paradigm. Based on this evaluation, we conclude that our method is invaluable in day-to-day engineering workflows and is well suited to its purpose.
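
The following hedged sketch illustrates the central idea on the CPU: render both model revisions from the same camera, then compare per-pixel attributes (here depth and color) to flag differences. The buffer layout (RGBA color plus a float depth buffer) is an assumption for illustration; the paper's GPU implementation would do this in a shader.

    // Compare two rendered views pixel by pixel; a non-zero mask entry
    // marks a pixel where the two revisions differ.
    function diffBuffers(colorA, colorB, depthA, depthB, eps) {
      const mask = new Uint8Array(depthA.length);
      for (let i = 0; i < depthA.length; i++) {
        const depthDiff = Math.abs(depthA[i] - depthB[i]) > eps;
        const colorDiff =
          colorA[4 * i]     !== colorB[4 * i] ||
          colorA[4 * i + 1] !== colorB[4 * i + 1] ||
          colorA[4 * i + 2] !== colorB[4 * i + 2];
        mask[i] = depthDiff || colorDiff ? 255 : 0;  // 255 = changed pixel
      }
      return mask;  // overlay as a highlight texture
    }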

A web-based solution for collaborative design supporting multiple CAD systems

Collaborative engineering design is a time-consuming process because the collaborating partners are spatially distant and familiar with divergent software solutions due to their local circumstances. A data compatibility problem arises between collaborating groups because each group uses a particular software solution that differs from the others. This paper presents a platform-independent, web-based solution for a collaborative CAD system that also supports files from multiple CAD systems. The implemented solution saves time by providing web-based editing and visualization, and preserves the design intent, which makes designs reusable.

Semantic 4-dimensional modeling of VR content in a heterogeneous collaborative environment

Interactive 3D content is gaining increasing use in VR/AR applications in different domains, such as education, training, engineering, spatial and urban planning, as well as architectural and interior design. While modeling and presenting interactive 3D scenes in collaborative VR/AR environments, different 3D objects are added, modified, and removed by different users, which leads to the evolution of the scenes over time. Representation of VR content covering temporal knowledge is essential to enable exploration of such time-dependent VR/AR content. However, the available approaches do not enable exploration of VR content with regard to its time-dependent components and properties, which limits their usage in web-based systems. The main contribution of this paper is a 4-dimensional representation of VR content, which encompasses time as the fourth dimension. The representation is based on semantic web standards and ontologies, which enable the use of domain knowledge for collaborative creation and exploration of content. This could improve the availability of VR/AR applications to domain specialists without expertise in 3D graphics and animation, thus improving the overall dissemination of VR/AR on the web. The representation has been implemented in a heterogeneous collaborative VR environment for urban design.
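
To give a feel for what a time-aware, semantic scene description might look like, here is a hedged JSON-LD sketch; the vocabulary and property names are invented for illustration and are not the paper's ontology.

    // Hypothetical 4D (time-aware) description of one scene object: the
    // validity interval records when this state of the object existed.
    const sceneSnapshot = {
      "@context": { "vr": "http://example.org/vr#" },
      "@id": "vr:building12",
      "@type": "vr:SceneObject",
      "vr:position": { "vr:x": 10, "vr:y": 0, "vr:z": -4 },
      "vr:validFrom": "2018-03-01T09:00:00Z",   // fourth dimension: time
      "vr:validTo": "2018-03-15T17:30:00Z",
      "vr:author": "user42"
    };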

Webizing collaborative interaction space for cross reality with various human interface devices

Recently, the web-based cross reality (XR) concept, embracing virtual reality (VR), augmented reality (AR), and mixed reality (MR) technology, has emerged, but the user interaction devices supported for collaborative work in web environments are limited. The Virtual-Reality Peripheral Network (VRPN) in the traditional VR environment provides a device-independent and network-transparent interface. To promote the development of web-based XR applications, a common method is required that supports a collaborative interaction space across VR, AR, and MR contexts, according to the user environment, and accommodates the various interaction devices without limiting XR content. In this paper, we propose a webizing method for a collaborative interaction space that provides user authentication and manages user sessions in XR environments. In addition, the webizing method supports human interface devices and related events via an interaction adaptor that delivers events based on the user session and converts event messages according to XR content types, such as VRPN messages, X3D sensor data, and HTML DOM events.
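
A minimal sketch of the conversion step such an interaction adaptor performs might look as follows; the incoming message fields and the event name are assumptions for illustration, not the proposed method's actual interface.

    // Normalize a device message (e.g., decoded from VRPN) and re-dispatch
    // it as a DOM CustomEvent so web XR content can listen with standard
    // DOM handlers.
    function adaptDeviceEvent(msg, target) {
      // msg: { device: 'tracker0', sensor: 1, pos: [x,y,z], quat: [x,y,z,w] }
      const event = new CustomEvent('xr-tracker', {
        detail: { device: msg.device, sensor: msg.sensor,
                  position: msg.pos, orientation: msg.quat },
      });
      target.dispatchEvent(event);   // X3D sensors or DOM listeners react
    }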

The value of 3D models and immersive technology in planning urban density

This project explores the difficulties of increasing density in a college town struggling with how to plan for population growth. It presents a design concept for a section of Downtown Blacksburg, Virginia that meets the varied planning goals of the community. It also experiments with an innovative way of presenting plans with 3D computer models, prompting discussion about the vision by inviting a group of people to experience 3D models of the concept in an immersive display. A select group of participants completed surveys, viewed presentations of 3D computer models of conceptual developments in Blacksburg, and discussed their opinions and thoughts about the models and proposed ideas. The findings suggest that 3D modeling can be a better planning tool for helping decision-makers understand density and quality design than typical planning tools based on 2D presentations.

SESSION: Animation and tracking

Direct manipulation of blendshapes using a sketch-based interface

We introduce a method that localizes the direct manipulation of blendshape models for facial animation with a customized sketch-based interface. Direct manipulation methods address the cumbersome weight-editing process of traditional tools with a practical "pin-and-drag" operation performed directly on the 3D facial model. However, most direct manipulation methods have a global deformation impact, which leads to unintuitive and unexpected results. To this end, we propose a new way to localize direct manipulation, using geodesic circles to confine edits to the local geometry. Inspired by artists' brush painting on canvas, we additionally introduce a sketch-based interface as an application that provides direct manipulation and produces expressive facial poses efficiently and intuitively. Our method allows artists to simply sketch directly onto the 3D facial model and automatically produces the corresponding manipulation until the desired facial pose is obtained. We show that localized blendshape direct manipulation has the potential to reduce the time-consuming blendshape editing process to an easy freehand stroke drawing.
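
As a hedged illustration of the localization idea (not the authors' formulation), vertex displacements can be attenuated by a smooth falloff over geodesic distance so that an edit vanishes outside a geodesic circle; geodesicDistances is assumed to be precomputed from the pinned vertex.

    // Apply a pin-and-drag delta to vertices, weighted by a cosine falloff
    // over geodesic distance; vertices beyond the radius are untouched.
    function applyLocalizedEdit(vertices, delta, geodesicDistances, radius) {
      for (let i = 0; i < vertices.length; i++) {
        const d = geodesicDistances[i];
        if (d > radius) continue;                 // outside the geodesic circle
        const w = 0.5 * (1 + Math.cos(Math.PI * d / radius)); // smooth falloff
        vertices[i].x += w * delta.x;
        vertices[i].y += w * delta.y;
        vertices[i].z += w * delta.z;
      }
    }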

Query-based composition of animations for 3D web applications

In this paper, we present a pipeline for animated 3D web content creation, which is based on semantic composition of 3D content activities into more complex animations. The use of knowledge representation aligns the approach with current trends in web development and enables modeling of animations using different concepts, at arbitrary abstraction levels, which makes the approach intelligible to domain experts without technical skills. Within the pipeline, we use the OpenStage 2 motion capture system and the Unity game engine.

Dynamo - dynamic 3D models for the web: a declarative approach to dynamic and interactive 3D models on the web using x3dom

Animations contribute greatly to the presentation of a sophisticated man-made object. With additional interactivity, a user can explore such an object to gain an even better understanding. The authoring of such dynamic models is often very resource-demanding, as the animations and interaction logic have to be expressed in an authoring program, often using a programming language. Furthermore, the 3D model and its dynamic capabilities are tightly coupled, which makes it costly to integrate changes to the underlying 3D model - e.g. if a machine part changes.

Contribution and Benefit. We present a novel scheme to define varying states of a 3D model in a decoupled, declarative manner using pattern matching. Furthermore, we demonstrate its capabilities for the web with an open source JavaScript implementation that operates on an x3dom model.
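
To illustrate what a decoupled, declarative state definition could look like, here is a hedged sketch against a DOM-integrated x3dom scene; the state object shape and field names are invented for illustration and are not the Dynamo API.

    // A state declares a pattern over node names and the field values to
    // apply; it lives apart from the 3D model itself.
    const openState = {
      match: { def: /^door.*/ },               // pattern over DEF names
      apply: { rotation: '0 1 0 1.57' },       // target field values
    };

    function applyState(state, x3dRoot) {
      x3dRoot.querySelectorAll('Transform').forEach((node) => {
        if (state.match.def.test(node.getAttribute('DEF') || '')) {
          for (const [field, value] of Object.entries(state.apply)) {
            node.setAttribute(field, value);   // x3dom observes DOM changes
          }
        }
      });
    }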

Efficient pose tracking from natural features in standard web browsers

Computer Vision-based natural feature tracking is at the core of modern Augmented Reality applications. Still, Web-based Augmented Reality typically relies on location-based sensing (using GPS and orientation sensors) or marker-based approaches to solve the pose estimation problem.

We present an implementation and evaluation of an efficient natural feature tracking pipeline for standard Web browsers using HTML5 and WebAssembly. Our system can track image targets at real-time frame rates on tablet PCs (up to 60 Hz) and smartphones (up to 25 Hz).
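
A minimal sketch of the browser-side frame loop of such a pipeline is shown below: HTML5 APIs grab camera frames and hand the pixels to a WebAssembly tracking module. The wasmTracker object, its track() export, and the render() hook are assumptions for illustration, not the authors' actual interface.

    // Pump camera frames through a hypothetical WASM natural-feature
    // tracker; when a pose is found, hand it to the renderer.
    async function runTracking(video, wasmTracker) {
      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const ctx = canvas.getContext('2d');
      function tick() {
        ctx.drawImage(video, 0, 0);
        const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
        const pose = wasmTracker.track(data);  // natural-feature matching
        if (pose) render(pose);                // hypothetical renderer hook
        requestAnimationFrame(tick);
      }
      tick();
    }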

SESSION: Annotation and semantics

An architecture for distributed semantic augmented reality services

In this paper, we present an architecture for distributed augmented reality (AR) services. The presented architecture is based on a client-server design, which supports semantic modeling and building contextual AR presentations for a large number of users. It encompasses two approaches: SOA and CARE. The Service-Oriented Architecture enables building distributed systems that provide application functionality as services, either to end-user applications or to other services. In turn, CARE allows semantic modeling of AR environments, which, combined with SOA, enables dividing responsibility between loosely coupled semantic services distributed on the Internet. We also provide the results of an experimental evaluation of the performance of a prototype based on the above-mentioned architecture. The findings are promising and demonstrate that an application of semantic web techniques can be an effective approach to the implementation of large-scale contextual distributed AR services.

A scalable WebGL-based approach for visualizing massive 3D point clouds using semantics-dependent rendering techniques

3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales across client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
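
The out-of-core, level-of-detail strategy mentioned above typically boils down to a traversal like the following hedged sketch: visit a multi-resolution octree and queue only nodes whose projected size justifies loading, so billions of points never need to be resident at once. The node and camera interfaces are hypothetical, not the system's actual data structures.

    // Collect octree nodes worth rendering for the current view, within a
    // global point budget; unloaded nodes are simply scheduled for fetch.
    function collectVisibleNodes(node, camera, budget, out) {
      if (out.points >= budget) return;              // stay within budget
      if (!camera.frustumIntersects(node.bounds)) return;
      if (camera.projectedSize(node.bounds) < 1.0) return; // too small on screen
      out.nodes.push(node);                          // schedule load/render
      out.points += node.pointCount;
      for (const child of node.children) {
        collectVisibleNodes(child, camera, budget, out);
      }
    }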

Semantic approach to visualization of the research front of scientific papers using web-based 3D graphics

In this paper we describe a semantic approach to visualization of a 3D cyberspace of scientific papers and their research front using web-based 3D graphics. The most cited and significant documents in this cyberspace are represented by spheres of a large size, and the distance between documents is proportional to their semantic similarity. A new measure of semantic similarity of documents is proposed that is determined by the maximum correlation between explicit and implicit connectivity of the documents. A new science contextual citation index (SCCI), defined by a correlation maximum with the science citation index (SCI), is proposed and implemented. SCCI can more accurately measure scientific impact, find significant documents, and evaluate new articles with zero SCI. Significant similar articles confirm each other and form clusters in the cyberspace; the research front exists as a set of such clusters. The proposed cyberspace, implemented with WebVR and interactive 3D graphics, can be considered a dynamic learning environment that is convenient for discovering new significant articles, ideas, and trends. It can be a powerful tool for information integration, because it allows documents in different languages to be visualized in a single space.

MoST: a 3D web architectural style for hybrid model data

Within this paper, we present a novel 3D web architectural style which allows building format-agnostic 3D model graphs on the basis of ReSTful principles. We generalize the abstract definitions of RFC 2077 and allow composing models and model fractions while transferring the "Media Selection URI" to a new domain. We present a best-practice subset of HTTP/HTTPS and ReST to model authorization, data change, and content format negotiation within a single efficient request. This allows implementations to handle massive graphs with hybrid format configurations on the very efficient HTTP transport layer, without further application intervention.
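
As a hedged illustration of the kind of single-request interaction such a style implies, format negotiation and authorization can ride on standard HTTP headers; the URL, query parameter, and media types below are assumptions for illustration, not MoST's actual interface.

    // Fetch one model fraction, negotiating the content format and passing
    // authorization in the same request.
    async function fetchFraction() {
      const response = await fetch('https://example.com/models/42?select=wheel', {
        headers: {
          'Accept': 'model/gltf+json, model/x3d+xml;q=0.8', // format negotiation
          'Authorization': 'Bearer <token>',                 // authorization
        },
      });
      return response.json();
    }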

The system should be attractive to platform and service providers aiming to increase their ability to build 3D data application mashups with a much higher level of interoperability. We also hope to inspire standardization organizations to link generic "model/*" formats to semantics defined in RFC 2077, such as "compose".

Dynamic annotations on an interactive web-based 360° video player

The use of 360° videos has been increasing steadily in the 2010s, as content creators and users search for more immersive experiences. However, the freedom to choose where to look during the video may hinder the overall experience instead of enhancing it, as there is no guarantee that the user will focus on relevant sections of the scene. Visual annotations superimposed on the video, such as text boxes or arrow icons, can help guide the user through the narrative of the video while maintaining freedom of movement. This paper presents a web-based immersive visualizer for 360° videos that contain dynamic media annotations, rendered in real time. A set of annotations was created with the purpose of providing information or guiding the user to points of interest. The visualizer can be used on a computer, with a keyboard and mouse or an HTC Vive, and on mobile devices with Cardboard VR headsets to experience the video in virtual reality, which is made possible by the WebVR API. The visualizer was evaluated through usability tests to analyze the impact of different annotation techniques on the users' experience. The obtained results demonstrate that annotations can assist in guiding the user during the video, and that careful design is imperative so that they are not intrusive and distracting for viewers.

POSTER SESSION: Poster abstracts

A virtual car showroom

In this poster, we present a virtual car showroom. The use of virtual and augmented reality (VR/AR) is becoming increasingly popular in various application domains, including prototyping, marketing, and merchandising. In the car industry, VR/AR systems enable rapid creation and evaluation of virtual prototypes by domain specialists as well as potential customers. In this work, we present a virtual car showroom implemented using the Unity game engine, an HMD, and a gesture tracking system. The application enables immersive presentation of and interaction with 3D cars in a virtual environment.

Campus SAGA: historical 360 degree VR and location based AR

Buildings that have disappeared from a university campus are always an unforgettable memory for alumni. The authors developed a web application that allows users to view historical 360 degree spherical panoramas when approaching the GPS location of historical, now-disappeared campus scenery. A database of spherical panoramas of the campus was recorded over the past 20 years, from spring to winter and at important events, so it can provide multi-dimensional memories to alumni through the AR interface.

Design of an AR and VR supported SCORM courseware player

In this research, the authors designed a SCORM courseware player based on the Flash ActionScript 3 language, which can support more media types than other SCORM courseware players do; in particular, it supports interactive media such as virtual reality and augmented reality content. The learning location can also be recorded accurately, providing more flexibility to educators.

3D model making patterns for active architectural visualization: guidelines for graphic designers cooperating with software developers

The paper presents guidelines for cooperation between software developers and graphic designers creating urban visualizations.

Super-resolution of interpolated downsampled semi-dense depth map

We study depth map reconstruction for the specific task of fast, rough depth approximation from sparse depth samples obtained from low-cost depth sensors or SLAM algorithms. We propose a model that interpolates downsampled semi-dense depth values and then applies super-resolution. We compare our method with state-of-the-art approaches that transfer RGB information to depth. It appears that the proposed approach can be used to approximately estimate high-resolution depth maps.

Novel fitness function for 3D image reconstruction using bat algorithm based autoencoder

To investigate, examine, and analyze the performance of 3D medical image compression based on autoencoder neural networks, a novel algorithm named the auto-image reconstruction algorithm is developed, based on the recent BAT bio-inspired algorithm with a novel fitness function, the Mean Square Error. It is observed that the proposed algorithm outperforms existing algorithms, such as wavelet-based encoding and decoding, for image compression and reconstruction in terms of Mean Square Error, Peak Signal-to-Noise Ratio (PSNR), and Compression Ratio (CR).

A configurable virtual reality store

In this poster, we present a configurable virtual reality store. The virtual store may be displayed using immersive VR systems, such as a CAVE or an HMD, to enable merchandising research, or on the web to enable creation of 3D e-commerce applications. In the proposed solution, the visualized shopping space is created dynamically as a combination of three elements: virtual exhibition model, product models and virtual exhibition data, enabling easy configuration of exhibitions by domain experts and researchers.

POSTER SESSION: Industrial poster abstracts

Applications of web3D technology in architecture, engineering and construction

Architecture, engineering and construction (AEC) has witnessed a boom in the use of web3D technologies over the past few years, thanks to the proliferation of WebGL support in modern web browsers and the ever-increasing need for multidisciplinary collaboration across large-scale construction projects. In this summary, we present a number of AEC-specific requirements and a set of applications for interactive visualization via the Internet based on the 3D Repo open source platform. These span collaborative design, 3D change and clash detection, data mining, maps integration, and health and safety training. The presented use cases were developed in collaboration with large AEC companies, namely Atkins, Balfour Beatty, Canary Wharf Contractors, Costain, and Skanska in the UK.

Non-invasive 3D data access for PLM work flows: web-based integration of 3D data visualization with PLM work flows on a service level

We present the instant3Dhub solution as a showcase of the use of today's web technologies for connecting business data and 3D data in the context of PLM systems and modern service platforms or cloud environments. This solution allows companies to easily enrich their applications and workflows with 3D visualization in a non-invasive way. Concepts and benefits are illustrated by examples of successfully integrated solutions.

Embodied engineering: the use of embodiment theories in production planning with VR

Virtual Reality provides a unique opportunity to improve modern production planning processes. Using the first-person perspective and the user's body as the source of realistic constraints, any user can experience the design challenges at a given workstation. By grounding VR user experience design in embodiment theories, we propose a new human-in-the-loop paradigm for production planning. We call it Embodied Engineering (EE).

Virtual reality training of hard and soft skills in production

The paper presents the development and test procedure of two Virtual Reality systems for training skills necessary in a modern production company. The two systems allow teaching hard skills (example: forklift operation) and soft skills (example: quality management tools). The basic ideas, the concepts of the virtual courses, and their test procedure are presented, along with the obtained results.

PSNC advanced multimedia and visualization infrastructures, services and applications

Poznan Supercomputing and Networking Center (PSNC) offers advanced visualisation and multimedia infrastructures as a set of dedicated laboratories for conducting innovative research and development projects involving both academia and industry. In this short overview, we present the existing facilities located at the PSNC campus in Poznan, Poland, as well as short descriptions of example applications and networked services which have been recently developed.