Creating photorealistic avatar head models from multi-view or monocular images is crucial for applications like novel view synthesis and face reenactment. Recent methods fit a 3D Gaussian Splatting (3DGS) model anchored on a 3D Morphable Model (3DMM) to input images using an optimization scheme. This paper introduces a novel 3DGS approach for lightweight setups with static images from sparse viewpoints and a limited set of facial expressions. It combines 3D Gaussian splats and neural networks in two stages: efficient geometry-aware density control of Gaussian primitives, and an optimization scheme that leverages neural networks to enrich the 3DGS model. We evaluate our method on a new high-resolution multi-view face image dataset and demonstrate that it outperforms the state of the art in rendering novel views of human faces, both qualitatively and quantitatively.
Interactive web-based 3D visualization of large tabular datasets requires efficient loading to ensure quick access, responsiveness, and interactivity. Current web technologies, in particular JavaScript-based CSV parsers, face significant performance bottlenecks due to JavaScript's inherently single-threaded execution and inefficient data handling mechanisms. To overcome these limitations, this paper presents a multithreaded loading and parsing approach for tabular data that leverages Web Workers and the Streams API. Our method partitions input data into manageable batches that are processed in parallel, significantly reducing parsing time. Furthermore, the parsed data is stored directly in ArrayBuffers, enabling efficient transfer to GPU memory using WebGL or WebGPU, which is critical for visualizing large datasets. We evaluate our parser against state-of-the-art parsers and demonstrate substantial performance gains: it parses a dataset of 10 million rows significantly faster than the fastest state-of-the-art parsers for the web. Our approach offers a robust and scalable solution tailored for real-time, web-based 3D information and data visualization such as scatter plots, treemaps, and information landscapes.
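To make the batching scheme concrete, the following TypeScript sketch partitions a byte stream at row boundaries and fans the batches out to a pool of Web Workers; the worker script name ("parseWorker.js"), the message shape, and the Float64Array result layout are illustrative assumptions rather than the paper's actual implementation.

```typescript
// Minimal sketch: stream CSV bytes, cut batches at the last newline so each
// batch holds only complete rows, and hand batches to workers round-robin.
const WORKER_COUNT = navigator.hardwareConcurrency ?? 4;
const workers = Array.from({ length: WORKER_COUNT }, () => new Worker("parseWorker.js"));

async function parseCsvParallel(stream: ReadableStream<Uint8Array>): Promise<Float64Array[]> {
  const batches: Promise<Float64Array>[] = [];
  const reader = stream.getReader();
  let carry = new Uint8Array(0); // bytes of a trailing, incomplete row
  let next = 0;

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const cut = value.lastIndexOf(0x0a); // last '\n' in this chunk
    if (cut === -1) { // no complete row yet; keep buffering
      const merged = new Uint8Array(carry.length + value.length);
      merged.set(carry); merged.set(value, carry.length);
      carry = merged;
      continue;
    }
    const batch = new Uint8Array(carry.length + cut + 1);
    batch.set(carry);
    batch.set(value.subarray(0, cut + 1), carry.length);
    carry = value.slice(cut + 1);

    const worker = workers[next++ % WORKER_COUNT];
    batches.push(new Promise((resolve) => {
      const { port1, port2 } = new MessageChannel(); // one reply port per batch
      port1.onmessage = (e: MessageEvent<Float64Array>) => resolve(e.data);
      worker.postMessage({ batch, port: port2 }, [batch.buffer, port2]);
    }));
  }
  // A final non-empty `carry` (file without trailing newline) would be
  // parsed on the main thread; omitted for brevity.
  return Promise.all(batches); // numeric columns, ready for GPU upload
}
```

Transferring the batch's ArrayBuffer (rather than copying it) and replying over a dedicated MessagePort keeps the main thread free and avoids reply races when one worker holds several in-flight batches.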
The complex visual characteristics of minerals, particularly those exhibiting crystalline properties such as transparency, refraction, inclusions, and iridescence, pose a significant challenge for conventional 3D scanning techniques like photogrammetry and surface scanning. To address this limitation, we investigate 3D Gaussian Splatting (3DGS), a novel volumetric rendering method, to faithfully reproduce the view-dependent optical phenomena of minerals. We describe the image acquisition and post-processing workflow, along with the GPU-server-based training process for 3DGS models. We compare visualizations generated using 3DGS with those produced by photogrammetry and demonstrate the impact of the spherical harmonics degree on the visual quality of 3DGS mineral renderings. A selection of mineral specimens from a scientific collection has been made available through an interactive web-based virtual mineral collection.
Neural rendering has recently emerged as a powerful technique for generating photorealistic 3D content, enabling high levels of visual fidelity. In parallel, web technologies such as WebGL and WebGPU support platform-independent, in-browser rendering, allowing broad accessibility without the need for platform-specific installations. The convergence of these two advancements opens up new possibilities for delivering immersive, high-fidelity 3D experiences directly through the web. However, achieving such integration remains challenging due to strict real-time performance requirements and limited client-side resources. To address this, we propose a hybrid rendering framework that offloads high-fidelity 3D Gaussian Splatting (3DGS) and 4D Gaussian Splatting (4DGS) processing to a server, while delegating lightweight mesh rendering and final image composition to the client via depth-based screen-space fusion. This architecture ensures consistent performance across heterogeneous devices, reduces client-side memory usage, and decouples rendering logic from the client, allowing seamless integration of evolving neural models without frequent re-engineering. Empirical evaluations show that the proposed hybrid architecture reduces the computational burden on client devices while consistently maintaining performance across diverse platforms. These results highlight the potential of our framework as a practical solution for accessible, high-quality neural rendering across diverse web platforms. (The project page is available at: )
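The depth-based screen-space fusion step can be illustrated with a small sketch: for each pixel, the compositor keeps whichever source (server-streamed splat frame or locally rendered mesh) is nearer to the camera. The buffer layouts are assumptions, and a real implementation would run in a fragment shader rather than on the CPU.

```typescript
// CPU-side sketch of depth-based screen-space fusion over RGBA8 color
// buffers paired with per-pixel linear depth.
interface Layer { color: Uint8ClampedArray; depth: Float32Array; }

function fuse(server: Layer, client: Layer, out: Uint8ClampedArray): void {
  for (let i = 0; i < server.depth.length; i++) {
    // Smaller depth value means closer to the camera: that source wins.
    const src = server.depth[i] <= client.depth[i] ? server : client;
    out.set(src.color.subarray(i * 4, i * 4 + 4), i * 4);
  }
}
```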
Efficient processing and accurate representation of point clouds are crucial for many tasks, such as real-time 3D scene and avatar reconstruction. Especially for web/cloud-based streaming and telepresence, minimizing time, size, and bandwidth becomes paramount. We propose a novel approach for compact point cloud representation and efficient real-time streaming using a generative model consisting of a hierarchy of overlapping Gaussian Mixture Models (GMMs). Our level-wise construction scheme allows for dynamic construction and rendering of LODs, progressive transmission, and bandwidth- and computing-power-adaptive transmission. Utilizing temporal coherence in sequential input, we reduce construction time significantly. Together with our highly optimized and parallelized CUDA implementation, we achieve real-time speeds with high-fidelity reconstructions. Moreover, we achieve compression factors up to 59% higher than previous work, with only slightly lower accuracy.
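As a rough illustration of the generative idea only (not the paper's method, which uses anisotropic Gaussians, overlapping levels, and temporal coherence), the following sketch reconstructs a renderable point set from one LOD level of a mixture by sampling each component, simplified here to isotropic Gaussians.

```typescript
// Sketch: turn one GMM level back into points by sampling each component in
// proportion to its weight. All structure choices are assumptions.
interface Gaussian { mean: [number, number, number]; sigma: number; weight: number; }

function sampleLevel(level: Gaussian[], totalPoints: number): Float32Array {
  const pts = new Float32Array(totalPoints * 3);
  const wSum = level.reduce((s, g) => s + g.weight, 0);
  let p = 0;
  for (const g of level) {
    const n = Math.round((g.weight / wSum) * totalPoints);
    for (let k = 0; k < n && p < totalPoints; k++, p++) {
      for (let d = 0; d < 3; d++) {
        // Box-Muller: standard normal sample, scaled to this component.
        const u = 1 - Math.random(), v = Math.random();
        const z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
        pts[p * 3 + d] = g.mean[d] + g.sigma * z;
      }
    }
  }
  return pts.subarray(0, p * 3);
}
```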
The Permafrost Tunnel Research Facility near Fox, Alaska, is a unique excavated site that provides physical access to permafrost, including up to 40,000-year-old ground ice, along with modern ice intrusions from thaw above the tunnel that have solidified due to its artificial cooling. We present a Web3D-based digital twin of the Permafrost Tunnel Research Facility created to document this permafrost environment and support Arctic infrastructure planning. Using LiDAR scanning and photogrammetry (including Matterport capture), we reconstructed the tunnel in high fidelity and deployed it through open web technologies (X3D/X3DOM and WebXR) for immersive access via standard browsers and VR/AR devices. This digital twin enables researchers, educators, and the public to virtually explore permafrost features, such as Pleistocene ice wedges and fossils, previously accessible only to specialists. We describe our methodology for site documentation, model processing, and platform integration, and we evaluate the visual fidelity, interactivity, and performance of the resulting 3D web experience. Results demonstrate that an interactive, cross-platform virtual tunnel can engage users in understanding permafrost dynamics and landscape change. We discuss how such digital twins act as digital legacies, preserving records and enabling immersive futures for science communication and site planning. Limitations (e.g., data size, web graphics constraints) are discussed along with future improvements. This work illustrates the potential of Web3D digital twins to safeguard digital heritage while supporting remote site visualization in Arctic regions.
This study explores the application of Additive Manufacturing (AM) for the reconstruction of the Etemenanki temple tower in Babylon. The research focuses on creating a 3D model from archaeological findings using Computer-Aided Design (CAD) software. The modeling process resulted in two distinct levels of detail: an abstract model emphasizing geometry and a detailed model incorporating textures and colors derived from historical records through rendering. A Polyjet 3D printer was employed to fabricate physical models. The 3D model represents a digital twin of the 3D-printed physical model. The design process involved addressing challenges related to the model's size, the complexity of replicating brickwork and textures, and the limitations inherent in 3D printing. The model was strategically divided into individual parts, facilitating easier 3D printing and assembly. The results highlight the potential of AM in visualizing archaeological sites and its flexibility in representing diverse interpretations of historical structures. This approach allows for both abstract and detailed visualizations within a single model. Furthermore, the study provides a discussion of production costs and time frames, offering practical insights into the feasibility of 3D printing for architectural reconstructions. The methodology and solutions developed can be applied to other archaeological projects, enabling the creation of detailed and customizable models for research and presentation purposes.
Integrating Extended Reality (XR) technologies into Cultural Heritage (CH) sites enhances visitor engagement, increases site visibility, supports sustainable management, and facilitates the preservation and accessibility of cultural content. While these benefits are widely recognized, their application is particularly critical in complex urban environments such as Rome, where major, internationally known monuments often overshadow a vast network of smaller, yet historically significant, sites. In these contexts, XR technologies are not only vital for animating spaces with few surviving artifacts or those difficult to reconstruct, but also for boosting the visibility and appreciation of lesser-known heritage locations. Furthermore, given the operational constraints typical of CH sites, such as conservation requirements, restricted access, and limited budgets, XR solutions offer the added advantage of reducing interpretive and exhibition costs while enabling more flexible and sustainable forms of public engagement. This work presents such a case study: the Bottega space of the Roman Houses on the Celio Hill, where XR technologies, combined with photogrammetry, 3D graphics, and 3D printing, have been deployed to overcome the challenges of limited physical remains, enhance the visitor experience, communicate the cultural significance of the space, and optimize resource use.
Mixed reality (MR) offers transformative potential for social interaction by blending physical and virtual worlds. However, achieving congruent shared experiences is challenging, as virtual elements must seamlessly integrate with multiple users' diverse, often conflicting, physical environments. While universal solutions for all social MR scenarios remain elusive, we argue that for specific, common interaction paradigms, such as a seated, tabletop chess application, a more robust congruency can be achieved. We propose a conceptual shift towards using static virtual anchors as the primary reference points within the shared MR space. Instead of users independently manipulating shared virtual objects for local calibration, they align and scale themselves relative to a stationary virtual scene, yielding a congruent MR experience despite differently sized physical chessboards. Moreover, because chess interaction is primarily local to each player, individual physical pieces can provide congruent haptic feedback. The proposed concept was implemented and is available on the Meta Horizon platform.
We present an early-stage framework for the intelligent evaluation and optimization of remote workspaces using Artificial Intelligence (AI), with a vision toward future integration into Digital Twins and Web3D environments. The proposed system leverages computer vision techniques to identify ergonomic, safety, and comfort-related issues from workspace images, and suggests improvements through an interactive checklist interface. Current components include image-based hazard detection using object recognition models (e.g., YOLO), depth and segmentation models, and a rule-based recommendation engine. We also explore the potential integration of ergonomic assessments through pose estimation (e.g., BlazePose) and environmental parameters such as CO2, light, and noise via smart sensors for future development of a Digital Twin interface. A preliminary case study involving a home office and a coworking space demonstrates the effectiveness of the visual analysis pipeline and recommendation features. While real-time sensor integration and full 3D workspace reconstruction remain future work, our results suggest that AI-based workspace analysis can serve as a foundation for scalable, user-centric ergonomic assessment platforms.
This paper presents an educational framework that combines low-cost 3D photogrammetry with digital fabrication to support interdisciplinary learning in design, art, education, and museology. Students digitized and reproduced clay sculptures by the iconic Brazilian popular artist Mestre Vitalino, creating both virtual and physical exhibitions. The project built skills in 3D capture, post-processing, web design, and curation, while encouraging critical reflection on cultural engagement and on the ways digital media and digital fabrication can mediate and reinterpret heritage.
We present X3Test, an automated testing and performance benchmarking framework designed to address the absence of dedicated tools for X3D scenes on the web. X3Test employs Node.js and Puppeteer to load X3D/X3DOM scenes in a headless browser, simulate user interactions, and collect runtime performance metrics like frame rate (FPS), render timings, and scene graph size. This is achieved by leveraging X3DOM’s runtime API, which allows data extraction without requiring modifications to the existing content. The framework offers both a command-line interface (CLI) and a programmatic API, enabling developers to specify scene URLs, trigger events such as animations or camera changes, and customize test parameters like duration. We evaluated X3Test across diverse scenarios, from simple static scenes and animated content to the complex "3D Roanoke" city model, demonstrating its capability to effectively capture performance variations linked to scene complexity and dynamic elements. X3Test provides a robust solution for systematic Web3D performance analysis, with future plans including RenderDoc support and the collection of more advanced metrics.
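A minimal version of such a headless measurement loop might look as follows; the scene URL is a placeholder, and a generic requestAnimationFrame probe stands in for the X3DOM runtime API calls that X3Test actually uses.

```typescript
// Sketch: load a scene with Puppeteer and sample average FPS in-page.
import puppeteer from "puppeteer";

async function measureFps(url: string, durationMs = 5000): Promise<number> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });

  const fps = await page.evaluate(async (ms: number) => {
    let frames = 0;
    const start = performance.now();
    await new Promise<void>((resolve) => {
      const tick = () => {
        frames++;
        performance.now() - start < ms ? requestAnimationFrame(tick) : resolve();
      };
      requestAnimationFrame(tick);
    });
    return (frames * 1000) / ms;
  }, durationMs);

  await browser.close();
  return fps;
}

// Usage with a placeholder URL:
measureFps("http://localhost:8080/scene.html").then((f) =>
  console.log(`average FPS: ${f.toFixed(1)}`));
```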
We introduce PanoStyleVR, an immersive web-based framework for analyzing, ranking, and interactively applying style similarities within panoramic indoor scenes, enabling stereoscopic virtual exploration and photorealistic style adaptation. A key innovation of our system is a fully immersive WebXR interface, allowing users wearing head-mounted displays to navigate indoor environments in stereo and apply new styles in real time. Style suggestions are visualized through floating thumbnails rendered in the VR space; selecting a style triggers photorealistic transfer on the current room view and updates the immersive stereo representation. This interactive pipeline is powered by two integrated neural components: (1) a geometry-aware and shading-independent GAN-based framework for semantic style transfer on albedo-reflectance representations; and (2) a gated architecture that synthesizes omnidirectional stereoscopic views from a single 360° panorama for realistic depth-aware exploration. Our system enables cosine-similarity-based style ranking, t-SNE-driven dimensionality reduction, and GMM-based clustering over large-scale panoramic datasets. These components support an immersive recommendation mechanism that connects stylistic analysis with interactive editing. Experimental evaluations on the Structured3D dataset demonstrate strong alignment between perceptual similarity and our proposed metric, and effective grouping of panoramas based on latent style representations.
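The cosine-similarity ranking step is straightforward to sketch; extracting the latent style embeddings (assumed here to be precomputed Float32Arrays produced by the GAN component) is out of scope.

```typescript
// Cosine similarity between two style embeddings.
function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate styles against the current room's embedding.
function rankStyles(query: Float32Array, styles: Map<string, Float32Array>,
                    topK = 5): [string, number][] {
  return [...styles]
    .map(([id, v]) => [id, cosine(query, v)] as [string, number])
    .sort((x, y) => y[1] - x[1])
    .slice(0, topK);
}
```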
Traditional insurance claim processing for property damage often involves inefficiencies due to multiple site visits and challenges in documenting spatial context, issues highlighted by preliminary surveys with industry professionals. This paper presents AssureXR, an innovative end-to-end system designed to streamline claim capturing, assessment, and settlement by leveraging mobile LiDAR scanning and Web3D technologies.
Using readily available devices like iPhones with LiDAR, adjusters can efficiently capture a 3D digital twin of a damage site. Critical information, including measurements, detailed photographs, 360° views, and comments, can be spatially annotated directly onto this 3D model after acquisition. The core contribution is an integrated pipeline culminating in a web-based platform, built with A-Frame, that allows stakeholders to remotely view, interact with, and document the annotated digital twin over time, effectively "freezing" the damage state for later assessment.
Virtual Reality (VR) integration via WebXR further enhances immersive review capabilities. Developed in collaboration with R+V Versicherung, a major German insurer, the system prototype was evaluated in a workshop with diverse insurance professionals, confirming the approach’s potential to significantly improve efficiency, documentation quality, and transparency in the claims process. AssureXR serves as a comprehensive digital archive, advancing claims management through interactive, web-based 3D documentation.
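The spatial annotation workflow described above can be illustrated with a small A-Frame sketch that attaches a labeled marker at a picked 3D point; the annotation payload shape is an assumption, and the billboarding relies on the community look-at component rather than core A-Frame.

```typescript
// Sketch: add a sphere marker plus floating label to an A-Frame scene.
interface Annotation { position: string; label: string; }  // assumed shape

function addAnnotation(scene: Element, a: Annotation): void {
  const marker = document.createElement("a-entity");
  marker.setAttribute("position", a.position);           // e.g. "1.2 0.8 -2.4"
  marker.setAttribute("geometry", "primitive: sphere; radius: 0.02");
  marker.setAttribute("material", "color: #e33");

  const text = document.createElement("a-text");
  text.setAttribute("value", a.label);
  text.setAttribute("position", "0 0.08 0");
  text.setAttribute("look-at", "[camera]"); // assumes aframe-look-at-component
  marker.appendChild(text);

  scene.appendChild(marker);
}
```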
The advancement of web graphics technologies has opened new possibilities for delivering interactive 3D content natively within web browsers. However, the transition from legacy APIs such as WebGL to the more powerful and modern WebGPU standard presents both technical and tooling challenges. This paper introduces wgpuEngine, an open-source, cross-platform graphics engine developed in C++ and designed from the ground up to support WebGPU across desktop and web environments. Through WebAssembly compilation and rendering backend adaptation via Google's Dawn framework, the engine ensures consistent rendering fidelity across platforms. Full support for both OpenXR and experimental WebXR-WebGPU bindings demonstrates its readiness for immersive applications. Extensive validation includes collaborative deployments, real-time visualization showcases, and cross-platform interactive XR applications. We show that the engine both fills a gap in WebGPU tooling and offers a flexible foundation for future research, education, and experimental development in interactive and immersive graphics. The code is open source and publicly available at https://github.com/upf-gti/wgpuEngine.
As immersive media increasingly mediate cultural heritage experiences, a key challenge is how to evaluate content deployed across diverse Extended Reality (XR) platforms. This paper proposes a three-axis evaluation framework (Interpretive, Cognitive/Perceptual, and Affective/Sensory) that synthesizes constructs from museum studies, Human-Computer Interaction (HCI), and presence theory into a comparative structure. The framework is applied to "Who Killed Helene Pumpulivaara?", a narrative experience produced in both WebXR and Virtual Reality (VR) formats, with a Spatial Augmented Reality (SAR) version underway. Using user feedback from pilot workshops and AI-assisted design workflows, the study identifies how platform affordances reshape meaning, usability, and emotional impact. The resulting matrix supports transparent, design-grounded evaluation and offers transferable insights for XR storytelling across modalities.
Planning outdoor trips is a challenging task that requires the analysis of multiple variables. Factors that affect the planning of an enjoyable hike or cycling tour, especially through scenic landscapes, include weather, time of day, seasonal sunlight, scenic views, facilities, and terrain morphology. We thus present a web-based solution to this task, TrailView, that integrates realistic 3D graphics and interactive features to support outdoor trip planning. TrailView goes beyond prior solutions by not only providing immersive first-person view experiences but also incorporating guidance and visual context for the planning and preview of hikes or bicycle rides in the mountains. TrailView seamlessly integrates crowd-sourced data to enhance 3D visualization with routing, topography, and points of interest, and supports user decision-making via interactive 3D and 2D map views, scenery previews, predicted light and weather conditions, and sunrises and sunsets from a first-person perspective. We evaluated the performance of our tool by recording the frametime while the camera follows a planned hike. We show interactive frame rates for intuitive route navigation and planning through composite 2D and 3D views, targeting both computers and modern smartphones directly in the browser.
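The frametime evaluation can be reproduced with a simple requestAnimationFrame probe such as the following; driving the camera along the planned route is tool-specific and omitted here.

```typescript
// Sketch: collect per-frame times for a fixed duration, then summarize.
function recordFrametimes(durationMs: number): Promise<number[]> {
  return new Promise((resolve) => {
    const samples: number[] = [];
    let last = performance.now();
    const start = last;
    const tick = (now: number) => {
      samples.push(now - last); // elapsed ms since previous frame
      last = now;
      now - start < durationMs ? requestAnimationFrame(tick) : resolve(samples);
    };
    requestAnimationFrame(tick);
  });
}

// Usage: fly the camera along the hike while recording, then report.
recordFrametimes(10_000).then((ft) => {
  const avg = ft.reduce((s, x) => s + x, 0) / ft.length;
  console.log(`mean frametime: ${avg.toFixed(2)} ms (${(1000 / avg).toFixed(1)} fps)`);
});
```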
In the Architectural Design Process (ADP), the communication of design ideas between professionals and stakeholders is a complex task that still relies on outdated methods. Despite advances in the use of Computer-Aided Design (CAD) and Building Information Modeling (BIM) tools, products are still presented in 2D on flat screens or paper, leaving too much information for the end user to interpret. The use of Augmented Reality (AR) and Mixed Reality (MR), which allow digital objects to be overlaid on and directly interact with the real environment, has been identified as one of the key factors for the next technological breakthrough in the Architecture, Engineering and Construction (AEC) industry. This research uses the concept of serious gaming to propose a workflow for the development of an AR/MR prototype application for the Microsoft HoloLens 2 head-mounted display (HMD) capable of capturing the interactions made in an architectural design project and transferring them for execution in BIM software. The prototype application has been tested in indoor and outdoor case scenarios of office layout design and urban furniture placement, showing feasibility of use, though further improvements are considered. The results of a preliminary user experience survey suggest that the proposed approach can facilitate the communication of design ideas between the architect and the end user, offering a more accessible interpretation of design proposals compared to conventional methods. Furthermore, it demonstrates potential for broader adoption in the AEC industry.
Sharing identical 3D scene states across different platforms and user sessions remains subject to practical limitations. Existing mechanisms such as X3D anchors, glTF camera tags, and proprietary URL-based encodings typically support only static, author-defined viewpoints or are constrained to a specific viewer implementation. This paper introduces a schema and API for a Web3D Bookmarking system that enables interoperable sharing of precise, user-generated 3D scene contexts. The proposed grammar encodes the complete scene state, including camera parameters, navigation settings, scene composition, and contextual metadata, into a compact descriptor suitable for embedding in hyperlinks or other distribution methods. When accessed, the scene is re-instantiated with the same view and context as intended by the original sharer.
We define the grammar using a formal JSON Schema and justify each parameter through a design rationale. Use cases include collaborative VR/AR environments, remote design reviews, and educational applications where reproducible scene context is critical. An initial implementation using X3D and X3DOM demonstrates feasibility. We also compare our approach with existing viewpoint sharing techniques and outline a structured evaluation plan to assess the utility and performance of the proposed system.
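For illustration, a descriptor in the spirit of the proposed grammar might be modeled and URL-encoded as follows; all field names here are assumptions, as the normative definition is the paper's JSON Schema.

```typescript
// Assumed shape of a scene bookmark covering the state categories named
// in the abstract: camera, navigation, composition, and metadata.
interface SceneBookmark {
  version: string;
  camera: {
    position: [number, number, number];
    orientation: [number, number, number, number]; // e.g. a quaternion
    fieldOfView: number;
  };
  navigation: { mode: "EXAMINE" | "WALK" | "FLY" | "NONE"; speed?: number };
  composition?: { visibleNodes?: string[] };
  meta?: { title?: string; author?: string; created?: string };
}

// One possible realization of the "compact descriptor" requirement:
// base64url over JSON, safe to embed in a hyperlink fragment.
function encodeBookmark(b: SceneBookmark): string {
  return btoa(JSON.stringify(b))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}
```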
Panoramic video recording of Intangible Cultural Heritage (ICH) often yields large files due to Equirectangular Projection (ERP) inefficiencies and static scene redundancy. This paper introduces a hybrid storage architecture integrating Artificial Intelligence Generated Content (AIGC) optimization with a static background (an optimized first frame) and dynamic region overlays. AIGC (simulated via manual editing in our experiments) cleans the background, while marked dynamic regions are independently encoded. During Web3D playback, these regions are overlaid onto the static panorama with edge feathering. This approach significantly reduces data volume, improving Web3D loading times and user interaction. Experiments on two papermaking ICH scenes demonstrated compression ratios of approximately 76.5% (93.5 MB to 21.9 MB) and approximately 90.0% (792 MB to 79.06 MB). The architecture curtails storage/transmission costs, enhances visual presentation, and boosts Web3D performance for interactive ICH displays.
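The playback compositing step can be sketched as follows: the static panorama is drawn once, and each decoded dynamic region is overlaid with alpha-feathered edges so the seam is invisible. The region metadata layout is an assumption, and the sketch presumes the feather width is small relative to the region size.

```typescript
// Assumed region descriptor: panorama-space rectangle plus a decoded source.
interface DynamicRegion { x: number; y: number; w: number; h: number;
                          source: CanvasImageSource; featherPx: number; }

// Multiply existing alpha by a ramp on each axis: opaque in the interior,
// fading to zero across `f` pixels at every edge.
function featherAlpha(octx: CanvasRenderingContext2D, w: number, h: number, f: number): void {
  octx.globalCompositeOperation = "destination-in";
  for (const horizontal of [true, false]) {
    const span = horizontal ? w : h;
    const g = horizontal
      ? octx.createLinearGradient(0, 0, w, 0)
      : octx.createLinearGradient(0, 0, 0, h);
    g.addColorStop(0, "rgba(0,0,0,0)");
    g.addColorStop(f / span, "rgba(0,0,0,1)");
    g.addColorStop(1 - f / span, "rgba(0,0,0,1)");
    g.addColorStop(1, "rgba(0,0,0,0)");
    octx.fillStyle = g;
    octx.fillRect(0, 0, w, h);
  }
  octx.globalCompositeOperation = "source-over";
}

function compositeFrame(ctx: CanvasRenderingContext2D,
                        panorama: CanvasImageSource,
                        regions: DynamicRegion[]): void {
  ctx.drawImage(panorama, 0, 0);
  const off = document.createElement("canvas");
  const octx = off.getContext("2d")!;
  for (const r of regions) {
    off.width = r.w; off.height = r.h; // resizing also clears the canvas
    octx.drawImage(r.source, 0, 0, r.w, r.h);
    featherAlpha(octx, r.w, r.h, r.featherPx);
    ctx.drawImage(off, r.x, r.y);      // alpha-blend over the static panorama
  }
}
```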
Photogrammetry is widely used in cultural heritage documentation for its accessibility, non-invasiveness, and adaptability to objects of different natures and scales. Over the years, several projects have developed specific solutions to improve the accuracy and traceability of workflows, with approaches ranging from embedded metadata to manual annotation. In parallel, the need has emerged for a shared semantic structure that can guarantee interoperability between tools and institutions, facilitating the integration and coordinated reuse of data and avoiding a fragmentation that becomes critical when a single project spans multiple platforms or devices, making it difficult to maintain metadata consistency throughout the process.
With these specific issues as a focus, this paper presents PODS (Protocol of Ontological Digital Survey), a semantic and scalable framework for structuring photogrammetric metadata, based on an extension of CIDOC-CRM and CRMdig. The model includes controlled classes and vocabularies to ensure semantic consistency, procedural traceability, and machine-actionable representations. In addition to an implementation for semantic querying, PODS has been integrated into Omeka-S through exportable RDF templates compatible with existing ontologies, such as BeArchaeo, to facilitate distributed operational use. Three reported case studies illustrate the potential of the model across different scales, tools, and environments, with the aim of supporting semantic interoperability even in heterogeneous and uncoordinated scenarios.
This article describes how an exact computation library based on rational arithmetic has been used in a polyhedral modeler based on face shifts and topological event detection. The goal of using exact computation is to eliminate imprecision in the evaluation of geometric predicates, and thus to avoid false positives and false negatives in topological event detection. The article also presents two algorithms that transform a polyhedral mesh with approximated geometry (a mesh whose faces do not co-intersect in a single point) into a mesh with the same structure but non-approximated geometry. To our knowledge, this is the first attempt to use rational arithmetic to manage the geometric data in a polyhedral modeler. Rational arithmetic has not been used before because of the memory consumption it can generate, because maintaining absolute precision rules out certain operators and functions (square root, logarithm, trigonometric functions, etc.), and because input data are produced using floating-point arithmetic and must therefore be corrected before use. This article explains how all of these issues have been handled.
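The principle behind exact predicates over rationals can be shown in a few lines. This 2D orientation test (the article itself targets 3D polyhedral meshes) keeps coordinates as BigInt fractions, so the sign of the determinant is computed without any rounding error; the representation is an illustrative choice, not the library's actual one.

```typescript
// Rational number n/d with BigInt components, d kept positive by convention.
interface Rat { n: bigint; d: bigint; }

const sub = (a: Rat, b: Rat): Rat => ({ n: a.n * b.d - b.n * a.d, d: a.d * b.d });
const mul = (a: Rat, b: Rat): Rat => ({ n: a.n * b.n, d: a.d * b.d });
const sign = (a: Rat): number =>
  a.n === 0n ? 0 : (a.n > 0n) === (a.d > 0n) ? 1 : -1;

// Exact 2D orientation: >0 if p->q->r turns left, 0 if collinear, <0 if right.
// No false positives or negatives are possible, unlike with floating point.
function orient2d(p: [Rat, Rat], q: [Rat, Rat], r: [Rat, Rat]): number {
  const det = sub(mul(sub(q[0], p[0]), sub(r[1], p[1])),
                  mul(sub(q[1], p[1]), sub(r[0], p[0])));
  return sign(det);
}
```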
Presenting point clouds in web environments is a very demanding process. It requires downsampling point clouds into specific file formats, such as X3D, that use the web architecture to work across diverse devices and keep 3D graphics lightweight enough to ensure web page functionality. This paper evaluates the performance of several point cloud downsampling methods with the objective of identifying those that best balance data reduction for efficient web deployment with the preservation of essential features required for AI and machine learning applications. We present a comparative analysis of Voxel Grid Downsampling (VGD), Uniform Grid Subsampling (UGS), Curvature Preserving Sampling (CPS), Random Sampling (RS), and Farthest Point Sampling (FPS). To assess the impact of each approach on the distribution and structure of the sampled point clouds, we conduct a number of experiments combining the SHREC 2021 Cultural Heritage dataset with PointNet.
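Of the compared methods, Voxel Grid Downsampling is the simplest to sketch: points are bucketed into cubic voxels of a given edge length, and each occupied voxel is replaced by the centroid of its points.

```typescript
// Voxel Grid Downsampling over a flat XYZ point array.
function voxelGridDownsample(points: Float32Array, leaf: number): Float32Array {
  const cells = new Map<string, { sum: [number, number, number]; count: number }>();
  for (let i = 0; i < points.length; i += 3) {
    // Integer voxel coordinates serve as the bucket key.
    const key = `${Math.floor(points[i] / leaf)},` +
                `${Math.floor(points[i + 1] / leaf)},` +
                `${Math.floor(points[i + 2] / leaf)}`;
    const c = cells.get(key) ?? { sum: [0, 0, 0], count: 0 };
    c.sum[0] += points[i]; c.sum[1] += points[i + 1]; c.sum[2] += points[i + 2];
    c.count++;
    cells.set(key, c);
  }
  // Emit one centroid per occupied voxel.
  const out = new Float32Array(cells.size * 3);
  let j = 0;
  for (const { sum, count } of cells.values()) {
    out[j++] = sum[0] / count; out[j++] = sum[1] / count; out[j++] = sum[2] / count;
  }
  return out;
}
```

The leaf size directly trades data reduction against feature preservation, which is exactly the balance the comparative analysis examines.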
Realistic modeling of 3D human avatars in dynamic virtual environments requires accurate modeling of skin deformation, texture adaptation, and biomechanical response under motion. Heterogeneous mesh structures and improper anatomical segmentation result in deformation artifacts, especially during high-strain activities such as powerlifting and dancing. Existing 3D avatar generation approaches primarily focus on visual fidelity or static appearance, but maintaining the correctness and coherence of skin and texture states during concurrent, region-specific deformations remains a challenge. This work introduces a multilayered, region-aware texture modeling methodology that integrates strain-dependent viscoelastic dynamics and adaptive texture mapping. The methodology is evaluated using a custom 3D human avatar performing powerlifting and dancing sequences, which induce significant and varied deformation across anatomical regions. A compartmentalized feedback control system dynamically updates texture and mesh states, enforcing a consistent visual response during motion. Quantitative analysis shows 90% motion damping in displacement over time, a texture distortion error of 5%, and over 95% voxel-wise skin deformation accuracy. The methodology demonstrates region-specific adaptation and synchronization of physical and visual states, supporting interactive avatars in metaverse platforms. Supplementary video material is available online.
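Purely as an illustration of strain-dependent viscoelastic dynamics (not the paper's region-aware model), a per-vertex Kelvin-Voigt spring-damper update looks as follows, assuming unit vertex mass; all parameters are placeholders.

```typescript
// Kelvin-Voigt viscoelastic step for one scalar degree of freedom:
// elastic restoring force proportional to strain, viscous force to velocity.
function stepViscoelastic(x: number, v: number, rest: number,
                          k: number, c: number, dt: number): [number, number] {
  const strain = x - rest;            // displacement from rest configuration
  const a = -(k * strain) - c * v;    // acceleration for unit mass
  const vNew = v + a * dt;            // semi-implicit Euler: velocity first
  return [x + vNew * dt, vNew];       // then position, for stability
}
```

The damping coefficient c governs how quickly displacement oscillations decay, the quantity the abstract's "motion damping" metric measures.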
Very little of the seafloor is mapped and imaged to the extent needed to assess and monitor the changing ocean environment. The Monterey Bay Aquarium Research Institute develops systems to conduct seafloor surveys that produce products at 1 cm lateral resolution over areas of hundreds of square meters. The same sites can be revisited to understand physical, chemical, geological, and biological changes. Products from the surveys are published using traditional methods, and data are submitted to national data archives such as the MGDS [1]. Because of the charismatic nature of some of our surveys, a more engaging experience is desired for the general public's access to the data. To this end, we envision using glTF, OGC 3D Tiles, and X3D to develop open-source workflows in our data processing system.
Traditional insurance claim processing for property damage often involves inefficiencies due to multiple site visits and challenges in documenting spatial context. AssureXR is an innovative end-to-end system, developed in collaboration with R+V Versicherung, a major German insurer, that is designed to streamline claim capturing, assessment, and settlement by leveraging mobile LiDAR scanning and Web3D technologies. The core contribution is an integrated pipeline culminating in a web-based platform.
AssureXR serves as a rich digital archive, advancing claims management through interactive web-based 3D documentation.
The Industrial Data Space (IDS) concept serves as a blueprint for modern industrial data exchange, enabling secure and decentralized data provisioning. Modern implementations such as Catena-X adopt standards and patterns from the Industrial Data Space Association (IDSA) and the Industrial Digital Twin Association (IDTA) to create a standardized and interoperable digital twin ecosystem for the automotive industry and to enable secure and transparent data exchange along the entire value chain. Work is currently underway to enrich these digital twins with engineering and CAD data to enable new applications for master data use cases.
The industrial metaverse is a concept that leverages virtual and augmented reality to create a digital twin of physical industrial environments and processes. This digital twin enables remote collaboration, simulation, and optimization of manufacturing, engineering, and field service activities. It connects the physical and digital worlds to increase efficiency, reduce costs, and enhance the work experience in industrial virtual worlds.
We demonstrate how modern IDS digital twin instances such as Catena-X can be deployed directly in virtual environments for 3D collaboration applications and use cases. Existing data ownership and user rights are transferred into a secure, interactive, and cross-device collaboration solution.
We present an industrial use case demonstrating the integration and effectiveness of Web3D technologies within the University of Turin’s (UniTO) Asset Management System (AMS). Leveraging advanced capabilities such as Building Information Modeling (BIM), Geographic Information Systems (GIS), and Business Intelligence (BI), the AMS provides a robust and interactive web-based platform designed for the comprehensive management of UniTO’s extensive real estate portfolio. The implementation of these technologies facilitates enhanced asset visibility, optimized resource allocation, and informed decision-making processes through dynamic visualization and analytical dashboards. The AMS’s modular structure supports diverse operational scenarios including occupancy monitoring, environmental quality assessment, and spatial optimization. The deployment of web-based solutions at UniTO underscores significant improvements in management efficiency, sustainability, and cost-effectiveness, serving as a model for scalable and replicable applications in similar institutional contexts.
A persistent challenge in digital heritage and museology is how to meaningfully integrate technical documentation—such as 3D lidar scans—into exhibits that are engaging, informative, and inclusive. In this paper, we describe how participatory, community-engaged methods shaped the design of an interactive, thematic timeline exhibit at the Tampa Bay History Center (TBHC). The resulting “Tampa Topography and Timelines” exhibit is a spatial and sensory landscape that explores themes of urban expansion and transit, Tampa's changing skyline, and heritage at risk. Each theme integrates 3D visualizations of key sites – Union Station, the Ybor City Casitas, a heritage streetcar network, and Battery Laidley – with community narratives that connect these places to broader themes of immigration, military conflict, urbanization, and the ongoing struggle for preservation amid displacement and development. This multimodal approach to digital storytelling bridges the gap between technical cultural heritage documentation and first-person storytelling, resulting in a more engaging and memorable museum experience.
We present an industrial use case demonstrating the exploitation of AI-based digital twin technologies developed in the AIN2 project for the creation of immersive virtual tour experiences. The case study focuses on the Bluu Casa showroom located in the Doha Design District (DDD). We demonstrate the complete pipeline from single-shot acquisition with a consumer-grade 360° camera to immersive WebXR deployment on head-mounted displays (HMDs) like Meta Quest 3. The proposed solution enables intuitive and photorealistic exploration of interior environments with stereo 3DOF navigation.