We contribute the idea of an instrumented office chair as a platform for spatial augmented reality (SAR). Seated activities are tracked through chair position, back tilt, rotation, surface sensors, and touches along the armrest. A depth camera tracks chair position using simultaneous localization and mapping, and a servo-actuated pan-tilt projector mounted on the side of the chair displays content for applications. Eleven demonstration scenarios explore usage possibilities, and an online survey gathers feedback. Many respondents perceive the concept as useful and comfortable, validating it as a promising direction for personal portable SAR.
Group-based object alignment is an essential manipulation task, particularly for complex scenes. In conventional 2D user interfaces, such alignment tasks are generally achieved via a command/menu-based interface. Virtual reality (VR) head-mounted displays (HMDs) provide a rich immersive interaction experience, which opens more design options for group-based object alignment interaction techniques. However, object alignment techniques in immersive environments are underexplored. In this paper, we present four interaction techniques for 3 degrees-of-freedom translational alignments: AlignPanel, AlignWidget, AlignPin, and AlignGesture. We evaluated their performance, workload, and usability in a user study with 20 participants. Our results indicate different benefits and drawbacks of these techniques for group-based alignment in immersive systems. Based on the findings, we distill a set of design choices and recommendations for these techniques in various application scenarios.
Haptic feedback to fingers is essential for enhancing the experience of human-computer interactions. In mixed-reality (MR) environments, haptic feedback to fingers should be achieved without covering the fingerpad because seamless interactions with both physical and virtual objects are required. However, previous haptic devices cannot provide a wide range of haptic feedback without interfering with physical interactions. Here, we propose a novel self-contained fingerpad-free haptic device named Fingeret, which enables a wide range of feedback and seamless interactions in MR environments. Specifically, we combined newly developed actuators for the sides of fingers, hereafter named finger side actuators (FSAs), with a conventional fingernail actuator (FNA) to obtain a wide range of haptic feedback. FSAs are suitable for relatively low-frequency haptic feedback, whereas the FNA is suitable for relatively high-frequency haptic feedback. Our user study showed that the two actuators have complementary roles in covering a wide range of haptic feedback. Finally, we demonstrated an application scenario in which molecules are designed with an MR tool while using Fingeret.
Thumb-to-finger interactions leverage the thumb for precise, eyes-free input with high sensory bandwidth. While previous research has explored gestures based on touch contact and finger movement on the skin, interactions leveraging depth, such as pressure and hovering input, are still underinvestigated. We present MicroPress, a proof-of-concept device that can detect both precise thumb pressure applied on the skin and the hover distance between the thumb and the index finger. We rely on a wearable IMU sensor array and a bi-directional RNN deep learning approach to enable fine-grained control while preserving the natural tactile feedback and touch of the skin. We demonstrate MicroPress' efficacy with two interactive scenarios that pose challenges for real-time input, and we validate its design with a study involving eight participants. With short per-user calibration steps, MicroPress is capable of predicting hover distance with 0.57 mm accuracy, and on-skin pressure with normalized pressure error at six locations on the index finger.
Teleportation is a popular locomotion technique that allows users to navigate beyond the confines of the available tracking space with the smallest chance of inducing VR sickness. Users typically specify a teleportation destination by using a hand-held motion-sensing controller. However, for various reasons, it can be desirable or required to have a hands-free alternative to controller-based teleportation. We evaluate three hands-free ways of teleporting, with users selecting a destination using head gaze and activating the teleport using: (1) eye-wink, (2) a mouth gesture, and (3) dwell. A user study with 20 participants compared all three techniques to controller-based teleportation using a waypoint-based navigation task. Quantitative and subjective results showed that eye-wink is the most viable alternative to using a controller and offered a lower selection error.
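To make the dwell-based activation concrete, the logic can be sketched as a small state machine that fires once head gaze has rested on the same destination long enough. The `DWELL_TIME` value, class name, and update interface are illustrative assumptions, not details from the paper.

```python
DWELL_TIME = 1.0  # seconds gaze must rest on a destination (assumed value)

class DwellActivator:
    """Minimal sketch of dwell-based teleport activation: the teleport
    fires once gaze has stayed on the same target for DWELL_TIME."""

    def __init__(self):
        self.target = None   # destination currently under head gaze
        self.since = None    # timestamp when that destination was first hit

    def update(self, target, t):
        """Called each frame with the gazed-at target and a timestamp.
        Returns True when the teleport should trigger."""
        if target != self.target:
            self.target, self.since = target, t   # gaze moved: restart timer
            return False
        return target is not None and (t - self.since) >= DWELL_TIME
```

A mouth-gesture or eye-wink trigger would replace only the timing condition; the gaze-target bookkeeping stays the same.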
Spatial referencing is an important task in remote collaboration; for example, a person may ask other collaborators to move to a specific location or to pick up an object at a specific location. For successful spatial referencing, a key requirement is that collaborators acquire spatial knowledge and shared spatial knowledge about the environment. We selected view-sharing, one approach to helping collaborators understand each other's spatial frame of reference, and investigated its effect on spatial knowledge and shared spatial knowledge acquisition in remote collaboration. In a maze exploration experiment, participants were asked to explore a maze collaboratively with and without view-sharing. Afterwards, to examine the participants' acquired graph knowledge, survey knowledge, and shared survey knowledge, participants were asked to individually plan routes from object to object in the maze and to draw the maze. The results showed that sharing collaborators' viewpoints improved their spatial knowledge acquisition.
Virtual reality (VR) exergames have the potential to train cognitive and physical abilities. However, most of these spatial games are developed for younger users and do not consider older adults, who have different design requirements. Yet, to be entertaining and efficient, the difficulty of a game has to match the needs of players with different abilities. In this paper, we explore the effects of individually calibrating a starting difficulty and adjusting it: i) exactly as calibrated, ii) 50% more difficult, and iii) 50% less difficult. In a user study, we compare the effects of these adjustments on reaction times and subjective measures for younger (n=30) and older adults (n=9). The results show that most users prefer a faster-paced VR game in terms of enjoyment, but this also results in a higher perceived workload. Compared to the younger adults, the older adults rated the game more positively, reporting higher enjoyment and eagerness to play the game again, as well as lower perceived workload. This emphasizes the need for games to be designed for the user group they are intended for, both in terms of cognitive-physical difficulty and game content. Furthermore, we reflect on the transferability of results obtained from testing with younger adults and highlight their potential, especially for identifying suggestions and issues with the gameplay.
Locomotion is an essential factor for interaction in virtual environments. Virtual reality allows users to move free from the constraints of gravity or the environment, which is impossible in the real world. These experiences, in which users can move with all six degrees of freedom, can enable new navigation paradigms that exceed 2D ground-based movement in terms of enjoyment and interactivity. However, most existing VR locomotion interfaces have limitations because they were developed for ground-based locomotion, constraining the degrees of freedom (DoF) during movement as in the real world. This exploratory study was designed to identify the features required for three-dimensional (3D) locomotion by evaluating three types of interfaces: Slider, Teleport, and Point-Tug, which were implemented based on existing ground-based locomotion techniques. We then conducted a user study with 3D navigation tasks, using both objective and subjective measures to evaluate efficiency, overall usability, motion sickness, perceived workload, and presence. The results suggest that Slider has an advantage over the others in terms of usability and perceived workload, since it can freely and intuitively designate the direction of movement.
Collaborative augmented reality is an emerging field with the promise of simulating natural human-human interactions from remote locations. Users, represented by their photorealistic avatars, and relevant objects in their scenes can be teleported to each other's environments, with the capability of tracking their gaze, body pose, and other nonverbal behaviors using modern augmented reality devices. Pointing gestures play a key role for users to communicate about aspects related to their environment. Moreover, the body pose one uses during pointing relays cues about the user's intentions and nonverbal behaviors during the interaction. Due to dissimilarities between multiple users' environments, pointing gestures need to be redirected, since directly animating the user's motion onto their avatar may introduce error. At the same time, the nonverbal behavior represented by the body pose also needs to be preserved for realistic interaction. While these objectives are not mutually exclusive, current approaches only solve the redirection of the gesture without preserving the body pose. In this paper, we present a systematic approach to the dual problem of redirecting the gesture while preserving the body pose using a multi-objective optimization framework. The presented framework efficiently adjusts the weighting between the two objectives and gives the user the flexibility to set the minimum angular error tolerance for the pointing gesture redirection. We tested our approach against the current state of the art using both pointing gesture reference poses and continuous gesture actions collected from an augmented reality human participant study. Results show that, for a given user-defined error tolerance, our approach decreases body pose error by 33.5% versus the current state of the art for pointing gesture reference poses and by 33.6% for pointing gestures recorded during the human participant study.
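The trade-off between redirecting the pointing direction and preserving the body pose can be illustrated with a toy weighted two-objective problem. This is a hedged sketch, not the paper's actual framework: the planar two-joint "arm", the gradient-descent solver, and the weight-doubling schedule are all assumptions chosen to show how a weighting can be adjusted until a user-set angular tolerance is met.

```python
import numpy as np

q_orig = np.array([0.6, 0.2])   # captured joint angles (radians), assumed
target = 1.2                    # required redirected pointing direction
tol = 0.05                      # user-set angular error tolerance

def pointing_angle(q):
    # End-link orientation of the toy planar arm: sum of joint angles.
    return q[0] + q[1]

def solve(w_point, w_pose, steps=2000, lr=0.01):
    """Minimize w_point * (pointing error)^2 + w_pose * ||q - q_orig||^2."""
    q = q_orig.copy()
    for _ in range(steps):
        point_err = pointing_angle(q) - target
        grad = 2 * w_point * point_err * np.ones(2) + 2 * w_pose * (q - q_orig)
        q -= lr * grad
    return q

# Increase the redirection weight until the tolerance is met, keeping the
# pose term so the solution stays close to the user's original motion.
w_point, w_pose = 1.0, 1.0
for _ in range(20):
    q = solve(w_point, w_pose)
    if abs(pointing_angle(q) - target) <= tol:
        break
    w_point *= 2.0
```

The same structure generalizes to a full skeleton: the pointing term constrains the arm's end direction, while the pose term penalizes deviation from the tracked body configuration.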
Critique sessions are an essential educational activity at the center of many design disciplines, especially those involving the creation of physical mockups. Conventional approaches often require the students and the instructor to be in the same space to jointly view and discuss physical artifacts. However, in remote learning contexts, available tools (such as videoconferencing) are insufficient due to ineffective, inefficient spatial referencing. This paper presents ARCritique, a mobile Augmented Reality application that allows users to 1) scan physical artifacts, generate corresponding 3D models, and share them with distant instructors; 2) view the model simultaneously in a synchronized virtual environment with remote collaborators; and 3) point to and draw on the model synchronously to aid communication. We evaluated ARCritique with seven Industrial Design students and three faculty members, who used the app in a remote critique setting. The results suggest that direct support for spatial communication improves collaborative experiences.
Network planners consider many factors in designing an indoor signal space to ensure a healthy network. These include material attenuation, channel overlap, peak bandwidth demand, and even humidity. Unfortunately, current network planning software often fails to account for these factors adequately. Further, state-of-the-art network planning software either uses only a few samples or limits itself to a single router. WaveRider is a mixed reality application for immersively viewing multiple routers and their signal strengths in indoor spaces to aid network analysis. Our contribution lies in visualization designs that tackle domain tasks and introduce techniques such as line integral convolution (LIC) and textons. For early direction and feedback on our visualizations, we recruited five experts in signal analysis, presented WaveRider with various visual representations, and collected their feedback. Their responses show that WaveRider provides novel ways of visualizing signals that can aid analysts in tackling the real-world problems they encounter when monitoring signal networks. Their feedback also gave us significant insights into future directions for WaveRider and similar indoor signal space exploration systems.
Building facades are components that shape a structure’s daylighting, energy use, and view factors. This paper presents an approach that enables designers to understand the impact that different facade designs will have over time and space in the built environment through a BIM-enabled augmented reality system. The system permits the examination of a range of facade retrofit scenarios and visualizes the daylighting simulations and aesthetics of a structure while retaining function and comfort. A focus of our study was to measure how participants make decisions within the multi-objective decision space designers often face when buildings undergo retrofitting. This process often requires designers to search for a set of alternatives that represent the optimal solution. We analyze the decision-making process of forty-four subjects to determine how they explore design choices. Our results indicate the feasibility of using BIM-enabled AR to improve how designers make informed decisions.
Employing virtual prototypes and immersive Virtual Reality (VR) in usability evaluation can save time and speed up iteration during the design process. However, it is still unclear whether we can use conventional usability evaluation methods in VR and obtain results comparable to performing the evaluation on a physical prototype. Hence, we conducted a user study with 24 participants in which we compared the results obtained by using the Think Aloud Protocol to inspect an everyday product and its virtual twin. Results show that more than 60% of the reported usability problems were shared by the physical and virtual prototypes, and an in-depth qualitative analysis further highlights the potential of immersive VR evaluations. We report on the lessons we learned for designing and implementing virtual prototypes in immersive VR evaluations.
This work presents a hybrid immersive headset- and desktop-based virtual reality (VR) visualization and annotation system for point clouds, oriented toward laser scans of plants. The system can be used to paint regions or individual points in fine detail, using compute shaders to address performance limitations when working with large, dense point clouds. The system can be used either with an immersive VR headset and tracked controllers, or with mouse and keyboard on a 2D monitor, using the same underlying rendering systems. A within-subjects user study (N=16) was conducted to compare these interfaces for annotation and counting tasks. Results showed a strong user preference for the immersive virtual reality interface, likely as a result of perceived and actual significant differences in task performance. This was especially true for annotation tasks, where users could rapidly identify, reach, and paint over target regions, reaching high levels of accuracy in minimal time, though we found nuances in the ways users approached the tasks in the two systems.
In a guided virtual field trip, students often need to pay attention to the correct objects in a 3D scene. Distractions or misunderstandings of a virtual agent's spatial guidance may cause students to miss critical information. We present a generalizable virtual reality (VR) avatar animation architecture that is responsive to a viewer's eye gaze, and we evaluate the rated effectiveness (e.g., naturalness) of the enabled agent responses. Our novel annotation-driven sequencing system modifies the playing, seeking, rewinding, and pausing of teacher recordings to create appropriate teacher avatar behavior based on a viewer's eye-tracked visual attention. Annotations are contextual metadata that modify sequencing behavior during critical time points and can be adjusted in a timeline editor. We demonstrate the success of our architecture with a study that compares three different teacher agent behavioral responses when pointing to and explaining objects on a virtual oil rig, while an in-game mobile device provides an experiment control mechanism for two levels of distraction. Results suggest that users consider teacher agent behaviors with increased interactivity to be more appropriate, more natural, and less strange than default agent behaviors, implying that more elaborate agent behaviors can improve a student's educational VR experience. Results also provide insights into how and why a minimal response (Pause) and a more dynamic response (Respond) are perceived differently.
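The idea of annotation-driven sequencing can be sketched as a simple check per playback frame: during an annotated critical window, the recording pauses whenever the viewer's gaze leaves the referenced object and resumes when attention returns. The annotation tuple format, object names, and function signature below are illustrative assumptions, not the paper's actual data model.

```python
# Assumed annotation format: (start_s, end_s, required_gaze_target),
# authored in a timeline editor for each critical explanation window.
annotations = [
    (5.0, 9.0, "drill_pipe"),
    (14.0, 18.0, "blowout_preventer"),
]

def playback_action(t, gaze_target):
    """Decide whether the teacher recording should play or pause at time t,
    given the object the viewer's eye tracker reports they are looking at."""
    for start, end, required in annotations:
        if start <= t < end and gaze_target != required:
            return "pause"   # viewer is distracted during a critical window
    return "play"            # outside critical windows, gaze is unconstrained
```

Rewinding or a spoken re-engagement prompt (the paper's more dynamic Respond behavior) would slot in as alternative return values from the same decision point.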
Hybrid creatures are a part of mythology and folklore around the world. Often composed of parts from different animals (e.g., a Centaur), they are also increasingly seen in popular culture, such as in films, video games, print media, clothing, and art. Similarly, hybrid fictional entities composed of parts from different objects are widely visible in popular culture. Thus, modeling and animating hybrid creatures and objects is highly desirable and plays an important role in 3D character design and creation. However, this is a challenging task, even for those with considerable prior experience. In this work, we propose Mix3D, an assembly-based system for helping users, especially amateur users, easily model and animate 3D hybrids. Although assembly provides a potentially simple way to create hybrids, it is challenging to extract semantically meaningful segments from existing models and produce interchangeable edges of topologically different parts for seamless assembly. Recently, deep neural network-based approaches have attempted to address parts of this challenge, such as 3D mesh segmentation and deformation. While these methods produce good results on those two tasks independently, they are not generalizable across human, animal, and object models and are therefore not suitable for the heterogeneous component stitching needed to create hybrids. Our system tackles this issue by separating the hybrid modeling problem into three automatic and holistic processes: 1) segmenting semantically meaningful components, 2) deforming them into interchangeable parts, and 3) stitching the segments seamlessly to create hybrid models. We design a user interface (UI) that enables amateur users to easily create and animate hybrid models. Technical evaluations confirm the effectiveness of our proposed assembly method, and a user study (N=12) demonstrates the usability, simplicity, and efficiency of our interactive user interface.
Visual notifications are omnipresent in applications ranging from smartphones to Virtual Reality (VR) and Augmented Reality (AR) systems. They are especially useful in applications where users performing a primary task must be interrupted to react to external events. However, these notifications can disrupt users' performance on the primary task they are currently executing. Different notification placements have also been shown to influence response times, as well as, for example, perceived intrusiveness and disruptiveness.
We investigated the effects and impacts of four visual notification types in AR environments when the main task was performed (1) in AR and (2) in the real world. We used subtitle, heads-up, world-space, and user-wrist notification types. In a user study, we interrupted the execution of the main task with one of the AR notification types. When noticing a notification, users responded to it by completing a secondary task. We used a memory card game as the main task and pressing a correctly colored button as the secondary task. Our findings suggest that notifications at a user's wrist are most suitable when other AR elements are present. Notifications displayed in world space are quick to notice and understand if the view direction of a user is known. Heads-up notifications in the corner of the field of view, as primarily used in smart glasses, performed significantly worse, especially compared to subtitle placement. Hence, we recommend using different notification types depending on the overall structure of an AR system.
During the complex process of motor skill acquisition, novices might focus on different criteria, such as speed or accuracy, in their training. Previous research on virtual reality (VR) has shown that effective throughput can also be used as an assessment criterion. Effective throughput combines speed, accuracy, and precision into one measure and can be influenced by auditory feedback. This paper investigates, through a user study, how to improve participants' effective throughput using auditory feedback. In the study, we mapped participants' speed and accuracy to the pitch of auditory error feedback in an ISO 9241-411 multidirectional pointing task and evaluated their performance. The results showed that it is possible to regulate participants' time or accuracy performance, and thus the effective throughput. Based on these findings, we also identify effective throughput as an appropriate assessment criterion for VR systems. We hope that our results can inform the design of VR applications.
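Since effective throughput is central here, it may help to spell out the standard computation: endpoint scatter is folded into an effective target width We = 4.133 x SDx, which yields an effective index of difficulty per Fitts' law, divided by mean movement time. The following is a minimal sketch of that convention; the sample values are invented for illustration.

```python
import math
import statistics

def effective_throughput(distances, endpoint_deviations, movement_times):
    """Effective throughput (bits/s) in the ISO 9241-411 / Fitts'-law sense.

    distances: per-trial movement distances to the target
    endpoint_deviations: per-trial selection offsets along the task axis
    movement_times: per-trial movement times in seconds
    """
    de = statistics.mean(distances)                    # mean movement distance
    we = 4.133 * statistics.stdev(endpoint_deviations) # effective target width
    ide = math.log2(de / we + 1)                       # effective difficulty (bits)
    mt = statistics.mean(movement_times)               # mean movement time (s)
    return ide / mt
```

Because accuracy enters through the spread of endpoints, feedback that tightens precision raises throughput even at constant speed, which is what makes the measure suitable for combined speed/accuracy training.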
On-body IMU-based pose tracking systems have gained prevalence over their external tracking counterparts due to their mobility and ease of installation and use. However, even in these systems, an IMU sensor placed on a particular joint can only estimate the pose of that particular limb. In contrast, activity recognition systems contain insights into the whole body's motion dynamics. In this work, we present ActivityPoser, which uses activity context as a conditional input to estimate the pose of limbs for which we have no direct sensor data. ActivityPoser compensates for impoverished sensing paradigms by reducing the overall pose error by up to 17% compared to a model bereft of activity context. This highlights a pathway to high-fidelity full-body digitization with minimal user instrumentation.
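One common way to condition a pose regressor on activity context is to concatenate an activity label, one-hot encoded, with the IMU features before regression. The sketch below shows only that conditioning step with a linear layer as a stand-in; the feature sizes, activity count, and model are assumptions, not ActivityPoser's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIVITIES = 4                               # assumed activity vocabulary size
imu_feats = rng.normal(size=(32, 12))          # e.g. accel+gyro from sensed joints
activity = rng.integers(0, N_ACTIVITIES, 32)   # per-frame recognized activity

# Conditioning: append the one-hot activity label to each frame's features,
# so the regressor can specialize its output per activity.
one_hot = np.eye(N_ACTIVITIES)[activity]
x = np.concatenate([imu_feats, one_hot], axis=1)

# Single linear layer standing in for the learned pose regressor that
# predicts angles of limbs without direct sensor data (6 here, assumed).
W = rng.normal(size=(x.shape[1], 6))
pred = x @ W
```

In a trained model the activity input lets frames from, say, running and sitting map the same IMU readings to different unsensed-limb poses, which is the intuition behind the reported error reduction.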
Remote meeting applications are becoming more immersive by supporting virtual reality; however, support for augmented reality devices still lags behind. For augmented reality to be integrated, the asymmetry between users' local spaces needs to be resolved, to which we contribute by focusing on the mismatch between tables. We present the AdapTables system, which maps a virtual reality user's virtual meeting table onto an augmented reality user's differently shaped physical table. By creating conformal maps between these tables, remote users can be transported to the environment of the other user. Additionally, shared virtual objects are mapped to adapt to each user's table. We tested the system with 32 participants in pairs.
In this study, to overcome the narrow display areas of mobile devices such as smartphones and tablets, we propose an Augmented Reality (AR) technique called "AR Digital Workspace" that expands the workspace by superimposing windows on planes detected in real space. In the proposed interface, a plane in real space is detected as the user moves the mobile device, and multiple windows are superimposed on the detected plane. By touching the device's screen, the user can change the position and size of the windows and copy and paste text between windows. The user can also copy text from real space using character recognition on document images acquired from the camera. This system enables users to work with a large amount of information in a pseudo-spacious workspace. We implemented the proposed interface on a mobile device using ARCore, an AR framework for Android devices.
Mixed Reality (MR) has significant potential to support scenarios of remote collaboration in which distributed team members need to establish a common understanding. However, most efforts have been devoted to exploring one-to-one use cases. This work describes a one-to-many user study with 16 participants, aimed at understanding how an on-site team member can be assisted by small groups using a large-scale display (C1) and an interactive projector (C2) to collaborate and provide guidance during a remote maintenance procedure. Results suggest that condition C1 was preferred by the majority of participants, eliciting a higher level of social presence and being considered more useful for information understanding, while condition C2 was considered easier for expressing ideas properly.
This study aims to develop a system for intuitively accessing the various connected appliances in a room. A room contains various home appliances, each with configurable parameters. Each is controllable with its own remote controller or IoT application, but such controllers work with only a single appliance. To address this problem, we developed a system that obtains configurable information about an appliance using a smartphone. The system uses the smartphone's gyro sensor to determine the user's position and direction in the room, allowing users to instantly acquire and intuitively manipulate information about an appliance by pointing at it.
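The pointing-based selection described above amounts to comparing the device's heading with the bearing from the user to each registered appliance and choosing the best match within an angular threshold. The room layout, coordinate convention, and threshold below are illustrative assumptions, not the system's actual parameters.

```python
import math

# Assumed 2D room coordinates (metres) of registered appliances.
appliances = {
    "light":  (2.0, 0.0),
    "tv":     (0.0, 3.0),
    "aircon": (-2.0, 1.0),
}

def select_appliance(user_pos, heading_rad, max_err_rad=0.3):
    """Return the appliance the user is pointing at, or None.

    heading_rad is the device's pointing direction estimated from the
    gyro sensor; the best match within max_err_rad wins.
    """
    best, best_err = None, max_err_rad
    for name, (ax, ay) in appliances.items():
        bearing = math.atan2(ay - user_pos[1], ax - user_pos[0])
        # Wrap the angular difference into [-pi, pi] before comparing.
        err = abs(math.atan2(math.sin(bearing - heading_rad),
                             math.cos(bearing - heading_rad)))
        if err < best_err:
            best, best_err = name, err
    return best
```

Returning None when nothing falls within the threshold avoids accidental selections when the user is merely turning.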
Immersive exergames at home have become popular due to advances in gaming headsets. A typical commercial gaming headset contains a head-mounted display (HMD) and a pair of hand-held controllers for interacting with virtual objects. Given this setup, an additional motion capture camera is needed to allow full-body motion tracking. However, using a motion capture system within an immersive virtual environment (IVE) at home can be challenging, since proper setup and calibration are required for a specific home environment. Hence, to simplify camera setup and calibration, this work introduces the design of a calibration module for a motion capture system using the Azure Kinect camera. The calibration module allows the user to perform calibration and self-evaluate the camera setup in the immersive virtual environment. Once calibrated, users can drive a full-body avatar through their body movements in a first-person perspective.
Virtual Reality (VR) provides a more engaging learning experience for students and could improve knowledge retention compared to traditional learning methods. However, a student can get distracted in the VR environment due to stress, mind-wandering, unwanted noise, external sounds, etc. Distractions can be classified as either external (due to the environment) or internal (due to internal thoughts). Past researchers have used eye-gaze data to detect external distractions. However, eye-gaze data cannot measure internal distractions, since a user could be looking at the educational content while thinking about something else. We explored the use of electroencephalogram (EEG) data to detect internal distraction. We designed an educational VR environment and trained three machine learning models: Random Forest (RF), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA), to detect internal distractions of students. Our preliminary study results show that RF provides better accuracy (98%) than SVM and LDA.
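The three-classifier comparison can be reproduced in outline with scikit-learn. The sketch below uses synthetic features as a stand-in for EEG band-power features and invented labels, so the scores mean nothing about the paper's 98% result; it only shows the training-and-scoring loop over RF, SVM, and LDA.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window EEG features (e.g. channel band powers),
# labelled attentive (0) vs internally distracted (1).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # artificial separable labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train and score each of the three model families compared in the study.
scores = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    scores[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

With real EEG, the feature extraction step (band-power or connectivity features per window) would replace the synthetic `X`, while the comparison loop stays the same.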
To help people who are blind or have low vision (BLV) travel independently, virtual reality (VR) and mixed reality (MR) are used to train BLV users to acquire spatial knowledge for trip planning. These VR apps utilize auditory and haptic feedback to help BLV people interact with virtual environments and build mental maps. In this project, we improve an MR cane iPhone app by designing and integrating a laser pointer tool and a gesture-based menu to help BLV users learn the spatial layout of a virtual environment more efficiently. The laser pointer allows BLV users to quickly explore a virtual environment, and the gesture-based menu helps them easily manipulate the user interface of the MR cane app.
Augmented Reality (AR) has been explored to assist scenarios of remote collaboration. Most studies use handheld devices (HHDs) to assist on-site members requiring help. However, the most suitable way to use such devices has not been examined, as most effort has gone into developing the technology itself. This work explored three distinct setups in a remote maintenance scenario with 10 participants: a handheld HHD, a support-mounted HHD, and a wrist-mounted HHD. We report participants' insights, which suggest that the last represents the most adequate condition given the scenario context.
An effective navigation tool is critical in an ambiguous virtual environment. We examined a lines navigation aid in a multi-floor environment against a 2D map. Results show that the lines navigation aid outperformed the map in navigation tasks while performing comparably in later unassisted revisits.
Augmented reality (AR) has the potential to address the limitations of in-person brainstorming, such as the lack of digitization and remote collaboration support, while preserving the spatial relationship between participants and their environments. However, current AR input methods are not sufficient for supporting rapid ideation compared to the non-digital tools used in brainstorming: pen, paper, sticky notes, and whiteboards. To help users rapidly create comprehensible notes for AR-based collaborative brainstorming, we developed IdeaSpace, a system that allows users to use traditional tools such as pens and sticky notes. We evaluated this input method through a user study (N=22) assessing the efficiency, usability, and comprehensibility of the approach. Our evaluation indicates that the IdeaSpace input method outperforms the baseline method on all metrics.
We investigate superpowers as a way to both present and visually represent interaction techniques in VR. A mixed-design study (n=20) compares variants of the well-known Go-Go interaction technique in a non-game selection task. The primary factors are the use of superhero-themed priming (including a brief backstory intervention and a modified avatar appearance) and the modification of the interaction technique's visual representation to be reminiscent of superhero powers.
Immersive analytics is a form of data visualization that allows users to observe and analyze data in multiple dimensions within a virtual environment (VE). Nonetheless, when immersed in such an environment with a large and abstract dataset, users may lose spatial awareness and be unable to localize themselves within the VE. This issue motivates us to explore visualization techniques that improve users' localization ability. Based on the requirements to utilize the dataset's intrinsic features and to explore both ego- and exocentric reference frames, we propose three techniques to help users localize themselves in the VE for immersive analytics: World-in-Miniature (WIM), Landmarks, and Constellations.
We present a ring-type pointing device that uses a three-axis pressure sensor to control a two-dimensional cursor shown on a large display. The pressure sensor attached to the ring measures the force applied along each axis to control the cursor. Specifically, the movement of the cursor is controlled on the basis of the forces applied along the x- and y-axes, whereas tapping is controlled on the basis of the force applied along the z-axis. The cursor's speed is controlled by the force on either axis: a slight force moves the cursor slowly, which is suitable for short distances, while a stronger force can be applied for longer distances.
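The force-to-cursor mapping described above can be sketched as a simple transfer function: x/y forces drive cursor velocity with a superlinear curve so light forces give slow, precise motion, and a z-axis force past a threshold registers a tap. The gain, exponent, and threshold values are assumptions for illustration, not the device's calibrated parameters.

```python
GAIN = 120.0          # pixels/s per unit normalised force (assumed)
EXPONENT = 1.6        # >1 so slight forces yield slow, precise movement
TAP_THRESHOLD = 0.8   # normalised z-axis force needed to register a tap

def cursor_velocity(fx, fy):
    """Map normalised x/y forces in [-1, 1] to cursor velocity (px/s)."""
    def axis(f):
        sign = 1.0 if f >= 0 else -1.0
        return sign * GAIN * abs(f) ** EXPONENT
    return axis(fx), axis(fy)

def is_tap(fz):
    """A z-axis press beyond the threshold counts as a tap."""
    return fz > TAP_THRESHOLD
```

A linear mapping would work too; the superlinear curve is one common way to reconcile fine positioning at low force with fast travel at high force.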
We propose Shadow Clones, a user interface that presents unattended visual information so users can accomplish multiple tasks in multiple spaces using fast gaze switching. The Shadow Clones interface provides not only a cursor in the view area the user is gazing at but also cursors in the other view areas the user is not gazing at. To validate Shadow Clones, we designed a successive reaching task and compared our approach to a single-cursor interface, in which the cursor was displayed only in the view area the user currently gazes at. The results showed that the Shadow Clones condition improved performance while maintaining fast switching. We expect our approach to serve as an interface design for interactions in which a single user manipulates multiple bodies.
This paper presents insights about children’s manipulative gestures in a spatial puzzle play (i.e. tangram) in both real and virtual environments. We present our initial work with 11 children (aged between 7 and 14) and preliminary results based on a qualitative analysis of children’s goal-directed actions as one dimension of gestural input. Based on our early results, we list a set of goal-directed actions as a first stage for developing a manipulative gestural taxonomy. For a more comprehensive view, we suggest a further in-depth investigation of these actions combined with hand and finger kinematics, and outline a number of paths for future research.
We propose Virtual Triplets, a collaborative Virtual Reality system capable of human-human and human-agent interaction in which users can switch their control between two or more virtual avatars, while a virtual agent takes over control of the free, unpossessed avatars. We developed a use-case scenario in which a single human instructor concurrently supervises two students in a virtual classroom setting. Each student is assigned a personal instructor avatar, and the instructor can switch possession between these two avatars. When the human instructor attends to one student, the virtual agent controls the unpossessed avatar to assume the supervision role for the other student. Virtual Triplets supports recording and playback of the user's actions. For example, the human instructor's demonstration shown to one student can be recorded and played back on a different avatar for another student. Moreover, the human instructor can choose to possess a top-down view instead of an individual avatar to gain an overview of all the workspaces and give commands to the virtual instructor's avatar. Our goal is to enable parallelism in a collaborative environment to improve the user experience when multitasking is required for repetitive activities.