UIST '22 Adjunct: The Adjunct Publication of the 35th Annual ACM Symposium on User Interface Software and Technology


SESSION: Poster Session

What’s Cooking? Olfactory Sensing Using Off-the-Shelf Components

We present Project Sniff, a hardware and software exploration into olfactory sensing with an application in digital communication and social presence. Our initial results indicate that a simple hardware design using off-the-shelf sensors, combined with supervised learning applied to the sensor data, allows us to reliably detect several common household scents. As part of this exploration, we developed a scent-sensing IoT prototype, placed it in the kitchen area to sense "what's cooking?", and shared the olfactory information via a Slack bot. We conclude by outlining our plans for future steps and potential applications of this research.
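
A minimal sketch of the sensing-to-Slack pipeline described above, assuming hypothetical gas-sensor feature vectors, scent labels, and a user-supplied Slack incoming-webhook URL (none of these are from the paper):

```python
# Sketch: classify a scent from off-the-shelf gas-sensor readings and post it to Slack.
# The feature layout, labels, and webhook URL are illustrative assumptions.
import requests
from sklearn.ensemble import RandomForestClassifier

# Each row: readings from a few off-the-shelf gas/VOC sensors (arbitrary units).
X_train = [[210, 0.8, 120], [190, 0.7, 115], [520, 2.1, 400], [510, 2.0, 390]]
y_train = ["coffee", "coffee", "fried onions", "fried onions"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

def report_scent(sample, webhook_url):
    """Classify one sensor sample and announce it via a Slack incoming webhook."""
    scent = clf.predict([sample])[0]
    requests.post(webhook_url, json={"text": f"What's cooking? It smells like {scent}."})

report_scent([515, 2.05, 395], "https://hooks.slack.com/services/XXX/YYY/ZZZ")  # placeholder URL
```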

RCSketch: Sketch, Build, and Control Your Dream Vehicles

We present RCSketch, a system that lets children sketch their dream vehicles in 3D, build moving structures of those vehicles, and control them from multiple viewpoints. As a proof of concept, we implemented our system and designed five vehicles that could perform a wide variety of realistic movements.

Puppeteer: Manipulating Human Avatar Actions with Intuitive Hand Gestures and Upper-Body Postures

We present Puppeteer, an input prototype system that allows players to directly control their avatars through intuitive hand gestures and upper-body postures. We selected 17 avatar actions identified in a pilot study and conducted a gesture elicitation study with 12 participants to design the most representative hand gestures and upper-body postures for each action. We then implemented a prototype system that uses the MediaPipe framework to detect keypoints and a self-trained model to recognize the 17 hand gestures and 17 upper-body postures. Finally, three applications demonstrate the interactions enabled by Puppeteer.
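
As an illustration of the keypoint-detection step, the sketch below uses MediaPipe's Python hand-tracking API to extract hand landmarks from webcam frames; the downstream gesture classifier is only stubbed out, since the paper's self-trained model is not specified here:

```python
# Sketch: extract hand keypoints with MediaPipe and hand them to a (stubbed) classifier.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def classify_gesture(landmarks):
    # Placeholder for the self-trained recognizer described in the abstract.
    return "unknown"

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # 21 (x, y, z) keypoints, normalized to the image size.
        pts = [(lm.x, lm.y, lm.z) for lm in result.multi_hand_landmarks[0].landmark]
        print(classify_gesture(pts))
cap.release()
```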

MoonBuddy: A Voice-based Augmented Reality User Interface That Supports Astronauts During Extravehicular Activities

As NASA pursues Artemis missions to the moon and beyond, it is essential to equip astronauts with the appropriate human-autonomy enabling technology necessary for the elevated demands of lunar surface exploration and extreme terrestrial access. We present MoonBuddy, an application built for the Microsoft HoloLens 2 that utilizes Augmented Reality (AR) and voice-based interaction to assist astronauts in communication, navigation, and documentation on future lunar extravehicular activities (EVAs), with the goal of reducing cognitive load and increasing task completion. User testing results for MoonBuddy under simulated lunar conditions have been positive overall, with participants indicating that the application was easy to use and helpful in completing the required tasks.

Towards using Involuntary Body Gestures for Measuring the User Engagement in VR Gaming

Understanding the degree of user engagement in a VR game is vital to providing a better gaming experience. While prior work has suggested self-reports and biological signal-based methods, measuring game engagement remains a challenge due to its complex nature. In this work, we provide a preliminary exploration of using involuntary body gestures to measure user engagement in VR gaming. Based on data collected from 27 participants playing multiple VR games, we demonstrate a relationship between foot gesture-based models for measuring arousal and physiological responses while engaging in VR games. Our findings show the possibility of using involuntary body gestures to measure engagement.

Bodyweight Exercise based Exergame to Induce High Intensity Interval Training

Exergames have been proposed as an attractive way of making exercise fun; however, most of them do not reach the recommended intensity. Although HCI research has explored how exergames can be designed to follow High Intensity Interval Training (HIIT), an effective form of exercise consisting of intermittent vigorous activity and short rest or low-intensity periods, there are limited studies on designing bodyweight exercise (BWE) based exergames that follow HIIT. In this paper, we propose a BWE-based exergame that encourages users to maintain high-intensity exercise. Our initial study (n=10) showed that the exergame had a significant effect on enjoyment, while the ratio of incorrect postures (e.g., squats) also increased due to participants' concentration on the exergame, which suggests design implications for future BWE-based exergames.

Involuntary Exhalation Control by Facial Vibration

Breathing affects physical and mental health as well as skills in playing sports and musical instruments. Previous studies proposed various methods using sensory stimuli to assist users in controlling their breathing voluntarily by paying attention to it. However, focusing on breathing is difficult during sports or instrument playing because there are many other factors to attend to. Therefore, we propose a wearable system that controls the user's exhalation involuntarily through facial vibration, pushing air from the cheeks independently of the user's voluntary breathing. Our system can control the exhaled air velocity and duration by changing the frequency, amplitude, and duration of the facial vibration. We expect our system to help novices acquire advanced skills in playing wind instruments, such as circular breathing.

HapticPuppet: A Kinesthetic Mid-air Multidirectional Force-Feedback Drone-based Interface

Providing kinesthetic force-feedback for human-scale interactions is challenging due to the relatively large forces needed. Therefore, robotic actuators are predominantly used to deliver this kind of haptic feedback; however, they offer limited flexibility and spatial resolution. In this work, we introduce HapticPuppet, a drone-based force-feedback interface that can exert multidirectional forces onto the human body. This is achieved by attaching strings to different parts of the body, such as the fingers, hands or ankles, which can then be affixed to multiple coordinated drones, puppeteering the user. HapticPuppet opens up a wide range of potential applications in virtual, augmented and mixed reality, exercising, physiotherapy, remote collaboration as well as haptic guidance.

A11yBoard: Using Multimodal Input and Output to Make Digital Artboards Accessible to Blind Users

We present A11yBoard, an interactive multimodal system that makes interpreting and authoring digital artboards, such as presentation slides or vector drawings, accessible to blind and low-vision (BLV) users. A11yBoard combines a web-based application with a mobile touch screen device such as a smartphone or tablet. The artboard is mirrored from the PC onto the touch screen, enabling spatial exploration of the artboard via touch and gesture. In addition, speech recognition and non-speech audio are used for input and output, respectively. Finally, keyboard input is used with a custom search-driven command line interface to access various commands and properties. These modalities combine into a rich, accessible system in which artboard contents, such as shapes, lines, text boxes, and images, can be interpreted, generated, and manipulated with ease. With A11yBoard, BLV users can not only consume accessible content, but create their own as well.

Rapid Prototyping Dynamic Robotic Fibers for Tunable Movement

Liquid crystal elastomers (LCEs) are promising shape-changing actuators for soft robotics in human–computer interaction (HCI). Current LCE manufacturing processes, such as fiber-drawing, extrusion, and 3D printing, face limitations in form-giving and accessibility. We introduce a novel rapid-prototyping approach for thermo-responsive LCE fiber actuators based on vacuum molding extrusion. Our contributions are threefold: (a) a vacuum fiber molding (VFM) machine, (b) LCE actuators with customizable fiber shapes, and (c) open-source hackability of the machine. We build and test the VFM machine to generate shape-changing movements from four fiber actuators (pincer, curl, ribbon, and hook), and we examine how these new morphologies bridge towards soft robotic device integration.

ARfy: A Pipeline for Adapting 3D Scenes to Augmented Reality

Virtual content placement in physical scenes is a crucial aspect of augmented reality (AR). This task is particularly challenging when the virtual elements must adapt to multiple target physical environments unknown during development. AR authors use strategies such as manual placement performed by end-users, automated placement powered by author-defined constraints, and procedural content generation to adapt virtual content to physical spaces. Although effective, these options require human effort or annotated virtual assets. As an alternative, we present ARfy, a pipeline to support the adaptive placement of virtual content from pre-existing 3D scenes in arbitrary physical spaces. ARfy does not require intervention by end-users or asset annotation by AR authors. We demonstrate the pipeline capabilities using simulations on a publicly available indoor space dataset. ARfy makes any generic 3D scene automatically AR-ready and provides evaluation tools to facilitate future research on adaptive virtual content placement.

“Inconsistent Performance”: Understanding Concerns of Real-World Users on Smart Mobile Health Applications Through Analyzing App Reviews

While smart mobile health apps that adapt to users' progressive individual needs are proliferating, many of them struggle to fulfill their promises due to an inferior user experience. Understanding the concerns of real-world users related to those apps, and their smart components in particular, could help advance app design to attract and retain users. In this paper, we target this issue through a preliminary thematic analysis of 120 user reviews of six smart health apps. We found that accuracy, customizability, and convenience of data input are primary concerns raised in real-world user reviews. Many concerns about the smart components relate to users' trust in the apps. However, several important aspects, such as privacy and fairness, were rarely discussed in the reviews. Overall, our study provides insights that can inspire further investigations to support the design of smart mobile health apps.

Wemoji: Towards Designing Complementary Communication Systems in Augmented Reality

Augmented Reality (AR) can enable new forms of self-expression and communication. However, little is known about how AR experiences should be designed to complement face-to-face communication. We present an initial set of insights derived from the iterative design of a mobile AR app called Wemoji. It enables people to emote or react to one another by spawning visual AR effects in a shared physical space. As an additional communication medium, it can help add volume and dimension to what is exchanged. We outline a design space for AR complementary communication systems, and offer a set of insights from initial testing that points towards how AR can be used to enhance same-time same-place social interactions.

RemoconHanger: Making Head Rotation in Remote Person using the Hanger Reflex

For remote collaboration, it is essential to intuitively grasp the situation and spatial location. However, the difficulty of grasping information about the remote user's orientation can hinder remote communication. For example, if a remote user turns his or her head to the right to operate a device on the right and this sensation cannot be shared, the image sent by the remote user suddenly appears to flow laterally, and the positional relationship is lost, as in Figure 1 (left). Therefore, we propose a device using the "hanger reflex" to let users intuitively experience the sensation of head rotation. The "hanger reflex" is a phenomenon in which the head turns unconsciously when a wire hanger is placed on the head. It has been verified that the sensation of turning is produced by the distribution of pressure exerted by a device worn on the head. This research aims to construct and verify a mechanism for telecommunication that lets a local user unconsciously experience the remote user's rotation sensation using the hanger reflex phenomenon. An inertial measurement unit (IMU) captures the remote user's rotation information, as shown in Figure 1 (right).

SpiceWare: Simulating Spice Using Thermally Adjustable Dinnerware to Bridge Cultural Gaps

Preference and tolerance towards spicy food may vary depending on culture, location, upbringing, personality, and even gender. Because of this, spicy food can often affect social interaction at the dining table, especially if it is presented as a cultural dish. We propose SpiceWare, a thermally adjustable spoon that alters the perception of spice to improve cross-cultural communication. SpiceWare is a 3D-printed aluminium spoon that houses a Peltier element providing thermal feedback up to 45°C, which can alter the taste perception of the user. As an initial evaluation, we conducted a workshop with participants of varying cultural backgrounds and observed their interactions when dining on spicy food. We found that the overall interaction was perceived to be more harmonious, and we discuss potential future work on improving the system.

Early Usability Evaluation of a Relational Agent for the COVID-19 Pandemic

Relational agents (RAs) have shown effectiveness in various health interventions with and without healthcare professionals (HCPs) and hospital facilities. RAs have not been widely researched in the COVID-19 context, although they can deliver health interventions during the pandemic. Addressing this gap, this work presents an early usability evaluation of a prototypical RA, iteratively designed and developed in collaboration with infected patients (n=21) and two groups of HCPs (n=19, n=16) to aid COVID-19 patients at various stages with four main tasks: testing guidance, support during self-isolation, handling emergency situations, and promoting post-infection mental well-being. The prototype obtained an average score of 58.82 on the System Usability Scale (SUS) after being evaluated by 98 people. This result implies that the suggested design still needs to be improved for greater usability and adoption.

Thumble: One-Handed 3D Object Manipulation Using a Thimble-Shaped Wearable Device in Virtual Reality

Conventional controllers or hand-tracking interactions in VR cause hand fatigue while manipulating 3D objects because repetitive wrist rotation and hand movements are often required. As a solution to this inconvenience, we propose Thumble, a novel wearable input device worn on the thumb for modifying the orientation of 3D objects. Thumble rotates 3D objects according to the orientation of the thumb and uses the thumb pad as an input surface on which the index finger rubs to control the direction and degree of rotation. Therefore, it requires minimal motion of the wrist and arm. Through an informal user study, we collected subjective feedback and found that Thumble requires less hand movement than a conventional VR controller.

Sharing Heartbeat: Toward Conducting Heartrate and Speech Rhythm through Tactile Presentation of Pseudo-heartbeats

The ongoing COVID-19 pandemic makes physical contact, such as handshakes, difficult. However, physical contact is effective in strengthening the bonds between people. In this study, we aim to compensate for the physical contact lost during the COVID-19 pandemic by presenting a pseudo-heartbeat through a speaker to reproduce entrainment and the synchronized state of heartbeats induced by physiological synchronization. We evaluated the effects of the device in terms of speech rhythm and heart rate. The experimental results showed that a presentation of 80 BPM significantly reduced the difference in heart rate between two participants, bringing them closer to a synchronized heart-rate state. The heart rates of participants were significantly lower when 45 BPM and 80 BPM were presented than when no stimulus was given. Furthermore, when 45 BPM was presented, the silent periods between conversational turns were significantly longer than when no stimulus was given. These results indicate that the device can intentionally create the entrainment phenomenon and a synchronized heart-rate state, thereby producing the effect of physical-contact communication without contact.

SomaFlatables: Supporting Embodied Cognition through Pneumatic Bladders

Applying the theory of Embodied Cognition through design allows us to create computational interactions that engage our bodies by modifying our body schema. However, in HCI, most of these interactive experiences have centered on sensing-based systems that leverage our body's position and movement to offer an experience, such as games using the Nintendo Wii and Xbox Kinect. In this work, we created two pneumatic inflatable-based prototypes that actuate the body to support embodied cognition in two scenarios by altering the user's body schema. We call these "SomaFlatables" and demonstrate the design and implementation of these inflatable-based prototypes, which can move and even extend our bodies, allowing for novel bodily experiences. Furthermore, we discuss future work and limitations of the current implementation.

Fringer: A Finger-Worn Passive Device Enabling Computer Vision Based Force Sensing Using Moiré Fringes

Despite the importance of utilizing forces when interacting with objects, sensing force interactions without active force sensors is challenging. We introduce Fringer, a finger sleeve that physically visualizes force so that a camera can estimate it without any active sensors. The sleeve has stripe-pattern slits, a sliding paper with a stripe pattern, and a compliant layer that converts force into movement of the sliding paper. The slit and paper patterns have slightly different spatial frequencies, creating Moiré fringes that magnify the small displacement caused by compression of the compliant layer so that a webcam can easily capture it.
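
For intuition (not taken from the paper), the standard Moiré relation for two ideal linear gratings shows how a small slide of the inner pattern becomes a much larger fringe motion:

```latex
% Two gratings with pitches p_1 (moving) and p_2 (fixed) produce Moiré fringes of period P.
% A displacement \delta of the moving grating shifts the fringes by \Delta,
% so the optical magnification P / p_1 is large when the two pitches are close.
P = \frac{p_1 p_2}{\lvert p_2 - p_1 \rvert}, \qquad
\Delta = \delta \cdot \frac{P}{p_1} = \delta \cdot \frac{p_2}{\lvert p_2 - p_1 \rvert}
```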

Self-Supervised Approach for Few-shot Hand Gesture Recognition

Data-driven machine learning approaches have become increasingly common in human-computer interaction (HCI) tasks. However, compared with traditional machine learning tasks, for which large datasets are available and maintained, each HCI project needs to collect new datasets because HCI systems usually propose new sensing methods or use cases. Such datasets tend to be small, leading to low performance or placing a burden on participants in user studies. In this paper, taking hand gesture recognition using wrist-worn devices as a typical HCI task, I propose a self-supervised approach that achieves high performance with little burden on the user. The experimental results showed that hand gesture recognition was achieved with a very small number of labeled training samples (five samples with 95% accuracy for 5 gestures and 10 samples with 95% accuracy for 10 gestures). The results suggest that when a user wants to design 5 new gestures, they can activate the feature in less than 2 minutes. I discuss the potential of this self-supervised framework for the HCI community.
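
A minimal sketch of the few-shot classification stage, assuming a pretrained self-supervised encoder (the `encode` function below is a trivial stand-in, not the paper's model); gestures are recognized by nearest-centroid matching over the handful of labeled samples:

```python
# Sketch: nearest-centroid few-shot gesture classification over self-supervised embeddings.
import numpy as np

def encode(window: np.ndarray) -> np.ndarray:
    """Stand-in encoder: maps an IMU window (samples x channels) to an embedding.
    A real system would use the pretrained self-supervised network instead."""
    return window.mean(axis=0)

def fit_centroids(labeled_windows):
    """labeled_windows: dict gesture_name -> list of IMU windows (a few per gesture)."""
    return {g: np.mean([encode(w) for w in ws], axis=0) for g, ws in labeled_windows.items()}

def predict(window, centroids):
    emb = encode(window)
    cosine = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(centroids, key=lambda g: cosine(emb, centroids[g]))
```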

Echofluid: An Interface for Remote Choreography Learning and Co-creation Using Machine Learning Techniques

Born from physical activity, dance carries meaning beyond mere body movement. Choreographers interact with audiences' perceptions through the kinaesthetics, creativity, and expressivity of whole-body performance, inviting them to construct experience, emotion, culture, and meaning together. Computational choreography support can bring endless possibilities to one of the most experiential and creative artistic forms. While various interactive and motion technologies have been developed and adopted to support creative choreographic processes, little work has explored incorporating machine learning into choreographic systems, and few remote dance-teaching systems in particular have been proposed. In this exploratory work, we propose Echofluid, a novel AI-based choreographic learning and support system that allows student dancers to compose their own AI models for learning, evaluation, exploration, and creation. In this poster, we present the design, development, and ongoing validation process of Echofluid, and discuss the possibilities of applying machine learning to collaborative art and dance as well as the opportunities for augmenting interactive experiences between performers and audiences with emerging technologies.

HomeView: Automatically Building Smart Home Digital Twins With Augmented Reality Headsets

Digital twins have demonstrated great capabilities in the industrial setting, but the cost of building them prohibits their usage in home environments. We present HomeView, a system that automatically builds and maintains a digital twin using data from Augmented Reality (AR) headsets and Internet of Things (IoT) devices. We evaluated the system in a simulator and found it performs better than the baseline algorithm. The user feedback on programming IoT devices also suggests that contextual information rendered by HomeView is preferable to text descriptions.

Detecting Changes in User Emotions During Bicycle Riding by Sampling Facial Images

In the context of Mobility as a Service (MaaS), bicycles are an important mode of transport for the first and last mile between the home and other transport modalities. Also, due to COVID-19, bicycle users such as food-delivery riders and commuters are increasing. To investigate the riding experience of bicycle users in context and improve MaaS service quality, we propose and describe a method to automatically detect changes in user emotions during bicycle riding by sampling facial images with a smartphone. We describe the proposed method and how we plan to use it in the future.

PerSign: Personalized Bangladeshi Sign Letters Synthesis

Bangladeshi Sign Language (BdSL), like other sign languages, is tough to learn for general people, especially when it comes to expressing letters. In this poster, we propose PerSign, a system that can reproduce a person's image by introducing sign gestures into it. We make this operation "personalized", which means the generated image keeps the person's initial image profile (face, skin tone, attire, background) unchanged while altering the hand, palm, and finger positions appropriately. We use an image-to-image translation technique and build a corresponding unique dataset to accomplish the task. We believe the translated image can reduce the communication gap between signers and non-signers without requiring prior knowledge of BdSL.

LV-Linker: Supporting Fine-grained User Interaction Analyses by Linking Smartphone Log and Recorded Video Data

Data-driven mobile design, an important UI/UX research technique, often requires analyzing recorded screen video data and time-series usage log data, because doing so helps obtain a deeper understanding of fine-grained usage behaviors. However, there is a lack of interactive tools that support simultaneous navigation of both mobile usage logs and video data. In this paper, we propose LV-Linker (Log and Video Linker), a web-based data viewer that synchronizes smartphone usage logs and video data to help researchers quickly analyze and easily understand user behaviors. We conducted a preliminary user study and evaluated the benefits of linking both data types by measuring task completion time, helpfulness, and subjective task workload. Our results showed that offering linked navigation significantly lowers task completion time and task workload, and promotes data understanding and analysis fidelity.
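
The core linking step can be illustrated in a few lines: given a shared clock, a video playhead position is mapped to the nearest log entry by binary search over log timestamps (a sketch of the general idea, not LV-Linker's actual implementation):

```python
# Sketch: map a video playhead time to the closest usage-log entry via binary search.
import bisect

def nearest_log_index(log, video_start_ts, playhead_s):
    """log: list of (timestamp_s, event) sorted by timestamp; timestamps share the video's clock."""
    target = video_start_ts + playhead_s
    times = [t for t, _ in log]
    i = bisect.bisect_left(times, target)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(log)]
    return min(candidates, key=lambda j: abs(times[j] - target))

log = [(100.0, "app_launch"), (102.4, "scroll"), (105.9, "tap:send")]
print(log[nearest_log_index(log, video_start_ts=100.0, playhead_s=5.5)])  # -> (105.9, 'tap:send')
```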

SilentWhisper: faint whisper speech using wearable microphone

Voice interaction is a fundamental human capacity, and we can use voice user interfaces just by speaking. However, in public spaces, we are hesitant to use them out of consideration for those around us and because of low privacy. Silent speech, which recognizes speech movements made without vocalization, has been proposed as a solution to this problem and allows us to maintain our privacy when speaking. However, existing silent speech interfaces are burdensome because a sensor must be kept in contact with the face or mouth, and commands must be prepared for each user. In this study, we propose a method to input whispered speech at a volume too quiet to be heard by others using a pin microphone. Experimental results show a recognition error rate of 13.9% WER and 6.4% CER on 210 phrases. We showed that privacy-preserving vocal input is possible by whispering in a voice that is not comprehensible to others.

The Reflective Maker: Using Reflection to Support Skill-learning in Makerspaces

In recent years, while HCI researchers have developed several systems that leverage the use of reflection for skill learning, the use of reflection-based learning of maker skills remains unexplored. We present ReflectiveMaker - a toolkit for experts and educators to design reflection exercises for novice learners in makerspaces. We describe the three components of our toolkit: (a) a designer interface to author the reflection prompts during fabrication activities, (b) a set of fabrication tools to sense the user’s activities and (c) a reflection diary interface to record the user’s reflections and analyze data on their learning progress. We then outline future work and envision a range of application scenarios.

DIY Graphics Tab: A Cost-Effective Alternative to Graphics Tablet for Educators

Recording lectures is a routine task for online educators, and a graphics tablet is a great tool for it. However, it is very expensive for many instructors. In this paper, we propose an alternative called "DIY Graphics Tab" that functions largely in the same way as a graphics tablet but requires only a pen, paper, and a laptop's webcam. Our system captures images of writing on paper with the webcam and outputs the written contents. The task is not straightforward because of obstacles such as hand occlusion, paper movement, lighting, and perspective distortion due to the viewing angle. A pipeline applies segmentation and post-processing to generate appropriate output from the input frames. We also conducted user experience evaluations with teachers.
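
One plausible version of the per-frame processing (an illustration, not the authors' exact pipeline): correct the page's perspective distortion with OpenCV and binarize the writing, assuming the four page corners come from an upstream detection step:

```python
# Sketch: rectify a webcam frame of the page and extract the writing as a binary image.
import cv2
import numpy as np

def rectify_page(frame, corners, out_w=1280, out_h=960):
    """corners: four (x, y) page corners in the frame, ordered TL, TR, BR, BL
    (assumed to come from an upstream page-detection/segmentation step)."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(frame, M, (out_w, out_h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes with uneven lighting; writing comes out white on black.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 35, 15)
```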

TrackItPipe: A Fabrication Pipeline To Incorporate Location and Rotation Tracking Into 3D Printed Objects

The increasing convergence of the digital and physical worlds creates a growing urgency to integrate 3D printed physical tangibles with virtual environments. Precise position and rotation tracking is essential to integrate such physical objects with a virtual environment. However, available 3D models commonly do not provide tracking support in their composition, which requires modifications by CAD experts. This poses a challenge for users with no prior CAD experience. This work presents TrackItPipe, a fabrication pipeline that supports users by semi-automatically adding tracking capabilities to 3D printable tangibles tailored to environmental requirements. TrackItPipe integrates modifications into the 3D model, produces the respective tangibles for 3D printing, and provides integration scripts for Mixed Reality. Using TrackItPipe, users can rapidly equip objects with tracking capabilities.

TaskScape: Fostering Holistic View on To-do List With Tracking Plan and Emotion

Despite advancements in intelligence and connectivity in the workspace, productivity tools such as to-do list applications still measure workers' performance by a binary state (completed or not yet completed) and thus by the number of tasks completed. Such quantitative measurements can often overlook human values and individual well-being. While concepts such as positive computing and digital well-being are on the rise in the HCI community, few systems have been proposed to effectively integrate holistic considerations of mental and emotional well-being into productivity tools. In this work, we depart from the classic task list management tool and explore the construction of well-being-centered to-do list software. We propose a task management system, TaskScape, which allows users to be aware of two aspects: (1) how they plan and complete tasks and (2) how they feel towards their work. With the proposed system, we will investigate whether having a holistic view of their tasks can facilitate reflection on what they work on, how they stick to their plans, and how their task portfolios support their emotional well-being, nudging users to reflect upon their work, their planning performance, and the emotional value they attach to their work. In this poster, we share the design, development, and ongoing validation progress of TaskScape, which aims to nudge workers to view work productivity holistically, reminding users that work is more than just work; it is part of life.

iMarker: Instant and True-to-scale AR with Invisible Markers

Augmented Reality (AR) is widely used in modern mobile devices for various applications. To achieve a stable and precise AR experience, mobile devices are equipped with various sensors (e.g., dual cameras, LiDAR) to increase the robustness of camera tracking. These sensors largely increase the cost of mobile devices and are usually not available on low-cost devices. We propose a novel AR system that leverages advances in marker-based camera tracking to produce fast and true-to-scale AR rendering on any device with a single camera. Our method enables a computer monitor to host AR markers without taking up valuable screen space or impacting the user experience. Unlike traditional marker-based methods, we exploit the difference between human vision and the camera system, making the AR markers invisible to human vision. We propose an efficient algorithm that allows the mobile device to detect those markers accurately and then recover the camera pose for AR rendering. Since the markers are invisible to human vision, we can embed them in any website and the user will not notice their existence. We also conduct extensive experiments that evaluate the efficacy of our method. The experimental results show that our method is faster and renders virtual objects at a more accurate scale than a state-of-the-art AR solution.

Tie Memories to E-souvenirs: Hybrid Tangible AR Souvenirs in the Museum

Traditional physical souvenirs in museums have three major limitations: monotonous interaction, lack of personalization, and disconnection from the exhibition. To address these problems and make personalized souvenirs a part of the visiting experience, we create a hybrid tangible Augmented Reality (AR) souvenir that combines a physical firework launcher with AR models. An application called AR Firework is designed for customizing the hybrid souvenir as well as for interactive learning in an exhibition in the wild. Multiple interaction methods, including a mobile user interface, hand gestures, and voice, are adopted to create a multi-sensory product. As the first research to propose tangible AR souvenirs, we find that they establish a long-lasting connection between visitors and their personal visiting experiences. This paper promotes the understanding of personalization, socialization, and tangible AR.

Gustav: Cross-device Cross-computer Synchronization of Sensory Signals

Temporal synchronization of behavioral and physiological signals collected through different devices (and sometimes through different computers) is a longstanding challenge in HCI, neuroscience, psychology, and related areas. Previous research has proposed to synchronize sensory signals using (1) dedicated hardware; (2) dedicated software; or (3) alignment algorithms. All these approaches are either vendor-locked, non-generalizable, or difficult to adopt in practice. We propose a simple but highly efficient alternative: instrument the stimulus presentation software by injecting supervisory event-related timestamps, followed by a post-processing step over the recorded log files. Armed with this information, we introduce Gustav, our approach to orchestrate the recording of sensory signals across devices and computers. Gustav ensures that all signals coincide exactly with the duration of each experiment condition, with millisecond precision. Gustav is publicly available as open source software.
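
The general idea of instrumenting the stimulus software can be sketched as follows (an illustration of the approach, not Gustav's actual API): the presentation code writes event-tagged timestamps to a log, and a post-processing step crops each recorded signal to the logged condition boundaries.

```python
# Sketch: write supervisory event timestamps during stimulus presentation,
# then crop a recorded signal to each condition using those timestamps.
import csv
import time

def log_event(writer, label):
    # Wall-clock timestamp assumed to be shared across computers (e.g., NTP-synced clocks).
    writer.writerow([label, time.time()])

# During the experiment (inside the stimulus presentation software):
with open("events.csv", "w", newline="") as f:
    w = csv.writer(f)
    log_event(w, "condition_A_start")
    time.sleep(2.0)                      # ... condition A runs ...
    log_event(w, "condition_A_end")

# Post-processing: keep only the samples recorded during condition A.
def crop(samples, t_start, t_end):
    """samples: list of (timestamp, value) from any device logging wall-clock time."""
    return [(t, v) for t, v in samples if t_start <= t <= t_end]
```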

Towards Semantically Aware Word Cloud Shape Generation

Word clouds are a data visualization technique that showcases a subset of words from a body of text in cluster form, where a word's font size encodes some measure of its relative importance (typically frequency) in the text. This technique is primarily used to help viewers glean the most pertinent information from long text documents and to compare and contrast different pieces of text. Despite their popularity, previous research has shown that word cloud designs are often not optimally suited for analytical tasks such as summarization or topic understanding. We propose a solution for generating a more effective visualization that shapes the word cloud to reflect the key topic(s) of the text. Our method automates the manual image selection and masking that current word cloud tools require to generate shaped word clouds, better allowing for quick summarization. We showcase two approaches, using classical and state-of-the-art methods. Upon successfully generating semantically shaped word clouds using both methods, we performed preliminary evaluations with 5 participants. We found that although most participants preferred shaped word clouds over regular ones, the shape can be distracting and detrimental to information extraction if it is not directly relevant to the text or contains graphical imperfections. Our work has implications for future semantically aware word cloud generation tools as well as for efforts to balance the visual appeal of word clouds with their effectiveness for textual comprehension.
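
For reference, the manual masking workflow that the proposed method automates looks roughly like this with the Python `wordcloud` package (the text and mask image paths are placeholders):

```python
# Sketch of the manual baseline: shape a word cloud with a hand-picked mask image.
import numpy as np
from PIL import Image
from wordcloud import WordCloud

text = open("document.txt").read()
# In wordcloud, white mask pixels are excluded; words fill the non-white silhouette.
mask = np.array(Image.open("shape_mask.png"))

wc = WordCloud(background_color="white", mask=mask, contour_width=1, contour_color="gray")
wc.generate(text)
wc.to_file("shaped_wordcloud.png")
```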

Over-The-Shoulder Training Between Redundant Wearable Sensors for Unified Gesture Interactions

Wearable computers are now prevalent, and it is not uncommon to see people wearing multiple wearable devices. These devices are often equipped with sensors to detect the user's interactions and context. As more devices are worn on the user's body, there is increasing redundancy between the sensors. For example, swiping gestures on a headphone are detected by its touch sensor, but the movement they cause can also be measured by the sensors in a smartwatch or smart rings. We present a new mechanism to train a gesture recognition model using redundant sensor data so that measurements from other sensors can be used to detect gestures performed on a device even if that device is missing. Our preliminary study with 13 participants revealed that a unified gesture recognition model for touch gestures achieved accuracy for 25 gestures (5 gestures × 5 scenarios) where gestures were trained by leveraging the available sensors.
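
A minimal sketch of the cross-device idea, assuming windowed smartwatch IMU data paired with gesture labels captured from the headphone's own touch sensor (the feature choices and names are illustrative, not the paper's):

```python
# Sketch: train a gesture recognizer on watch IMU windows labeled by the headphone's touch sensor,
# so the watch can stand in when the headphone (or its sensor data) is missing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(imu_window: np.ndarray) -> np.ndarray:
    """imu_window: (samples, 6) accelerometer + gyroscope. Simple statistics as illustrative features."""
    return np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])

def train_over_the_shoulder(imu_windows, headphone_labels):
    X = np.stack([features(w) for w in imu_windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, headphone_labels)   # labels come "over the shoulder" from the redundant sensor
    return clf
```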

WireSketch: Bimanual Interactions for 3D Curve Networks in VR

3D content authoring in immersive environments has the advantage of allowing users to see a design result on its actual scale in real time. We present a system to intuitively create and modify 3D curve networks using bimanual gestures in virtual reality (VR). Our system provides a rich vocabulary of interactions in which both hands are used harmoniously following simple and intuitive grammar, and supports comprehensive manipulation of 3D curve networks.

AIx speed: Playback Speed Optimization using Listening Comprehension of Speech Recognition Models

In recent years, more and more time is spent watching videos for online seminars, lectures, and entertainment. To improve time efficiency, people often adjust the playback speed to whatever suits them best. However, it is troublesome to adjust the optimal speed for each video, and even more challenging to change the speed for each speaker within a single video. Therefore, we propose "AIx speed", a system that maximizes the playback speed within the range that a speech recognition model can still recognize and flexibly adjusts the playback speed throughout the video. This system makes it possible to set a flexible playback speed that balances playback time and content comprehension, compared to fixing the playback speed for the entire video.
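
One way to realize the core idea (a sketch under stated assumptions, not the authors' implementation): speed a segment up step by step and keep the highest rate at which an ASR transcript still matches the reference transcript within a tolerable word error rate. The ASR itself is passed in as a callable, since the abstract does not specify which recognizer is used.

```python
# Sketch: find the fastest playback rate at which ASR still understands a segment.
import librosa
from jiwer import wer

def max_comprehensible_speed(audio, sr, reference_text, transcribe, max_wer=0.1):
    """audio: mono waveform; transcribe(waveform, sr) -> text is any ASR callable."""
    best = 1.0
    for speed in [1.0, 1.25, 1.5, 1.75, 2.0, 2.5]:
        sped_up = librosa.effects.time_stretch(audio, rate=speed)
        if wer(reference_text, transcribe(sped_up, sr)) <= max_wer:
            best = speed          # the model still follows the content at this rate
        else:
            break                 # stop at the first rate the model can no longer recognize
    return best
```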

Methods of Gently Notifying Pedestrians of Approaching Objects when Listening to Music

Many people now listen to music with earphones while walking and are less likely to notice approaching people, cars, and other objects. Many methods of detecting approaching objects and notifying pedestrians have been proposed, but few have focused on low-urgency situations or music listeners, and many notification methods are unpleasant. Therefore, in this work, we propose methods of gently notifying pedestrians listening to music of approaching objects using environmental sounds. We conducted experiments in a virtual environment to assess directional perception accuracy and comfort. Our results show that the proposed method allows participants to detect the direction of approaching objects as accurately as explicit notification methods, with less discomfort.

LUNAChair: Remote Wheelchair System that Links Up a Remote Caregiver and Wheelchair Surroundings

We introduce LUNAChair, a remote control and communication system that uses omnidirectional video to connect a remote caregiver to a wheelchair user and a third person around the wheelchair. With the recent growing need for wheelchairs, much of the wheelchair research has focused on wheelchair control, such as fully automatic driving and remote operation. For wheelchair users, conversations with caregivers and third persons around them are also important. Therefore, we propose a system that connects a wheelchair user and a remote caregiver using omnidirectional cameras, which allows the remote caregiver to control the wheelchair while observing both the wheelchair user and his/her surroundings. Moreover, the system facilitates communication by using gaze and hand pointing estimation from an omnidirectional video.

FormSense: A Fabrication Method to Support Shape Exploration of Interactive Prototypes

When exploring the shape of interactive objects, existing prototyping methods can conflict with the iterative process. In this paper, we present FormSense: a simple, fast, and modifiable fabrication method to support the exploration of shape when prototyping interactive objects. FormSense enables touch and pressure sensing through a multi-layer coating approach and a custom touch sensor built from commodity electronic components. We use FormSense to create four interactive prototypes of diverse geometries and materials.

Little Garden: An augmented reality game for older adults to promote body movement

Physical activity is one of the most effective ways to help older adults stay healthy, but traditional training methods for older adults use single, monotonous tasks, often making it difficult for the elderly to achieve good exercise results. In contrast to existing digital games, games based on augmented reality technology have the potential to promote physical activity in the elderly. This paper presents Little Garden, an interactive augmented-reality game designed for older adults. It uses projective augmented reality technology, physical card manipulation, and virtual social scenarios to increase user engagement and motor initiation. The pilot data show that the game system promotes physical engagement and provides a good user experience. We believe that augmented reality technology provides a new approach to interface design for age-appropriate user-interface experiences.

One-Dimensional Eye-Gaze Typing Interface for People with Locked-in Syndrome

People with Locked-in syndrome (LIS) suffer from complete loss of voluntary motor function for speech or handwriting. They are mentally intact, retaining only the control of vertical eye movements and blinking. In this work, we present a one-dimensional typing interface controlled exclusively by vertical eye movements and dwell time that allows them to communicate at will. A hidden Markov model and bigram models are used for auto-completion at both the word and sentence levels. We conducted two preliminary user studies with non-disabled users. The typing interface achieved 3.75 WPM without prediction and 11.36 WPM with prediction.
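
The word-level prediction component can be illustrated with a tiny bigram model (a sketch of the idea, not the paper's trained model):

```python
# Sketch: bigram-based next-word suggestion for the typing interface.
from collections import Counter, defaultdict

def train_bigrams(corpus_sentences):
    counts = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word, k=3):
    # Most frequent continuations of the previously typed word.
    return [w for w, _ in counts[prev_word.lower()].most_common(k)]

bigrams = train_bigrams(["i want some water", "i want to sleep", "i need some rest"])
print(suggest(bigrams, "want"))  # e.g. ['some', 'to']
```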

ASTREL: Prototyping Shape-changing Interface with Variable Stiffness Soft Robotics Module

Prototyping a shape-changing interface is challenging because it requires knowledge of both electronics and mechanical engineering. In this study, we introduce a prototyping platform using soft pneumatic artificial muscles (PAMs) and modular 3D printed reinforcement. To facilitate a wide variety of applications, we propose six types of reinforcement modules capable of shape deformation and/or variable stiffness. Users can create an approximate prototype using LEGO-built modules with magnetic connectors. A modeling toolkit can then be used to recreate and customize the prototype structure. After 3D printing, the shape-changing interface can be assembled by threading the PAMs through holes in the reinforcement. We envision that this prototyping platform can be useful in shape-changing interface exploration, where researchers can create working prototypes easily, rapidly, and at low cost.

Exploring Virtual Object Translation in Head-Mounted Augmented Reality for Upper Limb Motor Rehabilitation with Motor Performance and Eye Movement Characteristics

Head-mounted augmented reality (AR) technology is currently employed in upper limb motor rehabilitation, and the degrees of freedom (DoF) of virtual object translation modes become critical for rehabilitation tasks in AR settings. Since motor performance is the primary focus of motor rehabilitation, this study assessed it across different translation modes (1DoF and 3DoF) via task efficiency and accuracy analysis. In addition, eye movement characteristics were used to further illustrate motor performance. This research revealed that the 1DoF and 3DoF modes each offer their own benefits in upper limb motor rehabilitation tasks. Finally, this study recommends selecting or integrating these two translation modes for future rehabilitation task design.

Amplified Carousel: Amplifying the Perception of Vertical Movement using Optical Illusion

With the spread of virtual reality (VR) attractions, vection generation techniques that enhance the sense of realism are gaining attention. Additionally, mixed reality (MR) attractions, which overlay VR onto a real-world display, are expected to become more prevalent in the future. However, with MR, it is impossible to move all the coordinates of the visual stimuli to generate proper vection effects. Therefore, we have created an optical illusion method that provides a three-dimensional impression from a two-dimensional visual stimulus. The technique amplifies the sensation of vertical movement by placing the illusion on the floor.

HapticLever: Kinematic Force Feedback using a 3D Pantograph

HapticLever is a new kinematic approach for VR haptics which uses a 3D pantograph to stiffly render large-scale surfaces using small-scale proxies. The HapticLever approach does not consume power to render forces, but rather puts a mechanical constraint on the end effector using a small-scale proxy surface. The HapticLever approach provides stiff force feedback when the user interacts with a static virtual surface, but allows the user to move their arm freely when moving through free virtual space. We present the problem space, the related work, and the HapticLever design approach.

LipLearner: Customizing Silent Speech Commands from Voice Input using One-shot Lipreading

We present LipLearner, a lipreading-based silent speech interface that enables in-situ command customization on mobile devices. By leveraging contrastive learning to learn efficient representations from existing datasets, it performs instant fine-tuning for unseen users and words using one-shot learning. To further minimize the labor of command registration, we incorporate speech recognition to automatically learn new commands from voice input. Conventional lipreading systems provide limited pre-defined commands due to the time cost and user burden of data collection. In contrast, our technique provides expressive silent speech interaction with minimal data requirements. We conducted a pilot experiment to investigate the real-time performance of LipLearner, and the result demonstrates that an average accuracy of is achievable with only one training sample for each command.
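
The one-shot matching step can be sketched as cosine similarity between a contrastively learned embedding of the incoming lip sequence and one stored embedding per registered command (the encoder below is a trivial stand-in, not LipLearner's model):

```python
# Sketch: one-shot command recognition by cosine similarity over lip-sequence embeddings.
import numpy as np

def embed(lip_sequence) -> np.ndarray:
    """Stand-in for the contrastively pretrained lipreading encoder."""
    return np.asarray(lip_sequence, dtype=float).ravel()

def register(command_bank, name, lip_sequence):
    command_bank[name] = embed(lip_sequence)          # a single enrollment sample per command

def recognize(command_bank, lip_sequence):
    q = embed(lip_sequence)
    cosine = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(command_bank, key=lambda name: cosine(q, command_bank[name]))
```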

VLOGS: Virtual Laboratory Observation Tool for Monitoring a Group of Students

Virtual laboratories (VLs) enable students to conduct lab experiments in the virtual world using Virtual Reality (VR) technology, providing benefits in areas such as availability, safety, and scalability. While there are existing platforms that provide VLs with rich content, as well as research on promoting effective learning in VLs, less attention has been paid to VLs from a teaching perspective. Students usually learn and practice in VL sessions with limited help from instructors. Instructors, on the other hand, receive only limited information on the performance of the students and cannot provide timely feedback to facilitate students' learning. In this work, we present a prototype virtual laboratory monitoring tool, created using a design thinking approach, to address teaching needs when conducting a virtual laboratory session simultaneously for multiple students, similar to a physical lab environment.

ICEBOAT: An Interactive User Behavior Analysis Tool for Automotive User Interfaces

In this work, we present ICEBOAT, an interactive tool that enables automotive UX experts to explore how users interact with In-Vehicle Information Systems (IVISs). Based on large naturalistic driving datasets continuously collected from production line vehicles, ICEBOAT visualizes drivers' interactions and driving behavior at different levels of detail. Hence, it allows experts to easily compare different user flows based on performance- and safety-related metrics.

Search with Space: Find and Visualize Furniture in Your Space

Online shopping in the home category enables quick and convenient access to a large catalog of products. In particular, users can simultaneously browse for functional requirements, such as size and material, while evaluating aesthetic fit, such as color and style, across hundreds of product offerings. However, the typical user flow requires navigating to an e-commerce retailer's website first, setting search/filter parameters that may be generic, and then landing on product pages, one at a time, to make a decision. Upon purchase, "does not fit" is among the top reasons for returning a product. Bringing the above together, we present Search with Space, a novel interactive approach that (a) takes the user's space as a search parameter to (b) filter for product matches that will physically fit, and (c) visualizes these matches in the user's space at true scale and in a format that facilitates simultaneous comparison. Briefly, the user leverages augmented reality (AR) to place a proxy 3D product in the desired location, updates the proxy's dimensions, and takes photos from preset angles. Using the spatial information captured with AR, a web-based gallery page is curated with all the product matches that will physically fit, and products are shown at true scale in the original photos. The user may now browse products visualized in the context of their space and evaluate them based on their shopping criteria, share the gallery page with designers or partners for asynchronous feedback, re-use the photos for a different product class, or re-capture their space with different criteria altogether. Search with Space inverts the typical user journey by starting with the user's space and maintaining that context across all touch points with the catalog.
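
The fit-filtering step amounts to comparing each product's bounding box against the AR-measured proxy volume; a sketch with illustrative field names and catalog entries:

```python
# Sketch: keep only catalog items whose bounding box fits the AR-measured target volume.
def fits(product_dims, target_dims, clearance=0.02):
    """dims are (width, depth, height) in meters; clearance leaves a small margin."""
    return all(p <= t - clearance for p, t in zip(product_dims, target_dims))

catalog = [
    {"name": "Bookshelf A", "dims": (0.80, 0.30, 1.80)},
    {"name": "Bookshelf B", "dims": (1.10, 0.35, 2.00)},
]
target = (0.95, 0.40, 2.10)  # from the AR proxy placed by the user
matches = [p for p in catalog if fits(p["dims"], target)]
print([p["name"] for p in matches])  # -> ['Bookshelf A']
```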

FullPull: A Stretchable UI to Input Pulling Strength on Touch Surfaces

Touch surfaces are used as input interfaces for many devices, such as smartphones, tablets, and smartwatches. However, the flexibility of the input surface is low, and the possible input operations are limited to planar ones such as touch and swipe. In contrast, in the field of HCI, there has been much research on increasing the number of input interactions by attaching augmented devices with various physical characteristics to the touch surface. However, most of these interactions are limited to operations in which pressure is applied to the input surface. In this study, we propose FullPull, which consists of a rubber tube filled with conductive ink and a suction cup that attaches the tube to the surface. FullPull allows users to input pulling depth and strength on the touch surface. We implemented a prototype FullPull device that can be attached to an existing capacitive touch surface and pulled by the user. We then evaluated the accuracy of the tensile strength estimation of the implemented device. The results showed that the outflow current value when stretched could be classified into four tensile strength levels.

iThem: Programming Internet of Things Beyond Trigger-Action Pattern

With emerging technologies bringing Internet of Things (IoT) devices into domestic environments, trigger-action programming such as IFTTT, with its simple if-this-then-that pattern, provides an effective way for end-users to connect fragmented intelligent services and program their own smart home/workspace automation. While the simplicity of trigger-action programming can be effective for non-programmers thanks to its straightforward concepts and graphical user interface, it does not allow the algorithmic expressivity of a programming language. For instance, the simple if-this-then-that structure cannot cover complex algorithms that arise from real-world scenarios involving multiple conditions or keeping track of a sequence of conditions (e.g., incrementing counters, or triggering one action only if two conditions are both true). In this exploratory work, we take an alternative approach by creating a programmable channel between application programming interfaces (APIs), which allows programmers to preserve state and use it to write complex algorithms. We propose iThem, which stands for "intelligence of them" (internet of things), and which allows programmers to author complex algorithms that connect different IoT services, fully unleashing the freedom of a general-purpose programming language. In this poster, we share the design, development, and ongoing validation progress of iThem, which piggybacks on the existing programmable IoT system IFTTT and provides a programmable channel that connects triggers and actions in IFTTT with versatility.
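
A minimal sketch of the stateful-channel idea (illustrative routes, payloads, and webhook URL; not iThem's actual API): a small web service receives trigger webhooks, keeps state across calls, and fires the outgoing action only when a user-written condition over that state holds.

```python
# Sketch: a stateful channel between trigger and action webhooks.
# Fires the action only after both the "door_opened" and "lights_off" triggers have occurred.
import requests
from flask import Flask, request

app = Flask(__name__)
state = {"door_opened": False, "lights_off": False, "count": 0}
ACTION_WEBHOOK = "https://maker.ifttt.com/trigger/notify_me/with/key/YOUR_KEY"  # placeholder

@app.route("/trigger", methods=["POST"])
def trigger():
    event = request.get_json(force=True).get("event")
    if event in state:
        state[event] = True
    state["count"] += 1                      # state persists across trigger calls
    if state["door_opened"] and state["lights_off"]:
        requests.post(ACTION_WEBHOOK, json={"value1": f"seen {state['count']} events"})
        state.update(door_opened=False, lights_off=False)
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=5000)
```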

Transtiff: A Stick Interface with Various Stiffness by Artificial Muscle Mechanism

We manipulate stick objects, such as chopsticks and pens, in our daily lives. The senses of the human hand are extremely sensitive and can acquire detailed information. By perceiving changes in the feel of the fingers, we perceive the characteristics of an object when we manipulate sticks such as pens and brushes. Therefore, we can extend the tactile experience of touching something by controlling the grasping sensation of a stick. In this study, we propose Transtiff, which adds a joint that dynamically changes its stiffness to a stick object that normally cannot bend. We use a piston mechanism in which a small motor compresses the liquid in a flexible tube to control the stiffness of the joint.

SESSION: Demo Session

Demonstration of Geppetteau: Enabling haptic perceptions of virtual fluids in various vessel profiles using a string-driven haptic interface

Liquids sloshing around in vessels produce unique unmistakable tactile sensations of handling fluids in daily life, laboratory environments, and industrial contexts. Providing nuanced congruent tactile sensations would enrich interactions of handling fluids in virtual reality (VR). To this end, we introduce Geppetteau, a novel string-driven weight-shifting mechanism capable of providing a continuous spectrum of perceivable tactile sensations of handling virtual liquids in VR vessels. Geppetteau’s weight-shifting actuation system can be housed in 3D-printable shells, adapting to varying vessel shapes and sizes. A variety of different fluid behaviors can be felt using our haptic interface. In this work, Geppetteau assumes the shape of conical, spherical, cylindrical, and cuboid flasks, widening the range of augmentable shapes beyond the state-of-the-art of existing mechanical systems.

KineCAM: An Instant Camera for Animated Photographs

The kinegram is a classic animation technique that involves sliding a striped overlay across an interlaced image to create the effect of frame-by-frame motion. While there are known tools for generating kinegrams from pre-existing videos and images, there exists no system for capturing and fabricating kinegrams in situ. To bridge this gap, we created KineCAM, an open-source instant camera that captures and prints animated photographs in the form of kinegrams. KineCAM combines the form factor of instant cameras with the expressiveness of animated photographs to explore and extend creative applications for instant photography.

Demonstrating ex-CHOCHIN: Shape/Texture-changing cylindrical interface with deformable origami tessellation

We demonstrate ex-CHOCHIN, a cylindrical shape/texture-changing display inspired by the chochin, a traditional Japanese foldable lantern. Ex-CHOCHIN achieves complex control over origami, such as local deformation and control of the intermediate stages of folding, by attaching multiple mechanisms to the origami tessellation. This results in flexible deformations that can adapt to a wide range of shapes, a single continuous surface without gaps, and even changes in texture. It creates several deformed shapes from one crease pattern, allowing flexible deformation to function as a display. We have also produced an application using ex-CHOCHIN.

During this hands-on demo at UIST, attendees will manipulate the amount of change in shape and texture and interact with ex-CHOCHIN by seeing and touching.

Expert Goggles: Detecting and Annotating Visualizations using a Machine Learning Classifier

Demonstration of Lenticular Objects: 3D Printed Objects with Lenticular Lens Surfaces That Can Change their Appearance Depending on the Viewpoint

We present Lenticular Objects, which are 3D objects that appear different from different viewpoints. We accomplish this by 3D printing lenticular lenses across the curved surfaces of objects and computing the underlying surface color patterns, which enables the object to present a different appearance to the user at each viewpoint.

In addition, we present the Lenticular Objects 3D editor, which takes as input a 3D model and multiple surface textures, i.e., images, that are visible at multiple viewpoints. Our tool calculates the lens placement distribution on the surface of the object and the underlying color pattern. On export, the user can use ray tracing to live-preview the resulting appearance from each angle. The 3D model, color pattern, and lenses are then 3D printed in one pass on a multi-material 3D printer to create the final 3D object. We demonstrate our system in practice with a range of use cases that benefit from appearing differently under different viewpoints.

ConfusionLens: Dynamic and Interactive Visualization for Performance Analysis of Multiclass Image Classifiers

Building higher-quality image classification models requires better performance analysis (PA) methods to help understand their behaviors. We propose ConfusionLens, a dynamic and interactive visualization interface that augments a conventional confusion matrix with focus+context visualization. This interface makes it possible to adaptively provide relevant information for different kinds of PA tasks. Specifically, it allows users to seamlessly switch table layouts among three views (overall view, class-level view, and between-class view) while observing all of the instance images on a single screen. This paper presents a ConfusionLens prototype that supports hundreds of instances, along with several extensions that further support practical PA tasks, such as activation map visualization and instance sorting/filtering.

Thermoformable Shell for Repeatable Thermoforming

We propose a thermoformable shell called TF-Shell that allows repeatable thermoforming. Due to the low thermal conductivity of typical printing materials such as polylactic acid (PLA), thermoforming 3D printed objects is largely limited. By embedding TF-Shell, users can thermoform target parts in diverse ways. Moreover, the deformed structures can be restored by reheating. In this demo, we introduce TF-Shell and demonstrate four thermoforming behaviors with a TF-Shell-embedded figure. With our approach, we envision bringing the value of hands-on craft to digital fabrication.

Point Cloud Capture and Editing for AR Environmental Design

We present a tablet-based system for AR environmental design using point clouds. It integrates point cloud capture and editing in a single AR workflow to help users quickly prototype design ideas in their spatial context. We hypothesize that point clouds are well suited for prototyping, as they can be captured rapidly and then edited immediately on the capturing device in situ. Our system supports a variety of point cloud editing operations in AR, including selection, transformation, hole filling, drawing, and animation. This enables a wide range of design applications for objects, interior environments, buildings, and landscapes.

A Triangular Actuating Device Stand that Dynamically Adjusts Mobile Screen’s Position

This demo presents a triangular actuating device stand that can automatically adjust the height and tilt angle of a mounted mobile device (e.g., a smartphone) to adapt to the user’s varying interaction needs (e.g., touch, browsing, and viewing). We employ a unique mechanism that deforms the stand’s triangular shape with two extendable reel actuators, which enables us to reposition the mobile screen mounted on the hypotenuse side. Each actuator is managed by the mobile device and controls the height or the base of the stand’s triangle, respectively. To demonstrate the potential of our new actuating device stand, we present two types of interaction scenarios: manual device repositioning based on the user’s postures or gestures captured by the device’s front camera, and automatic device repositioning that adapts to the on-screen content the user will interact with (e.g., touch-based menus, web browsers, illustration tools, and video viewers).

DATALEV: Acoustophoretic Data Physicalisation

Here, we demonstrate DataLev, a data physicalisation platform with a physical assembly pipeline that allows us to computationally assemble 3D physical charts using acoustically levitated contents. DataLev consists of several enhancement props that allow us to incorporate high-resolution projection, different 3D printed artifacts and multi-modal interaction. DataLev supports reconfigurable and dynamic physicalisations that we animate and illustrate for different chart types. Our work opens up new opportunities for data storytelling using acoustic levitation.

Anywhere Hoop: Virtual Free Throw Training System

To complete a high percentage of free throws in basketball, the shooter must achieve a stable ball trajectory, which requires repeated shooting practice. However, traditional practice methods require a real basketball hoop, which makes it difficult for some players to set up a practice environment. We propose a training method for free throws that uses a virtual basketball hoop. In this paper, we present an implementation of the proposed method and experimental results showing its effectiveness.
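
For a concrete picture of how a throw might be scored against a virtual hoop (an illustrative sketch under simple no-spin projectile assumptions, not the authors' implementation; the court dimensions are standard values):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def passes_through_hoop(v0, angle_deg, release_h,
                        hoop_dist=4.6, hoop_h=3.05, rim_r=0.23):
    """Check whether a simple no-spin projectile reaches the rim area.

    Given launch speed v0 (m/s), launch angle, and release height (m),
    compute the ball height when it reaches the hoop's horizontal
    distance and compare it with the rim height.
    """
    a = math.radians(angle_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    t = hoop_dist / vx                        # time to reach the hoop plane
    y = release_h + vy * t - 0.5 * G * t**2   # ball height at that time
    return abs(y - hoop_h) <= rim_r

print(passes_through_hoop(v0=7.6, angle_deg=55.0, release_h=2.0))  # True for this example throw
```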

Shadowed Speech: an Audio Feedback System which Slows Down Speech Rate

In oral communication, it is important to speak at a speed appropriate for the situation. However, controlling our speech rate as intended requires considerable training. This paper proposes a speech rate control system that enables the user to speak at a pace closer to the target rate using Delayed Auditory Feedback (DAF). We implement a prototype and confirm that the proposed system can slow down the speech rate when the user speaks too fast, without giving the speaker any instructions on how to respond to the audio feedback.
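
Conceptually, DAF feeds the speaker's own voice back to their ears with a fixed lag. A minimal sketch of that delay line (illustrative only; the delay length is an assumed parameter):

```python
from collections import deque

def delayed_feedback(samples, delay_samples):
    """Return the microphone signal delayed by a fixed number of samples,
    as would be fed back to the speaker's headphones in DAF.

    For example, a 200 ms delay at a 16 kHz sample rate corresponds to
    delay_samples = 3200.
    """
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for s in samples:
        out.append(buf[0])  # oldest buffered sample is played back first
        buf.append(s)       # newest microphone sample enters the buffer
    return out

print(delayed_feedback([1.0, 2.0, 3.0, 4.0], delay_samples=2))  # [0.0, 0.0, 1.0, 2.0]
```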

M&M: Molding and Melting Method Using a Replica Diffraction Grating Film and a Laser for Decorating Chocolate with Structural Color

Chocolate is a food loved around the world. Methods to decorate chocolate with patterns drawn in structural colors have been developed; however, these methods require precision molds with nanoscale features, which cost considerable money and time. In this paper, I propose a new method to decorate chocolate with structural color using a laser engraving machine and a replica diffraction grating film. The proposed method consists of two simple steps: 1) molding chocolate on a replica diffraction grating film, and 2) melting the structurally colored chocolate with a laser to draw a design. The proposed method allows chocolates decorated with structural color to be created through a simple manufacturing process with inexpensive equipment, without special precision molds.

Touchibo: Multimodal Texture-Changing Robotic Platform for Shared Human Experiences

Touchibo is a modular robotic platform for enriching interpersonal communication in human-robot group activities, suitable for children with mixed visual abilities. Touchibo incorporates several modalities, including dynamic textures, scent, audio, and light. Two prototypes are demonstrated for supporting storytelling activities and mediating group conversations between children with and without visual impairment. Our goal is to provide an inclusive platform for children to interact with each other, perceive their emotions, and become more aware of how they impact others.

Demonstrating Finger-Based Dexterous Phone Gestures

This hands-on demonstration enables participants to experience single-handed “dexterous gestures”, a novel approach for the physical manipulation of a phone using the fine motor skills of fingers. A recognizer is developed for variations of “full” and “half” gestures that spin (yaw axis), rotate (roll axis), and flip (pitch axis), all detected using the built-in phone IMU sensor. A functional prototype demonstrates how recognized dexterous gestures can be used to interact with a variety of current smartphone applications.
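
As a hedged sketch of how such gestures could be separated from IMU data (not the authors' recognizer; the axis-to-gesture mapping and thresholds are assumptions), one can integrate the gyroscope on each axis and label the dominant axis and rotation extent:

```python
import math

def classify_dexterous_gesture(gyro_samples, dt):
    """Label a phone gesture from a window of gyroscope readings.

    gyro_samples: list of (pitch_rate, roll_rate, yaw_rate) in rad/s.
    Integrates each axis over the window, takes the dominant axis as
    flip (pitch) / rotate (roll) / spin (yaw), and the rotation amount
    (~180 vs ~360 degrees) as a half or full gesture.
    """
    totals = [sum(sample[i] for sample in gyro_samples) * dt for i in range(3)]
    axis = max(range(3), key=lambda i: abs(totals[i]))
    name = ("flip", "rotate", "spin")[axis]
    degrees = abs(math.degrees(totals[axis]))
    extent = "full" if degrees > 270 else "half" if degrees > 90 else "none"
    return extent, name

# A steady 360-degree yaw over one second, sampled at 100 Hz:
samples = [(0.0, 0.0, 2 * math.pi)] * 100
print(classify_dexterous_gesture(samples, dt=0.01))  # ('full', 'spin')
```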

Demonstrating p5.fab: Direct Control of Digital Fabrication Machines from a Creative Coding Environment

Machine settings and tuning are critical for digital fabrication outcomes. However, exploring these parameters is non-trivial. We seek to enable exploration of the full design space of digital fabrication. To do so, we built p5.fab, a system for controlling digital fabrication machines from the creative coding environment p5.js and informed by a qualitative study of 3D printer maintenance practices. p5.fab prioritizes material exploration, fine-tuned control, and iteration in fabrication workflows. We demonstrate p5.fab with examples of 3D prints that cannot be made with traditional 3D printing software, including delicate bridging structures and prints on top of existing objects.

Music Scope Pad: Video Selecting Interface by Natural Movement in VR Space

This paper describes a novel video selection interface that enables users to select videos without having to click a mouse or touch a screen. Existing video players let us see and hear only one video at a time, so we must play pieces individually to find the one we want among numerous new videos, such as music videos, which involves many mouse and screen-touch operations. The main advantage of our interface is that it detects natural movements, such as head or hand movements made while listening, so users can focus on a particular sound source that they want to hear. By moving their head left or right, users can hear a source from a frontal position, as the tablet detects changes in the direction they are facing. By putting their hand behind their ear, users can focus on a particular sound source.

Extail: A Kinetic Inconspicuous Wearable Hair Extension Device

Wearable devices that present the wearer’s information have typically been designed to stand out when worn, making it difficult to conceal that they are being worn. We have therefore been developing and evaluating dynamic hair expressions that present the wearer’s information through an inconspicuous wearable device. Prior work on hair interaction is limited in the hairstyles and expressions to which it can be applied. In this paper, we focus on the output mechanism and present Extail, a hair-extension-type device with a control mechanism that moves bundled hair like a tail. The results of a questionnaire survey on the correspondence between hair-bundle movements and emotional expression were generally consistent with the evaluations of tail devices in related studies.

TouchVR: A Modality for Instant VR Experience

In the near future, we envision instant and ubiquitous access to VR worlds. However, existing highly portable VR devices usually lack rich and convenient input modalities. In response, we introduce TouchVR, a system that enables back-of-device (BoD) interaction in instant VR supported by mobile HMDs.

We present the overall architecture and prototype of the TouchVR system on the Android platform, which can be easily integrated into Unity-based mobile VR applications. Furthermore, we implement a sample application (a 360° video player) to demonstrate the usage of our system. We expect TouchVR to be a step toward enriching interaction methods in instant VR.

SpinOcchietto: A Wearable Skin-Slip Haptic Device for Rendering Width and Motion of Objects Gripped Between the Fingertips

Various haptic feedback techniques have been explored to enable users to interact with their virtual surroundings using their hands. However, investigation of interactions with virtual objects slipping against the skin using skin-slip haptic feedback is still in its early stages. Prior skin-slip virtual reality (VR) haptic display implementations involved bulky actuation mechanisms and were not suitable for multi-finger and bimanual interactions. As a solution to this limitation, we present SpinOcchietto, a wearable skin-slip haptic feedback device that uses spinning discs to render the width and movement of virtual objects gripped between the fingertips. SpinOcchietto was developed to miniaturize and simplify SpinOcchio [1], a 6-DoF handheld skin-slip haptic display. With its smaller, lighter, wearable form factor, SpinOcchietto enables users with a wide range of hand sizes to interact with virtual objects with their thumb and index fingers while freeing the rest of the hand. Users can perceive the speed of virtual objects slipping against the fingertips and can use varying grip strengths to grab and release the objects. Three demo applications showcase the different types of virtual object interactions enabled by the prototype.

Silent subwoofer system using myoelectric stimulation to present acoustic deep bass experiences

This study demonstrates a portable, low-noise system that uses electrical muscle stimulation (EMS) to present a body-sensory acoustic experience similar to that felt at live concerts. To evaluate the system, twenty-four participants wore head-mounted displays (HMDs), headphones, and the proposed system and experienced a live concert in a virtual reality (VR) space. We found that, in terms of perceived precision of rhythm and harmony, the system was not inferior to a system with loudspeakers and subwoofers, for which ambient noise is a concern. These results could be explained by the user perceiving the EMS experience as a single signal when the stimulation is presented in conjunction with visual and acoustic stimuli (e.g., the kicking of a bass drum, the bass sound generated by the kick, and the acoustic sensation caused by the bass sound). The proposed method offers a novel EMS-based body-sensory acoustic experience, and the results of this study may lead to an improved experience not only for live concerts in VR space but also for everyday music listening.

Designing a Hairy Haptic Display using 3D Printed Hairs and Perforated Plates

Haptic displays that convey various material sensations and physical properties on conventional 2D displays or in virtual reality (VR) environments have been widely explored in the field of human-computer interaction (HCI). We introduce a fabrication technique for haptic apparatus using 3D printed hairs, which stimulate the user’s sensory perception with hair-like bristles mimicking furry animals. Design parameters that determine the 3D printed hair’s properties, such as length, density, and direction, affect the stiffness and roughness of the contact area between the hair tip and the sensory receptors in the user’s skin, thus changing the stimulation patterns. To further explore the expressivity of this apparatus, we present a haptic display built with controlling mechanisms. The device is constructed by threading many 3D printed hairs through a perforated plate and manipulating the length and direction of the hairs via a connected inner actuator. We present the design specifications, including printing parameters, assembly, and electronics, through a demonstration of prototypes, and discuss future work.

Knitted Force Sensors

In this demo, we present two types of knitted resistive force sensors for both pressure and strain sensing. They can be manufactured ready-made on a two-bed weft knitting machine, without requiring further post-processing steps. Due to their softness, elasticity, and breathability, our sensors provide an appealing haptic experience. We show their working principle, discuss their advantages and limitations, and elaborate on different areas of application. They are presented as standalone demonstrators, accompanied by exemplary applications that provide insights into their haptic qualities and sensing capabilities.

Calligraphy Z: A Fabricatable Pen Plotter for Handwritten Strokes with Z-Axis Pen Pressure

Desktop publishing software and printing presses make it possible to produce a wide variety of typographic expressions. On the other hand, it is difficult for a printer to perfectly replicate the ink blurring and subtle pressure fluctuations that occur when characters are written with a physical implement. In this study, we reproduce such incidental brushstrokes by using a writing implement to output text layouts created in software. To replicate slight variations in strokes, we developed Calligraphy Z, a system that consists of a writing device and an application. The writing device controls the vertical position of the writing tool in addition to the writing position, thus producing handwritten-like character output; the application generates G-code for operating the device from user input. With the application, users can select their favorite fonts, input words, and adjust the layout to operate the writing device, using several types of extended font data with writing-pressure data prepared in advance. After developing our system, we compared the strokes of several writing implements to select the most suitable one for Calligraphy Z. We also evaluated whether people could distinguish characters output by Calligraphy Z from those output by a printing machine. We found that participants perceived features of handwritten characters, such as ink blotting and fine blurring of strokes, in the characters produced by our system.
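
To make the G-code idea concrete (a simplified sketch, not the Calligraphy Z generator; the Z-depth mapping and feed rate are assumptions), a stroke with per-point pressure can be emitted as moves whose Z coordinate tracks the pressure:

```python
def stroke_to_gcode(points, feed=1500):
    """Convert one stroke into G-code for a pen plotter with a Z axis.

    points: list of (x, y, pressure) with pressure in [0, 1]; heavier
    pressure pushes the pen lower (more negative Z), approximating the
    pressure variation of handwriting. Coordinates are in millimetres.
    """
    z_up, z_max_down = 2.0, -0.4   # travel height and deepest pen press
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    x0, y0, _ = points[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f} Z{z_up:.2f} ; move above stroke start")
    for x, y, p in points:
        z = z_max_down * p          # map pressure to pen depth
        lines.append(f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F{feed}")
    lines.append(f"G0 Z{z_up:.2f} ; lift pen")
    return "\n".join(lines)

print(stroke_to_gcode([(10, 10, 0.2), (12, 11, 0.8), (14, 12, 0.4)]))
```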

Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi

Most widespread haptic feedback devices for augmented and virtual reality (AR/VR) fall into one of two categories: simple hand-held controllers with a single vibration actuator, or complex glove systems with several embedded actuators. In this work, we explore haptic feedback on the wrist for interacting with virtual objects. We use Tasbi, a compact bracelet device capable of rendering complex multisensory squeeze and vibrotactile feedback. Leveraging Tasbi’s haptic rendering, together with the standard visual and audio rendering of a head-mounted display, we present several interactions that tightly integrate sensory-substitutive haptics with visual and audio cues. Interactions include push/pull buttons, rotary knobs, textures, rigid body weight and inertia, and several custom bimanual manipulations such as shooting an arrow from a bow. These demonstrations suggest that wrist-based haptic feedback substantially improves virtual hand-based interactions in AR/VR compared to no haptic feedback.

InfraredTags Demo: Invisible AR Markers and Barcodes Using Infrared Imaging and 3D Printing

We showcase InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through, and by having air gaps inside for the tag’s bits, which appear at a different intensity in the infrared image.

We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. We demonstrate how our method enables various applications, such as object tracking and embedding metadata for augmented reality and tangible interactions.
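
For intuition about the decoding side (a minimal sketch, not the authors' pipeline; the thresholding strategy and file name are assumptions), once the near-infrared image makes the air-gap bits visible, an off-the-shelf 2D-code detector can read them:

```python
import cv2

def decode_infrared_tag(ir_frame):
    """Decode a QR-style tag from a near-infrared camera frame.

    The embedded air gaps appear at a different intensity than the
    surrounding infrared-transmitting material, so thresholding the IR
    image recovers a standard 2D code that a stock detector can read.
    """
    gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY) if ir_frame.ndim == 3 else ir_frame
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(binary)
    return data or None

# Example usage with a hypothetical capture file:
# frame = cv2.imread("ir_capture.png"); print(decode_infrared_tag(frame))
```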

Demonstrating a Fabricatable Bioreactor Toolkit for Small-Scale Biochemical Automation

Biological and chemical engineering creates novel materials through custom workflows. Supporting such materials development through systems research such as toolkits and software is increasingly of interest to HCI. Bioreactors are widely used systems which can grow materials, converting feedstock into valuable products through fermentation. However, integrated bioreactors are difficult to design and program. We present a modular toolkit for developing custom bioreactors. Our toolkit contains custom hardware and software for adding chemicals, monitoring the mixture, and refining outputs. We demonstrate our bioreactor toolkit with a beer brewing application, an automated process which involves several biochemical reactions that are comparable to other synthetic biology processes.

Demonstrating Dynamic Toolchains for Machine Control

Humans are increasingly able to work side-by-side with desktop-scale digital fabrication machines. However, much of the software for controlling these machines does not support live, interactive exploration of their capabilities. We present Dynamic Toolchains, an extensible development framework for building parametric machine control interfaces from reusable modules. Toolchains are built and run in a live environment, removing the repetitive import and export bottleneck between software programs. This enables humans to easily explore how they can use machine precision to manipulate physical materials and achieve unique aesthetic outcomes. In this demonstration, we build a toolchain for computer-controlled watercolor painting and show how it facilitates rapid iteration on brush stroke patterns.

Interactive 3D Zoetrope with a Strobing Flashlight

We propose a 3D printed zoetrope mounted on a bike wheel where viewers can watch 3D figures come to life in front of their eyes. Each frame of our animation is a 9 by 16 cm 3D-fabricated diorama containing a small scene. A strobed flashlight synced to the spinning of the wheel shows the viewer each frame at just the right time, creating the illusion of 3D motion. The viewer can hold and shine the flashlight into the scene, illuminating each frame from their own point of view. Our zoetrope is modular, and different 16-frame animations can be substituted in and out for fast prototyping of many cinematography, fabrication, and strobe-lighting techniques. Our interactive, truly 3D movie experience will push the zoetrope format to tell more complex stories and better engage viewers.
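
The strobe timing itself is simple arithmetic; as a worked example (values chosen for illustration), the flash interval is one wheel rotation divided by the number of frames:

```python
def strobe_interval_ms(rotations_per_minute, frames_per_rotation=16):
    """Time between flashes so each of the 16 dioramas is lit once per pass.

    One flash per frame per rotation freezes the motion: at 60 RPM with
    16 frames, the light must flash every 62.5 ms.
    """
    seconds_per_rotation = 60.0 / rotations_per_minute
    return 1000.0 * seconds_per_rotation / frames_per_rotation

print(strobe_interval_ms(60))  # 62.5
print(strobe_interval_ms(90))  # ~41.7
```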

Hands-On: Using Gestures to Control Descriptions of a Virtual Environment for People with Visual Impairments

Virtual reality (VR) uses three main senses to relay information: sight, sound, and touch. People with visual impairments (PVI) rely primarily on auditory and haptic feedback to receive information in VR. While researchers have explored several approaches to make navigation and perception of objects more accessible in VR, none of them offer a natural way to request descriptions of objects, nor control over the flow of auditory information. In this demonstration, we present a haptic glove that PVI can use to request object descriptions in VR through familiar hand gestures. We contribute designs for a set of hand gestures that allow PVI to interactively get descriptions of the VR environment. We plan to conduct a user study in which PVI interact with a VR environment using these gestures to request audio descriptions.

A bonding technique for electric circuit prototyping using conductive transfer foil and soldering iron

Several electric circuit prototyping techniques have been proposed. While most focus on creating the conductive traces, we focus on the bonding technique needed for this kind of circuit. Our technique extends existing work in that we use the traces themselves as the bonding material for the components. The bonding process is not soldering, but it yields joints of adequate connectivity. A hot soldering iron is used to activate the traces and bond the component to the circuit. Simple circuits are created on MDF, paper, and acrylic board to show the feasibility of the technique. We also confirm that the resistance of the contact points is sufficiently low.

CircuitAssist: Automatically Dispensing Electronic Components to Facilitate Circuit Building

When learning how to build circuits, one of the challenges novice makers face is identifying the components needed for the circuit. Many makerspaces are stocked with a variety of electronic components that look visually similar or have similar names. Thus, novice makers may pick the wrong component, which creates a non-functional circuit even though the wires are correctly connected. To address this issue, we present CircuitAssist, an actuated electronics component shelf connected to a tutorial system that dispenses electronic components for the maker in the order in which they occur in the tutorial. The shelf contains a dispenser for each component type, with a custom release mechanism actuated by a servo motor that dispenses one component at a time. Makers can work with CircuitAssist either by following one of the provided tutorials or by directly selecting a component they need from the user interface.

Using a Dual-Camera Smartphone to Recognize Imperceptible 2D Barcodes Embedded in Videos

Invisible screen-camera communication is promising in that it does not interfere with the video viewing experience. In the imperceptible color vibration method, which alternates two colors of the same luminance at high speed for each pixel, the embedded information is decoded by taking the difference between frames that are far apart on the time axis. Consequently, the inter-frame differences of the original video content affect decoding performance. In this study, we propose a decoding method that uses images captured simultaneously by a dual-camera smartphone with different exposure times. This allows the color difference to be taken between frames that are close to each other on the time axis. The feasibility of this approach is demonstrated through several application examples.
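
A rough sketch of the decoding idea (illustrative only, not the proposed implementation; the block size, channel choice, and threshold are assumptions): differencing the two near-simultaneous captures isolates the color vibration while largely canceling the underlying video content.

```python
import numpy as np

def decode_color_vibration(frame_a, frame_b, threshold=2.0):
    """Recover embedded bits from two simultaneously captured frames.

    frame_a, frame_b: HxWx3 arrays from the two rear cameras whose
    different exposure times sample opposite phases of the imperceptible
    color vibration. The sign of the per-block color difference encodes
    the bit; differencing near-simultaneous frames suppresses the
    original video's inter-frame changes.
    """
    diff = frame_a.astype(np.float32) - frame_b.astype(np.float32)
    # Average the (assumed) blue-channel difference over 8x8 pixel blocks.
    h, w = diff.shape[:2]
    bits = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = diff[y:y + 8, x:x + 8, 2].mean()
            if abs(block) >= threshold:
                bits.append(1 if block > 0 else 0)
    return bits
```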

SESSION: Doctoral Symposium

Towards Future Health and Well-being: Bridging Behavior Modeling and Intervention

With the advent of always-available, ubiquitous devices with powerful passive sensing and active interaction capabilities, the opportunities to integrate AI into this ecosystem have matured, providing an unprecedented opportunity to understand and support user well-being. A wide array of research has demonstrated the potential to detect risky behaviors and address health concerns by using human-centered ML to understand longitudinal, passive behavior logs. Unfortunately, it is difficult to translate these findings into deployable applications without better approaches to providing human-understandable explanations of the relationships between behavior features and predictions, and to generalizing to new users and new time periods. My past work has made significant headway in addressing modeling accuracy, interpretability, and robustness. My ultimate goal is to build deployable, intelligent interventions for health and well-being that make use of such ML-based behavior models. I believe that just-in-time interventions are particularly well suited to ML support. I plan to test the value of ML for providing users with a better, more interpretable, and more robust experience in supporting their well-being.

Environmental physical intelligence: Seamlessly deploying sensors and actuators to our everyday life

Weiser predicted that the third generation of computing would result in individuals interacting with many computing devices that ultimately “weave themselves into the fabric of everyday life until they are indistinguishable from it” [17]. However, how to achieve this seamlessness, and what associated interactions should be developed, are still under investigation. Meanwhile, the material composition, structures, and operating logic of the physical objects in everyday life determine how we interact with them [13]. The intelligence of the built environment does not rely only on the computational abilities encoded within a “brain” (such as the controllers of home appliances), but also on the physical intelligence encoded in its “body” (e.g., materials and mechanical structures). In my research, I work on creating computational materials with different encoded material properties (e.g., conductivity, transparency, water-solubility, and self-assembly) that can be seamlessly integrated into our living environment to enrich different modalities of information communication.

Designing Tools for Autodidactic Learning of Skills

In the last decade, HCI researchers have designed and engineered several systems to lower the entry barrier for beginners and support novices in learning hands-on creative skills, such as motor skills, fabrication, circuit prototyping, and design.

In my research, I contribute to this body of work by designing tools that enable learning by oneself, also known as autodidactism. My research lies at the intersection of system design, learning sciences, and technologies that support physical skill-learning. Through my research projects, I propose to re-imagine the design of systems for skill-learning through the lens of learner-centric theories and frameworks.

I present three sets of research projects: (1) adaptive learning of motor skills, (2) game-based learning for fabrication skills, and (3) reflection-based learning of maker skills. Through these projects, I demonstrate how we can leverage existing theories, frameworks, and approaches from the learning sciences to design autodidactic systems for skill-learning.

Extending Computational Abstractions with Manual Craft for Visual Art Tools

Programming and computation are powerful tools for manipulating visual forms, but working with these abstractions is challenging for artists who are accustomed to direct manipulation and manual control. The goal of my research is to develop visual art tools that extend programmatic capabilities with manual craft. I do so by exposing computational abstractions as transparent materials that artists may directly manipulate and observe in a process that accommodates their non-linear workflows. Specifically, I conduct empirical research to identify challenges professional artists face when using existing software tools—as well as programming their own—to make art. I apply principles derived from these findings in two projects: an interactive programming environment that links code, numerical information, and program state to artists’ ongoing artworks, and an algorithm that automates the rigging of character clothing to bodies to allow for more flexible and customizable 2D character illustrations. Evaluating these interactions, my research promotes authoring tools that support arbitrary execution by adapting to the existing workflows of artists.

Design and Fabricate Personal Health Sensing Devices

With the development of low-cost electronics and rapid prototyping techniques, as well as widely available mobile devices (e.g., mobile phones and smartwatches), users are able to develop their own basic interactive applications, either on top of existing device platforms or as stand-alone devices. However, the barrier to creating personal health sensing devices, in terms of both function prototyping and fabrication, is still high. In this paper, I present my work on designing and fabricating personal health sensing devices with rapid function prototyping techniques and novel sensing technologies. Through these projects and ongoing future research, I am working toward my vision that everyone can design and fabricate highly customized health sensing devices based on their body form and desired functions.

Exploiting and Guiding User Interaction in Interactive Machine Teaching

Humans have the ability to perform diverse interactions in the teaching process. However, when humans want to teach AI, existing interactive systems only allow them to perform repetitive labeling, resulting in an unsatisfactory teaching experience. My Ph.D. research studies Interactive Machine Teaching (IMT), an emerging field of HCI research that aims to enhance humans’ teaching experience in the AI creation process. My research builds IMT systems that exploit and guide user interaction and shows that such in-depth integration of human interaction can benefit both AI models and the user experience.

Empowering domain experts to author valid statistical analyses

Reliable statistical analyses are critical for making scientific discoveries, guiding policy, and informing decisions. To author reliable statistical analyses, integrating knowledge about the domain, data, statistics, and programming is necessary. However, this is an unrealistic expectation for many analysts who may possess domain expertise but lack statistical or programming expertise, including many researchers, policy makers, and other data scientists. How can our statistical software help these analysts? To address this need, we first probed into the cognitive and operational processes involved in authoring statistical analyses and developed the theory of hypothesis formalization. Authoring statistical analyses is a dual-search process that requires grappling with assumptions about conceptual relationships and iterating on statistical model implementations. This led to our key insight: statistical software needs to help analysts translate what they know about their domain and data into statistical modeling programs. To do so, statistical software must provide programming interfaces and interaction models that allow statistical non-experts to express their analysis goals accurately and reflect on their domain knowledge and data. Thus far, we have developed two such systems that embody this insight: Tea and Tisane. Ongoing work on rTisane explores new semantics for more accurately eliciting analysis intent and conceptual knowledge. Additionally, we are planning a summative evaluation of rTisane to assess our hypothesis that this new way of authoring statistical analyses makes domain experts more aware of their implicit assumptions, able to author and understand nuanced statistical models that answer their research questions, and avoid previous analysis mistakes.

Artistic User Expressions in AI-powered Creativity Support Tools

Novel AI algorithms introduce a new generation of AI-powered Creativity Support Tools (AI-CSTs). These tools can inspire and surprise users with algorithmic outputs that the users could not expect. However, users can struggle to align their intentions with unexpected algorithmic behaviors. My dissertation research studies how user expressions in art-making AI-CSTs need to be designed. With an interview study with 14 artists and a literature survey on 111 existing CSTs, I first isolate three requirements: 1) allow users to express under-constrained intentions, 2) enable the tool and the user to co-learn the user expressions and the algorithmic behaviors, and 3) allow easy and expressive iteration. Based on these requirements, I introduce two tools, 1) Artinter, which learns how the users express their visual art concepts within their communication process for art commissions, and 2) TaleBrush, which facilitates the under-constrained and iterative expression of user intents through sketching-based story generation. My research provides guidelines for designing user expression interactions for AI-CSTs while demonstrating how they can suggest new designs of AI-CSTs.

SESSION: Student Innovation Contest

UltraBat: An Interactive 3D Side-Scrolling Game using Ultrasound Levitation

We present UltraBat, an interactive 3D side-scrolling game inspired by Flappy Bird, in which the game character, a bat, is physically levitated in mid-air using ultrasound. Players aim to navigate the bat through a stalagmite tunnel that scrolls to one side as the bat travels, which is implemented using a pin-array display to create a shape-changing passage.

ShadowAstro: Levitating Constellation Silhouette for Spatial Exploration and Learning

We introduce ShadowAstro, a system that uses the shadows cast by levitating particles to produce constellation patterns. In contrast to the traditional approach of making astronomical observations via AR, planetariums, and computer screens, we use the shadow created by each levitated bead to construct the silhouette of a constellation, a natural geometric pattern that can be represented by a set of particles. In this proposal, we show that ShadowAstro can help users inspect the 12 constellations on the ecliptic plane, and we augment the user experience with a projector that serves as the light source. Through this, we draw a future vision in which ShadowAstro can serve as an interactive educational tool or an art installation in a museum. We believe that the concept of designing interactions between levitated objects and their cast shadows will provide a brand-new experience to end users.

Top-Levi: Multi-User Interactive System Using Acoustic Levitation

Top-Levi is a public multi-user interactive system that requires a pair of users to cooperate with an audience around them. Based on acoustic levitation technology, Top-Levi leverages a unique attribute of dynamic physical 3D contents displayed and animated in the air: such systems inherently provide different visual information to users depending on where they are around the device. In Top-Levi, there are two primary users on opposite (left/right) sides of the device, and audience members to the front. Each sees different instructions displayed on a floating cube. Their collective task is to cooperate to move the cube from a start point to a final destination by synchronizing their responses to the instructions.

Magic Drops: Food 3D Printing of Colored Liquid Balls by Ultrasound Levitation

We introduce the concept of “Magic Drops”, a process that uses ultrasound levitation to mix multiple liquid drops into a single drop in mid-air, move it to a specified position, and let it free-fall below. As a molecular gastronomy application, mixed drops containing sodium alginate solution are dropped into a container filled with calcium lactate solution. This forms drops encased in a calcium alginate film inside the container; these drops are edible, and the color and flavor of the mixture are controlled throughout the process. We will also demonstrate stacking these drops to create larger edible structures. Our novel mixture-drop control technology has other potential applications, such as painting techniques and drug development. Thus, we believe that this concept will become a new technology for mixing liquids in the future.

Shadow Play using Ultrasound Levitated Props

Shadow play is a traditional art form for communicating narratives. However, this tradition is at risk of being lost, and traditional methods limit its expressiveness. We propose a novel system that performs a shadow play by levitating props with an ultrasound speaker array. Our system computes the ideal positions of the levitating props to create the shadow of a desired image, and a shadow play is performed by displaying a sequence of images as shadows. Because the performance is automated, our work makes shadow play accessible to future generations. Our system also allows 6-DoF, floating movement of the props, which expands the limits of expression. Through this system, we aim to enhance shadow play both informationally and aesthetically.
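
The prop-placement geometry can be illustrated in a few lines (a simplified sketch, not the authors' solver): a prop placed anywhere on the ray from the light source to a target shadow point will cast its shadow there, with its distance along the ray controlling the shadow's size.

```python
def prop_position(light, shadow_point, t=0.5):
    """Point on the ray from the light source to the desired shadow location.

    light, shadow_point: 3D coordinates in metres. t in (0, 1) sets how
    far along the ray the prop sits; smaller t (closer to the light)
    yields a larger shadow.
    """
    return tuple(l + t * (s - l) for l, s in zip(light, shadow_point))

print(prop_position(light=(0.0, 0.0, 0.5), shadow_point=(0.2, 0.0, 0.0)))
# (0.1, 0.0, 0.25)
```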

UltraBots: Large-Area Mid-Air Haptics for VR with Robotically Actuated Ultrasound Transducers

We introduce UltraBots, a system that combines ultrasound haptic feedback and robotic actuation for large-area mid-air haptics in VR. Ultrasound haptics can provide precise mid-air haptic feedback and versatile shape rendering, but the interaction area is often limited by the small size of the ultrasound devices, restricting the possible interactions for VR. To address this problem, this paper introduces a novel approach that combines robotic actuation with ultrasound haptics. More specifically, we attach ultrasound transducer arrays to tabletop mobile robots or robotic arms for scalable, extendable, and translatable interaction areas. We plan to use Sony Toio robots for 2D translation and/or commercially available robotic arms for 3D translation. Using robotic actuation and hand tracking measured by a VR HMD (e.g., Oculus Quest), our system keeps the ultrasound transducers underneath the user’s hands to provide on-demand haptics. We demonstrate applications in workspace environments, medical training, education, and entertainment.
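
As a hedged sketch of the hand-following behavior (not the authors' controller; the gain and step limit are assumptions), each control tick can move the robot a bounded proportional step toward the tracked hand position on the table plane:

```python
def follow_hand(robot_xy, hand_xy, gain=0.4, max_step=0.02):
    """One control step that keeps the transducer array under the hand.

    robot_xy, hand_xy: (x, y) positions in metres on the table plane,
    e.g. from the HMD's hand tracking. Returns the robot's next target
    position, moving a bounded proportional step toward the hand.
    """
    dx = hand_xy[0] - robot_xy[0]
    dy = hand_xy[1] - robot_xy[1]
    step_x = max(-max_step, min(max_step, gain * dx))
    step_y = max(-max_step, min(max_step, gain * dy))
    return (robot_xy[0] + step_x, robot_xy[1] + step_y)

print(follow_hand((0.00, 0.00), (0.10, -0.05)))  # (0.02, -0.02)
```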

Garnish into Thin Air

We propose Garnish into Thin Air, a dynamic, three-dimensional food presentation using acoustic levitation. In contrast to traditional plating on dishes, we make the whole garnishing process an interactive experience that stimulates the user’s appetite by leveraging acoustic levitation’s capacity to decorate edibles dynamically in mid-air. To achieve Garnish into Thin Air, our system orchestrates a range of edible materials, such as flavored droplets, edible beads, and rice paper cutouts. We demonstrate Garnish into Thin Air with two examples: a cocktail named “The Floral Party” and a dessert called “The Winter Twig”.

LeviCircuits: Adhoc Electrical Circuit Prototyping using Ultrasound Levitation

Improving 3D-Editing Workflows via Acoustic Levitation

We outline how to improve common 3D-editing workflows, such as modeling or character animation, by using an acoustic levitation kit as an interactive 3D display. Our proposed system allows users to directly interact with models in 3D space and perform multi-point gestures to manipulate them. Editing of complex 3D objects can be enabled by combining the 3D display with an LCD, projector, or HMD to display additional context.

DAWBalloon: An Intuitive Musical Interface Using Floating Balloons

The development of music synthesis technology has created ways not only to enjoy listening to music but also to actively manipulate it. However, it is difficult for an amateur to operate a DAW (Digital Audio Workstation) to combine sounds or change pitches. We therefore focused on ultrasonic levitation and haptic feedback to develop an appropriate interface for a DAW. We propose "DAWBalloon", a system that uses ultrasonic levitation arrays to visualize rhythms with floating balloons as a metaphor for musical elements, and to combine sounds by manipulating the balloons. DAWBalloon enables intuitive manipulation of sounds in three dimensions, even for people without musical knowledge.