UIST '21: The Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology


SESSION: Poster Session

Towards Social Interaction between 1st and 2nd Person Perspectives on Bodily Play

Bodily play, a productive form of social interaction for strengthening social relationships, has positive impacts on self-efficacy, acute cognitive benefit, and emotion. However, most bodily play encourages players to enjoy only their own experiences, and there is limited research on sharing players' perspectives to enhance empathy for understanding others. Thus, we propose an asymmetric two-person game in an immersive environment. This bodily play, which supports perspective-taking by integrating first- and second-person perspectives, has a collaborative interface that allows users to share their physiological and emotional perspectives. Initial testing of the system shows that sharing perspectives and information not only helps players understand each other's feelings and problems but also increases their social closeness and stimulates empathy after the interplay.

Interactive Color Manipulation with Natural Language Modifiers

Bendable Color ePaper Displays for Novel Wearable Applications and Mobile Visualization

This paper presents a toolkit for easily prototyping with bendable color ePaper displays when designing and studying novel body-worn interfaces in mobile scenarios. We introduce a software and hardware platform that enables researchers for the first time to implement fully functional wearable and UbiComp applications with interactive, curved color pixel displays. Further, we provide a set of visual, sensory-rich materials for customization and mounting options. To technically validate our approach and demonstrate its promising potential, we implemented eight real-world applications ranging from personal information and mobile data visualizations to active notifications and media controls. Finally, we report on initial usage experiences and conclude with a research roadmap that outlines future applications and directions.

Assisting with Voluntary Pinch Force Control by Using Electrical Muscle Stimulation and Active Bio-Acoustic Sensing

We present a novel wearable system for assisting with voluntary pinch force control that requires no devices on the fingers. We use active bio-acoustic sensing, with piezo elements attached to the back of the hand, to estimate voluntary pinch force, and electrical muscle stimulation of the forearm to control involuntary pinch force in a closed-loop system. We conducted a user study with eight participants to investigate whether the system could assist with pinch force control under target forces of 3 N, 6 N, and 9 N. When participants tried to bring their pinch force closer to each target force with our system, the medians of the absolute errors were 0.8 N, 1.0 N, and 2.8 N, respectively; without our system, they were 0.9 N, 1.4 N, and 2.4 N.
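
The abstract does not give implementation details of the closed loop; a minimal sketch of the idea, with hypothetical read_pinch_force and set_ems_intensity helpers standing in for the sensing and stimulation hardware, and a simple proportional controller (the actual controller is not specified), might look like this:

```python
import time

def read_pinch_force() -> float:
    """Hypothetical: estimate pinch force (N) from the piezo signal."""
    return 0.0  # replace with the actual bio-acoustic sensing pipeline

def set_ems_intensity(level: float) -> None:
    """Hypothetical: drive the forearm EMS at a normalized level in [0, 1]."""
    pass  # replace with the actual stimulator interface

def assist_pinch(target_n: float, kp: float = 0.05, hz: float = 50.0) -> None:
    """Proportional closed-loop assistance toward a target pinch force."""
    level = 0.0
    while True:
        error = target_n - read_pinch_force()     # positive: grip too weak
        level = min(max(level + kp * error, 0.0), 1.0)
        set_ems_intensity(level)
        time.sleep(1.0 / hz)                      # fixed control rate

# assist_pinch(target_n=6.0)  # e.g., the 6 N condition from the study
```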

IntoTheVideos: Exploration of Dynamic 3D Space Reconstruction From Single Sports Videos

We present IntoTheVideos, a novel system that takes a sports video and reconstructs the 3D space (e.g., basketball court, football field) along with the highlighted event (e.g., making a basket, scoring a touchdown) to enable viewers to experience sports entertainment with more freedom. Users can move around in the reconstructed 3D space, explore the event freely from different angles and distances, and view the players in 3D. With this system, we hope to offer users a new way to enjoy sports entertainment beyond passively watching video footage, potentially leading to more active fan engagement and participation.

FaceMe: An Augmented Reality Social Agent Game for Facilitating Children's Learning about Emotional Expressions

Children with autism spectrum disorder (ASD) experience dysfunctional emotional development, which negatively affects their social communication. Although interventions are effective in helping children with ASD improve their social skills over time, they have been found to lack the essential ability to engage children in a real social environment. In this paper, we present “FaceMe,” an augmented reality (AR) system that uses a virtual agent and a set of tangible toolkits to teach children with ASD about six basic emotions and improve their emotional and communication skills. Pilot data suggest that children, especially those with ASD, were willing to socialize with the virtual agent and came to understand more emotional states. We hope that FaceMe can serve as a tool to assist children with ASD, as well as a guide for future interface designs that support emotional development in children.

A Tool for Monitoring and Controlling Standalone Immersive HCI Experiments

We present an open-source tool that provides commonly required functionality in HCI experimental research on immersive technologies. Using the tool, researchers can monitor and control a Unity scene from a controller panel running in the web browser. Researchers can also use the tool to track the user's status (e.g., see what the user sees) and trigger the execution of remote functions (e.g., alter the Unity scene, the objects appearing, etc.). Moreover, we show a use case that exemplifies how the tool supports our research on immersive multisensory interaction in a room-sized environment.

A Spatial Music Listening Experience in Augmented Reality

Live music provides an immersive and social experience that recorded music cannot replicate. In a live music setting, listeners perceive sounds differently based on their position with respect to the musicians and can enjoy the experience with others. To make recorded music a dynamic listening experience, we propose and implement an app that adds a spatial dimension to music using augmented reality, allowing users to listen to a song as if it were played live. The app lets users place virtual instruments around a physical space and plays the corresponding track for each instrument. Users can move around in the space and adjust the weighting of various sound localization aspects to customize their experience. Finally, users can record and livestream to share their listening experience with others.
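
The abstract does not specify the spatialization model; a common baseline is inverse-distance attenuation with equal-power panning. A rough numpy sketch (all names illustrative, mono input assumed):

```python
import numpy as np

def spatialize(track: np.ndarray, listener: np.ndarray, source: np.ndarray,
               ref_dist: float = 1.0) -> np.ndarray:
    """Apply inverse-distance gain and simple left/right panning to a mono track."""
    offset = source - listener
    dist = max(float(np.linalg.norm(offset)), ref_dist)
    gain = ref_dist / dist                         # inverse-distance attenuation
    # Map the horizontal offset to a pan position in [0, 1] (left..right).
    pan = 0.5 * (1.0 + float(np.clip(offset[0] / dist, -1.0, 1.0)))
    left = track * gain * np.sqrt(1.0 - pan)       # equal-power panning
    right = track * gain * np.sqrt(pan)
    return np.stack([left, right], axis=1)         # stereo (N, 2)
```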

Motion Improvisation: 3D Human Motion Synthesis with a Transformer

The synthesis of complicated and realistic human motion is a challenging problem and a significant task for the game, film, and animation industries. Many existing methods rely on complex and time-consuming keyframe-based pipelines that demand professional skills in animation software and motion capture hardware. Casual users, on the other hand, seek a playful experience that animates their favorite characters with a simple, easy-to-use tool. Recent work has explored building intuitive animation systems but suffers from an inability to generate complex and expressive motions. To tackle this limitation, we present a keyframe-driven animation synthesis algorithm that can produce complex human motions from a few input keyframes, allowing the user to control the keyframes at will. Inspired by the success of attention-based techniques in natural language processing, our method completes body motions in a sequence-to-sequence manner and captures motion dependencies both spatially and temporally. We evaluate our method qualitatively and quantitatively on the LaFAN1 dataset, demonstrating improved accuracy compared with state-of-the-art methods.
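
The paper's exact architecture is not given here; as a rough illustration of keyframe-conditioned sequence completion with attention, a minimal PyTorch sketch (all dimensions illustrative) could be:

```python
import torch
import torch.nn as nn

class MotionInfill(nn.Module):
    """Fill in poses between keyframes with a transformer (illustrative only)."""
    def __init__(self, pose_dim: int = 66, d_model: int = 256, max_len: int = 128):
        super().__init__()
        self.embed = nn.Linear(pose_dim + 1, d_model)   # pose + keyframe flag
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, pose_dim)

    def forward(self, poses: torch.Tensor, is_key: torch.Tensor) -> torch.Tensor:
        # poses: (B, T, pose_dim); is_key: (B, T, 1), 1 at keyframes, 0 elsewhere.
        x = torch.cat([poses * is_key, is_key], dim=-1)  # zero out unknown frames
        h = self.encoder(self.embed(x) + self.pos[:, : x.size(1)])
        return self.head(h)                              # predicted pose per frame
```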

Animating Various Characters using Arm Gestures in Virtual Reality Environment

In this study, we propose a method for efficiently animating various characters. The main concept is to build an animation-authoring system in a virtual reality (VR) environment and allow users to move anchors of a character model with their arms. With our system, users select two anchors, which are associated with two VR controllers. The users then directly specify the three-dimensional (3D) motions of the anchors by moving the controllers. To animate various characters with multiple anchors, users can repeat this specification process multiple times. To demonstrate the feasibility of our method, we show animations designed with our system, such as a walking fox, a walking spider, and a flapping hawk.

Multi-Window Web Browser with History Tree Visualization for Virtual Reality Environment

This study introduces a multi-window web browser system that visualizes all visited pages and their link structure in a virtual reality (VR) environment. Using VR space makes it possible to visualize many pages and their connections while maintaining readability. To evaluate the usefulness of our system, we conducted a user study comparing it to a conventional single-window browsing system and found that our system reduces the browsing operations and time required to compare multiple web pages.

2.5D Simulated Keyframe Animation in Blender

3D animation requires specialized skills and tends to limit creative expression in favor of physical feasibility, while 2D animation does the opposite. Another duality exists between simulated and keyframe animation: while simulations provide physical believability, keyframes give animators fine timing control. This project seeks to bridge the gap between these approaches, leveraging the expressiveness of 2D animation, the robustness of 3D environments and camera movement, the physical feasibility of simulation, and the control of keyframing. To this end, we present a 2.5D animation interface that takes 2D drawn keyframes and 3D context (object, environment, and camera movement) to generate simulated animations that adhere to the user-drawn keyframes.

TTTV (Taste the TV): Taste Presentation Display for “Licking the Screen” using a Rolling Transparent Sheet and a Mixture of Liquid Sprays

Reproducing the taste of food and beverages as a media technology is an emerging problem of commercial value and entertainment utility. A previously developed taste reproduction system uses electrolyte-containing gels controlled by electric current to reproduce taste through ion electrophoresis. In this study, we propose an alternative taste reproduction mechanism that sprays a mixture of liquids onto a rolling transparent sheet placed over a screen displaying an image of the food item whose taste is being reproduced.

Feeasy: An Interactive Crowd-Feedback System

Established user-centered evaluation methods for interactive systems are time-consuming and expensive. Crowd-feedback systems provide a quick and cheap way to collect large amounts of meaningful feedback from users. However, existing crowd-feedback systems for evaluating interactive systems lack interactivity and seamless integration into the developed artifact, which may harm user engagement as well as feedback quality and quantity. In this work, we present “Feeasy,” an interactive crowd-feedback system with five key design features motivated by a qualitative pilot study. Feeasy extends existing crowd-feedback systems for interactive designs by offering more interaction and combining the feedback panel and the interactive system in one user interface, providing users a more seamless way to contribute feedback.

A Language Acquisition Support System that Presents Differences and Distances from Model Speech

It is difficult for language learners to know whether they are speaking well and, if not, where and by how much their speech differs from that of native speakers. Therefore, we propose a novel language learning system that addresses these problems. The system uses self-supervised learning to determine whether the user's speech is good. It shows where the user's speech differs from the native speaker's by highlighting the corresponding places on the user's speech waveform, and it represents the learner's speech and the native speaker's speech as points on a two-dimensional plane to show the user how far apart they are. We expect learners' speech to gradually improve as they repeatedly modify it to eliminate these differences and close the distance.
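
The abstract leaves the representation unspecified; one plausible realization compares fixed-size self-supervised speech embeddings and projects them to 2D for display. A sketch, with the embedding model abstracted behind a hypothetical embed function:

```python
import numpy as np
from sklearn.decomposition import PCA

def embed(waveform: np.ndarray) -> np.ndarray:
    """Hypothetical: fixed-size embedding from a self-supervised speech model."""
    return np.zeros(768)  # replace with a real feature extractor

def speech_map(learner_wav, native_wav, past_attempts):
    """2D coordinates for display, plus the learner-native distance."""
    clips = [learner_wav, native_wav] + past_attempts
    X = np.stack([embed(w) for w in clips])
    coords = PCA(n_components=2).fit_transform(X)   # points shown to the user
    distance = float(np.linalg.norm(coords[0] - coords[1]))
    return coords, distance
```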

MathMap: Supporting Exploratory Problem Solving with Algebra

Tools that support problem-solving in mathematics tend to focus on reaching a solution directly. In practice, it is common to go down paths that do not obviously lead to the solution, and this part of the process should be reflected in the tools students use so they can better learn problem-solving strategies. MathMap is an application designed to help high-school students learn to solve algebraic problems by encouraging them to use multiple strategies, maintaining the history of previous attempts, and allowing them to meaningfully compare methods with other students.

Continuous Travel In Virtual Reality Using a 3D Portal

In virtual reality (VR), continuous movement often induces vection, which can cause cybersickness. To circumvent this, users are typically given the option to teleport instead. However, teleportation leads to less spatial awareness than continuous movement. One approach to combating cybersickness during continuous movement is to restrict the user's field of view (FOV), typically by occluding the user's peripheral vision with a vignette. We developed a continuous locomotion system that does not occlude the user's FOV; instead, it uses a 3D portal to display the continuous movement within a limited area of the user's FOV. We found that this system reduces nausea and disorientation compared to conventional continuous locomotion. However, it did not significantly increase spatial awareness compared to teleportation.

infOrigami: A Computer-aided Design Method for Introducing Traditional Perforated Boneless Lantern Craft to Everyday Interfaces

The traditional perforated boneless lantern craft is a three-dimensional (3D) lamp fabrication method in which paper sheets are folded without any skeleton, often accompanied by hole-like patterns. It has been applied extensively throughout history using various means and is favored for its low cost and environmental friendliness. However, the traditional craft requires a complicated manual production process and accumulated expertise, which limits its aesthetics and everyday use value. We propose a computer-aided design method for customizing a 3D lamp with decorative patterns from paper pieces. The key idea is to establish an automatic workflow for producing 3D objects with visual information similar to the lantern tradition but without excessive manual effort. Our user tests and results demonstrate the method's potential for DIY everyday interfaces: creating personalized aesthetics, enhancing the visualization of information, and promoting multi-person handicraft activity.

Flick Gesture Interaction in Augmented Reality: AR Carrom

Gestural input can make augmented reality applications feel natural and intuitive. While gestures used as simple input controls are common, gestures that interact directly with virtual objects are less so. These interaction gestures pose additional challenges, since they require the application to infer many properties of the hand in the camera's field of view, such as depth, occlusion, size, and motion. In this work, we propose a flick gesture mechanic that estimates force and direction from a baseline pinch gesture. We demonstrate the gesture through an example implementation of an AR version of the game Carrom, in which our flick mechanic dynamically interacts with the virtual Carrom striker.

Rope X: Assistance and Guidance on Jumping Rope Frequency, based on Real-time, Heart Rate Feedback During Exercise

Jumping rope is a very efficient aerobic exercise during which the exerciser can maintain an efficient heart rate range (roughly 140–160 bpm). At present, however, people cannot recognize in a timely and accurate manner whether they are in this ideal aerobic state, and there is no convenient way to show exercisers how to jump rope efficiently. We developed an early prototype, Rope X, an exercise device that communicates the exerciser's heart rate status through a dynamic interactive interface and offers guidance on using the jump rope correctly to improve the efficiency of the aerobic exercise.
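
As a concrete reference for the guidance logic, mapping a live heart rate to a jump-frequency suggestion is straightforward (a sketch; the thresholds come from the abstract, the wording is illustrative):

```python
TARGET_LOW, TARGET_HIGH = 140, 160  # efficient aerobic range (bpm)

def guidance(heart_rate: int) -> str:
    """Map the live heart rate to a jump-frequency suggestion."""
    if heart_rate < TARGET_LOW:
        return "Speed up: jump at a higher frequency."
    if heart_rate > TARGET_HIGH:
        return "Slow down: reduce your jumping frequency."
    return "Good pace: you are in the efficient aerobic range."
```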

Romadoro: Leveraging Nudge Techniques to Encourage Break-Taking

Excessive screen time has negative impacts on mental and physical well-being, and taking breaks is important for keeping creativity, interest, and productivity high. We developed Romadoro, a Chrome extension that uses the Pomodoro Technique and technology-mediated nudges to promote better break-taking practices. Nudges involve designing choices to predictably alter the behavior of users. To test the effectiveness of combining technology-mediated nudges with the Pomodoro Technique, we conducted a mixed-design user study with 36 participants. The findings indicate that nudge techniques have a significant impact on motivating users to take breaks. Our work points to avenues for designing time-management apps that could benefit users more than the classic Pomodoro approach.
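
For reference, the Pomodoro cycle the extension builds on is simple to state in code; a sketch with a hypothetical nudge helper standing in for the browser notification (interval lengths are the classic defaults, not necessarily Romadoro's):

```python
import time

WORK_MIN, SHORT_BREAK_MIN, LONG_BREAK_MIN = 25, 5, 15

def nudge(message: str) -> None:
    """Hypothetical: surface a gentle, dismissible prompt to the user."""
    print(message)  # replace with a browser notification

def pomodoro(cycles: int = 4) -> None:
    """Run work/break cycles, nudging (not forcing) the user at each boundary."""
    for i in range(1, cycles + 1):
        time.sleep(WORK_MIN * 60)
        rest = LONG_BREAK_MIN if i == cycles else SHORT_BREAK_MIN
        nudge(f"Pomodoro {i} done. Consider a {rest}-minute break.")
        time.sleep(rest * 60)
```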

4D Doodling: Free Creation of Shape-Changing Decoration with A 3D Printing Pen

4D printing of thermoplastics has become a common method for fabricating shape-changing interfaces; however, the fully automated 3D printing process also limits users' creative participation in fabrication. We present 4D Doodling, a method that allows users to create deformable daily items freestyle using a 3D printing pen. In our doodling experiments, we summarized the experience of manual printing and provide a practical method to support users' creation, which can bring a human touch to 4D printing technology.

3DP-Ori: Bridging-Printing Based Origami Fabrication Method with Modifiable Haptic properties

Origami is the art of making 2D/3D shapes by folding paper, and HCI researchers have leveraged novel structural designs and material techniques to create a wide range of origami-based shape-changing interfaces. Meanwhile, additive manufacturing, which can easily fabricate complicated artifacts and augment haptic sensations, has drawn growing attention in the field. This paper presents 3DP-Ori, a fabrication method for flexible origami constructions using conventional FDM printers. To create an end-to-end pipeline, we developed a parametric design tool with dynamic folding simulation and feasibility estimation, enabling our software to output a printable semi-folded origami model for easy subsequent manual deformation. By adopting and optimizing bridging-based printing, we apply multiple geometric patterns with adjustable flexibility to creases, which in turn affect the haptic properties of 3DP-Ori objects. We believe 3DP-Ori extends the design space of 3D printing beyond typically hard and fixed forms, and it will help designers and researchers fabricate interactive objects with various physical properties.

ViTT: Towards a Virtual Reality System that Supports the Learning of Ergonomic Patient Transfers

While patient transfers are part of nurses' daily work, the manual transfer of patients can pose a major risk to nurses' health. The Kinaesthetics care conception may help nurses conduct patient transfers more ergonomically, but existing support for learning the concept is limited. We introduce ViTT, a virtual reality system to promote individual, self-directed learning of ergonomic patient transfers based on the Kinaesthetics care conception. The current implementation of ViTT supports a nurse in two phases: (i) instructions for a patient transfer, and (ii) training of the transfer with a virtual patient (based on a physics engine; implementation limited). In contrast to previous work, our approach provides an immersive experience that may allow for the ‘safe’ training of different transfer scenarios—e.g., patients with different impairments—and the study of different parameters that may influence nurses' learning experience—e.g., the simulation of stress—in the future.

ScenThread: Weaving Smell into Textiles

In this paper, we propose ScenThread, a controllable thread-like olfactory display that can be woven into textiles, enabling localized scent release on a flexible surface. ScenThread comprises a tubular PTFE scent reservoir, a lightweight, non-vibrating piezoelectric pumping system for delivering the scents, permeable silicone tubing to diffuse the aroma, and a heating module using conductive yarns to accelerate scent release. ScenThread can be refilled and reused multiple times. We describe the mechanical design of ScenThread, the pattern design of the heating module, and the control of smell intensity and multi-scent release. We conducted a preliminary study testing scent-release performance in terms of maximum smell distance and duration. We envision that ScenThread could be integrated into a wide range of textile-based items in daily life naturally, seamlessly, and artistically.

SESSION: Demo Session

ShiftTouch: Sheet-type Interface Extending Capacitive Touch Inputs with Minimal Screen Occlusion

We present ShiftTouch, a sheet-type passive interface that provides multiple inputs for capacitive touch surfaces with minimal screen occlusion. It consists of a conductive layer and a masking layer that partially insulates the conductive one. ShiftTouch uses multiple linear electrodes to control the fine touch position. The touch input is triggered under the electrodes when several adjacent electrodes are grounded simultaneously. Although each input area shares some electrodes with neighboring input areas, the touch surface can identify the inputs from each input area by detecting the slight displacement of the touch position. Our interface is simple yet effective in implementing multiple input areas while reducing screen occlusion compared to existing approaches using finger-sized electrodes.

Towards a Generalized Acoustic Minimap for Visually Impaired Gamers

Video games created for visually impaired players (VIPs) remain far from equivalent to those created for sighted players. Sighted players use minimaps within games to learn how their surrounding environment is laid out, but there is no effective analogue to the minimap for visually impaired players. A major accessibility challenge is to create a generalized, acoustic (non-visual) version of the minimap for VIPs. To address this challenge, we develop and investigate four acoustic minimap techniques that represent a breadth of ideas for how an acoustic minimap might work: a companion smartphone app, echolocation, a directional scanner, and a simple menu. Each technique is designed to communicate information about the area around the player within a game world, providing functionality analogous to a visual minimap but in acoustic form. We close by describing a user study we are conducting with these techniques to investigate the factors that matter in the design of acoustic minimap tools.

Eloquent: Improving Text Editing for Mobile

We present Eloquent, an exploration of new interaction techniques for text editing on mobile. Our prototype combines techniques for targeting, selection, and command menus. We demonstrate preliminary results in the form of a text editor, whose design was informed by feedback from users of existing systems. With Eloquent, selecting and acting on text can be done with a single gesture for both novice and expert users.

Demonstration of FabHydro: 3D Printing Techniques for Interactive Hydraulic Devices with an Affordable SLA 3D Printer

In this demonstration, we showcase FabHydro [6], a set of rapid and low-cost techniques for prototyping interactive hydraulic devices using off-the-shelf SLA 3D printers and flexible photosensitive resin. We introduce two printing processes to seal the transmission fluid: the Submerged Printing process seals liquid resin inside printed objects without assembly, while the Printing with Plug method allows a variety of transmission fluids to be used with no modification to the printer. We show how a range of 3D printable primitives, including hydraulic generators, transmitters, and actuators, can be printed and combined to create interactive examples.

ViObject: A Smartwatch-based Object Recognition System via Vibrations

Today, in order to start an interaction with most digital technology, we must perform some action to signal our intention, such as shaking a computer's mouse to wake it or pressing a coffee maker's start button for a morning cup of coffee. Our system aims to remove these currently necessary “trigger actions” and to support an array of applications that create borderless, fluid interactions between the technological world and our own. It also has potential applications in accessible technology. The system we propose identifies held objects via a smartwatch's accelerometer and gyroscope sensors. A preview demo video is available at https://youtu.be/1YCTzvjcJ18.

EIT-kit Demo: An Electrical Impedance Tomography Toolkit for Health and Motion Sensing

In this demo, we present EIT-kit, an electrical impedance tomography toolkit for designing and fabricating health and motion sensing devices. EIT-kit contains (1) an extension to a 3D editor for personalizing the form factor of the electrode arrays and the electrode distribution, (2) a customized EIT sensing motherboard with which users can perform measurements, (3) a microcontroller library that automates electrical impedance measurements, and (4) an image reconstruction library for mobile devices that interpolates and then visualizes the measured data. Together, these support applications that require 2- or 4-terminal setups, up to 64 electrodes, and single or multiple (up to four) simultaneous electrode arrays.

We demonstrate each element of EIT-kit, as well as the design space that EIT-kit enables by showing various applications in health sensing as well as motion sensing and control.

VoLearn: An Operable Motor Learning System with Auditory Feedback

Previous motor learning systems rely on vision-based workflows for both the feed-forward and feedback processes, which limits their application requirements and scenarios. In this demo, we present a novel cross-modal motor learning system named VoLearn. Novices can interact with a desired motion through a virtual 3D interface and obtain audio feedback on a personal smartphone. The system's interactivity and accessibility support a wider range of applications and reduce limitations on where it can be applied.

Pronunciation Training through Articulation Motion Sensing

Vowels are considered the essence of the syllable, which governs the articulation of each word uttered. However, articulation sensing has not been adequately explored; the challenge is that the speech signal alone contains insufficient information for articulation analysis, and it is difficult for users to improve their pronunciation from scoring feedback alone. We propose a new method that simultaneously uses two different acoustic signals (speech and ultrasonic) to recognize lip shape and tongue position. The system gives articulation feedback to the user, identifying the articulation of monophthongs in multiple languages. The proposed technique is implemented on an off-the-shelf smartphone to be more accessible.

Fabricating Wooden Circuit Boards by Laser Beam Machining

Laser cutting machines are commonly used in wood processing to cut and engrave wood. In this paper, we propose a method and workflow for producing various sensors and electrical circuits by partially carbonizing the wood surface with a laser cutting machine. Similar to the wiring on a conventional printed circuit board (PCB), the carbonized part functions as a conductive electrical path. Several methods for creating small-scale graphene using a raster-scanning laser beam have been proposed; however, raster scanning requires a substantial amount of time to create a large carbon circuit. This paper extends the method with a defocused vector-scanning CW laser beam, reducing the time and cost of fabrication. The proposed method uses an affordable CW laser cutter to fabricate electrical circuits, including touch sensors, damage sensors, and load sensors, on wood boards. The circuit can easily be connected to a common one-board microcontroller using the metal screws and nails typically used in DIY woodworking.

Enhancing Model Assessment in Vision-based Interactive Machine Teaching through Real-time Saliency Map Visualization

Interactive machine teaching systems allow users to create customized machine learning models through an iterative process of user-guided training and model assessment. They primarily offer confidence scores for each label or class as feedback for user assessment. However, we observe that such feedback does not necessarily suffice for users to confirm the model's behavior. In particular, confidence scores do not always convey which features in the data are used for learning, potentially leading to an incorrectly trained model. In this demonstration paper, we present a vision-based interactive machine teaching interface with real-time saliency map visualization in the assessment phase. This visualization shows which regions of each image frame the current model uses for classification, better guiding users to correct the corresponding concepts during iterative teaching.
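
The abstract does not name the saliency method; one real-time-friendly choice is plain input-gradient saliency (Simonyan et al.). A minimal PyTorch sketch of that technique, not necessarily the demo's implementation:

```python
import torch

def saliency_map(model: torch.nn.Module, frame: torch.Tensor, label: int) -> torch.Tensor:
    """Gradient of the class score w.r.t. the input, as a per-pixel heatmap.

    frame: (1, C, H, W) image tensor; label: index of the class to explain.
    """
    model.eval()
    frame = frame.clone().requires_grad_(True)
    score = model(frame)[0, label]      # scalar class score
    score.backward()
    # Max of |gradient| over color channels gives one heat value per pixel.
    return frame.grad.abs().max(dim=1)[0].squeeze(0)  # (H, W)
```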

HMK: Head-Mounted-Keyboard for Text Input in Virtual or Augmented Reality

Text input is essential in a variety of virtual and augmented reality (VR and AR) uses. We present HMK, an effective text input method that mounts split keyboards on the left and right sides of the head-mounted display (HMD). Users who can touch-type can type with HMK by relying on their familiarity with the normal QWERTY keyboard. We developed custom keycaps to make it easier to find the home position. A study with three participants shows that users retain most of their normal keyboard typing skills: participants achieved, on average, 34.7 words per minute (WPM) by the end of three days of use, retaining 81 percent of their regular entry speed.

OpenNFCSense: Open-Source Library for NFCSense

OpenNFCSense is an open-source library for NFCSense, a system for detecting the movements of near-field communication (NFC) tags using a commodity, low-cost RC522 NFC reader. With a user-defined tag profile, users can call its application programming interface (API) to obtain a tagged object's motion speed, motion frequency, and motion type while recognizing the tagged object. Since NFC tags can straightforwardly augment existing physical objects, such as modular building blocks and mechanical toys, OpenNFCSense offers designers an easy and safe method for rapidly prototyping tangible user interfaces.

Programmable Polarities: Actuating Interactive Prototypes with Programmable Electromagnets

This demo introduces a framework that uses programmable electromagnets to rapidly prototype interactive objects. Our approach allows users to quickly and inexpensively embed actuation mechanisms into otherwise static prototypes to make them dynamic and interactive. Underpinning the technique is the insight that electromagnets can interchangeably create attractive and repulsive forces between adjacent parts; programmatically setting their polarities allows objects to rotate and translate, respond haptically, assemble, and locomote.

Demonstration of GestuRING, a Web Tool for Ring Gesture Input

We present use cases for GestuRING, our web-based tool providing access to 579 gesture-to-function mappings, companion YouTube videos, and numerical gesture recordings for input with smart rings. We illustrate how practitioners can employ GestuRING for the design of gesture sets for ring-based UIs by discussing two examples: (1) enhancing a smart ring application for users with motor impairments with new functions and corresponding gesture commands, and (2) identifying a gesture set for cross-device watch-ring input.

Single-sided Multi-layer Electric Circuit by Hot Stamping with 3D Printer

The spread of personal computer-aided fabrication has made it possible for individuals to create things that meet their personal needs. However, it is still not easy for an individual to prototype electronic circuits, and several methods have been proposed for doing so quickly. Our solution combines a 3D printer with transfer foil. This paper extends our original circuit printing method to fabricate single-sided multi-layer boards, showing that a 3D printer and two types of transfer foil can create single-sided multi-layer circuits.

Deep Augmented Performers: A New Ensemble Performance System by Fusion of Melody Morphing and Body Movements

The fusion of music information processing and human physical functions can enable new musical experiences. We developed an interactive music system called “Deep Augmented Performers,” which gives users the experience of conducting a musical performance. The system converts music arranged through melody morphing into electrical muscle stimulation (EMS) to control the body movements of multiple performers. The melodies used in the system are divided into segments, and each segment has multiple melodic variations. The user can interactively control the performers, and thus the actual performance.

Demonstrating HapticBots: Distributed Encountered-type Haptics for VR with Multiple Shape-changing Mobile Robots

HapticBots introduces a novel encountered-type haptic approach for Virtual Reality (VR) based on multiple tabletop-size shape-changing robots. These robots move on a tabletop and change their height and orientation to haptically render various surfaces and objects on-demand. Compared to previous encountered-type haptic approaches like shape displays or robotic arms, our proposed approach has an advantage in deployability, scalability, and generalizability—these robots can be easily deployed due to their compact form factor. They can support multiple concurrent touch points in a large area thanks to the distributed nature of the robots. We propose and evaluate a novel set of interactions enabled by these robots which include: 1) rendering haptics for VR objects by providing just-in-time touch-points on the user’s hand, 2) simulating continuous surfaces with the concurrent height and position change, and 3) enabling the user to pick up and move VR objects through graspable proxy objects. Finally, we demonstrate HapticBots with various applications, including remote collaboration, education and training, design and 3D modeling, and gaming and entertainment.

ProbMap: Automatically constructing design galleries through feature extraction and semantic clustering

Making sense of large unstructured problem spaces is cognitively demanding. Structure can help, but adding structure to a problem space also takes significant effort. ProbMap is a novel application for automatically constructing a design gallery from unstructured text input. Given a list of problem statements, ProbMap extracts and semantically groups the stakeholders to construct hierarchical search facets, enabling designers to navigate the problem statements more efficiently. We contribute a novel feature extraction algorithm based on natural language processing and a technique for automatically constructing a design gallery. Stakeholders are grouped semantically by clustering those with higher pairwise similarity together. Preliminary trials show that these techniques, which mirror traditional design activities such as stakeholder identification and affinity mapping, provide an initial structure to a large unstructured problem space, yielding features similar to those humans would extract along with sensible clusters.
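
The abstract outlines the pipeline without implementation details; one plausible realization extracts noun phrases as stakeholder candidates and clusters them by textual similarity. A sketch assuming spaCy and scikit-learn (the actual algorithm may differ):

```python
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

nlp = spacy.load("en_core_web_sm")

def extract_stakeholders(problems: list[str]) -> list[str]:
    """Rough proxy: treat noun chunks in problem statements as stakeholders."""
    return [chunk.text.lower() for p in problems for chunk in nlp(p).noun_chunks]

def cluster_stakeholders(stakeholders: list[str], n_groups: int = 5):
    """Group stakeholders by pairwise textual similarity (facet candidates)."""
    X = TfidfVectorizer().fit_transform(stakeholders).toarray()
    labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(X)
    groups: dict[int, list[str]] = {}
    for stakeholder, label in zip(stakeholders, labels):
        groups.setdefault(label, []).append(stakeholder)
    return groups
```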

Designing Adaptive Tools for Motor Skill Training

We demonstrate the design of adaptive tools for motor skill training that use shape change to automatically vary task difficulty based on a learner's performance. Studies [1] have shown that automatically adaptive tools lead to significantly higher learning gains than non-adaptive and manually adaptive tools. We demonstrate the use of Adapt2Learn [2], a toolkit that supports designers in building adaptive training tools. Adapt2Learn auto-generates an algorithm that converts a learner's performance data into adaptation states during motor skill training. This algorithm, which maintains the training difficulty at the ‘optimal challenge point’, can be uploaded to a microcontroller to convert shape-changing tools into adaptive training tools. We demonstrate seven prototypes of adaptive tools for motor-skill learning, with applications in sports, music, rehabilitation, and accessibility.

Post-plant3: The Third Type of a Series of Non-humanoid Robots with an Embedded Physical Interaction: The development of non-verbal human-robot interaction framework and interactive robot prototypes

Post-plant is a series of plant-like robots that communicate nonverbally through physical movements. Most social robots look like humans or animals, communicating with us by mimicking human speech and gestures. Inspired by plants, post-plant responds to touch instead of language. With post-plant as a starting point, we envision future robots communicating with us in their own way, without having to mimic human behaviors. Post-plant 3 is the third robot in the post-plant series.

UPLIGHT: A Novel Portable Game Device with Omnidirectional Projection Display

We hypothesized that the act of actively moving one’s body to see the hidden parts of a sphere, cube, or any other structure with height and sides would be entertaining. In this paper, we propose a novel portable game device with omnidirectional display called “UPLIGHT,” which was created by combining the element of entertainment with the play style of a portable game device. We also describe the design of a prototype and a playable game application that we developed to achieve this interaction.

GenLine and GenForm: Two Tools for Interacting with Generative Language Models in a Code Editor

A large, generative language model’s output can be influenced through well-designed prompts, or text-based inputs that establish textual patterns that the model replicates in its output [6]. These capabilities create new opportunities for novel interactions with large, generative language models. We present a macro system with two tools that allow users to invoke language model prompts as macros in a code editor. GenLine allows users to execute macros inline as they write code in the editor (e.g., “Make an OK button” produces the equivalent HTML). GenForm provides a form-like interface where the user provides input that is then transformed into multiple pieces of output at the same time (e.g., a description of web code is transformed into HTML, CSS, and JavaScript).
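
The prompt formats are the tools' contribution and are not reproduced here; the underlying pattern, assembling a few-shot prompt from a macro invocation and sending it to a model, can be sketched generically (the complete function is a hypothetical stand-in, not GenLine's API):

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large generative language model."""
    return ""  # replace with a real model call

# One worked example establishes the textual pattern the model should replicate.
FEW_SHOT = (
    "command: Make a Cancel button\n"
    "html: <button>Cancel</button>\n\n"
)

def run_macro(command: str) -> str:
    """Expand an inline macro like 'Make an OK button' into code."""
    prompt = FEW_SHOT + f"command: {command}\nhtml:"
    return complete(prompt).strip()

# e.g., run_macro("Make an OK button") might yield '<button>OK</button>'
```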

SESSION: Doctoral Symposium

Get the GIST: An Interactive Toolkit to Support Generative Design through Metaheuristic Optimization

Generative design tools afford designers new ways of creating complex objects from 3D models to machine knittable textures. However, implementing the optimization algorithms that make generative design possible requires the rare combination of programming and domain expertise. I present the Generative Interactive Synthesis Toolkit (GIST) to simplify the implementation of generative design tools in novel domains. GIST factors common optimization algorithms into elements of an extensible library and structures optimization tasks around objectives and tactics specified by domain experts in a simple GUI. This moves the burden of domain expertise from programmers to domain experts. I demonstrate GIST in three unique domains: machine knitting, cookie recipes, and tactile maps for blind users. These show the versatility of GIST’s structure.
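
GIST's internals are not detailed in the abstract; the factoring it describes, expert-specified objectives and tactics plugged into a shared search loop, can be sketched as a simulated-annealing-style optimizer (all names illustrative):

```python
import math
import random
from typing import Callable, TypeVar

T = TypeVar("T")

def optimize(initial: T,
             objective: Callable[[T], float],     # domain expert's "objective"
             tactics: list[Callable[[T], T]],     # domain expert's "tactics"
             steps: int = 10_000, temp: float = 1.0) -> T:
    """Generic metaheuristic loop: tactics propose edits, the objective scores them."""
    best, best_score = initial, objective(initial)
    current, score = best, best_score
    for step in range(steps):
        candidate = random.choice(tactics)(current)   # apply a random edit
        cand_score = objective(candidate)
        t = temp * (1 - step / steps)                 # linear cooling schedule
        # Accept improvements always, worse moves with annealing probability.
        if cand_score >= score or random.random() < math.exp((cand_score - score) / max(t, 1e-9)):
            current, score = candidate, cand_score
            if score > best_score:
                best, best_score = current, score
    return best
```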

From Illusions to Beyond-Real Interactions in Virtual Reality

Despite recent advances in technology, current virtual reality (VR) experiences have many limitations. When designing VR interactions, we can leverage the unique affordances of this virtual medium and our ability to programmatically control the renderings to not only overcome these limitations, but also to create new interactions that go beyond the replication of the real world. In my dissertation, I seek to answer the following research questions: How can we utilize the unique affordances that VR offers to overcome the current limitations of this technology? How can we go even further and design mixed reality interactions that leverage these affordances to extend our experiences in the real world? In my work, I approach movement-based VR interactions from a sensorimotor control perspective, carefully considering the plasticity and limits of human perception. To answer the first research question, I explore various visuo-haptic illusions to overcome the limitations of existing haptic devices. In my ongoing work, I am building tools that help researchers and practitioners design and evaluate novel and usable mixed reality interactions that have no real-world counterparts.

Systems to Democratize and Standardize Access to Web APIs

Today, many web sites offer third-party access to their data through web APIs. However, manually encoding URLs with arbitrary endpoints, parameters, authentication handshakes, and pagination, among other things, makes API use challenging and laborious for programmers, and untenable for novices. In addition, each API offers its own idiosyncratic data model, properties, and methods that a new user must learn, even when the sites manage the same common types of information as many others. In my research, I show how working with web APIs can be dramatically simplified by describing the APIs using a standardized, machine-readable ontology, and building systems that democratize and standardize access to these APIs. Specifically, I focus on: 1) systems to enable users to query and retrieve data through APIs without programming and 2) systems that standardize access to APIs and simplify the work for users—even non-programmers—to create interactive web applications that operate on data accessible through arbitrary APIs.

Human-Scale Personal Fabrication

Building large structures from small elements, creating life-size animated creatures, or making contraptions that we can ride on have almost certainly been everyone's childhood dreams. However, researchers and practitioners of personal fabrication have mainly focused on creating objects that fit into a human palm, also called “hand-size” objects. This is not only because of the size limitations of consumer-grade fabrication machinery but also because of the very long printing times and high material costs of large-scale prototypes. To overcome these limitations, I combine 3D-printed hubs with ready-made objects, such as plastic bottles, or weld steel rods into a type of node-link structure called a “truss”. The real challenge behind my work, however, is not just achieving the size, but ensuring that the resulting large structures withstand forces orders of magnitude larger than those on their hand-sized counterparts. Designing such structures requires substantial engineering know-how that users of personal fabrication equipment, such as makers, typically do not possess. By providing the missing engineering know-how, my three end-to-end software systems, TrussFab, TrussFormer, and Trusscillator, enable non-experts to build such human-scale static, kinetic, and human-powered dynamic devices, such as pavilions, large-scale animatronic devices, and playground equipment. These systems achieve this by allowing users to focus on high-level design aspects, such as shape, animation, or riding experience, while abstracting away the underlying technicalities of forces, inertia, and eigenfrequencies. To help users build the designs, the software generates the connector pieces and assembly instructions. With this body of work, I aim to democratize engineering, enabling individuals to design and fabricate large-scale objects and mechanisms that involve human-scale forces.

Enabling Ubiquitous Personal Fabrication by Deconstructing Established Notions of Artifact Modeling

With the notion of personal fabrication, users are handed industry-level processes to design and manufacture arbitrary physical artifacts. While personal fabrication is a powerful opportunity, it is currently employed mostly by hobbyists and enthusiasts. Consumers, who account for a majority of the population, still acquire physical artifacts through workflows like shopping. The core of my thesis focuses on partially or fully omitting steps of modeling by relying on outsourced design effort, remixing, and low-effort interactions. Through such deliberate omission of workflow steps, the required effort can be reduced: instead of starting “from scratch”, users may remix existing designs, tune parametric designs, or merely retrieve their desired artifacts. This moves processes in personal fabrication toward shopping-like interactions and away from complex but powerful industrial CAD (computer-aided design) systems. Instead of relegating design processes to a disconnected workstation, users may conduct search, remix, and preview procedures in situ, at the future artifact's location of use, which may simplify the transfer of requirements from the physical environment. Low-effort, high-expressivity fabrication workflows may not be easy to achieve, but they are crucial for the widespread dissemination of personal fabrication. The broader vision behind my focus on “ubiquitous personal fabrication” is one where any person can create highly personalized artifacts that suit their unique aesthetic and functional needs, without having to define and model every single detail of the artifact.

Harnessing Disagreement to Create AI-Powered Systems That Reflect Our Values

How do we build artificial intelligence systems that reflect our values? Competing potential values we may want to choose from are, at their core, made up of disagreements between individual people. But while the raw datasets that most ML systems rely on are made up of individuals, today’s approaches to building AI typically abstract the individuals out of the pipeline. My thesis contributes a set of algorithms and interactive systems that re-imagine the pipeline for designing and evaluating AI systems, requiring that we deal with competing values in an informed and intentional way. I start from the insight that at each stage of the pipeline, we need to treat individuals as the key unit of operation, rather than the abstractions or aggregated pseudo-humans in use by today’s approaches. I instantiate this insight to address two problems that arise from today’s AI pipeline. First, evaluation metrics produce actively misleading scores about the extent to which some people’s values are being reflected. I introduce a mathematical transformation that more closely aligns metrics with the values and methods of user-facing performance measures. Second, the resulting AI systems either surreptitiously choose which values to listen to without input from users, or simply present several different outcomes to users without mechanisms to help them select an outcome grounded in their values. I propose a new interaction paradigm for deploying classifiers, asking users to compose a jury consisting of the individual people they’d like their classifiers’ decisions to come from.
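
The jury paradigm described in the last sentence can be sketched minimally: one model per selected individual, combined by majority vote (names illustrative; the actual mechanism may be richer):

```python
from collections import Counter
from typing import Callable, Sequence, TypeVar

X = TypeVar("X")

def jury_predict(jury: Sequence[Callable[[X], int]], example: X) -> int:
    """Predict with the jurors the user selected, not a population aggregate.

    Each juror stands in for a model trained on one individual's labels, so the
    decision traceably reflects those individuals' values.
    """
    votes = Counter(juror(example) for juror in jury)
    return votes.most_common(1)[0][0]   # majority label (ties broken arbitrarily)
```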

Visual Design Reuse Through Style Recognition and Transfer

This work aims to transfer design attributes and styles within and across visual documents such as slides, graphic designs, and non-photorealistic renderings. Consistent style across elements is a hallmark of good graphic design, and many visual stylistic design patterns exist throughout visualizations, presentations, and interactive media experiences (games, visual novels). These patterns often appear in visual style guides, brand guides, and concept art. However, except for structured document layouts (e.g., HTML/CSS), design tools often do not enforce consistent style decisions, so consistency must be maintained manually. Synchronizing style guides and designs requires significant effort, discouraging exploration and the mixing of new ideas. This work introduces algorithms that recognize implicit patterns and structures in visual documents, along with interfaces that let designers operate on these patterns, specifically to view and apply design changes flexibly across pattern instances. The key benefits of visual redesign through implicit patterns are: (1) removing any dependence on upfront formal style declarations, (2) enabling extraction and distribution of implicit visual patterns, and (3) facilitating the exploration of novel visual design concepts through the mixing of styles.

Taking Digital Product Design Coordination to the Next Level by Elastic Zooming and Linecept