UIST '18 Adjunct: The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings


SESSION: Poster Session

Pushables: A DIY Approach for Fabricating Customizable and Self-Contained Tactile Membrane Dome Switches

Momentary switches are important building blocks for prototyping novel physical user interfaces and enable tactile, explicit and eyes-free interactions. Unfortunately, typical representatives, such as push-buttons or pre-manufactured membrane switches, often do not fulfill individual design requirements and lack customization options for rapid prototyping. With this work, we present Pushables, a DIY fabrication approach for producing thin, bendable and highly customizable membrane dome switches. To this end, we contribute a three-stage fabrication pipeline that describes production and assembly using prototyping methods at different skill levels, making our approach suitable for technology-enthusiastic makers, researchers, fab labs and others who require custom membrane switches in small quantities. To demonstrate the wide applicability of Pushables, we present application examples from ubiquitous, mobile and wearable computing.

Mindgame: Mediating People's EEG Alpha Band Power through Reinforcement Learning

This paper presents Mindgame, a neurofeedback mindfulness system optimized with reinforcement learning. To avoid the potential bias and difficulty of hand-designing a mapping between neural signals and output, we adopt a trial-and-error learning method to explore the preferred mapping. In a pilot study we assessed the effectiveness of Mindgame in mediating people's EEG alpha band power. All participants' alpha band power changed in the desired direction.

SweatSponse: Closing the Loop on Notification Delivery Using Skin Conductance Responses

Today's smartphone notification systems are incapable of determining whether a notification has been successfully perceived without explicit interaction from the user. When the system incorrectly assumes that a notification has not been perceived, it may repeat it redundantly, disrupting the user (e.g., phone ringing). Or, when it assumes that a notification was perceived, and therefore fails to repeat it, the notification will be missed altogether (e.g., text message). We introduce SweatSponse, a feedback loop using skin conductance responses (SCR) to infer the perception of smartphone notifications just after their presentation. Early results from a laboratory study suggest that notifications induce SCRs and that these responses could be used to better infer perception of smartphone notifications in real-time.

SurfaceStreams: A Content-Agnostic Streaming Toolkit for Interactive Surfaces

We present SurfaceStreams, an open-source toolkit for recording and sharing visual content among multiple heterogeneous display-camera systems. SurfaceStreams clients support on-the-fly background removal and rectification on a range of different capture devices (Kinect & RealSense depth cameras, SUR40 sensor, plain webcam). After preprocessing, the raw data is compressed and sent to the SurfaceStreams server, which can dynamically receive streams from multiple clients, overlay them using the removed background as mask, and deliver the merged result back to the clients for display. We discuss an exemplary usage scenario (3-way shared interactive tabletop surface) and present results from a preliminary performance evaluation.
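
The abstract above describes overlaying client streams using the removed background as a mask; a minimal compositing sketch of that idea is given below. It assumes simple per-client boolean foreground masks and HxWx3 frames, and is not the SurfaceStreams implementation.

    import numpy as np

    # Sketch: overlay client streams using their foreground (removed-background)
    # masks; later clients are drawn on top. Shapes and ordering are assumptions.
    def composite(frames, masks):
        out = np.zeros_like(frames[0])
        for frame, mask in zip(frames, masks):
            out[mask] = frame[mask]
        return out

    frames = [np.full((120, 160, 3), c, dtype=np.uint8) for c in (40, 200)]
    masks = [np.zeros((120, 160), dtype=bool), np.zeros((120, 160), dtype=bool)]
    masks[0][:, :80] = True           # client 0: left half is foreground
    masks[1][30:90, 60:120] = True    # client 1: overlapping patch in the middle
    merged = composite(frames, masks)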

Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation

Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load for augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in two different versions of our HoloLens-based system. To investigate whether user perceptions of the two versions differ, we conducted an experiment with 18 participants (9 pairs), followed by a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.

Aalto Interface Metrics (AIM): A Service and Codebase for Computational GUI Evaluation

Aalto Interface Metrics (AIM) pools several empirically validated models and metrics of user perception and attention into an easy-to-use online service for the evaluation of graphical user interface (GUI) designs. Users input a GUI design via URL, and select from a list of 17 different metrics covering aspects ranging from visual clutter to visual learnability. AIM presents detailed breakdowns, visualizations, and statistical comparisons, enabling designers and practitioners to detect shortcomings and possible improvements. The web service and code repository are available at interfacemetrics.aalto.fi.

Shared Autonomy for an Interactive AI System

Across many domains, interactive systems either make decisions for us autonomously or yield decision-making authority to us and play a supporting role. However, many settings, such as those in education or the workplace, benefit from sharing this autonomy between the user and the system, and thus from a system that adapts to them over time. In this paper, we pursue two primary research questions: (1) How do we design interfaces to share autonomy between the user and the system? (2) How does shared autonomy alter a user's perception of a system? We present SharedKeys, an interactive shared autonomy system for piano instruction that plays different video segments of a piece for students to emulate and practice. Underlying our approach to shared autonomy is a mixed-observability Markov decision process that estimates a user's desired autonomy level based on her performance and attentiveness. Pilot studies revealed that students sharing autonomy with the system learned more quickly and perceived the system as more intelligent.
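
The abstract mentions a mixed-observability Markov decision process that estimates the user's desired autonomy level from performance and attentiveness; the sketch below only illustrates the belief-tracking flavor of such an approach. The states, transition matrix, and observation model are invented for illustration and are not SharedKeys' actual model.

    import numpy as np

    # Hypothetical sketch: Bayesian belief tracking over a user's desired
    # autonomy level, loosely in the spirit of a mixed-observability MDP.
    AUTONOMY_LEVELS = ["system-led", "shared", "user-led"]      # hidden state
    TRANSITION = np.array([[0.80, 0.15, 0.05],                  # P(s' | s), assumed
                           [0.10, 0.80, 0.10],
                           [0.05, 0.15, 0.80]])

    def observation_likelihood(performance, attentiveness):
        # Assumed likelihood of observing (performance, attentiveness) in [0, 1] per state.
        return np.array([
            (1 - performance) * attentiveness,        # struggling but attentive -> system-led
            0.5 + 0.5 * attentiveness * performance,  # engaged and doing ok -> shared
            performance * (1 - 0.5 * attentiveness),  # performing well -> user-led
        ]) + 1e-6

    def update_belief(belief, performance, attentiveness):
        predicted = TRANSITION.T @ belief             # predict step
        posterior = predicted * observation_likelihood(performance, attentiveness)
        return posterior / posterior.sum()            # normalize

    belief = np.array([1 / 3] * 3)
    for perf, att in [(0.4, 0.9), (0.6, 0.8), (0.9, 0.7)]:
        belief = update_belief(belief, perf, att)
        print(dict(zip(AUTONOMY_LEVELS, belief.round(2))))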

DynamicSlide: Reference-based Interaction Techniques for Slide-based Lecture Videos

Presentation slides play an important role in online lecture videos. Slides convey the main points of the lecture visually, while the instructor's narration adds detailed verbal explanations to each item in the slide. We call the link between a slide item and the corresponding part of the narration a reference. To assess the feasibility of reference-based interaction techniques for watching videos, we introduce DynamicSlide, a system consisting of a video processing pipeline that automatically extracts references from slide-based lecture videos and a video player that uses them. The player incorporates a set of reference-based techniques: emphasizing the slide item currently being explained, enabling item-based navigation, and enabling item-based note-taking. Our pipeline correctly finds 79% of the references in a set of five videos with 141 references. Results from a user study suggest that DynamicSlide's features improve the learner's video browsing and navigation experience.

Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity

Despite advances in machine learning and deep neural networks, there is still a huge gap between machine and human image understanding. One of the causes is the annotation process used to label training images. In most image categorization tasks, there is a fundamental ambiguity between some image categories, and the underlying class probability ranges from very obvious cases to ambiguous ones. However, current machine learning systems and applications usually work with discrete annotation processes, and the training labels do not reflect this ambiguity. To address this issue, we propose a new image annotation framework in which labeling incorporates human gaze behavior. In this framework, gaze behavior is used to predict image labeling difficulty. The image classifier is then trained with sample weights defined by the predicted difficulty. We demonstrate our approach's effectiveness on four-class image classification tasks.
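
As a rough illustration of the weighting scheme described above (a classifier trained with sample weights derived from gaze-predicted labeling difficulty), the sketch below uses placeholder features, labels, and difficulty scores; the actual gaze features, difficulty predictor, and classifier are not specified in the abstract.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Sketch (assumed names/data): weight training samples by gaze-predicted
    # labeling difficulty, so ambiguous images contribute less to the classifier.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                 # image features (placeholder)
    y = rng.integers(0, 4, size=200)               # four-class labels
    predicted_difficulty = rng.uniform(size=200)   # stand-in for a gaze-based difficulty regressor

    sample_weight = 1.0 - 0.8 * predicted_difficulty   # easy images -> weight near 1
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y, sample_weight=sample_weight)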

Touch180: Finger Identification on Mobile Touchscreen using Fisheye Camera and Convolutional Neural Network

We present Touch180, a computer vision based solution for identifying fingers on a mobile touchscreen with a fisheye camera and deep learning algorithm. As a proof-of-concept research, this paper focused on robustness and high accuracy of finger identification. We generated a new dataset for Touch180 configuration, which is named as Fisheye180. We trained a CNN (Convolutional Neural Network)-based network utilizing touch locations as auxiliary inputs. With our novel dataset and deep learning algorithm, finger identification result shows 98.56% accuracy with VGG16 model. Our study will serve as a step stone for finger identification on a mobile touchscreen.
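
A hypothetical sketch of a VGG16-based network that takes the touch location as an auxiliary input, as the abstract describes at a high level; the layer sizes and fusion strategy are assumptions, not the published Touch180 architecture.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Sketch only: VGG16 features fused with the 2D touch location before the
    # classification head. Sizes and fusion choices are assumptions.
    class FingerIdNet(nn.Module):
        def __init__(self, num_fingers=10):
            super().__init__()
            vgg = models.vgg16(weights=None)
            self.backbone = vgg.features                     # convolutional trunk
            self.pool = nn.AdaptiveAvgPool2d((7, 7))
            self.head = nn.Sequential(
                nn.Linear(512 * 7 * 7 + 2, 512), nn.ReLU(),  # +2 for the (x, y) touch point
                nn.Linear(512, num_fingers),
            )

        def forward(self, image, touch_xy):
            feats = self.pool(self.backbone(image)).flatten(1)
            return self.head(torch.cat([feats, touch_xy], dim=1))

    logits = FingerIdNet()(torch.randn(4, 3, 224, 224), torch.rand(4, 2))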

OmniEyeball: Spherical Display Equipped With Omnidirectional Camera And Its Application For 360-Degree Video Communication

We propose OmniEyeball (OEB), which is a novel interactive 360° image I/O system. It integrates the spherical display system with an omnidirectional camera to enable both capturing the 360° panoramic live streaming video as well as displaying it. We also present its unique application for symmetric 360° video communication by utilizing two OEB terminals, which may solve the narrow field-of-view problem in video communication. In addition, we designed a vision-based touch detection technique as well as some features to support 360° video communication.

AmbientLetter: Letter Presentation Method for Discreet Notification of Unknown Spelling when Handwriting

We propose a technique to support writing activity in a confidential manner with a pen-based device. Autocorrect and predictive conversion do not work when writing by hand, and looking up an unknown spelling is sometimes embarrassing. Therefore, we propose AmbientLetter, which seamlessly and discreetly presents the forgotten spelling to the user in scenarios where handwriting is necessary. In this work, we describe the system structure and the technique used to conceal the fact that the user is receiving the information.

Head Pose Classification by using Body-Conducted Sound

Vibrations generated by human activity have been used for recognizing human behavior and developing user interfaces; however, it is difficult to estimate static poses that do not generate a vibration. This could be solved using active acoustic sensing, but emitting vibrations around the head is undesirable because of their influence on hearing. Therefore, we propose a method for estimating head poses using body-conducted sound that is naturally and regularly generated in the human body. A support vector classifier recognizes vertical and horizontal directions of the head, and we confirmed the feasibility of the proposed method through experiments.

cARe: An Augmented Reality Support System for Dementia Patients

Symptoms of progressing dementia like memory loss, impaired executive function and decreasing motivation can gradually undermine instrumental activities of daily living (IADL) such as cooking. Assistive technologies in the form of augmented reality (AR) have previously been applied to support cognitively impaired users during IADLs. In most cases, instructions were provided locally via projection or a head-mounted display (HMD) but lacked an incentive mechanism and the flexibility to support a broad range of use cases. To provide users and therapists with a holistic solution, we propose cARe, a framework that can easily be adapted by therapists to various use cases without any programming knowledge. Users are then guided through manual processes with localized visual and auditory cues that are rendered by an HMD. Our ongoing user study indicates that users are more comfortable and successful when cooking with cARe than with a printed recipe, which promises more dignified and autonomous living for dementia patients.

Scaling Notifications Beyond Alerts: From Subtly Drawing Attention up to Forcing the User to Take Action

Although research has explored sophisticated notifications, today's devices mainly stick to a binary level of information: they are either attention-drawing or silent. We propose scalable notifications, which adjust their intensity from subtle to obtrusive, and even beyond that level to force the user to take action. To illustrate the technical feasibility and validity of this concept, we developed three prototypes providing mechano-pressure, thermal, and electrical feedback, which were evaluated in different lab studies. Our first prototype provides anything from subtle poking up to high and frequent pressure on the user's spine, which significantly improves back posture. In a second scenario, the user perceives the overuse of a drill through an increased temperature on the palm of the hand until the heat is intolerable, eventually forcing the user to put down the tool. The last application comprises speed control in a driving simulation, in which electric muscle stimulation on the user's legs conveys information about changes in the car's speed through a perceived tingling, until the system forces the foot to move involuntarily. In conclusion, all studies' findings support the feasibility of our concept of a scalable notification system, including the system forcing an intervention.

Companion - A Software Toolkit for Digitally Aided Pen-and-Paper Tabletop Roleplaying

We present Companion, a software tool tailored towards improving and digitally supporting the pen-and-paper tabletop role-playing experience. Pen-and-paper role-playing games (P&P RPG) have been known since the early 1970s. Since then, the genre has attracted a massive community of players while branching out into several genres and P&P RPG systems to choose from. Due to the highly interactive and dynamic nature of the game, a participant's individual impact on its narrative and interactive aspects is extremely high. The diversity of scenarios within this context reveals a variety of player needs, as well as factors limiting and enhancing game-play. Companion offers an audio management workspace for the creation and playback of soundscapes based on visual layouting. It supports interactive image presentation and map exploration, which can incorporate input from any device providing TUIO tracking data. Additionally, a mobile app was developed to be used as a remote control for media activation on the desktop host.

Post-literate Programming: Linking Discussion and Code in Software Development Teams

The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.

Juggling 4.0: Learning Complex Motor Skills with Augmented Reality Through the Example of Juggling

Learning new motor skills is a problem that people are constantly confronted with (e.g., learning a new kind of sport). In our work, we investigate to what extent the learning process of a motor sequence can be optimized with the help of Augmented Reality as a technical assistant. To this end, we propose an approach that divides the problem into three tasks: (1) tracking the necessary movements, (2) creating a model that calculates possible deviations and (3) implementing a visual feedback system. To evaluate our approach, we implemented the idea using infrared depth sensors and an Augmented Reality head-mounted device (HoloLens). Our results show that the system can effectively assist with the correct height of a throw with one ball. Furthermore, it provides a basis for supporting a complete juggling sequence.

Wearable Kinesthetic I/O Device for Sharing Muscle Compliance

In this paper, we present a wearable kinesthetic I/O device, which is able to measure and intervene in multiple muscle activities simultaneously through the same electrodes. The developed system includes an I/O module capable of measuring the electromyogram (EMG) of four muscle tissues while applying electrical muscle stimulation (EMS) at the same time. The wearable system is configured in a scalable manner, achieving 1) a high stimulus frequency (up to 70 Hz), 2) wearable dimensions that allow the device to be placed along the limbs, and 3) a flexible number of I/O electrodes (up to 32 channels). In a pilot user study on sharing wrist compliance between two persons, participants were able to recognize the level of their confederate's wrist joint compliance on a 4-point Likert scale. The developed system could benefit a physical therapist and a patient during hand rehabilitation, for example when using a peg board, by sharing their wrist compliance and grip force, which are usually difficult to observe visually.

Reversing Voice-Related Biases Through Haptic Reinforcement

Biased perceptions of others are known to negatively influence the outcomes of social and professional interactions in many regards. These biases can be informed by a multitude of non-verbal cues such as voice pitch and voice volume. This project explores how haptic effects, generated from speech, could attenuate listeners' voice-related biases formed from a speaker's voice pitch. Promising preliminary results collected during a decision-making task suggest that the speech-to-haptic mapping and vibration delivery mechanism employed does attenuate voice-related biases. Accordingly, it is anticipated that such a system could be introduced in the workplace to equalize people's contribution opportunities and to create a more inclusive environment by reversing voice-related biases.

Mixed-Reality for Object-Focused Remote Collaboration

In this paper we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential for effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently from one another; and (3) render gestures in the remote workspace. In this work, we focus on the expert's role and introduce an interaction technique allowing users to quickly manipulate 3D virtual objects in space.

Trans-scale Playground: An Immersive Visual Telexistence System for Human Adaptation

In this paper, we present a novel telexistence system and design methods for telexistence studies that explore spatial-scale deconstruction. Studies on the experience of dwarf-sized or giant-sized telepresence have been conducted over many years. In this study, we discuss the scale of movements, image transformation, technical components of telepresence robots, and user experiences of telexistence-based spatial transformations. We implemented two types of telepresence robots with an omnidirectional stereo camera setup for a spatial trans-scale experience: wheeled robots and quadcopters. These telepresence robots provide users with a trans-scale experience at distances ranging from 15 cm to 30 m. We conducted user studies with different camera positions on the robots and with different image transformation methods.

Augmenting Human Hearing Through Interactive Auditory Mediated Reality

To filter and shut out an increasingly loud environment, many resort to the use of personal audio technology: they drown out unwanted sounds by wearing headphones. This uniform interaction with all surrounding sounds can have a negative impact on social relations and situational awareness. Leveraging mediation through smarter headphones, users gain more agency over their sense of hearing, for instance by being able to selectively alter the volume and other features of specific sounds without losing the ability to add media. In this work, we propose the vision of interactive auditory mediated reality (AMR). To understand users' attitudes and requirements, we conducted a week-long event sampling study (n = 12), where users recorded and rated sources (n = 225) which they would like to mute, amplify or turn down. The results indicate that besides muting, a distinct "quiet-but-audible" volume exists. It caters to two requirements at the same time: aesthetics/comfort and information acquisition.

Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming

User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction and introduces an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents, as well as ambient soundscapes, have traditionally been used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled to vary the frequency and intensity of visual, auditory and olfactory stimuli. It is a new user interface shaped as a seat or pod that primes the user to induce improved mood and cognition, thereby improving the work environment.

Kaleidoscope: An RDF-based Exploratory Data Analysis Tool for Ideation Outcomes

Evaluating and selecting ideas is a critical and time-consuming step in collaborative ideation, making computational support for this task a desired research goal. However, existing automatic approaches to idea selection might eliminate valuable ideas. In this work we combine automatic approaches with human sensemaking. Kaleidoscope is an exploratory data analytics tool based on semantic technologies. It supports users in exploring and annotating existing ideas interactively. In the following, we present key design principles of Kaleidoscope. Based on qualitative feedback collected on a prototype, we identify potential improvements and describe future work.

Perceptual Switch for Gaze Selection

One of the main drawbacks of fixation-based gaze interfaces is that they are unable to distinguish top-down attention (selection, a gaze with a purpose) from stimulus-driven bottom-up attention (navigation, a stare without any intention) without dwell times or unnatural eye movements. We found that using a bistable image, the Necker cube, as a button user interface (UI) helps to remedy this limitation. When users switch between the two rivaling percepts of the Necker cube at will, unique eye movements are triggered, and these characteristics can be used to indicate a button press or a selecting action. In this paper, we (1) introduce the cognitive phenomenon called "percept switch" for gaze interaction, and (2) propose the "perceptual switch," a Necker-cube user interface (UI) that uses the percept switch as the indication of a selection. Our preliminary experiment confirms that the perceptual switch can be used to distinguish voluntary gaze selection from random navigation, and we discuss how visual elements of the Necker cube such as size and biased visual cues could be adjusted for optimal use by individual users.

ZEUSSS: Zero Energy Ubiquitous Sound Sensing Surface Leveraging Triboelectric Nanogenerator and Analog Backscatter Communication

ZEUSSS (Zero Energy Ubiquitous Sound Sensing Surface) allows physical objects and surfaces to be instrumented with a thin, self-sustainable material that provides acoustic sensing and communication capabilities. We have built a prototype ZEUSSS tag using minimal hardware and flexible electronic components, extending our original self-sustaining SATURN microphone with a printed, flexible antenna to support passive communication via analog backscatter. ZEUSSS enables objects to have ubiquitous, wire-free, battery-free, audio-based context sensing, interaction, and surveillance capabilities.

reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper

We present a tangible memory notebook--reMi--that records ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate motions synchronized with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, digital visual cues trapped behind a device's 2D screen are not the only means of recalling a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how the tangible motion of a paper that represents sound can enhance reminiscence.

Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants

Most artificial intelligence (AI) systems to date have focused entirely on performance, and rarely if at all on their social interactions with people and how to balance the AIs' goals against their human collaborators'. Learning quickly from interactions with people poses social challenges and remains technically unresolved. In this paper, we introduce engagement learning: a training approach that learns to trade off what the AI needs---the knowledge value of a label to the AI---against what people are interested to engage with---the engagement value of the label. We realize our goal with ELIA (Engagement Learning Interaction Agent), a conversational AI agent whose goal is to learn new facts about the visual world by asking engaging questions of people about the photos they upload to social media. Our current deployment of ELIA on Instagram receives a response rate of 26%.

A WOZ Study of Feedforward Information on an Ambient Display in Autonomous Cars

We describe the development and user testing of an ambient display for autonomous vehicles. Instead of providing feedback about driving actions once executed, it communicates driving decisions in advance, via light signals in passengers' peripheral vision. This ambient display was tested in a WoZ-based on-the-road driving simulation of a fully autonomous vehicle. Findings from a preliminary study with 14 participants suggest that such a display might be particularly useful to communicate upcoming inertia changes to passengers.

CrowdMuse: An Adaptive Crowd Brainstorming System

Online crowds, with their large numbers and diversity, show great potential for creativity, particularly during large-scale brainstorming sessions. Research has explored different ways of augmenting this creativity, such as showing ideators some form of inspiration to get them to explore more categories or generate more ideas. The mechanisms used to select which inspirations are shown to ideators thus far have been focused on characteristics of the inspirations rather than on ideators. This can hinder their effect, as creativity research has shown that ideators have unique cognitive structures and may therefore be better inspired by some ideas rather than others. We introduce CrowdMuse, an adaptive system for supporting large scale brainstorming. The system models ideators based on their past ideas and adapts the system views and inspiration mechanisms accordingly. An evaluation of this system could inform how to better individually support ideators.

Active Authentication on Smartphone using Touch Pressure

Smartphone user authentication is still an open challenge because a balance between security and usability is indispensable. Active authentication is one way to achieve this balance. In this paper, we aim to improve the accuracy of active authentication by adopting online learning with touch pressure. Smartphones equipped with pressure sensors have become widely available in recent years, and we confirm the effectiveness of touch pressure as an additional authentication feature. Our experiments with the online AROW algorithm show that the equal error rate (EER), where the miss rate and false-accept rate are equal, is reduced to as little as one-fifth by adding the touch pressure feature. Moreover, we confirmed that training with data from both the sitting and prone postures performs best when testing across a variety of postures including sitting, standing and prone, achieving an EER as low as 0.14%.
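
For readers unfamiliar with the metric, the sketch below shows one common way to compute an equal error rate from genuine and impostor scores; the scores are synthetic and the AROW training itself is not reproduced here.

    import numpy as np

    # Illustrative only: compute the EER from authentication scores.
    def equal_error_rate(genuine, impostor):
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
        frr = np.array([(genuine < t).mean() for t in thresholds])     # false reject rate
        i = int(np.argmin(np.abs(far - frr)))                          # where the two rates cross
        return (far[i] + frr[i]) / 2

    rng = np.random.default_rng(1)
    genuine_scores = rng.normal(0.8, 0.10, 500)    # synthetic scores for the legitimate user
    impostor_scores = rng.normal(0.4, 0.15, 500)   # synthetic scores for other users
    print(f"EER = {equal_error_rate(genuine_scores, impostor_scores):.3f}")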

DisplayBowl: A Bowl-Shaped Display for Omnidirectional Videos

We introduce DisplayBowl, a bowl-shaped hemispherical display for showing omnidirectional images. The display provides three ways of observing omnidirectional images. DisplayBowl allows users to observe an omnidirectional image by looking at it from above. In addition, users can see it from a first-person viewpoint by looking into the inside of the hemispherical surface from diagonally above. Furthermore, by observing both the inside and the outside of the hemispherical surface at the same time from obliquely above, it is possible to take a pseudo third-person viewpoint, like watching a drone obliquely from behind. These viewing modes address the inability of pilots controlling a remote vehicle, such as a drone, to notice what happens behind it, a problem that occurs with conventional displays such as flat displays and head-mounted displays.

Investigation into Natural Gestures Using EMG for "SuperNatural" Interaction in VR

Can natural interaction requirements be fulfilled while still harnessing the "supernatural" fantasy of Virtual Reality (VR)? In this work we used off-the-shelf Electromyogram (EMG) sensors as an input device that can afford natural gestures to perform the "supernatural" task of growing one's arm in VR. We recorded 18 participants performing a simple retrieval task in two phases: an initial phase, in which the stretch arm was disabled, and a learning phase, in which it was enabled. The results show that the gestures used in the initial phase differ from the main gestures used to retrieve an object in our system, and that the times taken to complete the learning phase are highly variable across participants.

I Know What You Want: Using Gaze Metrics to Predict Personal Interest

In daily communications, we often use interpersonal cues - telltale facial expressions and body language - to moderate responses to our conversation partners. While we are able to interpret gaze as a sign of interest or reluctance, conventional user interfaces do not yet possess this capability. In our work, we evaluate to what degree fixation-based gaze metrics can be used to infer a user's personal interest in the displayed content. We report on a study (N=18) where participants were presented with a grid array of different images while their gaze behavior was recorded. Our system calculated a ranking of the shown images based on gaze metrics. By analyzing their agreement with the system's ranking, we found that all metrics are effective indicators of the participants' interest. In an evaluation in a museum, we found that this translates to in-the-wild scenarios despite environmental constraints, such as limited data accuracy.
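
A minimal sketch of the kind of fixation-based ranking described above, assuming a simple (image id, fixation duration) log; the study's actual gaze metrics and data format are not given in the abstract.

    from collections import defaultdict

    # Sketch with an assumed data format: rank displayed images by simple
    # fixation-based metrics such as total dwell time and fixation count.
    fixations = [  # (image_id, fixation_duration_ms) -- placeholder eye-tracking log
        ("img_3", 420), ("img_1", 180), ("img_3", 610), ("img_7", 250), ("img_3", 300),
    ]

    dwell = defaultdict(float)
    count = defaultdict(int)
    for image_id, duration in fixations:
        dwell[image_id] += duration
        count[image_id] += 1

    ranking = sorted(dwell, key=lambda i: (dwell[i], count[i]), reverse=True)
    print(ranking)   # images ordered by inferred interest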

D-Aquarium: A Digital Aquarium to Reduce Perceived Waiting Time at Children's Hospital

Patients who wait a long time for medical services become more physically and psychologically anxious than people waiting for general services. Since children feel more anxiety and fear in a hospital, it is necessary to reduce their perceived waiting time by disturbing their awareness of time and dispersing their attention. We present D-Aquarium, a computer-based digital aquarium that provides psychological stability to pediatric patients and reduces their perceived waiting time by using distractions to alleviate their psychological anxiety and interfere with their perception of time.

One Button to Rule Them All: Rendering Arbitrary Force-Displacement Curves

Physical buttons provide rich force characteristics over their travel range, commonly described in the form of force-displacement curves. These force characteristics play an important role in the user's experience of pressing a button. However, due to the lack of proper tools to dynamically render various force-displacement curves, little work has attempted iterative button design improvement. This paper presents Button Simulator, a low-cost 3D-printed physical button capable of displaying any force-displacement curve, with an average error around 0.034 N. By reading the force-displacement curves of existing push-buttons, we can easily replicate the force characteristics of any button on our Button Simulator. One can even go beyond existing buttons and design non-existent ones in the form of arbitrary force-displacement curves, then use Button Simulator to render the sensation. This project will be open-sourced and the implementation details will be released. Our system can be a useful tool for future researchers, designers, and makers to investigate rich and dynamic button force designs.
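
The core rendering idea, looking up the force to output at the current displacement on a stored force-displacement curve, can be sketched as below; the curve values and actuator interface are placeholders rather than the Button Simulator firmware.

    import numpy as np

    # Sketch: interpolate a stored force-displacement curve to get the force to
    # render at the measured button travel. Values below are placeholders.
    displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # button travel
    force_n = np.array([0.0, 0.6, 1.2, 0.7, 0.9, 1.8])           # curve with a tactile "click" bump

    def target_force(current_displacement_mm):
        return float(np.interp(current_displacement_mm, displacement_mm, force_n))

    # e.g., inside a control loop: read displacement, then command the actuator
    print(target_force(1.2))   # force (N) to render at 1.2 mm of travel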

Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction

Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs), given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently obtained using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.
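
One common vergence-based formulation estimates the 3D gaze point as the midpoint of the closest points between the two eyes' gaze rays; the sketch below shows that geometry with illustrative eye positions and directions, and is not necessarily the exact algorithm used in the paper.

    import numpy as np

    # Sketch: approximate ray "intersection" for vergence-based 3D gaze.
    def gaze_point_3d(o_left, d_left, o_right, d_right):
        d_left = d_left / np.linalg.norm(d_left)
        d_right = d_right / np.linalg.norm(d_right)
        w0 = o_left - o_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ w0, d_right @ w0
        denom = a * c - b * b                          # ~0 when rays are parallel
        t_left = (b * e - c * d) / denom if denom > 1e-9 else 0.0
        t_right = (a * e - b * d) / denom if denom > 1e-9 else 0.0
        # midpoint of the shortest segment between the two rays
        return (o_left + t_left * d_left + o_right + t_right * d_right) / 2

    p = gaze_point_3d(np.array([-0.03, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                      np.array([0.03, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))
    print(p)   # approximate 3D fixation point (metres), here near (0, 0, 0.3)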

Phonoscape: Auralization of Photographs using Stereophonic Auditory Icons

In this paper, we develop an auditory display method that improves the comprehension of photographs, targeting support systems for people with visual impairments. The auralization method combines object recognition, auditory iconization and stereophonic techniques. Experiments confirmed improved intelligibility and discriminability compared to an image-to-speech reading-machine method.

DroneCTRL: A Tangible Remote Input Control for Quadcopters

Recent research has presented quadcopters as enablers of mid-air interaction. Using quadcopters to provide tactile feedback, navigation, or user input is the current scope of related work. However, most quadcopter steering systems are complicated for non-expert users or require an expensive tracking system for autonomous flying. Safety-critical scenarios require trained and expensive personnel to navigate quadcopters through crucial flight paths within narrow spaces. To simplify the input and manual operation of quadcopters, we present DroneCTRL, a tangible pointing device for navigating quadcopters. DroneCTRL resembles a remote control, includes optional visual feedback via a laser pointer, and provides tangibility to improve quadcopter control usability for non-expert users. In a preliminary user study, we compare the efficiency of hardware- and software-based controllers with DroneCTRL. Our results favor the use of DroneCTRL, with and without visual feedback, in terms of precision and accuracy.

resources2city Explorer: A System for Generating Interactive Walkable Virtual Cities out of File Systems

We present resources2city Explorer (R2CE), a tool for representing file systems as interactive, walkable virtual cities. R2CE visualizes file systems based on concepts of spatial, 3D information processing. For this purpose, it extends the range of functions of conventional file browsers considerably. Visual elements in a city generated by R2CE represent (relations of) objects of the underlying file system. The paper describes the functional spectrum of R2CE and illustrates it by visualizing a sample of 940 files.

EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expressions

The muscles surrounding the human eye are capable of performing a wide range of expressions such as squinting, blinking, frowning, and raising eyebrows. This work explores the use of these ocular expressions to expand the input vocabularies of hands-free interactions. We conducted a series of user studies: 1) to understand which eye expressions users could consistently perform among all possible expressions, and 2) to explore how these expressions can be used for hands-free interactions through a user-defined design process. Our results showed that most participants could consistently perform 9 of the 18 possible eye expressions. Also, in the user-defined design study, participants used the eye expressions to create hands-free interactions for state-of-the-art augmented reality (AR) head-mounted displays.

Game Design for Users with Constraint: Exergame for Older Adults with Cognitive Impairment

In order to design serious games, attention needs to be paid to the target users. One important application of serious games is the design of games for older adults with dementia. Interfaces and activities in games designed for this group should consider both the cognitive and physical limitations of these users, which may be challenging. We overcome these challenges by using the advantages of new head-mounted display virtual reality (HMD-VR) technology and the knowledge of experts. The results of a preliminary three-week exercise involving participants with dementia show that our design approach has succeeded in creating an interesting environment and could engage participants in the game.

Pop-up Robotics: Facilitating HRI in Public Spaces

Human-Robot Interaction (HRI) research in public spaces often encounters delays and restrictions due to several factors, including the need for sophisticated technology, regulatory approvals, and public or community support. To remedy these concerns, we suggest HRI can apply the core philosophy of Tactical Urbanism, a concept from urban planning, to catalyze HRI in public spaces, provide community feedback and information on the feasibility of future implementations of robots in public, and also create social impact and forge connections with the community while spreading awareness about robots as a public resource. As a case study, we share tactics used and strategies followed to conduct a pop-up style study of 'A robotic mailbox to support and raise awareness about homelessness.' We discuss benefits and challenges of the pop-up approach and recommend using it to enable the social studies of HRI not only to match but to precede the fast-paced technological advancement and deployment of robots.

SESSION: Demo Reception

Scout: Mixed-Initiative Exploration of Design Variations through High-Level Design Constraints

Although the exploration of variations is a key part of interface design, current processes for creating variations are mostly manual. We present Scout, a system that helps designers explore many variations rapidly through mixed-initiative interaction with high-level constraints and design feedback. Past constraint-based layout systems use low-level spatial constraints and mostly produce only a single design. Scout advances upon these systems by introducing high-level constraints based on design concepts (e.g. emphasis). With Scout, we have formalized several high-level constraints into their corresponding low-level spatial constraints to enable rapidly generating many designs through constraint solving and program synthesis.

PrintMotion: Actuating Printed Objects Using Actuators Equipped in a 3D Printer

We introduce a novel use for desktop 3D printers using actuators equipped in the printers. The actuators control an extruder and a build-plate mounted on a fused deposition modeling (FDM) 3D printer, moving them horizontally or vertically. Our technique enables actuation of 3D-printed objects on the build-plate by controlling the actuators, and people can interact with them by connecting interface devices to the 3D printer. In this work, we describe how to actuate printed objects using the actuators and present several objects illustrated by our technique.

MetaArms: Body Remapping Using Feet-Controlled Artificial Arms

We introduce MetaArms, wearable anthropomorphic robotic arms and hands with six degrees of freedom, operated by the user's legs and feet. Our overall research goal is to re-imagine what our bodies can do with the aid of wearable robotics using a body-remapping approach. To this end, we present an initial exploratory case study. MetaArms' two robotic arms are controlled by the user's feet motion, and the robotic hands can grip objects according to the bending of the user's toes. Haptic feedback correlated with the objects touched by the robotic hands is presented on the user's feet, creating a closed-loop system. Using this system, users can experience interaction with an expanded number of arms, in which their legs are mapped onto the artificial limbs. MetaArms provided initial indications of a sense of limb alteration.

Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics

Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create each subsequent design artifact from scratch, which produces unnecessary waste and does not allow the reuse of functional components. We present Interactive Tangrami: paper-folded and reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages the communication with the physical artifact and streams the interaction data via the Open Sound Control (OSC) protocol to an application prototyping system (e.g., MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our system allows prototyping physical user interfaces within minutes and without knowledge of the underlying technologies. We demo its usefulness with two application examples.

Face/On: Actuating the Facial Contact Area of a Head-Mounted Display for Increased Immersion

In this demonstration, we introduce Face/On, an embedded feedback device that leverages the contact area between the user's face and a virtual reality (VR) head-mounted display (HMD) to provide rich haptic feedback in virtual environments (VEs). Head-worn haptic feedback devices have been explored in previous work to provide directional cues via grids of actuators and localized feedback on the user's skin. Most of these solutions were immersion-breaking due to their encumbering and uncomfortable design, and were built around a single actuator type, limiting the overall fidelity and flexibility of the haptic feedback. We present Face/On, a VR HMD face cushion with three types of discreetly embedded actuators that provide rich haptic feedback without encumbering users with invasive instrumentation on the body. By combining vibro-tactile and thermal feedback with electrical muscle stimulation (EMS), Face/On can simulate a wide range of scenarios and benefit from synergy effects between these feedback types.

Transparent Mask: Face-Capturing Head-Mounted Display with IR Pass Filters

Virtual reality (VR) using head-mounted displays (HMDs) has been rapidly becoming popular. Many HMD products and various VR applications such as games, training tools and communication services have been released in recent years. However, there is a well-known problem: the user's face is covered by the HMD, preventing facial expressions from being captured. This strongly restricts VR applications. For example, users wearing HMDs normally cannot exchange their face images, which degrades communication quality in virtual spaces because facial expressions are an important element of human communication.

Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm

In this demonstration, as a new haptic presentation method for objects in virtual reality (VR) environments, we show a device that presents the haptic sensation of the fingertip on the forearm rather than on the fingertip itself. The device adopts a five-bar linkage mechanism and can present the strength and direction of force. Compared with fingertip-mounted displays, it addresses the issues of weight and size that hinder free finger movement. We have confirmed that the VR experience is improved compared with a no-haptics condition, even without presenting haptic information directly to the fingertip.

Haptopus: Haptic VR Experience Using Suction Mechanism Embedded in Head-mounted Display

With the spread of VR experiences using HMD, many proposals have been made to improve the experiences by providing tactile information to the fingertips. However, there are problems, such as difficulty attaching and detaching the devices and hindrances to free finger movement. To solve these issues, we developed "Haptopus," which embeds a tactile display in the HMD and presents tactile sensations to the face. In this paper, we conducted a preliminary investigation on the best suction pressure and compared Haptopus to conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.

Unlimited Electric Gum: A Piezo-based Electric Taste Apparatus Activated by Chewing

Herein, we propose "unlimited electric gum," an electric taste device that enables users to perceive taste for as long as they chew the gum. We developed a novel in-mouth electric taste-imparting apparatus using a piezoelectric element, so that the stimulation is driven by chewing via the piezoelectric effect. This enabled the design of a device that requires neither cables around the user's lips nor batteries in the mouth. In this paper, we introduce the device and report our experimental and exhibition results.

AccordionFab: Fabricating Inflatable 3D Objects by Laser Cutting and Welding Multi-Layered Sheets

In this paper, we propose a method to create 3D inflatable objects by laminating plastic layers. AccordionFab is a fabrication method with which the user can rapidly prototype multi-layered inflatable structures using a common laser cutter. Our key finding is that it is possible to selectively weld the two uppermost plastic sheets of a stack by defocusing the laser and inserting heat-resistant paper below the desired welding layer. As contributions, we investigated the optimal distance between the lens and the workpiece for cutting and welding, and developed an attachment that supports the welding process. We also developed a mechanism for changing the thickness and bending angle of multi-layered objects and created simulation software. Using these techniques, the user can create various prototypes, such as personal furniture that fits the user's body and packing containers that fit their contents.

HoloRoyale: A Large Scale High Fidelity Augmented Reality Game

Recent years have seen an explosion in Augmented Reality (AR) experiences for consumers. These experiences can be classified based on the scale of the interactive area (room vs. city/global scale) or the fidelity of the experience (high vs. low). Experiences that target large areas, such as campus or world scale, commonly have only rudimentary interactions with the physical world, and suffer from registration errors and jitter. We classify these experiences as large scale and low fidelity. On the other hand, various room-sized experiences feature realistic interaction of virtual content with the real world. We classify these experiences as small scale and high fidelity. Our work is the first to explore the domain of large scale high fidelity (LSHF) AR experiences. We build upon the small scale high fidelity capabilities of the Microsoft HoloLens to allow LSHF interactions. We demonstrate the capabilities of our system with a game specifically designed for LSHF interactions, handling many challenges and limitations unique to the domain of LSHF AR through the game design. Our contributions are twofold: (1) the lessons learned during the design and development of a system capable of LSHF AR interactions, and (2) the identification of a set of reusable game elements specific to LSHF AR, including mechanisms for addressing spatiotemporal inconsistencies and crowd control. We believe our contributions will be fully applicable not only to games, but to all LSHF AR experiences.

Screen-Camera Communication via Matrix Barcode Utilizing Imperceptible Color Vibration

Communication between screens and cameras has attracted attention as a ubiquitous information source, motivated by the widespread use of smartphones and the increase of public advertising and information screens. We propose embedding matrix barcodes into images projected on displays by utilizing imperceptible color vibration. This approach maintains the visual experience as the barcodes are imperceptible and can be implemented on almost any display and camera for the technology to be pervasive. In fact, the color vibration can be generated by ordinary 60 Hz LCDs and captured by 120 fps smartphone cameras. To illustrate the technology capabilities, we present scenarios of potential practical applications.
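
As a rough illustration of imperceptible color vibration, the sketch below alternates two frames whose per-cell color offsets cancel, so the display appears unchanged on average while a high-speed camera can recover the embedded matrix from the frame difference; the offset size, channel choice, and cell layout are assumptions, not the paper's exact encoding.

    import numpy as np

    # Sketch: embed a binary matrix barcode as two complementary frames.
    def embed_frames(image, barcode, delta=2):
        h, w, _ = image.shape
        cell_h, cell_w = h // barcode.shape[0], w // barcode.shape[1]
        mask = np.kron(barcode, np.ones((cell_h, cell_w)))[:h, :w]   # 1 where a bit is set
        offset = (delta * mask)[..., None] * np.array([0, 0, 1])     # modulate one channel (assumed)
        frame_a = np.clip(image.astype(int) + offset, 0, 255).astype(np.uint8)
        frame_b = np.clip(image.astype(int) - offset, 0, 255).astype(np.uint8)
        return frame_a, frame_b   # shown on alternating display refreshes

    image = np.full((240, 240, 3), 128, dtype=np.uint8)              # placeholder content
    barcode = np.random.default_rng(0).integers(0, 2, size=(12, 12))
    fa, fb = embed_frames(image, barcode)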

OptRod: Constructing Interactive Surface with Multiple Functions and Flexible Shape by Projected Image

In this demonstration, we propose OptRod, which constructs interactive surfaces with multiple functions and flexible shapes from projected images. A PC generates images as control signals and projects them onto the bottom of the OptRods using a projector or LCD. An OptRod receives the light and converts its brightness into a control signal for the attached output device. By using multiple OptRods, the PC can simultaneously operate many output devices without any signal lines. Moreover, surfaces of various shapes can easily be arranged by combining multiple OptRods. OptRod supports various functions by replacing the device unit connected to it.

Haptic Interface Using Tendon Electrical Stimulation

This demonstration corresponds to our previous paper, which deals with our finding that a proprioceptive force sensation can be presented by electrical stimulation from the skin surface to the tendon region (Tendon Electrical Stimulation: TES). We showed that TES can elicit a force sensation and that adjusting the current parameters can control the amount of the sensation. Unlike electrical muscle stimulation (EMS), which can also present force sensation by stimulating motor nerves to contract muscles, TES is thought to present a proprioceptive force sensation by stimulating receptors or sensory nerves inside the tendon that are responsible for recognizing the magnitude of muscle contraction. In the demo, we offer attendees the opportunity to try TES.

FTIR-based Touch Pad for Smartphone-based HMD Enhancement

We propose to equip smartphone-based HMDs (SbHMDs) with an additional touch pad. SbHMDs are a low-cost approach to letting users experience virtual reality (VR). Current SbHMDs, however, provide poor input functionality, and external devices are sometimes necessary to enhance the VR experience. Our proposal uses frustrated total internal reflection (FTIR) to realize a touch pad on the external surface of the HMD case; no special devices are needed. As simple FTIR approaches do not suit SbHMDs due to the spatial relation between camera and light, we design an arrangement of acrylic plates and a mirror suited to the smartphone's built-in camera and torch light. It extends the input vocabulary of SbHMDs to include touch location, gestures, and also pressure.
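
The camera-side processing such an FTIR touch pad typically needs, thresholding the bright contact spots and reporting their centroids, can be sketched as follows; the threshold values and the synthetic test frame are placeholders, not the system's actual pipeline.

    import cv2
    import numpy as np

    # Sketch: detect FTIR touch blobs in a grayscale camera frame.
    def detect_touches(gray_frame, min_area=20):
        blurred = cv2.GaussianBlur(gray_frame, (5, 5), 0)
        _, binary = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for contour in contours:
            if cv2.contourArea(contour) >= min_area:
                m = cv2.moments(contour)
                touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # centroid (x, y)
        return touches

    frame = np.zeros((120, 160), dtype=np.uint8)
    cv2.circle(frame, (80, 60), 6, 255, -1)     # synthetic FTIR bright spot
    print(detect_touches(frame))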

The Immersive Bubble Chart: a Semantic and Virtual Reality Visualization for Big Data

In this paper, we introduce the Immersive Bubble Chart, a visualization for hierarchical datasets presented in a virtual reality (VR) world. Users are immersed in the visualization and interact with the bubbles using gestures, with a view to overcoming some limitations that 2D visualizations face due to the capabilities and interaction affordances of their devices. Technological advances in VR make it possible to design malleable and extensible representations and more natural and engaging interactions. Using the Oculus Touch controllers, users can grab and move the bubbles, throw them away or bump two of them together to create a cluster. We have tested the Immersive Bubble Chart with hierarchical clusters of semantically related terms generated from Twitter.

Collaborative Virtual Reality for Low-Latency Interaction

In collaborative virtual environments, users must often perform tasks requiring coordinated action between multiple parties. Some cases are symmetric, in which users work together on equal footing, while others are asymmetric, in which one user may have more experience or capabilities than another (e.g., one may guide another in completing a task). We present a multi-user virtual reality system that supports interactions of both these types. Two collaborating users, whether co-located or remote, simultaneously manipulate the same virtual objects in a physics simulation, in tasks that require low latency networking to perform successfully. We are currently applying this approach to motor rehabilitation, in which a therapist and patient work together.

Artificial Motion Guidance: an Intuitive Device based on Pneumatic Gel Muscle (PGM)

We present a wearable soft exoskeleton sleeve based on PGMs. The sleeve consists of four PGMs, is controlled by a computing system, and can actuate four different movements (hand extension, flexion, pronation and supination). Depending on how strong the actuation is, the user feels a slight force (haptic feedback) or the hand moves (if the user relaxes the muscles). The paper gives details about the system implementation, the interaction space and some ideas about application scenarios.

A Demonstration of VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR

In this demonstration we introduce VRSpinning, a seated locomotion approach based around stimulating the user's vestibular system with a rotational impulse to induce the perception of linear self-motion. Currently, most approaches for locomotion in VR either use concepts like teleportation for traveling longer distances or present a virtual motion that creates a visual-vestibular conflict, which is assumed to cause simulator sickness. With our platform we evaluated two designs for using the rotation of a motorized swivel chair to alleviate this: wiggle and impulse. Our evaluation showed that impulse, using short rotation bursts matched with the visual acceleration, can significantly reduce simulator sickness and increase the perception of self-motion compared to no physical motion.

An Interactive Pipeline for Creating Visual Blends

Visual blends are an advanced graphic design technique to draw users' attention to a message. They blend together two objects in a way that is novel and useful in conveying a message symbolically. This demo presents an interactive pipeline for creating visual blends that follows the iterative design process. Our pipeline decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. Our demo allows individual users to see how existing visual blends were made, edit or improve existing visual blends, and create new visual blends.

A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces

We present a textile sensor capable of detecting multi-touch and multi-pressure input on non-planar surfaces and demonstrate how such sensors can be fabricated and integrated into pressure-stabilized membrane envelopes (i.e., inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach to basic signal acquisition from such sensors and show how they can be leveraged to measure the internal air pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.
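
The abstract does not specify the readout electronics; as a generic illustration of multi-touch, multi-pressure acquisition, a row/column matrix scan that reports contacts above a pressure threshold could look like the sketch below, with the ADC interface and matrix size assumed.

    # Generic row/column matrix scan for a textile multi-touch sensor.
    # read_adc() and the 4x4 matrix size are assumptions for illustration.
    ROWS, COLS = 4, 4
    TOUCH_THRESHOLD = 200   # assumed ADC counts above the no-touch baseline

    def read_adc(row: int, col: int) -> int:
        # Placeholder for driving one row and sampling one column electrode.
        return 0

    def scan(baseline):
        touches = []
        for r in range(ROWS):
            for c in range(COLS):
                delta = read_adc(r, c) - baseline[r][c]
                if delta > TOUCH_THRESHOLD:
                    touches.append((r, c, delta))  # position plus pressure estimate
        return touches

    baseline = [[read_adc(r, c) for c in range(COLS)] for r in range(ROWS)]
    print(scan(baseline))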

Demonstrating Gamepad with Programmable Haptic Texture Analog Buttons

We demonstrate a haptic feedback method that generates multiple virtual textures on the analog buttons of a gamepad. The method uses haptic illusions evoked by delivering appropriate haptic cues with respect to the analog button's movement, changing the perceived physical properties of the button. Two types of analog buttons on the gamepad, the joystick and the trigger button, are augmented with localized haptic feedback. We implemented two virtual textures for each type of analog button, and these textures can be programmatically controlled to reflect dynamic game situations. We also demonstrate a two-player shooter game to show that the dynamic texture representation of the customized gamepad can enrich the game experience.
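
A common way to render such a texture is to fire a short vibrotactile pulse each time the analog button crosses a virtual ridge, so the perceived texture follows the button's movement. The sketch below illustrates this idea only; the ridge spacing, pulse parameters, and actuator interface are assumptions.

    # Illustrative grating-texture renderer for an analog trigger. Ridge
    # spacing, pulse parameters and pulse() are assumptions.
    RIDGE_SPACING = 0.05        # ridges every 5% of trigger travel (0.0-1.0)

    def pulse(amplitude: float, duration_ms: int) -> None:
        print(f"haptic pulse a={amplitude} t={duration_ms}ms")  # stand-in for actuator driver

    class TriggerTexture:
        def __init__(self, spacing: float = RIDGE_SPACING):
            self.spacing = spacing
            self.last_ridge = 0

        def update(self, position: float) -> None:
            # Emit one pulse per ridge crossed since the last update, so
            # faster presses feel like faster "clicks" over the texture.
            ridge = int(position / self.spacing)
            for _ in range(abs(ridge - self.last_ridge)):
                pulse(amplitude=0.6, duration_ms=8)
            self.last_ridge = ridge

    texture = TriggerTexture()
    for pos in (0.02, 0.11, 0.26):   # simulated trigger positions per frame
        texture.update(pos)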

Knobology 2.0: Giving Shape to the Haptic Force Feedback of Interactive Knobs

We present six rotary knobs, each with a distinct shape, that provide haptic force feedback on rotation. The knob shapes were evaluated in relation to twelve haptic feedback stimuli. The stimuli were designed as combinations of the most relevant perceptual parameters of force feedback: acceleration, friction, detent amplitude and detent spacing. The results indicate that there is a relationship between the shape of a knob and its haptic feedback. The perceived functionality of a knob can be dynamically altered by changing its shape and haptic feedback. This work serves as a basis for the design of dynamic interface controls that can adapt their shape and haptic feel to the content being controlled. In our demonstration, we show the six distinct knob shapes with the different haptic feedback stimuli. Attendees can experience the interaction with the different knob shapes in relation to the stimuli and can design stimuli with a graphical editor.
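
Reading the listed parameters as terms of a simple haptic rendering model, the torque applied to the knob can be sketched as a detent term (amplitude and spacing) combined with friction and acceleration-dependent terms; the model and constants below are assumptions, not the stimuli used in the study.

    # Illustrative torque model for a force-feedback knob combining detents,
    # friction and an acceleration-dependent term. All constants are assumptions.
    import math

    def knob_torque(angle, velocity, acceleration,
                    detent_amplitude=0.02,           # N*m
                    detent_spacing=math.radians(15),
                    friction=0.005,                  # N*m per rad/s
                    inertia=0.001):                  # N*m per rad/s^2
        detent = -detent_amplitude * math.sin(2 * math.pi * angle / detent_spacing)
        return detent - friction * velocity - inertia * acceleration

    # Example: torque rendered halfway between two detents while turning slowly.
    print(knob_torque(angle=math.radians(7.5), velocity=1.0, acceleration=0.0))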

Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials

We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.
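
As a simple illustration of repositioning mechanical hands with micro-stepper motors, the sketch below converts a target time into step positions for the hour and minute hands; the steps-per-revolution figure and motor interface are hypothetical.

    # Illustrative conversion of a time of day into stepper positions for the
    # hour and minute hands. The 360 steps/revolution figure is an assumption.
    STEPS_PER_REV = 360

    def hand_steps(hour: int, minute: int):
        minute_steps = round(minute / 60 * STEPS_PER_REV) % STEPS_PER_REV
        # The hour hand advances continuously as the minutes pass.
        hour_steps = round(((hour % 12) + minute / 60) / 12 * STEPS_PER_REV) % STEPS_PER_REV
        return hour_steps, minute_steps

    def move_hand(which: str, steps: int) -> None:
        print(f"{which} hand -> step {steps}")   # stand-in for the motor driver

    h, m = hand_steps(10, 8)
    move_hand("hour", h)
    move_hand("minute", m)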

I/O Braid: Scalable Touch-Sensitive Lighted Cords Using Spiraling, Repeating Sensing Textiles and Fiber Optics

We introduce I/O Braid, an interactive textile cord with embedded sensing and visual feedback. I/O Braid senses proximity, touch, and twist through a spiraling, repeating braiding topology of touch matrices. This sensing topology is uniquely scalable, requiring only a few sensing lines to cover the whole length of a cord. The same topology allows us to embed fiber optic strands to integrate co-located visual feedback. We provide an overview of the enabling braiding techniques, design considerations, and approaches to gesture detection. These allow us to derive a set of interaction techniques, which we demonstrate with different form factors and capabilities. Our applications illustrate how I/O Braid can invisibly augment everyday objects, such as touch-sensitive headphones and interactive drawstrings on garments, while enabling discoverability and feedback through embedded light sources.
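
Because the braided matrix repeats along the cord, a gesture classifier only needs the few sensing lines that tile it. The sketch below shows a crude touch/grab/twist heuristic over assumed capacitance channels; the channel count, thresholds, and rules are illustrative, not the I/O Braid implementation.

    # Crude illustration of gesture detection over a small set of repeating
    # capacitive channels. Channel count, thresholds and rules are assumptions.
    def classify(samples):
        # samples: list of per-frame channel readings (here, 4 channels/frame).
        active = [sum(1 for v in frame if v > 0.5) for frame in samples]
        order = [max(range(len(frame)), key=frame.__getitem__) for frame in samples]
        if all(a == 0 for a in active):
            return "none"
        if order == sorted(order) and len(set(order)) > 2:
            return "twist"   # activation sweeps across the channels in order
        if max(active) >= 3:
            return "grab"    # many channels active at once
        return "touch"

    frames = [[0.9, 0.1, 0.0, 0.0],
              [0.2, 0.8, 0.1, 0.0],
              [0.0, 0.2, 0.9, 0.1],
              [0.0, 0.0, 0.3, 0.9]]
    print(classify(frames))  # -> "twist"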

SESSION: Doctoral Consortium

Artistic Vision: Providing Contextual Guidance for Capture-Time Decisions

With the increased popularity of cameras, more and more people are interested in learning photography. People are willing to invest in expensive cameras as a medium for their artistic expression, but few have access to in-person classes. Inspired by critique sessions common in in-person art practice classes, we propose design principles for creative learning. My dissertation research focuses on designing new interfaces and interactions that provide contextual in-camera feedback to aid users in learning visual elements of photography. We interactively visualize results of image processing algorithms as additional information for the user to make more informed and intentional decisions during capture. In this paper, we describe our design principles, and apply these principles in the design of two guided photography interfaces: one to explore lighting options for a portrait, and one to refine contents and composition of a photo.

The Right Content at the Right Time: Contextual Examples for Just-in-time Creative Learning

People often run into barriers when doing creative tasks with software because it is difficult to translate goals into concrete actions. While expert-made tutorials, examples, and documentation abound online, finding the most relevant content and adapting it to one's own situation and task is a challenge. My research introduces techniques for exposing relevant examples to novices in the context of their own workflows. These techniques are embodied in three systems. The first, RePlay, helps people find solutions when stuck by automatically locating relevant moments from expert-made videos. The second, DiscoverySpace, helps novices get started by mining and recommending expert-made software macros. The third, CritiqueKit, helps novices improve their work by providing ambient guidance and recommendations. Preliminary experiments with RePlay suggest that contextual video clips help people complete targeted tasks. Controlled experiments with DiscoverySpace and CritiqueKit demonstrate that software macros prevent novices from losing confidence, and ambient guidance improves novice output. My research illustrates the power of user communities to support creative learning.

Crowd-AI Systems for Non-Visual Information Access in the Real World

The world is full of information, interfaces, and environments that are inaccessible to blind people. When navigating indoors, blind people are often unaware of key visual information, such as posters, signs, and exit doors. When accessing specific interfaces, blind people cannot do so independently without first learning the layout and labeling the interface with sighted assistance. My work investigates interactive systems that integrate computer vision, on-demand crowdsourcing, and wearables to amplify the abilities of blind people, offering solutions for real-time environment and interface navigation. My work provides more options for blind people to access information and increases their freedom in navigating the world.

Designing Inherent Interactions on Wearable Devices

Wearable devices are becoming important personal computing devices and have shown promising applications in multiple domains. However, designing interactions on smart wearables remains challenging, as their miniature form factors limit both input and output space. My thesis research proposes a new paradigm of Inherent Interaction on smart wearables, which seeks interaction opportunities in users' daily activities. This helps bridge the gap between novel wearable interactions and the real-life experiences shared among users. This report introduces the concept of Inherent Interaction along with my previous and current explorations in this category.

Fostering Design Process of Shape-Changing Interfaces

Shape-changing interfaces match forms and haptics with functions and bring affordances to devices. I believe that shape-changing interfaces will be increasingly available to end-users in the future. To increase acceptance of shape-changing interfaces by end-users, we need to provide designers with design criteria and a framework closely grounded in their current skills and needs. We also need to provide them with prototyping tools that enable quick assessment of ideas in the physical world. In this paper, I introduce the three threads of my Ph.D. research toward providing these design tools. First, I advance existing shape-changing interface taxonomies to broaden the design vocabulary and systematize a design framework based on the classification of everyday objects. Second, I conduct a study with end-users to derive interaction techniques and design guidelines for shape-changing interfaces from their current practice. Lastly, I develop a physical prototyping tool for shape-changing interfaces, based on well-known Lego-like bricks, to shorten prototyping iterations.

Designing Interactive Behaviours Beyond the Desktop

As interactions move beyond the desktop, interactive behaviours (the effects of actions as they happen, or once they have happened) are becoming increasingly complex. This complexity stems from the variety of forms that objects might take, the different inputs and sensors capturing information, and the ability to create nuanced responses to those inputs. Current interaction design tools do not support much of this rich behaviour authoring. In my work I create prototyping tools that examine ways in which designers can create interactive behaviours. Thus far, I have created two prototyping tools, Pineal and Astral, which examine how to create physical forms based on a smart object's behaviour and how to reuse existing desktop infrastructures to author different kinds of interactive behaviour. I also contribute conceptual elements, such as how to create smart objects using mobile devices, their sensors, and their outputs instead of custom electronic circuits, as well as evaluation strategies used in HCI toolkit research, which directly inform my approach to evaluating my tools.

Comfortable and Efficient Travel Techniques in VR

Locomotion, the most basic interaction in Virtual Environments (VEs), enables users to move around the virtual world. Locomotion in Virtual Reality (VR) is a problem that has not been solved completely, since existing techniques each have a specific set of requirements and limitations. In addition, uncertainty about the impact that virtual cues have on users' perception complicates the development of better locomotion interfaces. A broadly applicable locomotion technique that is easy to use and addresses the issues of presence, cybersickness, and fatigue has yet to be developed. Though optical flow and vestibular cues are dominant in navigation, other cues, such as audio, arm feedback, and wind, also play a role. The proposed research aims to evaluate and improve upon a set of locomotion techniques for different modes of locomotion in virtual scenarios, as well as the transitions between them. The outcome measures of the evaluations of the different scenarios are usefulness for spatial orientation, presence, fatigue, cybersickness, and user preference. The envisioned contribution of my thesis is research towards the design of a locomotion technique that is easy to use and addresses the shortcomings of current implementations.

Enabling Single-Handed Interaction in Mobile and Wearable Computing

Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are diverging to accommodate both intimate and practical uses: some mobile device screens are becoming larger to accommodate new experiences (e.g., phablets, tablets, eReaders), whereas screens on wearable devices are becoming smaller so they can fit in more places (e.g., smartwatches, wrist-bands and eye-wear). However, these trends make it difficult to use such devices with only one hand due to their placement, limited thumb reach, and the fat-finger problem. This is especially true as there are many occasions when a user's other hand is occupied (encumbered) or otherwise unavailable. This thesis work explores, creates, and studies novel interaction techniques that enable effective single-handed usage of mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.