SIGGRAPH '21: ACM SIGGRAPH 2021 Appy Hour

3D Meister Planner: The simplest floor planner worldwide

3D Meister Planner is the simplest real-time floor planner worldwide. It is a Progressive Web App in which users can define and customize their room in 2D and 3D. Users can position and select which elements are present and choose from the catalogue of products that the client offers, including wall and floor designs. Objects can be visualized independently using the 3D Product Viewer, and in the room using the 3D Room Viewer. 3D Meister Planner is based on the WebGL API, which enables fast, hardware-accelerated, real-time computer graphics on mobile devices and home computers.

Colors - One: Perceptually Based Color Photo Editing

We introduce ‘Colors - One’, a perceptually based color photo editing app built for the Apple ecosystem. The app’s core algorithm augments standard Poisson image editing methods to allow the prediction and editing of perceived image color, rather than pixel color. Users can isolate 16 unique hues and edit the contrast color of each hue individually. The resulting photo edits are striking and provide new insights into the nature of perceptual color representations.
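
As background, gradient-domain (Poisson) editing reconstructs an image from a modified gradient field g by solving ∇²f = div(g). The sketch below shows the classic scalar solve via plain Jacobi iteration; it is a minimal illustration with illustrative names and does not reproduce the app's perceptual augmentation of this machinery.

```swift
// Minimal sketch of the Poisson solve underlying gradient-domain editing:
// recover f from an edited gradient field g by solving ∇²f = div(g) with
// fixed (Dirichlet) boundary values, using plain Jacobi iteration.
// Illustrative only; not the app's perceptual algorithm.
func solvePoisson(width: Int, height: Int,
                  divergence: [Float],   // div(g), row-major, width*height values
                  initial: [Float],      // base image; border pixels stay fixed
                  iterations: Int = 500) -> [Float] {
    var f = initial
    var next = f
    for _ in 0..<iterations {
        for y in 1..<(height - 1) {
            for x in 1..<(width - 1) {
                let i = y * width + x
                // Discrete Laplacian: f[left]+f[right]+f[up]+f[down]-4f[i] = div
                next[i] = 0.25 * (f[i - 1] + f[i + 1]
                                + f[i - width] + f[i + width]
                                - divergence[i])
            }
        }
        swap(&f, &next)
    }
    return f
}
```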

Forward Selfies

Taking selfies is a common practice for smartphone users. Simultaneously capturing oneself and the desired background is not a trivial task, because it is often not possible to get a good view of both. Moreover, users often lose awareness of their surroundings, and taking selfies has been shown to lead to serious injuries. To ease the process of capturing selfies and to make it safer, this work proposes forward selfies as a simple yet effective concept that addresses both the risks and the challenges. Forward selfies seamlessly combine images from the front-facing and the rear-facing smartphone cameras. We propose a mobile app that builds on this concept and implements the selfie synthesis in a post-processing image composition stage. This lets us take advantage of the commonly more advanced back-camera hardware, i.e., higher image resolutions, larger fields of view, and different perspectives. Finally, we leverage built-in camera optimizations for independently (de-)focusing objects at different distances, such as persons and backgrounds. We conclude that the concept of forward selfies can effectively address and solve certain challenges of capturing selfies, which we demonstrate with a simple app user interface.
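
A minimal sketch of how such a post-processing composition stage could look on iOS, using Vision person segmentation (iOS 15+) and a Core Image mask blend; the function and this particular pipeline are assumptions, not the authors' published implementation.

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical composition step: segment the person in the front-camera
// shot and blend it over the rear-camera image through the mask.
func composeForwardSelfie(front: CIImage, rear: CIImage) throws -> CIImage? {
    // 1. Person segmentation mask from the front-facing shot.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    try VNImageRequestHandler(ciImage: front).perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // 2. Scale the mask into the rear image's coordinate space.
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    mask = mask.transformed(by: .init(scaleX: rear.extent.width / mask.extent.width,
                                      y: rear.extent.height / mask.extent.height))

    // 3. Blend: person (front) over background (rear) through the mask.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = front.transformed(
        by: .init(scaleX: rear.extent.width / front.extent.width,
                  y: rear.extent.height / front.extent.height))
    blend.backgroundImage = rear
    blend.maskImage = mask
    return blend.outputImage
}
```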

From A-Pose to AR-Pose: Animating Characters in Mobile AR

We present AR-Pose, a mobile AR app to generate keyframe-based animations of rigged humanoid characters. The smartphone’s positional and rotational degrees of freedom are used for two purposes: (i) as a 3D cursor to interact with inverse kinematics (IK) controllers placed on or near the character’s joints; and (ii) as a virtual camera that enables users to freely move around the character. Through the touch screen, users can activate/deactivate actions such as selecting an IK controller or pressing animation control buttons placed in a hovering 3D panel. By successively re-positioning the IK controllers and saving the resulting poses as keyframes, users can build up a 3D animation.
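
The "phone as 3D cursor" mechanic can be sketched with standard ARKit and SceneKit APIs, as below; the class structure and fixed reach distance are assumptions, while SCNIKConstraint and the ARKit camera transform are real API. AR-Pose's actual implementation may differ.

```swift
import ARKit
import SceneKit

// Each frame, project a point a fixed distance in front of the device and
// use it as the target of a SceneKit inverse-kinematics constraint on the
// selected joint chain.
final class IKCursor {
    let ikConstraint: SCNIKConstraint
    var isActive = false                      // toggled by a touch on the screen

    init(chainRoot: SCNNode, effector: SCNNode) {
        ikConstraint = SCNIKConstraint.inverseKinematicsConstraint(chainRootNode: chainRoot)
        effector.constraints = [ikConstraint]
    }

    // Called from session(_:didUpdate:) with the current ARFrame.
    func update(with frame: ARFrame, reach: Float = 0.3) {
        guard isActive else { return }
        let cam = frame.camera.transform      // simd_float4x4 device pose
        // The camera looks down -Z; place the cursor `reach` meters ahead.
        let forward = -simd_make_float3(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)
        let position = simd_make_float3(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)
                     + reach * forward
        ikConstraint.targetPosition = SCNVector3(position.x, position.y, position.z)
    }
}
```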

HoloVista: Designing for Immersion: Using a mobile app to simulate an alternate reality

HoloVista is a mixed reality social media simulator game reminiscent of a near-future Instagram. Players experience a week in our protagonist Carmen’s life through the places she goes, the objects she photographs, the thoughts she shares on social media, and her chats with friends. Every time Carmen travels to a new location in the story, players access a virtual camera, which allows them to view the game’s environments in full 360°, finding and photographing significant objects. By taking photos and solving puzzles, players learn a secret that Carmen has been running from since childhood.

Our goal was to create an immersive mobile XR experience with no peripherals. To achieve this, we relied on interaction design and narrative. Leveraging the phone’s gyroscope and accelerometer, we gave players a sense of presence within a fictional space by letting them use the same motions and gestures to explore it as they would when examining a real-life scene through the game’s camera. In this way, HoloVista engages players’ sense of proprioception, making people feel as though they are physically present in the world we have built. In this document, we explore several of our techniques for player immersion.
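
A minimal sketch of such gyroscope-driven viewing, assuming a SceneKit camera inside a panorama sphere; CMMotionManager is real API, while the class and the direct attitude-to-camera mapping (which usually needs per-app axis remapping) are illustrative.

```swift
import CoreMotion
import SceneKit

// Drive a virtual camera from device attitude so players look around the
// 360° scene by physically turning the phone.
final class MotionCameraController {
    private let motion = CMMotionManager()

    func start(driving cameraNode: SCNNode) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        // Z-vertical reference frame keeps "up" stable while the player turns.
        motion.startDeviceMotionUpdates(using: .xArbitraryCorrectedZVertical,
                                        to: .main) { data, _ in
            guard let q = data?.attitude.quaternion else { return }
            // Map device attitude onto the camera (axes may need remapping
            // depending on interface orientation and scene conventions).
            cameraNode.orientation = SCNQuaternion(Float(q.x), Float(q.y),
                                                   Float(q.z), Float(q.w))
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```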

MotionViz: Artistic Visualization of Human Motion on Mobile Devices

We present MotionViz, an interactive iOS mobile app that enables users to amplify motion and dynamics in videos. MotionViz implements novel augmented reality and expressive rendering techniques in an end-to-end processing pipeline: multi-dimensional video data is captured, analyzed, and processed to render animated graphical elements that help express figures and actions. Through an easy-to-use graphical user interface, users can choose from a curated list of artistic motion visualization effects, including the overlay of animated silhouettes, halos, and contour lines. MotionViz is based on Apple’s LiDAR technology, accelerated image-processing APIs, and the dedicated Neural Engine for real-time on-device processing.
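
On the capture side, a configuration like the following sketch would deliver the LiDAR depth and per-pixel figure masks that silhouette and halo effects need; it is an assumption about the inputs, not MotionViz's actual pipeline, and the rendering effects themselves are not reproduced here.

```swift
import ARKit

// Configure ARKit to deliver LiDAR scene depth plus a per-frame person
// segmentation buffer on supported devices.
func makeCaptureConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    var semantics: ARConfiguration.FrameSemantics = []
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        semantics.insert(.sceneDepth)          // LiDAR depth map per frame
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
        semantics.insert(.personSegmentation)  // per-pixel figure mask
    }
    config.frameSemantics = semantics
    return config
}

// In session(_:didUpdate:):
//   frame.sceneDepth?.depthMap   -> CVPixelBuffer of metric depth
//   frame.segmentationBuffer     -> CVPixelBuffer person mask
```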

Museum Alive with David Attenborough: The Challenges of Real-Time: Techniques for converting high-quality, television-ready assets for the real-time mobile AR app Museum Alive with David Attenborough

StyleTune: Interactive Style Transfer Enhancement on Mobile Devices

We present StyleTune, a mobile app for interactive style transfer enhancement that enables global and spatial control over stroke elements and can generate high-fidelity outputs. The app uses adjustable neural style transfer (NST) networks to enable art direction of stroke size and orientation in the output image. The implemented approach enables continuous and seamless edits through a unified stroke-size representation in the feature space of the style transfer network. StyleTune introduces a three-stage user interface that lets users first explore global stroke parametrizations for a chosen NST. They can then interactively retouch the stroke size and orientation locally using brush metaphors. Finally, high-resolution outputs of 20 megapixels and more can be obtained using a patch-based upsampling and local detail-transfer approach that transfers small-scale details such as paint bristles and canvas structure. The app uses Apple’s CoreML and Metal APIs for efficient on-device processing.
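
One simple lever behind stroke-size control can be sketched as follows: running a style network at a reduced working resolution makes each stroke span more image content, and upsampling restores the original size. `StyleNet` and its `prediction(image:)` interface are hypothetical, and StyleTune's feature-space stroke representation is considerably more capable than this resolution trick.

```swift
import CoreML
import CoreImage

// Run a (hypothetical) Core ML style-transfer model at a lower working
// resolution to enlarge apparent strokes, then upsample the result.
func stylize(_ input: CIImage, strokeScale: CGFloat,
             context: CIContext = CIContext()) throws -> CIImage? {
    // 1. Downscale: each network "stroke" then spans more image content.
    let working = input.transformed(by: .init(scaleX: 1 / strokeScale,
                                              y: 1 / strokeScale))
    let w = Int(working.extent.width), h = Int(working.extent.height)
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(nil, w, h, kCVPixelFormatType_32BGRA, nil, &buffer)
    guard let pixelBuffer = buffer else { return nil }
    context.render(working, to: pixelBuffer)

    // 2. Run the hypothetical model (assumed to accept flexible input sizes).
    let model = try StyleNet(configuration: MLModelConfiguration())
    let output = try model.prediction(image: pixelBuffer)

    // 3. Upsample the stylized result back to the original resolution.
    return CIImage(cvPixelBuffer: output.stylized)
        .transformed(by: .init(scaleX: strokeScale, y: strokeScale))
}
```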

Take K-12 Students for Global Field Trips by Interactive Droneography

We build an interactive droneography system that emulates in-person field trips, letting students and educators see, learn about, and interact with remote places by flying drones from home. To keep students from missing directions or losing attention, the system also includes a neural-network-based visual saliency detector and object recognizer.
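
For illustration, the saliency and recognition stage could be prototyped with Apple's Vision framework as sketched below; this stack is an assumption standing in for the system's own neural networks, and the typed `results` accessors assume a recent (iOS 15+) SDK.

```swift
import Vision

// Find attention-grabbing regions and a top scene/object label in a
// drone video frame, e.g. to highlight where students should look.
func analyzeFrame(_ frame: CGImage) throws -> (salientRegions: [CGRect],
                                               topLabel: String?) {
    let saliency = VNGenerateAttentionBasedSaliencyImageRequest()
    let classify = VNClassifyImageRequest()
    try VNImageRequestHandler(cgImage: frame).perform([saliency, classify])

    // Normalized bounding boxes of the most salient regions.
    let regions = saliency.results?.first?
        .salientObjects?.map(\.boundingBox) ?? []

    // Highest-confidence classification label for the frame.
    let label = classify.results?.first?.identifier
    return (regions, label)
}
```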