A sense of rhythm is essential for playing musical instruments, yet many beginners have difficulty with it. We have previously proposed "Stimulated Percussions," a musical instrument performance system using electrical muscle stimulation (EMS). In this study, we apply it to the learning of rhythm. Through the muscle movements triggered by EMS, users learn which arm or leg to move and at what timing. In addition to small percussion instruments such as castanets, users can play drum rhythm patterns that require the simultaneous movement of their limbs.
Since its release, "The Design Engine" has been played by groups of students, teachers, and individuals looking to spark self-guided training. "The Design Engine" is a direct response to educators' requests for better classroom tools for inspiration and 3D printing. By prompting participants to create their own original, imaginative works, instead of using pre-selected examples, teachers can keep their students better motivated through the process of mastering desktop 3D printing. We are hosting a brand-new SIGGRAPH edition of "The Design Engine," a constantly evolving series of challenges hosted within the Studio. Participants of all backgrounds can join for a short startup round, or stick around to design and develop their projects using the tools available in the SIGGRAPH Studio Workshop.
In this study, we propose a method to create a spring glass dip pen using a 3D printer and to reproduce different writing feels. Several studies have explored pens that change the feel of writing. For example, EV-Pen [Wang et al. 2016] and the haptic pen [Lee et al. 2004] change the feel of pen writing by using vibration. In contrast, our proposed method reproduces a tactile sensation of softness without using vibration.
Creatives in animated and live-action film productions have been exploring new modalities to visually design filmic sequences before realizing them in studios, through techniques such as hand-drawn storyboards, physical mockups, or, more recently, virtual 3D environments. A central issue in using virtual 3D environments is the complexity of content creation tools for non-technical film creatives. To overcome this issue, we present One Man Movie, a VR authoring system that enables the crafting of filmic sequences with no prior knowledge of 3D animation. The system is designed to reflect the traditional creative process in film pre-production through stages such as (i) scene layout, (ii) animation of characters, (iii) placement and control of cameras, and (iv) montage of the filmic sequence, while enabling a fully novel and seamless back-and-forth between all stages of the process thanks to real-time engines. This research tool has been designed and evaluated with students and experts from film schools, and should therefore raise significant interest among SIGGRAPH participants.
Projected augmented reality, also called projection mapping or video mapping, is a form of augmented reality that uses projected light to directly augment 3D surfaces, as opposed to using pass-through screens or headsets. The value of projected AR is its ability to add a layer of digital content directly onto physical objects or environments in a way that can be instantaneously viewed by multiple people, unencumbered by a screen or additional setup.
In our hands-on demonstration, we show several objects whose functionality is defined by their internal microstructure. Such metamaterial machines can (1) act as mechanisms based on their microstructures, (2) perform simple mechanical computation, or (3) change their outer surface to interact with their environment. They are 3D printed as a single piece, and we support their creation by providing interactive software tools.
In this research, we propose a system that makes paper through an additive manufacturing process, using a dispenser mounted on an XY plotter. With our system, graphic designers can design and output the paper itself, which is difficult in existing paper production processes. We designed and implemented a machine for fabricating paper and created several example outputs. At SIGGRAPH, we will hold a workshop in which participants design their own original paper using our machines.
Raymarching signed distance fields is a technique used by graphics experts and demoscene enthusiasts to construct scenes with features that are unusual in traditional polygonal workflows: blending shapes, kaleidoscopic patterns, reflections, and infinite fractal detail all become possible, in compact representations that live mostly on the graphics card. Until now, these scenes have had to be hand-written in shaders, but the Raymarching Toolkit for Unity is an extension that combines Unity's highly visual scene editor with the power of raymarched visuals by automatically generating the raymarching shader for the scene an artist is creating, live.
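The core marching loop behind such shaders, often called sphere tracing, can be sketched in a few lines: step along each ray by exactly the distance the field reports, which by the definition of a signed distance field can never overshoot the nearest surface. A minimal sketch in Python, assuming a single-sphere SDF; the function names and camera setup are illustrative, not part of the Raymarching Toolkit itself:

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere (negative inside)."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """Sphere tracing: march along the ray, stepping by the SDF value.
    Returns the hit distance along the ray, or None if the ray escapes."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf_sphere(p)
        if d < eps:       # close enough to the surface: report a hit
            return t
        t += d            # safe step: no surface is closer than d
        if t > max_dist:  # ray has left the scene
            break
    return None

# A ray fired straight at the sphere (center z=3, radius 1) hits at t = 2.
hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

In a real shader the same loop runs per pixel on the GPU, with the SDF composed from many primitives via min/max and domain-warping operations.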
Describing the motions of imaginary, original creatures is an essential part of animation and computer games. One approach to generating such motions is to find an optimal motion for approaching a goal given the creature's body and motor skills. Currently, researchers employ deep reinforcement learning (DeepRL) to find such optimal motions. Some end-to-end DeepRL approaches learn a policy function that outputs a target pose for each joint according to the environment. In our study, we employ a hierarchical approach with a separate DeepRL decision maker and a simple exploration-based sequence maker, together with an action token through which the two layers communicate. By optimizing these two functions independently, we achieve a light, fast-learning system that runs on mobile devices. In addition, we propose a technique to learn the policy faster with the help of a heuristic rule: by treating the heuristic rule as an additional action token, we can incorporate it naturally via Q-learning. The experimental results show that creatures achieve better performance when using both heuristics and DeepRL than when using either independently.
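The idea of exposing a heuristic rule as one more action token that Q-learning can freely select can be sketched on a toy problem. This is a minimal sketch under illustrative assumptions: a 1-D gridworld, tabular Q-learning, and a hand-written "step toward the goal" rule stand in for the paper's creature bodies and learned decision maker; none of these names come from the original system:

```python
import random

random.seed(0)

N_STATES = 5         # states 0..4; the goal is state 4
GOAL = 4
ACTIONS = [0, 1, 2]  # 0: step left, 1: step right, 2: heuristic token

def heuristic(state):
    """Hand-written rule: always step toward the goal."""
    return min(state + 1, GOAL)

def step(state, action):
    if action == 0:
        nxt = max(state - 1, 0)
    elif action == 1:
        nxt = min(state + 1, GOAL)
    else:            # the heuristic is just another selectable action
        nxt = heuristic(state)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                      # epsilon-greedy training episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy rollout after learning: the agent reaches the goal directly.
s, steps = 0, 0
while s != GOAL and steps < 20:
    s, _ = step(s, max(ACTIONS, key=lambda act: Q[s][act]))
    steps += 1
```

Because the heuristic occupies an ordinary slot in the Q-table, the learner needs no special machinery to decide per state whether the rule or a primitive action is more valuable.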