• Propose the concept of Digital Heritage+.
• Use interactive documentary (the Batik project as an archetype), in which interaction occurs between people and intangible cultural heritage (ICH), to revive ICH and encourage it to become ingrained in people's everyday lives.
• Bridge the gap between the research community and the general public.
We use our hands every day: to grasp a cup of coffee, type on a keyboard, or signal that we are about to say something important. We use our hands to interact with our environment and to help us communicate with each other, without thinking about it. Wouldn't it be great to be able to do the same in virtual reality? However, accurate hand motions are not trivial to capture. In this course, we present the current state of the art for virtual hands. Starting with current examples of controlling and depicting hands in virtual reality (VR), we dive into the latest methods and technologies for capturing hand motions. As hands cannot currently be captured in every situation, and as the physical constraints that keep our hands from passing through objects are typically absent in VR, we present research on how to synthesize hand motions and simulate grasping. Finally, we provide an overview of how virtual hands are perceived, resulting in practical tips on how to represent and handle virtual hands.
Our goals are (a) to present a broad state of the art of the current usage of hands in VR, (b) to provide more in-depth knowledge about the functioning of current hand motion tracking and hand motion synthesis methods, (c) to give insights on our perception of hand motions in VR and how to use those insights when developing new applications, and finally (d) to identify gaps in knowledge that might be investigated next. While the focus of this course is on VR, many parts also apply to augmented reality, mixed reality, and character animation in general, and some content originates from these areas.
Vector spaces are denoted R, R^n, R^{m×n}, V, W.
Matrices are denoted by upper-case, italic, boldface letters: A ∈ R^{m×n}.
Vectors are column vectors, denoted by boldface, lower-case letters: x ∈ R^{n×1}.
1_n ∈ R^n is the n × 1 vector of all ones.
I_n is the n × n identity matrix.
e_i is the unit vector whose i-th element is 1 and all other elements are 0.
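As a quick illustration, the notation above maps directly onto NumPy (an illustrative sketch; note that NumPy indexing is 0-based, whereas e_i above is 1-based):

```python
import numpy as np

n = 4
ones = np.ones((n, 1))            # 1_n: the n x 1 vector of all ones
I = np.eye(n)                     # I_n: the n x n identity matrix
e = lambda i: np.eye(n)[:, [i]]   # e_i as a column vector (0-indexed here)

# A sample column vector x in R^{n x 1}
x = np.arange(n, dtype=float).reshape(n, 1)

# I_n x = x, 1_n^T x sums the entries of x, and e_i^T x selects entry i
Ix = I @ x
total = (ones.T @ x).item()
entry2 = (e(2).T @ x).item()
```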
Overview of real-world polarization effects
Polarization in nature
Ray Tracer: Normal vs. Polarized
Overview of Jones Calculus
Overview of Mueller Calculus
Building the Ray Tracer
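To give a taste of the Mueller calculus listed above, the following sketch (illustrative only, not taken from the ray tracer itself) pushes a Stokes vector through ideal linear polarizers, reproducing the classic crossed-polarizer extinction:

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1, c,     s,     0],
        [c, c * c, c * s, 0],
        [s, c * s, s * s, 0],
        [0, 0,     0,     0],
    ])

# Unpolarized light as a Stokes vector (I, Q, U, V)
S = np.array([1.0, 0.0, 0.0, 0.0])

# Through a horizontal polarizer: half the intensity survives, fully polarized
S1 = linear_polarizer(0.0) @ S

# A second, crossed polarizer (90 degrees) extinguishes the beam entirely
S2 = linear_polarizer(np.pi / 2) @ S1
```

In a polarized ray tracer, each ray carries such a Stokes vector, and every surface interaction multiplies it by the corresponding Mueller matrix.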
Physics-based animation has emerged as a core area of computer graphics finding widespread application in the film and video game industries as well as in areas such as virtual surgery, virtual reality, and training simulations. This course introduces students and practitioners to fundamental concepts in physics-based animation, placing an emphasis on breadth of coverage and providing a foundation for pursuing more advanced topics and current research in the area. The course focuses on imparting practical knowledge and intuitive understanding rather than providing detailed derivations of the underlying mathematics. The course is suitable for someone with no background in physics-based animation---the only prerequisites are basic calculus, linear algebra, and introductory physics.
We begin with a simple, complete example of a mass-spring system, introducing the principles behind physics-based animation: mathematical modeling and numerical integration. From there, we systematically present the mathematical models commonly used in physics-based animation, beginning with Newton's laws of motion and conservation of mass, momentum, and energy. We then describe the underlying physical and mathematical models for animating rigid bodies, soft bodies, and fluids. Next, we show how these continuous models are discretized in space and time, covering Lagrangian and Eulerian formulations, spatial discretization and interpolation, and explicit and implicit time integration. In the final section, we discuss commonly used constraint formulations and solution methods.
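As a flavor of the mass-spring example that opens the course, here is a minimal sketch of a single undamped mass-spring system integrated with symplectic Euler (one possible explicit scheme; the course may use a different integrator, and all parameter values here are illustrative):

```python
def simulate(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=6283):
    """Integrate m*x'' = -k*x with symplectic (semi-implicit) Euler."""
    x, v = x0, v0
    for _ in range(steps):
        f = -k * x        # Hooke's law spring force
        v += dt * f / m   # update velocity first...
        x += dt * v       # ...then position, using the *new* velocity
    return x, v

# With m = k = 1 the oscillation period is 2*pi, so after ~6283 steps
# of dt = 1e-3 the mass should be back near its starting point x = 1.
x, v = simulate()
```

Updating velocity before position is what makes the scheme symplectic: unlike plain explicit Euler, its energy error stays bounded, so the oscillation neither blows up nor decays over long simulations.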
A central goal of computer graphics is to provide tools for designing and simulating real or imagined artifacts. An understanding of functionality is important in enabling such modeling tools. Given that the majority of man-made artifacts are designed to serve a certain function, the functionality of objects is often reflected by their geometry, the way that they are organized in an environment, and their interaction with other objects or agents. Thus, in recent years, a variety of methods in shape analysis have been developed to extract functional information about objects and scenes from these different types of cues.
In this course, we discuss recent developments in the functionality analysis of 3D shapes and scenes. We provide a summary of the state of the art in this area, including a discussion of key ideas and an organized review of the relevant literature. More specifically, we first present a general definition of functionality from which we derive criteria for classifying the body of prior work. This definition facilitates a comparative view of methods for functionality analysis. Moreover, we connect these methods to recent advances in deep learning, computer vision, and robotics. Finally, we discuss a variety of application areas, and outline current challenges and directions for future work.
In the development of novel algorithms and techniques in virtual and augmented reality (VR/AR), it is crucial to take human visual perception into account. For example, when hardware resources are a limiting factor, the limitations of the human visual system can be exploited in the creation and evaluation of new, effective techniques. Over the last decades, visual perception evaluation studies have become a vital part of the design, development, and evaluation of immersive computer graphics applications. This course introduces attendees to the basic concepts of visual perception applied to computer graphics, and it offers an overview of recent perceptual evaluation studies conducted with head-mounted displays (HMDs) in the context of VR and AR applications. During the course, we call attention to the latest published courses and surveys on visual perception applied to computer graphics and interaction techniques. Through an extensive literature search, we have identified six main areas on which recent visual perceptual evaluation studies have focused: distance perception, avatar perception, image quality, interaction, motion perception, and cybersickness. Trends, main results, and open challenges are discussed for each area, accompanied by relevant references, offering attendees a broad introduction and perspective on the topic.
1. History (Cloudpipe)
3. Volume Generation
4. Set-Dressing and Framing (Design)
5. Data Management
7. Lighting and Compositing
Building a network of healthcare professionals
Identifying current practices
Big Picture but no Specs
1. Accelerating 3D Deep Learning with PyTorch3D, arXiv 2007.08501
2. Mesh R-CNN, ICCV 2019
3. SynSin: End-to-end View Synthesis from a Single Image, CVPR 2020
4. Fast Differentiable Raycasting for Neural Rendering using Sphere-based Representations, arXiv 2004.07484
• Do machine learning algorithms have a soul?
• Could they understand everyday reality as we humans do?
• What are the consequences of their creativity?
• Can they help us understand the world better?