Dr. Jun-Yan Zhu is a pioneer in the use of modern machine learning in computer graphics. His dissertation is arguably the first to systematically attack the problem of natural image synthesis using deep neural networks. As such, his work has already had an enormous impact on the field, with several of his contributions, most notably CycleGAN, becoming widely used tools for researchers in computer graphics and beyond, as well as for visual artists.
A key open problem in data-driven image synthesis is how to ensure that the synthesized image looks realistic, i.e., lies on the manifold of natural images. In Part I of his thesis, Zhu takes a discriminative approach to a particular instance of this problem, training a classifier to estimate the realism of spliced image composites. Since it is difficult to obtain enough human-labeled training data on what looks realistic, he instead trained a classifier to distinguish real images from automatically generated composites, regardless of how realistic the composites appear. The surprising finding: the resulting classifier can actually predict how realistic a new composite would look to a human. Moreover, this realism score can be used to improve a composite's realism by iteratively updating the image via a learned transform. This work can be thought of as an early precursor to conditional Generative Adversarial Network (GAN) architectures. He also developed a similar discriminative learning approach for improving the aesthetics of portrait photographs (SIGGRAPH Asia'14).

In Part II, Zhu takes the opposite, generative approach to modeling natural images and constrains the output of a photo editing tool to lie on this manifold. He built real-time, data-driven exploration and editing interfaces based on both classic image averaging models (SIGGRAPH'14) and more recent Generative Adversarial Networks. The latter work and its associated software, iGAN, constituted the first use of GANs in a real-time application, and it contributed to the popularization of GANs in the community.

In Part III, Zhu combines the lessons learned from his earlier work to develop a novel set of image-to-image translation algorithms.
Of particular importance is the CycleGAN framework (ICCV'17), which revolutionized image-based computer graphics as a general-purpose method for transferring the visual style of one set of images onto another: translating summer into winter and horses into zebras, generating realistic photographs from computer graphics renderings, and so on. It was the first to demonstrate artistic collection style transfer (e.g., using all of Van Gogh's paintings rather than only "The Starry Night") and the translation of a painting into a photograph. In the short time since CycleGAN was published, it has already been applied to many problems far beyond computer graphics, from generating synthetic training data (computer vision), to converting MRIs into CT scans (medical imaging), to applications in NLP and speech synthesis. In addition to his dissertation work, Zhu also contributed to learning-based methods for interactive colorization (SIGGRAPH'17) and light field videography (SIGGRAPH'17).
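For readers unfamiliar with the method, the core idea of CycleGAN (as presented in the ICCV'17 paper) is to train two generators, G: X → Y and F: Y → X, each with an adversarial loss on its target domain, and to couple them with a cycle-consistency term that asks each translation to be approximately invertible:

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right]
+ \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right],
```

giving the full objective

```latex
\mathcal{L}(G, F, D_X, D_Y) =
  \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
+ \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
+ \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F).
```

It is this cycle term that removes the need for paired training examples, which is what makes the collection-level style transfer described above possible.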
Beyond several well-cited papers in top graphics and vision venues, Zhu's work has had an impact in other ways as well. His research has been repeatedly featured in the popular press, including the New Yorker, the Economist, Forbes, and Wired. Jun-Yan is also exemplary in facilitating reproducible research and making it easy for researchers and practitioners to build on his contributions. He has open-sourced many of his projects and, as a sign of his impact, has earned over 22,000 GitHub stars and 1,900 followers. Most impressively, his code has been used widely not just by researchers and developers, but also by visual artists (e.g., see #cycleGAN on Twitter).
Bio: Jun-Yan Zhu received his B.E. in Computer Science from Tsinghua University in 2012. He obtained his Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2017, supervised by Alexei Efros, after spending five years at CMU and UC Berkeley. His Ph.D. work was supported by a Facebook Fellowship. Jun-Yan is currently a postdoctoral researcher at MIT CSAIL.
photo credit: Prof. Ren Ng, UC Berkeley
Previous Award Recipients
2017 Felix Heide
2016 Eduardo S. L. Gastal