Virtual reality (VR) and augmented reality (AR) are currently two of the hottest topics in the IT industry. Many consider them the next wave in computing, with an impact comparable to the shift from desktop systems to mobile and wearable devices. Multimodal interaction offers great potential not only to make AR and VR experiences more realistic, but also to provide more powerful and efficient means of interacting with virtual and augmented worlds. The aim of this workshop is to explore these opportunities by inviting contributions on all kinds of work related to interaction or multimodality in the context of VR and AR computing.
We invite researchers and visionaries to submit their latest results on any aspect relevant to multimodality and/or interaction in VR and AR. Contributions of a more fundamental nature (e.g., psychophysical studies and empirical research on multimodality) are as welcome as more technical contributions (including use cases, best-practice demonstrations, prototype systems, etc.). Position papers and reviews of the state of the art and ongoing research are invited, too. Submissions do not necessarily have to address multiple modalities: work focusing on a single modality that goes beyond the state of the art of "purely visual" systems (e.g., papers on smell, taste, or haptics) is equally well suited. Final versions of accepted manuscripts will be published in the ACM Digital Library. Selected contributions will be invited for publication in a special issue of a suitable journal.