Description
Telepresence has the potential to bring billions of people into AR and VR. It is the next step in the evolution from telegraphy to telephony to videoconferencing. As with telephony and videoconferencing, the key attribute of success will be “authenticity”: users' trust that the signals they receive (e.g., audio for the telephone, video and audio for videoconferencing) are truly those transmitted by their friends, colleagues, or family. The challenge arises from a seeming contradiction: how do we enable authentic interactions in artificial environments? In 2019, Yaser Sheikh from Meta Reality Labs Pittsburgh gave a frontier talk introducing metric telepresence: real-time social interaction in AR/VR with avatars that look like you, move like you, and sound like you. This year, Yaser will discuss progress toward achieving metric telepresence. He will describe Meta's approach using codec avatars, neural networks that address the computer vision and computer graphics problems of transmitting and receiving photorealistic avatars. He will also introduce the large-scale systems required to train codec avatars, visually and acoustically, and the research challenges that remain in achieving metric telepresence at scale.