Frontier Talk: Metric Telepresence using Codec Avatars
Event Type
Interest Areas: Research & Education
Presentation Types: In Person
Registration Categories: Full Conference Supporter, Full Conference, Exhibitor Experience, Exhibitor Additional Experience, Experience Plus, Virtual Conference Supporter, Virtual Conference, Exhibitor Additional Full Conference, Exhibitor Full Conference
This session WILL be recorded.
Time: Thursday, 11 August 2022, 8am - 8:45am PDT
Location: West Building, Room 109-110
Description: Telepresence has the potential to bring billions of people into AR and VR. It is the next step in the evolution from telegraphy to telephony to videoconferencing. Just like telephony and videoconferencing, the key attribute of success will be "authenticity": users' trust that the signals they receive (e.g., audio for the telephone, and video/audio for videoconferencing) are truly those transmitted by their friends, colleagues, or family. The challenge arises from a seeming contradiction: how do we enable authentic interactions in artificial environments? In 2019, Yaser Sheikh of Meta Reality Labs Pittsburgh gave a frontier talk introducing metric telepresence: real-time social interaction in AR/VR with avatars that look like you, move like you, and sound like you. This year, Yaser will discuss progress toward achieving metric telepresence. He will describe Meta's approach using codec avatars, neural networks that address computer vision and computer graphics problems in the transmission and reception of photorealistic avatars. He will also introduce the large-scale systems required to train codec avatars, visually and acoustically, and the research challenges that remain to achieve metric telepresence at scale.
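The core idea behind a codec avatar, as the description outlines, is an encoder/decoder pair: the sender compresses captured signals into a compact code that is transmitted, and the receiver decodes that code to drive a photorealistic avatar. The following is a minimal toy sketch of that transmission pattern only; the linear maps, shapes, and names here are illustrative assumptions, not Meta's actual neural architecture.

```python
# Toy sketch of the codec (encoder/decoder) pattern behind codec avatars.
# Real systems use learned neural networks; here simple matrices stand in
# for the encoder and decoder so the data flow is visible end to end.
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 4096    # assumed size of a flattened headset sensor frame (toy)
LATENT_DIM = 256    # assumed size of the compact code sent over the network

def encode(frame: np.ndarray, w_enc: np.ndarray) -> np.ndarray:
    """Compress a captured frame into a latent code (stand-in for a neural encoder)."""
    return np.tanh(w_enc @ frame)

def decode(code: np.ndarray, w_dec: np.ndarray) -> np.ndarray:
    """Reconstruct avatar signals from the received code (stand-in for a neural decoder)."""
    return w_dec @ code

# Stand-in "trained" weights.
w_enc = rng.standard_normal((LATENT_DIM, FRAME_DIM)) / 64.0
w_dec = rng.standard_normal((FRAME_DIM, LATENT_DIM)) / 16.0

frame = rng.standard_normal(FRAME_DIM)   # captured on the sender's device
code = encode(frame, w_enc)              # only this compact code is transmitted
recon = decode(code, w_dec)              # decoded into the avatar on the receiver

print(code.shape, recon.shape)           # (256,) (4096,)
```

The point of the pattern is bandwidth and fidelity: only the 256-value code crosses the network, while the heavy lifting of producing a photorealistic result happens in the decoder on the receiving side.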