Description

The Champion of Auschwitz is the first full-length feature film to be neurally rendered so that the actors' performances could be converted from German to English. In this Polish film, set during World War II, the entire production was shot and finished in German. Using advances in AI and machine learning, the actors' faces were replaced with inferred versions, visually built from English re-performances of the original dialogue.
The film tells the story of pre-war boxing champion Tadeusz "Teddy" Pietrzykowski, who in 1940 arrives with the first transport of prisoners to the newly created Auschwitz concentration camp. The story was filmed without any consideration of later dialogue replacement.
This production session discusses the key issues involved in professional neural rendering at scale: hundreds of shots, often with multiple characters in the same frame.
While smaller projects have featured neural rendering in the past, those earlier applications required massive amounts of manual intervention per shot. Built on new technology, The Champion used only the footage already edited for the final German version of the film, combined with a robust and non-intrusive recording of the actors delivering their lines in English.
The panel will discuss innovations in technology and provide insights into further advances the team is working on, including:
• How the team found a general solution to actor re-capture that was quick to set up and non-invasive to the actors' process. Our solution involved only five cameras, no tracking markers, no per-shot or per-scene lighting, and no special camera calibration.
• The workflow developed to eliminate manual sorting and categorizing of ML training data.
• A professional pipeline that allows any film to be processed, with no requirement for special calibration clips, additional filming, or even outtakes from the main unit. The process relies solely on the finished film and the additional audio session recordings.
• How the film was adapted, rather than dubbed, with the input and cooperation of the director, the actors, and the creative team. The pipeline could also be adapted to replace dubbing for actors who are no longer available.
Our objective was to produce a robust pipeline that could perform visual dubbing at scale, rather than a demo that would work only on a few scenes with extensive, time-consuming manual intervention.
Our production solution needed to allow for:
• vastly varying camera angles (not just the common face-swapping setup where the talent faces the camera),
• dramatic changes in lighting, contrast, and camera artifacts,
• no access to lengthy or specially shot training data of the final scenes, and
• actors whose facial appearance varies over the course of the film; in our case, a boxer who at times is severely bruised and beaten up.
With extensive use of visuals and clips, the panel will discuss both the lessons learned and the advances in ML that enabled the wide-scale adoption of this technology in place of traditional dubbing or subtitles.