StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators
Event Type: Technical Paper
Interest Areas: Research & Education
Presentation Types: In Person
Registration Categories: Full Conference Supporter, Full Conference, Exhibitor Additional Full Conference, Exhibitor Full Conference
This session WILL NOT be recorded.
Time: Thursday, 11 August 2022, 9:26am - 9:34am PDT
Location: East Building, Room 1-3
Description: Can generative models be trained to produce images from a specific domain, guided only by text prompts, without seeing any image? In other words: can image generators be trained "blindly"? We propose a zero-shot domain adaptation method, guided by CLIP, and show that the answer to these questions is "yes"!
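The core of the method is a directional CLIP loss: rather than pushing generated images toward a target text embedding directly, it aligns the *direction* the images move in CLIP space (frozen source generator vs. trainable target generator) with the direction between the source and target text prompts. Below is a minimal NumPy sketch of that loss; the random 512-d vectors are placeholders standing in for real CLIP image/text embeddings, which in practice come from a CLIP encoder.

```python
import numpy as np

def directional_clip_loss(e_img_src, e_img_tgt, e_txt_src, e_txt_tgt):
    """1 - cosine similarity between the image-space edit direction
    and the text-space direction from source prompt to target prompt."""
    d_img = e_img_tgt - e_img_src  # how the generated image moved in CLIP space
    d_txt = e_txt_tgt - e_txt_src  # how the prompt moved, e.g. "photo" -> "sketch"
    cos = d_img @ d_txt / (np.linalg.norm(d_img) * np.linalg.norm(d_txt))
    return 1.0 - cos

# Placeholder embeddings (a real setup would use CLIP's encoders).
rng = np.random.default_rng(0)
e_txt_src = rng.normal(size=512)   # stand-in for the source-domain prompt
delta = rng.normal(size=512)       # stand-in for the cross-domain direction
e_txt_tgt = e_txt_src + delta
e_img_src = rng.normal(size=512)
e_img_tgt = e_img_src + delta      # image moved parallel to the text direction

# Parallel directions give a loss near 0; unrelated directions give a loss near 1.
print(directional_clip_loss(e_img_src, e_img_tgt, e_txt_src, e_txt_tgt) < 1e-6)
```

During adaptation, this loss is backpropagated into a copy of the generator while the source generator and CLIP stay frozen, so no target-domain images are ever needed.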