StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators
Session
Technical Papers
Event Type
Technical Paper
Research & Education
Description
Can generative models be trained to produce images from a specific domain, guided only by text prompts, without seeing a single image? In other words: can image generators be trained "blindly"? We propose a CLIP-guided zero-shot domain adaptation method and show that the answer to both questions is "yes"!
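
The listing gives no implementation details, but as an illustration, below is a minimal PyTorch sketch of one way such text-only CLIP guidance can be realized: a directional loss that nudges a trainable copy of the generator so that its outputs move, in CLIP embedding space, along the direction from a source text prompt to a target text prompt. The openai `clip` package is assumed; the names `g_frozen` and `g_train`, the example prompts, and the image preprocessing are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep fp32 so gradients flow cleanly

# CLIP's expected input statistics (for images scaled to [0, 1]).
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device)

def embed_text(prompt: str) -> torch.Tensor:
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():  # prompts are fixed; no gradient needed
        feats = clip_model.encode_text(tokens)
    return F.normalize(feats, dim=-1)

def embed_image(img: torch.Tensor) -> torch.Tensor:
    # img: (N, 3, H, W) generator output in [-1, 1].
    img = (img + 1) / 2  # rescale to [0, 1]
    img = F.interpolate(img, size=224, mode="bicubic", align_corners=False)
    img = (img - CLIP_MEAN[None, :, None, None]) / CLIP_STD[None, :, None, None]
    return F.normalize(clip_model.encode_image(img), dim=-1)

def directional_loss(frozen_img, trained_img, src_prompt, tgt_prompt):
    """Align the image-space change with the text-space direction."""
    dt = embed_text(tgt_prompt) - embed_text(src_prompt)   # e.g. "photo" -> "sketch"
    di = embed_image(trained_img) - embed_image(frozen_img)
    return (1 - F.cosine_similarity(di, dt)).mean()

# Hypothetical training step: g_frozen is a fixed copy of the pretrained
# generator; g_train is the copy being adapted toward the target domain.
# z = torch.randn(4, 512, device=device)
# loss = directional_loss(g_frozen(z), g_train(z), "photo", "sketch")
# loss.backward()
```

Because only the direction between two CLIP embeddings supervises training, no images from the target domain are ever observed, which is the sense in which the generator is trained "blindly".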