StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators
Description
Can generative models be trained to produce images from a specific domain, guided only by text prompts, without seeing a single image? In other words, can image generators be trained "blindly"? We propose a zero-shot domain adaptation method, guided by CLIP, and show that the answer to these questions is "yes"!
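The core idea behind CLIP-guided adaptation of this kind is to steer the generator using directions in CLIP's joint image-text embedding space rather than real images. A minimal sketch of such a directional loss is shown below; the embedding vectors here are small stand-in arrays, whereas in practice they would come from a real CLIP encoder, and the exact loss details of the paper may differ.

```python
import numpy as np

def directional_loss(e_src_txt, e_tgt_txt, e_src_img, e_tgt_img):
    """Directional loss in CLIP space: encourage the edit direction
    between images (frozen vs. adapted generator outputs) to align
    with the direction between source and target text prompts."""
    delta_text = e_tgt_txt - e_src_txt    # e.g. "dog" -> "sketch of a dog"
    delta_image = e_tgt_img - e_src_img   # frozen output -> adapted output
    cos_sim = np.dot(delta_text, delta_image) / (
        np.linalg.norm(delta_text) * np.linalg.norm(delta_image)
    )
    return 1.0 - cos_sim  # 0 when directions agree, 2 when opposite

# Stand-in unit embeddings (real ones would come from a CLIP encoder).
src_txt = np.array([1.0, 0.0])
tgt_txt = np.array([0.0, 1.0])
src_img = np.array([1.0, 0.0])
tgt_img = np.array([0.0, 1.0])
print(directional_loss(src_txt, tgt_txt, src_img, tgt_img))  # → 0.0
```

Because the loss compares *directions* rather than absolute embeddings, the adapted generator is pushed to change its outputs in the same semantic way the text prompt changes, which is what makes training without any target-domain images possible.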