StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators
Event Type
Technical Paper
Interest Areas
Research & Education
Presentation Types
Virtual
Registration Categories
Full Conference Supporter
Full Conference
Virtual Conference Supporter
Virtual Conference
Exhibitor Additional Full Conference
Exhibitor Full Conference
Description
Can generative models be trained to produce images from a specific domain, guided only by text prompts, without seeing any images? In other words: can image generators be trained "blindly"? We propose a zero-shot domain adaptation method, guided by CLIP, and show that the answer to these questions is "yes"!
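
The listing doesn't spell out the loss, but one way CLIP can steer a generator toward a text-described domain without any target images is a directional loss in CLIP's joint embedding space: the change between images from a frozen source generator and the adapted generator is pushed to align with the direction between a source-domain and a target-domain text prompt. The sketch below illustrates that idea only; the package choice (OpenAI's `clip`, ViT-B/32), the function names, and the preprocessing assumption are illustrative, not the authors' released code.

```python
# Minimal sketch of a CLIP-guided directional loss for zero-shot
# generator adaptation. Assumes: pip install git+https://github.com/openai/CLIP.git
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _preprocess = clip.load("ViT-B/32", device=device)
clip_model.eval()

@torch.no_grad()
def text_direction(source_prompt: str, target_prompt: str) -> torch.Tensor:
    """Unit direction in CLIP text space from the source domain to the target."""
    tokens = clip.tokenize([source_prompt, target_prompt]).to(device)
    src, tgt = clip_model.encode_text(tokens).float()
    direction = tgt / tgt.norm() - src / src.norm()
    return direction / direction.norm()

def directional_loss(frozen_images: torch.Tensor,
                     adapted_images: torch.Tensor,
                     text_dir: torch.Tensor) -> torch.Tensor:
    """Penalize misalignment between the image-space change and the text direction.

    frozen_images:  CLIP-preprocessed outputs of a frozen copy of the generator.
    adapted_images: outputs of the generator being trained, same latents;
                    gradients flow through CLIP's image encoder into the generator.
    """
    src_emb = clip_model.encode_image(frozen_images).float()
    tgt_emb = clip_model.encode_image(adapted_images).float()
    image_dir = F.normalize(tgt_emb - src_emb, dim=-1)
    return (1.0 - F.cosine_similarity(image_dir, text_dir.unsqueeze(0))).mean()

# Hypothetical usage: steer a photo generator toward sketches using text alone.
# loss = directional_loss(frozen_out, adapted_out, text_direction("Photo", "Sketch"))
```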