CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions
Event Type
Technical Paper
Interest Areas
Research & Education
Presentation Types
Virtual
Registration Categories
Full Conference Supporter
Full Conference
Virtual Conference Supporter
Virtual Conference
Exhibitor Additional Full Conference
Exhibitor Full Conference
Description
We present CLIP2StyleGAN, a framework that effectively links the pretrained latent spaces of StyleGAN and CLIP. The framework automatically extracts semantically labeled edit directions in StyleGAN by finding, naming, and projecting meaningful edit directions from CLIP space, in a fully unsupervised setup and without additional human guidance.
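
The listing does not spell out the algorithm, but the general idea of discovering a direction in CLIP space and projecting it into StyleGAN's latent space can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the array names, the PCA and least-squares choices, and the edit strength are all assumptions standing in for the actual procedure and for real model outputs.

```python
# Minimal sketch of the high-level idea, not the paper's method.
# Assumes we already have, for a batch of generated images:
#   W  -- their StyleGAN latent codes   (n_samples x w_dim)
#   E  -- their CLIP image embeddings   (n_samples x clip_dim)
# Random arrays stand in for real model outputs here.
import numpy as np

rng = np.random.default_rng(0)
n, w_dim, clip_dim = 512, 512, 512
W = rng.normal(size=(n, w_dim))     # stand-in for StyleGAN W codes
E = rng.normal(size=(n, clip_dim))  # stand-in for CLIP image embeddings

# 1) Find a candidate "interesting" direction in CLIP space,
#    e.g. a top principal component of the embedding variation.
E_centered = E - E.mean(axis=0)
_, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
clip_direction = Vt[0]              # leading direction in CLIP space

# 2) Project that direction into StyleGAN's latent space: fit a linear
#    map from W offsets to CLIP-embedding offsets, then take its
#    least-squares preimage of the CLIP direction.
W_centered = W - W.mean(axis=0)
A, *_ = np.linalg.lstsq(W_centered, E_centered, rcond=None)   # (w_dim, clip_dim)
w_direction, *_ = np.linalg.lstsq(A.T, clip_direction, rcond=None)
w_direction /= np.linalg.norm(w_direction)

# 3) Apply the edit: move a latent code along the projected direction.
alpha = 3.0                         # edit strength (hypothetical value)
w_edited = W[0] + alpha * w_direction
print(w_edited.shape)
```

In a real pipeline the stand-in arrays would come from sampling the StyleGAN generator and encoding the resulting images with a pretrained CLIP image encoder, and the discovered direction could be named by comparing it against CLIP text embeddings of candidate words; those steps are omitted here.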