CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions
Session: Technical Papers
Event Type: Technical Paper
Description: We present CLIP2StyleGAN, a framework that links the pretrained latent spaces of StyleGAN and CLIP. The framework automatically extracts semantically labeled edit directions in StyleGAN by finding, naming, and projecting meaningful edit directions from CLIP space, fully unsupervised and without additional human guidance.
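To make the projection idea concrete, here is a minimal, hypothetical sketch of mapping a CLIP-space direction into StyleGAN's W latent space with a linear least-squares fit and applying it as an edit. The dimensions, random stand-in data, and the linear map `M` are all illustrative assumptions, not the paper's actual method or trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding dimensions, chosen only for illustration.
CLIP_DIM, W_DIM = 512, 512

# Assume paired embeddings for the same images: CLIP features C and
# StyleGAN W latents W. Random stand-ins replace real model outputs here.
C = rng.normal(size=(1000, CLIP_DIM))
W = rng.normal(size=(1000, W_DIM))

# Fit a linear map from CLIP space to W space (least squares) --
# a simple stand-in for projecting CLIP directions into StyleGAN space.
M, *_ = np.linalg.lstsq(C, W, rcond=None)

def project_direction(clip_dir: np.ndarray) -> np.ndarray:
    """Map a CLIP-space direction into W space and normalize it."""
    d = clip_dir @ M
    return d / np.linalg.norm(d)

def apply_edit(w: np.ndarray, clip_dir: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a W latent along the projected direction by strength alpha."""
    return w + alpha * project_direction(clip_dir)

# Example: edit one latent along a (random stand-in) CLIP direction.
clip_dir = rng.normal(size=CLIP_DIM)
w_edited = apply_edit(W[0], clip_dir, alpha=3.0)
```

In the actual framework the edit directions come from decomposing CLIP embeddings of generated images rather than from a random vector, and the edited latent would be decoded by the StyleGAN generator to produce the modified image.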