CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions
Description
We present CLIP2StyleGAN, a framework that links the pretrained latent spaces of StyleGAN and CLIP. It automatically extracts semantically labeled edit directions in StyleGAN by finding, naming, and projecting meaningful edit directions from CLIP space, in a fully unsupervised setup and without additional human guidance.
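The description mentions two key steps: discovering meaningful directions in CLIP space and projecting them into StyleGAN's latent space. The sketch below illustrates one plausible way such a pipeline could look, using synthetic arrays as stand-ins for real CLIP image embeddings and StyleGAN W-space latents (the variable names, the use of PCA for direction discovery, and the least-squares projection are illustrative assumptions, not the paper's exact method).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: CLIP image embeddings of N generated images
# and the StyleGAN W-space latents that produced them. In the actual
# framework these would come from the pretrained CLIP and StyleGAN models.
N, clip_dim, w_dim = 512, 64, 32
w_latents = rng.normal(size=(N, w_dim))
mixing = rng.normal(size=(w_dim, clip_dim))
clip_feats = w_latents @ mixing + 0.01 * rng.normal(size=(N, clip_dim))

# 1) Find a salient direction in CLIP space via PCA on the embeddings.
centered = clip_feats - clip_feats.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
clip_direction = vt[0]  # top principal direction in CLIP space

# 2) Project the CLIP-space direction into StyleGAN's W space by
# fitting a least-squares map from W latents to per-sample CLIP scores.
scores = centered @ clip_direction              # strength of the attribute
w_direction, *_ = np.linalg.lstsq(w_latents, scores, rcond=None)
w_direction /= np.linalg.norm(w_direction)

# Editing a latent: move along the recovered W-space direction.
edited = w_latents[0] + 3.0 * w_direction
print(edited.shape)
```

In the full framework a naming step would also attach a text label to each direction by comparing it against CLIP text embeddings; that step is omitted here for brevity.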