StyleGAN2 is a state-of-the-art network for generating realistic images. Moreover, it was explicitly trained to have disentangled directions in latent space, which allows efficient image manipulation by varying latent factors. Editing existing images requires embedding a given
image into the latent space of StyleGAN2. Latent code optimization
via backpropagation is commonly used for high-quality embedding of real-world images, although it is prohibitively slow for many applications. We
propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained on paired data. The resulting pipeline
is an alternative to existing GANs, trained on unpaired data. We provide
results for human face transformations: gender swap, aging/rejuvenation,
style transfer, and image morphing. We show that the quality of generation with our method is comparable to StyleGAN2 backpropagation and to current state-of-the-art methods on these particular tasks.
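To make the distillation idea concrete, below is a minimal, self-contained sketch (not the released code) of how a synthetic paired dataset could be assembled: latent codes are sampled, shifted along a semantic direction, and the original and edited renderings are kept as input/output pairs for training a feed-forward image-to-image network. The `StubSynthesis` module and `edit_direction` are illustrative placeholders; in practice one would load a pretrained StyleGAN2 generator and a discovered latent direction.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the StyleGAN2 synthesis network (assumption:
# in practice, load a pretrained generator and map z -> w with its mapping net).
class StubSynthesis(nn.Module):
    def __init__(self, latent_dim=512, resolution=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * resolution * resolution),
            nn.Tanh(),
        )
        self.resolution = resolution

    def forward(self, w):
        img = self.net(w)
        return img.view(-1, 3, self.resolution, self.resolution)


def generate_paired_batch(synthesis, edit_direction, batch_size=8,
                          latent_dim=512, strength=1.0):
    """Sample latent codes and render original / edited images.

    The (source, target) pairs form a synthetic paired dataset on which a
    feed-forward image-to-image network (pix2pix-style) can then be trained.
    """
    with torch.no_grad():
        w = torch.randn(batch_size, latent_dim)             # latent codes
        source = synthesis(w)                               # original images
        target = synthesis(w + strength * edit_direction)   # edited images
    return source, target


if __name__ == "__main__":
    synthesis = StubSynthesis()
    # Assumption: a semantic latent direction (e.g. gender or age) found beforehand.
    edit_direction = torch.randn(512)
    src, tgt = generate_paired_batch(synthesis, edit_direction)
    print(src.shape, tgt.shape)  # two tensors of shape [8, 3, 64, 64]
```

Training the image-to-image network on such pairs is what replaces per-image latent optimization at inference time: a single forward pass produces the edited image.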