StyleGAN 3 - Computer Vision and Image Manipulation


<h2>What is StyleGAN?</h2> StyleGAN is a revolutionary computer vision tool. It has changed the image generation and style transfer fields forever. Its <a href="https://arxiv.org/abs/1812.04948" target="_blank" rel="noopener noreferrer">first version was released in 2018</a> by researchers from NVIDIA. A year later, the enhanced version, <a href="https://arxiv.org/abs/1912.04958" target="_blank" rel="noopener noreferrer">StyleGAN 2</a>, was released. And yes, it was a huge improvement. In October 2021, the latest version was announced - <a href="https://nvlabs.github.io/stylegan3" target="_blank" rel="noopener noreferrer">Alias-Free GAN, also known as StyleGAN 3</a>. StyleGAN became so popular because of its astonishing results in generating natural-looking images. It can generate not only human faces, but also animals, cars, and landscapes. Using this tool, one can easily generate interpolations between different images and edit the results, for example changing the mood of the person in a picture or rotating objects. The key part of StyleGAN is its generator network: a mapping network first transforms a latent code into an intermediate latent-space representation, and a synthesis network then generates an image from that representation using a sequence of layers such as convolutions, nonlinearities, upsampling, and per-pixel noise. Since StyleGAN is a GAN (Generative Adversarial Network), alongside the generator there is also a discriminator network, trained to distinguish the images produced by the generator from real images (e.g., photos of real people). During training, the generator and discriminator compete against each other: in order to fool the discriminator, the generator needs to produce more and more realistic-looking images.

<blockquote><strong>Looking for Computer Vision models? Discover the largest Computer Vision models library: <a href="https://appsilon.com/timm-with-fastai/">pyTorch IMage Models (TIMM) with fastai.</a></strong></blockquote>

<h2>Differences between StyleGAN 3 and StyleGAN 2</h2> In late 2019, StyleGAN 2 was announced, improving the basic architecture and producing even more realistic images. Even though the method turned out to be a great success, NVIDIA researchers still found StyleGAN 2 models insufficient and worth further improvement. And they were right.

<img class="size-full wp-image-11692" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e924d7961f8266faca0_comparison-of-StyleGAN-v2-and-v3-generated-animations.gif" alt="Comparison of StyleGANv2 and v3 generated animations (Video credit: NVIDIA Labs)" width="480" height="262" /> Comparison of StyleGANv2 and v3 generated animations (Video credit: NVIDIA Labs)

<h3>Aliasing</h3> The main problem they wanted to solve was aliasing (for that reason, StyleGAN 3 is also called Alias-Free GAN). Aliasing is particularly noticeable when creating rotations of a given image: pixels look as if they were “glued” to specific places in the image and do not rotate in a natural way. The picture below shows what aliasing means visually in 2D; on the left side, the averaged version of the image should be more blurred, but instead there is cat fur attached to the cat’s eye. A similar situation can be seen in the human hair examples in the latent interpolations.
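To make the notion of aliasing more concrete, here is a small, self-contained Python illustration (my own toy example, unrelated to NVIDIA's code): a high-frequency component that is subsampled without a low-pass filter “folds” into a spurious low frequency, while filtering before subsampling removes it. In StyleGAN terms, this folding is what makes fine details appear pinned to fixed pixel coordinates.

<pre class="language-python"><code class="language-python">import numpy as np
from scipy import signal

# Two tones sampled at 1 kHz: a 10 Hz "real" signal plus a 430 Hz high-frequency detail.
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 430 * t)

naive = x[::10]                      # subsample to 100 Hz with no anti-aliasing filter
filtered = signal.decimate(x, 10)    # low-pass filter first, then subsample

def amplitude_at(y, freq, fs_new=100.0):
    """Rough amplitude of the spectral component at freq (Hz) in a signal sampled at fs_new."""
    spectrum = np.abs(np.fft.rfft(y)) / len(y) * 2
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs_new)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# The 430 Hz tone folds down to |430 - 400| = 30 Hz when subsampled naively,
# creating a spurious component that was never in the visible band.
print(amplitude_at(naive, 30.0))     # roughly 0.5: aliased energy
print(amplitude_at(filtered, 30.0))  # close to 0: removed by the anti-aliasing filter</code></pre>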
By successfully dealing with the aliasing problem, the authors hope to make StyleGAN 3 more useful for generating videos and animations.

<img class="size-full wp-image-11694 aligncenter" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e946bdc8f59ff866ee5_comparison-of-styleganv2-and-v3-aliasing-issue.webp" alt="comparison of styleganv2 and v3 - aliasing issue" width="1098" height="307" /> <em>The aliasing effects stem from non-ideal upsampling filters that are not aggressive enough to suppress aliasing, and from the pointwise application of nonlinear operations such as ReLU.</em>

<blockquote><strong>CNNs have many exciting applications. Read our comprehensive guide on <a href="https://appsilon.com/object-detection-yolo-algorithm/">object detection with the YOLO algorithm</a>.</strong></blockquote>

<h3>Improvements</h3> The wide set of improvements implemented in the StyleGAN 3 generator (the discriminator remained unchanged) was designed to eliminate the aliasing effect from output images. This was done by making every layer of the synthesis network operate on a continuous signal, so that details are transformed together. The main enhancements incorporated into the Alias-Free GAN generator are:

<ul><li style="font-weight: 400;" aria-level="1">A new translation equivariance metric, EQ-T, expressed as the peak signal-to-noise ratio (PSNR) in decibels (dB) between two sets of images obtained by translating the input and the output of the synthesis network by a random amount, together with a similar metric EQ-R for rotations.</li><li style="font-weight: 400;" aria-level="1">Replacing the learned input constant of StyleGAN 2 with Fourier features, which has the added advantage of naturally defining a spatially infinite map. This change improves the results and also simplifies computations.</li><li style="font-weight: 400;" aria-level="1">Decreased mapping network depth.</li><li style="font-weight: 400;" aria-level="1">Disabled mixing regularization and path length regularization.</li><li style="font-weight: 400;" aria-level="1">Eliminated output skip connections.</li><li style="font-weight: 400;" aria-level="1">Replacing the bilinear 2× upsampling filter with a windowed sinc filter using a Kaiser window of size n = 6 (so that every output pixel is affected by 6 input pixels in upsampling, and each input pixel affects 6 output pixels in downsampling).</li><li style="font-weight: 400;" aria-level="1">Replacing the sinc-based downsampling filter with a radially symmetric jinc-based one, constructed using the same Kaiser scheme.</li><li style="font-weight: 400;" aria-level="1">A new learned affine layer that outputs global translation and rotation parameters for the input Fourier features.</li><li style="font-weight: 400;" aria-level="1">A novel custom CUDA kernel applied after the convolution step in every generator layer; it combines upsampling, leaky ReLU, downsampling, and cropping steps.</li><li style="font-weight: 400;" aria-level="1">A stabilization trick - at the beginning of training, all images the discriminator sees are blurred with a Gaussian filter, starting at σ = 10 pixels and decaying to zero over the first 200k images. This prevents the discriminator from focusing too much on high frequencies in the early stages of training (see the sketch below).</li></ul>
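To give an idea of how the stabilization trick from the last bullet point could look in code, below is a simplified sketch of such a blur schedule. It is an illustration only, not NVIDIA's implementation: the linear fade and the use of torchvision's gaussian_blur are my own assumptions.

<pre class="language-python"><code class="language-python">import torch
import torchvision.transforms.functional as TF

def blur_sigma(cur_nimg, init_sigma=10.0, fade_kimg=200):
    """Blur strength for the current number of images shown to the discriminator.

    Assumed linear fade: sigma starts at 10 pixels and reaches 0 after 200k images.
    """
    return max(init_sigma * (1.0 - cur_nimg / (fade_kimg * 1000.0)), 0.0)

def blur_discriminator_input(images, cur_nimg):
    """Blur a batch of images of shape (N, C, H, W) before feeding them to the discriminator."""
    sigma = blur_sigma(cur_nimg)
    if sigma == 0.0:
        return images
    # Odd kernel size covering roughly +/- 3 sigma.
    kernel_size = int(3 * sigma) * 2 + 1
    return TF.gaussian_blur(images, kernel_size=kernel_size, sigma=sigma)

# Example: early in training (10k images seen) the blur is still strong (sigma = 9.5).
batch = torch.rand(4, 3, 256, 256)
blurred = blur_discriminator_input(batch, cur_nimg=10_000)</code></pre>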
<img class="size-full wp-image-11704" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e94cd7330b395c66fe0_StyleGANv3-architecture.webp" alt="StyleGAN v3 architecture (Image credit: NVIDIA Labs)" width="502" height="475" /> StyleGAN v3 architecture (Image credit: NVIDIA Labs)

<h3>Visual comparisons</h3> The picture below compares the internal representations and latent interpolation visuals of StyleGAN 2 and 3. In both StyleGAN 3 cases, the latent interpolations call to mind a kind of “alien” map of the human face that rotates correctly, while in StyleGAN 2 the whole pixel area appears “glued” to particular parts of the image.

<img class="size-full wp-image-11696" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e96efaa3c14c7f0c7a1_comparison-of-styleganv2-and-v3.webp" alt="Comparison of StyleGAN v2 and v3 (Image credit: NVIDIA Labs)" width="1078" height="505" /> Comparison of StyleGAN v2 and v3 (Image credit: NVIDIA Labs)

<h2>StyleGAN 3 Capabilities</h2> We've already discussed the main characteristics of StyleGAN 3, so let's move on to what the tool is actually capable of.

<h3>Image Generation</h3> Images can be generated using this simple command:

<pre class="language-bash"><code class="language-bash">python gen_images.py --outdir=out --trunc=1 --seeds=2 \
--network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl</code></pre>

<img class="size-full wp-image-11698" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e972d3baed3c5a8b428_example-images-generated-using-the-styleganv3-left-AFHQ-dataset-right-MetFaces.webp" alt="Example images generated using the StyleGANv3 (left - from AFHQ dataset, right - MetFaces)" width="913" height="463" /> Example images generated using the StyleGANv3 (left - from AFHQ dataset, right - MetFaces)

You can generate images from a given model by changing the seed number. Once the seed is set, the script generates a random vector of size [1, 512] and synthesizes the corresponding image, in the style of the dataset the model was trained on. Here, I show examples from the model trained on the AFHQ dataset, so it outputs only variations of dogs, cats, foxes, and big wild cats. NVIDIA has also published models trained on the FFHQ dataset (human faces) and MetFaces (faces from works of art in the Metropolitan Museum of Art collection), in different resolutions; all of them are available for download.
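For a rough idea of what gen_images.py does under the hood, the sketch below condenses its main steps (slightly simplified; it assumes the dnnlib and legacy modules from NVIDIA's stylegan3 repository are importable and that a CUDA GPU is available):

<pre class="language-python"><code class="language-python">import numpy as np
import PIL.Image
import torch

import dnnlib   # from the stylegan3 repository
import legacy   # from the stylegan3 repository

network_pkl = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl'
device = torch.device('cuda')

# Load the pre-trained generator (exponential moving average of the weights).
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

seed = 2
z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)  # [1, 512] latent vector
label = torch.zeros([1, G.c_dim], device=device)  # no class conditioning for AFHQ

# Synthesize, then convert from [-1, 1] floats to uint8 RGB and save.
img = G(z, label, truncation_psi=1.0)
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'seed{seed:04d}.png')</code></pre>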
Moreover, if you have enough data (or use transfer learning), you can also train your own models with the code published by NVIDIA in their repository, using a command similar to this one:

<pre class="language-bash"><code class="language-bash"># Train StyleGAN2 for FFHQ at 1024x1024 resolution using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=stylegan2 --data=~/datasets/ffhq-1024x1024.zip \
    --gpus=8 --batch=32 --gamma=10 --mirror=1 --aug=noaug</code></pre>

<h3>Image interpolation</h3> Besides generating single images from seeds, you can also use StyleGAN 3 to render a video of interpolations between images for a given set of seeds, which you specify in a command like this:

<pre class="language-bash"><code class="language-bash"># Render a 4x2 grid of interpolations for seeds 0 through 31.
python gen_video.py --output=lerp.mp4 --trunc=1 --seeds=0-31 --grid=4x2 \
--network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl</code></pre>

Below, you can see the result - the video of interpolations:

<img class="wp-image-21400 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e987a92f47f699ac334_exemplary-interpolations-for-a-given-seed-using-styleganv3-on-afhq-dataset-opt.gif" alt="" width="480" height="320" /> Exemplary interpolations for given seeds, using StyleGAN v3 on the AFHQ dataset

<h3>Integration of CLIP and StyleGAN 3 - Text2Image</h3> Beyond the examples above, you can adapt StyleGAN 3 to your own needs. There are many interesting examples of StyleGAN 2 modifications in the literature to explore. StyleGAN 3 modifications are still at an early stage, because its code was released only a month before this blog post was written, but I managed to dig up something intriguing: a <a href="https://colab.research.google.com/drive/1IXdEu871_n4ws8-Y1OCX3s3OcnfkGQof?usp=sharing" target="_blank" rel="noopener noreferrer">Colab notebook</a> by these two authors <a href="https://twitter.com/nshepperd1?lang=en" target="_blank" rel="noopener noreferrer">[1]</a> <a href="https://twitter.com/EarthML1" target="_blank" rel="noopener noreferrer">[2]</a>, which integrates the CLIP model with StyleGAN 3 to produce text-to-image results. The user types some text, like “red clown | Richard Nixon”, sets a few parameters in a basic GUI, and the model tries to produce appropriate interpolations! The results are sometimes amazing, sometimes funny, but worth a try! Here's a video for the text: "red clown | Richard Nixon."

<img class="alignnone wp-image-11712" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e9b6c7e8f92760a20ab_video-from-text-red-clown_richard-nixon.gif" alt="" width="300" height="225" />

<h4>How does it work?</h4> First, CLIP is used to transform the input text into a vector - a text embedding. Then, during the fine-tuning stage of StyleGAN 3, a special spherical loss is calculated between the CLIP embedding of the generated image and the text embedding, which serves as the target. In this way, the fine-tuning process teaches the model to “understand” the user's text and paint matching images.
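For intuition, here is a rough sketch of that loss computation. It illustrates the general CLIP-guidance idea rather than the notebook authors' exact code; it assumes OpenAI's clip package and an already-loaded StyleGAN 3 generator producing images in [-1, 1].

<pre class="language-python"><code class="language-python">import clip
import torch
import torch.nn.functional as F

device = torch.device('cuda')
clip_model, _ = clip.load('ViT-B/32', device=device)

def spherical_dist(x, y):
    """Squared geodesic distance between L2-normalized embeddings (the 'spherical' loss)."""
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)

# Target embedding for the prompt.
text = clip.tokenize(['red clown | Richard Nixon']).to(device)
with torch.no_grad():
    text_emb = clip_model.encode_text(text).float()

def clip_guidance_loss(generated):
    """generated: images from the StyleGAN 3 generator in [-1, 1], shape (N, 3, H, W)."""
    # Resize to CLIP's input resolution and map to [0, 1] before encoding.
    # (CLIP's usual channel normalization is omitted here for brevity.)
    images = F.interpolate(generated, size=224, mode='bilinear', align_corners=False)
    images = (images + 1) / 2
    image_emb = clip_model.encode_image(images).float()
    return spherical_dist(image_emb, text_emb).mean()</code></pre>

During fine-tuning, this loss is backpropagated through the generator (or its latent inputs), nudging the generated images toward the text description.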
<img class="alignnone wp-image-21396 " src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e9b5cab2da46e968da5_video-from-text-happy-blue-woman_doge_opt.gif" alt="" width="299" height="224" /> The last video, for input text “medieval knight | Asian guy." It's actually a 50:50 mixture of two StyleGAN 3 models - Met Gallery Faces and Human Faces. <img class="alignnone wp-image-21398 " src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01e9cc328baac9edfc704_video-from-text-medieval-knight_asian-guy_opt.gif" alt="" width="299" height="299" /> In my opinion, the only drawback of the latest StyleGAN 3 is the new artifacts. These can be seen on generated images - a kind of “snakeskin” pattern that seems to persist in the internal representations from “the alien masks” layer. <h2>Summing up StyleGAN 3</h2> StyleGAN 3 is the latest version of the StyleGAN project by NVIDIA. And there's no doubt about it - it's amazing. The whole aliasing problem was cared for in a very precise and detailed way. Improving the generated images' rotations and making them even more natural. Although it should be noted that by playing around with the models, one can sometimes find rather strange artifacts in the images. Moreover, this StyleGAN version opens the door for generating whole videos and animations. I can’t wait to see more diverse and intriguing modifications of StyleGAN 3. If you've created something unique be sure to share it with us @appsilon or comment below. If you're curious to know more about Appsilon's Computer Vision and ML solutions, check out what the Appsilon ML team is up to.  <blockquote><strong>Computer Vision is being used to leverage Citizen Science data in the fight against climate change. See how to <a href="https://appsilon.com/monitoring-ecosystems-with-computer-vision/">monitor shifts in ecosystems with CV</a>.</strong></blockquote>
