Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their discriminator network during training. In this work, we introduced sparsity-aware normalization (SAN), a new method for stabilizing GAN training. Our method is particularly effective for image restoration and image-to-image translation, where it significantly improves upon existing methods, such as spectral normalization, while allowing shorter training schedules and smaller-capacity networks, at no computational overhead.
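The SAN procedure itself is not detailed here. For context, the spectral normalization baseline mentioned above constrains each discriminator weight matrix by dividing it by its largest singular value, typically estimated with power iteration. A minimal numpy sketch of that baseline (the iteration count and initialization are illustrative choices, not prescribed values):

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Divide a weight matrix by its largest singular value,
    estimated with power iteration (the spectral normalization baseline)."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # approximate largest singular value
    return W / sigma

W = np.random.default_rng(1).standard_normal((64, 32))
W_sn = spectral_normalize(W)
# After normalization, the spectral norm of W_sn is close to 1,
# which bounds the Lipschitz constant of the corresponding linear layer.
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

In practice this normalization is applied to every layer of the discriminator at each training step, which is what makes a cheaper alternative at equal or better stability attractive.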
Existing image restoration methods do not allow exploring the infinitely many plausible reconstructions that could have given rise to the measured image. In this work, we introduced the task of explorable image restoration, and illustrated it for super resolution and JPEG decompression. We proposed a framework comprising a graphical user interface with a neural network backend, which allows editing the output so as to explore the abundance of plausible explanations of the input. We illustrated our approach in a variety of use cases, ranging from medical imaging and forensics to graphics (Oral presentations at CVPR`20, CVPR`21).
We introduced an unconditional generative model that can be learned from a single natural image. Our model, coined SinGAN, is trained to capture the internal distribution of patches within the image, and is then able to generate high-quality, diverse samples of arbitrary size and aspect ratio that carry the same visual content as the image. We illustrated the utility of SinGAN in a wide range of image manipulation tasks. This work won the Best Paper Award (Marr Prize) at ICCV`19.
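SinGAN captures the patch distribution across a coarse-to-fine pyramid of downscaled copies of the single training image, with one patch-level GAN trained per scale. A minimal sketch of constructing such a pyramid (the scale factor and minimum size here are illustrative choices, not the paper's exact values):

```python
import numpy as np

def build_pyramid(img, scale_factor=0.75, min_size=25):
    """Coarse-to-fine pyramid of downscaled copies of a single image.
    SinGAN-style training fits one patch-GAN per scale; scale_factor
    and min_size are illustrative assumptions."""
    pyramid = [img]
    while min(pyramid[-1].shape[:2]) * scale_factor >= min_size:
        h, w = pyramid[-1].shape[:2]
        nh, nw = int(h * scale_factor), int(w * scale_factor)
        # Nearest-neighbor resize via index sampling (numpy only);
        # a real implementation would use proper anti-aliased resizing.
        rows = (np.arange(nh) / scale_factor).astype(int)
        cols = (np.arange(nw) / scale_factor).astype(int)
        pyramid.append(pyramid[-1][rows][:, cols])
    return pyramid[::-1]  # coarsest first: generation proceeds coarse-to-fine

img = np.zeros((100, 150, 3))
pyr = build_pyramid(img)
print([p.shape[:2] for p in pyr])
```

Generation then starts from noise at the coarsest scale and progressively upsamples and refines, which is what lets samples of arbitrary size and aspect ratio share the image's visual content.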