Researchers leverage GANs to eliminate clouds from satellite RGB images, showcasing the potential of this technology in enhancing visual clarity for scientific and practical uses.
Researchers have leveraged Generative Adversarial Networks (GANs) to remove clouds from satellite RGB (Red, Green, Blue) images. GANs, introduced by Ian Goodfellow and colleagues in 2014, have become widely used in computer vision for their ability to produce realistic images through a game-theoretic training process that pits two neural networks against each other: a generator and a discriminator.
This study utilized the EuroSAT dataset, which contains 27,000 labeled, clear-sky RGB images captured by the Sentinel-2 satellites, each measuring 64×64 pixels. Because the dataset itself is cloud-free, the researchers simulated cloud cover by overlaying noise patterns generated with Perlin noise, a gradient-noise technique developed by Ken Perlin in the 1980s for producing organic-looking textures.
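The cloud-simulation step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it generates a 2D Perlin noise field with NumPy, normalizes it to [0, 1], and blends a (here randomly generated stand-in) 64×64 RGB tile toward white where the noise is strong. The grid resolution `(4, 4)` and the pure-white cloud color are assumptions.

```python
import numpy as np

def perlin(shape, res, rng):
    """2D Perlin noise: `res` grid cells interpolated up to `shape` pixels."""
    d = (shape[0] // res[0], shape[1] // res[1])
    # Fractional position of each pixel inside its grid cell, shape (H, W, 2)
    grid = np.mgrid[0:res[0]:1 / d[0], 0:res[1]:1 / d[1]].transpose(1, 2, 0) % 1
    # Random unit gradient vector at each grid corner
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Dot products between offset vectors and corner gradients
    n00 = np.sum(np.dstack((grid[..., 0], grid[..., 1])) * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3  # Perlin's fade curve
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))            # stand-in for a 64x64 EuroSAT RGB tile
mask = perlin((64, 64), (4, 4), rng)       # smooth cloud-shaped noise field
mask = (mask - mask.min()) / (mask.max() - mask.min())  # normalize to [0, 1]
# Blend toward white (1.0) where the mask is strong, leaving ground visible elsewhere
cloudy = image * (1 - mask[..., None]) + 1.0 * mask[..., None]
```

Applying this mask to every clear-sky tile yields paired (cloudy, clear) training examples without needing real cloudy imagery.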
In the GAN setup for this project, the generator's architecture was based on U-Net, a well-known convolutional neural network (CNN) design with skip connections between its encoder and decoder, while the discriminator employed a ResNet, a CNN built from residual blocks. The generator's task was to produce cloud-free images from cloud-covered inputs, striving to make them indistinguishable from real, clear-sky images; the discriminator's role was to tell the real clear-sky images apart from the generated ones.
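A heavily simplified PyTorch sketch of this pairing is below. The class names, channel widths, and single-level depth are all illustrative assumptions; the actual models would stack several encoder/decoder levels and residual blocks. What it does show faithfully is the two defining traits named above: the U-Net skip connection in the generator and the residual connection in the discriminator.

```python
import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    """One-level U-Net-style generator: encode, downsample, upsample, and
    concatenate the encoder features back in via a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(64, 3, 3, padding=1)  # 64 = 32 skip + 32 upsampled

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        skip = torch.cat([e, d], dim=1)        # U-Net skip connection
        return torch.sigmoid(self.out(skip))   # RGB output in [0, 1]

class ResBlock(nn.Module):
    """Residual block: output = input + learned correction."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class TinyResNetDiscriminator(nn.Module):
    """ResNet-style discriminator: one real/fake logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            ResBlock(32),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

gen = TinyUNetGenerator()
disc = TinyResNetDiscriminator()
restored = gen(torch.rand(2, 3, 64, 64))  # batch of two 64x64 cloudy tiles
logits = disc(restored)                   # one real/fake score per tile
```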
The training process involved multiple iterations, where the generator improved its outputs based on the discriminator’s feedback, and vice versa, leading to progressively better synthesized images. The project highlights the practical application of GANs in satellite imagery to provide clearer views for various scientific and practical purposes.
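The alternating feedback loop described above can be sketched as a standard adversarial training step. This is a generic GAN training loop with the roles from this project (cloudy input, clear-sky target), not the authors' code; the tiny models, batch size, learning rate, and plain BCE loss are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder models: a real setup would use the U-Net generator and
# ResNet discriminator described above.
gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(8 * 32 * 32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

cloudy = torch.rand(4, 3, 64, 64)  # stand-in batch of cloud-covered inputs
clear = torch.rand(4, 3, 64, 64)   # stand-in batch of clear-sky targets

for step in range(2):  # a real run iterates over the dataset for many epochs
    # Discriminator update: push real clear images toward 1, generated toward 0.
    fake = gen(cloudy).detach()  # detach so only the discriminator updates here
    d_loss = (bce(disc(clear), torch.ones(4, 1)) +
              bce(disc(fake), torch.zeros(4, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    fake = gen(cloudy)
    g_loss = bce(disc(fake), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice, image-to-image GANs of this kind usually add a pixel-wise reconstruction term (e.g. L1 against the clear target) alongside the adversarial loss, though the summary above does not specify which losses this project used.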