Restoring Images by Adding Noise


Wait, what? Restoring images by adding noise?


Yes, you read that right. It's possible to beautifully restore, upscale, and enhance images and videos by adding noise to the picture. That may seem counterintuitive at first, but cutting-edge AI image generation tools like Midjourney, DALL-E, and Stable Diffusion do exactly that. This can be done either as Text-to-Image (T2I) from a text prompt, or as Image-to-Image (I2I) with an input source image instead of a prompt. They deliberately add random noise, similar to old-fashioned analog TV static, to empty latent images or to existing but damaged images, and then transform them into clear, vibrant pictures. This innovative technique, known as "noise diffusion," is revolutionizing how we preserve and revitalize old, damaged photos, and it's only a matter of time until it becomes practical for video as well. Let's take a look at this intriguing process and discover how it works to bring cherished memories and artworks back to life.

What Is Noise Diffusion?


Noise diffusion is a technique that artificial intelligence (AI) uses to create or enhance images by introducing random noise and then refining the image through a series of noise-reduction steps, known as sampling. In simple terms, it's a process that helps the AI figure out what an image should look like, even if parts of it are damaged, faded, or missing, by focusing on the essential details and reconstructing the picture accurately.

How Does Noise Diffusion Work?

Adding Noise: The process starts with the AI adding random noise to the image. This noise looks like tiny specks, similar to what you might see on a TV when it's not tuned to any channel. This might seem odd, but the noise puts the image into exactly the kind of state the AI has been trained to reverse, which is what lets it generate a clear result.
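To make this concrete, here is a minimal sketch of the "adding noise" step in the style of a DDPM (denoising diffusion probabilistic model). Everything here is an illustrative assumption rather than the internals of any particular tool: the 64x64 image, the random seed, and the linear noise schedule are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend this is a grayscale photo, scaled to the range [-1, 1].
image = rng.uniform(-1.0, 1.0, size=(64, 64))

# A simple linear noise schedule: beta_t controls how much noise
# gets blended in at each timestep t. (Illustrative values.)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def add_noise(x, t):
    """Jump directly to timestep t of the forward diffusion process."""
    noise = rng.standard_normal(x.shape)  # the "TV static"
    a_bar = alphas_cumprod[t]
    # Blend the original image with pure noise; larger t = more static.
    return np.sqrt(a_bar) * x + np.sqrt(1.0 - a_bar) * noise

slightly_noisy = add_noise(image, t=100)  # faint specks
mostly_static = add_noise(image, t=900)   # almost pure static
```

The key point is that the blend is controlled: because the process knows exactly how the static was mixed in, a model can learn to take it back out.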

Refining the Image: After adding noise, the AI gradually removes it through a series of sampling steps, which can be performed with many different sampling algorithms (Euler, DDIM, and DPM++ are common choices). In each step, the AI takes a closer look at the image and tries to remove more of the unwanted noise while keeping the important details intact. This process is repeated many times, with the AI refining the image a little more with each step. A typical image might use 20-40 sampling steps, although some models may use as few as 6 or as many as 100.
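Here is an equally simplified sketch of that denoising loop, continuing the setup above. The `predict_noise` function is only a placeholder for the trained neural network; in a real system it would be a large learned model, and the update rule shown is a stripped-down version of one common sampler (DDPM), not the only option.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for the trained denoiser: estimates the noise in x."""
    return np.zeros_like(x)  # a real model returns a learned estimate

def sample(shape, num_steps=25):
    """Run a shortened reverse process, like samplers using 20-40 steps."""
    x = rng.standard_normal(shape)  # start from pure static
    for t in np.linspace(T - 1, 0, num_steps).astype(int):
        eps = predict_noise(x, t)   # which part of x is noise?
        a, a_bar = alphas[t], alphas_cumprod[t]
        # Subtract the predicted noise (one simplified DDPM update).
        x = (x - (1.0 - a) / np.sqrt(1.0 - a_bar) * eps) / np.sqrt(a)
        if t > 0:
            # Re-inject a little fresh noise, as stochastic samplers do.
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

restored = sample((64, 64))  # with a real denoiser, this becomes an image
```

With the placeholder denoiser this produces noise rather than a photo; the point is the shape of the loop: estimate the noise, subtract a bit of it, repeat.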

Recovering the Image: As the AI refines the image, it gradually removes the noise and brings out the desired picture. The AI has been trained on millions of images, so it has a good idea of what various objects, faces, and scenes should look like. By the end of the process, the AI has removed most of the noise and restored the image, often making it look as clear and vibrant as possible.

Here’s an illustration showing how noise diffusion works for image restoration:



The first image shows a damaged or faded photograph.
The second image has random noise added, resembling TV static.
The third image shows the AI starting to remove some of the noise, revealing a clearer picture.
The final image is a fully restored and vibrant version of the original photograph.

A Practical Example


To illustrate the practical application of noise diffusion in AI-driven image restoration, let's look at how it's implemented using ComfyUI and SUPIR. ComfyUI is a powerful node-based graphical interface for Stable Diffusion, allowing users to interact with the model without needing to dive deep into complex code. SUPIR (Scaling-UP Image Restoration) is a state-of-the-art technique designed to enhance and restore images. Together, these tools make it easy for anyone to breathe new life into damaged or faded photos. By combining ComfyUI's flexible, node-based design with SUPIR's advanced restoration capabilities, we can transform old, low-resolution, or poor-quality pictures into vibrant, detailed images.
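ComfyUI workflows are built by wiring nodes in a graph rather than by writing code, but the same add-noise-then-denoise idea can be sketched in a few lines of Python with Hugging Face's diffusers library. To be clear, this is not the ComfyUI + SUPIR workflow itself, just a generic image-to-image restoration pass; the model ID, file names, and parameter values are assumptions for illustration.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion img2img pipeline (assumed base model).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical input: an old, damaged photo.
old_photo = Image.open("damaged_photo.jpg").convert("RGB")

result = pipe(
    prompt="a sharp, vibrant, restored photograph",
    image=old_photo,
    strength=0.5,            # how much noise to add: higher = more reimagining
    num_inference_steps=30,  # sampling steps, in the typical 20-40 range
).images[0]

result.save("restored_photo.png")
```

The `strength` parameter is the direct knob on the idea described earlier: it decides how far toward pure static the input is pushed before the sampling steps pull a cleaned-up image back out.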

I downloaded the image on the left from a random website for this demo because it showed many of the kinds of defects I was looking for. The image on the right is straight out of Stable Diffusion, not retouched with Photoshop or anything of that nature. The comparison really doesn't do it justice: the image on the right was upscaled by a factor of almost 10x, and all that extra detail was lost when it was scaled back down to create the side-by-side comparison. But you can clearly see the potential of this technology, and it's only a matter of time until this becomes possible with video, not just still images.

If you are interested in learning more about how this works, you can watch YouTube tutorials here and here.

Alan Burns May 5, 2024