Real-ESRGAN - AI Image Tools Tool

Overview

Real-ESRGAN is an open-source, production-oriented extension of ESRGAN designed for practical blind super-resolution of real-world images. The paper describes training with a high-order synthetic degradation pipeline (including sinc filtering to simulate ringing and overshoot artifacts) and a U-Net discriminator with spectral normalization to stabilize GAN training; these changes are intended to improve restoration of diverse, realistically degraded photos. ([arxiv.org](https://arxiv.org/abs/2107.10833)) The project (BSD-3-Clause) is maintained as a model zoo and toolkit with multiple pretrained weights (RealESRGAN_x4plus, RealESRGAN_x2plus, anime-specialized models, and tiny/general models), optional GFPGAN face enhancement, video inference scripts, and an ncnn/Vulkan portable runtime for CPU/GPU inference across operating systems. The repository documents tiled processing, alpha-channel and 16-bit image support, an outscale option for arbitrary final sizes, and a denoising-strength (-dn) control for balancing smoothing against noise preservation. The model is widely used in hosted demos (e.g., on Replicate) and in community workflows for photo, artwork, and video upscaling. ([github.com](https://github.com/xinntao/Real-ESRGAN))
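
For programmatic use, the repository also provides a RealESRGANer helper class (installable via the realesrgan pip package). The snippet below is a minimal sketch, assuming the realesrgan and basicsr packages are installed and the RealESRGAN_x4plus weights have been downloaded to a local path; the tile, half, and outscale arguments map to the tiling, FP16, and arbitrary-final-size options mentioned above:

import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RRDBNet backbone matching the RealESRGAN_x4plus weights (4x, 23 RRDB blocks)
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path="weights/RealESRGAN_x4plus.pth",  # assumed local path to the weights
    model=model,
    tile=256,      # tile size > 0 reduces GPU memory use on large images
    tile_pad=10,
    pre_pad=0,
    half=True,     # FP16 inference; set False on CPU
)

img = cv2.imread("input.jpg", cv2.IMREAD_UNCHANGED)
output, _ = upsampler.enhance(img, outscale=2.0)  # outscale controls the final size
cv2.imwrite("output.png", output)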

Key Features

  • Open-source BSD-3-Clause license with published training and inference code.
  • Multiple pretrained models: RealESRGAN_x4plus, x2plus, anime and tiny/general variants.
  • Optional GFPGAN face-enhancement toggle for improved portrait restoration.
  • Supports video upscaling via inference_realesrgan_video.py and tiled processing for large frames.
  • Arbitrary final output size via --outscale, plus a -dn denoising-strength option (for the general model) to balance smoothing against noise preservation.
  • ncnn/Vulkan portable builds for CPU and cross-platform GPU inference without a Python environment (see the example after this list).
  • Handles alpha channels, grayscale and 16-bit images; tile options reduce GPU memory needs.
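
For machines without Python or CUDA, the project publishes prebuilt realesrgan-ncnn-vulkan executables. The following is a hedged sketch of calling that portable binary from Python; the executable path and the -n model name are assumptions that depend on the release package you download:

import subprocess

# Assumed path/name of the portable ncnn/Vulkan binary from the project's releases
cmd = [
    "./realesrgan-ncnn-vulkan",
    "-i", "input.jpg",
    "-o", "output.png",
    "-n", "realesrgan-x4plus",  # one of the model names bundled with the release
]
subprocess.run(cmd, check=True)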

Example Usage

Example (python):

# Local inference using the official repository's inference_realesrgan.py script
# (requires cloning xinntao/Real-ESRGAN and installing its dependencies; see the
# repository README for install instructions and model downloads).
# ([github.com](https://github.com/xinntao/Real-ESRGAN))
#
# Equivalent shell form (note that -o names an output folder, not a single file):
# python inference_realesrgan.py -n RealESRGAN_x4plus -i input.jpg -o results --outscale 2.0

# Minimal Python snippet invoking the repository script via subprocess:
import subprocess

cmd = [
    "python", "inference_realesrgan.py",
    "-n", "RealESRGAN_x4plus",   # pretrained model name
    "-i", "input.jpg",           # input image or folder
    "-o", "results",             # output folder; upscaled files are written inside it
    "--outscale", "2.0",         # final upscaling factor
]
subprocess.run(cmd, check=True)

# Alternatively, run the Replicate-hosted model from Python using the Replicate client.
# Install the replicate package, set REPLICATE_API_TOKEN, and adjust inputs per the
# model's API page. ([replicate.com](https://replicate.com/docs/get-started/python))
import replicate

with open("input.jpg", "rb") as image_file:
    output = replicate.run("nightmareai/real-esrgan", input={"image": image_file})
# Save the returned file(s)/URL(s) as needed.
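
# The same subprocess pattern extends to the repository's other documented scripts.
# Hedged sketch (not verbatim from the README): upscaling a video with
# inference_realesrgan_video.py and an anime-video model; check the README for the
# exact flags and available model names.
cmd_video = [
    "python", "inference_realesrgan_video.py",
    "-i", "input.mp4",
    "-n", "realesr-animevideov3",  # lightweight model intended for anime video
    "-s", "2",                     # output scale factor
    "--suffix", "outx2",           # suffix appended to the output filename
]
subprocess.run(cmd_video, check=True)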

Benchmarks

GitHub stars: 33.8k (Source: https://github.com/xinntao/Real-ESRGAN)

Replicate runs (nightmareai/real-esrgan): 82.9M runs (Source: https://replicate.com/nightmareai/real-esrgan)

Academic citations (paper 'Real-ESRGAN'): approximately 755 citations (Source: https://www.scinapse.io/papers/3204971388)

Max recommended input resolution (Replicate demo): 1440p (Source: https://replicate.com/nightmareai/real-esrgan)

Last Refreshed: 2026-01-09

Key Information

  • Category: Image Tools
  • Type: AI Image Tools Tool