SD.Next - AI Image Tool
Overview
SD.Next is an open-source, all-in-one web UI and tooling layer for diffusion-based image and video generation. It bundles a modern web interface (Standard and Modern UIs, with PWA support), an installer/updater, model management, and a wide set of inference backends, so users can run Stable Diffusion and many newer models locally on GPUs and CPUs. The project is maintained in a public GitHub repository under the Apache-2.0 license and includes integrated documentation, a built-in queue, interrogate/captioning tools, and multiple preconfigured model references to simplify setup. ([github.com](https://github.com/vladmandic/sdnext))
SD.Next emphasizes cross-platform performance and model compatibility: it supports CUDA (NVIDIA), ROCm (AMD on Linux), DirectML (Windows), Intel XPU/IPEX, OpenVINO, ONNX/Olive, and Apple MPS, plus container recipes for common compute stacks. The repository also ships specialized optimizations: model compile backends (Triton, StableFast, and others), quantization and compression (including SD.Next's own SDNQ), and optional high-performance inference engines such as Nunchaku, all aimed at reducing VRAM needs and increasing throughput. SD.Next exposes CLI/API scripts for automated generation, a UI benchmark tool, and numerous helper utilities for model conversion, offloading, and video/frame workflows. ([github.com](https://github.com/vladmandic/sdnext))
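The installer normally auto-detects the platform and tunes itself accordingly, but a specific backend can also be requested at launch time. The sketch below is a minimal illustration of starting SD.Next from a script with an explicit backend flag; the flag names (--use-rocm, --use-directml) and the launch.py entry point are assumptions based on the installer documentation, so verify them against your checkout before relying on them.
Example (python):
import platform
import subprocess
# Minimal sketch, not the official launch procedure: start SD.Next from a checkout
# directory with an explicit compute-backend flag. The flag names below are
# assumptions to verify against your install (e.g. `python launch.py --help`).
backend_flag = {
    'Linux': '--use-rocm',        # AMD GPU on Linux (CUDA is typically auto-detected)
    'Windows': '--use-directml',  # non-CUDA GPUs on Windows
}.get(platform.system())
cmd = ['python', 'launch.py']
if backend_flag:
    cmd.append(backend_flag)
subprocess.run(cmd, check=False)  # blocks until the server is stopped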
GitHub Statistics
- Stars: 6,862
- Forks: 537
- Contributors: 424
- License: Apache-2.0
- Primary Language: Python
- Last Updated: 2026-01-01T09:35:43Z
Key Features
- Multi-backend compute support: CUDA, ROCm, DirectML, OpenVINO, ONNX/Olive, Intel XPU, Apple MPS.
- Model compile and acceleration: Triton, StableFast, DeepCache and other compile backends.
- On-the-fly quantization and SDNQ: cross-platform 8/6/4/2/1-bit and float8 quantization methods (see the back-of-envelope VRAM sketch after this list).
- Integrated installer/updater with auto-dependency management and platform auto-tuning.
- Multiple UIs (Standard, Modern) plus PWA mobile-compatible interface and gallery browser.
- Video & I2V tools: frame extraction, FramePack, GIF/MP4/PNG outputs and duration/pad controls.
- Built-in queue management, interrogate/captioning (150+ OpenCLIP models, 20+ VLMs).
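To see why the low-bit quantization listed above matters for VRAM, a back-of-envelope calculation of weight memory at different precisions is enough. The parameter count below is illustrative (roughly SDXL-UNet-sized), not a measured SD.Next figure, and real usage also includes text encoders, VAE, and activations.
Example (python):
# Back-of-envelope weight memory at different precisions.
# 2.6e9 parameters is an illustrative, roughly SDXL-UNet-sized example.
def weight_gib(params: float, bits: int) -> float:
    return params * bits / 8 / 2**30

params = 2.6e9
for bits in (16, 8, 4, 2):
    print(f'{bits:>2}-bit weights: {weight_gib(params, bits):.1f} GiB')
# 16-bit ~4.8 GiB, 8-bit ~2.4 GiB, 4-bit ~1.2 GiB, 2-bit ~0.6 GiB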
Example Usage
Example (python):
import subprocess

# Example: run the bundled CLI generator (cli/api-txt2img.py; cli/generate.py works similarly).
# Adjust the script path and Python environment to match your SD.Next checkout.
cmd = [
    'python', 'cli/api-txt2img.py',
    '--prompt', 'A highly detailed portrait of a futuristic city at sunset, cinematic lighting',
    '--steps', '28',
    '--width', '1024',
    '--height', '576',
    '--output', 'outputs/',
]
proc = subprocess.run(cmd, capture_output=True, text=True)
if proc.returncode == 0:
    print('Generation finished; check outputs/ for results')
else:
    print('Error running generator:\n', proc.stdout, proc.stderr)
# For programmatic calls, SD.Next includes API example scripts (cli/api-*.py).
# See the project's CLI/API docs for full parameter lists and advanced usage. ([vladmandic.github.io](https://vladmandic.github.io/sdnext-docs/CLI-Tools/))
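For fully programmatic use without shelling out to the CLI scripts, requests can also be sent to a running server over HTTP. The sketch below assumes a local instance on the default port and an A1111-compatible /sdapi/v1/txt2img route returning base64-encoded images; both the endpoint path and the response shape are assumptions to verify against your install and the API docs.
Example (python):
import base64
import os
import requests

# Minimal sketch against a locally running SD.Next server; the endpoint path,
# default port, and response shape are assumptions to verify against your install.
url = 'http://127.0.0.1:7860/sdapi/v1/txt2img'
payload = {
    'prompt': 'A highly detailed portrait of a futuristic city at sunset, cinematic lighting',
    'steps': 28,
    'width': 1024,
    'height': 576,
}
resp = requests.post(url, json=payload, timeout=600)
resp.raise_for_status()
os.makedirs('outputs', exist_ok=True)
for i, img_b64 in enumerate(resp.json().get('images', [])):
    with open(f'outputs/api_{i}.png', 'wb') as fh:
        fh.write(base64.b64decode(img_b64))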
Benchmarks
- RTX 4090 (API, SD1.5, default simple settings): ≈110 iterations/second peak, as reported in repo benchmark runs (Source: https://github.com/vladmandic/sdnext/wiki/Benchmark)
- RTX 4090 (optimized with StableFast / TAESD and custom compiled backends): ≈150-165 iterations/second (Source: https://github.com/vladmandic/sdnext/wiki/Benchmark)
- DirectML (RX 7900 XTX, batch 8): peak ≈9.36 it/s (Source: https://github.com/vladmandic/sdnext/wiki/Benchmark)
- ONNX Runtime (RX 7900 XTX): peak ≈17.58 it/s (Source: https://github.com/vladmandic/sdnext/wiki/Benchmark)
- Repository activity and popularity: ≈6.9k GitHub stars, ≈11,800 commits (active development history) (Source: https://github.com/vladmandic/sdnext)
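The iteration rates above count sampler steps, not finished images; dividing by the step count gives a rough upper bound on images per second (ignoring VAE decode, model load, and other per-image overhead), as in the small conversion below.
Example (python):
# Rough conversion from sampler iterations/second to images/second,
# ignoring VAE decode and other per-image overhead.
def images_per_second(it_per_s: float, steps: int) -> float:
    return it_per_s / steps

# Using the RTX 4090 API figure above with the 28 steps from the usage example:
print(f'{images_per_second(110, 28):.2f} images/s')  # ~3.93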
Key Information
- Category: Image Tools
- Type: AI Image Tool