ComfyUI-nunchaku - AI Image Tool

Overview

ComfyUI-nunchaku is an open-source ComfyUI plugin that integrates Nunchaku, an efficient inference engine for 4-bit neural networks quantized with SVDQuant, into the ComfyUI node-based workflow. The plugin exposes nodes and runtime hooks so that SVDQuant-quantized models can be loaded and executed inside ComfyUI graphs, reducing VRAM footprint and making workflows feasible that would otherwise require more memory. According to the GitHub repository, the project focuses on practical inference features such as multi-LoRA support, ControlNet compatibility, and FP16 attention to balance speed, memory use, and fidelity.

The project is maintained alongside other components from the MIT HAN Lab organization and is aimed at users who want to run modern diffusion and transformer-based models in low-VRAM or performance-constrained environments. See the repository for up-to-date implementation details, compatibility notes, and platform-specific setup instructions.
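
As a rough illustration of why 4-bit weights matter for VRAM, the back-of-envelope sketch below compares weight memory at different precisions. The 12-billion-parameter count is an assumption (roughly the scale of a large diffusion transformer such as FLUX.1-dev), and the estimate ignores activations, text encoders, the VAE, and SVDQuant's low-rank components, so treat it as order-of-magnitude only.

# Back-of-envelope VRAM estimate for transformer weights at different precisions.
# Assumption: ~12e9 parameters; activations, text encoders, the VAE, and
# SVDQuant's low-rank branch are ignored here.
PARAMS = 12e9

def weight_gib(bits_per_param: float) -> float:
    # Memory occupied by the weights alone, converted to GiB.
    return PARAMS * bits_per_param / 8 / 2**30

for label, bits in [("FP16/BF16", 16), ("INT8", 8), ("SVDQuant 4-bit", 4)]:
    print(f"{label:>14}: ~{weight_gib(bits):.1f} GiB for weights")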

Installation

Install from source by cloning the repository into ComfyUI's custom_nodes directory and installing its Python dependencies:

git clone https://github.com/mit-han-lab/ComfyUI-nunchaku.git /path/to/ComfyUI/custom_nodes/ComfyUI-nunchaku
pip install -r /path/to/ComfyUI/custom_nodes/ComfyUI-nunchaku/requirements.txt
Restart ComfyUI and follow the repository README for any additional runtime or GPU-driver steps, including installing the Nunchaku inference backend itself.
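
After restarting, one way to confirm the plugin registered its nodes is to query ComfyUI's HTTP API. The sketch below assumes a default local server on port 8188 and simply filters the registered node classes for names containing "Nunchaku"; adjust the host and port to your setup.

# Minimal sanity check, assuming ComfyUI is running locally on the default port 8188.
# GET /object_info returns every registered node class; filter for Nunchaku nodes.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:8188/object_info") as resp:
    nodes = json.load(resp)

nunchaku_nodes = sorted(name for name in nodes if "nunchaku" in name.lower())
print("Nunchaku nodes registered:", nunchaku_nodes or "none found - check the ComfyUI log")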

Key Features

  • 4-bit inference using SVDQuant-quantized weights to reduce VRAM usage during model runs
  • Multi‑LoRA: apply and combine several LoRA modules on the fly within ComfyUI graphs (see the workflow sketch after this list)
  • ControlNet compatibility: use ControlNet nodes and conditioning together with the Nunchaku backend
  • FP16 attention support to lower memory consumption while preserving attention precision
  • Designed for modern GPUs: aims for compatibility with contemporary GPU toolchains and drivers
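
Because these features are exposed as ordinary ComfyUI nodes, Nunchaku-backed graphs can also be driven programmatically through ComfyUI's standard /prompt HTTP endpoint. The sketch below is illustrative and rests on two assumptions: a workflow has already been exported from the ComfyUI editor in API format under the hypothetical filename nunchaku_workflow.json, and the server is running locally on the default port 8188. A multi-LoRA setup works the same way: chain the plugin's LoRA nodes in the graph editor, export, and queue.

# Sketch: queue a saved Nunchaku workflow through ComfyUI's HTTP API.
# Assumes a workflow exported from ComfyUI in API format as nunchaku_workflow.json
# (hypothetical filename) and a local server on the default port 8188.
import json
import urllib.request

with open("nunchaku_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id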

Community

ComfyUI-nunchaku is developed on GitHub under the mit-han-lab organization. According to the repository, the project accepts issues and pull requests and provides README-based setup guidance. Community engagement primarily happens via GitHub issues, forks, and discussion threads where users share setup tips, troubleshooting, and usage examples (for example low‑VRAM inference, multi‑LoRA pipelines, and ControlNet integrations). For the latest activity, installation help, and compatibility reports, consult the repository’s issue tracker and README.

Last Refreshed: 2026-01-09

Key Information

  • Category: Image Tools
  • Type: AI Image Tool