InvokeAI - AI Workflow Tools Tool

Overview

InvokeAI is an open-source creative engine and web-based UI for Stable Diffusion–family image generation, editing, and production workflows. It combines a locally hosted React WebUI, a unified canvas for in-/out-painting and iterative editing, and a node-based workflow editor that lets artists and teams build, save, and share multi-step pipelines. According to the project repository, InvokeAI supports ckpt and diffusers model formats and modern model families such as SD1.5, SD2.x, SDXL, and Flux. ([github.com](https://github.com/invoke-ai/InvokeAI))

The project is distributed under an Apache-2.0 license and is maintained as a community-driven repo with an installer/launcher, PyPI packages, and a dedicated docs site. The official documentation and release notes highlight features like a Model Manager (v2), embedding/LoRA support, ControlNet/IP adapters, a board/gallery system, upscalers, and low-VRAM modes for smaller GPUs. Recent release notes show ongoing active development (regular minor releases and release candidates) and canvas/workflow improvements. ([invoke-ai.github.io](https://invoke-ai.github.io/InvokeAI/))

InvokeAI is intended both for local self-hosting (Community Edition, free) and as the basis for commercial/hosted offerings (separate commercial products use the same UI foundation). For self-hosting, the docs list supported OSes, recommended VRAM (8 GB+ for many XL workflows), and installation paths (Launcher, pip, Docker). ([github.com](https://github.com/invoke-ai/InvokeAI))

GitHub Statistics

  • Stars: 26,529
  • Forks: 2,755
  • Contributors: 335
  • License: Apache-2.0
  • Primary Language: TypeScript
  • Last Updated: 2026-01-08T21:48:16Z
  • Latest Release: v6.10.0

Key Features

  • Unified Canvas: integrated in-/out‑painting, brush tools, layers and infinite undo.
  • Node-based Workflow Editor: build, save, and share multi-step graph pipelines.
  • Model Manager: import, identify, and manage ckpt and diffusers models (in‑place option).
  • Broad model support: SD1.5, SD2.x, SDXL, Flux and both ckpt & diffusers formats.
  • Embeddings & LoRA support: load/manage embeddings and LoRA weights for conditioning.
  • Control adapters: ControlNet and Image-Prompt (IP) adapter support for guided editing.
  • Low‑VRAM modes & upscaling: memory‑saving runtime options and integrated upscalers.
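
The low-VRAM runtime mentioned above is typically enabled through the server's invokeai.yaml configuration file. The fragment below is a minimal sketch; the setting name `enable_partial_loading` is an assumption based on the project's low-VRAM guide, and available options vary by version, so verify against the docs for your release.

```yaml
# invokeai.yaml: illustrative low-VRAM fragment (assumed setting name;
# check the Low-VRAM mode page of the docs for your InvokeAI version).
enable_partial_loading: true  # stream model weights to the GPU in chunks
```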

Example Usage

Example (python):

import asyncio

from invoke import Invoke
from invoke.api import BaseModels, ModelType


async def main():
    # Connect to a running Invoke backend / launcher.
    invoke = Invoke()
    print('Waiting for Invoke server...')
    version = await invoke.wait_invoke()
    print(f'Invoke version: {version}')

    # List available main models (example: SDXL).
    models = await invoke.models.list(base_models=[BaseModels.SDXL], model_type=[ModelType.Main])
    print('SDXL models available:', models)


if __name__ == '__main__':
    asyncio.run(main())

Notes: The official invokeai-python client provides helpers for workflows, submissions, and mapping outputs to files. See the client package docs and examples for workflow execution.
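
Besides the Python client, a running InvokeAI server also exposes a local REST API (by default on port 9090). The sketch below builds and parses a version-check request; the base URL and the `/api/v1/app/version` path are assumptions based on default settings, so confirm them against your instance's interactive API docs.

```python
import json
import urllib.request

# Default local InvokeAI server address (an assumption; adjust host/port as needed).
BASE_URL = "http://127.0.0.1:9090"


def version_url(base_url: str = BASE_URL) -> str:
    """Build the app-version endpoint URL (path assumed from the default API)."""
    return f"{base_url.rstrip('/')}/api/v1/app/version"


def parse_version(payload: str) -> str:
    """Extract the version string from the endpoint's JSON body."""
    return json.loads(payload)["version"]


# Usage against a live server (uncomment with the server running):
# with urllib.request.urlopen(version_url(), timeout=5) as resp:
#     print("InvokeAI version:", parse_version(resp.read().decode("utf-8")))
```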

Pricing

InvokeAI (the open-source project) is available free under the Apache-2.0 license for local self-hosting (Community Edition). The same UI and engine also form the foundation of separate commercial/hosted Invoke products; the commercial offering (Invoke / invoke.com) lists subscription plans (Starter/Indie/Premier/Enterprise) beginning at $15/month for the hosted Starter plan, with larger team and enterprise tiers available. (See the GitHub project for Community Edition details and invoke.com for hosted pricing.) ([github.com](https://github.com/invoke-ai/InvokeAI))

Benchmarks

GitHub stars: 26.5k (Source: https://github.com/invoke-ai/InvokeAI)

GitHub forks: 2.8k (Source: https://github.com/invoke-ai/InvokeAI)

Repository commits: 18,714 commits (Source: https://github.com/invoke-ai/InvokeAI)

Open issues (approx.): 582 (Source: https://github.com/invoke-ai/InvokeAI)

Recommended GPU VRAM for SDXL workflows: 8 GB+ (Source: https://invoke-ai.github.io/InvokeAI/installation/requirements/)

Last Refreshed: 2026-01-09

Key Information

  • Category: Workflow Tools
  • Type: AI Workflow Tools Tool