Self‑hosted AI Starter Kit - AI Developer Tools Tool
Overview
Self‑hosted AI Starter Kit is an open-source Docker Compose template curated by the n8n team to rapidly bootstrap a local AI + low‑code environment that combines n8n with commonly used self‑hosted AI components. The kit wires together self‑hosted n8n (the low‑code workflow engine), Ollama for running local LLMs, Qdrant as a high‑performance vector store, and PostgreSQL for relational storage — plus preconfigured example workflows to get you experimenting quickly. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit)) Designed primarily for proof‑of‑concepts and local development, the starter kit provides deployment profiles for CPU, Nvidia GPU, and AMD GPU hosts, a shared host folder mounted into the n8n container for file access, and sample AI agent/chat workflows you can open immediately after booting the stack. The repository and official docs emphasize that this is intended as a developer/deployment starting point (not a hardened production appliance), and recommend securing and hardening the stack before production use. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit))
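To make the local‑LLM piece concrete, here is a minimal sketch that calls Ollama's HTTP API from the host once the stack is running. It assumes the kit publishes Ollama on its default port 11434 and that a model such as llama3.2 has already been pulled; check docker-compose.yml for the actual port mapping and substitute whichever model you use.

```bash
# Minimal sketch: query a locally hosted model through Ollama's REST API.
# Port 11434 and the llama3.2 model name are assumptions -- adjust to your setup.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Summarise what n8n does in one sentence.", "stream": false}'
```

Inside n8n itself, workflows would normally reach the same model through the Ollama chat/model nodes rather than raw HTTP calls.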
GitHub Statistics
- Stars: 13,782
- Forks: 3,526
- Contributors: 14
- License: Apache-2.0
- Last Updated: 2026-01-06T14:08:20Z
The repository shows strong adoption (≈13.8k stars, ≈3.5k forks, 14 contributors) and is Apache‑2.0 licensed, with ongoing commits and community contributions curated by the n8n team. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit)) Community signals show active discussion and troubleshooting in the official n8n forum and in GitHub issues, covering installation questions, AI agent node behavior, and license/feature registration reports. While the kit is widely used for demos and proof‑of‑concepts, several reported issues and forum threads highlight setup and integration edge cases to be aware of. ([community.n8n.io](https://community.n8n.io/t/self-hosted-ai-starter-kit/56525?utm_source=openai))
Installation
Install via Docker Compose:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env  # edit .env to update secrets and passwords
docker compose --profile cpu up
docker compose --profile gpu-nvidia up  # Nvidia GPU hosts
docker compose --profile gpu-amd up  # AMD GPU (Linux) hosts
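Once the chosen profile is up, a quick smoke test is to check the containers, tail the n8n logs, and confirm the shared‑folder mount; run these from a second terminal, or start the stack with the -d flag. The service name n8n, the ./shared host folder, and the /data/shared mount point below are assumptions based on the kit's defaults; confirm them against docker-compose.yml.

```bash
# Rough post-install check (sketch). The service name "n8n" and the
# ./shared -> /data/shared bind mount are assumed defaults -- verify them
# in docker-compose.yml before relying on this.
docker compose --profile cpu ps   # use the same --profile you passed to `up`
docker compose logs -f n8n        # n8n's editor is served on http://localhost:5678

# Check that the shared host folder is visible inside the n8n container:
echo "hello from the host" > ./shared/test.txt
docker compose exec n8n ls /data/shared
```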
Key Features
- Prewired n8n + Ollama integration for running local LLMs without external APIs.
- Qdrant vector store preconfigured for retrieval / RAG workflows (see the sketch after this list).
- PostgreSQL included for durable workflow and credential storage.
- Docker Compose profiles for CPU, Nvidia GPU, and AMD GPU deployment.
- Shared host folder mounted at /data/shared for local file access in workflows.
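To illustrate the vector‑store piece referenced in the Qdrant bullet above, the sketch below exercises Qdrant's REST API from the host: create a collection, upsert one point, and run a nearest‑neighbour search. Port 6333, the collection name demo, and the tiny 4‑dimensional vectors are illustrative assumptions; in real RAG workflows the n8n Qdrant nodes manage collections and store embedding‑sized vectors generated through Ollama.

```bash
# Illustrative Qdrant REST sketch, assuming the default port 6333 is published.
# Collection "demo" and the 4-d vectors are hypothetical stand-ins for real embeddings.
curl -X PUT http://localhost:6333/collections/demo \
  -H 'Content-Type: application/json' \
  -d '{"vectors": {"size": 4, "distance": "Cosine"}}'

curl -X PUT http://localhost:6333/collections/demo/points \
  -H 'Content-Type: application/json' \
  -d '{"points": [{"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "payload": {"text": "hello"}}]}'

curl -X POST http://localhost:6333/collections/demo/points/search \
  -H 'Content-Type: application/json' \
  -d '{"vector": [0.1, 0.2, 0.3, 0.4], "limit": 1}'
```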
Community
Adoption is high (many stars and forks) and the n8n community actively discusses the kit in the official forum. Common feedback covers installation edge cases, GPU setup, and some AI‑node behavior/visualization differences in self‑hosted deployments. Users recommend the kit for rapid prototyping but advise hardening and careful configuration before production use. See the GitHub repo and n8n community threads for real‑world reports and troubleshooting. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit))
Key Information
- Category: Developer Tools
- Type: AI Developer Tools Tool