Self‑hosted AI Starter Kit - AI Developer Tools Tool

Overview

Self‑hosted AI Starter Kit is an open‑source Docker Compose template from the n8n team that quickly boots a local AI + low‑code environment for proof‑of‑concepts and developer labs. The kit bundles a self‑hosted n8n instance (the low‑code workflow editor), Ollama for running local LLMs, Qdrant as a high‑performance vector store, and PostgreSQL for persistent workflow data, all prewired so you can run Retrieval‑Augmented Generation (RAG) pipelines and multi‑step AI agents locally. According to the project README, the kit provides preconfigured workflow templates, a shared host filesystem mount for processing local files (/data/shared), and hardware‑aware Docker Compose profiles for CPU, NVIDIA, and AMD GPU setups, so the same workflow UI works across different host machines. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit))

The starter kit is positioned for fast experimentation rather than turnkey production: it gets developers and teams running chat, summarization, and agent workflows in minutes while keeping data on‑premises. n8n's blog and docs describe the kit as a way to reduce initial integration complexity (networking, storage, and model hosting) and to provide immediate examples, such as a chat workflow that downloads Llama 3.2 via Ollama on first run. Users should plan to harden and adapt the compose configuration before any production deployment. ([blog.n8n.io](https://blog.n8n.io/self-hosted-ai/?utm_source=openai))
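The hardware‑aware profiles mentioned above can be illustrated with a minimal Compose fragment. This is a sketch, not the kit's actual docker-compose.yml; the service names, image tag, and device‑reservation block here are assumptions based on standard Docker Compose conventions:

```yaml
# Hypothetical excerpt showing profile-gated services; the real
# docker-compose.yml in the repository differs in detail.
services:
  ollama-cpu:
    image: ollama/ollama:latest
    profiles: ["cpu"]          # started only by: docker compose --profile cpu up
  ollama-gpu:
    image: ollama/ollama:latest
    profiles: ["gpu-nvidia"]   # started only by: docker compose --profile gpu-nvidia up
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Services without a `profiles` key start under every profile, which is how a single compose file can serve the shared components (workflow editor, database, vector store) across CPU-only and GPU hosts.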

GitHub Statistics

  • Stars: 14,230
  • Forks: 3,634
  • Contributors: 14
  • License: Apache-2.0
  • Last Updated: 2026-01-06T14:08:20Z

Repository activity and community signals show strong interest and a healthy open‑source footprint. The GitHub repository has roughly 14.2k stars and ~3.6k forks, is published under the Apache‑2.0 license, and remains a concise, single docker‑compose‑driven template with ~39 commits in the main history and a small number of open PRs and discussions visible on the project page. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit)) The project was announced and is maintained by the n8n team, and it has attracted sustained conversation in the n8n forum since launch: several community threads document troubleshooting and setup tips (for example, Ollama and Qdrant integration questions), indicating active user testing and community support. Public indexes report the repository's last commit date as January 6, 2026. ([community.n8n.io](https://community.n8n.io/t/share-your-creations-introducing-the-self-hosted-ai-starter-kit/52180?utm_source=openai))

Installation

Install via Docker Compose:

git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env  # update secrets and passwords inside .env
docker compose --profile cpu up  # CPU-only systems (default for most Linux/servers)
docker compose --profile gpu-nvidia up  # Nvidia GPU systems (requires NVIDIA Container Toolkit)
docker compose --profile gpu-amd up  # AMD GPU on Linux (ROCm)
docker compose up  # Mac/Apple Silicon (if running Ollama locally or for basic CPU usage)
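After `docker compose up`, a quick way to confirm the containers are listening is a TCP check against the stack's published ports. The ports below are assumptions based on the components' usual defaults (n8n on 5678, Ollama on 11434, Qdrant on 6333, PostgreSQL on 5432); this readiness check is an illustrative sketch, not part of the kit itself:

```python
# Hypothetical readiness check for the starter kit's default ports.
import socket

SERVICES = {"n8n": 5678, "ollama": 11434, "qdrant": 6333, "postgres": 5432}

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        state = "up" if is_up("localhost", port) else "not reachable"
        print(f"{name:10s} localhost:{port} -> {state}")
```

A TCP connect only proves the port is bound; for n8n specifically, opening http://localhost:5678 in a browser is the definitive check that the editor is serving.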

Key Features

  • Prewired stack combining n8n, Ollama (local LLMs), Qdrant (vector store), and PostgreSQL.
  • Hardware profiles: docker compose profiles for CPU, NVIDIA GPU, and AMD GPU deployments.
  • Shared host filesystem mounted into n8n at /data/shared for secure local file processing.
  • Preconfigured AI workflow templates (chat, RAG, agent examples) to import and run immediately.
  • Single Docker Compose entrypoint to bootstrap services and expose n8n at http://localhost:5678.
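Once the stack is running, the bundled Ollama instance can also be called directly over HTTP from outside n8n, which is handy for debugging model behavior independently of a workflow. The sketch below uses Ollama's documented /api/generate endpoint; the port (Ollama's default 11434) and the llama3.2 model name (which the kit's demo chat workflow pulls on first run) are assumptions about your local setup:

```python
# Sketch: querying the kit's Ollama service directly, using only the stdlib.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed default port

def build_payload(prompt: str, model: str = "llama3.2") -> dict:
    """Build a non-streaming generate request for Ollama's HTTP API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """POST the prompt to Ollama and return the model's full response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what does a vector store do?"))
```

Setting `"stream": False` makes Ollama return a single JSON object instead of newline-delimited chunks, which keeps the client a one-line parse.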

Community

n8n announced the starter kit and maintains the repository; the project has strong community interest (14k+ stars and thousands of forks) and active discussion on the n8n forum and GitHub. Community feedback shows users successfully running the kit for proof‑of‑concepts while also raising practical setup questions (for example, Ollama model pulls, and Qdrant connection and API key handling). Forum threads and GitHub Discussions are the primary support channels; maintainers and community members have posted troubleshooting tips and environment guidance there. Users are advised to treat the kit as a fast experimentation platform and to perform additional hardening before production use. ([github.com](https://github.com/n8n-io/self-hosted-ai-starter-kit))

Last Refreshed: 2026-03-03

Key Information

  • Category: Developer Tools
  • Type: AI Developer Tools Tool