Tabby - AI Code Assistants Tool

Overview

Tabby is an open-source, self‑hosted AI coding assistant that brings chat-driven code completions, repository-aware suggestions, and an in‑IDE Answer Engine to developer workflows. It is designed to run on your own infrastructure (Docker, single‑node or multi‑worker), integrates with popular editors such as VS Code, Vim/Neovim, and the JetBrains IDEs, and exposes an OpenAPI‑compatible interface for custom integrations. ([github.com](https://github.com/TabbyML/tabby))

Unlike cloud‑hosted copilots, Tabby emphasizes on‑premises control and configurable context providers: it can index private Git repositories, developer documentation, and PR/MR/issue data to produce contextually relevant, RAG‑style completions and chat answers. The project ships ready‑to‑run Docker images and a registry of recommended models with GPU guidance, making it practical to run on consumer GPUs or in production clusters. Key built‑in features include multi‑line / full‑function completions, an Answer Engine with “@” document mentions, a Code Browser for code search, and inline editing via chat commands inside editors. ([tabby.tabbyml.com](https://tabby.tabbyml.com/docs/administration/context/))
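Because the server exposes an OpenAPI-compatible interface, custom integrations can call it directly over HTTP. The sketch below assumes the completion route is `POST /v1/completions` with a `language`/`segments` request body and bearer-token auth, as described in Tabby's API docs; verify the exact schema against your server's own OpenAPI page before relying on it.

```python
import json
from urllib import request


def build_completion_request(prefix: str, suffix: str = "",
                             language: str = "python") -> dict:
    """Build a completion request body.

    Field names (language, segments.prefix, segments.suffix) are an
    assumption based on Tabby's documented completion API; check your
    server's OpenAPI spec.
    """
    return {"language": language,
            "segments": {"prefix": prefix, "suffix": suffix}}


def complete(endpoint: str, token: str, prefix: str) -> str:
    """POST a completion request and return the first suggestion's text.

    `endpoint` and `token` are placeholders for your own deployment,
    e.g. endpoint="http://localhost:8080".
    """
    body = json.dumps(build_completion_request(prefix)).encode()
    req = request.Request(
        f"{endpoint}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

A call such as `complete("http://localhost:8080", "<your-token>", "def fib(n):")` would then return the server's suggested continuation.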

GitHub Statistics

  • Stars: 32,751
  • Forks: 1,670
  • Contributors: 116
  • License: NOASSERTION
  • Primary Language: Rust
  • Last Updated: 2026-01-12T08:20:03Z
  • Latest Release: v0.31.2

The Tabby repository is active and widely adopted: the public GitHub repo shows ~32.7k stars and ~1.7k forks, with hundreds of open issues and an active release cadence (multiple releases and changelog entries through 2024–2025). This indicates strong community interest and active maintenance. The project provides frequent changelogs and a documented roadmap, and the codebase (written primarily in Rust) has thousands of commits and many contributors. ([github.com](https://github.com/TabbyML/tabby))

Installation

Install via Docker (CUDA GPU):

docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model StarCoder-1B --device cuda --chat-model Qwen2-1.5B-Instruct

In VS Code: open Quick Open (Ctrl/Cmd+P) and run ext install TabbyML.vscode-tabby, then connect the extension to your Tabby server endpoint and token.
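The `-v $HOME/.tabby:/data` mount persists models and configuration across container restarts. In earlier Tabby releases, repository indexing could be declared in `~/.tabby/config.toml`; newer versions manage context providers through the admin web UI, so treat the table name and keys below as a sketch of that older form rather than a current reference:

```toml
# ~/.tabby/config.toml — sketch; [[repositories]] with name/git_url is the
# older file-based form, current releases configure providers in the web UI.
[[repositories]]
name = "my-project"
git_url = "https://github.com/example/my-project"
```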

Key Features

  • Repository-aware code completions: indexes GitHub/GitLab repos for contextually relevant suggestions.
  • Answer Engine: chat Q&A inside the IDE with '@' mentions to include docs or files.
  • Inline editing via chat: request edits and insertions directly from the editor chat panel.
  • Context Providers: pull in repo code, developer docs, PRs/MRs, and other sources as context.
  • Self‑hosted deployment: Docker images, compose examples, and model registry with GPU recommendations.
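The chat side of the Answer Engine is also reachable over HTTP. The sketch below assumes Tabby's chat route follows the OpenAI chat-completions request shape (an OpenAI-compatible `/v1/chat/completions` endpoint, sent with the same bearer-token header as code completions); the model name is just the chat model from the install example above.

```python
import json


def build_chat_request(question: str,
                       model: str = "Qwen2-1.5B-Instruct") -> dict:
    """Build an OpenAI-style chat request body.

    Assumption: Tabby's chat endpoint accepts the standard OpenAI
    chat-completions shape; confirm the route and fields against your
    server's API docs.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }


payload = json.dumps(build_chat_request("What does serve --device cuda do?"))
```

POSTing `payload` to `<endpoint>/v1/chat/completions` would return an OpenAI-style response whose answer text sits under `choices[0].message.content`.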

Community

Tabby has an active open‑source community (tens of thousands of GitHub stars), a public docs site and Slack workspace, official editor extensions (VS Code, Vim/Neovim, JetBrains), third‑party plugins, and regular releases with changelogs. Community support is available via GitHub issues/discussions and the project Slack; the project also maintains a models registry and deployment guides for common setups. ([github.com](https://github.com/TabbyML/tabby))

Last Refreshed: 2026-01-17

Key Information

  • Category: Code Assistants
  • Type: AI Code Assistants Tool