Tabby - AI Code Assistants Tool

Overview

Tabby is an open-source, self‑hosted AI coding assistant that provides multi-line autocompletion, an in‑IDE chat/Answer Engine, and repository‑aware code context as an alternative to cloud copilots. It is designed to run on-prem or on personal hardware (consumer GPUs supported) and exposes an OpenAPI‑compatible interface so teams can integrate Tabby into cloud IDEs, CI workflows, or internal tooling. ([github.com](https://github.com/TabbyML/tabby))

The project supports mixing completion and chat backends (e.g., StarCoder for completions and Qwen variants for chat) and includes a curated model registry with recommended hardware guidance (1B–3B models on T4/M1; 7B+ on V100/A100 or modern 30/40‑series GPUs). Tabby also provides data connectors and repo indexing so suggestions and answers can be grounded in a team's code, PRs, or documentation. Getting started is intentionally fast: Docker images, a CLI, and build-from-source instructions are all documented. ([tabby.tabbyml.com](https://tabby.tabbyml.com/docs/models))
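The OpenAPI‑compatible interface mentioned above can be exercised directly over HTTP. Below is a minimal sketch, assuming a Tabby server on the default port 8080 and a `/v1/completions` endpoint taking a `segments` object with `prefix`/`suffix` fields; confirm the exact payload shape against your server's generated OpenAPI documentation before relying on it.

```python
import json
import urllib.request

TABBY_URL = "http://localhost:8080/v1/completions"  # assumed default host/port

def build_completion_request(prefix: str, suffix: str = "", language: str = "python") -> str:
    """Build the JSON body for a fill-in-the-middle completion request."""
    payload = {
        "language": language,
        "segments": {"prefix": prefix, "suffix": suffix},
    }
    return json.dumps(payload)

if __name__ == "__main__":
    # Actually sending the request requires a running Tabby server:
    req = urllib.request.Request(
        TABBY_URL,
        data=build_completion_request("def fib(n):\n    ").encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

The same request can be issued from any language or CI job, which is the point of the OpenAPI-first design.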

GitHub Statistics

  • Stars: 32,973
  • Forks: 1,689
  • Contributors: 118
  • License: NOASSERTION (GitHub detects no single SPDX license; see the repository's LICENSE file for terms)
  • Primary Language: Rust
  • Last Updated: 2026-03-02T08:20:29Z
  • Latest Release: v0.32.0

The repository is actively maintained and popular: roughly 33k stars and 1.7k forks on GitHub, with hundreds of releases and thousands of commits reflecting frequent feature work and bug fixes. Recent months show ongoing commits adding OAuth support, multi‑branch indexing, and model/embedding updates, all indicators of active development and responsiveness to enterprise needs. Community activity includes issue reports, PRs, and a public Slack for discussion. ([github.com](https://github.com/TabbyML/tabby))

Installation

Install via Docker (CUDA GPU):

docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model StarCoder-1B --device cuda --chat-model Qwen2-1.5B-Instruct

Or build from source (requires the Rust toolchain):

git clone --recurse-submodules https://github.com/TabbyML/tabby && cd tabby
# For Ubuntu/Debian: apt install protobuf-compiler libopenblas-dev
# For macOS: brew install protobuf
cargo build
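Once `tabby serve` is running, a quick health check confirms the server is reachable before wiring up IDE plugins. A small sketch, assuming the default port and a `/v1/health` endpoint (the helper names here are illustrative, not part of Tabby):

```python
import json
import urllib.request

def health_url(base_url: str = "http://localhost:8080") -> str:
    """Build the health-check URL for a Tabby instance."""
    return base_url.rstrip("/") + "/v1/health"

def check_tabby(base_url: str = "http://localhost:8080") -> dict:
    """Fetch health info from a running server (requires Tabby to be up)."""
    with urllib.request.urlopen(health_url(base_url), timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(check_tabby())
```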

Key Features

  • Multi-line, context-aware code completions that can generate entire functions in real time.
  • Inline editing and 'explain this' commands from the IDE to request edits or explanations.
  • Answer Engine: chat-based Q&A grounded in indexed repository content and documentation.
  • Data connectors / Context Providers: index repos, GitLab/GitHub PRs, docs, and external APIs.
  • Model registry support: run StarCoder, CodeLlama, Qwen, Mistral and many embeddings locally.
  • OpenAPI + REST endpoints for integration with cloud IDEs, CI systems, or custom tooling.
  • IDE plugins for VS Code, JetBrains, and Vim/NeoVim for in-editor completions and chat.
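The chat/Answer Engine features listed above are typically driven through an OpenAI‑style chat-completions endpoint; treat the path `/v1/chat/completions`, the model name, and the field names in this sketch as assumptions to verify against your server version:

```python
import json

def build_chat_request(question: str, model: str = "Qwen2-1.5B-Instruct") -> str:
    """Build an OpenAI-style chat payload for Tabby's chat endpoint."""
    payload = {
        "model": model,  # assumed chat model; match whatever `serve` was given
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }
    return json.dumps(payload)

# POST this body to http://<tabby-host>:8080/v1/chat/completions, adding an
# Authorization header if authentication is enabled on the instance.
```

Because the shape mirrors the OpenAI chat API, existing OpenAI-compatible client libraries can usually be pointed at a Tabby instance by overriding the base URL.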

Community

Tabby has a large and engaged open‑source community (many stars, active issues and PRs, a public Slack, and a newsletter). Development is frequent; recent work added OAuth, multi‑branch indexing, and embedding features, and maintainers accept contributions via PRs. Common community feedback includes strong praise for self‑hosting and privacy, alongside requests for improved Windows/build tooling and occasional installation/compatibility bug reports (tracked in the issues queue). For hands‑on help, the project maintains documentation, a Slack channel, and an active issues/PR queue. ([github.com](https://github.com/TabbyML/tabby))

Last Refreshed: 2026-03-03

Key Information

  • Category: Code Assistants
  • Type: AI Code Assistants Tool