Self-hosted AI coding assistant. An opensource / on-prem alternative to GitHub Copilot.
> **Warning** Tabby is still in the alpha phase.
## Features
- Self-contained, with no need for a DBMS or cloud service
- Web UI for visualizing and configuring models and MLOps
- OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE)
- Consumer-grade GPU support (FP-16 weight loading with various optimizations)
## Demo
## Getting Started: Server
### Docker
We recommend adding the following aliases to your `.bashrc` or `.zshrc` file:
```bash
# Save aliases to .bashrc / .zshrc
alias tabby="docker run -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

# Alias for GPU usage (requires the NVIDIA Container Toolkit)
alias tabby-gpu="docker run --gpus all -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"
```
After adding these aliases, you can use the `tabby` command as usual. Here are some examples of its usage:
```bash
# Show usage
tabby --help

# Serve the model
tabby serve --model TabbyML/J-350M
```
## Getting Started: Client
We offer several ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.
### API
Tabby exposes a FastAPI server at `localhost:8080`, which serves OpenAPI documentation for the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby.
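As a sketch of what a raw HTTP call might look like, the snippet below builds a completion request using only the Python standard library. The `/v1/completions` path and the `language`/`prompt` fields are assumptions for illustration; consult the OpenAPI documentation served by your Tabby instance for the actual endpoint and schema.

```python
import json
import urllib.request

# Hypothetical request body; field names are assumptions — check the
# OpenAPI docs at http://localhost:8080 for the real schema.
payload = {
    "language": "python",     # language of the file being edited
    "prompt": "def fib(n):",  # code context preceding the cursor
}

# Build a POST request against an assumed completion endpoint.
req = urllib.request.Request(
    "http://localhost:8080/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a running server, you would send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Editor extensions wrap this kind of request for you, so most users never need to call the API directly.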
### Editor Extensions
- VSCode Extension – Install from the marketplace, or open-vsx.org
- VIM Extension