🐾 Tabby


[architecture diagram]

Self-hosted AI coding assistant. An open-source, on-premises alternative to GitHub Copilot.

Warning: Tabby is still in the alpha phase.

Features

  • Self-contained, with no need for a DBMS or cloud service.
  • Web UI for visualizing and configuring models and MLOps.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Consumer-level GPU support (FP16 weight loading with various optimizations).

Get started

Docker

The easiest way to get started is with the official Docker image:

# Create the data directory and grant ownership to UID 1000
# (Tabby runs as UID 1000 inside the container)
mkdir -p data/hf_cache && chown -R 1000 data

docker run \
  -it --rm \
  -v ./data:/data \
  -v ./data/hf_cache:/home/app/.cache/huggingface \
  -p 5000:5000 \
  -e MODEL_NAME=TabbyML/J-350M \
  tabbyml/tabby

To use the GPU backend (Triton) for faster inference:

docker run \
  --gpus all \
  -it --rm \
  -v ./data:/data \
  -v ./data/hf_cache:/home/app/.cache/huggingface \
  -p 5000:5000 \
  -e MODEL_NAME=TabbyML/J-350M \
  -e MODEL_BACKEND=triton \
  tabbyml/tabby

Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.

You can then query the server using the /v1/completions endpoint:

curl -X POST http://localhost:5000/v1/completions -H 'Content-Type: application/json' --data '{
    "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
}'
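The same request can be sent from Python. The sketch below, using only the standard library, builds the identical POST request the curl example sends; since the response schema isn't documented here, the helper simply returns the parsed JSON:

```python
import json
import urllib.request

API_URL = "http://localhost:5000/v1/completions"  # default port from the docker command above

def build_request(prompt, url=API_URL):
    """Build the same JSON POST request that the curl example sends."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

def complete(prompt, url=API_URL):
    """Send the prompt to a running Tabby server and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(prompt, url)) as resp:
        return json.load(resp)
```

With the Docker container from above running, `complete("def binarySearch(arr, left, right, x):\n    mid = (left +")` returns the server's completion response.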

We also provide an interactive playground in the admin panel at localhost:5000/_admin


Skypilot

See deployment/skypilot/README.md

API documentation

Tabby runs a FastAPI server at localhost:5000, which embeds OpenAPI documentation of the HTTP API.
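FastAPI conventionally serves the machine-readable schema at /openapi.json alongside its interactive docs. A small sketch (the /openapi.json path is a FastAPI convention, not something this README specifies) that lists the routes a running server exposes:

```python
import json
import urllib.request

def fetch_schema(host="http://localhost:5000"):
    """Download the OpenAPI schema embedded by the FastAPI server."""
    with urllib.request.urlopen(f"{host}/openapi.json") as resp:
        return json.load(resp)

def list_paths(schema):
    """Return the HTTP paths declared in an OpenAPI schema, sorted."""
    return sorted(schema.get("paths", {}))
```

With the server running, `list_paths(fetch_schema())` should include the /v1/completions endpoint used above.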

Development

Go to the development directory and run:

make dev

or

make dev-triton # Turn on the Triton backend (for CUDA-environment developers)

TODOs

  • VIM Client #36
  • Fine-tuning models on private code repositories. #23
  • Production readiness (OpenTelemetry, Prometheus metrics).