
# 🐾 Tabby


*Architecture diagram*

> **Warning**: Tabby is still in the alpha phase.

An open-source, on-premises alternative to GitHub Copilot.

## Features

- Self-contained, with no need for a DBMS or cloud service.
- Web UI for visualizing and configuring models and MLOps.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g. Cloud IDE).
- Consumer-grade GPU support (FP16 weight loading with various optimizations).

## Get started

### Docker

The easiest way to get started is with the official Docker image:

```bash
# Create the data dir and make UID 1000 its owner (Tabby runs as UID 1000 inside the container)
mkdir -p data/hf_cache && chown -R 1000 data

docker run \
  -it --rm \
  -v ./data:/data \
  -v ./data/hf_cache:/home/app/.cache/huggingface \
  -p 5000:5000 \
  -p 8501:8501 \
  -p 8080:8080 \
  -e MODEL_NAME=TabbyML/J-350M \
  tabbyml/tabby
```

You can then query the server using the `/v1/completions` endpoint:

```bash
curl -X POST http://localhost:5000/v1/completions \
  -H 'Content-Type: application/json' \
  --data '{
    "prompt": "def binarySearch(arr, left, right, x):\n    mid = (left +"
}'
```
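The same call can be made from Python's standard library. A minimal sketch — the `build_completion_request` helper and `TABBY_URL` constant are illustrative names, and only the `prompt` field comes from the example above:

```python
import json
from urllib import request

TABBY_URL = "http://localhost:5000"  # adjust if the server runs elsewhere

def build_completion_request(prompt: str) -> request.Request:
    """Build a POST request for Tabby's /v1/completions endpoint."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(
        f"{TABBY_URL}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a running server, send the request and print the JSON response:
# with request.urlopen(build_completion_request("def fib(n):")) as resp:
#     print(json.load(resp))
```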

To use the GPU backend (Triton) for faster inference, use deployment/docker-compose.yml:

```bash
docker-compose up
```

> **Note**: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.

Tabby also provides an interactive playground in the admin panel at localhost:8501.


### SkyPilot

See `deployment/skypilot/README.md`.

## API documentation

Tabby runs a FastAPI server at localhost:5000, which embeds OpenAPI documentation for the HTTP API.
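Because FastAPI serves its machine-readable schema at `/openapi.json` by default, you can discover the available endpoints programmatically. A small sketch — the `list_endpoints` helper is an illustrative name, not part of Tabby:

```python
import json
from urllib import request

# FastAPI exposes its OpenAPI schema at /openapi.json by default.
SCHEMA_URL = "http://localhost:5000/openapi.json"

def list_endpoints(schema: dict) -> list:
    """Return the HTTP paths declared in an OpenAPI schema."""
    return sorted(schema.get("paths", {}))

# With a running server:
# with request.urlopen(SCHEMA_URL) as resp:
#     print(list_endpoints(json.load(resp)))
```

The interactive documentation itself is browsable at `/docs`, FastAPI's default Swagger UI path.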

## Development

Go to the `development` directory and run:

```bash
make dev
```

or

```bash
make dev-python  # Disable the Triton backend (for non-CUDA environments)
```

## TODOs

- Vim client (#36)
- Fine-tuning models on private code repositories (#23)
- Production readiness (OpenTelemetry, Prometheus metrics)