🐾 Tabby


Self-hosted AI coding assistant. An open-source, on-premises alternative to GitHub Copilot.

Warning: Tabby is still in the alpha phase.

Features

  • Self-contained, with no need for a DBMS or cloud service.
  • Web UI for visualizing and configuring models and MLOps.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Consumer-level GPU support (FP16 weight loading with various optimizations).

Demo

Try the live demo on Hugging Face Spaces.

Get started: Server

Docker

We recommend adding the following aliases to your .bashrc or .zshrc file:

# Save aliases to bashrc / zshrc
alias tabby="docker run -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

# Alias for GPU (requires NVIDIA Container Toolkit)
alias tabby-gpu="docker run --gpus all -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

Once the aliases are in place, you can invoke Tabby with the tabby command. Some examples:

# Usage
tabby --help

# Download model
tabby download --model TabbyML/J-350M

# Serve the model
tabby serve --model TabbyML/J-350M
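
Downloaded models are stored under the directory mounted into the container ($HOME/.tabby in the aliases above), so weights persist across container restarts. A quick sketch to inspect the cache on the host (the on-disk layout is an assumption; only the mount point comes from the aliases):

```shell
# The docker aliases mount $HOME/.tabby into the container as /data, so anything
# the container downloads lands here on the host and survives restarts.
ls -R "$HOME/.tabby" 2>/dev/null || echo "no Tabby data downloaded yet"
```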

Getting Started: Client

We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.

API

Tabby serves a FastAPI application at localhost:8080, which includes interactive OpenAPI documentation for the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby
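
Once the server is running, you can exercise the API directly with curl. The endpoint path and request fields below are assumptions for illustration; consult the OpenAPI documentation served at localhost:8080 for the exact schema:

```shell
# Sketch: request a completion from a local Tabby server. The /v1/completions
# path and the JSON fields are assumptions; verify them against the OpenAPI docs.
curl -s --max-time 5 -X POST http://localhost:8080/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "def fib(n):"}' \
  || echo "Tabby server is not reachable on localhost:8080"
```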

Editor Extensions