# 🐾 Tabby
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
[Code style: black](https://github.com/psf/black)
[CI status](https://github.com/TabbyML/tabby/actions/workflows/ci.yml)
[Docker Hub](https://hub.docker.com/r/tabbyml/tabby)
Self-hosted AI coding assistant. An open-source / on-premises alternative to GitHub Copilot.
> **Warning**
> Tabby is still in the alpha phase.
## Features
* Self-contained, with no need for a DBMS or cloud service
* Web UI for visualizing and configuring models and MLOps
* OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE)
* Consumer-grade GPU support (FP16 weight loading with various optimizations)
## Getting Started: Server
### Docker
We recommend adding the following aliases to your `.bashrc` or `.zshrc` file:
```shell
# Save aliases to bashrc / zshrc
alias tabby="docker run -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"
# Alias for GPU (requires NVIDIA Container Toolkit)
alias tabby-gpu="docker run --gpus all -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"
```
After adding these aliases, you can use the `tabby` command as usual. Here are some examples of its usage:
```shell
# Usage
tabby --help
# Serve the model
tabby serve --model TabbyML/J-350M
```
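If you defined the `tabby-gpu` alias above, the same subcommand serves the model from the GPU-enabled container. Depending on your setup, flags beyond `--model` may be needed; check `tabby serve --help` for the authoritative list.

```shell
# Serve the same model with GPU acceleration, via the tabby-gpu alias
# defined earlier (requires the NVIDIA Container Toolkit). Extra device
# flags may be required depending on the image — see `tabby serve --help`.
tabby-gpu serve --model TabbyML/J-350M
```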
## Getting Started: Client
We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.
### API
Tabby runs a FastAPI server at [localhost:8080](http://localhost:8080), which serves OpenAPI documentation for the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby
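As a quick smoke test, you can request a completion with `curl`. The endpoint path and JSON fields below are illustrative; since Tabby is still in alpha, confirm the exact schema against the OpenAPI docs served at localhost:8080.

```shell
# Request a code completion from the local Tabby server.
# NOTE: the path and payload shape are illustrative — verify them against
# the OpenAPI docs at http://localhost:8080 before relying on them.
curl -X POST http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "language": "python",
    "segments": {
      "prefix": "def fib(n):\n    "
    }
  }'
```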
### Editor Extensions
* [VSCode Extension](./clients/vscode) – Install from the [marketplace](https://marketplace.visualstudio.com/items?itemName=TabbyML.vscode-tabby), or [open-vsx.org](https://open-vsx.org/extension/TabbyML/vscode-tabby)
* [VIM Extension](./clients/vim)