🐾 Tabby


Warning: Tabby is still in the alpha phase.

Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:

  • Self-contained, with no need for a DBMS or cloud service.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Supports consumer-grade GPUs.

Open in Playground

Demo

👀 What's New

👋 Get Started

The easiest way to start a Tabby server is by using the following Docker command:

docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda

For additional options (e.g., inference type, parallelism), please refer to the documentation at https://tabbyml.github.io/tabby.
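Once the server is up, completions can be requested over its OpenAPI interface. The sketch below builds a request body for the completion endpoint; the endpoint path (`/v1/completions`) and field names are assumptions based on Tabby's HTTP API docs, so verify them against the Swagger UI served by your own instance:

```python
import json

# Hypothetical request body for Tabby's completion endpoint.
# Field names are assumed from Tabby's OpenAPI documentation;
# check your instance's Swagger UI before relying on them.
payload = {
    "language": "python",          # language of the file being edited
    "segments": {
        "prefix": "def fib(n):\n    ",  # code before the cursor
        "suffix": "\n",                 # code after the cursor
    },
}

body = json.dumps(payload)
print(body)
```

The resulting JSON could then be POSTed to the running server, e.g. `curl -X POST http://localhost:8080/v1/completions -H 'Content-Type: application/json' -d "$body"`.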

🌟 Star History

Star History Chart