🐾 Tabby


Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:

  • Self-contained, with no need for a DBMS or cloud service.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Supports consumer-grade GPUs.


🔥 What's New

  • 10/24/2023 Major updates for Tabby IDE plugins across VSCode/Vim/IntelliJ!
  • 10/15/2023 RAG-based code completion is enabled by default in v0.3.0🎉! Check out the blog post explaining how Tabby utilizes repo-level context to get even smarter!
  • 10/04/2023 Check out the model directory for the latest models supported by Tabby.
Archived
  • 09/21/2023 We've hit 10K stars 🌟 on GitHub! 🚀🎉👏
  • 09/18/2023 Apple's M1/M2 Metal inference support has landed in v0.1.1!
  • 08/31/2023 Tabby's first stable release v0.0.1 🥳.
  • 08/28/2023 Experimental support for CodeLlama 7B.
  • 08/24/2023 Tabby is now on JetBrains Marketplace!

👋 Getting Started

You can find our documentation here.

Run Tabby in 1 Minute

The easiest way to start a Tabby server is by using the following Docker command:

docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda

For additional options (e.g., inference type, parallelism), please refer to the documentation page.
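Once the server is up, its OpenAPI interface can be called from any HTTP client. The sketch below builds a completion request against a locally running instance; the endpoint path and payload field names shown here are assumptions based on Tabby's completion API, so consult the OpenAPI docs served by your running instance for the authoritative schema:

```python
import json
import urllib.request

# Sketch only: assumes a Tabby server listening on localhost:8080.
# Field names ("language", "segments", "prefix") follow Tabby's
# completion API; verify against the server's own OpenAPI docs.
def build_completion_request(prefix: str, language: str = "python") -> urllib.request.Request:
    payload = {"language": language, "segments": {"prefix": prefix}}
    return urllib.request.Request(
        "http://localhost:8080/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("def fib(n):\n    ")
print(req.get_full_url())

# To actually send the request (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Building the request separately from sending it makes the payload easy to inspect or adapt before pointing it at a real server.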

🤝 Contributing

Get the Code

git clone --recurse-submodules https://github.com/TabbyML/tabby
cd tabby

If you have already cloned the repository, you can run git submodule update --init --recursive to fetch all submodules.

Build

  1. Set up the Rust environment by following this tutorial.

  2. Install the required dependencies:

# For macOS
brew install protobuf

# For Ubuntu / Debian
apt-get install protobuf-compiler libopenblas-dev
  3. Now you can build Tabby by running cargo build.

Start Hacking!

... and don't forget to submit a Pull Request

🌍 Community

  • #️⃣ Slack - connect with the TabbyML community
  • 🎤 Twitter / X - engage with TabbyML for all things possible
  • 📚 LinkedIn - follow for the latest from the community
  • 💌 Newsletter - subscribe to unlock Tabby insights and secrets

🌟 Star History

Star History Chart