Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:
- Self-contained, with no need for a DBMS or cloud service.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
- Supports consumer-grade GPUs.
## 👀 What's New
- **08/31/2023** Tabby's first stable release v0.0.1 🥳.
- **08/28/2023** Experimental support for the CodeLlama 7B.
- **08/24/2023** Tabby is now on the JetBrains Marketplace!
## 👋 Getting Started
The easiest way to start a Tabby server is by using the following Docker command:

```shell
docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda
```
For additional options (e.g., inference type, parallelism), please refer to the documentation at https://tabbyml.github.io/tabby.
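Because the server exposes an OpenAPI interface, completions can also be requested directly over HTTP. Below is a minimal sketch using `curl`, assuming the server started by the Docker command above is listening on port 8080; the endpoint path and payload fields are taken from Tabby's completion API and may differ across versions, so confirm them against your server's OpenAPI docs:

```shell
# Request a completion for a Python snippet from a local Tabby server.
# The /v1/completions path and the "segments" payload shape are assumptions
# based on Tabby's OpenAPI documentation; verify them on your own instance.
curl -X POST http://localhost:8080/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "language": "python",
    "segments": {
      "prefix": "def fib(n):\n    "
    }
  }'
```

The response is JSON containing the generated completion choices, which is what the IDE clients consume under the hood.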
## 🤝 Contributing

### Get the Code

```shell
git clone --recurse-submodules https://github.com/TabbyML/tabby
cd tabby
```
### Build

- Set up the Rust environment by following this tutorial.

- Install the required dependencies:

  ```shell
  # For macOS
  brew install protobuf

  # For Ubuntu / Debian
  apt-get install protobuf-compiler libopenblas-dev
  ```

- Now, you can build Tabby by running the command `cargo build`.
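Taken together, the steps above can be sketched as a single shell session; the binary path and the `serve` flags here are assumptions (they mirror the Docker example earlier), so adjust the model and device to your setup:

```shell
# Clone the repository with its submodules and build a debug binary.
git clone --recurse-submodules https://github.com/TabbyML/tabby
cd tabby
cargo build

# The debug binary lands under target/debug. Serve on CPU for a quick
# local check (flags assumed to match the Docker example above).
./target/debug/tabby serve --model TabbyML/SantaCoder-1B --device cpu
```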
### Start Hacking!

... and don't forget to submit a Pull Request!