Meng Zhang (3573d4378e)
feat: llama.cpp for metal support [TAB-146] (#391)
* feat: init commit adding llama-cpp-bindings

* add llama.cpp submodule

* add LlamaEngine to hold llama context / llama model

* add cxxbridge

* add basic greedy sampling

* move files

* make compile success

* connect TextGeneration with LlamaEngine

* experimental support llama.cpp

* add metal device

* add Accelerate

* fix namespace for llama-cpp-bindings

* fix lint

* move stepping logic to rust

* add stop words package

* use stop-words in ctranslate2-bindings

* use raw string for regex

* use Arc<Tokenizer> for sharing tokenizers

* refactor: remove useless stop_words_encoding_offset

* switch to tokenizers 0.13.4-rc.3

* fix lints in cpp

* simplify implementation of greedy decoding

* feat: split metal feature for llama backend

* add ci

* update ci

* build tabby bin in ci build
2023-09-03 09:59:07 +08:00

README.md

🐾 Tabby


Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:

  • Self-contained, with no need for a DBMS or cloud service.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., a Cloud IDE).
  • Supports consumer-grade GPUs.


👀 What's New

👋 Get Started

The easiest way to start a Tabby server is by using the following Docker command:

docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda

For additional options (e.g., inference type, parallelism), please refer to the documentation at https://tabbyml.github.io/tabby.
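Once the server is up, completions can be requested over its HTTP interface. The sketch below is a hypothetical client using only Python's standard library; it assumes the server started by the docker command above is listening on localhost:8080 and that the `/v1/completions` endpoint accepts a `language` field and a `segments.prefix` field. Check the OpenAPI documentation served by your own deployment for the authoritative request shape.

```python
import json
from urllib import request

# Assumed endpoint: the server from the docker command above.
TABBY_URL = "http://localhost:8080/v1/completions"

def build_request(prefix: str, language: str = "python") -> request.Request:
    """Build an HTTP request asking Tabby to complete `prefix`."""
    payload = json.dumps({
        "language": language,
        "segments": {"prefix": prefix},
    }).encode("utf-8")
    return request.Request(
        TABBY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def complete(prefix: str) -> dict:
    """Send the request and return the decoded JSON response."""
    with request.urlopen(build_request(prefix)) as resp:
        return json.loads(resp.read())
```

For example, `complete("def fib(n):\n    ")` would return the server's JSON response containing suggested completion text for the given code prefix.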

🌟 Star History
