Eric
3d3f1761ad
feat: support build tabby on windows (#948)
* feat: update config to support build on windows
* resolve comment
* update release.yml
* resolve comment
2023-12-11 12:29:21 +08:00
Mikko Tiihonen
9aed0dee08
feat: Add support for 7840U iGPU type (#960)
rocminfo reports that my AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics GPU version is gfx1103
2023-12-06 23:11:05 +00:00
Meng Zhang
9c905e4849
feat: add rocm support (#913)
* Added build configurations for Intel and AMD hardware
* Improved rocm build
* Added options for OneAPI and ROCm
* Build llama using icx
* [autofix.ci] apply automated fixes
* Fixed rocm image
* Build ROCm
* Tried to adjust compile flags for SYCL
* Removed references to oneAPI
* Provide info about the used device for ROCm
* Added ROCm documentation
* Addressed review comments
* Refactored to expose generic accelerator information
* Pull request cleanup
* cleanup
* cleanup
* Delete .github/workflows/docker-cuda.yml
* Delete .github/workflows/docker-rocm.yml
* Delete crates/tabby-common/src/api/accelerator.rs
* update
* cleanup
* update
* update
* update
* update
---------
Co-authored-by: Cromefire_ <cromefire+git@pm.me>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2023-11-29 03:27:03 +00:00
Meng Zhang
23a49beaa9
feat(ci): support manylinux build for cpu / cuda (#899)
2023-11-26 16:37:12 +08:00
Maciej
ebbe6e5af8
fix: helpful message when llama.cpp submodule is not present (#719) (#775)
2023-11-13 07:51:46 +00:00
Meng Zhang
9309e0314f
fix: fix docker build
2023-10-27 21:25:45 -07:00
Meng Zhang
6dd12ce1ec
fix: adding cuda search path to docker build.
2023-10-27 19:40:35 -07:00
Meng Zhang
2d948639be
fix: docker build for llama cuda backend
2023-10-27 16:36:54 -07:00
Meng Zhang
23bd542cec
feat: switch cuda backend to llama.cpp (#656)
* feat: switch cuda backend to llama.cpp
* fix
* fix
2023-10-27 13:41:22 -07:00
vodkaslime
3c7c8d9293
feat: add cargo test to github actions and run only unit tests in ci [TAB-185] (#390)
* feat: add cargo test to github actions
* chore: fix lint
* chore: add openblas dependency
* chore: update build dependency
* chore: resolve comments
* chore: fix lint
* chore: fix lint
* chore: test installing dependencies
* chore: refactor integ test
* update ci
* cleanup
---------
Co-authored-by: Meng Zhang <meng@tabbyml.com>
2023-09-03 05:04:52 +00:00
Meng Zhang
3573d4378e
feat: llama.cpp for metal support [TAB-146] (#391)
* feat: init commit adding llama-cpp-bindings
* add llama.cpp submodule
* add LlamaEngine to hold llama context / llama model
* add cxxbridge
* add basic greedy sampling
* move files
* make compile success
* connect TextGeneration with LlamaEngine
* experimental support llama.cpp
* add metal device
* add Accelerate
* fix namespace for llama-cpp-bindings
* fix lint
* move stepping logic to rust
* add stop words package
* use stop-words in ctranslate2-bindings
* use raw string for regex
* use Arc<Tokenizer> for sharing tokenizers
* refactor: remove useless stop_words_encoding_offset
* switch to tokenizers 0.13.4-rc.3
* fix lints in cpp
* simplify implementation of greedy decoding
* feat: split metal feature for llama backend
* add ci
* update ci
* build tabby bin in ci build
2023-09-03 09:59:07 +08:00