tabby/crates/llama-cpp-bindings
Meng Zhang 9c905e4849
feat: add rocm support (#913)
* Added build configurations for Intel and AMD hardware

* Improved ROCm build

* Added options for OneAPI and ROCm

* Build llama using icx

* [autofix.ci] apply automated fixes

* Fixed ROCm image

* Build ROCm

* Tried to adjust compile flags for SYCL

* Removed references to oneAPI

* Provide info about the device used for ROCm

* Added ROCm documentation

* Addressed review comments

* Refactored to expose generic accelerator information

* Pull request cleanup

* cleanup

* cleanup

* Delete .github/workflows/docker-cuda.yml

* Delete .github/workflows/docker-rocm.yml

* Delete crates/tabby-common/src/api/accelerator.rs

* update

* cleanup

* update

* update

* update

* update

---------

Co-authored-by: Cromefire_ <cromefire+git@pm.me>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2023-11-29 03:27:03 +00:00
include               feat: add --parallelism to control throughput and vram usage (#727)   2023-11-08 18:31:22 +00:00
llama.cpp@efbd009d2f  feat: sync llama.cpp to latest                                        2023-11-08 16:06:09 -08:00
src                   refactor: handle max output length in StopCondition (#910)            2023-11-28 16:57:16 +08:00
Cargo.toml            feat: add rocm support (#913)                                         2023-11-29 03:27:03 +00:00
build.rs              feat: add rocm support (#913)                                         2023-11-29 03:27:03 +00:00