* Added build configurations for Intel and AMD hardware
* Improved ROCm build
* Added options for oneAPI and ROCm
* Build llama using icx
* [autofix.ci] apply automated fixes
* Fixed ROCm image
* Build ROCm
* Tried to adjust compile flags for SYCL
* Removed references to oneAPI
* Provide info about the used device for ROCm
* Added ROCm documentation
* Addressed review comments
* Refactored to expose generic accelerator information
* Pull request cleanup
* cleanup
* cleanup
* Delete .github/workflows/docker-cuda.yml
* Delete .github/workflows/docker-rocm.yml
* Delete crates/tabby-common/src/api/accelerator.rs
* update
* cleanup
* update
* update
* update
* update

Co-authored-by: Cromefire_ <cromefire+git@pm.me>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

| Name |
|---|
| include |
| llama.cpp@efbd009d2f |
| src |
| Cargo.toml |
| build.rs |