tabby/crates/llama-cpp-bindings

Latest commit: 7bd99d14c0 by Meng Zhang, 2023-10-28 23:37:05 -07:00
feat: support continuous batching in llama.cpp backend (#659)

* refactor: switch back to llama batch interface
* feat: support cont batching
include               feat: support continuous batching in llama.cpp backend (#659)   2023-10-28 23:37:05 -07:00
llama.cpp@638ff1aba1  fix(llama.cpp): bump upstream fix for starcoder model on cuda   2023-10-28 02:03:34 -07:00
src                   feat: support continuous batching in llama.cpp backend (#659)   2023-10-28 23:37:05 -07:00
Cargo.toml            feat: switch cuda backend to llama.cpp (#656)                    2023-10-27 13:41:22 -07:00
build.rs              fix: fix docker build                                            2023-10-27 21:25:45 -07:00