root/tabby (mirror of https://gitee.com/zhang_1334717033/tabby.git)
tabby / crates / llama-cpp-bindings at commit 3c3b14c9f5

Latest commit: ca52ac4b01 by Meng Zhang, "fix: support cpu only run in llama.cpp cuda build", 2023-11-06 22:59:24 -08:00
..
include                   refactor: use llama.cpp tokenizer (#683)    2023-10-31 22:16:09 +00:00
llama.cpp @ 75fb6f2ba0    fix: support cpu only run in llama.cpp cuda build    2023-11-06 22:59:24 -08:00
src                       fix: llama.cpp requires kv cache to be N_CTX * parallelism (#714)    2023-11-07 06:16:36 +00:00
Cargo.toml                fix: when there's an error happens in background inference loop, it should exit the process (#713)    2023-11-06 20:41:49 +00:00
build.rs                  fix: fix docker build    2023-10-27 21:25:45 -07:00
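The listing reflects the usual layout of a Rust FFI crate that vendors llama.cpp as a git submodule: C++ headers under include/, the Rust wrapper in src/, and a build.rs that compiles the submodule. The sketch below is a hypothetical build.rs, not the actual file from this listing; it assumes the cmake crate is used as a build dependency and that the CUDA (cuBLAS) backend sits behind a Cargo feature, which is one plausible way a CUDA-capable build can still fall back to CPU-only execution, as the latest commit message suggests.

    // Hypothetical build.rs sketch for a crate like llama-cpp-bindings.
    // The real build.rs is not shown in this listing; the option names and
    // feature gate below are assumptions. Requires `cmake` in [build-dependencies].

    fn main() {
        // Configure a CMake build of the vendored llama.cpp submodule.
        let mut config = cmake::Config::new("llama.cpp");

        // Assumed Cargo feature gate: enable the cuBLAS backend only when the
        // crate is built with a `cuda` feature, so a plain build stays CPU-only.
        if std::env::var("CARGO_FEATURE_CUDA").is_ok() {
            config.define("LLAMA_CUBLAS", "ON");
        }

        // Run the CMake build and get the install directory.
        let dst = config.build();

        // Link the static library produced by the CMake build.
        println!("cargo:rustc-link-search=native={}/lib", dst.display());
        println!("cargo:rustc-link-lib=static=llama");
    }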