root/tabby (mirror of https://gitee.com/zhang_1334717033/tabby.git)
tabby/crates/llama-cpp-bindings

Latest commit c28f5838ce by Meng Zhang: fix: support cpu only run in llama.cpp cuda build (2023-11-06 23:02:31 -08:00)
..
include                  refactor: use llama.cpp tokenizer (#683)                                                              2023-10-31 22:16:09 +00:00
llama.cpp @ 75fb6f2ba0   fix: support cpu only run in llama.cpp cuda build                                                     2023-11-06 23:02:31 -08:00
src                      fix: llama.cpp requires kv cache to be N_CTX * parallelism (#714)                                     2023-11-06 23:02:28 -08:00
Cargo.toml               fix: when there's an error happens in background inference loop, it should exit the process (#713)   2023-11-06 23:02:23 -08:00
build.rs                 fix: fix docker build                                                                                 2023-10-27 21:25:45 -07:00