## Fixes and Improvements
* Cpu backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/638
* Add `server.completion_timeout` to control the code completion interface timeout: https://github.com/TabbyML/tabby/pull/637
* Cuda backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/656
* Tokenizer implementation is switched to llama.cpp, so tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683

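As a sketch, the new timeout could be set in Tabby's `config.toml`. The key name comes from the PR title; the section placement, value, and units here are assumptions — consult the linked PR and the Tabby configuration docs for the authoritative form.

```toml
# ~/.tabby/config.toml — hypothetical placement and units; verify
# against https://github.com/TabbyML/tabby/pull/637 and the docs.
[server]
completion_timeout = 30
```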
# v0.4.0