From 294c2a4334785c6bc3043b9f96cbb70ba16f7b07 Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Fri, 3 Nov 2023 14:00:24 -0700
Subject: [PATCH] update

---
 CHANGELOG.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 68288ec..66ae418 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,10 +16,10 @@
 
 ## Fixes and Improvements
 
-* Switch cpu backend to llama.cpp: https://github.com/TabbyML/tabby/pull/638
+* The CPU backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/638
 * add `server.completion_timeout` to control the code completion interface timeout: https://github.com/TabbyML/tabby/pull/637
-* Switch cuda backend to llama.cpp: https://github.com/TabbyML/tabby/pull/656
-* Switch tokenizer to llama.cpp, so tabby no longer need to download additional tokenizer file: https://github.com/TabbyML/tabby/pull/683
+* The CUDA backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/656
+* The tokenizer implementation is switched to llama.cpp, so tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683
 
 # v0.4.0