From e6fb1b6ac043f566dcbcf8224ee841ff11c89992 Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Thu, 9 Nov 2023 00:36:35 -0800
Subject: [PATCH] docs: add v0.5.5 CHANGELOG.md

---
 CHANGELOG.md | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8109a8f..3e7d205 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,14 +2,10 @@
 
 ## Features
 
-# v0.5.4
+# v0.5.5
 
 ## Fixes and Improvements
 
-* Fix deadlock issue reported in https://github.com/TabbyML/tabby/issues/718
-
-# v0.5.3
-
 ## Notice
 
 * llama.cpp backend (CPU, Metal) now requires a redownload of gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252
@@ -26,6 +22,7 @@
 * add `server.completion_timeout` to control the code completion interface timeout: https://github.com/TabbyML/tabby/pull/637
 * Cuda backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/656
 * Tokenizer implementation is switched to llama.cpp, so tabby no longer need to download additional tokenizer file: https://github.com/TabbyML/tabby/pull/683
+* Fix deadlock issue reported in https://github.com/TabbyML/tabby/issues/718
 
 # v0.4.0
 