docs: add date to previous releases
parent d8e83d0610
commit 8654116126

@@ -12,7 +12,7 @@
* Make `--model` optional, so users can create a chat only instance.
* Add `--parallelism` to control the throughput and VRAM usage: https://github.com/TabbyML/tabby/pull/727

-# v0.5.5
+# v0.5.5 (11/09/2023)

## Fixes and Improvements

@@ -34,7 +34,7 @@
* Tokenizer implementation is switched to llama.cpp, so tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683
* Fix deadlock issue reported in https://github.com/TabbyML/tabby/issues/718
|
||||
|
||||
# v0.4.0
|
||||
# v0.4.0 (10/24/2023)
|
## Features

@@ -47,13 +47,13 @@

* Improve snippets retrieval by deduplicating candidates against existing content + snippets: https://github.com/TabbyML/tabby/pull/582

-# v0.3.1
+# v0.3.1 (10/21/2023)

## Fixes and improvements
* Fix GPU OOM issue caused by the parallelism: https://github.com/TabbyML/tabby/issues/541, https://github.com/TabbyML/tabby/issues/587
* Fix git safe directory check in docker: https://github.com/TabbyML/tabby/issues/569

-# v0.3.0
+# v0.3.0 (10/13/2023)

## Features
### Retrieval-Augmented Code Completion Enabled by Default