From 90e446bfbaef76aedd43d46e74c0ba8a92109acb Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Wed, 1 Nov 2023 09:37:38 -0700
Subject: [PATCH] docs: Update MODEL_SPEC.md

---
 MODEL_SPEC.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/MODEL_SPEC.md b/MODEL_SPEC.md
index 7540e30..d214d61 100644
--- a/MODEL_SPEC.md
+++ b/MODEL_SPEC.md
@@ -7,7 +7,6 @@ The minimal Tabby model directory should include the following contents:
 ```
 ggml/
 tabby.json
-tokenizer.json
 ```
 
 ### tabby.json
@@ -27,10 +26,8 @@ One example for the **prompt_template** is `<PRE> {prefix} <SUF>{suffix} <MID>`. In
 
 The **chat_template** field is optional. When it is present, it is assumed that the model supports an instruct/chat-style interaction, and can be passed to `--chat-model`.
 
-### tokenizer.json
-This is the standard fast tokenizer file created using [Hugging Face Tokenizers](https://github.com/huggingface/tokenizers). Most Hugging Face models already come with it in repository.
-
 ### ggml/
+
 This directory contains binary files used by the [llama.cpp](https://github.com/ggerganov/llama.cpp) inference engine. Tabby utilizes ggml for inference on `cpu`, `cuda` and `metal` devices.
 
 Currently, only `q8_0.v2.gguf` in this directory is in use. You can refer to the instructions in llama.cpp to learn how to acquire it.
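For illustration, a minimal `tabby.json` consistent with the spec described in this patch might look like the sketch below (placed after the final hunk, which patch tooling ignores). The template value is an assumption based on the CodeLlama-style fill-in-the-middle format quoted in the hunk header above; the optional **chat_template** field is omitted here:

```json
{
  "prompt_template": "<PRE> {prefix} <SUF>{suffix} <MID>"
}
```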