docs: add chat_template to model spec
parent f7ebce2514
commit 6b38b32117
@@ -15,10 +15,11 @@ tokenizer.json
 This file provides meta information about the model. An example file appears as follows:
 
-```js
+```json
 {
   "auto_model": "AutoModelForCausalLM",
-  "prompt_template": "<PRE>{prefix}<SUF>{suffix}<MID>"
+  "prompt_template": "<PRE>{prefix}<SUF>{suffix}<MID>",
+  "chat_template": "<s>{% for message in messages %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + '</s> ' }}{% endif %}{% endfor %}"
 }
 ```
 
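The **prompt_template** in the example above uses named placeholders for fill-in-the-middle completion. As a rough sketch of the substitution (assuming Python-style `str.format` semantics for the `{prefix}`/`{suffix}` placeholders; the project's actual substitution code is not shown here):

```python
# Hypothetical illustration of filling the FIM prompt_template.
# The real server performs its own substitution; str.format merely
# mimics replacing {prefix} and {suffix} with their values.
prompt_template = "<PRE>{prefix}<SUF>{suffix}<MID>"

prompt = prompt_template.format(
    prefix="def add(a, b):\n    ",   # code before the cursor
    suffix="\n    return result",    # code after the cursor
)
print(prompt)
```

The completion model then generates the middle portion that connects the prefix to the suffix.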
@@ -30,6 +31,8 @@ The **prompt_template** field is optional. When present, it is assumed that the
 One example for the **prompt_template** is `<PRE>{prefix}<SUF>{suffix}<MID>`. In this format, `{prefix}` and `{suffix}` will be replaced with their corresponding values, and the entire prompt will be fed into the LLM.
 
+The **chat_template** field is optional. When present, it is assumed that the model supports an instruct/chat-style interaction and can be passed to `--chat-model`.
+
 ### tokenizer.json
 
 This is the standard fast tokenizer file created using [Hugging Face Tokenizers](https://github.com/huggingface/tokenizers). Most Hugging Face models already come with it in the repository.
 
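To make the **chat_template** concrete, the following is a minimal pure-Python sketch of what rendering the example Jinja template produces for a list of messages. The real runtime evaluates the Jinja template itself; `render_chat` is a hypothetical stand-in that mirrors the template's logic:

```python
def render_chat(messages):
    """Mimic the example chat_template: a <s> prefix, user turns wrapped
    in [INST] ... [/INST], assistant turns closed with '</s> '."""
    out = "<s>"
    for message in messages:
        if message["role"] == "user":
            out += "[INST] " + message["content"] + " [/INST]"
        elif message["role"] == "assistant":
            out += message["content"] + "</s> "
    return out

prompt = render_chat([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "What is 2+2?"},
])
print(prompt)
# → <s>[INST] Hello [/INST]Hi!</s> [INST] What is 2+2? [/INST]
```

Note that the conversation ends with an open user turn, so the model's next generation is the assistant reply.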