Squashed commits:

- refactor: switch to llama.cpp tokenizer to simplify implementation
- refactor: remove tokenizer dependency from tabby
- refactor: renaming decoding to stop condition
- refactor: remove tokenizer dependency
- refactor: remove submodule
- chore: update formatting
- move tokenization to c++
| Path |
|---|
| include |
| llama.cpp@f858db8db3 |
| src |
| Cargo.toml |
| build.rs |