docs: recommend StarCoder-1B for Apple M1/M2
parent 0b4206e6f8
commit ac057f8a2f
@@ -11,11 +11,8 @@ Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge
```
brew tap TabbyML/tabby
brew install tabby
# Use the flag --HEAD if you're interested in the nightly build.
brew install --HEAD tabby
# Start server with CodeLlama
tabby serve --device metal --model TabbyML/CodeLlama-7B
# Start server with StarCoder-1B
tabby serve --device metal --model TabbyML/StarCoder-1B
```
The compute power of M1/M2 is limited and is likely sufficient only for individual use. If you need a shared instance for a team, we recommend Docker hosting with CUDA. You can find more information about Docker [here](./docker).
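For reference, a CUDA-backed team deployment could be started along these lines (a minimal sketch, not a verbatim command from the Docker docs linked above; the image tag, port, and volume path are assumptions and may differ for your setup):

```shell
# Sketch: run a Tabby server in Docker on an NVIDIA GPU.
# Assumes the tabbyml/tabby image and the NVIDIA Container Toolkit
# are installed; adjust the model, port, and data volume as needed.
docker run -it --gpus all \
  -p 8080:8080 \
  -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model TabbyML/StarCoder-1B --device cuda
```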