docs: add homebrew instructions for tabby (#411)

release-0.2
Meng Zhang 2023-09-07 22:04:35 +08:00 committed by GitHub
parent 66bc979aa9
commit bd51f31186
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1 changed file with 7 additions and 7 deletions


@@ -1,15 +1,15 @@
 # Mac M1/M2 (Preview)
-Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up:
+Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up using homebrew:
-1. Download the tabby binary from the latest Release page, rename it to `tabby`, place it in a directory included in your `$PATH` variable, and ensure its permissions are set to executable (e.g., 755).
-3. Run `tabby --help` to verify successful installation.
-3. Start the server with:
 ```bash
-tabby serve --model TabbyML/T5P-220M
-```
+brew tap TabbyML/tabby
+brew install --HEAD tabby
+
+# Start server with CodeLlama
+tabby serve --device metal --model TabbyML/CodeLlama-7B
+```
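Put together, the new Homebrew-based setup runs as a short shell session. This is a minimal sketch, assuming Homebrew is already installed and the `TabbyML/tabby` tap builds from HEAD on your machine:

```shell
# Install Tabby from its Homebrew tap (assumes Homebrew is present)
brew tap TabbyML/tabby
brew install --HEAD tabby

# Sanity-check that the binary landed on $PATH before serving
tabby --help

# Serve CodeLlama-7B with the Metal backend on Apple Silicon
tabby serve --device metal --model TabbyML/CodeLlama-7B
```

The `--HEAD` flag tells Homebrew to build from the tip of the repository rather than a tagged release, which matches the preview status of this platform.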
:::tip
The compute power of M1/M2 is limited and is likely to be sufficient only for individual usage. If you require a shared instance for a team, we recommend considering Docker hosting with CUDA. You can find more information about Docker [here](./docker).
:::
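For the team-scale Docker hosting the tip recommends, the invocation might look like the sketch below. The image name, port, and volume path are assumptions based on the linked Docker docs, not part of this commit; adjust them to your deployment:

```shell
# Hypothetical CUDA-backed deployment; requires an NVIDIA GPU
# and the NVIDIA Container Toolkit on the host
docker run -d \
  --gpus all \
  -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve --model TabbyML/CodeLlama-7B --device cuda
```

Running detached (`-d`) with a persistent `/data` volume keeps downloaded model weights across container restarts.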