# Mac M1/M2 (Preview)
Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up:
1. Download the tabby binary from the latest Release page, rename it to `tabby`, place it in a directory included in your `$PATH` variable, and ensure its permissions are set to executable (e.g., 755).
2. Run `tabby --help` to verify successful installation.
3. Start the server with:
```bash
tabby serve --model TabbyML/T5P-220M
```
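Steps 1 and 2 above can be sketched as a short shell session. This is a hedged illustration: the placeholder script stands in for the real binary you would download from the Release page, and `INSTALL_DIR` is a hypothetical install location (any directory on your `$PATH` works).

```shell
# Simulate the install in a temp directory; in a real setup you would
# download the release asset from the Tabby Releases page instead.
INSTALL_DIR="$(mktemp -d)"
printf '#!/bin/sh\necho tabby placeholder\n' > "$INSTALL_DIR/tabby"

# Step 1: make the binary executable (mode 755) and put it on $PATH.
chmod 755 "$INSTALL_DIR/tabby"
export PATH="$INSTALL_DIR:$PATH"

# Step 2: verify the command resolves; with the real binary you would
# run `tabby --help` here.
command -v tabby
tabby
```

With the actual binary in place of the placeholder, `tabby --help` at the end confirms the installation succeeded.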
:::tip
The compute power of M1/M2 is limited and is likely to be sufficient only for individual usage. If you require a shared instance for a team, we recommend considering Docker hosting with CUDA. You can find more information about Docker [here](./docker).
:::