# Mac M1/M2 (Preview)
Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up:
- Download the tabby binary from the latest Release page, rename it to `tabby`, place it in a directory included in your `$PATH` variable, and ensure its permissions are set to executable (e.g., 755).
- Run `tabby --help` to verify successful installation.
- Start the server with:

  ```
  tabby serve --model TabbyML/T5P-220M
  ```
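The steps above can be sketched as a shell session. This is a minimal sketch, not the official install script: the download URL and asset name are assumptions (check the actual Release page for the correct binary for your machine), and the install directory is one common choice for a `$PATH` location.

```shell
# Download the binary (URL and asset name are illustrative; use the
# latest Release page to find the correct aarch64/macOS build).
curl -L -o tabby "https://github.com/TabbyML/tabby/releases/latest/download/tabby_aarch64-apple-darwin"

# Make it executable (755 = rwxr-xr-x) and move it onto your $PATH.
chmod 755 tabby
sudo mv tabby /usr/local/bin/

# Verify the installation.
tabby --help

# Start the server.
tabby serve --model TabbyML/T5P-220M
```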
:::tip
The compute power of M1/M2 is limited and is likely sufficient only for individual usage. If you need a shared instance for a team, we recommend Docker hosting with CUDA instead. You can find more information about Docker here.
:::
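For the shared-instance case mentioned in the tip, a Docker invocation might look like the sketch below. The image name, port, and volume path follow common conventions for the `tabbyml/tabby` image but should be verified against the Docker documentation linked above.

```shell
# Hedged sketch: run Tabby in Docker with CUDA GPU access.
# --gpus all     expose the host's NVIDIA GPUs to the container
# -p 8080:8080   publish the server port
# -v ...:/data   persist downloaded models across container restarts
docker run -it --gpus all \
  -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve --model TabbyML/T5P-220M --device cuda
```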