From bd51f31186a5a410723b056dcfc62d7e2dbf9061 Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Thu, 7 Sep 2023 22:04:35 +0800
Subject: [PATCH] docs: add homebrew instructions for tabby (#411)

---
 website/docs/02-self-hosting/02-apple.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/website/docs/02-self-hosting/02-apple.md b/website/docs/02-self-hosting/02-apple.md
index 9db591c..cb4e6b0 100644
--- a/website/docs/02-self-hosting/02-apple.md
+++ b/website/docs/02-self-hosting/02-apple.md
@@ -1,15 +1,15 @@
 # Mac M1/M2 (Preview)
 
-Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up:
+Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up using homebrew:
 
-1. Download the tabby binary from the latest Release page, rename it to `tabby`, place it in a directory included in your `$PATH` variable, and ensure its permissions are set to executable (e.g., 755).
-3. Run `tabby --help` to verify successful installation.
-
-3. Start the server with:
 ```bash
-tabby serve --model TabbyML/T5P-220M
-```
+brew tap TabbyML/tabby
+brew install --HEAD tabby
+# Start server with CodeLlama
+tabby serve --device metal --model TabbyML/CodeLlama-7B
+```
 
 :::tip
 The compute power of M1/M2 is limited and is likely to be sufficient only for individual usage. If you require a shared instance for a team, we recommend considering Docker hosting with CUDA. You can find more information about Docker [here](./docker).
+:::
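
For reviewers, the full flow this patch documents can be sketched as the following shell session (assuming Homebrew is installed and the `TabbyML/tabby` tap exists as named in the diff; the `tabby --help` check is carried over from the text this patch removes):

```shell
# Add the TabbyML tap and build tabby from the tip of its main branch
brew tap TabbyML/tabby
brew install --HEAD tabby

# Verify the binary landed on $PATH before starting the server
tabby --help

# Start the server on Apple silicon, using the Metal backend
tabby serve --device metal --model TabbyML/CodeLlama-7B
```

Note that `--device metal` is what distinguishes this from the generic invocation the patch replaces, which ran `tabby serve --model TabbyML/T5P-220M` without a device flag.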