Highlights of Tabby v0.1.0: Apple M1/M2 Support
We are thrilled to announce the release of Tabby v0.1.0👏🏻.
Thanks to llama.cpp, Tabby users can now harness Metal inference on Apple's M1 and M2 chips by using the `--device metal` flag.
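For example, serving Tabby with Metal acceleration looks like this (the model name below is illustrative; pick any Metal-capable model from the Model Directory):

```shell
# Serve Tabby with Metal-accelerated inference on Apple Silicon.
# TabbyML/CodeLlama-7B is an example; substitute any model with
# Metal support listed in the Model Directory.
tabby serve --model TabbyML/CodeLlama-7B --device metal
```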
This enhancement leads to a significant inference speed upgrade🚀. It marks a meaningful milestone in Tabby's adoption on Apple devices. Check out our Model Directory to discover LLM models with Metal support! 🎁
In an example benchmark, inference with CodeLlama-7B on an Apple M2 Max takes ~600ms.
:::tip
Check out the latest Tabby updates on LinkedIn and in our Slack community! Our Tabby community is eager for your participation. ❤️
:::
