From 05106889d981e9235dab6c41ee093b692fc84d52 Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Sun, 17 Sep 2023 17:46:15 +0800
Subject: [PATCH] docs: improve tabby blogpost

---
 website/blog/2023-09-18-release-0-1-1-metal/index.md | 9 ++++++++-
 .../staring-tabby-on-llama-cpp.png                   | 3 +++
 2 files changed, 11 insertions(+), 1 deletion(-)
 create mode 100644 website/blog/2023-09-18-release-0-1-1-metal/staring-tabby-on-llama-cpp.png

diff --git a/website/blog/2023-09-18-release-0-1-1-metal/index.md b/website/blog/2023-09-18-release-0-1-1-metal/index.md
index bb453f0..979f610 100644
--- a/website/blog/2023-09-18-release-0-1-1-metal/index.md
+++ b/website/blog/2023-09-18-release-0-1-1-metal/index.md
@@ -1,9 +1,16 @@
 ---
 authors: [ meng ]
 ---
-# Highlights of Tabby v0.1.1: Apple M1/M2 Support
+# Tabby v0.1.1: Metal inference and StarCoder support in llama.cpp!
+
 We are thrilled to announce the release of Tabby [v0.1.1](https://github.com/TabbyML/tabby/releases/tag/v0.1.1) 👏🏻.
+
+<center>
+
+![Staring tabby riding on llama.cpp](./staring-tabby-on-llama-cpp.png)
+
+</center>
 ## Apple M1/M2
 Tabby users can now harness Metal inference support on Apple's M1 and M2 chips by using the `--device metal` flag, thanks to [llama.cpp](https://github.com/ggerganov/llama.cpp)'s awesome Metal support.
 
 The Tabby team made a [contribution](https://github.com/ggerganov/llama.cpp/pull/3187) by adding support for the StarCoder series models (1B/3B/7B) to llama.cpp, making these models a better fit for on-device completion use cases.
diff --git a/website/blog/2023-09-18-release-0-1-1-metal/staring-tabby-on-llama-cpp.png b/website/blog/2023-09-18-release-0-1-1-metal/staring-tabby-on-llama-cpp.png
new file mode 100644
index 0000000..23bf05e
--- /dev/null
+++ b/website/blog/2023-09-18-release-0-1-1-metal/staring-tabby-on-llama-cpp.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6353801e36b9a36aa37936aef0b92f7105d837764f05b93ff8aea8bbb3ce9715
+size 518921
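---

For reference, a minimal sketch of the `--device metal` usage the patched post describes, not part of the patch itself. The `tabby serve` subcommand and `--device metal` flag come from the post; the `--model` flag and the model id `TabbyML/StarCoder-1B` are assumptions for illustration.

```shell
# Sketch: serve Tabby with Metal-accelerated inference on an Apple M1/M2 Mac.
# `--device metal` is described in the post; the model id below is assumed
# for illustration and is not specified by this patch.
tabby serve --model TabbyML/StarCoder-1B --device metal
```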