docs: add v0.1.0 release blog (#440)

* docs: add v0.1.0 release blog

* remove unused files

* add authors

* Apply suggestions from code review

Co-authored-by: Lucy Gao <gyxlucy@gmail.com>

---------

Co-authored-by: Lucy Gao <gyxlucy@gmail.com>
Meng Zhang 2023-09-13 17:19:28 +08:00 committed by GitHub
parent 4ac5006b89
commit 3d00cc5e87
3 changed files with 26 additions and 2 deletions


@@ -0,0 +1,21 @@
---
authors: [ meng ]
---
# Highlights of Tabby v0.1.0: Apple M1/M2 Support
We are thrilled to announce the release of Tabby v0.1.0👏🏻.
Thanks to [llama.cpp](https://github.com/ggerganov/llama.cpp), Tabby users can now harness Metal inference on Apple M1 and M2 chips by passing the `--device metal` flag.
This enhancement brings a significant speedup in inference🚀 and marks a meaningful milestone in Tabby's adoption on Apple devices. Check out our [Model Directory](/docs/models) to discover LLM models with Metal support! 🎁
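For example, a typical invocation looks like the sketch below (the model id is illustrative; pick any model marked with Metal support in the Model Directory):

```shell
# Start the Tabby server with Metal-accelerated inference on Apple silicon.
tabby serve --model TabbyML/CodeLlama-7B --device metal
```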
<center>
![Inference](./inference.png)
*An example inference benchmark with [CodeLlama-7B](https://huggingface.co/TabbyML/CodeLlama-7B) on an Apple M2 Max, completing in ~600ms.*
</center>
:::tip
Check out the latest Tabby updates on [LinkedIn](https://www.linkedin.com/company/tabbyml/) and in our [Slack community](https://join.slack.com/t/tabbycommunity/shared_invite/zt-1xeiddizp-bciR2RtFTaJ37RBxr8VxpA)! Our Tabby community is eager for your participation. ❤️
:::


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:702e9b69b54a0b86731c23d199ffe454a2f03437b25f0fe8c25257e9c71b8877
size 19495


@@ -8,9 +8,9 @@ We maintain a curated list of models ranging from 200M to 10B+ parameters.
| Model ID | License | <span title="Apple M1/M2 Only">Metal Support</span> |
| --------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | :-------------------------------------------------: |
| [TabbyML/CodeLlama-13B](https://huggingface.co/TabbyML/CodeLlama-13B) | [Llama2](https://github.com/facebookresearch/llama/blob/main/LICENSE) | |
| [TabbyML/CodeLlama-7B](https://huggingface.co/TabbyML/CodeLlama-7B) | [Llama2](https://github.com/facebookresearch/llama/blob/main/LICENSE) | ✅ |
| [TabbyML/StarCoder-1B](https://huggingface.co/TabbyML/StarCoder-1B) | [BigCode-OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) | 🔜 |
| [TabbyML/SantaCoder-1B](https://huggingface.co/TabbyML/SantaCoder-1B) | [BigCode-OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) | ❌ |
| [TabbyML/J-350M](https://huggingface.co/TabbyML/J-350M) | [BSD-3](https://opensource.org/license/bsd-3-clause/) | ❌ |
| [TabbyML/T5P-220M](https://huggingface.co/TabbyML/T5P-220M) | [BSD-3](https://opensource.org/license/bsd-3-clause/) | ❌ |