parent 31217bcfc8
commit ad6a2072d3
@ -22,7 +22,7 @@ Try our online demo [here](https://tabbyml.github.io/tabby/playground).

If you have installed the Tabby VSCode extension, you can follow the built-in walkthrough guides to get started. You can also reopen the walkthrough page at any time with the command `Tabby: Getting Started`.

1. Set up the Tabby server: you can get a Tabby Cloud hosted server [here](https://app.tabbyml.com), or build your self-hosted Tabby server following [this guide](https://tabbyml.github.io/tabby/docs/self-hosting/).
1. Set up the Tabby server: you can get a Tabby Cloud hosted server [here](https://app.tabbyml.com), or build your self-hosted Tabby server following [this guide](https://tabby.tabbyml.com/docs/installation).
2. Use the command `Tabby: Specify API Endpoint of Tabby` to connect the extension to your Tabby server. If you are using a Tabby Cloud server endpoint, follow the popup messages to complete authorization.

Once setup is complete, Tabby will provide inline suggestions automatically, and you can accept a suggestion by pressing the Tab key. Hover over the inline suggestion text to find more actions, such as partially accepting by word or by line.

@ -1,77 +0,0 @@

# Docker

There is a supplied Docker image to make deploying the server as a container easier.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## CPU

<Tabs>
<TabItem value="shell" label="Shell" default>

```bash title="run.sh"
docker run -it \
  -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model TabbyML/SantaCoder-1B
```

</TabItem>
<TabItem value="compose" label="Docker Compose">

```yaml title="docker-compose.yml"
version: '3.5'

services:
  tabby:
    restart: always
    image: tabbyml/tabby
    command: serve --model TabbyML/SantaCoder-1B
    volumes:
      - "$HOME/.tabby:/data"
    ports:
      - 8080:8080
```

</TabItem>
</Tabs>

## CUDA (requires NVIDIA Container Toolkit)

<Tabs>
<TabItem value="shell" label="Shell" default>

```bash title="run.sh"
docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda
```

</TabItem>
<TabItem value="compose" label="Docker Compose">

```yaml title="docker-compose.yml"
version: '3.5'
services:
  tabby:
    restart: always
    image: tabbyml/tabby
    command: serve --model TabbyML/SantaCoder-1B --device cuda
    volumes:
      - "$HOME/.tabby:/data"
    ports:
      - 8080:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

</TabItem>
</Tabs>

@ -1,6 +0,0 @@

# 📚 Self Hosting

Tabby can be deployed on the server side using Docker, or run as client-side software harnessing the computing power of Mac M1/M2. Please refer to each individual page for more information.

* [Docker](./docker)
* [Mac M1/M2 (Preview)](./apple)

@ -1,4 +1,4 @@

# Configuration
# ⚙️ Configuration

:::tip
The configuration file is not mandatory; Tabby can be run with just a single command.

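As a sketch of that single command, reusing the model name from the Docker examples elsewhere in these docs (the model is only an example, not a requirement):

```shell
# Start a Tabby server with no configuration file;
# all options are supplied on the command line.
tabby serve --model TabbyML/SantaCoder-1B
```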
@ -0,0 +1,9 @@

---
sidebar_position: 3
---

# 💻 IDE / Editor Extensions

import DocCardList from '@theme/DocCardList';

<DocCardList />

@ -0,0 +1,5 @@

# IntelliJ Platform

import IntelliJ from "../../../clients/intellij/README.md";

<IntelliJ />

@ -0,0 +1,5 @@

# VIM / NeoVIM

import VIM from "../../../clients/vim/README.md";

<VIM />

@ -0,0 +1,8 @@

---
sidebar_position: 0
---

# Visual Studio Code

import VSCode from "../../../clients/vscode/README.md";

<VSCode />

@ -1,3 +1,6 @@

---
sidebar_position: 0
---

# 👋 Getting Started

Tabby is an open-source, self-hosted AI coding assistant. With Tabby, every team can set up its own LLM-powered code completion server with ease.

@ -1,4 +1,13 @@

# Mac M1/M2 (Preview)
---
sidebar_position: 3
---

# Homebrew (Apple M1/M2)

This guide explains how to install Tabby using Homebrew.

:::info
Apple M1/M2 support is under **alpha** test.
:::

Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge devices with reasonable inference speed. Follow the steps below to set it up using Homebrew:

@ -10,6 +19,4 @@ brew install --HEAD tabby

tabby serve --device metal --model TabbyML/CodeLlama-7B
```

:::tip
The compute power of M1/M2 is limited and is likely to be sufficient only for individual use. If you need a shared instance for a team, consider Docker hosting with CUDA instead. You can find more information about Docker [here](./docker).
:::

@ -0,0 +1,53 @@

---
sidebar_position: 1
---

# Docker Compose

This guide explains how to launch Tabby using docker-compose.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
<TabItem value="cpu" label="CPU">

```yaml title="docker-compose.yml"
version: '3.5'

services:
  tabby:
    restart: always
    image: tabbyml/tabby
    command: serve --model TabbyML/SantaCoder-1B
    volumes:
      - "$HOME/.tabby:/data"
    ports:
      - 8080:8080
```

</TabItem>
<TabItem value="cuda" label="CUDA (requires NVIDIA Container Toolkit)">

```yaml title="docker-compose.yml"
version: '3.5'
services:
  tabby:
    restart: always
    image: tabbyml/tabby
    command: serve --model TabbyML/SantaCoder-1B --device cuda
    volumes:
      - "$HOME/.tabby:/data"
    ports:
      - 8080:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

</TabItem>
</Tabs>

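After saving either `docker-compose.yml`, the stack can be brought up with standard Docker Compose commands. A usage sketch (on older installations the command may be `docker-compose` rather than `docker compose`):

```shell
# Launch the Tabby service in the background
docker compose up -d

# Follow the server logs to confirm the model has loaded
docker compose logs -f tabby
```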
@ -0,0 +1,32 @@

---
sidebar_position: 0
---

# Docker

This guide explains how to launch Tabby using Docker.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
<TabItem value="cpu" label="CPU" default>

```bash title="run.sh"
docker run -it \
  -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model TabbyML/SantaCoder-1B
```

</TabItem>
<TabItem value="cuda" label="CUDA (requires NVIDIA Container Toolkit)">

```bash title="run.sh"
docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/SantaCoder-1B --device cuda
```

</TabItem>
</Tabs>

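Once the container is running, you may want to verify that the server answers on port 8080. A minimal smoke test, assuming the `/v1/health` endpoint exposed by recent Tabby releases (check your version's API docs if the path differs):

```shell
# Query the health endpoint of the local Tabby server
# (assumes the container started above is listening on port 8080)
curl http://localhost:8080/v1/health
```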
@ -0,0 +1,9 @@

---
sidebar_position: 1
---

import DocCardList from '@theme/DocCardList';

# 📚 Installation

<DocCardList />