
Commit 4941f13

Bulk update documentation with new image tag names

Signed-off-by: Simon Redman <simon@ergotech.com>

Parent: 3823f0d

5 files changed: 19 additions, 19 deletions
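The rename is mechanical: every `gpu-hipblas` tag suffix becomes `gpu-amd-rocm-6` (both the plain and the `aio-` variants). A bulk edit of the following shape could reproduce it; this is a sketch of such a rename, not the command the author actually ran, and it operates on a throwaway temp directory rather than the real docs tree:

```shell
# Sketch only: demonstrate the hipblas -> amd-rocm-6 tag rename on a
# sample file in a temp directory (hypothetical content, not the repo).
set -eu
tmp=$(mktemp -d)
cat > "$tmp/docker.md" <<'EOF'
docker run localai/localai:latest-gpu-hipblas
# image: localai/localai:latest-aio-gpu-hipblas
EOF
# Rewrite both the plain and the AIO GPU tag suffixes in every
# Markdown and shell file under the tree.
find "$tmp" \( -name '*.md' -o -name '*.sh' \) -exec \
  sed -i 's/gpu-hipblas/gpu-amd-rocm-6/g' {} +
cat "$tmp/docker.md"
```

Running this prints the sample file with `latest-gpu-amd-rocm-6` and `latest-aio-gpu-amd-rocm-6`, mirroring the substitutions in the hunks below.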

README.md (7 additions, 7 deletions)

````diff
@@ -43,7 +43,7 @@
 
 > :bulb: Get help - [❓FAQ](https://localai.io/faq/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [:speech_balloon: Discord](https://discord.gg/uJAeKSAGDy) [:book: Documentation website](https://localai.io/)
 >
-> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🛫 Examples](https://github.com/mudler/LocalAI-examples) Try on
+> [💻 Quickstart](https://localai.io/basics/getting_started/) [🖼️ Models](https://models.localai.io/) [🚀 Roadmap](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap) [🛫 Examples](https://github.com/mudler/LocalAI-examples) Try on
 [![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white)](https://t.me/localaiofficial_bot)
 
 [![tests](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/test.yml)[![Build and Release](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/release.yaml)[![build container images](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/image.yml)[![Bump dependencies](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml/badge.svg)](https://github.com/go-skynet/LocalAI/actions/workflows/bump_deps.yaml)[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/localai)](https://artifacthub.io/packages/search?repo=localai)
@@ -131,10 +131,10 @@ For more installation options, see [Installer Options](https://localai.io/instal
 Or run with docker:
 
 > **💡 Docker Run vs Docker Start**
->
+>
 > - `docker run` creates and starts a new container. If a container with the same name already exists, this command will fail.
 > - `docker start` starts an existing container that was previously created with `docker run`.
->
+>
 > If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
 
 ### CPU only image:
@@ -159,7 +159,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nv
 ### AMD GPU Images (ROCm):
 
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-amd-rocm-6
 ```
 
 ### Intel GPU Images (oneAPI):
@@ -190,7 +190,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-ai
 docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
 
 # AMD GPU version
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-amd-rocm-6
 ```
 
 For more information about the AIO images and pre-downloaded models, see [Container Documentation](https://localai.io/basics/container/).
@@ -250,7 +250,7 @@ Roadmap items: [List of issues](https://github.com/mudler/LocalAI/issues?q=is%3A
 - 🗣 [Text to Audio](https://localai.io/features/text-to-audio/)
 - 🔈 [Audio to Text](https://localai.io/features/audio-to-text/) (Audio transcription with `whisper.cpp`)
 - 🎨 [Image generation](https://localai.io/features/image-generation)
-- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
+- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
 - 🧠 [Embeddings generation for vector databases](https://localai.io/features/embeddings/)
 - ✍️ [Constrained grammars](https://localai.io/features/constrained_grammars/)
 - 🖼️ [Download Models directly from Huggingface ](https://localai.io/models/)
@@ -356,7 +356,7 @@ Other:
 - Github bot which answer on issues, with code and documentation as context https://github.com/JackBekket/GitHelper
 - Github Actions: https://github.com/marketplace/actions/start-localai
 - Examples: https://github.com/mudler/LocalAI/tree/master/examples/
-
+
 
 ### 🔗 Resources
 
````
docs/content/features/GPU-acceleration.md (2 additions, 2 deletions)

````diff
@@ -181,7 +181,7 @@ The following are examples of the ROCm specific configuration elements required.
 
 ```yaml
 # For full functionality select a non-'core' image, version locking the image is recommended for debug purposes.
-image: quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
+image: quay.io/go-skynet/local-ai:master-aio-gpu-amd-rocm-6
 environment:
 - DEBUG=true
 # If your gpu is not already included in the current list of default targets the following build details are required.
@@ -204,7 +204,7 @@ docker run \
 -e GPU_TARGETS=gfx906 \
 --device /dev/dri \
 --device /dev/kfd \
-quay.io/go-skynet/local-ai:master-aio-gpu-hipblas
+quay.io/go-skynet/local-ai:master-aio-gpu-amd-rocm-6
 ```
 
 Please ensure to add all other required environment variables, port forwardings, etc to your `compose` file or `run` command.
````

docs/content/getting-started/container-images.md (5 additions, 5 deletions)

````diff
@@ -84,9 +84,9 @@ Standard container images do not have pre-installed models. Use these if you wan
 
 | Description | Quay | Docker Hub |
 | --- | --- |-------------------------------------------------------------|
-| Latest images from the branch (development) | `quay.io/go-skynet/local-ai:master-gpu-hipblas` | `localai/localai:master-gpu-hipblas` |
-| Latest tag | `quay.io/go-skynet/local-ai:latest-gpu-hipblas` | `localai/localai:latest-gpu-hipblas` |
-| Versioned image | `quay.io/go-skynet/local-ai:{{< version >}}-gpu-hipblas` | `localai/localai:{{< version >}}-gpu-hipblas` |
+| Latest images from the branch (development) | `quay.io/go-skynet/local-ai:master-gpu-amd-rocm-6` | `localai/localai:master-gpu-amd-rocm-6` |
+| Latest tag | `quay.io/go-skynet/local-ai:latest-gpu-amd-rocm-6` | `localai/localai:latest-gpu-amd-rocm-6` |
+| Versioned image | `quay.io/go-skynet/local-ai:{{< version >}}-gpu-amd-rocm-6` | `localai/localai:{{< version >}}-gpu-amd-rocm-6` |
 
 {{% /tab %}}
 
@@ -178,7 +178,7 @@ services:
 
 **Models caching**: The **AIO** image will download the needed models on the first run if not already present and store those in `/models` inside the container. The AIO models will be automatically updated with new versions of AIO images.
 
-You can change the directory inside the container by specifying a `MODELS_PATH` environment variable (or `--models-path`).
+You can change the directory inside the container by specifying a `MODELS_PATH` environment variable (or `--models-path`).
 
 If you want to use a named model or a local directory, you can mount it as a volume to `/models`:
 
@@ -203,7 +203,7 @@ docker run -p 8080:8080 --name local-ai -ti -v localai-models:/models localai/lo
 | Versioned image (e.g. for CPU) | `quay.io/go-skynet/local-ai:{{< version >}}-aio-cpu` | `localai/localai:{{< version >}}-aio-cpu` |
 | Latest images for Nvidia GPU (CUDA11) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11` | `localai/localai:latest-aio-gpu-nvidia-cuda-11` |
 | Latest images for Nvidia GPU (CUDA12) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12` | `localai/localai:latest-aio-gpu-nvidia-cuda-12` |
-| Latest images for AMD GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas` | `localai/localai:latest-aio-gpu-hipblas` |
+| Latest images for AMD GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-amd-rocm-6` | `localai/localai:latest-aio-gpu-amd-rocm-6` |
 | Latest images for Intel GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-intel` | `localai/localai:latest-aio-gpu-intel` |
 
 ### Available environment variables
````

docs/content/installation/docker.md (3 additions, 3 deletions)

````diff
@@ -70,7 +70,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gp
 
 **AMD GPU (ROCm):**
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-amd-rocm-6
 ```
 
 **Intel GPU:**
@@ -112,7 +112,7 @@ docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-ai
 
 **AMD GPU (ROCm):**
 ```bash
-docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
+docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-amd-rocm-6
 ```
 
 **Intel GPU:**
@@ -132,7 +132,7 @@ services:
 # For GPU support, use one of:
 # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
 # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
-# image: localai/localai:latest-aio-gpu-hipblas
+# image: localai/localai:latest-aio-gpu-amd-rocm-6
 # image: localai/localai:latest-aio-gpu-intel
 healthcheck:
 test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
````
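After this commit, each GPU vendor maps to a distinct AIO image suffix (`aio-gpu-nvidia-cuda-12`, `aio-gpu-amd-rocm-6`, `aio-gpu-intel`, with `aio-cpu` as the fallback). The helper below is hypothetical, not part of the repo; it just encodes that mapping as documented above:

```shell
# Hypothetical helper: map a GPU vendor string to the AIO image tag
# suffix used in the updated docs. Not a function from the repository.
aio_suffix() {
  case "$1" in
    nvidia) echo "aio-gpu-nvidia-cuda-12" ;;
    amd)    echo "aio-gpu-amd-rocm-6" ;;
    intel)  echo "aio-gpu-intel" ;;
    *)      echo "aio-cpu" ;;      # CPU-only fallback
  esac
}
echo "localai/localai:latest-$(aio_suffix amd)"
# -> localai/localai:latest-aio-gpu-amd-rocm-6
```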

docs/static/install.sh (2 additions, 2 deletions)

````diff
@@ -702,10 +702,10 @@ install_docker() {
 $envs \
 -d -p $PORT:8080 --name local-ai localai/localai:$IMAGE_TAG $STARTCOMMAND
 elif [ "$HAS_AMD" ]; then
-IMAGE_TAG=${LOCALAI_VERSION}-gpu-hipblas
+IMAGE_TAG=${LOCALAI_VERSION}-gpu-amd-rocm-6
 # AIO
 if [ "$USE_AIO" = true ]; then
-IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-hipblas
+IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-amd-rocm-6
 fi
 
 info "Starting LocalAI Docker container..."
````
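The installer branch above first sets the plain AMD tag, then upgrades it to the AIO variant when `USE_AIO` is set. Isolated from the rest of `install_docker()`, the selection logic looks like this (variable names mirror install.sh; the version value is an illustrative placeholder, not a real release):

```shell
# Sketch of install.sh's tag selection after this change.
# LOCALAI_VERSION is a placeholder; HAS_AMD/USE_AIO mimic the
# installer's detection variables.
LOCALAI_VERSION="v9.9.9"
HAS_AMD=1
USE_AIO=true
IMAGE_TAG=${LOCALAI_VERSION}
if [ "$HAS_AMD" ]; then
  IMAGE_TAG=${LOCALAI_VERSION}-gpu-amd-rocm-6
  # AIO images bundle pre-configured models, so they win when requested.
  if [ "$USE_AIO" = true ]; then
    IMAGE_TAG=${LOCALAI_VERSION}-aio-gpu-amd-rocm-6
  fi
fi
echo "$IMAGE_TAG"
```

With both flags set, the container started afterwards is `localai/localai:$IMAGE_TAG` with the new `aio-gpu-amd-rocm-6` suffix.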
