
Commit 9d2fb5d

feat: improve SEO for nitro (#239)

1 parent 4dc6957, commit 9d2fb5d
21 files changed: +35 -5 lines

docs/docs/examples/jan.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Nitro with Jan
 description: Nitro integrates with Jan to enable a ChatGPT-like functional app, optimized for local AI.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 You can effortlessly utilize Nitro through [Jan](https://jan.ai/), as it is fully integrated with all its functions. With Jan, using Nitro becomes straightforward without the need for any coding.

docs/docs/examples/openai-node.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Nitro with openai-node
 description: Nitro intergration guide for Node.js.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 You can migrate from OAI API or Azure OpenAI to Nitro using your existing NodeJS code quickly

docs/docs/examples/openai-python.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Nitro with openai-python
 description: Nitro intergration guide for Python.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
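
The guide this frontmatter belongs to documents a base-URL swap: existing openai-python code keeps working once the client points at Nitro's OpenAI-compatible server instead of api.openai.com. A minimal sketch under assumptions not in this diff (default port 3928, a placeholder model name, no real API key required):

```python
# Sketch: reuse existing openai-python code against a local Nitro server.
# Assumptions: Nitro is listening at http://localhost:3928 and serves an
# OpenAI-compatible /v1 API; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3928/v1",  # local Nitro endpoint (assumed default port)
    api_key="not-needed",                 # a local server does not check the key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; the locally loaded model answers
    messages=[{"role": "user", "content": "Hello, Nitro!"}],
)
print(response.choices[0].message.content)
```

The same pattern applies to the openai-node guide above: only the base URL and API key change, the rest of the client code stays as is.
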
docs/docs/examples/palchat.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Nitro with Pal Chat
 description: Nitro intergration guide for mobile device usage.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 This guide demonstrates how to use Nitro with Pal Chat, enabling local AI chat capabilities on mobile devices.

docs/docs/features/chat.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Chat Completion
 description: Inference engine for chat completion, the same as OpenAI's
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 The Chat Completion feature in Nitro provides a flexible way to interact with any local Large Language Model (LLM).
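
Because the page describes an OpenAI-style endpoint, the same request shape also works over plain HTTP. A rough sketch, assuming Nitro's commonly documented http://localhost:3928/v1/chat/completions route (URL, port, and response shape are assumptions, not something this commit changes):

```python
# Sketch: raw HTTP chat-completion request to a local Nitro server.
# The URL, port, and response layout are assumed defaults, not part of this diff.
import requests

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Nitro does in one sentence."},
    ],
    "stream": False,
}

resp = requests.post("http://localhost:3928/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```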

docs/docs/features/cont-batch.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Continuous Batching
 description: Nitro's continuous batching combines multiple requests, enhancing throughput.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Continuous batching boosts throughput and minimizes latency in large language model (LLM) inference. This technique groups multiple inference requests, significantly improving GPU utilization.
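
Continuous batching is a load-time option rather than a per-request one. The sketch below assumes `cont_batching` and `n_parallel` parameters on Nitro's load-model endpoint; the route and both parameter names are taken on trust from Nitro's API documentation, not from this frontmatter change:

```python
# Sketch: enabling continuous batching when loading a model into Nitro.
# Endpoint path and the cont_batching / n_parallel parameter names are
# assumptions drawn from Nitro's load-model docs; the model path is hypothetical.
import requests

load_request = {
    "llama_model_path": "/path/to/model.gguf",  # hypothetical local model path
    "ctx_len": 2048,
    "cont_batching": True,   # group concurrent requests into one batch
    "n_parallel": 4,         # how many requests may be processed together
}

resp = requests.post("http://localhost:3928/inferences/llamacpp/loadmodel", json=load_request)
print(resp.json())
```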

docs/docs/features/embed.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Embedding
 description: Inference engine for embedding, the same as OpenAI's
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Embeddings are lists of numbers (floats). To find how similar two embeddings are, we measure the [distance](https://en.wikipedia.org/wiki/Cosine_similarity) between them.
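
The distance the embed.md page links to is cosine similarity: the dot product of two embedding vectors divided by the product of their norms. A small self-contained illustration with made-up vectors:

```python
# Cosine similarity between two embedding vectors (illustrative values only).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb_a = [0.12, -0.38, 0.90, 0.05]
emb_b = [0.10, -0.40, 0.85, 0.00]
print(cosine_similarity(emb_a, emb_b))  # values near 1.0 mean the texts are similar
```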

docs/docs/features/feat.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Nitro Features
 description: What Nitro supports
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Nitro enhances the `llama.cpp` research base, optimizing it for production environments with advanced features:

docs/docs/features/load-unload.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Load and Unload models
 description: Nitro loads and unloads local AI models (local LLMs).
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 ## Load model
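
Only the "Load model" heading of load-unload.md appears in this hunk. For orientation, loading and unloading in Nitro are plain HTTP calls; the sketch assumes the /inferences/llamacpp/loadmodel and /inferences/llamacpp/unloadmodel routes and a hypothetical model path, none of which come from this diff:

```python
# Sketch: load a local GGUF model into Nitro, then unload it.
# Route names and parameters are assumptions based on Nitro's API docs;
# the model path is hypothetical.
import requests

BASE = "http://localhost:3928/inferences/llamacpp"

# Load the model with a context length and GPU layer count.
requests.post(f"{BASE}/loadmodel", json={
    "llama_model_path": "/path/to/model.gguf",
    "ctx_len": 2048,
    "ngl": 32,  # layers to offload to the GPU
})

# ... run inference ...

# Unload when finished to free memory.
requests.get(f"{BASE}/unloadmodel")
```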

docs/docs/features/multi-thread.md (1 addition, 0 deletions)

@@ -1,6 +1,7 @@
 ---
 title: Multithreading
 description: Nitro utilizes multithreading to optimize hardware usage.
+keywords: [Nitro, Jan, fast inference, inference server, local AI, large language model, OpenAI compatible, open source, llama]
 ---
 
 Multithreading in programming allows concurrent task execution, improving efficiency and responsiveness. It's key for optimizing hardware and application performance.
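
In Nitro itself the worker-thread count is typically fixed at launch. A hedged sketch that assumes the nitro [thread_num] [host] [port] launch form (argument order, host, and port are assumptions, not part of this commit):

```python
# Sketch: start Nitro with an explicit thread count from Python.
# The "nitro <threads> <host> <port>" argument form is an assumption based on
# Nitro's documented launch syntax; adjust to your installed binary.
import subprocess

threads = 4
server = subprocess.Popen(["nitro", str(threads), "127.0.0.1", "3928"])
print(f"Nitro started on 127.0.0.1:3928 with {threads} worker threads (pid {server.pid})")
```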
