Commit a7465e8

Update for redisvl 0.8.2

1 parent: 9bf9bf8

5 files changed: +8 −26 lines changed

content/develop/ai/redisvl/0.8.2/api/message_history.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 ---
 linkTitle: LLM message history
 title: LLM Message History
-url: '/develop/ai/redisvl/0.8.2/api/essage_history/'
+url: '/develop/ai/redisvl/0.8.2/api/message_history/'
 ---
 
 
```
content/develop/ai/redisvl/0.8.2/install.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -1,7 +1,6 @@
 ---
-description: Install RedisVL
-linkTitle: Install
-title: Install
+linkTitle: Install RedisVL
+title: Install RedisVL
 weight: 2
 aliases:
 - /integrate/redisvl/install
```

content/develop/ai/redisvl/0.8.2/user_guide/getting_started.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 linkTitle: Getting started with RedisVL
 title: Getting Started with RedisVL
 weight: 01
-url: '/develop/ai/redisvl/0.8.2/user_guide/getting_starte/'
+url: '/develop/ai/redisvl/0.8.2/user_guide/getting_started/'
 ---
 
 `redisvl` is a versatile Python library with an integrated CLI, designed to enhance AI applications using Redis. This guide will walk you through the following steps:
```

content/develop/ai/redisvl/0.8.2/user_guide/llmcache.md

Lines changed: 3 additions & 20 deletions

```diff
@@ -1,28 +1,11 @@
 ---
-linkTitle: LLM caching
-title: LLM Caching
-aliases:
-- /integrate/redisvl/user_guide/03_llmcache
+linkTitle: First, we will import [OpenAI](https://platform.openai.com) to use their API for responding to user prompts. We will also create a simple `ask_openai` helper method to assist.
+title: First, we will import [OpenAI](https://platform.openai.com) to use their API for responding to user prompts. We will also create a simple `ask_openai` helper method to assist.
 weight: 03
+url: '/develop/ai/redisvl/0.8.2/user_guide/llmcache/'
 ---
 
 
-This notebook demonstrates how to use RedisVL's `SemanticCache` to cache LLM responses based on semantic similarity. Semantic caching can significantly reduce API costs and latency by retrieving cached responses for semantically similar prompts instead of making redundant API calls.
-
-Key features covered:
-- Basic cache operations (store, check, clear)
-- Customizing semantic similarity thresholds
-- TTL policies for cache expiration
-- Performance benchmarking
-- Access controls with tags and filters for multi-user scenarios
-
-Prerequisites:
-- Ensure `redisvl` is installed in your Python environment
-- Have a running instance of [Redis Stack](https://redis.io/docs/install/install-stack/) or [Redis Cloud](https://redis.io/cloud)
-- OpenAI API key for the examples
-
-First, we will import [OpenAI](https://platform.openai.com) to use their API for responding to user prompts. We will also create a simple `ask_openai` helper method to assist.
-
 
 ```python
 import os
```

content/develop/ai/redisvl/0.8.2/user_guide/message_history.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 linkTitle: LLM message history
 title: LLM Message History
 weight: 07
-url: '/develop/ai/redisvl/0.8.2/user_guide/essage_history/'
+url: '/develop/ai/redisvl/0.8.2/user_guide/message_history/'
 ---
 
 
```