84 changes: 84 additions & 0 deletions .opencode/skills/rag-workflows/SKILL.md
@@ -0,0 +1,84 @@
---
name: rag-workflows
description: Retrieval-Augmented Generation (RAG) workflows for document-based Q&A
---

<!-- //review-2026-02-15 @twishapatel12 -->

## Overview

This skill describes patterns and workflows for **Retrieval-Augmented Generation (RAG)** in OpenWork, with a focus on **local document question answering**.

RAG combines document retrieval with large language models to answer questions grounded in external context rather than relying solely on model knowledge.

This skill is intended to help users design and reason about RAG-style workflows within OpenWork.

## Local RAG with Ollama

A common use case is running RAG **fully locally** using Ollama as the LLM backend. This enables:

- Querying private or sensitive documents
- Offline experimentation
- Avoiding external API dependencies

Typical steps in a local RAG workflow include the following (see the code sketch after the list):
1. Preparing a set of local documents
2. Retrieving relevant chunks based on a query
3. Providing retrieved context to a local LLM via Ollama
4. Generating an answer grounded in the retrieved context
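
As a concrete illustration of these steps, here is a minimal Python sketch. It uses naive keyword-overlap retrieval as a stand-in for a real embedding store and calls Ollama's local HTTP generate endpoint. The folder path, chunk size, and scoring are illustrative assumptions, not part of the skill itself.

```python
# Minimal local RAG sketch: naive keyword retrieval + Ollama generation.
# Assumes Ollama is running on its default port (11434) with llama3 pulled.
import json
import urllib.request
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/generate"

def load_chunks(folder: str, chunk_size: int = 800) -> list[str]:
    """Read .md/.txt files and split them into fixed-size character chunks."""
    chunks = []
    for path in Path(folder).rglob("*"):
        if path.suffix in {".md", ".txt"}:
            text = path.read_text(encoding="utf-8", errors="ignore")
            chunks += [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Score chunks by word overlap with the question (stand-in for embeddings)."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str, context: str, model: str = "llama3") -> str:
    """Call the local Ollama generate endpoint with the retrieved context."""
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion:\n{question}\n\nAnswer:"
    )
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    chunks = load_chunks("./docs")                    # step 1: prepare local documents
    top = retrieve("What is OpenWork?", chunks)       # step 2: retrieve relevant chunks
    print(answer("What is OpenWork?", "\n---\n".join(top)))  # steps 3-4: generate
```

A production pipeline would replace the keyword overlap with embedding-based similarity search, but the overall flow (load, retrieve, prompt, generate) stays the same.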

## Setup (Ollama)

To use the included example workflow, you will need Ollama installed and running locally.

### 1. Install Ollama

Follow the official installation instructions for your platform:
https://ollama.com/download

### 2. Start Ollama

After installation, make sure the Ollama service is running. On most systems, you can start it with:

```bash
ollama serve
```

Alternatively, if you installed the desktop version, launching the Ollama app starts the service for you.

### 3. Pull the required model

The example workflow uses **llama3**. Pull it with:

```bash
ollama pull llama3
```

You can verify it is available by running:

```bash
ollama list
```
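
If you prefer to verify the setup programmatically, a short sketch against Ollama's local HTTP API (assuming the default port) might look like this:

```python
# Optional check that the Ollama server is up and llama3 has been pulled.
# Assumes the default local endpoint; adjust the host/port if you changed them.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.loads(resp.read())["models"]]

print(models)  # e.g. ['llama3:latest', ...]
assert any(name.startswith("llama3") for name in models), "llama3 not pulled yet"
```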

## Included Example

This skill includes a minimal example workflow:

* **local-ollama-doc-qa**

It demonstrates:

* Retrieving context from local documents
* Answering questions using a local Ollama model

## Example Use Cases

* Question answering over local markdown or text files
* Exploring private knowledge bases
* Prototyping RAG pipelines before production deployment

## Notes

* This skill is **optional** and **not enabled by default**
* It becomes available via the Skills settings when present
* This skill currently provides conceptual guidance, patterns, and a minimal runnable example
12 changes: 12 additions & 0 deletions .opencode/skills/rag-workflows/prompts/doc-qa.md
@@ -0,0 +1,12 @@
You are a helpful assistant answering questions based only on the provided context.

If the answer cannot be found in the context, say:
"I don't know based on the provided documents."

Context:
{{context}}

Question:
{{question}}

Answer:
31 changes: 31 additions & 0 deletions .opencode/skills/rag-workflows/workflows/local-ollama-doc-qa.yaml
@@ -0,0 +1,31 @@
# //review-2026-02-15 @twishapatel12

name: local-ollama-doc-qa
description: Local document Q&A using Ollama

inputs:
  documents_path:
    type: string
    description: Path to a folder containing local documents
  question:
    type: string
    description: Question to ask about the documents

steps:
  - id: retrieve
    type: rag.retrieve
    with:
      path: "{{documents_path}}"

  - id: answer
    type: llm.generate
    with:
      provider: ollama
      model: llama3
      prompt: ../prompts/doc-qa.md
      context: "{{steps.retrieve.context}}"
      question: "{{question}}"

outputs:
  answer:
    value: "{{steps.answer.text}}"