A hands-on prompt engineering course powered entirely by local open-source models.
No API keys. No cloud costs. Just your machine and a lightweight LLM.
This repository contains six structured Jupyter notebooks that teach core prompt engineering techniques using Ollama and the Qwen3 0.6B model. Everything runs locally on your machine: no OpenAI keys, no cloud dependencies.
Each notebook is self-contained, progressively building your skills from basic prompting principles to advanced text transformation.
prompt-engineering-ollama/
│
├── 📁 notebooks/
│   ├── 01-guidelines-for-prompting.ipynb     # Principles & tactics for effective prompts
│   ├── 02-iterative-prompt-development.ipynb # Refine prompts through iteration
│   ├── 03-summarizing.ipynb                  # Summarize text with topic focus
│   ├── 04-inferring.ipynb                    # Sentiment analysis & topic extraction
│   ├── 05-expanding.ipynb                    # Generate tailored long-form content
│   └── 06-transforming.ipynb                 # Translation, grammar, tone & format
│
├── 📁 assets/
│   └── architecture.svg                      # Architecture diagram
│
├── 📁 docs/
│   └── SETUP.md                              # Detailed setup instructions
│
├── .gitignore
├── CONTRIBUTING.md
├── LICENSE
├── README.md
└── requirements.txt
Download and install from ollama.com/download (Windows, macOS, or Linux).
ollama --version
ollama pull qwen3:0.6b
ollama serve

Note: Leave the server running in a separate terminal. It listens on http://localhost:11434.
git clone https://github.com/<your-username>/prompt-engineering-ollama.git
cd prompt-engineering-ollama
pip install -r requirements.txt
jupyter notebook

Navigate to the notebooks/ folder in Jupyter and start with 01-guidelines-for-prompting.ipynb.
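Before opening the notebooks, you can confirm the server is reachable from Python. A minimal sketch using Ollama's /api/tags endpoint, which lists installed models (the helper name is illustrative, not part of the notebooks):

```python
import requests

def ollama_ready(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the local Ollama server responds."""
    try:
        # /api/tags lists installed models; any 2xx status means the server is up
        return requests.get(f"{base_url}/api/tags", timeout=2).ok
    except requests.exceptions.RequestException:
        return False
```

If this returns False, revisit the `ollama serve` step above.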
| # | Notebook | What You'll Learn |
|---|---|---|
| 01 | Guidelines for Prompting | Two core principles: write clear instructions & give the model time to think |
| 02 | Iterative Prompt Development | Systematically refine prompts using a product fact sheet as a case study |
| 03 | Summarizing | Condense text with word limits, topic focus, and extract-vs-summarize strategies |
| 04 | Inferring | Detect sentiment, extract emotions, identify topics from reviews and articles |
| 05 | Expanding | Generate personalized customer service emails from review context |
| 06 | Transforming | Translate languages, fix grammar, adjust tone, and convert formats |
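As a taste of notebook 01, the "write clear instructions" principle often comes down to wrapping the input in delimiters so the model cannot confuse it with the instructions. A minimal sketch (this helper is illustrative, not taken from the notebooks):

```python
def make_summarize_prompt(text: str, word_limit: int = 30) -> str:
    """Build a prompt that separates the instruction from the input text
    by wrapping the input in triple-backtick delimiters."""
    return (
        f"Summarize the text delimited by triple backticks "
        f"in at most {word_limit} words.\n"
        f"```{text}```"
    )
```

The delimited prompt can then be passed straight to the shared `get_completion` helper described below.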
Every notebook uses a shared helper function that sends prompts to Ollama's local REST API:
import requests
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen3:0.6b"
def get_completion(prompt, temperature=0.2):
    """Send a prompt to the local Ollama server and return the generated text."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"temperature": temperature},
    }
    r = requests.post(OLLAMA_URL, json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["response"]

Data flow:
Notebook → Python requests → Ollama REST API → Qwen3 inference → JSON response → Notebook output
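The notebooks use `"stream": False` for simplicity, so each call returns a single JSON object. Ollama's /api/generate can also stream newline-delimited JSON chunks, each carrying a `response` fragment and a final `"done": true` marker. A hedged sketch of assembling such a stream (the function name is illustrative):

```python
import json

def assemble_stream(ndjson_lines):
    """Join the 'response' fragments from an Ollama-style NDJSON stream."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # the final chunk signals completion
            break
    return "".join(parts)
```

With `requests`, you would pass `stream=True` in the payload and `stream=True` to `requests.post`, then feed `r.iter_lines()` to a function like this.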
| Component | Minimum | Recommended |
|---|---|---|
| OS | Windows 10 / macOS 12 / Ubuntu 20.04 | Latest stable |
| RAM | 4 GB | 8 GB+ |
| Python | 3.9 | 3.10 – 3.12 |
| Disk | ~1 GB (model + deps) | 2 GB+ |
| Problem | Solution |
|---|---|
| Connection refused | Make sure `ollama serve` is running in a separate terminal |
| Model not found | Run `ollama pull qwen3:0.6b` |
| Port conflict on 11434 | Stop other Ollama instances or reboot |
| Slow responses | Close heavy applications to free RAM |
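The first two rows of the table can be turned into code: a hedged sketch that maps common request failures to the advice above (the `diagnose` helper is illustrative, not part of the repository):

```python
import requests

def diagnose(exc: Exception, model: str = "qwen3:0.6b") -> str:
    """Map a failed Ollama request to a troubleshooting hint."""
    if isinstance(exc, requests.exceptions.ConnectionError):
        return "Ollama server unreachable: run `ollama serve` in a separate terminal."
    if isinstance(exc, requests.exceptions.HTTPError):
        # A 404 from /api/generate usually means the model has not been pulled
        return f"Request failed: try `ollama pull {model}`."
    return f"Unexpected error: {exc}"
```

Wrapping `get_completion` in a try/except and printing `diagnose(exc)` gives beginners an actionable message instead of a raw traceback.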
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License; see the LICENSE file for details.
Built with ❤️ for open-source AI education