# TaskFlow AI

TaskFlow AI is an AI Product Engineering Lab built with Python, FastAPI, Ollama, Pydantic, and pytest.
The purpose of this project is to practice modern backend development and AI engineering using an incremental, hands-on, and Spec-Driven Development approach.
The project currently includes:
- FastAPI application setup
- Health check endpoint
- Task creation
- Task listing
- Get task by ID
- Update task status
- Task summarization with a local LLM
- Task breakdown into structured subtasks
- LLM provider abstraction
- Pydantic schema validation
- Controlled handling for invalid LLM outputs
- Automated tests with pytest
Current test result: `14 passed`.

## Tech Stack

| Area | Technology |
|---|---|
| Language | Python 3.12 |
| API Framework | FastAPI |
| Package Manager | uv |
| Validation | Pydantic |
| Local LLM Runtime | Ollama |
| Local Model | gemma3:1b |
| HTTP Client | httpx |
| Testing | pytest |
| Version Control | Git + GitHub |
## API Endpoints

### GET /health

Returns the API health status.
Example response:
```json
{
  "status": "ok"
}
```

### POST /tasks

Creates a new task.
Example request:

```json
{
  "title": "Study LLM integration",
  "description": "Connect FastAPI to a local Ollama model",
  "priority": "HIGH"
}
```

Example response:
```json
{
  "id": "generated-task-id",
  "title": "Study LLM integration",
  "description": "Connect FastAPI to a local Ollama model",
  "status": "TODO",
  "priority": "HIGH"
}
```

Rules:
- `title` is required.
- `title` must have at least 3 characters.
- `description` is optional.
- `priority` is optional. The default priority is `MEDIUM`.
- The initial status is always `TODO`.
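These validation rules map naturally onto a Pydantic model. A minimal sketch of what `task_schema.py` might contain (field and enum names are inferred from the examples in this README, not copied from the source):

```python
from enum import Enum

from pydantic import BaseModel, Field


class TaskPriority(str, Enum):
    # LOW is an assumption; the examples here only show MEDIUM and HIGH
    LOW = "LOW"
    MEDIUM = "MEDIUM"
    HIGH = "HIGH"


class TaskCreate(BaseModel):
    # title is required and must have at least 3 characters
    title: str = Field(min_length=3)
    # description and priority are optional; priority defaults to MEDIUM
    description: str | None = None
    priority: TaskPriority = TaskPriority.MEDIUM
    # status is not accepted on creation: the service always sets it to TODO
```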
### GET /tasks

Returns all tasks currently stored in memory.
Example response:
```json
[
  {
    "id": "generated-task-id",
    "title": "Study LLM integration",
    "description": "Connect FastAPI to a local Ollama model",
    "status": "TODO",
    "priority": "HIGH"
  }
]
```

### GET /tasks/{task_id}

Returns a specific task by ID.
Example success response:
```json
{
  "id": "generated-task-id",
  "title": "Study LLM integration",
  "description": "Connect FastAPI to a local Ollama model",
  "status": "TODO",
  "priority": "HIGH"
}
```

If the task does not exist:

```json
{
  "detail": "Task not found"
}
```

### PATCH /tasks/{task_id}/status

Updates the status of an existing task.
Example request:
```json
{
  "status": "IN_PROGRESS"
}
```

Supported statuses:
- `TODO`
- `IN_PROGRESS`
- `DONE`
- `BLOCKED`
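A hypothetical sketch of how this endpoint could be wired up (the in-memory store and handler are illustrative, not the project's actual `task_routes.py`; the 404 status code is assumed, since the README only shows the error body):

```python
from enum import Enum

from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter()

# Illustrative in-memory store; the project also keeps tasks in memory.
TASKS: dict[str, dict] = {}


class TaskStatus(str, Enum):
    TODO = "TODO"
    IN_PROGRESS = "IN_PROGRESS"
    DONE = "DONE"
    BLOCKED = "BLOCKED"


class TaskStatusUpdate(BaseModel):
    status: TaskStatus


@router.patch("/tasks/{task_id}/status")
def update_task_status(task_id: str, payload: TaskStatusUpdate) -> dict:
    task = TASKS.get(task_id)
    if task is None:
        # Mirrors the documented "Task not found" error body
        raise HTTPException(status_code=404, detail="Task not found")
    task["status"] = payload.status.value
    return task
```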
Example response:
```json
{
  "id": "generated-task-id",
  "title": "Study LLM integration",
  "description": "Connect FastAPI to a local Ollama model",
  "status": "IN_PROGRESS",
  "priority": "HIGH"
}
```

### POST /tasks/{task_id}/summarize

Generates a concise task summary using a local LLM through Ollama.
Example response:
```json
{
  "task_id": "generated-task-id",
  "summary": "Connect a FastAPI application to a local Ollama model for LLM integration."
}
```

### POST /tasks/{task_id}/breakdown

Uses the local LLM to generate structured subtasks.
Example response:
```json
{
  "task_id": "generated-task-id",
  "subtasks": [
    {
      "title": "Define API contract",
      "description": "Create the endpoint contract for task breakdown.",
      "priority": "HIGH"
    },
    {
      "title": "Implement service",
      "description": "Create the service that calls the LLM client.",
      "priority": "HIGH"
    },
    {
      "title": "Add tests",
      "description": "Validate successful and error scenarios.",
      "priority": "MEDIUM"
    }
  ]
}
```

If the LLM returns invalid structured output:

```json
{
  "detail": "Invalid LLM output"
}
```

HTTP status: `502 Bad Gateway`
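Since `httpx` is already part of the stack, here is a quick sketch of exercising these endpoints from Python, assuming the API is running locally on uvicorn's default port:

```python
import httpx

BASE_URL = "http://127.0.0.1:8000"  # uvicorn's default local address

with httpx.Client(base_url=BASE_URL) as client:
    # Create a task first
    created = client.post(
        "/tasks",
        json={
            "title": "Study LLM integration",
            "description": "Connect FastAPI to a local Ollama model",
            "priority": "HIGH",
        },
    ).json()

    # Ask the local LLM to break the task into subtasks;
    # local models can be slow, so allow a generous timeout
    response = client.post(f"/tasks/{created['id']}/breakdown", timeout=60.0)
    if response.status_code == 502:
        print("LLM returned invalid structured output")
    else:
        for subtask in response.json()["subtasks"]:
            print(subtask["title"])
```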
## AI Engineering Patterns

This project demonstrates several important AI engineering patterns.

### LLM Provider Abstraction

The API route does not call Ollama directly.
Current flow:

```
task_routes.py
-> task_summary_service.py / task_breakdown_service.py
-> llm_client.py
-> ollama_client.py
-> Local Ollama model
```
This design makes it easier to replace Ollama with another provider in the future (see the sketch after the list below).
Possible future providers:
- OpenAI
- Azure OpenAI
- Anthropic
- Gemini
- Other local models
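A minimal sketch of what such an abstraction might look like, assuming a plain text-in/text-out interface (the real `llm_client.py` and `ollama_client.py` may differ):

```python
from typing import Protocol

import httpx


class LLMClient(Protocol):
    """Interface every provider implementation must satisfy."""

    def generate(self, prompt: str) -> str:
        """Return the model's raw text completion for a prompt."""
        ...


class OllamaClient:
    """Provider implementation backed by a local Ollama server."""

    def __init__(self, model: str = "gemma3:1b", base_url: str = "http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def generate(self, prompt: str) -> str:
        # /api/generate is Ollama's standard completion endpoint
        response = httpx.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=60.0,
        )
        response.raise_for_status()
        return response.json()["response"]
```

Because services depend only on the `LLMClient` protocol, any of the providers above could be added as another class with a `generate` method, without touching route or service code.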
### Structured Output Validation

The task breakdown feature expects the LLM to return structured JSON. The application then runs the output through a validation pipeline:

```
LLM raw output
-> JSON parse
-> Pydantic validation
-> API response
```

This avoids trusting raw LLM output directly.
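A sketch of that pipeline, assuming the service raises FastAPI's `HTTPException` to produce the documented 502 (the real code may structure this differently):

```python
import json

from fastapi import HTTPException
from pydantic import BaseModel, ValidationError


class Subtask(BaseModel):
    title: str
    description: str
    priority: str


class BreakdownResult(BaseModel):
    subtasks: list[Subtask]


def parse_breakdown(raw_output: str) -> BreakdownResult:
    """Parse and validate raw LLM text, failing with a controlled 502."""
    try:
        data = json.loads(raw_output)                 # step 1: JSON parse
        return BreakdownResult.model_validate(data)   # step 2: Pydantic validation
    except (json.JSONDecodeError, ValidationError):
        # Never trust raw LLM output: surface a controlled error instead of a 500
        raise HTTPException(status_code=502, detail="Invalid LLM output")
```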
Key principle: **never trust raw LLM output directly.**
If the LLM returns invalid structured output, the API returns a controlled error instead of crashing with an internal server error.
Example:
```json
{
  "detail": "Invalid LLM output"
}
```

### Testing Strategy

Automated tests do not depend on Ollama being available.
LLM calls are mocked during tests using pytest's `monkeypatch` fixture (see the sketch after the list below).
This makes tests:
- Faster
- More predictable
- Independent from local model availability
- Safer for CI/CD
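A hedged sketch of that pattern. The patched function name `generate_summary` and its import path are illustrative assumptions; the project's real tests may patch a different seam:

```python
from fastapi.testclient import TestClient

from app.main import app
from app.services import task_summary_service  # hypothetical import path

client = TestClient(app)


def test_summarize_task_with_mocked_llm(monkeypatch):
    # Swap the real LLM-backed function for a deterministic fake,
    # so the test never touches Ollama.
    monkeypatch.setattr(
        task_summary_service,
        "generate_summary",  # hypothetical function name
        lambda task: "A short, predictable summary.",
    )

    task = client.post("/tasks", json={"title": "Study LLM integration"}).json()
    response = client.post(f"/tasks/{task['id']}/summarize")

    assert response.status_code == 200
    assert response.json()["summary"] == "A short, predictable summary."
```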
## Project Structure

```
taskflow-ai/
    app/
        api/
            task_routes.py
        core/
            settings.py
        models/
        schemas/
            task_schema.py
        services/
            llm_client.py
            ollama_client.py
            task_breakdown_service.py
            task_service.py
            task_summary_service.py
        main.py
    specs/
        break-task-into-subtasks.md
        create-task.md
        get-task-by-id.md
        summarize-task.md
        update-task-status.md
    tests/
        test_task_routes.py
    docs/
        assets/
            taskflow-ai-overview.png
        API.md
        ARCHITECTURE.md
    pyproject.toml
    uv.lock
    README.md
```
## Getting Started

Clone the repository:

```bash
git clone https://github.com/AGDM97/taskflow-ai.git
cd taskflow-ai
```

Install dependencies:

```bash
uv sync
```

Check Ollama:

```bash
ollama --version
```

Pull the local model:

```bash
ollama pull gemma3:1b
```

Check installed models:

```bash
ollama list
```

Run the API:

```bash
uv run uvicorn app.main:app --reload
```

Open Swagger UI:

http://127.0.0.1:8000/docs
Run the tests:

```bash
uv run pytest
```

Expected result:

```
14 passed
```
## Spec-Driven Development

This project follows a Spec-Driven Development workflow:

```
Spec -> Schema -> Service -> Route -> Test -> Commit
```
Each feature starts with a specification file inside the `specs/` folder.
Current specs:
- `create-task.md`
- `get-task-by-id.md`
- `update-task-status.md`
- `summarize-task.md`
- `break-task-into-subtasks.md`
Recommended repository description:

> AI Product Engineering Lab for building FastAPI-based task workflows with local LLMs, structured outputs, Pydantic validation, and automated tests.

Recommended topics:

`fastapi`, `python`, `ollama`, `llm`, `ai-engineering`, `pydantic`, `pytest`, `spec-driven-development`, `structured-output`, `local-llm`, `backend`, `ai-agents`, `uv`
## Next Improvements
- Improve JSON extraction from local LLM responses
- Add standardized API error schema
- Add PostgreSQL persistence
- Add repositories
- Add task history
- Add RAG over project documents
- Add evals for LLM outputs
- Add GitHub Actions
- Add Docker support
- Add MCP server
- Add simple web UI with Next.js
https://github.com/AGDM97/taskflow-ai
Built by AGDM97 as part of an AI Engineering learning journey.
