
Run automated tutorial walkthrough (Modules 0-11) and capture doc bugs #28

@rdwj

Description

Goal

Re-validate the tutorial end-to-end against a fresh cluster path before scheduling a human walkthrough. Pattern: Claude runs every documented step top-to-bottom, files/fixes any drift between docs and reality, and only then hands off to a human reader for flow/clarity feedback.

Scope

Modules 0–11 plus the eight setup guides:

  • 00-prerequisites.md
  • 01 through 11
  • guides/cluster-options.md
  • guides/install-openshift-ai.md
  • guides/serve-an-llm.md
  • guides/install-cli-tools.md
  • guides/registry-setup.md
  • guides/install-ogx.md
  • guides/configure-shields.md
  • guides/observability-backends.md
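Before starting the walkthrough, it is worth confirming every in-scope file actually exists at its documented path, since stale paths are one of the drift classes this issue targets. A minimal sketch, assuming the tutorial repo keeps the guides under `guides/` exactly as listed above (the repo layout is an assumption, not confirmed by this issue):

```python
from pathlib import Path

# Guide files named in the issue's scope list. Paths relative to the
# repo root are an assumption about the tutorial's layout.
GUIDES = [
    "guides/cluster-options.md",
    "guides/install-openshift-ai.md",
    "guides/serve-an-llm.md",
    "guides/install-cli-tools.md",
    "guides/registry-setup.md",
    "guides/install-ogx.md",
    "guides/configure-shields.md",
    "guides/observability-backends.md",
]

def missing_guides(root: Path) -> list[str]:
    """Return the in-scope guide files that do not exist under root."""
    return [g for g in GUIDES if not (root / g).exists()]
```

Any non-empty result is itself a doc bug to file before the module-by-module pass begins.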

Acceptance criteria

  • Every code block that says "run this" runs successfully (or has a documented "expected error" qualifier)
  • Every assertion the doc makes about command output matches reality
  • Module 9's full agent + MCP smoke test passes
  • Module 10's OGX guardrails + observability path passes
  • Module 11's llm-d scaling exercises pass
  • Each drift found is filed as its own follow-up issue and either fixed in the same pass or linked from a fix-up PR

Out of scope

  • Substantive content rewrites (only doc bugs — wrong commands, stale paths, missing steps)
  • Human readability / flow feedback (that's the next issue)

Notes

Reference model is RedHatAI/gpt-oss-20b. Reference shield is code-scanner (Path A). If a second GPU is available, exercise Path B (Llama Guard) too — see #27.
