108 changes: 25 additions & 83 deletions .claude/skills/autosteer/SKILL.md
---
name: autosteer
description: Run the full development pipeline (plan, implement, validate, review, branch, commit, PR, YouTrack update) autonomously without pausing for user confirmation between phases. Use when you want hands-off task completion. Stops only on quality-gate failures.
argument-hint: "[ticket-id-or-task-description]"
---

# Autosteer

Run the full development pipeline end-to-end without pausing for user
confirmation between phases. The agent proceeds autonomously through
every stage, stopping **only** on quality-gate failures.

## Activation

When this skill is invoked, the **pacing rule** from `CLAUDE.md` is
suspended for the remainder of the current task. Concretely:

- Do **not** pause between phases to ask for confirmation, present a plan
  and wait for a go-ahead, or pause before pushing and creating the PR.
- **Do** stop immediately if a quality gate fails (`make check`,
  `make test`, `make test-cov-check`) and attempt to fix it. If you
  cannot fix it after two attempts, stop and report the failure.

## Steps

### 1. Resolve the ticket

If the argument looks like a ticket ID (`DBA-XXX`, a bare number, or a
YouTrack URL), fetch it with `get_issue` and move it to **Develop**.

If the argument is a free-text task description without a ticket ID,
create a YouTrack issue automatically:

- Derive **Summary** (imperative, one-line) and **Description**
(2-4 sentences) from the argument text.
- Type: infer Bug / Task / Feature; default to Task.
- Create via `create_issue` in the **DBA** project.
- Move to **Develop**.

If no argument is provided, ask the user once for a task description or
ticket ID, then proceed autonomously from that point.

### 2. Plan

Read the ticket (or task description) and relevant source files.
Outline the approach internally — do not present it to the user.
Proceed immediately to implementation.

### 3. Implement

Write the code changes and tests. Run `make test` to verify.
If tests fail, fix and re-run (up to two retries).

### 4. Validate

Run quality gates sequentially:

1. `make check` — auto-fix ruff issues if needed, re-run.
2. `make test-cov-check` — if coverage is below threshold, write
additional tests and re-run.

If a gate still fails after two fix attempts, stop and report.

### 5. Review

Run `local-code-review` and `review-architecture` skills **in parallel**
(both use `context: fork`). Inspect findings:

- **Critical / High severity** — fix them, then re-run validate (step 4).
- **Medium / Low** — note them but proceed.

### 6. Branch and commit

- Use the `create-branch` skill to create a feature branch.
- Stage specific files and commit following **Commit Messages** in
`CLAUDE.md`.
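
The staging rule can be enforced with a small guard (a hypothetical helper; the commit itself still follows **Commit Messages** in `CLAUDE.md`):

```python
import subprocess

def stage_files(files: list[str]) -> None:
    """`git add` an explicit file list, refusing blanket staging."""
    if not files:
        raise ValueError("refusing to stage: no files named")
    for f in files:
        if f in ("-A", "--all", "."):
            raise ValueError(f"refusing blanket staging: git add {f}")
    subprocess.run(["git", "add", "--", *files], check=True)
```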

### 7. PR

- Push with the `-u` flag.
- Create the PR via `gh pr create` using the template from the
  `create-pr` skill. Skip the pre-push confirmation pause.

### 8. Update YouTrack

- Move the ticket to **Review** state.
- Add a comment with the PR URL and the Claude Code session cost
  (from `/cost`).

### 9. Report

Output a final summary:

- Ticket ID and link
- Branch name
- PR URL
- Any review findings that were not auto-fixed

## Guardrails

- Never commit or push to `main` — always branch first.
- Never skip quality gates — they are blocking even in autosteer mode.
- Never create a YouTrack issue without at least a task description
  (from argument or one user prompt).
- Stage specific files — never use `git add -A` or `git add .`.
- If the `gh` CLI or YouTrack MCP is unavailable, stop and inform the user.
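
The branch guardrail can be checked mechanically (a sketch; it assumes `git rev-parse --abbrev-ref HEAD` reports the current branch):

```python
import subprocess

def current_branch() -> str:
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def assert_not_main() -> None:
    """Raise before any commit or push that would land on main."""
    if current_branch() == "main":
        raise RuntimeError("on main: create a feature branch first")
```
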
88 changes: 25 additions & 63 deletions .claude/skills/check-coverage/SKILL.md
---
name: check-coverage
description: Run test coverage measurement, analyze results, and fix gaps when coverage falls below the 80% threshold. Use after implementing features or fixing bugs, when `make test-cov-check` fails, when reviewing module test adequacy, or before creating a PR that touches `src/databao_cli/`.
---

# Check Coverage

Run test coverage measurement, analyze results, and fix gaps when coverage
falls below the 80% threshold.

## Steps

### 1. Run coverage measurement

```bash
make test-cov-check
```

If all tests pass and coverage is ≥80%, you are done. Otherwise continue.

### 2. If tests fail (non-zero exit from pytest itself)

Before looking at coverage, fix the test failures:

- **Existing tests broke after your code change**: The production code is
  likely wrong. Fix the code in `src/databao_cli/`, not the tests. The
  existing tests encode expected behavior — do not weaken them to make
  them pass.
- **Newly written tests fail**: The test itself has a bug. Fix the test.
- **Ambiguous case**: Read the test name and docstring to understand its
  intent. If the test asserts correct prior behavior that your change
  intentionally alters, update the test and document the behavioral change
  in the commit message.

### 3. If coverage is below 80%

Examine the "Missing" column in the terminal report to identify uncovered
lines. Prioritize:
### 3. If coverage below 80%

1. New code you just wrote — this should always have tests.
2. Critical paths (error handling, validation, CLI command logic).
3. Utility functions with clear input/output contracts.
Check "Missing" column. Prioritize:
1. New code you just wrote (must have tests).
2. Critical paths (error handling, validation, CLI logic).
3. Utility functions with clear contracts.

Do NOT:

- Add empty or trivial tests solely to raise the coverage number.
- Add `# pragma: no cover` to bypass the threshold without justification.
- Test third-party library behavior or Streamlit UI internals.

### 4. Write targeted tests

Add tests in the corresponding `tests/test_<module>.py` file. Follow the
existing pattern:

- Use the `project_layout` fixture from `conftest.py` when the test needs
  a project directory.
- One behavior per test function.
- Test file mirrors source module structure.
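
A minimal, self-contained illustration of the pattern (function and test names are hypothetical; real tests would target `src/databao_cli/` and use the `project_layout` fixture where needed):

```python
def normalize_threshold(value: float) -> float:
    """Toy stand-in for a utility with a clear input/output contract."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("threshold must be in [0, 1]")
    return round(value, 2)

def test_normalize_threshold_rounds_to_two_places():
    # One behavior per test: rounding.
    assert normalize_threshold(0.8012) == 0.8

def test_normalize_threshold_rejects_out_of_range():
    # One behavior per test: validation.
    try:
        normalize_threshold(1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```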

### 5. Re-run and verify

```bash
make test-cov-check
```

Repeat steps 2–5 until the threshold is met and all tests pass.

### 6. Generate HTML report (optional)

```bash
make test-cov
```

Open `htmlcov/index.html` to visually inspect coverage.

## Failure handling

- If `pytest-cov` is not installed, run `uv sync --dev` first.
- If coverage is below 80% and you cannot reasonably cover the uncovered
  lines (e.g., platform-specific code, external service calls), add a
  brief `# pragma: no cover` comment with a reason, and note it in the
  commit message.

## What this skill does NOT do

- It does not run e2e tests or measure their coverage.
- It does not modify the 80% threshold — that is set in `pyproject.toml`.
- It does not auto-generate tests — it guides you to write meaningful ones.
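
A justified pragma might look like this (illustrative; the platform split is the assumed uncoverable case):

```python
import sys

def open_report_command(path: str) -> list[str]:
    """Command to open the HTML coverage report in a browser."""
    if sys.platform == "win32":  # pragma: no cover - Windows-only, CI runs Linux
        return ["cmd", "/c", "start", path]
    return ["xdg-open", path]
```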