Merged
55 commits
All commits by AlexVOiceover.

- b728c4f feat: (Jan 6, 2026)
- aec70e1 feat: (Jan 6, 2026)
- 09e0789 feat: (Jan 6, 2026)
- 4b0646c feat: (Jan 6, 2026)
- 3c8fe41 feat: (Jan 6, 2026)
- 63e19cc feat: (Jan 6, 2026)
- 3e8b15d feat: (Jan 6, 2026)
- a237f48 feat: (Jan 6, 2026)
- 03a0ee6 feat: (Jan 6, 2026)
- 3e47a58 feat: (Jan 6, 2026)
- 59d92e5 feat: (Jan 6, 2026)
- c6c588f feat: (Jan 6, 2026)
- d8910f3 feat: (Jan 6, 2026)
- 5f50596 feat: (Jan 6, 2026)
- d879b23 feat: (Jan 6, 2026)
- 926ac8a feat: (Jan 6, 2026)
- def1fff feat: (Jan 6, 2026)
- 370e2dd feat: automated report evaluation system (Jan 6, 2026)
- 37b4efd feat: automated testing evaluation system and plan-iterator optimizat… (Jan 6, 2026)
- 1ec09cc feat: major plan-iterator automation improvements (Jan 6, 2026)
- 0b851f8 feat: implement multi-term filtering with actual Airtable end dates (Jan 6, 2026)
- f83dd5e feat: enhance term filter with dropdown multi-select and staged selec… (Jan 6, 2026)
- d963b54 feat: add mutually exclusive custom date range filter (Jan 6, 2026)
- df83e56 fix: resolve attendance stats inconsistency and add status editing de… (Jan 7, 2026)
- af82988 feat: add attendance filter types and refactor plan (Jan 7, 2026)
- f1361b9 feat: add attendance filter helpers and safety guard (Jan 7, 2026)
- 19c5767 feat: add unified getApprenticeStats function with cohort filtering (Jan 7, 2026)
- bc0044e refactor: update getApprenticeAttendanceHistory for cohort-only events (Jan 7, 2026)
- 9aac3d9 feat: add reusable AttendanceFilters component (Jan 7, 2026)
- b3cd054 refactor: update Apprentice List page to use new functions and component (Jan 7, 2026)
- 142ed7a feat: add filter support to Apprentice Detail page (Jan 7, 2026)
- 76d8aef refactor: remove deprecated functions and update tests (Jan 7, 2026)
- 72703d5 feat: add 'All' filter option and increase dropdown height (Jan 7, 2026)
- 550e5ce refactor: reorganize attendance card to show Attended under Present +… (Jan 7, 2026)
- a19ab94 fix: remove redundant '(Present + Late)' text from Attended label (Jan 7, 2026)
- bff01e8 fix: pre-populate check-in time with event start time when editing st… (Jan 7, 2026)
- 5c94f94 refactor: remove redundant Actions column from attendance history (Jan 7, 2026)
- 4bf6c66 feat: add Escape key support to cancel status editing (Jan 7, 2026)
- 0683b3d feat: add Excused counter to attendance card (Jan 7, 2026)
- ce26b2f chore(config): remove postman MCP server configuration (Jan 7, 2026)
- 0e1176b docs(scratchpad): add UI improvement notes (Jan 7, 2026)
- 413a88c chore(config): add postman MCP config backup (Jan 7, 2026)
- 3ccb2a5 refactor(attendance): rename status 'Not Coming' to 'Absent' (Jan 7, 2026)
- 8f9c587 feat(ui): add Missed group to attendance card (Jan 7, 2026)
- 417d09e feat(checkin): allow staff with linked apprentice records to check in (Jan 7, 2026)
- 2828922 refactor(auth): consolidate login into single page with unified API (Jan 7, 2026)
- ba2797f refactor(attendance): flatten routes and simplify cohort selection UI (Jan 7, 2026)
- b2d0734 style(attendance): improve page styling and dynamic title (Jan 7, 2026)
- 1b9feb6 fix(attendance): improve navigation and simplify cohort table (Jan 7, 2026)
- 08150bb style(attendance): redesign detail page and stats card (Jan 7, 2026)
- f88ce01 feat(attendance): add attendance trend chart component (Jan 7, 2026)
- 748ee4a feat(chart): add lateness trend line to attendance chart (Jan 7, 2026)
- 13339b5 style(events): align styling with attendance pages (Jan 7, 2026)
- ffc197e docs: updated scratchpad (Jan 7, 2026)
- bdbcf88 fix: resolve lint errors in chart and attendance pages (Jan 7, 2026)
15 changes: 10 additions & 5 deletions .claude/README.md
@@ -9,6 +9,8 @@ Custom automation for working with Jira tasks.
| `/plan AP-23` | Start a Jira task (fetch → branch → plan → activate loop) |
| `/stop` | Stop the iterator loop early |
| `/update-report` | Document meaningful changes to `docs/report.md` |
| `/evaluate-report` | Decide if the last task needs a report.md update |
| `/evaluate-tests` | Decide what tests are required for the last task |

## Complete Workflow

@@ -37,11 +39,12 @@ The hook (`.claude/hooks/plan-iterator.sh`) runs after each Claude response:

```
docs/plan.md
- [x] Completed task 1
- [x] Completed task 2
- [ ] Current task ◄── Claude works on this
- [ ] Next task
- [ ] Final task
1. [x] Setup
- [x] 1.1 Completed task 1
- [x] 1.2 Completed task 2
2. [ ] Current task ◄── Claude works on this
- [ ] 2.1 Next task
- [ ] 2.2 Final task
```

For each task, Claude:
@@ -73,6 +76,8 @@ When the loop ends, you decide what to do next:
| `.claude/commands/plan.md` | `/plan` command definition |
| `.claude/commands/stop.md` | `/stop` command definition |
| `.claude/commands/update-report.md` | `/update-report` command definition |
| `.claude/commands/evaluate-report.sh` | `/evaluate-report` command definition |
| `.claude/commands/evaluate-tests.sh` | `/evaluate-tests` command definition |
| `docs/plan.md` | Current task checklist (created per-task) |
| `docs/report.md` | Project documentation |

103 changes: 103 additions & 0 deletions .claude/commands/analyze-plan-iterator.sh
@@ -0,0 +1,103 @@
#!/usr/bin/env bash
set -euo pipefail

# Plan Iterator Analysis
# Mirrors the AWK parsing logic used by the plan-iterator hook

PLAN_FILE="docs/plan.md"

if [[ ! -f "$PLAN_FILE" ]]; then
  echo "Missing $PLAN_FILE"
  exit 1
fi

echo "=== PLAN ITERATOR ANALYSIS (AWK PARSER) ==="
echo "Plan file: $PLAN_FILE"
echo ""
echo "Expected formats:"
echo " Main tasks: 1. [ ] Task"
echo " Subtasks: - [ ] 1.1 Task"
echo ""

# NOTE: uses the three-argument form of match(), a GNU awk (gawk) extension.
IFS=$'\t' read -r REMAINING COMPLETED NEXT_TASK LAST_DONE TO_COMPLETE < <(
  awk '
    function trim(s) { sub(/^[[:space:]]+/, "", s); sub(/[[:space:]]+$/, "", s); return s }

    # Strip the leading "1. [x] " or "- [x] " prefix, leaving the task text.
    function after_checkbox(s) {
      gsub(/^[[:space:]]*[0-9]+\.[[:space:]]*\[[ x]\][[:space:]]*/, "", s)
      gsub(/^[[:space:]]*-[[:space:]]*\[[ x]\][[:space:]]*/, "", s)
      return trim(s)
    }

    BEGIN {
      remaining = 0
      completed = 0
      next_task = ""
      last_done = ""
    }

    # Main task, incomplete: "1. [ ] Task"
    match($0, /^([0-9]+)\.[[:space:]]*\[ \][[:space:]]*(.*)$/, m) {
      n = m[1]
      main_state[n] = "incomplete"
      main_text[n] = trim(m[2])
      next
    }

    # Main task, complete: "1. [x] Task"
    match($0, /^([0-9]+)\.[[:space:]]*\[x\][[:space:]]*(.*)$/, m) {
      n = m[1]
      main_state[n] = "complete"
      main_text[n] = trim(m[2])
      next
    }

    # Subtask: "- [ ] 1.1 Task" or "- [x] 1.1 Task"
    match($0, /^[[:space:]]*-[[:space:]]*\[([ x])\][[:space:]]*([0-9]+)\.[0-9]+[[:space:]]*(.*)$/, m) {
      status = m[1]
      n = m[2]
      sub_any[n] = 1

      if (status == "x") {
        sub_complete[n]++
        completed++
        last_done = after_checkbox($0)
      } else {
        sub_incomplete[n]++
        remaining++
        if (next_task == "") next_task = after_checkbox($0)
      }

      next
    }

    END {
      # Main tasks without subtasks count as leaf tasks themselves.
      for (n in main_state) {
        if (!sub_any[n]) {
          if (main_state[n] == "complete") {
            completed++
            last_done = main_text[n]
          } else if (main_state[n] == "incomplete") {
            remaining++
            if (next_task == "") next_task = main_text[n]
          }
        }
      }

      # Incomplete main tasks whose subtasks are all done are eligible for auto-complete.
      to_complete = ""
      for (n in sub_any) {
        if (main_state[n] == "incomplete" && sub_incomplete[n] == 0 && sub_complete[n] > 0) {
          to_complete = to_complete (to_complete ? " " : "") n
        }
      }

      printf "%d\t%d\t%s\t%s\t%s\n", remaining, completed, next_task, last_done, to_complete
    }
  ' "$PLAN_FILE"
)

echo "Leaf tasks remaining: ${REMAINING:-0}"
echo "Leaf tasks completed: ${COMPLETED:-0}"
echo "Next task: ${NEXT_TASK:-<none>}"
echo "Last done: ${LAST_DONE:-<none>}"
if [[ -n "${TO_COMPLETE:-}" ]]; then
  echo "Main tasks eligible for auto-complete: $TO_COMPLETE"
else
  echo "Main tasks eligible for auto-complete: <none>"
fi
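As an illustrative sketch of the format this parser consumes, the following builds a sample `docs/plan.md` and does a rough grep-based leaf-subtask count. The `/tmp` paths and task names are hypothetical, and the grep counts are only an approximation: the real awk parser additionally reports the next/last task and main tasks eligible for auto-completion.

```shell
# Build a sample plan in the nested numbered format the parser expects.
mkdir -p /tmp/plan-demo/docs
cat > /tmp/plan-demo/docs/plan.md << 'MD'
1. [x] Setup
   - [x] 1.1 Install dependencies
   - [x] 1.2 Configure environment
2. [ ] Feature work
   - [x] 2.1 Add endpoint
   - [ ] 2.2 Add tests
MD

# Count leaf subtasks by their "N.M" prefix; main-task lines do not match.
done_count=$(grep -cE '\[x\] [0-9]+\.[0-9]+' /tmp/plan-demo/docs/plan.md)
todo_count=$(grep -cE '\[ \] [0-9]+\.[0-9]+' /tmp/plan-demo/docs/plan.md)
echo "leaf subtasks done: $done_count, remaining: $todo_count"
```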
105 changes: 105 additions & 0 deletions .claude/commands/evaluate-report.sh
@@ -0,0 +1,105 @@
#!/bin/bash

# Evaluate Report Command
# Analyzes recent work and determines if report.md needs updating
# Usage: /evaluate-report [task-description]

TASK="${1:-recent work}"

# Get recent changes for context
BASE_REF="HEAD~1"
if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
  BASE_REF="HEAD"
fi

CHANGED_FILES=$(git diff --name-only "$BASE_REF" 2>/dev/null || true)
RECENT_FILES=$(printf '%s\n' "$CHANGED_FILES" | head -10 | tr '\n' ' ')
LAST_COMMIT=$(git log -1 --oneline 2>/dev/null || echo "No commits found")

# Check for indicators that suggest report update needed
INDICATORS=""
if printf '%s\n' "$CHANGED_FILES" | grep -qE '(routes|lib|server).*\.ts$'; then
  INDICATORS="${INDICATORS}[New/modified TypeScript files] "
fi
if printf '%s\n' "$CHANGED_FILES" | grep -q '\.spec\.ts$'; then
  INDICATORS="${INDICATORS}[Test files] "
fi
if git log -1 --oneline 2>/dev/null | grep -qiE '(refactor|optimize|performance|security|auth|error|pattern|architecture)'; then
  INDICATORS="${INDICATORS}[Significant keywords in commit] "
fi

cat << EOF
I need to evaluate if the recent work should be documented in report.md.

## Task Context
Task: $TASK
Last commit: $LAST_COMMIT
Base ref: $BASE_REF
Modified files: $RECENT_FILES
Auto-detected indicators: $INDICATORS

## Evaluation Framework

EOF

# Include the skill content directly for self-contained execution
if [[ -f ".claude/skills/report-evaluator.md" ]]; then
  echo "### Report Evaluation Guidelines:"
  echo ""
  cat .claude/skills/report-evaluator.md
  echo ""
  echo "---"
  echo ""
fi

cat << EOF

## Evaluation Process

Following the guidelines above:

1. **Analyze Recent Changes**:
   - Run: git diff "$BASE_REF" --stat
   - Run: git log -3 --oneline
   - Review the actual code changes in modified files

2. **Identify Significant Technical Decisions**:
   - Architecture patterns or design choices
   - Performance optimizations
   - Error handling strategies
   - State management approaches
   - API design decisions
   - Testing strategies
   - Security implementations

3. **Map to Assessment Criteria** (docs/Assessment-criteria.md):
   - Check if changes provide evidence for P1-P11 or D1-D4
   - Don't force connections - only map if naturally applicable

4. **Make Update Decision**:

   ✅ UPDATE if:
   - Significant technical decisions were made
   - New patterns or approaches introduced
   - Performance/security improvements
   - Complex problem solved
   - Assessment criteria evidence exists

   ⏭️ SKIP if:
   - Routine bug fixes
   - Simple UI text changes
   - Code formatting only
   - Dependency updates
   - No architectural impact

5. **If Updating report.md**:
   - Add section with clear heading
   - Explain the "why" behind decisions
   - Include code examples if helpful
   - Reference assessment criteria where natural
   - Keep it concise but complete

## IMPORTANT
This evaluation is MANDATORY after each task. Even if no update is needed, you must explicitly state that you evaluated and determined no update was necessary.

Please proceed with the evaluation now.
EOF
147 changes: 147 additions & 0 deletions .claude/commands/evaluate-tests.sh
@@ -0,0 +1,147 @@
#!/bin/bash

# Evaluate Tests Command
# Analyzes recent work and determines what testing is needed
# Usage: /evaluate-tests [task-description]

TASK="${1:-recent work}"

# Get recent changes for context
BASE_REF="HEAD~1"
if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
  BASE_REF="HEAD"
fi

CHANGED_FILES=$(git diff --name-only "$BASE_REF" 2>/dev/null || true)
RECENT_FILES=$(printf '%s\n' "$CHANGED_FILES" | head -10 | tr '\n' ' ')
LAST_COMMIT=$(git log -1 --oneline 2>/dev/null || echo "No commits found")

# Check for indicators that suggest specific test types needed
INDICATORS=""
if printf '%s\n' "$CHANGED_FILES" | grep -qE 'src/routes/api.*\.ts$'; then
  INDICATORS="${INDICATORS}[New/modified API routes - Postman tests needed] "
fi
if printf '%s\n' "$CHANGED_FILES" | grep -qE 'src/lib/(server|airtable).*\.ts$'; then
  INDICATORS="${INDICATORS}[Service/business logic - Unit tests needed] "
fi
if printf '%s\n' "$CHANGED_FILES" | grep -qE '\.svelte$'; then
  INDICATORS="${INDICATORS}[Svelte components - Consider component tests] "
fi
if printf '%s\n' "$CHANGED_FILES" | grep -qE 'auth|session|security'; then
  INDICATORS="${INDICATORS}[Auth/security changes - Critical testing needed] "
fi
if git log -1 --oneline 2>/dev/null | grep -qiE '(fix|bug|error|issue)'; then
  INDICATORS="${INDICATORS}[Bug fix - Regression tests recommended] "
fi

# Count existing test files
if command -v rg >/dev/null 2>&1; then
  EXISTING_TESTS=$(rg --files -g '*.spec.ts' src 2>/dev/null | wc -l | tr -d ' ')
else
  EXISTING_TESTS=$(find src -name "*.spec.ts" 2>/dev/null | wc -l | tr -d ' ')
fi

cat << EOF
I need to evaluate what testing is needed for the recent work.

## Task Context
Task: $TASK
Last commit: $LAST_COMMIT
Base ref: $BASE_REF
Modified files: $RECENT_FILES
Test indicators: $INDICATORS
Existing test files: $EXISTING_TESTS

## Testing Framework

EOF

# Include the skill content directly for self-contained execution
if [[ -f ".claude/skills/test-evaluator.md" ]]; then
  echo "### Testing Evaluation Guidelines:"
  echo ""
  cat .claude/skills/test-evaluator.md
  echo ""
  echo "---"
  echo ""
fi

cat << EOF

## Evaluation Process

Following the guidelines above:

1. **Analyze Recent Changes**:
   - Run: git diff "$BASE_REF" --stat
   - Review modified files and understand what changed
   - Identify the type of changes (API, business logic, UI, bug fix)

2. **Determine Test Requirements**:

   **For API Routes** (/routes/api/*):
   - Create Postman collection if none exists for this module
   - Add requests for success/error scenarios
   - Include authentication testing
   - Test response validation

   **For Business Logic** (/lib/server/, /lib/airtable/):
   - Create co-located .spec.ts files
   - Test core functions with edge cases
   - Mock external dependencies (Airtable, APIs)
   - Verify error handling

   **For Components** (*.svelte):
   - Consider component tests if complex logic exists
   - Test reactive state management
   - Test event handling

3. **Check Existing Tests**:
   - Run: npm test
   - Ensure existing tests still pass
   - Update tests if functionality changed
   - Add missing test cases

4. **Create Missing Tests**:

   **Unit Tests**:
   - Create src/path/to/file.spec.ts next to source
   - Follow Vitest patterns with describe/it/expect
   - Test normal cases, edge cases, error cases

   **API Tests** (Use MCP Postman integration):
   - Get existing collections: Use mcp__postman__getCollections
   - Create new collection if needed: Use mcp__postman__createCollection
   - Add test requests: Use mcp__postman__createCollectionRequest
   - Include proper test scripts for validation

5. **Run and Verify**:
   - Execute: npm test
   - Fix any failing tests
   - Ensure new tests pass
   - Verify coverage of critical paths

## Decision Matrix

✅ **CREATE TESTS** if:
- New API endpoints added
- Business logic/service functions created
- Authentication/security changes
- Bug fixes (regression prevention)
- Complex component logic

⏭️ **SKIP TESTS** if:
- Simple UI text changes
- Styling/CSS only
- Documentation updates
- Configuration without logic changes

## IMPORTANT
After evaluation, you must either:
1. **Create the identified tests** (unit tests and/or Postman collections)
2. **Explicitly state** why tests were not needed for this change

Testing evaluation is MANDATORY - never skip this step.

Please proceed with the evaluation now.
EOF
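The co-located unit-test convention the prompt above describes (a `.spec.ts` next to its source, using Vitest's `describe`/`it`/`expect`) can be sketched as below. The `/tmp` module path and the return shape asserted for `getApprenticeStats` are hypothetical illustrations, not the project's real API.

```shell
# Scaffold a co-located Vitest spec next to a (hypothetical) source module.
mkdir -p /tmp/spec-demo/src/lib/server
cat > /tmp/spec-demo/src/lib/server/stats.spec.ts << 'TS'
import { describe, it, expect } from 'vitest';
import { getApprenticeStats } from './stats';

describe('getApprenticeStats', () => {
  it('returns zeroed counters when there are no events', () => {
    // Hypothetical return shape; adjust to the real function signature.
    expect(getApprenticeStats([])).toEqual({ present: 0, late: 0, absent: 0 });
  });
});
TS
echo "created: /tmp/spec-demo/src/lib/server/stats.spec.ts"
```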