AGENTS.md

Guidelines for AI agents working on the ForgeSyte Plugins repository.

Project Overview

This repository contains independent pip-installable plugins for the ForgeSyte platform. Each plugin is located in plugins/<plugin-name> and follows a standardized structure.

Key Directories

forgesyte-plugins/
├── plugins/
│   ├── forgesyte-yolo-tracker/   # YOLO sports analysis plugin
│   ├── ocr/
│   ├── block_mapper/
│   ├── moderation/
│   ├── motion_detector/
│   └── plugin_template/          # Template for new plugins
├── scripts/                      # Utility scripts
├── docs/                         # Documentation
└── README.md                     # Main readme

Development Commands

Setting Up a Plugin

cd plugins/<plugin-name>

# Create virtual environment
uv venv --python 3.9
source .venv/bin/activate

# Install plugin in development mode
uv pip install -e .

# Install with all dependencies
uv pip install -e ".[dev]"

Running Tests

From the plugin directory (plugins/forgesyte-yolo-tracker/):

Contract tests only (fast, CI-safe):

make test-fast
# or: pytest tests_contract -v --cov --cov-report=term-missing

Heavy tests only (requires GPU/models):

make test-heavy
# or: pytest tests_heavy -v -m heavy

All tests (contract + heavy):

make test-all
# or: pytest tests_contract tests_heavy -v

Quality checks:

make lint          # ruff check + fix
make format        # black + isort
make type-check    # mypy

Code Quality

Always run these checks before committing:

# Lint (with autofix)
uv run ruff check src/ --fix

# Format
uv run black src/
uv run isort src/

# Type check
uv run mypy src/

# Run tests
uv run pytest src/tests/ -v

Branching and Pull Request Workflow

1. Create a Branch

Always create a new branch for changes:

# Create and switch to new branch
git checkout -b feature/my-new-feature

# Or for bug fixes
git checkout -b fix/description-of-fix

# Or for refactoring
git checkout -b refactor/component-being-refactored

2. Make Changes and Commit

# Stage changes
git add .

# Commit with proper message format
git commit -m "feat(yolo-tracker): Add player detection inference

Implement detect_players_json() and detect_players_json_with_annotated_frame()

Closes #32"

3. Push Branch

# Push branch to remote
git push -u origin feature/my-new-feature

4. Create Pull Request

# Create PR using gh CLI
gh pr create --title "feat(yolo-tracker): Add player detection" \
  --body "## Summary

Add player detection inference module.

## Changes

- Add inference/player_detection.py
- Add tests for detect_players_json
- Update manifest.json

## Testing

- All tests pass
- Ruff lint clean
- Mypy type check clean

Closes #32" \
  --label enhancement

5. Get Review and Merge

  1. Wait for review/approval
  2. Merge PR via GitHub UI or CLI:
    gh pr merge --admin --merge
  3. Delete branch after merge:
    git checkout main && git pull && git branch -d feature/my-new-feature

Branch Naming Conventions

| Type | Prefix | Example |
| --- | --- | --- |
| Feature | feature/ | feature/radar-visualization |
| Bug Fix | fix/ | fix/memory-leak-detection |
| Refactor | refactor/ | refactor/plugin-initialization |
| Docs | docs/ | docs/update-readme |
| Experiment | experiment/ | experiment/new-tracking-algorithm |

PR Title Format

<type>(<scope>): <subject>

- feat(yolo-tracker): Add radar visualization
- fix(ocr): Fix text extraction bug
- refactor(moderation): Simplify rule engine
- docs(readme): Add installation instructions

When to Create a Branch

ALWAYS create a branch for:

  • New features
  • Bug fixes
  • Refactoring changes
  • Documentation updates
  • Test additions

Direct commits to main are NOT allowed.

PR Description Template

## Summary

Brief description of changes.

## Changes

- List of files changed
- What was modified in each file

## Testing

- Test results
- Any manual testing done

## Checklist

- [ ] Tests pass
- [ ] Ruff lint clean
- [ ] Mypy type check clean
- [ ] Documentation updated (if applicable)

## Related Issue

Closes #XX

TDD Workflow for this Project

1. Test Structure

All tests follow this pattern:

  • Fast tests: No model loading, mock dependencies, run on CPU
  • Model tests: Require YOLO model loading, skipped by default
  • Integration tests: Require GPU, skipped by default
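The fast-test bullet above (no model loading, mocked dependencies) can be sketched as follows. This is a self-contained illustration: detect_players_json here is a stand-in, and real contract tests in tests_contract/ would instead patch the plugin's own model loader.

```python
from unittest.mock import MagicMock

import numpy as np


def detect_players_json(frame, model, device="cpu"):
    """Stand-in for the real inference function: run the model, wrap as JSON."""
    boxes = model(frame)
    return {"detections": [{"box": list(b)} for b in boxes]}


def test_contract_no_model_loaded():
    # The model is a MagicMock, so no weights are read and no GPU is touched.
    fake_model = MagicMock(return_value=[[0, 0, 10, 10]])
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    result = detect_players_json(frame, model=fake_model)
    assert isinstance(result, dict)
    assert result["detections"] == [{"box": [0, 0, 10, 10]}]
```

Because the mock stands in for the model, this test runs on any CPU in milliseconds, which is what keeps contract tests CI-safe.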

2. Test Configuration

Each test file with model dependencies MUST include:

import os
import pytest

RUN_MODEL_TESTS = os.getenv("RUN_MODEL_TESTS", "0") == "1"

pytestmark = pytest.mark.skipif(
    not RUN_MODEL_TESTS,
    reason="Set RUN_MODEL_TESTS=1 to run (requires YOLO model)"
)

3. TDD Pattern for New Features

When adding a new inference module:

Step 1: Write failing tests first

# Create test file
touch src/tests/test_inference_new_feature.py

# Write tests with proper skip markers
# Run to verify they fail (module doesn't exist yet)
uv run pytest src/tests/test_inference_new_feature.py -v

Step 2: Implement the module

# Create the inference module
touch src/forgesyte_yolo_tracker/inference/new_feature.py

# Implement functions to pass tests
# Use sports.common when available (MIT License)

Step 3: Verify tests pass

uv run pytest src/tests/test_inference_new_feature.py -v

4. Test File Naming Convention

| Type | Pattern | Example |
| --- | --- | --- |
| Unit tests | test_*.py | test_player_detection.py |
| Integration tests | src/tests/integration/test_*.py | test_team_integration.py |
| Utils tests | src/tests/utils/test_*.py | test_soccer_pitch.py |

5. Model-Dependent Test Pattern

For tests requiring YOLO models (skipped unless RUN_MODEL_TESTS=1):

"""Tests for new feature inference module."""

import os
import pytest
import numpy as np
from unittest.mock import MagicMock, patch

RUN_MODEL_TESTS = os.getenv("RUN_MODEL_TESTS", "0") == "1"

pytestmark = pytest.mark.skipif(
    not RUN_MODEL_TESTS,
    reason="Set RUN_MODEL_TESTS=1 to run (requires YOLO model)"
)


class TestNewFeatureJSON:
    """Tests for detect_new_feature_json function."""

    def test_returns_dict(self) -> None:
        """Verify returns dictionary."""
        from forgesyte_yolo_tracker.inference.new_feature import detect_new_feature_json

        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        result = detect_new_feature_json(frame, device="cpu")

        assert isinstance(result, dict)

    def test_returns_expected_keys(self) -> None:
        """Verify returns expected keys."""
        from forgesyte_yolo_tracker.inference.new_feature import detect_new_feature_json

        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        result = detect_new_feature_json(frame, device="cpu")

        assert "detections" in result

Plugin Structure for forgesyte-yolo-tracker

plugins/forgesyte-yolo-tracker/src/forgesyte_yolo_tracker/
├── plugin.py              # ForgeSyte-native functions (no class)
├── manifest.json          # Tool schema
├── inference/             # Frame-based JSON modes
│   ├── player_detection.py    # detect_players_json(), *_with_annotated_frame()
│   ├── player_tracking.py     # track_players_json(), *_with_annotated_frame()
│   ├── ball_detection.py      # detect_ball_json(), *_with_annotated_frame()
│   ├── team_classification.py # classify_teams_json(), *_with_annotated_frame()
│   ├── pitch_detection.py     # detect_pitch_json(), *_with_annotated_frame()
│   └── radar.py               # radar_json(), radar_json_with_annotated_frame()
├── video/                 # Video processing modes
│   └── *_video.py         # run_*_video_frames(), run_*_video()
├── utils/                 # Utilities
│   ├── ball.py            # BallTracker
│   ├── soccer_pitch.py    # Soccer pitch drawing
│   └── (imports from sports.common)
├── configs/
│   └── soccer.py          # SoccerPitchConfiguration
└── models/                # Model files
    ├── football-player-detection-v3.pt
    ├── football-ball-detection-v2.pt
    └── football-pitch-detection-v1.pt
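Since plugin.py exposes ForgeSyte-native functions rather than a class, a tool is a module-level function that returns JSON-serializable data. A minimal sketch follows; the function name matches the tree above, but the signature and return fields are assumptions, not the real contract:

```python
"""Sketch of a plugin.py-style tool function (signature and fields assumed)."""
import base64
from typing import Any, Dict


def player_detection(frame_b64: str, device: str = "cpu") -> Dict[str, Any]:
    """Hypothetical tool entry point: decode the base64 frame, return JSON."""
    frame_bytes = base64.b64decode(frame_b64)
    # A real implementation would decode the bytes to an image and call
    # detect_players_json(frame, device=device) from the inference package.
    return {"device": device, "frame_size": len(frame_bytes), "detections": []}
```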

Design Decisions (from Lead Designer)

| Decision | Value |
| --- | --- |
| Team colors | Team A: #00BFFF, Team B: #FF1493, GK: #FFD700, Ref: #FF6347 |
| Confidence defaults | Player: 0.25, Ball: 0.20, Pitch: 0.25 |
| Radar resolution | 600×300 px, pitch: 12000×7000 cm |
| Team classification | On-the-fly (collect → UMAP → KMeans → predict) |
| Model versions | player-v3, ball-v2, pitch-v1 |
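The on-the-fly team-classification pipeline above (collect → UMAP → KMeans → predict) can be sketched with scikit-learn; this is illustrative only, and the crop-embedding and UMAP projection steps are elided:

```python
"""Sketch of the collect → UMAP → KMeans → predict pipeline (illustrative)."""
import numpy as np
from sklearn.cluster import KMeans

# The real pipeline embeds collected player crops and projects the embeddings
# with UMAP (umap-learn) before clustering; both steps are elided here.


def classify_teams(features: np.ndarray) -> np.ndarray:
    """Cluster one feature vector per player into two teams (labels 0/1)."""
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

Clustering into exactly two groups is what makes the assignment unsupervised: no team labels are needed, only that the two kits separate in feature space.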

Useful Commands

# Check Python version
python --version

# List installed packages
uv pip list

# Freeze requirements
uv pip freeze > requirements.txt

# Create requirements.txt for plugin
cd plugins/forgesyte-yolo-tracker
uv pip freeze > requirements.txt

Common Patterns

Import Pattern for Inference Modules

from forgesyte_yolo_tracker.inference.player_detection import (
    detect_players_json,
    detect_players_json_with_annotated_frame,
)

Video Processing Pattern

from forgesyte_yolo_tracker.video.player_detection_video import (
    run_player_detection_video_frames,
    run_player_detection_video,
)

# Generator pattern for frames
for annotated_frame in run_player_detection_video_frames(source, device="cpu"):
    process(annotated_frame)

# Full video pipeline
run_player_detection_video(source, target, device="cpu")

Using sports.common

from sports.common import TeamClassifier, ViewTransformer, create_batches

Testing on GPU

Tests requiring GPU should be run on Kaggle with CUDA available:

git clone https://github.com/rogermt/forgesyte-plugins.git
cd forgesyte-plugins/plugins/forgesyte-yolo-tracker

# Install with GPU support
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
uv pip install -e .

# Download model files to models/
# football-player-detection-v3.pt
# football-ball-detection-v2.pt
# football-pitch-detection-v1.pt

# Run contract tests only (fast, CI-safe)
make test-fast

# Run heavy tests only
make test-heavy

# Run all tests (contract + heavy)
make test-all

# Run with coverage (excludes model internals)
pytest tests_contract tests_heavy -v --cov --cov-report=term-missing

Test Organization (Issue #79)

Tests are now organized into two categories:

  • tests_contract/ - Fast, mocked, plugin-API tests (run in CI, ~90 tests, 2 min)
  • tests_heavy/ - Model-dependent, inference tests (optional, ~200 tests, 15 min)

Why this structure:

  • CI runs only contract tests (fast, stable)
  • Heavy tests preserved for debugging/research
  • Coverage measures plugin contract, not YOLO internals
  • Professional separation of concerns

See /docs/design/TEST_FLOW.md for architecture diagram.

Code Style

  • Follow PEP 8
  • Use type hints
  • Run ruff check src/ --fix before committing
  • Add docstrings to all public functions
  • Keep functions small and focused
  • Use mocking for CPU tests (no model loading)

Commit Message Format

<type>(<scope>): <subject>

<body>

<footer>

Types: feat, fix, refactor, docs, chore, test

Example:

feat(yolo-tracker): Add player detection inference module

Implement detect_players_json() and detect_players_json_with_annotated_frame()

Closes #32

Getting Help

  • See README.md for plugin documentation
  • See docs/development/ for general guidelines
  • Check existing tests in src/tests/ for patterns

Video Tracker Integration (Current)

Goal: Integrate YOLO tracker with forgesyte web-ui for real-time video streaming

Status: Web-UI integration in progress (see forgesyte/AGENTS.md)

What This Means for Plugins

No plugin code changes required — your plugin is already compatible!

  • Manifest is frozen (no changes)
  • Tool functions remain as-is (player_detection, player_tracking, etc.)
  • Frame input/output contracts are stable

Web-UI Will Do

  • Discover your tools from manifest.json
  • Call /plugins/{id}/tools/{tool}/run with base64 frames
  • Render results using generic canvas overlays
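The tool-run call in the list above can be sketched from the request side. The endpoint shape comes from the bullet; the host is irrelevant here, and the "frame" payload field name is an assumption:

```python
"""Sketch of the request web-ui sends to a tool endpoint (field name assumed)."""
import base64
import json


def build_tool_run_request(plugin_id: str, tool: str, frame: bytes):
    """Return (path, JSON body) for a POST to /plugins/{id}/tools/{tool}/run."""
    path = f"/plugins/{plugin_id}/tools/{tool}/run"
    body = json.dumps({"frame": base64.b64encode(frame).decode("ascii")})
    return path, body
```

Base64-encoding the frame is what keeps the body valid JSON; the plugin side decodes it back to bytes before inference.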

When to Test

  • Week 3: GPU tests on Kaggle with real YOLO models
  • Command: RUN_MODEL_TESTS=1 pytest tests/integration/

Quick Reference

  • Your tools: player_detection, player_tracking, ball_detection, pitch_detection, radar
  • Your manifest: Defines input/output contracts (don't modify)
  • Your tests: Keep them passing (CPU-only in CI, GPU on Kaggle)