Thank you for contributing to Agent Actions! This guide covers coding standards and development workflow.
Before your first contribution, you must sign our Contributor License Agreement. When you open a pull request, the CLA Assistant bot will automatically ask you to sign if you haven't already. This is a one-time process.
If you cannot sign the CLA, we are unable to merge your pull request. You are still welcome to open issues, participate in discussions, and comment on existing issues.
```shell
# Install dependencies
task dev

# Install pre-commit hooks (run once after cloning)
pre-commit install
```

Pre-commit hooks run automatically on every commit, catching the same issues as CI before they reach GitHub.
```shell
# Run all checks (matches CI)
task check

# Individual checks
task lint          # ruff linting + import sorting
task format:check  # ruff format check
task mypy          # type checking

# Run pre-commit manually across all files
pre-commit run --all-files
```

The pre-commit hooks run:
- ruff — lint and auto-fix (import sorting, style)
- ruff-format — formatting
- mypy — type checking
This project uses f-strings as the standard logging format for readability and consistency.
```python
# F-strings (project standard)
logger.info(f"Processing {item_id} with value {value}")
logger.debug(f"Workflow {name} completed in {duration:.2f} seconds")
logger.warning(f"Retry attempt {attempt}/{max_retries} for {operation}")
logger.error(f"Failed to process {item_id}: {error}")

# In exception handlers, use .exception() for automatic traceback
try:
    do_something()
except Exception as e:
    logger.exception(f"Unexpected error processing {item}")  # Preferred
    # NOT: logger.error(f"Error: {e}", exc_info=True)
```

```python
# BAD: Missing f-prefix with {variable} syntax
# This logs literal "{item_id}" instead of the value!
logger.info("Processing {item_id}")

# BAD: Mixed formatting styles
logger.info("Processing {item_id} with %s", value)

# BAD: Using .error() with exc_info=True in exception handlers
# Use .exception() instead
logger.error(f"Error: {e}", exc_info=True)
```

The bug pattern `logger.info("Processing {item_id}")` (missing f-prefix) is particularly dangerous because:
- No exception raised - Code runs without errors
- Silent failure - Logs show `{item_id}` literally instead of the value
- Hard to detect - Only visible when you read the logs carefully
- Wastes debugging time - Logs are useless for troubleshooting
We use multiple tools to catch logging issues:
- Ruff (`task lint:ruff`) - Catches logging anti-patterns
- AST Checker (`task lint:logging`) - Detects `{var}` without f-prefix
- Pre-commit hooks - Runs both on every commit
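A check like `task lint:logging` can be sketched with Python's `ast` module. This is a simplified illustration of the idea only; the function name and heuristics below are hypothetical, not the project's actual checker:

```python
import ast


def find_missing_f_prefix(source: str) -> list[int]:
    """Return line numbers of logger calls whose first argument is a
    plain string literal containing a {placeholder} (likely a missing f-prefix)."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == "logger"
            and node.args
            and isinstance(node.args[0], ast.Constant)  # f-strings are JoinedStr, not Constant
            and isinstance(node.args[0].value, str)
            and "{" in node.args[0].value
        ):
            flagged.append(node.lineno)
    return flagged


print(find_missing_f_prefix('logger.info("Processing {item_id}")'))  # [1]
```

Because f-strings parse as `ast.JoinedStr` rather than `ast.Constant`, correctly prefixed strings pass the check without any extra logic.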
agent-actions uses an event-driven architecture for user-facing output and observability.
```
Application Code
    │
    ├── logger.info("msg") ──┐
    │                        │
    └── fire_event(Event) ───┴──► EventManager
                                       │
                                  ┌────┼────┐
                                  │    │    │
                                  ▼    ▼    ▼
                            Console  JSON File  run_results.json
```
All logging flows through the EventManager:
- Python logging (`logger.info()`) → LoggingBridgeHandler → Events
- Direct events (`fire_event()`) → Events
- Events → Handlers (Console, JSON, run_results.json)
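The LoggingBridgeHandler step can be sketched as a stdlib `logging.Handler` that converts each `LogRecord` into an event. Everything below is a simplified stand-in for illustration, not the project's real classes:

```python
import logging
from dataclasses import dataclass


@dataclass
class LogEvent:
    """Simplified stand-in for the project's BaseEvent."""
    message: str
    level: str


class LoggingBridgeHandler(logging.Handler):
    """Converts stdlib LogRecords into events and forwards them to a
    dispatch callable (the EventManager plays this role in the real code)."""

    def __init__(self, fire):
        super().__init__()
        self.fire = fire

    def emit(self, record: logging.LogRecord) -> None:
        self.fire(LogEvent(message=record.getMessage(), level=record.levelname))


received: list[LogEvent] = []
logger = logging.getLogger("bridge-demo")
logger.setLevel(logging.INFO)
logger.addHandler(LoggingBridgeHandler(received.append))

logger.info("Processing item")
print(received[0])  # LogEvent(message='Processing item', level='INFO')
```

This is why `logger.info()` calls and direct `fire_event()` calls end up in the same handler pipeline: both paths produce events on the same bus.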
Create event classes in `agent_actions/logging/events/types.py`:
```python
from dataclasses import dataclass

from agent_actions.logging.events.base import BaseEvent, EventCategory


@dataclass
class MyCustomEvent(BaseEvent):
    """Emitted when custom action occurs."""
    category: EventCategory = EventCategory.SYSTEM
    event_type: str = "custom_action"

    # Event-specific data
    action_name: str = ""
    result: str = ""

    def __post_init__(self):
        super().__post_init__()
        # Add event-specific data to the data dict
        self.data.update({
            "action_name": self.action_name,
            "result": self.result,
        })
```

Then emit the event:
```python
from agent_actions.logging.core.manager import fire_event
from agent_actions.logging.events import MyCustomEvent

fire_event(MyCustomEvent(
    message="Custom action completed",
    action_name="my_action",
    result="success",
))
```

Events are organized by category:
- workflow - Workflow lifecycle (start, complete, error)
- agent - Agent execution (start, complete, skip, error)
- batch - Batch job operations (submit, complete, error)
- validation - Validation events (start, pass, fail, warning)
- progress - Progress updates
- system - System-level events
Implement custom handlers by extending the base handler:
```python
from agent_actions.logging.core.handlers import EventHandler
from agent_actions.logging.events.base import BaseEvent


class MyHandler(EventHandler):
    def accepts(self, event: BaseEvent) -> bool:
        """Return True for events this handler should process."""
        return event.category == "workflow"

    def handle(self, event: BaseEvent) -> None:
        """Process the event."""
        print(f"Workflow event: {event.message}")

    def flush(self) -> None:
        """Flush any buffered data."""
        pass
```

Register handlers with the EventManager:
```python
from agent_actions.logging.core.manager import get_manager

manager = get_manager()
manager.register(MyHandler())
```

Test event emission and handling:
```python
from agent_actions.logging.core.manager import EventManager, fire_event
from agent_actions.logging.events import WorkflowStartEvent


def test_workflow_event():
    manager = EventManager.get()

    # Create mock handler
    events_received = []

    def mock_handler(event):
        events_received.append(event)

    # Register handler
    manager.register_function(mock_handler)

    # Fire event
    fire_event(WorkflowStartEvent(
        message="Test workflow",
        workflow_name="test",
    ))

    # Verify
    assert len(events_received) == 1
    assert events_received[0].workflow_name == "test"
```

Events automatically inherit context from the CorrelationContext:
```python
from agent_actions.logging.core.manager import fire_event, get_manager
from agent_actions.logging.events import AgentStartEvent

# Set context (automatically propagates to all events)
manager = get_manager()

with manager.context(
    workflow_name="my_workflow",
    correlation_id="abc123",
):
    fire_event(AgentStartEvent(
        message="Starting agent",
        agent_name="extract_data",
    ))
    # Event will have workflow_name and correlation_id populated
```

- Use typed events - Create specific event classes, don't use `BaseEvent` directly
- Clear messages - Event messages should be human-readable and actionable
- Structured data - Put machine-readable data in the `data` dict
- Categories matter - Use correct category for proper filtering
- Test handlers - Write tests for custom handlers
See `agent_actions/logging/events/types.py` for all available event types.
```shell
# Run all tests
task test

# Run with coverage
task test:coverage

# Run specific test types
task test:unit
task test:integration

# Run in parallel
task test:fast
```

- Python 3.11+
- 4-space indentation
- 100 character line length
- Type hints encouraged
- Run `task check` before committing
We use changie to manage changelog entries. Every PR that changes user-facing behavior should include a changelog entry.
```shell
# Interactive — prompts for kind, description, and optional issue number
task changelog:new
```

This creates a YAML fragment in `.changes/unreleased/`. Commit it with your PR.
```shell
# Batch unreleased entries into a version (e.g., 2.1.0)
task changelog:batch -- 2.1.0

# Merge all versions into CHANGELOG.md
task changelog:merge
```

`changie batch` also updates the version in `pyproject.toml` and `agent_actions/__version__.py` via the replacements configured in `.changie.yaml`.
Packages are published to PyPI automatically when a GitHub Release is created.
- OIDC Trusted Publishing must be configured on pypi.org
- Version in git tag, `pyproject.toml`, and `agent_actions/__version__.py` must all match
- Ensure all changelog entries are batched: `task changelog:batch -- X.Y.Z`
- Merge to `main`
- Create a GitHub Release with tag `vX.Y.Z`
- The `publish.yml` workflow validates versions and publishes to PyPI
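The version-match requirement can be checked locally with a small helper before tagging. This is a hypothetical sketch (the function name and regexes are ours, not the actual `publish.yml` validation):

```python
import re


def versions_match(tag: str, pyproject_text: str, version_py_text: str) -> bool:
    """Return True if the git tag, pyproject.toml, and __version__.py agree."""
    tag_version = tag.lstrip("v")
    # version = "X.Y.Z" in pyproject.toml
    m1 = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE)
    # __version__ = "X.Y.Z" in agent_actions/__version__.py
    m2 = re.search(r'__version__\s*=\s*"([^"]+)"', version_py_text)
    return bool(m1 and m2) and tag_version == m1.group(1) == m2.group(1)


print(versions_match("v2.1.0", 'version = "2.1.0"', '__version__ = "2.1.0"'))  # True
```

Running something like this before creating the GitHub Release avoids a failed publish run from a tag/file mismatch.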
All commits must include a `Signed-off-by` trailer. Use the `--signoff` flag:

```shell
git commit --signoff -m "your message"
```

To set this automatically on every commit:
```shell
git config --local commit.gpgsign false
git config --local format.signoff true
```

- Create a feature branch from `main`
- Make your changes
- Add a changelog entry: `task changelog:new`
- Ensure `task check` passes
- Ensure `task test` passes
- Submit PR with signed commits