
Implement structured output formats and GitHub Actions integration#131

Merged
Kilo59 merged 24 commits into main from reports
Mar 29, 2026

Conversation

@Kilo59
Owner

@Kilo59 Kilo59 commented Mar 29, 2026

Summary

This PR implements structured output formatting for the ruff-sync CLI via a unified ResultFormatter architecture, enabling machine-readable diagnostics, JSON export for automated tooling, and native GitHub Actions annotations.

Key Features

  • New CLI Flag: Added --output-format with support for text (default), json, and github.
  • Architectural Refactor: All CLI output (diffs, diagnostics, and status notes) is now routed through a shared ResultFormatter protocol, ensuring consistency across formats.
  • Native CI Integration: GitHub Actions annotations now produce inline errors pointing directly to the out-of-sync files (e.g., .pre-commit-config.yaml).
  • Robustness:
    • Implementation of %, \r, and \n escaping for GitHub workflow commands.
    • Graceful handling of "version already exists" errors in the PyPI publication pipeline.
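
The workflow-command escaping mentioned under Robustness can be sketched roughly as follows. The helper name and the `is_property` flag are assumptions inferred from the class diagram's `_escape` method, not the exact implementation:

```python
def escape_github_message(value: str, *, is_property: bool = False) -> str:
    """Percent-encode characters that break GitHub workflow commands.

    GitHub Actions requires % -> %25, \r -> %0D, and \n -> %0A in the
    message portion; property values (e.g. file=...) additionally need
    ':' and ',' encoded.
    """
    value = value.replace("%", "%25").replace("\r", "%0D").replace("\n", "%0A")
    if is_property:
        value = value.replace(":", "%3A").replace(",", "%2C")
    return value


# Example: a multi-line message stays on one annotation line after escaping.
msg = escape_github_message("50% done\nretry")
print(f"::error file=.pre-commit-config.yaml::{msg}")
```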

Testing & Quality

  • 100% Coverage: Reached 100% test coverage for the src/ruff_sync/formatters.py module using a new parameterized test suite.
  • Improved Fixtures: Introduced consolidated logging and capture fixtures to ensure CLI output is reliably verified in tests.

@sourcery-ai

sourcery-ai bot commented Mar 29, 2026

Reviewer's Guide

Adds a pluggable output formatting layer (text, JSON, GitHub Actions annotations) to ruff-sync’s core commands and CLI, routes key messages and diffs through this abstraction, tightens logging configuration for better test capture, and wires CI/docs/tests to the new output behavior while bumping version and relaxing patch coverage slightly.

Sequence diagram for ruff_sync check with pluggable output formats

sequenceDiagram
    actor Developer
    participant GitHubActions
    participant CLI_main as cli_main
    participant Core as core_check
    participant Formatters as get_formatter
    participant Formatter as ResultFormatter
    participant Logger as logging

    Developer->>GitHubActions: push or PR
    GitHubActions->>CLI_main: run ruff-sync check --semantic --output-format github

    CLI_main->>CLI_main: parse args
    CLI_main->>CLI_main: build Arguments(output_format=OutputFormat.GITHUB)
    CLI_main->>Core: check(args)

    Core->>Formatters: get_formatter(args.output_format)
    Formatters-->>Core: GithubFormatter instance
    Core->>Formatter: note("🔍 Checking Ruff sync status...")

    alt config file missing
        Core->>Formatter: error(message, file_path, logger)
        Formatter->>Logger: error(message)
        Formatter->>GitHubActions: ::error file=...::message
        Core-->>CLI_main: exit code 1
    else config drift
        Core->>Formatter: error("config is out of sync", file_path, logger)
        Formatter->>Logger: error(message)
        Formatter->>GitHubActions: ::error file=...::message
        opt diff requested
            Core->>Formatter: diff(diff_text)
            Formatter->>GitHubActions: print unified diff
        end
        Core-->>CLI_main: exit code 1
    else in sync
        Core->>Formatter: success("configuration is in sync")
        Core->>Core: _check_pre_commit_sync(args, Formatter)
        alt pre-commit out of sync
            Core->>Formatter: warning("pre-commit hook version is out of sync", file_path, logger)
            Formatter->>Logger: warning(message)
            Formatter->>GitHubActions: ::warning file=...::message
            Core-->>CLI_main: exit code 2
        else pre-commit in sync
            Core-->>CLI_main: exit code 0
        end
    end

    CLI_main-->>GitHubActions: process exit code

Class diagram for new output formatting layer and CLI wiring

classDiagram
    class OutputFormat {
      <<enum>>
      TEXT
      JSON
      GITHUB
      +__str__() str
    }

    class ResultFormatter {
      <<interface>>
      +note(message str) void
      +info(message str, logger logging.Logger) void
      +success(message str) void
      +error(message str, file_path pathlib.Path, logger logging.Logger) void
      +warning(message str, file_path pathlib.Path, logger logging.Logger) void
      +debug(message str, logger logging.Logger) void
      +diff(diff_text str) void
    }

    class TextFormatter {
      +note(message str) void
      +info(message str, logger logging.Logger) void
      +success(message str) void
      +error(message str, file_path pathlib.Path, logger logging.Logger) void
      +warning(message str, file_path pathlib.Path, logger logging.Logger) void
      +debug(message str, logger logging.Logger) void
      +diff(diff_text str) void
    }

    class GithubFormatter {
      +note(message str) void
      +info(message str, logger logging.Logger) void
      +success(message str) void
      +error(message str, file_path pathlib.Path, logger logging.Logger) void
      +warning(message str, file_path pathlib.Path, logger logging.Logger) void
      +debug(message str, logger logging.Logger) void
      +diff(diff_text str) void
      -_escape(value str, is_property bool) str
    }

    class JsonFormatter {
      +note(message str) void
      +info(message str, logger logging.Logger) void
      +success(message str) void
      +error(message str, file_path pathlib.Path, logger logging.Logger) void
      +warning(message str, file_path pathlib.Path, logger logging.Logger) void
      +debug(message str, logger logging.Logger) void
      +diff(diff_text str) void
    }

    class DiffContext {
      <<tuple>>
      +source_toml_path pathlib.Path
      +source_doc TOMLDocument
      +merged_doc TOMLDocument
      +source_val Any
      +merged_val Any
    }

    class Arguments {
      <<tuple>>
      +init bool
      +pre_commit bool
      +save bool
      +output_format OutputFormat
    }

    class CoreModule {
      <<module>>
      +check(args Arguments) int
      +pull(args Arguments) int
      +_check_pre_commit_sync(args Arguments, fmt ResultFormatter) int
      +_print_diff(args Arguments, fmt ResultFormatter, ctx DiffContext) void
    }

    class FormattersModule {
      <<module>>
      +get_formatter(output_format OutputFormat) ResultFormatter
    }

    class CliMain {
      <<function>>
      +main() int
    }

    OutputFormat <.. Arguments : uses
    ResultFormatter <|.. TextFormatter
    ResultFormatter <|.. GithubFormatter
    ResultFormatter <|.. JsonFormatter

    FormattersModule ..> ResultFormatter : returns
    FormattersModule ..> OutputFormat : selects

    CoreModule ..> ResultFormatter : depends_on
    CoreModule ..> DiffContext : uses

    CliMain ..> Arguments : builds
    CliMain ..> CoreModule : invokes
    CliMain ..> FormattersModule : indirectly via CoreModule

File-Level Changes

Change | Details | Files
Introduce pluggable output formatters (text, JSON, GitHub) and wire core check/pull flows through them instead of printing directly.
  • Define OutputFormat enum covering text, json, and github values for CLI selection.
  • Add ResultFormatter protocol plus TextFormatter, JsonFormatter, and GithubFormatter implementations, including GitHub annotation escaping and JSON record structures.
  • Replace direct print/stdout usage in core.check and core.pull with formatter calls for notes, success, errors, warnings, and diffs, including passing file_path/logger context.
  • Add DiffContext helper to encapsulate diff inputs and update _print_diff to emit via formatter.diff rather than sys.stdout.
  • Update pre-commit drift checking helper to use formatter.warning with a best-effort file path instead of printing a bare message.
src/ruff_sync/constants.py
src/ruff_sync/formatters.py
src/ruff_sync/core.py
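
The formatter abstraction described above might look roughly like this. Names mirror the PR's class diagram, but the signatures and the dispatch body are a sketch (only the text branch is shown), not the actual module:

```python
from __future__ import annotations

import enum
import logging
import pathlib
from typing import Protocol


class OutputFormat(str, enum.Enum):
    TEXT = "text"
    JSON = "json"
    GITHUB = "github"


class ResultFormatter(Protocol):
    """Structural interface: any class with these methods satisfies it."""

    def note(self, message: str) -> None: ...
    def success(self, message: str) -> None: ...
    def error(
        self,
        message: str,
        file_path: pathlib.Path | None = None,
        logger: logging.Logger | None = None,
    ) -> None: ...
    def diff(self, diff_text: str) -> None: ...


class TextFormatter:
    """Default human-readable output: plain prints, errors via the logger."""

    def note(self, message: str) -> None:
        print(message)

    def success(self, message: str) -> None:
        print(message)

    def error(self, message, file_path=None, logger=None) -> None:
        (logger or logging.getLogger("ruff_sync")).error(message)

    def diff(self, diff_text: str) -> None:
        print(diff_text)


def get_formatter(output_format: OutputFormat) -> ResultFormatter:
    # Sketch: the real module would also dispatch to JsonFormatter
    # and GithubFormatter for the other enum members.
    return TextFormatter()
```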
Extend CLI to accept an --output-format flag and centralize logging configuration on the ruff_sync logger.
  • Add output_format field to Arguments NamedTuple with OutputFormat default and plumb it from parsed CLI args.
  • Register --output-format argument on the shared subparser, constraining values to OutputFormat choices and defaulting to text.
  • Change CLI logging setup to configure the package root logger (ruff_sync) with a single StreamHandler, avoiding duplicate handlers across main() calls and using getEffectiveLevel in tests.
  • Set logger.propagate based on PYTEST_CURRENT_TEST to allow pytest to capture logs when running tests.
src/ruff_sync/cli.py
tests/test_basic.py
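
Registering an enum-backed flag like `--output-format` typically looks like the sketch below. This assumes a str-valued `OutputFormat` enum with a `__str__` override (as in the class diagram); the real parser wiring in `src/ruff_sync/cli.py` may differ:

```python
import argparse
import enum


class OutputFormat(str, enum.Enum):
    TEXT = "text"
    JSON = "json"
    GITHUB = "github"

    def __str__(self) -> str:
        # Makes argparse's choices/help render as "text, json, github".
        return self.value


parser = argparse.ArgumentParser(prog="ruff-sync")
parser.add_argument(
    "--output-format",
    type=OutputFormat,          # "github" -> OutputFormat.GITHUB
    choices=list(OutputFormat),
    default=OutputFormat.TEXT,
    help="Output style: text (default), json, or github annotations.",
)

args = parser.parse_args(["--output-format", "github"])
```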
Adapt and extend tests to validate new formatter behavior, logging routing, and output streams for both core and CLI paths.
  • Introduce a configure_logging pytest fixture that installs a TestStreamHandler on the ruff_sync logger so capsys captures log output reliably between tests.
  • Update existing tests to request configure_logging and to assert that high-level error messages now appear on stderr while diffs remain on stdout.
  • Add async tests for check using JSON and GitHub output formats, verifying JSON lines, levels, messages, file fields, and GitHub ::error annotations and file hints.
  • Add CLI-level tests for --output-format=json and --output-format=github to assert respective annotation/JSON behavior and success vs error record expectations.
  • Adjust serialization tests to use caplog.at_level(logger='ruff_sync') so they align with the new logger configuration.
tests/conftest.py
tests/test_check.py
tests/test_basic.py
tests/test_scaffold.py
tests/test_serialization.py
Update documentation, agent skill content, and CI workflow to promote and leverage the new output formats.
  • Document --output-format flag and its supported values in usage docs, README, and agent skill configuration/CI references, including mapping github to PR annotations.
  • Adjust CI integration examples to include --output-format github on ruff-sync check steps and explain that it generates inline annotations.
  • Add a CI dogfooding step that runs ruff-sync check against this repo using --output-format github.
  • Clarify contributor guidance to prefer typing.Protocol over abc.ABC and to avoid autouse=True fixtures in tests.
README.md
docs/usage.md
docs/ci-integration.md
.agents/skills/ruff-sync-usage/SKILL.md
.agents/skills/ruff-sync-usage/references/ci-integration.md
.agents/skills/ruff-sync-usage/references/configuration.md
AGENTS.md
.github/workflows/ci.yaml
Adjust project configuration and metadata for the new behavior and CI expectations.
  • Bump package version from 0.1.3.dev1 to 0.1.3.dev2.
  • Enable flake8-print (T20) globally in Ruff config while allowing prints in specific files (formatters module, tasks) and all tests via per-file-ignores.
  • Relax Codecov patch coverage threshold from 95% to 90%.
  • Update lockfile to reflect dependency state after changes (if applicable).
pyproject.toml
codecov.yml
uv.lock

Possibly linked issues

  • #ROADMAP-FEATURES: PR delivers the roadmap’s richer CI output: JSON format, GitHub annotations, and related docs/CI wiring for drift detection.


@codecov

codecov bot commented Mar 29, 2026

Codecov Report

❌ Patch coverage is 98.67550% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.78%. Comparing base (d858900) to head (d5ba989).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
src/ruff_sync/core.py 93.75% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #131      +/-   ##
==========================================
+ Coverage   91.53%   92.78%   +1.25%     
==========================================
  Files           5        6       +1     
  Lines         803      929     +126     
==========================================
+ Hits          735      862     +127     
+ Misses         68       67       -1     

☔ View full report in Codecov by Sentry.

@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The JsonFormatter currently just prints plain strings like the text formatter, which means --output-format json does not actually emit valid/structured JSON; either implement real JSON output or avoid exposing this option until it behaves as advertised.
  • In get_formatter, the parameter name format shadows the built-in format; consider renaming it (e.g. output_format) to avoid confusion and keep type hints/readability clearer.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `JsonFormatter` currently just prints plain strings like the text formatter, which means `--output-format json` does not actually emit valid/structured JSON; either implement real JSON output or avoid exposing this option until it behaves as advertised.
- In `get_formatter`, the parameter name `format` shadows the built-in `format`; consider renaming it (e.g. `output_format`) to avoid confusion and keep type hints/readability clearer.

## Individual Comments

### Comment 1
<location path="src/ruff_sync/formatters.py" line_range="82-34" />
<code_context>
+        print(f"::warning {file_arg}title=Ruff Sync Warning::{clean_msg}")
+
+
+class JsonFormatter:
+    """JSON output formatter (stub)."""
+
+    def info(self, message: str) -> None:
+        """Stub."""
+        print(message)
+
+    def success(self, message: str) -> None:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The JsonFormatter currently emits plain text and doesn't provide structured JSON output.

This stub still prints plain text, so `--output-format json` won’t actually produce machine-readable JSON and may break consumers relying on it. Consider either emitting a minimal structured payload now (e.g. `{level, message, file}`) or disabling/marking this option as experimental until it returns real JSON.
</issue_to_address>


@Kilo59 Kilo59 marked this pull request as ready for review March 29, 2026 18:52
@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The ResultFormatter protocol includes logger and file_path parameters for several methods, but the GithubFormatter and JsonFormatter ignore them—consider either using these parameters consistently (e.g., including file context in JSON/annotations, delegating to loggers where appropriate) or simplifying the protocol so implementations have a clearer and more uniform contract.
  • The GithubFormatter builds ::error/::warning/::debug commands via raw print, but GitHub Actions requires special characters in messages to be percent-encoded (e.g., %, newlines, carriage returns); consider adding escaping to avoid malformed annotations or broken workflow logs when messages contain such characters.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `ResultFormatter` protocol includes `logger` and `file_path` parameters for several methods, but the `GithubFormatter` and `JsonFormatter` ignore them—consider either using these parameters consistently (e.g., including file context in JSON/annotations, delegating to loggers where appropriate) or simplifying the protocol so implementations have a clearer and more uniform contract.
- The `GithubFormatter` builds `::error`/`::warning`/`::debug` commands via raw `print`, but GitHub Actions requires special characters in messages to be percent-encoded (e.g., `%`, newlines, carriage returns); consider adding escaping to avoid malformed annotations or broken workflow logs when messages contain such characters.

## Individual Comments

### Comment 1
<location path="src/ruff_sync/formatters.py" line_range="30-39" />
<code_context>
+        """Print a success message (standard stdout)."""
+        print(message)
+
+    def error(
+        self,
+        message: str,
+        file_path: pathlib.Path | None = None,
+        logger: logging.Logger | None = None,
+    ) -> None:
+        """Print an error message as a GitHub Action error annotation."""
+        # Also print the standard string so it shows up cleanly in logs
+        print(message)
+        # Strip emoji/symbols if any for the raw title, or just use a generic title
+        file_arg = f"file={file_path}," if file_path else ""
+        # The message is technically what we pass after ::
+        # E.g. ::error file=app.js,line=1::Missing semicolon
+        clean_msg = message.replace("❌ ", "")
+        print(f"::error {file_arg}title=Ruff Sync Error::{clean_msg}")
+
+    def warning(
</code_context>
<issue_to_address>
**🚨 issue (security):** Escape GitHub workflow command messages to avoid malformed annotations or injection issues.

`clean_msg` is passed directly into the `::error`/`::warning` commands without escaping. GitHub Actions requires `%`, `\r`, and `\n` in messages to be encoded as `%25`, `%0D`, and `%0A`; otherwise annotations can break and messages may alter the workflow command. Please add a helper (e.g. `escape_github_message`) that performs this escaping before interpolating `clean_msg` into the command string.
</issue_to_address>

### Comment 2
<location path="tests/test_check.py" line_range="51" />
<code_context>

 @pytest.mark.asyncio
-async def test_check_out_of_sync(fs: FakeFilesystem, capsys):
+async def test_check_out_of_sync(fs: FakeFilesystem, capsys, configure_logging):
     # Setup
     local_content = """
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests that exercise non-default `output_format` values (JSON and GitHub) for `check` so the new formatter behavior is validated end-to-end.

Right now `check` tests only cover the default text output and assertions around stderr routing. With the new enum + `get_formatter`, we’re missing coverage that:

- For JSON, `check` emits valid JSON records for notes/success/errors with correct `level`/`message`, and includes `file` when `file_path` is set.
- For GitHub, errors/warnings are rendered as `::error`/`::warning` lines (with `file=` when present) while still printing a readable line.

Consider parametrized tests around `ruff_sync.check` that set `output_format` to JSON / GITHUB, use `capsys` to capture output, and assert JSON validity / `::error` prefixes and relative paths for out-of-sync configs, to ensure the CLI wiring is covered end-to-end.

Suggested implementation:

```python
@pytest.mark.asyncio
async def test_check_out_of_sync(fs: FakeFilesystem, capsys, configure_logging):
    # Setup
    local_content = """
[tool.ruff]
        assert exit_code == 1

        captured = capsys.readouterr()
        assert "is out of sync!" in captured.err
        assert '-target-version = "py310"' in captured.out
        assert '+target-version = "py311"' in captured.out


@pytest.mark.asyncio
async def test_check_out_of_sync_json_format(fs: FakeFilesystem, capsys, configure_logging):
    # Setup mirrors the default-format test but uses JSON output_format so we
    # can validate the formatter end-to-end.
    local_content = """
[tool.ruff]
target-version = "py310"
"""
    remote_content = """
[tool.ruff]
target-version = "py311"
"""

    # Create local and remote configs in the fake filesystem. The exact paths
    # should match whatever ruff_sync.check expects in the existing tests.
    fs.create_file("pyproject.toml", contents=local_content)
    fs.create_file("pyproject.toml.remote", contents=remote_content)

    # Act
    from ruff_sync import check
    from ruff_sync.formatter import OutputFormat

    exit_code = await check(
        local_path="pyproject.toml",
        remote_path="pyproject.toml.remote",
        output_format=OutputFormat.JSON,
    )

    # Assert exit code
    assert exit_code == 1

    captured = capsys.readouterr()

    # JSON formatter should emit valid JSON records (one per line).
    import json

    lines = [line for line in captured.out.splitlines() if line.strip()]
    assert lines, "Expected at least one JSON log line for out-of-sync config"

    records = [json.loads(line) for line in lines]

    # There should be at least one error record describing the out-of-sync state.
    error_records = [r for r in records if r.get("level") == "error"]
    assert error_records, "Expected at least one error record in JSON output"

    error_messages = " ".join(r.get("message", "") for r in error_records)
    assert "is out of sync" in error_messages

    # When file_path is set, JSON output should include a file field. We expect
    # a relative path for the local config.
    files = {r.get("file") for r in records if "file" in r}
    assert "pyproject.toml" in files


@pytest.mark.asyncio
async def test_check_out_of_sync_github_format(fs: FakeFilesystem, capsys, configure_logging):
    # Setup mirrors the default-format test but uses GITHUB output_format so we
    # can validate GitHub annotation rendering.
    local_content = """
[tool.ruff]
target-version = "py310"
"""
    remote_content = """
[tool.ruff]
target-version = "py311"
"""

    fs.create_file("pyproject.toml", contents=local_content)
    fs.create_file("pyproject.toml.remote", contents=remote_content)

    from ruff_sync import check
    from ruff_sync.formatter import OutputFormat

    exit_code = await check(
        local_path="pyproject.toml",
        remote_path="pyproject.toml.remote",
        output_format=OutputFormat.GITHUB,
    )

    assert exit_code == 1

    captured = capsys.readouterr()

    # GitHub formatter should emit ::error / ::warning annotations. For an
    # out-of-sync config we expect at least one ::error line with a relative
    # file path and a readable message.
    error_lines = [
        line for line in captured.out.splitlines() if line.startswith("::error")
    ]
    assert error_lines, "Expected at least one ::error line in GitHub output"

    # Ensure the annotation references the config file as a relative path.
    assert any("file=pyproject.toml" in line for line in error_lines)

    # Also ensure the human-readable message is present after the annotation.
    assert any("is out of sync" in line for line in error_lines)

```

The new tests assume:
1. `ruff_sync.check` accepts `local_path`, `remote_path`, and `output_format` keyword arguments.
2. `OutputFormat` is importable from `ruff_sync.formatter` and has `JSON` and `GITHUB` members.
3. The fake filesystem fixture `fs` works with `fs.create_file(path, contents=...)` and `check` can resolve `"pyproject.toml"` / `"pyproject.toml.remote"` relative to the working directory.

To integrate with your actual codebase, you may need to:
- Adjust the argument names and values passed to `check` to match the existing `test_check_out_of_sync` setup (e.g., different path arguments, repository/branch parameters, or enums).
- Adapt the file creation paths if the real tests use different locations or helper functions.
- If JSON records have different field names than `level` / `message` / `file`, update the assertions accordingly.
- If GitHub formatter uses a slightly different annotation format, tweak the `startswith("::error")` and `file=pyproject.toml` checks to match the actual output.
</issue_to_address>



@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 4 issues, and left some high level feedback:

  • The ResultFormatter protocol methods accept an optional logger, but GithubFormatter and JsonFormatter ignore it while TextFormatter uses it; consider either wiring the logger through consistently or dropping the logger parameter from the protocol to avoid a misleading interface.
  • In the uv publish step shell snippet, consider quoting $exit_code in the test (if [ "$exit_code" -eq 2 ]; then) to avoid edge-case issues in POSIX shells.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `ResultFormatter` protocol methods accept an optional `logger`, but `GithubFormatter` and `JsonFormatter` ignore it while `TextFormatter` uses it; consider either wiring the logger through consistently or dropping the `logger` parameter from the protocol to avoid a misleading interface.
- In the `uv publish` step shell snippet, consider quoting `$exit_code` in the test (`if [ "$exit_code" -eq 2 ]; then`) to avoid edge-case issues in POSIX shells.

## Individual Comments

### Comment 1
<location path="src/ruff_sync/formatters.py" line_range="60-69" />
<code_context>
+
+    def success(self, message: str) -> None:
+        """Print a success message (standard stdout)."""
+        print(message)
+
+    def error(
+        self,
+        message: str,
+        file_path: pathlib.Path | None = None,
+        logger: logging.Logger | None = None,
+    ) -> None:
+        """Print an error message as a GitHub Action error annotation."""
+        # Also print the standard string so it shows up cleanly in logs
+        print(message)
+        # Strip emoji/symbols if any for the raw title, or just use a generic title
+        file_arg = f"file={file_path}," if file_path else ""
+        # The message is technically what we pass after ::
+        # E.g. ::error file=app.js,line=1::Missing semicolon
+        clean_msg = message.replace("❌ ", "")
+        print(f"::error {file_arg}title=Ruff Sync Error::{clean_msg}")
+
+    def warning(
</code_context>
<issue_to_address>
**issue (bug_risk):** GitHub formatter should escape message/file values for workflow commands and avoid spaces in the `::error` syntax.

`clean_msg` is passed directly into the workflow command and may contain characters (`%`, `\r`, `\n`, `:`, etc.) that must be escaped for GitHub Actions to parse annotations correctly. Also, the current format uses a space after `::error`, whereas the documented form is `::error{props}::{msg}` (no space). Please apply the official encoding rules (e.g. `%` → `%25`, `\r` → `%0D`, `\n` → `%0A`) and adjust the format to `::error{file_arg}title=...::{clean_msg}` to ensure GitHub reliably recognizes the error annotations.
</issue_to_address>

### Comment 2
<location path="src/ruff_sync/formatters.py" line_range="38-47" />
<code_context>
+        clean_msg = message.replace("❌ ", "")
+        print(f"::error {file_arg}title=Ruff Sync Error::{clean_msg}")
+
+    def warning(
+        self,
+        message: str,
+        file_path: pathlib.Path | None = None,
+        logger: logging.Logger | None = None,
+    ) -> None:
+        """Print a warning message as a GitHub Action warning annotation."""
+        print(message)
+        file_arg = f"file={file_path}," if file_path else ""
+        clean_msg = message.replace("⚠️ ", "")
+        print(f"::warning {file_arg}title=Ruff Sync Warning::{clean_msg}")
+
+    def debug(self, message: str, logger: logging.Logger | None = None) -> None:
</code_context>
<issue_to_address>
**issue (bug_risk):** Apply the same GitHub Actions escaping and syntax considerations to `::warning` as for `::error`.

`clean_msg` here should follow GitHub workflow command escaping rules, and the `::warning` line should omit the space before the property list (i.e., `::warning{props}::{msg}`) to match `::error`. Otherwise warnings may not appear as annotations or may be truncated.
</issue_to_address>

### Comment 3
<location path="tests/conftest.py" line_range="36-45" />
<code_context>
+            self.handleError(record)
+
+
+@pytest.fixture
+def configure_logging():
+    """Configure ruff_sync logger for tests to ensure capsys can capture log output."""
+    logger = logging.getLogger("ruff_sync")
+    logger.setLevel(logging.DEBUG)
+
+    # Clear existing handlers to avoid duplicates/stale handlers
+    logger.handlers = []
+
+    # Add our dynamic handler
+    handler = TestStreamHandler("stderr")
+    logger.addHandler(handler)
+
+    yield logger
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Reset logger configuration after `configure_logging` to avoid cross-test leakage

This fixture permanently changes the `ruff_sync` logger and doesn’t restore its previous configuration, which can cause inter-test coupling and make logging behavior harder to reason about as the test suite grows. Please snapshot the logger’s original `level`, `handlers`, and `propagate` before modifying them, and restore them in a `try`/`finally` around the `yield` so the fixture is fully isolated.

Suggested implementation:

```python
import ruff_sync


class TestStreamHandler(logging.Handler):

```

```python
            self.handleError(record)



import logging
import sys

import pytest
from typing_extensions import override

import ruff_sync


class TestStreamHandler(logging.Handler):

```

```python
    logger = logging.getLogger("ruff_sync")

    # Snapshot original configuration so we can restore it after the test
    original_level = logger.level
    original_handlers = list(logger.handlers)
    original_propagate = logger.propagate

    # Configure logger for tests
    logger.setLevel(logging.DEBUG)

    # Clear existing handlers to avoid duplicates/stale handlers
    logger.handlers = []

    # Add our dynamic handler
    handler = TestStreamHandler("stderr")
    logger.addHandler(handler)

    try:
        yield logger
    finally:
        # Restore original configuration to avoid cross-test leakage
        logger.handlers = original_handlers
        logger.setLevel(original_level)
        logger.propagate = original_propagate

```
</issue_to_address>

### Comment 4
<location path="tests/test_check.py" line_range="51-54" />
<code_context>

 @pytest.mark.asyncio
-async def test_check_out_of_sync(fs: FakeFilesystem, capsys):
+async def test_check_out_of_sync(fs: FakeFilesystem, capsys, configure_logging):
     # Setup
     local_content = """
</code_context>
<issue_to_address>
**suggestion (testing):** Add coverage for non-text output formats in `check` tests

Now that JSON and GitHub output formats are supported, please extend the `check` tests to cover at least one failing case for each format. For example, parameterize `test_check_out_of_sync` over `OutputFormat.TEXT`, `JSON`, and `GITHUB`, and assert that:
- JSON output is valid JSON with the expected `level`, `message`, and `file` fields.
- GitHub output includes the correct `::error` command and `file=` argument where applicable.

This will validate the new formatter abstraction across all formats, not just the default text output.

Suggested implementation:

```python
@pytest.mark.asyncio
@pytest.mark.parametrize("output_format", [OutputFormat.TEXT, OutputFormat.JSON, OutputFormat.GITHUB])
async def test_check_out_of_sync(fs: FakeFilesystem, capsys, configure_logging, output_format):

```

```python
    # Setup
    local_content = """
[tool.ruff]
"""

    # (rest of existing setup / invocation remains the same, but must ensure the
    # `check` command is invoked with the selected `output_format`)

    # ... existing code that writes local_content to pyproject, calls the check
    # entrypoint/CLI, etc. ...

    assert exit_code == 1

    captured = capsys.readouterr()

    if output_format is OutputFormat.TEXT:
        # Existing text assertions
        assert "is out of sync!" in captured.err
        assert '-target-version = "py310"' in captured.out
        assert '+target-version = "py311"' in captured.out

    elif output_format is OutputFormat.JSON:
        # JSON output should be valid JSON with expected fields
        # (assumes JSON is emitted on stdout; adjust to `captured.err` if needed)
        payload = json.loads(captured.out)
        # Basic shape
        assert isinstance(payload, dict)
        assert payload.get("level") == "error"
        assert "is out of sync" in payload.get("message", "").lower()
        # File field should point to the config that is out of sync
        assert "pyproject.toml" in payload.get("file", "")

    elif output_format is OutputFormat.GITHUB:
        # GitHub output should use the ::error command and include the file
        # (assumes GitHub annotations are written to stderr)
        err = captured.err
        assert "::error" in err
        assert "file=" in err
        assert "pyproject.toml" in err
        assert "is out of sync" in err

```

1. Ensure `OutputFormat` and `json` are imported at the top of `tests/test_check.py`, for example:
   - `from <your_package>.check import OutputFormat` (or whatever module defines `OutputFormat`)
   - `import json`
2. In the body of `test_check_out_of_sync` (not fully shown in the snippet), make sure the code that invokes the `check` command passes the `output_format` parameter correctly (e.g., CLI flag like `--format output_format.value` or a function argument).
3. If your implementation emits JSON to `stderr` instead of `stdout`, update `json.loads(captured.out)` to `json.loads(captured.err)` accordingly.
4. If the JSON schema or GitHub annotation format differs (field names, level value, etc.), adjust the asserted keys/values to match the actual implementation while preserving the intent: validating level/message/file for JSON and `::error` plus `file=` for GitHub.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The configure_logging fixture permanently mutates the ruff_sync logger’s handlers and level without restoring the previous configuration after the test, which could lead to subtle cross-test interactions; consider capturing and restoring the original handlers/level inside the fixture.
  • In GithubFormatter, only the message is escaped for GitHub workflow commands while file and title fields are interpolated raw; if a path or future title contains characters like %, ,, or newlines this may break annotation parsing, so it may be safer to escape or sanitize those fields as well.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `configure_logging` fixture permanently mutates the `ruff_sync` logger’s handlers and level without restoring the previous configuration after the test, which could lead to subtle cross-test interactions; consider capturing and restoring the original handlers/level inside the fixture.
- In `GithubFormatter`, only the message is escaped for GitHub workflow commands while `file` and `title` fields are interpolated raw; if a path or future title contains characters like `%`, `,`, or newlines this may break annotation parsing, so it may be safer to escape or sanitize those fields as well.

## Individual Comments

### Comment 1
<location path=".github/workflows/ci.yaml" line_range="154-155" />
<code_context>

       - name: Publish package
-        run: uv publish
+        run: |
+          uv publish || {
+            exit_code=$?
+            if [ $exit_code -eq 2 ]; then
</code_context>
<issue_to_address>
**issue (bug_risk):** Relying on uv exiting with code 2 for "version exists" may mask other error conditions that also use exit code 2.

This relies on `uv publish` using exit code 2 only for "version already exists". If `uv` starts returning 2 for other failures (e.g., network/auth/config), those would be incorrectly treated as non-fatal and the workflow would pass. Consider using a more specific signal (e.g., matching a known stderr message or a dedicated flag/option if `uv` provides one) to distinguish "already published" from real errors.
</issue_to_address>
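A sketch of the stderr-matching approach the comment suggests; the "already exists" wording is an assumption about uv's output, not a documented contract, so match whatever message actually appears in your publish logs:

```shell
#!/usr/bin/env bash
# publish_tolerant CMD...: run CMD; succeed if it succeeds, or if it fails
# but its output mentions an already-existing version (assumed wording).
publish_tolerant() {
  local output
  if output=$("$@" 2>&1); then
    printf '%s\n' "$output"
    return 0
  fi
  printf '%s\n' "$output" >&2
  if grep -qi "already exists" <<<"$output"; then
    echo "Version already published; treating as success."
    return 0
  fi
  return 1
}

# In the workflow step this would wrap the real command:
# publish_tolerant uv publish
```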

### Comment 2
<location path="tests/test_basic.py" line_range="802" />
<code_context>
     monkeypatch: pytest.MonkeyPatch,
     respx_mock: respx.Router,
     capsys: pytest.CaptureFixture[str],
+    configure_logging: logging.Logger,
 ) -> None:
     """Ensure UpstreamError from a failed fetch surfaces as exit code 1 with logged failures."""
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding an end-to-end CLI test for the new --output-format flag

The current tests exercise argument parsing and logging, but not that `--output-format` is correctly propagated into `Arguments.output_format` and the selected formatter. Please add a small CLI-level test that calls `ruff_sync.main()` with `['check', '--output-format', 'github', ...]` (and one for `json`) and asserts that the output contains `::error` / JSON records instead of plain text, confirming the new flag is wired through end-to-end.

Suggested implementation:

```python
def test_cli_output_format_github(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=github produces GitHub-style annotations."""
    # Arrange: reuse any existing HTTP mocking / logging configuration
    # (configure_logging fixture is invoked by pytest when requested).

    # Act
    with pytest.raises(SystemExit):
        ruff_sync_cli.main(
            [
                "check",
                "--output-format",
                "github",
            ]
        )

    captured = capsys.readouterr()
    output = captured.out + captured.err

    # Assert: GitHub Actions annotation is present and plain-text summary is not.
    assert "::error" in output
    # The GitHub formatter should not emit JSON records.
    assert not output.lstrip().startswith("{")



def test_cli_output_format_json(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=json produces JSON records."""
    # Arrange

    # Act
    with pytest.raises(SystemExit):
        ruff_sync_cli.main(
            [
                "check",
                "--output-format",
                "json",
            ]
        )

    captured = capsys.readouterr()
    output = captured.out + captured.err
    lines = [line for line in output.splitlines() if line.strip()]

    # Assert: no GitHub-style annotations in JSON mode
    assert all("::error" not in line for line in lines)

    # At least one line should be valid JSON (JSONL-style output)
    parsed_any = False
    for line in lines:
        try:
            json_obj = json.loads(line)
        except ValueError:
            continue
        else:
            assert isinstance(json_obj, dict)
            parsed_any = True
            break

    assert parsed_any, "Expected at least one JSON record in JSON output"

    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """Ensure UpstreamError from a failed fetch surfaces as exit code 1 with logged failures."""
    # Successful upstream

```

1. Ensure `json` is imported at the top of `tests/test_basic.py`, e.g. `import json`.
2. These tests assume `ruff_sync_cli.main` exits via `SystemExit` in both success and failure cases (as appears to be the pattern in other CLI tests). If your CLI returns normally instead, drop the `with pytest.raises(SystemExit):` wrappers and call `ruff_sync_cli.main([...])` directly.
3. If your GitHub formatter's output structure differs (e.g. uses `::warning` or a different annotation format), adjust the `"::error"` assertion accordingly.
4. If the JSON formatter emits a single JSON array instead of JSONL records, update the JSON assertions to parse the whole `output` instead of line-by-line.
5. If these tests need additional setup (e.g. seeding mocked upstream responses with `respx_mock` so that there is at least one violation), mirror the existing setup code from your other CLI tests before calling `ruff_sync_cli.main(...)`.
</issue_to_address>


@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review

@Kilo59 Kilo59 added the enhancement New feature or request label Mar 29, 2026

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location path="tests/test_basic.py" line_range="885-894" />
<code_context>
+def test_cli_output_format_json(
</code_context>
<issue_to_address>
**suggestion (testing):** Strengthen JSON CLI test by asserting presence of expected fields like `file` and message content

To better capture the failure mode, consider also asserting that at least one parsed JSON object has `"level": "error"` (or the expected failure severity), and that one record’s `"message"` contains the “out of sync” wording (or equivalent). This will keep the test high‑level while making it more robust against accidental changes in the emitted JSON structure and content.

Suggested implementation:

```python
def test_cli_output_format_json(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=json produces JSON records with expected fields and content."""
    respx_mock.get("https://example.com/pyproject.toml").respond(
        status_code=200,
        text="[tool.ruff]\ntarget-version = 'py311'\n",
    )

```

To fully implement the stronger assertions, update the *body* of `test_cli_output_format_json` (after the mock `respx_mock.get(...)` setup and any CLI invocation you already have) along these lines:

1. **Parse JSON output**  
   After you capture CLI output (similar to the non-JSON test above, e.g. via `captured = capsys.readouterr()`), add:
   ```python
   import json

   captured = capsys.readouterr()
   output = captured.out + captured.err

   # Each line should be a JSON record.
   records = []
   for line in output.splitlines():
       line = line.strip()
       if not line:
           continue
       records.append(json.loads(line))

   assert records, "Expected at least one JSON record in CLI output"
   ```

2. **Assert expected fields like `file` are present**  
   ```python
   # Ensure at least one record has a 'file' field (or whatever the field is named in your JSON schema)
   assert any("file" in record for record in records), "Expected at least one JSON record to contain a 'file' field"
   ```

3. **Assert presence of an error-level record**  
   Adjust the expected level key/name to match your schema (e.g. `"level"` / `"severity"` / `"type"`):
   ```python
   assert any(
       record.get("level") == "error" for record in records
   ), "Expected at least one JSON record with level='error'"
   ```

4. **Assert message content mentions “out of sync” (or equivalent)**  
   Depending on the actual wording your CLI emits for this failure mode, assert on a robust substring:
   ```python
   # Normalize message content and use a substring that’s stable across minor rewordings.
   assert any(
       "out of sync" in str(record.get("message", "")).lower()
       for record in records
   ), "Expected at least one JSON record whose message mentions 'out of sync'"
   ```

5. **Imports**  
   If `json` is not yet imported in `tests/test_basic.py`, add at the top of the file:
   ```python
   import json
   ```

These additions should be integrated where you currently validate that `--output-format=json` produces JSON (replacing or augmenting any previous, weaker assertions), while still matching how your CLI is actually invoked in this test (e.g. via `CliRunner`, `main()`, or a helper).
</issue_to_address>


@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The CLI only wires --output-format through the check subcommand, but pull() also depends on Arguments.output_format and the docs describe --output-format as a general flag; consider adding the option to the pull (and/or top-level) parser or tightening the docs to match the current behavior.
  • The formatter methods taking an optional logger parameter on every call (e.g., error(..., logger=LOGGER)) make the API a bit noisy; you could simplify usage by having each formatter bind to a logger at construction time or consistently use the package-level logger internally.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The CLI only wires `--output-format` through the `check` subcommand, but `pull()` also depends on `Arguments.output_format` and the docs describe `--output-format` as a general flag; consider adding the option to the `pull` (and/or top-level) parser or tightening the docs to match the current behavior.
- The formatter methods taking an optional `logger` parameter on every call (e.g., `error(..., logger=LOGGER)`) make the API a bit noisy; you could simplify usage by having each formatter bind to a logger at construction time or consistently use the package-level logger internally.
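A constructor-bound logger along the lines of the second point could look like this (class and field names are illustrative, not ruff-sync's actual formatter API):

```python
import io
import logging
from dataclasses import dataclass, field


@dataclass
class TextFormatter:
    # Bind the logger once at construction instead of threading
    # logger= through every error()/warning() call.
    logger: logging.Logger = field(
        default_factory=lambda: logging.getLogger("ruff_sync")
    )

    def error(self, message: str, file_path=None) -> None:
        self.logger.error(message)

    def warning(self, message: str, file_path=None) -> None:
        self.logger.warning(message)


# Demo: route a formatter's output through a bound logger.
buf = io.StringIO()
demo_logger = logging.getLogger("ruff_sync.demo")
demo_logger.setLevel(logging.DEBUG)
demo_logger.addHandler(logging.StreamHandler(buf))
demo_logger.propagate = False

fmt = TextFormatter(logger=demo_logger)
fmt.error("config is out of sync")
```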

## Individual Comments

### Comment 1
<location path="src/ruff_sync/cli.py" line_range="295-299" />
<code_context>
         help="Do not show a diff.",
     )
+    check_parser.add_argument(
+        "--output-format",
+        type=OutputFormat,
+        choices=list(OutputFormat),
+        default=OutputFormat.TEXT,
+        help="Format for output. Default: text.",
+    )

</code_context>
<issue_to_address>
**suggestion:** Using Enum instances directly as argparse choices may make the CLI help text less clear.

With `type=OutputFormat` and `choices=list(OutputFormat)`, argparse will likely show choices like `OutputFormat.TEXT` instead of the actual `text/json/github` values users must pass. Since `OutputFormat` subclasses `str`, consider using `type=str` with `choices=[e.value for e in OutputFormat]` and converting to `OutputFormat` after parsing, or implement `__str__` to return `self.value` so the help text shows the real CLI values.
</issue_to_address>
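A minimal sketch of the `__str__` approach, with a stand-in enum assumed to mirror ruff-sync's (since the values are `str`, `type=OutputFormat` converts the raw CLI string via value lookup):

```python
import argparse
import enum


class OutputFormat(str, enum.Enum):
    # Names/values assumed to mirror ruff-sync's enum for illustration.
    TEXT = "text"
    JSON = "json"
    GITHUB = "github"

    def __str__(self) -> str:
        # argparse renders choices via str(), so help now shows
        # "text", "json", "github" instead of "OutputFormat.TEXT".
        return self.value


parser = argparse.ArgumentParser(prog="ruff-sync check")
parser.add_argument(
    "--output-format",
    type=OutputFormat,  # OutputFormat("github") works because values are str
    choices=list(OutputFormat),
    default=OutputFormat.TEXT,
    help="Format for output. Default: text.",
)
args = parser.parse_args(["--output-format", "github"])
```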


@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 4 issues, and left some high level feedback:

  • In core._print_diff you still write directly to sys.stdout, which means JSON and GitHub output formats will produce mixed plain-text diff output alongside structured records; consider routing diff emission through the selected ResultFormatter (or a dedicated hook) so consumers of --output-format=json/github can rely on fully structured output.
  • The GitHub formatter’s error/warning methods currently strip emojis via hard-coded replace("❌ ", "") / replace("⚠️ ", ""); if messages change or use different symbols this will leak icons into annotations—consider separating a machine-readable title/message from the user-facing text, or passing a title parameter instead of inferring it from the message.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `core._print_diff` you still write directly to `sys.stdout`, which means JSON and GitHub output formats will produce mixed plain-text diff output alongside structured records; consider routing diff emission through the selected `ResultFormatter` (or a dedicated hook) so consumers of `--output-format=json/github` can rely on fully structured output.
- The GitHub formatter’s `error`/`warning` methods currently strip emojis via hard-coded `replace("❌ ", "")` / `replace("⚠️ ", "")`; if messages change or use different symbols this will leak icons into annotations—consider separating a machine-readable title/message from the user-facing text, or passing a `title` parameter instead of inferring it from the message.

## Individual Comments

### Comment 1
<location path="src/ruff_sync/core.py" line_range="870-878" />
<code_context>


-def _check_pre_commit_sync(args: Arguments) -> int | None:
+def _check_pre_commit_sync(args: Arguments, fmt: ResultFormatter) -> int | None:
     """Return exit code 2 if pre-commit hook version is out of sync, otherwise None.

     Shared helper to avoid duplicating the pre-commit synchronization logic.
     """
     if getattr(args, "pre_commit", False) and not sync_pre_commit(pathlib.Path.cwd(), dry_run=True):
-        print("⚠️ Pre-commit hook version is out of sync!")
+        fmt.warning("⚠️ Pre-commit hook version is out of sync!", logger=LOGGER)
         return 2
     return None
</code_context>
<issue_to_address>
**suggestion:** Consider propagating file context into the pre-commit warning for non-text formatters.

Since `_check_pre_commit_sync` is now formatter-aware, consider passing a `file_path` into `fmt.warning(...)` when available (e.g. repo root or the pre-commit config path) so GitHub annotations can point to a concrete location, which is particularly helpful in CI runs.

```suggestion
def _check_pre_commit_sync(args: Arguments, fmt: ResultFormatter) -> int | None:
    """Return exit code 2 if pre-commit hook version is out of sync, otherwise None.

    Shared helper to avoid duplicating the pre-commit synchronization logic.
    """
    if getattr(args, "pre_commit", False) and not sync_pre_commit(pathlib.Path.cwd(), dry_run=True):
        repo_root = pathlib.Path.cwd()
        pre_commit_config = repo_root / ".pre-commit-config.yaml"
        file_path = pre_commit_config if pre_commit_config.exists() else repo_root
        fmt.warning(
            "⚠️ Pre-commit hook version is out of sync!",
            logger=LOGGER,
            file_path=file_path,
        )
        return 2
    return None
```
</issue_to_address>

### Comment 2
<location path="src/ruff_sync/core.py" line_range="905-913" />
<code_context>
     _source_toml_path = resolve_target_path(args.to, args.upstream).resolve(strict=False)
     if not _source_toml_path.exists():
-        print(
+        fmt.error(
             f"❌ Configuration file {_source_toml_path} does not exist. "
-            "Run 'ruff-sync pull' to create it."
+            "Run 'ruff-sync pull' to create it.",
+            file_path=_source_toml_path,
+            logger=LOGGER,
         )
         return 1
</code_context>
<issue_to_address>
**suggestion:** Align error metadata in `pull()` with `check()` for better formatter output.

In the `pull()` path where config creation fails, you're calling `fmt.error(...)` without `file_path`, while the equivalent `check()` error includes `file_path=_source_toml_path`. Passing `file_path` here too would keep GitHub/JSON formatter output consistent and enable better inline annotations for the config file.

```suggestion
    _source_toml_path = resolve_target_path(args.to, args.upstream).resolve(strict=False)
    if not _source_toml_path.exists():
        fmt.error(
            f"❌ Configuration file {_source_toml_path} does not exist. "
            "Run 'ruff-sync pull' to create it.",
            file_path=_source_toml_path,
            logger=LOGGER,
        )
        return 1
```
</issue_to_address>

### Comment 3
<location path="src/ruff_sync/formatters.py" line_range="92-101" />
<code_context>
+class GithubFormatter:
</code_context>
<issue_to_address>
**issue (bug_risk):** The emoji-stripping logic in GitHub formatter is brittle and may alter messages unexpectedly.

`error()` and `warning()` use `message.replace("❌ ", "")` / `replace("⚠️ ", "")`, which removes those sequences anywhere in the string and assumes they’re always at the start. If the text legitimately contains these characters or the phrasing changes, messages will be unintentionally altered. Consider only stripping a leading prefix (e.g. via `removeprefix` or `startswith` + slicing), or keeping the original message intact and deriving separate `title`/`body` fields for annotations.
</issue_to_address>
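A prefix-only variant of the stripping, as suggested (`removeprefix` requires Python 3.9+; the prefix tuple is illustrative):

```python
_ICON_PREFIXES = ("❌ ", "⚠️ ")


def strip_leading_icon(message: str) -> str:
    """Drop a known status icon only when it is the leading prefix."""
    for prefix in _ICON_PREFIXES:
        if message.startswith(prefix):
            # Unlike str.replace, this cannot alter icons mid-message.
            return message.removeprefix(prefix)
    return message
```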

### Comment 4
<location path="tests/test_basic.py" line_range="885-894" />
<code_context>
+def test_cli_output_format_json(
</code_context>
<issue_to_address>
**suggestion (testing):** Tighten the JSON CLI test by asserting on message content, not just JSON shape

Consider also asserting on the error message content to ensure the CLI is surfacing the expected error, not just any JSON log line. For example, after parsing the JSON record:

```python
assert "is out of sync" in json_obj.get("message", "")
```

(or similar, based on the exact error path). This makes the test more robust against accidental changes where unrelated JSON logs are emitted but the primary error isn’t properly exposed in JSON form.

Suggested implementation:

```python
def test_cli_output_format_json(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=json produces JSON records."""
    respx_mock.get("https://example.com/pyproject.toml").respond(
        status_code=200,
        text="[tool.ruff]\ntarget-version = 'py311'\n",
    )

    # Invoke the CLI with JSON output enabled. NOTE: respx mocks are
    # in-process only; if your suite shells out like this, stub the network
    # another way or call ruff_sync's main() directly instead.
    result = subprocess.run(
        ["ruff-sync", "check", "--output-format=json", "https://example.com/pyproject.toml"],
        capture_output=True,
        text=True,
        check=False,
    )

    # Depending on how logging is configured, JSON records may be written to
    # stdout or stderr. Prefer stderr when present, fall back to stdout.
    captured = result.stderr.strip() or result.stdout.strip()
    assert captured, "Expected JSON output from CLI"

    # Take the first JSON record and decode it.
    first_line = captured.splitlines()[0]
    json_obj = json.loads(first_line)

    # Tighten the test by checking the surfaced error message content, not
    # just the JSON shape.
    message = json_obj.get("message", "")
    assert "is out of sync" in message

```

To make this compile and align with your existing test helpers:

1. Ensure the module imports `json` and `subprocess` at the top of `tests/test_basic.py`, for example:
   ```python
   import json
   import subprocess
   ```
2. If your project exposes the CLI through a helper (e.g. a `run_cli` function, `click`'s `CliRunner`, or a different entry point than `["ruff", "check", ...]`), adjust the `subprocess.run(...)` call to use the same pattern as neighboring tests, and update the arguments so that the same error path that produces an `"is out of sync"` message is exercised.
3. If your JSON output is guaranteed to be on `stdout` or `stderr` specifically (rather than either), simplify the `captured = ...` line to only read from the correct stream to match the rest of the suite.
</issue_to_address>


@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location path="tests/test_check.py" line_range="50-54" />
<code_context>


 @pytest.mark.asyncio
-async def test_check_out_of_sync(fs: FakeFilesystem, capsys):
+async def test_check_out_of_sync(fs: FakeFilesystem, capsys, configure_logging):
</code_context>
<issue_to_address>
**suggestion (testing):** Add coverage for successful/in-sync paths for JSON and GitHub output formats at the core level

The new `test_check_out_of_sync_json_format` and `test_check_out_of_sync_github_format` cover the error/out-of-sync cases, but there are no tests for the in-sync/success path with JSON or GITHUB output. Please add tests where local and upstream configs are already in sync and assert that:

- Exit code is `0`.
- JSON formatter emits a success/"in sync" record (e.g., `level` = `success`) and no `error` records.
- GitHub formatter emits no `::error` lines, but still outputs the human-friendly success/note message.

This ensures the happy-path behavior of the new formatters is also verified, not just failures.

Suggested implementation:

```python
        assert "is out of sync!" in captured.err
        assert '-target-version = "py310"' in captured.out
        assert '+target-version = "py311"' in captured.out


@pytest.mark.asyncio
async def test_check_in_sync_json_format(
    fs: FakeFilesystem,
    capsys,
    configure_logging,
):
    """Ensure JSON formatter reports success and no errors when configs are in sync."""
    # Arrange: local and upstream configs are identical
    local_content = """
[tool.ruff]
target-version = "py311"
line-length = 88
"""

    fs.create_file(".ruff.toml", contents=local_content)
    fs.create_file("upstream/.ruff.toml", contents=local_content)

    # Act: run check with JSON output format
    # NOTE: adapt the invocation to match the helper used in existing JSON tests
    exit_code = await check(
        local_path=".",
        upstream_path="upstream",
        output_format="json",
    )

    # Assert: success exit code
    assert exit_code == 0

    captured = capsys.readouterr()

    # JSON formatter should emit a single success/"in sync" record and no error records.
    # The exact structure should mirror the implementation of the JSON formatter.
    # This assertion assumes each record is output on its own line as a JSON object.
    json_lines = [line for line in captured.err.splitlines() if line.strip()]
    assert json_lines, "Expected at least one JSON record in stderr"

    import json

    records = [json.loads(line) for line in json_lines]

    # No error-level records
    assert all(r.get("level") != "error" for r in records)

    # At least one success/in-sync record
    assert any(
        r.get("level") in ("success", "info")
        and "in sync" in (r.get("message") or "").lower()
        for r in records
    )


@pytest.mark.asyncio
async def test_check_in_sync_github_format(
    fs: FakeFilesystem,
    capsys,
    configure_logging,
):
    """Ensure GitHub formatter does not emit ::error lines when configs are in sync."""
    # Arrange: local and upstream configs are identical
    local_content = """
[tool.ruff]
target-version = "py311"
line-length = 88
"""

    fs.create_file(".ruff.toml", contents=local_content)
    fs.create_file("upstream/.ruff.toml", contents=local_content)

    # Act: run check with GitHub output format
    # NOTE: adapt the invocation to match the helper used in existing GitHub tests
    exit_code = await check(
        local_path=".",
        upstream_path="upstream",
        output_format="github",
    )

    # Assert: success exit code
    assert exit_code == 0

    captured = capsys.readouterr()

    # No GitHub ::error annotations should be emitted
    assert "::error" not in captured.err

    # Human-friendly success/note message should still be present somewhere
    combined = captured.out + captured.err
    assert "in sync" in combined.lower()
    assert "ruff" in combined.lower()

```

The above implementation assumes:
1. There is an async `check` function taking `local_path`, `upstream_path`, and `output_format` parameters, similar to how the existing `test_check_out_of_sync_*` tests invoke the core check logic.
2. JSON formatter writes its machine-readable records to `stderr` (as suggested by the existing out-of-sync tests). If your implementation uses `stdout` instead, adjust `captured.err` to `captured.out` accordingly.
3. Each JSON record is emitted as a single line; if the format differs (e.g., a single JSON array), update the parsing logic to match.
4. The JSON success record uses a `level` field (e.g., `"success"` or `"info"`) and a `message` field containing an “in sync” phrase. If your schema differs (e.g., uses `status` or `event`), update the assertions to align with the actual structure.
5. The GitHub formatter writes GitHub annotations to `stderr` (where `::error` would appear) and the human-readable note/success message to either `stdout` or `stderr`. If your formatter behaves differently, adjust `captured.out`/`captured.err` usage as needed.
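Assumptions 2 and 3 could be absorbed by a small parsing helper so the test does not hard-code the output stream or record shape. A minimal sketch, assuming JSON records are dicts (the `extract_json_records` name is hypothetical, not part of ruff-sync):

```python
import json
from typing import Any


def extract_json_records(raw: str) -> list[dict[str, Any]]:
    """Parse captured CLI output into a list of JSON record dicts.

    Tolerates both one-record-per-line (JSONL) output and a single
    top-level JSON array, skipping any non-JSON noise lines.
    """
    stripped = raw.strip()
    # Whole-output array case: one JSON document holding all records.
    if stripped.startswith("["):
        try:
            parsed = json.loads(stripped)
            if isinstance(parsed, list):
                return [r for r in parsed if isinstance(r, dict)]
        except json.JSONDecodeError:
            pass
    # JSONL case: parse line by line, ignoring lines that are not JSON objects.
    records: list[dict[str, Any]] = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict):
            records.append(obj)
    return records
```

With such a helper, both new tests could call `extract_json_records(captured.out + captured.err)` and assert on the resulting records without caring which stream the formatter targets.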

If the core invocation pattern in existing tests is different (for example, using a CLI runner, `async_main(["--format", "json"])`, or a helper like `run_check`), replace the `await check(...)` calls with the corresponding pattern used in `test_check_out_of_sync_json_format` and `test_check_out_of_sync_github_format` so these new tests exercise the same code path.
</issue_to_address>

### Comment 2
<location path="tests/test_basic.py" line_range="886-895" />
<code_context>
+def test_cli_output_format_json(
</code_context>
<issue_to_address>
**suggestion (testing):** Add a positive/success-path CLI test for JSON output format

The existing test covers the out-of-sync case (non-zero exit, error-level JSON, and "out of sync" message). Please add a complementary test for the in-sync case that verifies:

- `ruff_sync.main()` returns `0`
- JSON output includes a success / "in sync" record (e.g., `level == "success"`)
- No `error`-level records are emitted

This will validate JSON CLI behavior for both failure and success paths.

Suggested implementation:

```python
import json
import logging

```

```python
def test_cli_output_format_json(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=json produces JSON records."""
    respx_mock.get("https://example.com/pyproject.toml").respond(
        status_code=200,
        text="[tool.ruff]\ntarget-version = 'py311'\n",
    )
    # ... existing out-of-sync assertions are unchanged and omitted here ...


def test_cli_output_format_json_success(
    monkeypatch: pytest.MonkeyPatch,
    respx_mock: respx.Router,
    capsys: pytest.CaptureFixture[str],
    configure_logging: logging.Logger,
) -> None:
    """End-to-end: --output-format=json produces success-level JSON when in sync."""
    # Arrange: mock a configuration that is in sync.
    respx_mock.get("https://example.com/pyproject.toml").respond(
        status_code=200,
        text="[tool.ruff]\ntarget-version = 'py311'\n",
    )

    # Act: run the CLI in JSON output mode.
    exit_code = ruff_sync.main(
        [
            "--config-url",
            "https://example.com/pyproject.toml",
            "--output-format",
            "json",
        ]
    )

    # Assert: exit code indicates success.
    assert exit_code == 0

    # Assert: stdout/stderr contain only JSON records.
    captured = capsys.readouterr()
    output = captured.out + captured.err

    lines = [line for line in output.splitlines() if line.strip()]
    records = [json.loads(line) for line in lines]

    # At least one success-level record is emitted.
    assert any(record.get("level") == "success" for record in records)

    # No error-level records are emitted on the success path.
    assert all(record.get("level") != "error" for record in records)

```

1. Ensure `ruff_sync` is imported in `tests/test_basic.py` (e.g., `import ruff_sync`) if it is not already; mirror whatever import style is used in the existing CLI tests.
2. Align the arguments passed to `ruff_sync.main(...)` in `test_cli_output_format_json_success` with the existing `test_cli_output_format_json` (out-of-sync) test so they exercise the same code path; typically this means copying the argument list from that test and only changing the mocked configuration/content so that it represents the “in sync” scenario.
3. If the existing tests use `monkeypatch.setenv` / `monkeypatch.setattr(sys, "argv", ...)` instead of passing an argument list to `ruff_sync.main`, update the success test to use the same pattern and then call `exit_code = ruff_sync.main()`.
4. If the “in sync” configuration requires different content than `"[tool.ruff]\ntarget-version = 'py311'\n"`, adjust the mocked response body so that the tool considers the local and remote configurations in sync (for example, match whatever configuration is used as the local baseline in other tests).
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@Kilo59
Owner Author

Kilo59 commented Mar 29, 2026

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment

Hey - I've reviewed your changes and they look great!



@Kilo59 Kilo59 changed the title Github Action Reports Implement structured output formats and GitHub Actions integration Mar 29, 2026
@Kilo59 Kilo59 enabled auto-merge (squash) March 29, 2026 20:34
@github-actions

| File | Lines of Code | Unique Operands | Maintainability Index | Cyclomatic Complexity |
| --- | --- | --- | --- | --- |
| tests/conftest.py | 12 -> 64 | 0 -> 0 | 100 -> 82.0156 | 1 -> 8 |
| tests/test_basic.py | 843 -> 999 | 13 -> 14 | 23.49 -> 20.3738 | 50 -> 61 |
| tests/test_scaffold.py | 257 -> 269 | 6 -> 6 | 47.6741 -> 47.232 | 8 -> 8 |
| tests/test_serialization.py | 397 -> 398 | 6 -> 6 | 41.4506 -> 41.4272 | 18 -> 18 |
| tests/test_formatters.py | - -> 154 | - -> 1 | - -> 51.52078908268894 | - -> 27 |
| tests/test_check.py | 374 -> 596 | 3 -> 7 | 46.2002 -> 36.6462 | 8 -> 25 |

@Kilo59 Kilo59 merged commit e3413c5 into main Mar 29, 2026
12 of 13 checks passed
@Kilo59 Kilo59 deleted the reports branch March 29, 2026 20:42

Labels

enhancement New feature or request
