draft(intake): study upstream ai-skills/generic stack impact #147
Conversation
* implement ai-skills command line switch
* fix: address review comments, remove breaking change for existing projects, add tests
* fix: review comments (several follow-up rounds)
* fix: review comments, add test cases for all the agents
* chore: trigger CI
* chore: trigger CodeQL
* ci: add CodeQL workflow for code scanning
* ci: add actions language to CodeQL workflow, disable default setup

Co-authored-by: dhilipkumars <s.dhilipkumar@gmail.com>
(cherry picked from commit 9402ebd)
* feat: add GitHub Actions workflow for testing and linting Python code
* fix: resolve ruff lint errors in specify_cli
  - Remove extraneous f-string prefixes (F541)
  - Split multi-statement lines (E701, E702)
  - Remove unused variable assignments (F841)
  - Remove ruff format check from CI workflow (format-only PR to follow)
* fix: strip ANSI codes in ai-skills help text test. The Rich/Typer CLI injects ANSI escape codes into option names in --help output, causing plain string matching to fail.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
(cherry picked from commit 24d76b5)
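The ANSI-stripping fix described above can be sketched with a small helper. This is a hypothetical illustration of the technique, not the exact code from the commit; the regex covers the SGR style sequences Rich typically emits.

```python
import re

# CSI "select graphic rendition" sequences such as "\x1b[1;36m" and "\x1b[0m"
# that Rich injects into --help output.
ANSI_SGR_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI style codes so plain substring matching works in tests."""
    return ANSI_SGR_RE.sub("", text)

# A styled option name as Rich might render it:
styled = "\x1b[1;36m--ai-skills\x1b[0m"
assert strip_ansi(styled) == "--ai-skills"
```

With this helper, a test can assert `"--ai-skills" in strip_ansi(result.output)` regardless of terminal styling.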
…ub#1639)

- Add --ai generic option for unsupported AI agents (bring your own agent)
- Require --ai-commands-dir to specify where the agent reads commands from
- Generate Markdown commands with $ARGUMENTS format (compatible with most agents)
- Rebuild CHANGELOG from GitHub releases (last 10 releases)
- Align version to 0.1.3

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
(cherry picked from commit 6150f1e)
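As an illustration of the generic-agent output format, a generated command file might look like the sketch below. The `speckit.specify.md` file name and the `$ARGUMENTS` placeholder come from the stack; the body text is hypothetical.

```markdown
# speckit.specify

Create or update the feature specification from the user's natural-language input.

User input:

$ARGUMENTS
```

An agent configured via --ai-commands-dir would read this file and substitute the user's prompt for `$ARGUMENTS`.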
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3 to 4.

- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@v3...v4)

updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(cherry picked from commit 04fc3fd)
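The imported CodeQL workflow itself is not reproduced in this thread. As a hedged sketch of what a minimal setup against codeql-action v4 with the `actions` language typically looks like (triggers, job name, and matrix are assumptions, not copied from the actual file):

```yaml
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    strategy:
      matrix:
        language: [python, actions]
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v4
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/analyze@v4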
📝 AI PR Summary

Add CodeQL security analysis workflow, support the new "generic" agent in release packaging and update scripts, and introduce comprehensive unit tests for AI agent skills installation.

🤖 Auto-generated · openai/gpt-4.1-mini · GitHub Models free tier · 0 premium requests
Summary of Changes (Gemini Code Assist)

This pull request serves as a preliminary investigation into integrating an upstream AI-skills/generic stack. It selectively imports components to assess their surface area, CI/script interactions, and packaging implications, while consciously avoiding immediate changes to core, conflict-prone files. The aim is to gather data and insights to inform a strategic decision on future adaptation, rather than to deliver a functional, mergeable feature at this stage.
🔍 AI Code Review

Chunk 01

Security Vulnerabilities: no critical vulnerabilities found.

Bugs:
🔴 Critical
🟡 Warning

Best Practice Violations:
🟡 Warning
🔵 Info

Summary Table

No security vulnerabilities found. Address the critical workflow bug and warnings for improved reliability and maintainability.

Chunk 02

Review Summary: the diff consists of test cases for the AI-skills installation feature.

1. Security Vulnerabilities
🔵 Info

2. Bugs
🟡 Warning

3. Best Practice Violations
🟡 Warning
🔵 Info

4. Additional Observations
🔵 Info

Action Items

No critical vulnerabilities found.

Chunk 03

Review Summary

1. Security Vulnerabilities
🔵 Info

2. Bugs
🟡 Warning

3. Best Practice Violations
🟡 Warning
🔵 Info

Summary Table

No critical issues found. Address warnings for improved reliability and maintainability.

🤖 AI Review · openai/gpt-4.1 · 11825 tokens · GitHub Models free tier · 0 premium requests
Code Review
This pull request is a draft to evaluate upstream changes for AI skills and generic agent support. The changes introduce a 'generic' agent type in the agent context update scripts and add a comprehensive test suite for the new AI skills functionality. My review focuses on the new test file, tests/test_ai_skills.py. I've provided suggestions to improve test robustness, code structure, and adherence to Python style guidelines. The changes in the shell scripts are consistent and look good.
```python
def _fake_extract(self, agent, project_path, **_kwargs):
    """Simulate template extraction: create agent commands dir."""
    agent_cfg = AGENT_CONFIG.get(agent, {})
    agent_folder = agent_cfg.get("folder", "")
    if agent_folder:
        cmds_dir = project_path / agent_folder.rstrip("/") / "commands"
        cmds_dir.mkdir(parents=True, exist_ok=True)
        (cmds_dir / "speckit.specify.md").write_text("# spec")
```
This helper method does not use the instance (self). It can be converted to a static method to make it clear that it doesn't depend on the state of the test class instance. This improves code clarity and structure.
Suggested change:

```python
@staticmethod
def _fake_extract(agent, project_path, **_kwargs):
    """Simulate template extraction: create agent commands dir."""
    agent_cfg = AGENT_CONFIG.get(agent, {})
    agent_folder = agent_cfg.get("folder", "")
    if agent_folder:
        cmds_dir = project_path / agent_folder.rstrip("/") / "commands"
        cmds_dir.mkdir(parents=True, exist_ok=True)
        (cmds_dir / "speckit.specify.md").write_text("# spec")
```
```python
def test_new_project_commands_removed_after_skills_succeed(self, tmp_path):
    """For new projects, commands should be removed when skills succeed."""
    from typer.testing import CliRunner
```
According to PEP 8, imports should be at the top of the file. Please move from typer.testing import CliRunner to the top-level imports section of this file. This avoids repeated imports within test methods and improves code organization.
This comment applies to all other local imports of CliRunner in this file.
References
- Imports should usually be on separate lines, at the top of the file, just after any module comments and docstrings, and before module globals and constants. They should be grouped in the following order: standard library imports, third-party imports, local application/library specific imports. Each group should be separated by a blank line. (link)
```python
     patch("specify_cli.install_ai_skills", return_value=True) as mock_skills, \
     patch("specify_cli.is_git_repo", return_value=False), \
     patch("specify_cli.shutil.which", return_value="/usr/bin/git"):
    result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"])
```
The result of runner.invoke is captured but not checked. It's a good practice to assert that the command executed successfully by checking the exit code. This makes the test more robust by ensuring that the CLI command itself didn't fail for an unexpected reason before the main assertions are checked.
This also applies to the runner.invoke calls in test_commands_preserved_when_skills_fail and test_here_mode_commands_preserved.
Suggested change:

```python
result = runner.invoke(app, ["init", str(target), "--ai", "claude", "--ai-skills", "--script", "sh", "--no-git"])
assert result.exit_code == 0
```
Purpose (Study-Only)
This is an evaluation Draft PR to inspect the upstream AI-skills/generic stack in isolation before deciding whether to adapt it to our fork architecture.
Target Upstream Stack
- 9402ebd - Feat/ai skills (github/spec-kit#1632)
- 24d76b5 - Add pytest and Python linting (ruff) to CI (github/spec-kit#1637)
- 6150f1e - Generic agent support with customizable command directories (github/spec-kit#1639)
- 04fc3fd - Bump github/codeql-action v3 -> v4 (github/spec-kit#1635)

What Was Imported in This Draft
- .github/workflows/codeql.yml
- tests/test_ai_skills.py and follow-up test deltas from the stack

Explicitly Deferred (Conflict-Heavy Core)
For this study pass, conflict-heavy files were intentionally kept on fork baseline to avoid unintended replacement of current architecture:
- src/specify_cli/__init__.py
- README.md
- CHANGELOG.md
- pyproject.toml
- AGENTS.md

Current Expected State
tests/test_ai_skills.py currently fails to collect because core implementation symbols are not yet ported (e.g., _get_skills_dir).

Why This Is Useful
It allows us to:
Credit / Provenance
Authorship is preserved via cherry-pick -x on all four upstream commits.
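The provenance mechanics of `git cherry-pick -x` can be reproduced in a throwaway repository. All file names, branch names, and commit messages below are illustrative; the script only assumes a working `git` on PATH.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "base" > file.txt
git add file.txt
git commit -qm "base"
git checkout -qb feature
echo "change" >> file.txt
git commit -qam "feat: demo change"
sha=$(git rev-parse HEAD)
git checkout -q -           # back to the original branch
git cherry-pick -x "$sha"   # -x appends "(cherry picked from commit <sha>)"
git log -1 --format=%B      # the provenance line is now part of the message
```

This is why the PR description can cite the four upstream SHAs: `-x` bakes the original commit hash into each cherry-picked commit message.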