0.0.9 #25

Workflow file for this run

# This workflow runs a locally‑hosted LLM inside the runner to generate critical comments on pull requests
# Based on: https://remarkablemark.org/blog/2025/02/23/run-ollama-large-language-models-on-github-actions/
# The prompt is taken from here: https://www.reddit.com/r/ChatGPTCoding/comments/1f51y8s/a_collection_of_prompts_for_generating_high/
name: A very stupid code review
on:
  pull_request:
    branches:
      - main
permissions:
  contents: read
  pull-requests: write
env:
  MODEL: kirito1/qwen3-coder:4b
  PROMPT: |
    You are a senior developer acting as an attentive code-quality reviewer for this repository. Read the code and pay attention to the following:
    1. Code quality and adherence to best practices
    2. Potential bugs or edge cases
    3. Performance optimizations
    4. Readability and maintainability
    5. Any security concerns
    Only report a problem if you are absolutely sure it exists (check it twice). Explain the problem clearly, but be as short and concise as possible.
    You may use Markdown code blocks to illustrate a point, but do not overuse them; use them only when you cannot convey the idea more briefly.
    If the code looks fine and you see no obvious errors, write exactly: "Everything seems to be fine with the code!" and nothing else.
    Here is the code under review (or rather, only the diff from the PR):
jobs:
  code-review:
    runs-on: ubuntu-latest
    steps:
      # Restore the cached model weights, if any
      - name: Restore the model cache
        uses: actions/cache@v4
        with:
          path: ~/.ollama
          key: ${{ runner.os }}-${{ env.MODEL }}
          restore-keys: |
            ${{ runner.os }}-${{ env.MODEL }}
      # Install ollama
      - name: Setup ollama
        uses: ai-action/setup-ollama@v1
      # Pull the model if it hasn't been cached yet
      - name: Download the model
        run: ollama pull ${{ env.MODEL }}
      # Checkout repository
      - name: Checkout repository
        uses: actions/checkout@v4
      # Workflow-level env vars are exported to the step's shell, so use
      # $PROMPT rather than ${{ env.PROMPT }}: expanding the multi-line
      # prompt with ${{ }} would splice raw newlines into the script and
      # break the shell syntax.
      - name: Show PROMPT
        run: |
          echo "PROMPT: $PROMPT"
      - name: Code review comment
        run: |
          # Build the full prompt from $PROMPT and $MODEL (both exported from
          # the workflow-level env) plus the PR diff. Note that "\n" inside a
          # double-quoted bash string is a literal backslash-n, not a newline,
          # so join the prompt and the diff with printf instead.
          RESPONSE=$(ollama run "$MODEL" "$(printf '%s\n%s' "$PROMPT" "$(gh pr diff "$PR_NUMBER")")")
          gh pr comment "$PR_NUMBER" --body "$RESPONSE"
        env:
          GH_TOKEN: ${{ github.token }}
          PR_NUMBER: ${{ github.event.pull_request.number }}