Agentic approach #5
Conversation

vineeshah commented on Apr 8, 2026
- Replaces the OpenAI diff-only review with a full agentic approach running inside an isolated E2B microVM.
- The agent clones the PR's repo (shallow, branch-only), explores the codebase with a bash tool (grep, git, cat, linters, etc.), and posts up to 3 inline or overall comments (we can iterate on the prompt and comment quality).
- Uses claude-haiku-4-5 as its main model (we can experiment and change this later).
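The shallow, branch-only clone described above can be sketched as follows. This is a minimal sketch, not the PR's actual implementation; the function name and arguments are placeholders:

```python
import subprocess

def build_clone_cmd(repo_url: str, branch: str, dest: str) -> list[str]:
    """Build a shallow, single-branch git clone command so the sandbox
    only ever fetches the PR's branch, not the full repo history."""
    return [
        "git", "clone",
        "--depth", "1",      # shallow: latest commit only
        "--single-branch",   # don't fetch other branches
        "--branch", branch,
        repo_url, dest,
    ]

# The agent would then run this inside the sandbox, e.g.:
# subprocess.run(build_clone_cmd(url, branch, "/workspace/repo"), check=True)
```

Passing the command as an argv list (rather than a shell string) also keeps the clone step out of shell-injection territory.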
[CRITICAL]

[CRITICAL]

```python
print(f"[E2B] Starting review for PR #{pr_number} in {repo}")
```
```python
# Acquire semaphore to limit concurrent sandboxes
acquired = False
```
[HIGH] Semaphore resource leak: acquired is initialized to False but only released if it remains False. If acquire() returns True or raises an exception, the semaphore may never be released due to the early return statements on lines 42 and 46. The semaphore should only be released if acquire() succeeded.
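A minimal sketch of the acquire/release pattern this comment is asking for. The semaphore name, limit, and timeout are placeholders, not taken from the PR:

```python
import threading

# Cap concurrent sandboxes; BoundedSemaphore raises if over-released.
SANDBOX_SEMAPHORE = threading.BoundedSemaphore(2)

def run_review(pr_number: int) -> str:
    acquired = SANDBOX_SEMAPHORE.acquire(timeout=30)
    if not acquired:
        # Never acquired, so there is nothing to release.
        return "busy"
    try:
        # ... spawn sandbox, run the agent, post comments ...
        return "done"
    finally:
        # Runs on every exit path (return or exception), and only
        # reaches here when acquire() succeeded: released exactly once.
        SANDBOX_SEMAPHORE.release()
```

The key points are that `release()` lives in a `finally` block guarded by a successful `acquire()`, so early returns and exceptions can no longer leak the slot.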
```python
safe_env = {
    "PATH": os.environ.get("PATH", "/usr/bin:/bin:/usr/local/bin"),
    "HOME": os.environ.get("HOME", "/root"),
    "LANG": os.environ.get("LANG", "en_US.UTF-8"),
```
[HIGH] Security concern: Passing GITHUB_TOKEN via environment variable to subprocess with shell=True (line 18) in run_bash() is a vulnerability vector. Although safe_env is used to strip most env vars, the function signature doesn't prevent callers from modifying env. Additionally, running arbitrary git/bash commands with token access could enable exfiltration despite the stated intent to prevent prompt injection.
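One way to address both points is to drop `shell=True` and build the environment inside the runner itself, so callers cannot re-inject secrets. A minimal sketch, assuming an allow-list approach (the names `ALLOWED_ENV` and `run_bash_safe` are hypothetical, not from the PR):

```python
import os
import subprocess

# Only these variables ever reach the child process.
ALLOWED_ENV = ("PATH", "HOME", "LANG")

def run_bash_safe(argv: list[str]) -> subprocess.CompletedProcess:
    """Run a command as an argv list (no shell=True) with an allow-listed
    environment, so GITHUB_TOKEN and other secrets never reach the child."""
    safe_env = {k: os.environ[k] for k in ALLOWED_ENV if k in os.environ}
    return subprocess.run(argv, env=safe_env, capture_output=True, text=True)
```

Because `safe_env` is constructed locally on every call and there is no `env` parameter in the signature, a caller cannot widen the environment; git credentials would instead be supplied per-command (for example via a credential helper) rather than ambiently.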