[Feat] Implement the code_generate tool #12
Conversation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR adds OpenRouter-based LLM client caching, SSE event streaming infrastructure, and a code_generate tool that integrates them.

Changes: added the LLM client and SSE streaming infrastructure
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Tool as code_generate Tool
    participant Emitter as SSEEmitter
    participant LLMClient as get_llm(flash)
    participant Queue as asyncio.Queue
    Caller->>Tool: code_generate(problem, approach)
    alt SSE emitter available
        Tool->>Emitter: emit(tool_start, {name, input})
        Emitter->>Queue: put(SSEEvent)
    end
    Tool->>LLMClient: request ChatOpenRouter instance
    LLMClient->>LLMClient: check/create cached instance
    LLMClient->>Tool: return ChatOpenRouter
    Tool->>Tool: invoke LLM with system prompt
    alt SSE emitter available
        Tool->>Emitter: emit(tool_result, {output})
        Emitter->>Queue: put(SSEEvent)
    end
    Tool->>Caller: return code string
```
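The "check/create cached instance" step in the diagram can be sketched as a per-profile client cache. This is a minimal sketch under assumptions: `ChatOpenRouter` here is a stand-in class, and using `functools.lru_cache` is one plausible caching strategy, not necessarily what `src/proovy_agent/common/llm/client.py` actually does.

```python
from functools import lru_cache


class ChatOpenRouter:
    """Stand-in for the real OpenRouter-backed chat client (an assumption)."""

    def __init__(self, model: str) -> None:
        self.model = model


@lru_cache(maxsize=None)
def get_llm(profile: str) -> ChatOpenRouter:
    # Same profile -> same cached instance; a new profile creates one lazily.
    return ChatOpenRouter(model=profile)
```

With this shape, repeated `get_llm("flash")` calls inside the tool reuse a single client instead of constructing a new one per invocation.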
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
@coderabbitai review, please
✅ Actions performed: Review triggered.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/proovy_agent/graph/tools/code_generate.py (1)
27-44: ⚡ Quick win: emit a terminal SSE event when the LLM call fails.
Currently, if an exception is raised, only `tool_start` is emitted and the result/error event can be missed. Emitting an `error` event from a try/except keeps the UI state stable.

Suggested diff
```diff
 emitter = current_emitter.get()
 if emitter:
     await emitter.emit(
         "tool_start",
         {"name": "code_generate", "label": "🔧 검증 코드 생성 중..."},
     )
-llm = get_llm("flash")
-messages = [
-    {"role": "system", "content": _SYSTEM_PROMPT},
-    {"role": "user", "content": f"문제: {problem}\n\n풀이 방향: {approach}"},
-]
-response = await llm.ainvoke(messages)
-code = response.content.strip()
+try:
+    llm = get_llm("flash")
+    messages = [
+        {"role": "system", "content": _SYSTEM_PROMPT},
+        {"role": "user", "content": f"문제: {problem}\n\n풀이 방향: {approach}"},
+    ]
+    response = await llm.ainvoke(messages)
+    code = response.content.strip()
+except Exception as exc:
+    if emitter:
+        await emitter.emit(
+            "error",
+            {"name": "code_generate", "message": str(exc)},
+        )
+    raise
 if emitter:
     await emitter.emit("tool_result", {"name": "code_generate", "output": code})
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/proovy_agent/graph/tools/code_generate.py` around lines 27 - 44, The code_generate function emits "tool_start" but doesn't emit a follow-up event if the LLM call fails; wrap the LLM invocation and result emission in a try/except and on exception call emitter.emit with an error event (e.g., "tool_error" or "tool_result" with an error payload) including the exception message and the tool name "code_generate", then re-raise or return an appropriate value so the UI gets a terminal event; ensure you reference current_emitter.get(), emitter.emit("tool_start", ...), llm.ainvoke(...), and emitter.emit("tool_result", ...) and add emitter.emit("tool_error", {"name":"code_generate", "error": str(e)}) in the except block.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/proovy_agent/common/sse/emitter.py`:
- Around line 22-28: Add the missing return type annotation to the async
generator method stream: change its signature in the Emitter class (async def
stream(self)) to specify AsyncIterator[dict[str, str]] so the function is
declared as returning an AsyncIterator of dict[str, str]; ensure you import
AsyncIterator from typing if not already imported.
In `@src/proovy_agent/graph/tools/code_generate.py`:
- Around line 38-40: The code assumes response.content is always a string;
change the extraction to use the LangChain-standard content_blocks on the
response from llm.ainvoke so it handles string, list, or dict formats: fetch
response.content_blocks, join or pick the primary textual block and then strip
it (with a safe fallback if content_blocks is missing or empty) instead of
calling response.content.strip(); update the logic around the response variable
returned by llm.ainvoke to use content_blocks and a fallback text to avoid
AttributeError.
---
Nitpick comments:
In `@src/proovy_agent/graph/tools/code_generate.py`:
- Around line 27-44: The code_generate function emits "tool_start" but doesn't
emit a follow-up event if the LLM call fails; wrap the LLM invocation and result
emission in a try/except and on exception call emitter.emit with an error event
(e.g., "tool_error" or "tool_result" with an error payload) including the
exception message and the tool name "code_generate", then re-raise or return an
appropriate value so the UI gets a terminal event; ensure you reference
current_emitter.get(), emitter.emit("tool_start", ...), llm.ainvoke(...), and
emitter.emit("tool_result", ...) and add emitter.emit("tool_error",
{"name":"code_generate", "error": str(e)}) in the except block.
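The safe content extraction the inline comment asks for can be sketched as below. This assumes the LangChain convention that message content may be a plain string or a list of blocks (strings or `{"type": "text", ...}` dicts); the helper name `extract_text` is hypothetical and not part of the PR.

```python
def extract_text(content) -> str:
    """Best-effort text extraction from an LLM message's content field,
    which may be a str, a list of blocks, a dict, or something else."""
    if isinstance(content, str):
        return content.strip()
    if isinstance(content, list):
        parts = []
        for block in content:
            if isinstance(block, str):
                parts.append(block)
            elif isinstance(block, dict) and block.get("type") == "text":
                parts.append(block.get("text", ""))
        return "".join(parts).strip()
    if isinstance(content, dict):
        return str(content.get("text", "")).strip()
    # Last resort: stringify rather than raising AttributeError.
    return str(content).strip()
```

Replacing `response.content.strip()` with `extract_text(response.content)` avoids the `AttributeError` the review warns about when the provider returns list-shaped content.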
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1228a28e-5f9e-42ef-ae14-f628397fa846
📒 Files selected for processing (5)
- src/proovy_agent/common/llm/client.py
- src/proovy_agent/common/sse/context.py
- src/proovy_agent/common/sse/emitter.py
- src/proovy_agent/common/sse/events.py
- src/proovy_agent/graph/tools/code_generate.py
Force-pushed 6841430 to 64cb051
Extract text safely with an isinstance branch, since calling .strip() when response.content is a list raises AttributeError Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📌 Related issue
🏷️ PR type
📝 Work done
- Implemented `src/proovy_agent/graph/tools/code_generate.py`
- Implemented `src/proovy_agent/common/sse/context.py` — passes the SSE emitter via a `ContextVar`
- The tool is defined with `@tool`; it takes `problem` and `approach` as inputs and returns executable Python code
- Generates code using `get_llm("flash")`
- Emits SSE `tool_start`/`tool_result` events inside the tool via the `current_emitter` `ContextVar`

📸 Screenshots
✅ Checklist
📎 Additional notes
- Unit tests were not written due to the external LLM (`ChatOpenRouter`) dependency
- The `current_emitter` `ContextVar` is `set()` by the CoreSolver node before execution and referenced via `get()` inside the tool

Summary by CodeRabbit
Release notes
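The `ContextVar`-based emitter hand-off described in the notes (the node `set()`s, the tool `get()`s) can be sketched as follows. The names mirror the PR (`SSEEmitter`, `current_emitter`, `emit`, `stream`), but the bodies are assumptions, including the `close()` sentinel and the `AsyncIterator` return annotation the review requested.

```python
import asyncio
from contextvars import ContextVar
from typing import AsyncIterator, Optional


class SSEEmitter:
    """Queue-backed emitter: emit() enqueues events, stream() drains them."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    async def emit(self, event: str, data: dict) -> None:
        await self._queue.put({"event": event, "data": data})

    async def close(self) -> None:
        await self._queue.put(None)  # sentinel that terminates stream()

    async def stream(self) -> AsyncIterator[dict]:
        # Annotated return type, per the review comment on emitter.py.
        while True:
            item = await self._queue.get()
            if item is None:
                break
            yield item


# set() by the calling node before the tool runs, get() inside the tool.
current_emitter: ContextVar[Optional[SSEEmitter]] = ContextVar(
    "current_emitter", default=None
)


async def tool_body() -> None:
    """Hypothetical tool body: emits only when an emitter is in context."""
    emitter = current_emitter.get()
    if emitter:
        await emitter.emit("tool_start", {"name": "code_generate"})
```

Because `ContextVar` values propagate to tasks started in the same context, the tool sees the emitter without it being threaded through every function signature.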