
Commit 110d036

fix(docs): improve classify_intent reliability and draft logging (#1721)
- Switch the model from `gpt-4` to `gpt-5-nano`, because `gpt-4` could not reliably return valid JSON, causing `classify_intent` to fail.
- Update the logging so the example no longer prints `draft_response` before it exists, which previously raised an error in: `print(f"Draft ready for review: {result['draft_response'][:100]}...")`

## Overview

This PR updates the example to use `gpt-5-nano` instead of `gpt-4` for the `classify_intent` step and fixes a logging statement that could raise an exception when `draft_response` is not yet available.

## Type of change

**Type:** Update existing documentation

## Related issues/PRs

<!-- Link to related issues, feature PRs, or discussions (if applicable) -->
- GitHub issue:
- Feature PR:

<!-- For LangChain employees, if applicable: -->
- Linear issue:
- Slack thread:

## Checklist

<!-- Put an 'x' in all boxes that apply -->
- [x] I have read the [contributing guidelines](README.md)
- [ ] I have tested my changes locally using `docs dev`
- [x] All code examples have been tested and work correctly
- [ ] I have used **root relative** paths for internal links
- [ ] I have updated navigation in `src/docs.json` if needed

(Internal team members only / optional): Create a preview deployment as necessary using the [Create Preview Branch workflow](https://github.com/langchain-ai/docs/actions/workflows/create-preview-branch.yml)

## Additional notes

- The switch to `gpt-5-nano` ensures structured JSON output for the `classify_intent` node.
- The updated print statement prevents runtime errors when `draft_response` is missing.
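To illustrate the first note, here is a minimal sketch of a JSON-returning `classify_intent` node under the updated model. The state key `email_content`, the intent labels, and the prompt wording are illustrative assumptions and are not taken from `thinking-in-langgraph.mdx`:

```python
# Illustrative sketch only: field names, labels, and prompt text are assumptions,
# not the exact example code in thinking-in-langgraph.mdx.
import json

from langchain_openai import ChatOpenAI
from langchain.messages import HumanMessage

llm = ChatOpenAI(model="gpt-5-nano")

def classify_intent(state: dict) -> dict:
    """Ask the model to label the email and parse its JSON reply."""
    prompt = (
        "Classify the following email into one of: question, complaint, other. "
        'Reply with JSON only, e.g. {"intent": "question"}.\n\n'
        f"{state['email_content']}"
    )
    response = llm.invoke([HumanMessage(content=prompt)])
    # json.loads raises if the model replies with anything other than valid JSON,
    # which is the failure mode the model switch is meant to avoid.
    intent = json.loads(response.content)["intent"]
    return {"intent": intent}
```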
1 parent 87ef38a commit 110d036

File tree

1 file changed: +2 −2 lines changed


src/oss/langgraph/thinking-in-langgraph.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -482,7 +482,7 @@ We'll implement each node as a simple function. Remember: nodes take state, do w
 from langchain_openai import ChatOpenAI
 from langchain.messages import HumanMessage

-llm = ChatOpenAI(model="gpt-4")
+llm = ChatOpenAI(model="gpt-5-nano")

 def read_email(state: EmailAgentState) -> dict:
     """Extract and parse email content"""
@@ -974,7 +974,7 @@ initial_state = {
 config = {"configurable": {"thread_id": "customer_123"}}
 result = app.invoke(initial_state, config)
 # The graph will pause at human_review
-print(f"Draft ready for review: {result['draft_response'][:100]}...")
+print(f"human review interrupt:{result['__interrupt__']}")

 # When ready, provide human input to resume
 from langgraph.types import Command
```
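The changed print statement relies on the graph exposing pending interrupts under the `__interrupt__` key rather than on a `draft_response` state field that may not exist yet while the graph is paused. As a rough sketch of the surrounding pause/resume flow (`app` and `initial_state` come from the doc's example; the resume payload `{"approved": True}` is an assumption for illustration):

```python
# Sketch of the pause/resume flow around the changed line; the resume payload
# shape ({"approved": True}) is an illustrative assumption.
from langgraph.types import Command

config = {"configurable": {"thread_id": "customer_123"}}
result = app.invoke(initial_state, config)

# While paused at human_review, pending interrupts are surfaced here
# instead of in state keys such as draft_response.
print(f"human review interrupt:{result['__interrupt__']}")

# Resume the same thread by passing the human's decision back to the graph.
result = app.invoke(Command(resume={"approved": True}), config)
```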
