response = await litellm.acompletion(
    model=model,
    messages=litellm_messages,
    tools=litellm_tools,
    api_key=config.LLM_API_KEY,
    api_base=config.LLM_BASE_URL,
    timeout=config.DEFAULT_TIMEOUT,
)
Hi, I noticed that all models are tested without any reasoning_effort. Are all model results on the leaderboard produced with reasoning_effort disabled? Does MCP-Atlas deliberately evaluate tool-calling ability without allowing reasoning before the tool call?
mcp-atlas/services/mcp_eval/mcp_completion/llm.py
Lines 49 to 56 in 867003a
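For context, LiteLLM does accept a `reasoning_effort` parameter for reasoning-capable models, so the call site above could forward it when configured. A minimal sketch of what that might look like (the helper name, the config attribute, and the effort values are illustrative assumptions, not part of the repo):

```python
# Hypothetical sketch: conditionally forwarding reasoning_effort to the
# completion call. Values follow the common OpenAI-style convention of
# "low" | "medium" | "high"; None means the parameter is omitted entirely.
def build_completion_kwargs(model, messages, reasoning_effort=None):
    kwargs = {"model": model, "messages": messages}
    if reasoning_effort is not None:
        # Only include the key when set, so models that do not
        # support it are called exactly as they are today.
        kwargs["reasoning_effort"] = reasoning_effort
    return kwargs

kwargs = build_completion_kwargs(
    "o3-mini",
    [{"role": "user", "content": "hi"}],
    reasoning_effort="medium",
)
# These kwargs would then be splatted into litellm.acompletion(**kwargs).
```

If the benchmark intends to measure tool calling without pre-call reasoning, leaving the parameter out (the current behavior) is the simplest way to guarantee that; if not, a config-driven switch like the one sketched above would let both modes be reported on the leaderboard.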