lang instruction to beginning of prompt #233
📝 Walkthrough

Reorders prompt assembly in two call-processing services to place language instructions before base prompts; adds support in the LLM factory for OpenAI-compatible endpoints by routing any non-default base_url to a new _create_openai_compatible_llm builder.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LLMFactory as LLMServiceFactory
    participant OpenAI as BaseOpenAILLMService
    Client->>LLMFactory: request LLM (api_key, model, params, base_url)
    alt base_url == "https://api.openai.com/v1" or not provided
        LLMFactory->>OpenAI: _create_openai_llm(api_key, model, params)
    else non-default base_url
        LLMFactory->>OpenAI: _create_openai_compatible_llm(api_key, model, params, base_url)
    end
    OpenAI-->>LLMFactory: configured service instance
    LLMFactory-->>Client: return LLM service
```
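The branch in the diagram reduces to a single decision on base_url. A minimal sketch of that routing (the function name `pick_builder` is illustrative; the real factory methods are `_create_openai_llm` and `_create_openai_compatible_llm` as named above):

```python
DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'

def pick_builder(base_url):
    # Default or absent base_url -> stock OpenAI builder;
    # anything else -> the OpenAI-compatible builder.
    if not base_url or base_url == DEFAULT_OPENAI_BASE_URL:
        return '_create_openai_llm'
    return '_create_openai_compatible_llm'
```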
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
🧹 Nitpick comments (1)
wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py (1)
152-153: Update the comment to match the new prepend behavior.

The code now prepends language_instruction, so the inline comment should say "Prepend" (not "Append") to avoid future confusion.

Suggested edit:

```diff
- # Append new language instruction to clean base prompt
+ # Prepend language instruction to clean base prompt
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py` around lines 152-153: the inline comment above the updated_content assignment is incorrect. The code now prepends language_instruction to base_prompt (via `updated_content = f'{language_instruction}\n\n{base_prompt}'`), so update the comment to say "Prepend new language instruction to clean base prompt" (or similar) to reflect the current behavior.
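For illustration, the prepend behavior the comment should describe. Only the f-string pattern comes from the code under review; the sample strings are made up:

```python
language_instruction = 'Always respond in French.'  # illustrative value
base_prompt = 'You are a helpful call-processing assistant.'  # illustrative value

# Prepend: the language instruction now leads the assembled prompt.
updated_content = f'{language_instruction}\n\n{base_prompt}'
```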
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py
- wavefront/server/apps/call_processing/call_processing/services/pipecat_service.py
Actionable comments posted: 2
🧹 Nitpick comments (1)
wavefront/server/apps/call_processing/call_processing/services/llm_service.py (1)
106-126: Extract the shared OpenAI-family parameter mapping into one helper.

This block duplicates logic from _create_openai_llm, and the two copies have already drifted (a service_tier mismatch). A shared mapper will prevent future parity bugs.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@wavefront/server/apps/call_processing/call_processing/services/llm_service.py` around lines 106-126: the OpenAI-compatible parameter mapping in _create_openai_compatible_llm duplicates logic from _create_openai_llm and has already drifted (e.g., the service_tier mismatch). Extract a single helper (e.g., _map_openai_parameters) that accepts parameters: Dict[str, Any] and returns a dict suitable for BaseOpenAILLMService.InputParams, then replace the inline mapping in both _create_openai_compatible_llm and _create_openai_llm with a call to that helper. Ensure the helper includes all shared keys (temperature, max_completion_tokens, top_p, frequency_penalty, presence_penalty, seed) and that any function-specific fields like service_tier are set consistently where needed.
```python
base_url = llm_config.get('base_url')
if not base_url or base_url == 'https://api.openai.com/v1':
    return LLMServiceFactory._create_openai_llm(api_key, model, parameters)
else:
    return LLMServiceFactory._create_openai_compatible_llm(
        api_key, model, parameters, base_url
    )
```
Normalize base_url before the default-endpoint comparison.

On line 54, a raw string comparison can misroute equivalent default URLs (e.g., with a trailing slash or whitespace) into the OpenAI-compatible path.

Suggested fix:

```diff
 if llm_type == 'openai':
-    base_url = llm_config.get('base_url')
-    if not base_url or base_url == 'https://api.openai.com/v1':
+    raw_base_url = llm_config.get('base_url')
+    base_url = raw_base_url.strip().rstrip('/') if isinstance(raw_base_url, str) else raw_base_url
+    if not base_url or base_url == 'https://api.openai.com/v1':
         return LLMServiceFactory._create_openai_llm(api_key, model, parameters)
     else:
         return LLMServiceFactory._create_openai_compatible_llm(
             api_key, model, parameters, base_url
         )
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
raw_base_url = llm_config.get('base_url')
base_url = raw_base_url.strip().rstrip('/') if isinstance(raw_base_url, str) else raw_base_url
if not base_url or base_url == 'https://api.openai.com/v1':
    return LLMServiceFactory._create_openai_llm(api_key, model, parameters)
else:
    return LLMServiceFactory._create_openai_compatible_llm(
        api_key, model, parameters, base_url
    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@wavefront/server/apps/call_processing/call_processing/services/llm_service.py`
around lines 53 - 59, Normalize the retrieved base_url from llm_config before
comparing to the default OpenAI endpoint: get the raw value (e.g., base_url_raw
= llm_config.get('base_url')), treat None as empty string, strip surrounding
whitespace and trailing slashes (and optionally lower-case) into a normalized
base_url, then compare normalized base_url to the normalized default
"https://api.openai.com/v1" and call LLMServiceFactory._create_openai_llm when
it matches or is empty; otherwise call
LLMServiceFactory._create_openai_compatible_llm with the original/normalized
base_url.
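The normalization described above can be sketched as small pure functions (the names `normalize_base_url` and `is_default_openai_endpoint` are illustrative, not factory code):

```python
DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'

def normalize_base_url(raw):
    # Treat None (or any non-string) as empty, then drop surrounding
    # whitespace and trailing slashes before comparison.
    if not isinstance(raw, str):
        return ''
    return raw.strip().rstrip('/')

def is_default_openai_endpoint(raw):
    normalized = normalize_base_url(raw)
    return not normalized or normalized == DEFAULT_OPENAI_BASE_URL
```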
```python
if 'temperature' in parameters:
    params_dict['temperature'] = parameters['temperature']
if 'max_completion_tokens' in parameters:
    params_dict['max_completion_tokens'] = parameters['max_completion_tokens']
if 'top_p' in parameters:
    params_dict['top_p'] = parameters['top_p']
if 'frequency_penalty' in parameters:
    params_dict['frequency_penalty'] = parameters['frequency_penalty']
if 'presence_penalty' in parameters:
    params_dict['presence_penalty'] = parameters['presence_penalty']
if 'seed' in parameters:
    params_dict['seed'] = parameters['seed']
```
service_tier is silently ignored for OpenAI-compatible configs.

In the mapping on lines 113-125, service_tier is missing even though the OpenAI path supports it (lines 94-95). That creates inconsistent runtime behavior based only on base_url.

Suggested fix:

```diff
 if 'seed' in parameters:
     params_dict['seed'] = parameters['seed']
+if 'service_tier' in parameters:
+    params_dict['service_tier'] = parameters['service_tier']
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
if 'temperature' in parameters:
    params_dict['temperature'] = parameters['temperature']
if 'max_completion_tokens' in parameters:
    params_dict['max_completion_tokens'] = parameters['max_completion_tokens']
if 'top_p' in parameters:
    params_dict['top_p'] = parameters['top_p']
if 'frequency_penalty' in parameters:
    params_dict['frequency_penalty'] = parameters['frequency_penalty']
if 'presence_penalty' in parameters:
    params_dict['presence_penalty'] = parameters['presence_penalty']
if 'seed' in parameters:
    params_dict['seed'] = parameters['seed']
if 'service_tier' in parameters:
    params_dict['service_tier'] = parameters['service_tier']
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@wavefront/server/apps/call_processing/call_processing/services/llm_service.py`
around lines 113 - 125, The mapping block that builds params_dict from
parameters omits the 'service_tier' key, causing OpenAI-compatible configs
(handled earlier in the OpenAI path around the code that checks base_url) to
ignore it; update the mapping in llm_service.py to include if 'service_tier' in
parameters: params_dict['service_tier'] = parameters['service_tier'] so that the
same 'service_tier' value is forwarded for both OpenAI and non-OpenAI flows
(ensure you modify the same function that constructs params_dict and references
the parameters dict).
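One way to guard against this class of drift is a parity check over the two mappings. The functions below are illustrative stand-ins for the two builders' mapping blocks (not the real factory methods), with the OpenAI-compatible side shown after the suggested fix:

```python
_KEYS = ('temperature', 'max_completion_tokens', 'top_p',
         'frequency_penalty', 'presence_penalty', 'seed', 'service_tier')

def openai_params(parameters):
    # Stand-in for the OpenAI path's mapping, which forwards service_tier.
    return {k: parameters[k] for k in _KEYS if k in parameters}

def openai_compatible_params(parameters):
    # Stand-in for the OpenAI-compatible path after adding service_tier.
    return {k: parameters[k] for k in _KEYS if k in parameters}

sample = {'temperature': 0.3, 'service_tier': 'flex', 'seed': 42}
```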
Summary by CodeRabbit

New Features
- Support for OpenAI-compatible LLM endpoints via a non-default base_url.

Refactor
- Prompt assembly reordered so language instructions precede base prompts.