lang instruction to beginning of prompt #233

Merged
vizsatiz merged 2 commits into develop from fix_moving_lang_instruction_to_top on Mar 4, 2026
Conversation

@rootflo-hardik (Contributor) commented Mar 2, 2026

Summary by CodeRabbit

  • New Features

    • Added support for using custom OpenAI-compatible endpoints (non-default base URLs) for model connections.
  • Refactor

    • Reordered and adjusted spacing of language instructions inside system prompts to improve language detection and instruction handling.

@coderabbitai bot commented Mar 2, 2026

📝 Walkthrough

Reorders prompt assembly in two call-processing services to place language instructions before base prompts; adds support in the LLM factory for OpenAI-compatible endpoints by routing non-default base_url to a new _create_openai_compatible_llm builder.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Prompt assembly: `wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py`, `wavefront/server/apps/call_processing/call_processing/services/pipecat_service.py` | Swapped the ordering of language instruction and base prompt in system message construction and adjusted newline placement. No other logic changes. |
| LLM factory / provider support: `wavefront/server/apps/call_processing/call_processing/services/llm_service.py` | Added detection of a non-default OpenAI base_url and a new `_create_openai_compatible_llm(api_key, model, parameters, base_url)` that maps parameters into InputParams and constructs a BaseOpenAILLMService with the given base_url. The existing default OpenAI path is unchanged. |
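The routing described above can be sketched as follows. This is a minimal illustration, not the factory's actual code: the two builders are stubbed out with string labels, whereas the real factory returns configured service instances.

```python
from typing import Any, Dict, Optional

DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1"

def create_llm(api_key: str, model: str, parameters: Dict[str, Any],
               base_url: Optional[str] = None) -> str:
    # Default endpoint (or none configured): use the standard OpenAI builder.
    if not base_url or base_url == DEFAULT_OPENAI_BASE_URL:
        return "openai"  # stands in for LLMServiceFactory._create_openai_llm(...)
    # Any other endpoint: route to the OpenAI-compatible builder.
    return "openai_compatible"  # stands in for _create_openai_compatible_llm(...)
```

As the review comments below note, the raw string comparison here is exactly where equivalent-but-not-identical default URLs can be misrouted.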

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant LLMFactory as LLMServiceFactory
  participant OpenAI as BaseOpenAILLMService
  Client->>LLMFactory: request LLM (api_key, model, params, base_url)
  alt base_url == "https://api.openai.com/v1" or not provided
    LLMFactory->>OpenAI: _create_openai_llm(api_key, model, params)
  else non-default base_url
    LLMFactory->>OpenAI: _create_openai_compatible_llm(api_key, model, params, base_url)
  end
  OpenAI-->>LLMFactory: configured service instance
  LLMFactory-->>Client: return LLM service
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • vizsatiz

Poem

🐰
Prompts shuffled like springtime hay,
Language first, then base at play.
New endpoint paths hop into view,
Configured, tidy—ready to woo.
I nibble bugs, then sprint away.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'lang instruction to beginning of prompt' directly matches the primary change across all three modified files, which moves language instructions to the beginning of system prompts. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch fix_moving_lang_instruction_to_top


@coderabbitai bot left a comment

🧹 Nitpick comments (1)
wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py (1)

152-153: Update the comment to match the new prepend behavior.

The code now prepends language_instruction, so the inline comment should say “Prepend” (not “Append”) to avoid future confusion.

Suggested edit

```diff
-                    # Append new language instruction to clean base prompt
+                    # Prepend language instruction to clean base prompt
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py`
around lines 152 - 153, The inline comment above the updated_content assignment
is incorrect: the code now prepends language_instruction to base_prompt (using
updated_content = f'{language_instruction}\n\n{base_prompt}'), so update the
comment text to say "Prepend new language instruction to clean base prompt" (or
similar) near the variables language_instruction, base_prompt, and
updated_content to reflect the current behavior.
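The append-to-prepend change this comment refers to can be sketched in isolation. The prompt strings below are invented placeholders, not the service's actual prompts:

```python
# Invented placeholder prompts, only to show the ordering change.
base_prompt = "You are a helpful call assistant."
language_instruction = "Always respond in the caller's detected language."

# Before this PR: the instruction was appended after the base prompt.
old_content = f"{base_prompt}\n\n{language_instruction}"

# After this PR: the instruction is prepended so the model sees it first.
updated_content = f"{language_instruction}\n\n{base_prompt}"
```

Since the code now builds `updated_content` the second way, the inline comment should say "Prepend" rather than "Append".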

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c22cc40 and e98dd70.

📒 Files selected for processing (2)
  • wavefront/server/apps/call_processing/call_processing/services/language_detection_tool.py
  • wavefront/server/apps/call_processing/call_processing/services/pipecat_service.py

@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
wavefront/server/apps/call_processing/call_processing/services/llm_service.py (1)

106-126: Extract shared OpenAI-family parameter mapping to one helper.

This block duplicates logic from _create_openai_llm, which already drifted (service_tier mismatch). A shared mapper will prevent future parity bugs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@wavefront/server/apps/call_processing/call_processing/services/llm_service.py`
around lines 106 - 126, The OpenAI-compatible parameter mapping in
_create_openai_compatible_llm duplicates logic from _create_openai_llm and has
already drifted (e.g., service_tier mismatch); extract a single helper (e.g.,
_map_openai_parameters or similar) that accepts parameters: Dict[str, Any] and
returns a dict/structure suitable for BaseOpenAILLMService.InputParams, then
replace the inline mapping in both _create_openai_compatible_llm and
_create_openai_llm to call that helper; ensure the helper includes all shared
keys (temperature, max_completion_tokens, top_p, frequency_penalty,
presence_penalty, seed) and that any function-specific fields like service_tier
are set consistently where needed.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e98dd70 and c1b80f1.

📒 Files selected for processing (1)
  • wavefront/server/apps/call_processing/call_processing/services/llm_service.py

Comment on lines +53 to +59

```python
            base_url = llm_config.get('base_url')
            if not base_url or base_url == 'https://api.openai.com/v1':
                return LLMServiceFactory._create_openai_llm(api_key, model, parameters)
            else:
                return LLMServiceFactory._create_openai_compatible_llm(
                    api_key, model, parameters, base_url
                )
```

⚠️ Potential issue | 🟡 Minor

Normalize base_url before default-endpoint comparison.

On Line 54, raw string comparison can misroute equivalent default URLs (e.g., trailing slash/whitespace) into the OpenAI-compatible path.

Suggested fix

```diff
         if llm_type == 'openai':
-            base_url = llm_config.get('base_url')
-            if not base_url or base_url == 'https://api.openai.com/v1':
+            raw_base_url = llm_config.get('base_url')
+            base_url = raw_base_url.strip().rstrip('/') if isinstance(raw_base_url, str) else raw_base_url
+            if not base_url or base_url == 'https://api.openai.com/v1':
                 return LLMServiceFactory._create_openai_llm(api_key, model, parameters)
             else:
                 return LLMServiceFactory._create_openai_compatible_llm(
                     api_key, model, parameters, base_url
                 )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@wavefront/server/apps/call_processing/call_processing/services/llm_service.py`
around lines 53 - 59, Normalize the retrieved base_url from llm_config before
comparing to the default OpenAI endpoint: get the raw value (e.g., base_url_raw
= llm_config.get('base_url')), treat None as empty string, strip surrounding
whitespace and trailing slashes (and optionally lower-case) into a normalized
base_url, then compare normalized base_url to the normalized default
"https://api.openai.com/v1" and call LLMServiceFactory._create_openai_llm when
it matches or is empty; otherwise call
LLMServiceFactory._create_openai_compatible_llm with the original/normalized
base_url.
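A standalone helper along the lines this comment describes could look like the following. The function name and the lower-casing choice are illustrative, not from the PR:

```python
def normalize_base_url(raw: object) -> str:
    """Normalize a configured base URL for comparison: treat non-strings
    (e.g. None) as empty, strip surrounding whitespace and trailing
    slashes, and lower-case the result."""
    if not isinstance(raw, str):
        return ""
    return raw.strip().rstrip("/").lower()
```

With this in place, `"https://api.openai.com/v1/"` and `" https://api.openai.com/v1 "` both normalize to the default endpoint instead of being misrouted to the OpenAI-compatible path.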

Comment on lines +113 to +125
if 'temperature' in parameters:
params_dict['temperature'] = parameters['temperature']
if 'max_completion_tokens' in parameters:
params_dict['max_completion_tokens'] = parameters['max_completion_tokens']
if 'top_p' in parameters:
params_dict['top_p'] = parameters['top_p']
if 'frequency_penalty' in parameters:
params_dict['frequency_penalty'] = parameters['frequency_penalty']
if 'presence_penalty' in parameters:
params_dict['presence_penalty'] = parameters['presence_penalty']
if 'seed' in parameters:
params_dict['seed'] = parameters['seed']


⚠️ Potential issue | 🟠 Major

service_tier is silently ignored for OpenAI-compatible configs.

In the Line 113–125 mapping, service_tier is missing even though the default OpenAI path supports it (Lines 94–95). That makes runtime behavior inconsistent based solely on base_url.

Suggested fix

```diff
         if 'seed' in parameters:
             params_dict['seed'] = parameters['seed']
+        if 'service_tier' in parameters:
+            params_dict['service_tier'] = parameters['service_tier']
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@wavefront/server/apps/call_processing/call_processing/services/llm_service.py`
around lines 113 - 125, The mapping block that builds params_dict from
parameters omits the 'service_tier' key, causing OpenAI-compatible configs
(handled earlier in the OpenAI path around the code that checks base_url) to
ignore it; update the mapping in llm_service.py to include if 'service_tier' in
parameters: params_dict['service_tier'] = parameters['service_tier'] so that the
same 'service_tier' value is forwarded for both OpenAI and non-OpenAI flows
(ensure you modify the same function that constructs params_dict and references
the parameters dict).

@vizsatiz vizsatiz merged commit ed859ff into develop Mar 4, 2026
10 checks passed
@vizsatiz vizsatiz deleted the fix_moving_lang_instruction_to_top branch March 4, 2026 08:58