
feat(novita): add Novita AI provider integration #1118

Open

Alex-wuhu wants to merge 3 commits into MervinPraison:main from Alex-wuhu:novita-integration

Conversation

Alex-wuhu commented Mar 19, 2026

Summary

  • Adds novita-basic.py example showing how to use Novita AI's OpenAI-compatible endpoint (https://api.novita.ai/openai) with PraisonAI agents
  • Adds tests/unit/test_novita_provider.py with unit tests covering Novita provider configuration (base URL, API key, Kimi/DeepSeek/GLM model names)
  • No changes to existing providers or core code — fully backward-compatible

Usage

Set NOVITA_API_KEY to your Novita AI API key. Novita AI offers fast inference for Kimi K2.5, DeepSeek V3.2, GLM-5, and other open-source models.
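The key lookup the example relies on can be sketched with the standard library alone. Note the `require_env` helper is hypothetical, not part of praisonaiagents; it fails fast instead of silently passing `None` downstream:

```python
import os

def require_env(name):
    """Return the value of an environment variable, raising if it is unset or empty."""
    value = os.environ.get(name)
    if not value:
        raise ValueError(f"The {name} environment variable is not set.")
    return value

# Demonstration only: seed the variable so the lookup succeeds.
os.environ["NOVITA_API_KEY"] = "sk-example"
api_key = require_env("NOVITA_API_KEY")
```

Passing a validated value to the Agent constructor surfaces a clear error up front rather than a confusing authentication failure at call time.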

Test plan

  • pytest src/praisonai-agents/tests/unit/test_novita_provider.py passes without real API calls
  • Run python src/praisonai-agents/novita-basic.py with a valid NOVITA_API_KEY for end-to-end verification

Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added an example demonstrating Novita AI integration with PraisonAI Agents (uses environment-based API key and runs a sample query).
  • Tests

    • Added unit tests validating Novita provider configuration, API key retrieval from environment, and support for multiple AI model identifiers.
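The environment-based key retrieval those tests cover can be exercised with stdlib mocking alone. This is a sketch with hypothetical function names; it uses `unittest.mock.patch.dict`, the same mechanism the PR's tests reportedly use:

```python
import os
from unittest.mock import patch

def read_novita_key():
    # Mirrors how the example script sources its credential.
    return os.environ.get("NOVITA_API_KEY")

def check_key_read_from_env():
    # Temporarily inject the variable; patch.dict restores os.environ on exit.
    with patch.dict(os.environ, {"NOVITA_API_KEY": "env-test-key"}):
        assert read_novita_key() == "env-test-key"

def check_key_absent():
    # clear=True empties the environment inside the context manager.
    with patch.dict(os.environ, {}, clear=True):
        assert read_novita_key() is None

check_key_read_from_env()
check_key_absent()
```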

Alex-wuhu and others added 2 commits March 19, 2026 12:11
Add novita-basic.py demonstrating how to use Novita AI's OpenAI-compatible
endpoint with PraisonAI Agents. Uses NOVITA_API_KEY env var and
base_url=https://api.novita.ai/openai following the same pattern as
existing ollama and gemini examples.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds tests/unit/test_novita_provider.py with coverage for Novita AI's
OpenAI-compatible endpoint (https://api.novita.ai/openai), verifying
that Agent correctly accepts base_url, api_key, and Novita model names
(Kimi, DeepSeek, GLM).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
coderabbitai bot commented Mar 19, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 515081cd-4619-4861-8727-99228d7a5278

📥 Commits

Reviewing files that changed from the base of the PR and between 4788333 and b2dadec.

📒 Files selected for processing (2)
  • src/praisonai-agents/novita-basic.py
  • src/praisonai-agents/tests/unit/test_novita_provider.py
✅ Files skipped from review due to trivial changes (1)
  • src/praisonai-agents/tests/unit/test_novita_provider.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/praisonai-agents/novita-basic.py

📝 Walkthrough

Walkthrough

Adds a Novita AI example script for PraisonAI Agents and a unit test suite validating Novita-compatible configuration, model selection, and environment-based API key handling.

Changes

  • Novita AI Example — src/praisonai-agents/novita-basic.py
    New example script that reads NOVITA_API_KEY from the environment, constructs a praisonaiagents.Agent configured with Novita's OpenAI-compatible base_url and llm identifier, and calls agent.start("Why is the sky blue?").
  • Novita AI Tests — src/praisonai-agents/tests/unit/test_novita_provider.py
    New unit tests verifying agent construction with Novita base_url, environment-sourced api_key (via patch.dict), and parametrized llm model names; no external network calls.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 I found a Novita key in spring,
I hopped, I coded, gave it wing,
An agent starts with sky so blue,
Tests cheer on — all green and new,
Hooray! The rabbit's done a thing. 🥕✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main change: adding Novita AI provider integration with a new example and unit tests.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


gemini-code-assist bot commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new integration for Novita AI, allowing PraisonAI agents to leverage Novita's OpenAI-compatible endpoint. This expands the range of available high-quality open-source models like Kimi, DeepSeek, and GLM, providing more flexibility and options for users without altering existing functionalities.

Highlights

  • Novita AI Integration Example: A new example script, novita-basic.py, was added to demonstrate how to use Novita AI's OpenAI-compatible endpoint with PraisonAI agents.
  • Unit Tests for Novita Provider: Unit tests were added in tests/unit/test_novita_provider.py to cover Novita provider configuration, including base URL, API key handling, and support for Kimi, DeepSeek, and GLM models.
  • Backward Compatibility: The changes introduce no modifications to existing providers or core code, ensuring full backward compatibility.


gemini-code-assist bot left a comment
Code Review

This pull request introduces support for the Novita AI provider by adding an example script and unit tests. My review focuses on improving the robustness of the example and the maintainability of the tests.

In novita-basic.py, I've suggested adding a check to ensure the NOVITA_API_KEY environment variable is set. This will provide a clearer error message to users if the key is missing, improving the user experience.

In test_novita_provider.py, I've proposed refactoring several similar tests for different Novita models into a single, parameterized test. This improves code maintainability and readability. I've also strengthened an assertion to be more precise.

Comment on lines +17 to +22

agent = Agent(
    instructions="You are a helpful assistant",
    llm="openai/moonshotai/kimi-k2.5",
    base_url="https://api.novita.ai/openai",
    api_key=os.environ.get("NOVITA_API_KEY"),
)
Severity: high

The script currently passes None to the Agent constructor if the NOVITA_API_KEY environment variable is not set. This can lead to a less-than-obvious authentication error when agent.start() is called. It would be more user-friendly to explicitly check for the presence of the API key and raise a ValueError if it's missing. This provides a clear and immediate error message to the user.

Suggested change

-agent = Agent(
-    instructions="You are a helpful assistant",
-    llm="openai/moonshotai/kimi-k2.5",
-    base_url="https://api.novita.ai/openai",
-    api_key=os.environ.get("NOVITA_API_KEY"),
-)
+api_key = os.environ.get("NOVITA_API_KEY")
+if not api_key:
+    raise ValueError("The NOVITA_API_KEY environment variable is not set. Please set it to your Novita AI API key.")
+agent = Agent(
+    instructions="You are a helpful assistant",
+    llm="openai/moonshotai/kimi-k2.5",
+    base_url="https://api.novita.ai/openai",
+    api_key=api_key,
+)

Comment on lines +47 to +97

    def test_agent_novita_kimi_model(self):
        """Agent should accept Novita's Kimi model."""
        from praisonaiagents import Agent

        agent = Agent(
            name="KimiTest",
            instructions="You are a helpful assistant",
            llm="openai/moonshotai/kimi-k2.5",
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert agent is not None

    def test_agent_novita_deepseek_model(self):
        """Agent should accept Novita's DeepSeek model."""
        from praisonaiagents import Agent

        agent = Agent(
            name="DeepSeekNovitaTest",
            instructions="You are a helpful assistant",
            llm="openai/deepseek/deepseek-v3.2",
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert agent is not None

    def test_agent_novita_glm_model(self):
        """Agent should accept Novita's GLM model."""
        from praisonaiagents import Agent

        agent = Agent(
            name="GLMNovitaTest",
            instructions="You are a helpful assistant",
            llm="openai/zai-org/glm-5",
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert agent is not None

    def test_agent_novita_model_stored(self):
        """Agent should store the Novita model name correctly."""
        from praisonaiagents import Agent

        agent = Agent(
            name="ModelStoreTest",
            instructions="You are a helpful assistant",
            llm="openai/moonshotai/kimi-k2.5",
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert "kimi-k2.5" in agent.llm or "moonshotai" in agent.llm
Severity: medium

The tests for different Novita models (test_agent_novita_kimi_model, test_agent_novita_deepseek_model, test_agent_novita_glm_model) are very similar and can be consolidated into a single parameterized test using pytest.mark.parametrize. This will make the test suite more concise and easier to maintain.

Additionally, the assertion in test_agent_novita_model_stored can be made more precise. Instead of checking for substrings, it should assert that agent.llm is exactly equal to the model name provided.

I've combined these improvements into a single parameterized test that covers all model variations and includes a more robust assertion.

    @pytest.mark.parametrize(
        "model_name",
        [
            "openai/moonshotai/kimi-k2.5",
            "openai/deepseek/deepseek-v3.2",
            "openai/zai-org/glm-5",
        ],
    )
    def test_agent_novita_models(self, model_name):
        """Agent should accept various Novita models and store them correctly."""
        from praisonaiagents import Agent

        agent = Agent(
            name=f"NovitaModelTest-{model_name.split('/')[-1]}",
            instructions="You are a helpful assistant",
            llm=model_name,
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert agent is not None
        assert agent.llm == model_name

@Alex-wuhu Alex-wuhu marked this pull request as ready for review March 19, 2026 07:19

Review Summary by Qodo

Add Novita AI provider integration with tests

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Adds Novita AI provider integration example with OpenAI-compatible endpoint
• Implements unit tests for Novita configuration and model support
• Supports Kimi, DeepSeek, and GLM models via Novita's API
• Fully backward-compatible with no changes to existing code
Diagram
flowchart LR
  A["Novita AI Provider"] -->|"OpenAI-compatible endpoint"| B["Agent Configuration"]
  B -->|"base_url + api_key"| C["Kimi/DeepSeek/GLM Models"]
  D["Example Code"] -->|"demonstrates usage"| B
  E["Unit Tests"] -->|"validates configuration"| B


File Changes

1. src/praisonai-agents/novita-basic.py ✨ Enhancement +24/-0

Novita AI provider example implementation

• Adds example demonstrating Novita AI integration with PraisonAI Agents
• Configures Agent with Novita's OpenAI-compatible endpoint URL
• Uses environment variable for API key management
• Includes comprehensive documentation and setup instructions



2. src/praisonai-agents/tests/unit/test_novita_provider.py 🧪 Tests +97/-0

Unit tests for Novita provider integration

• Adds 97 lines of unit tests for Novita AI provider configuration
• Tests Agent acceptance of Novita base_url and API key parameters
• Validates support for Kimi K2.5, DeepSeek V3.2, and GLM-5 models
• Verifies environment variable reading and model name storage





qodo-code-review bot commented Mar 19, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Remediation recommended

1. No NOVITA_API_KEY guard 🐞 Bug ⛯ Reliability
Description
novita-basic.py passes api_key=os.environ.get("NOVITA_API_KEY"), so the script can run with
api_key=None and then attempt authenticated calls without an explicit key. With base_url set, Agent
wires a LiteLLM-backed LLM using the provided api_key value, and LLM only forwards api_key when it
is non-empty—so the example provides no credential when NOVITA_API_KEY is unset.
Code

src/praisonai-agents/novita-basic.py[R17-22]

+agent = Agent(
+    instructions="You are a helpful assistant",
+    llm="openai/moonshotai/kimi-k2.5",
+    base_url="https://api.novita.ai/openai",
+    api_key=os.environ.get("NOVITA_API_KEY"),
+)
Evidence
The example reads NOVITA_API_KEY with os.environ.get (may be None). When base_url is provided, Agent
constructs an LLM instance passing api_key through, and LLM only includes api_key in downstream
params when it is truthy—meaning the example supplies no API key at all if NOVITA_API_KEY is
missing.

src/praisonai-agents/novita-basic.py[17-22]
src/praisonai-agents/praisonaiagents/agent/agent.py[1312-1334]
src/praisonai-agents/praisonaiagents/llm/llm.py[4197-4208]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`src/praisonai-agents/novita-basic.py` allows `api_key` to be `None` when `NOVITA_API_KEY` is unset, which leads to confusing runtime/auth failures.

### Issue Context
This file is presented as a copy/paste runnable example; it should validate required configuration and provide a clear error.

### Fix Focus Areas
- src/praisonai-agents/novita-basic.py[8-24]

### Suggested change
- Read the key into a variable, validate it, and exit/raise with a clear message before constructing the `Agent`.
 - e.g., `api_key = os.getenv("NOVITA_API_KEY")` then `if not api_key: raise ValueError("NOVITA_API_KEY is required...")` and pass `api_key=api_key` to `Agent`.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Missing llm extra prerequisite 🐞 Bug ⚙ Maintainability
Description
novita-basic.py sets base_url, which forces Agent to import and use the LiteLLM-based LLM
implementation and raises ImportError unless the optional "llm" extra is installed. The example’s
Prerequisites section doesn’t mention installing praisonaiagents[llm], so users on a base install
will hit an ImportError immediately.
Code

src/praisonai-agents/novita-basic.py[R8-13]

+Prerequisites:
+    Set your Novita AI API key as an environment variable:
+        export NOVITA_API_KEY="your-api-key-here"
+
+    Get your API key at: https://novita.ai
+"""
Evidence
Agent.__init__ always imports ..llm.llm.LLM when base_url is provided, and explicitly raises an
ImportError instructing users to install praisonaiagents[llm] if that dependency set isn’t
present. The example’s docstring prerequisites omit this requirement.

src/praisonai-agents/praisonaiagents/agent/agent.py[1312-1345]
src/praisonai-agents/novita-basic.py[8-13]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The Novita example uses `base_url=...`, which requires the optional LLM dependency set, but the example’s prerequisites don’t mention installing it.

### Issue Context
`Agent` raises an ImportError (with install instructions) when `base_url` is provided but LLM dependencies are missing.

### Fix Focus Areas
- src/praisonai-agents/novita-basic.py[1-13]

### Suggested change
- Update the docstring/prerequisites to include an explicit install step, e.g.:
 - `pip install "praisonaiagents[llm]"`
 - (or `praisonaiagents[all]` if that’s the recommended bundle).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



coderabbitai bot left a comment

🧹 Nitpick comments (4)
src/praisonai-agents/novita-basic.py (1)

17-22: Add validation for missing API key to improve error messaging.

If NOVITA_API_KEY is not set, os.environ.get() returns None, which may cause a confusing runtime error deep in the API call. Adding early validation would make this example more user-friendly.

💡 Proposed fix to validate API key
 import os
 from praisonaiagents import Agent

+api_key = os.environ.get("NOVITA_API_KEY")
+if not api_key:
+    raise ValueError("Please set NOVITA_API_KEY environment variable")
+
 agent = Agent(
     instructions="You are a helpful assistant",
     llm="openai/moonshotai/kimi-k2.5",
     base_url="https://api.novita.ai/openai",
-    api_key=os.environ.get("NOVITA_API_KEY"),
+    api_key=api_key,
 )

Based on learnings: "Code examples must run without modification (copy-paste success)" - clear error messages improve the copy-paste experience.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/novita-basic.py` around lines 17 - 22, The Agent
instantiation uses os.environ.get("NOVITA_API_KEY") which can be None; add an
early validation before creating Agent: read the env var into a variable (e.g.,
novita_api_key), check if it's falsy, and if so raise a clear error or exit with
a message like "NOVITA_API_KEY environment variable is required", otherwise pass
that variable into the Agent constructor's api_key parameter; update references
around Agent(...) to use this validated novita_api_key.
src/praisonai-agents/tests/unit/test_novita_provider.py (3)

47-84: Consider parameterizing similar model tests to reduce duplication.

These three tests (test_agent_novita_kimi_model, test_agent_novita_deepseek_model, test_agent_novita_glm_model) are nearly identical, differing only in the model name. Using pytest.mark.parametrize would be more maintainable and make it easier to add new models.

♻️ Proposed parameterized version
    @pytest.mark.parametrize("model_id,expected_substring", [
        ("openai/moonshotai/kimi-k2.5", "kimi"),
        ("openai/deepseek/deepseek-v3.2", "deepseek"),
        ("openai/zai-org/glm-5", "glm"),
    ])
    def test_agent_accepts_novita_models(self, model_id, expected_substring):
        """Agent should accept various Novita model identifiers."""
        from praisonaiagents import Agent

        agent = Agent(
            name="NovitaModelTest",
            instructions="You are a helpful assistant",
            llm=model_id,
            base_url="https://api.novita.ai/openai",
            api_key="test-key",
        )
        assert agent is not None
        assert expected_substring in agent.llm.lower()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/tests/unit/test_novita_provider.py` around lines 47 -
84, The three nearly identical tests (test_agent_novita_kimi_model,
test_agent_novita_deepseek_model, test_agent_novita_glm_model) should be
consolidated into a parameterized pytest to reduce duplication; replace them
with a single parametrized test that iterates over model identifiers and
expected substrings, calls Agent(...) with llm set to the parameter, and asserts
the Agent is created (and optionally that agent.llm contains the expected
substring) — update the test function name (e.g.,
test_agent_accepts_novita_models) and reference the Agent class and llm
parameter when making the assertions.

86-97: Weak assertion using or may mask failures.

The assertion assert "kimi-k2.5" in agent.llm or "moonshotai" in agent.llm will pass if either condition is true, which makes it unclear what the expected behavior is and could mask subtle bugs. Consider asserting on the exact expected value.

💡 Proposed stronger assertion
-        assert "kimi-k2.5" in agent.llm or "moonshotai" in agent.llm
+        # Assert exact expected value or the full model string
+        assert agent.llm == "openai/moonshotai/kimi-k2.5"

If the Agent class normalizes the model string, verify the expected normalized format instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/tests/unit/test_novita_provider.py` around lines 86 -
97, The test test_agent_novita_model_stored uses a weak boolean assertion;
replace it with a single exact-value assertion against the expected normalized
model string for the Agent (reference Agent and agent.llm) — e.g. assert
agent.llm == "openai/moonshotai/kimi-k2.5" (or if Agent normalizes to a shorter
form, assert the exact normalized value like "moonshotai/kimi-k2.5"); ensure the
test checks equality to the single expected string rather than using an or
condition.

33-45: Strengthen assertion to verify API key was actually configured.

The test patches the environment and passes the value to Agent, but only asserts agent is not None. This doesn't verify the API key was correctly read from the environment and stored. The pattern is already established in test_agent_accepts_novita_base_url (line 31), which asserts both agent creation and the attribute value. Consider asserting on agent.api_key to match that pattern:

         with patch.dict(os.environ, {"NOVITA_API_KEY": "env-test-key"}):
             agent = Agent(
                 name="NovitaEnvTest",
                 instructions="You are a helpful assistant",
                 llm="openai/moonshotai/kimi-k2.5",
                 base_url="https://api.novita.ai/openai",
                 api_key=os.environ.get("NOVITA_API_KEY"),
             )
         assert agent is not None
+        assert agent.api_key == "env-test-key"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/tests/unit/test_novita_provider.py` around lines 33 -
45, Update the test_agent_novita_api_key_from_env test to assert the Agent
actually stored the API key: after creating the Agent instance in
test_agent_novita_api_key_from_env, add an assertion that agent.api_key equals
the expected "env-test-key" (or the value from
os.environ.get("NOVITA_API_KEY")), similar to the pattern used in
test_agent_accepts_novita_base_url; reference the Agent constructor and the
agent.api_key attribute to locate where to add the assertion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bd0ce45f-9e87-4e68-9f10-7c7661276aa5

📥 Commits

Reviewing files that changed from the base of the PR and between f8c6f32 and 4788333.

📒 Files selected for processing (2)
  • src/praisonai-agents/novita-basic.py
  • src/praisonai-agents/tests/unit/test_novita_provider.py

- Add API key validation in example before Agent construction
- Consolidate duplicate model tests into parametrized test
- Use exact equality assertion for model name storage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>