When making changes in the project, please make sure the corresponding tests pass.
These rules are non-negotiable and apply to every change that touches production code.
A change is NOT done until ALL of the following are true:
- Tests exist for every newly added or modified code path.
- New public function/class → new unit test covering happy path + at least one edge case.
- Bug fix → a regression test that fails WITHOUT the fix and passes WITH it.
- Behavior change → existing tests updated to reflect the new contract.
- Tests actually run and pass locally via `./gradlew jvmTest` (and `jsTest` if the module has a JS target). Do NOT report a task as complete based on "it compiles". Compilation ≠ tests pass.
- `./gradlew build` succeeds for the affected modules before the change is handed back to the user.
- No suppressed warnings, no `@Ignore`, no `@Disabled`, no commented-out assertions were introduced to make tests pass.
- Public API changes (new/removed/renamed public symbols) are documented with KDoc.
If any gate fails, fix the underlying issue. Do NOT weaken the test, disable it, or mark the task complete with a caveat. If the gate genuinely cannot be satisfied, stop and ask the user.
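As an illustration of the regression-test gate, a sketch like the following fails without the fix and passes with it. The `parsePort` helper here is hypothetical, standing in for whatever code the bug fix touches:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

// Hypothetical code under fix. Before the fix, blank input crashed with
// NumberFormatException instead of falling back to the default port.
fun parsePort(raw: String, default: Int = 8080): Int =
    raw.trim().toIntOrNull() ?: default

class ParsePortRegressionTest {
    @Test
    fun `blank input falls back to default instead of throwing`() {
        // This assertion fails WITHOUT the fix and passes WITH it.
        assertEquals(8080, parsePort("   "))
    }

    @Test
    fun `valid input is parsed`() {
        assertEquals(9090, parsePort("9090"))
    }
}
```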
Before writing a test, make sure these points are clear:
- What behavior is under test? (not "what function" — what observable behavior)
- What are the inputs / preconditions? Include the boundary and error cases.
- What is the expected observable outcome? Return value, thrown exception, state change, emitted event.
- What is the minimal test double setup? Prefer the framework's `getMockExecutor`/`mockTool` over hand-rolled mocks.
- Which source set does the test belong in? (`commonTest` when platform-agnostic, else `jvmTest`/`jsTest`.)
If any of these is unclear, the implementation itself is probably under-specified — pause and clarify.
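A platform-agnostic check built on `kotlin.test` answers all of these questions and belongs in `commonTest`. The `truncate` helper below is a hypothetical example used only to illustrate the checklist:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.test.assertFailsWith

// Hypothetical pure helper; it has no platform dependencies,
// so its test belongs in commonTest.
fun truncate(text: String, maxLength: Int): String {
    require(maxLength >= 0) { "maxLength must be non-negative" }
    return if (text.length <= maxLength) text else text.take(maxLength)
}

class TruncateTest {
    // Behavior under test: text longer than the limit is cut; shorter text is untouched.
    @Test
    fun `text longer than limit is cut to the limit`() =
        assertEquals("abc", truncate("abcdef", 3))

    // Boundary case: length exactly at the limit.
    @Test
    fun `text at the limit is returned unchanged`() =
        assertEquals("abc", truncate("abc", 3))

    // Error case: the thrown exception is the observable outcome.
    @Test
    fun `negative limit is rejected`() {
        assertFailsWith<IllegalArgumentException> { truncate("abc", -1) }
    }
}
```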
Before considering the change complete, confirm:
- Which tests were added or modified (file paths + test names).
- The exact Gradle command that was run and its result (pass/fail count).
- Whether any Quality Gate was skipped, and why.
If this checklist cannot be filled in truthfully, the task is not done.
This project uses Kotlin Multiplatform with JVM tests located in the jvmTest source sets.
To run all JVM tests in the project:
```shell
./gradlew jvmTest
```

To run JVM tests from a specific module:

```shell
./gradlew :<module>:jvmTest
```

For example, to run JVM tests from the agents-test module:

```shell
./gradlew :agents:agents-test:jvmTest
```

To run a specific test class:

```shell
./gradlew :<module>:jvmTest --tests "fully.qualified.TestClassName"
```

For example:

```shell
./gradlew :agents:agents-test:jvmTest --tests "ai.koog.agents.test.SimpleAgentMockedTest"
```

Integration tests are located in the integration-tests module and are used to test interactions with external LLM services.
To run all integration tests in the project:
```shell
./gradlew jvmIntegrationTest
```

To run integration tests from a specific module:

```shell
./gradlew :<module>:jvmIntegrationTest
```

For example, to run integration tests from the integration-tests module:

```shell
./gradlew :integration-tests:jvmIntegrationTest
```

Integration test methods are prefixed with `integration_` to distinguish them from unit tests. To run a specific integration test:

```shell
./gradlew :<module>:jvmIntegrationTest --tests "fully.qualified.TestClassName.integration_testMethodName"
```

For example:

```shell
./gradlew :integration-tests:jvmIntegrationTest --tests "ai.koog.integration.tests.SingleLLMPromptExecutorIntegrationTest.integration_testExecute"
```

Integration tests that interact with LLM services require API tokens to be set as environment variables:
- `ANTHROPIC_API_TEST_KEY` - Required for tests using Anthropic's Claude models
- `DEEPSEEK_API_TEST_KEY` - Required for tests using DeepSeek
- `GEMINI_API_TEST_KEY` - Required for tests using Google's Gemini models
- `MISTRAL_AI_API_TEST_KEY` - Required for tests using MistralAI
- `OPEN_AI_API_TEST_KEY` - Required for tests using OpenAI's models
- `OPEN_ROUTER_API_TEST_KEY` - Required for tests using OpenRouter
You need to set these environment variables before running the integration tests that use the corresponding LLM clients.
To simplify development, you can also create an `env.properties` file (already gitignored), using `env.template.properties` as a template. Properties specified there are automatically applied as environment variables when you run any test task.
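For instance, a minimal `env.properties` might look like this; the key names follow the environment variables listed above, and the values are placeholders:

```properties
OPEN_AI_API_TEST_KEY=sk-your-openai-test-key
ANTHROPIC_API_TEST_KEY=sk-ant-your-anthropic-test-key
```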
If you don't have API keys for certain LLM providers, you can skip the tests for those providers using the skip.llm.providers system property. This is useful when you want to run integration tests but only have API keys for some of the providers.
To skip tests for specific providers, use the `-Dskip.llm.providers` flag with a comma-separated list of provider IDs:

```shell
./gradlew :integration-tests:jvmIntegrationTest -Dskip.llm.providers=openai,google
```

This will skip all tests that use OpenAI and Google models, but still run tests for other providers like Anthropic.
Available provider IDs:
- `openai` - Skip tests using OpenAI models
- `anthropic` - Skip tests using Anthropic models
- `google` - Skip tests using Google models
- `openrouter` - Skip tests using OpenRouter models
You can also run a specific test class with provider skipping:
```shell
./gradlew :integration-tests:jvmIntegrationTest --tests "ai.koog.integration.tests.AIAgentIntegrationTest" -Dskip.llm.providers=anthropic,gemini
```

When tests are skipped due to provider filtering, they are reported as "skipped" in the test results rather than "failed".
Ollama tests are integration tests that use the Ollama LLM client. These tests are located in the integration-tests
module and are prefixed with `ollama_` to distinguish them from other integration tests.
To run all Ollama tests in the project:
```shell
./gradlew jvmOllamaTest
```

To run Ollama tests from a specific module:

```shell
./gradlew :integration-tests:jvmOllamaTest
```

To run a specific Ollama test:

```shell
./gradlew :integration-tests:jvmOllamaTest --tests "fully.qualified.TestClassName.ollama_testMethodName"
```

For example:

```shell
./gradlew :integration-tests:jvmOllamaTest --tests "ai.koog.integration.tests.OllamaClientIntegrationTest.ollama_test execute simple prompt"
```

By default, Ollama tests use a Docker container to run the Ollama server. You need to:
- Have Docker installed and running on your machine
- Set the `OLLAMA_IMAGE_URL` environment variable to the URL of the Ollama image to use

For example:

```shell
export OLLAMA_IMAGE_URL="ollama/ollama:latest"
```

Alternatively, you can use a local Ollama client:
- Install Ollama from https://ollama.com/download
- Pull the required model (e.g., `llama3`)
- Comment out the `@field:InjectOllamaTestFixture` annotation in the test class
- Manually specify the executor and model in the test
Example of modifying a test to use a local Ollama client:
```kotlin
// @ExtendWith(OllamaTestFixtureExtension::class)
class OllamaClientIntegrationTest {
    companion object {
        // @field:InjectOllamaTestFixture // Comment out this line
        private lateinit var fixture: OllamaTestFixture

        // Manually initialize executor and model
        private val executor = SingleLLMPromptExecutor(OllamaClient("http://localhost:11434"))
        private val model = OllamaModels.Meta.LLAMA_3_2
    }

    // Test methods...
}
```

Note that when using a local Ollama client, you need to ensure that the model specified in the test is pulled and available in your local Ollama installation.