This repository uses an xUnit test project to validate the MCP server's behavior, schema validation, and command-building logic.
- Total Tests: Large suite (unit + conformance + opt-in scenarios)
- Tool Coverage: Consolidated MCP tools have comprehensive unit tests
- Code Coverage: Tracked in CI and uploaded to Codecov
- Test Organization: Tests are organized by category (Templates, Packages, Projects, Solutions, References, etc.)
- MCP Conformance: 19 conformance tests validate MCP protocol compliance (including consolidated tool schema validation)
This project uses consolidated tools as the primary test surface:
- Comprehensive coverage: Consolidated tool tests (`Consolidated*ToolTests.cs`) contain detailed parameter-matrix tests, command-building assertions, and validation logic
- Machine-readable contract tests: Verify both plain-text and JSON output formats
- Action routing tests: Validate that each action enum value correctly routes to underlying implementation
- Schema validation: MCP conformance tests verify action enums appear correctly in tool schemas
- Clear test organization: Tests organized by domain (project, package, EF, etc.)
- Improved signal-to-noise: Focused tests on consolidated tool behavior
- Easier maintenance: Adding new actions means adding tests to existing consolidated tool test files
- Future-proof: Tool surface remains stable even as new capabilities are added
Run all tests (recommended):
```shell
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release
```

Important:
- When specifying a project for `dotnet test`, use `--project` (do not pass a `.csproj` as a positional argument). If you pass a project path positionally, `dotnet test` will emit: `Specifying a project for 'dotnet test' should be via '--project'.`
- When running from the solution, use `--solution`.
This repository uses the Microsoft Testing Platform (MTP) as configured in `global.json`:

```json
{
  "test": {
    "runner": "Microsoft.Testing.Platform"
  }
}
```

Support for the `--project` flag depends on which test runner is being used:
- Microsoft Testing Platform (MTP): Supports the `--project` flag
  - Used when `global.json` configures `"runner": "Microsoft.Testing.Platform"`
  - Default in .NET SDK 10.0+
  - Recommended: `dotnet test --project MyTests.csproj`
- VSTest: Does not support the `--project` flag
  - Legacy test runner (default in .NET SDK 7.0 and earlier)
  - Uses a positional argument: `dotnet test MyTests.csproj`
The `dotnet_project` `Test` action now supports automatic test runner detection:
- Detects Microsoft Testing Platform (MTP) from the `global.json` configuration
- Defaults to VSTest for legacy compatibility when MTP is not detected
- Supports explicit test runner selection via the `testRunner` parameter
Test Runner Selection:
The MCP server chooses between the `--project` flag (MTP) and a positional argument (VSTest) based on:
- Explicit `testRunner` parameter: `MicrosoftTestingPlatform`, `VSTest`, or `Auto` (default)
- Auto-detection: Checks for `{ "test": { "runner": "Microsoft.Testing.Platform" } }` in `global.json`
- Legacy fallback: Uses the `useLegacyProjectArgument: true` parameter (deprecated)
- Default: VSTest mode (positional argument) for safety with legacy projects
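The precedence above can be sketched as follows. This is an illustrative Python sketch, not the server's actual C# implementation; the helper names `detect_runner` and `project_argument` are hypothetical.

```python
import json
from pathlib import Path

def detect_runner(test_runner: str = "Auto",
                  use_legacy_project_argument: bool = False,
                  global_json_path: str = "global.json") -> str:
    """Documented precedence: explicit parameter, then global.json
    auto-detection, then legacy fallback / default VSTest."""
    if test_runner in ("MicrosoftTestingPlatform", "VSTest"):
        return test_runner                      # explicit parameter wins
    if use_legacy_project_argument:
        return "VSTest"                         # deprecated legacy fallback
    path = Path(global_json_path)
    if path.exists():
        config = json.loads(path.read_text())
        if config.get("test", {}).get("runner") == "Microsoft.Testing.Platform":
            return "MicrosoftTestingPlatform"   # auto-detected from global.json
    return "VSTest"                             # default mode for safety

def project_argument(runner: str, project: str) -> list[str]:
    # MTP takes --project; VSTest takes the project path positionally
    if runner == "MicrosoftTestingPlatform":
        return ["dotnet", "test", "--project", project]
    return ["dotnet", "test", project]
```

The key point is that an explicit `testRunner` value short-circuits auto-detection, so callers can always force one argument style.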
SDK Requirements:
- MTP mode (`--project` flag) requires:
  - .NET SDK 8.0+ with Microsoft Testing Platform enabled, OR
  - .NET SDK 10.0+ (MTP enabled by default)
- VSTest mode (positional argument) works with all SDK versions
Troubleshooting: If you encounter `Unrecognized option '--project'` or `MSB1001: Unknown switch`:
- Auto mode (recommended): Add `{ "test": { "runner": "Microsoft.Testing.Platform" } }` to your `global.json`
- Explicit mode: Use `testRunner: "VSTest"` for VSTest or `testRunner: "MicrosoftTestingPlatform"` for MTP
- Legacy parameter: Use `useLegacyProjectArgument: true` (deprecated; use `testRunner` instead)
- Verify support: Run `dotnet test --help | grep -- --project` to check whether your SDK supports `--project`
Machine-Readable Output: When `machineReadable: true`, test results include metadata:
- `selectedTestRunner`: `microsoft-testing-platform` or `vstest`
- `projectArgumentStyle`: `--project`, `positional`, or `none`
- `selectionSource`: `global.json`, `testRunner-parameter`, `useLegacyProjectArgument-parameter`, or `default`
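For example, a machine-readable response might carry metadata shaped like this (illustrative fragment; only the field names and the value sets listed above are taken from the contract, the surrounding shape is an assumption):

```json
{
  "success": true,
  "selectedTestRunner": "microsoft-testing-platform",
  "projectArgumentStyle": "--project",
  "selectionSource": "global.json"
}
```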
Examples:

```shell
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release
dotnet test --solution DotNetMcp.slnx -c Release
```

Run tests from the solution:

```shell
dotnet test --solution DotNetMcp.slnx -c Release
```

This repo includes end-to-end scenario tests that start the real DotNetMcp server binary and communicate with it over stdio. These are integration-style tests (slower and more environment-dependent than unit tests).
- Namespace: `DotNetMcp.Tests.Scenarios`
- Gate: `DOTNET_MCP_SCENARIO_TESTS=1` (or `true`/`yes`)
- How they run in CI: `build.yml` runs them by setting `DOTNET_MCP_SCENARIO_TESTS=1` and filtering to the scenario namespace.

Run scenario tests locally:

```powershell
$env:DOTNET_MCP_SCENARIO_TESTS = "1"
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release -- --filter-namespace "*DotNetMcp.Tests.Scenarios*"
```

All scenario and conformance test classes receive `ITestOutputHelper` via constructor injection. The harness (`McpScenarioClient`) writes diagnostic lines to the per-test output channel so that CI logs for failing tests include actionable context without interleaving output across tests.
Logged events (no arg values are logged to avoid leaking secrets):
| Event | Example output |
|---|---|
| Server start command | `[MCP] Server command: dotnet /path/to/DotNetMcp.dll` |
| Server connected | `[MCP] Server connected.` |
| Tool call (arg keys only) | `[MCP] → dotnet_project (args: {action, project, configuration})` |
| Tool response (first 500 chars) | `[MCP] ← dotnet_project: Build succeeded...` |

For conformance tests, the server command is also logged in `InitializeAsync` via `[MCP] Conformance server command: ...`.
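The keys-only formatting in the table above can be sketched as follows (illustrative Python; the real harness is C#, and `format_tool_call` is a hypothetical name):

```python
def format_tool_call(tool: str, args: dict) -> str:
    # Log only the argument names, never the values, so secrets cannot leak
    keys = ", ".join(args.keys())
    return f"[MCP] → {tool} (args: {{{keys}}})"

line = format_tool_call(
    "dotnet_project",
    {"action": "Build", "project": "App.csproj", "configuration": "Release"})
# → [MCP] → dotnet_project (args: {action, project, configuration})
```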
Adding output to new scenario tests: Every new scenario or release-scenario test class should accept `ITestOutputHelper` in its constructor and pass it to `McpScenarioClient.CreateAsync`:

```csharp
public class MyNewScenarioTests
{
    private readonly ITestOutputHelper _output;

    public MyNewScenarioTests(ITestOutputHelper output)
    {
        _output = output;
    }

    [ScenarioFact]
    public async Task MyScenario()
    {
        await using var client = await McpScenarioClient.CreateAsync(
            TestContext.Current.CancellationToken, _output);
        // ...
    }
}
```

Release-gate scenarios are a second tier of long-running integration tests intended to run before shipping a release. They may:
- perform real NuGet restores,
- install local .NET tools (e.g., `dotnet-ef`),
- run EF Core migrations against a local SQLite file,
- stress concurrency by running many tool calls in parallel.

- Namespace: `DotNetMcp.Tests.ReleaseScenarios`
- Gate: `DOTNET_MCP_RELEASE_SCENARIO_TESTS=1` (or `true`/`yes`)
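Both scenario tiers use the same `1`/`true`/`yes` gate convention. A minimal sketch of such a check (illustrative Python; `gate_enabled` is a hypothetical name, and case-insensitive matching is an assumption):

```python
import os

TRUTHY = {"1", "true", "yes"}

def gate_enabled(name: str) -> bool:
    # Treat 1/true/yes as enabling the gate; anything else (or unset) skips
    value = os.environ.get(name, "")
    return value.strip().lower() in TRUTHY

# e.g. gate_enabled("DOTNET_MCP_RELEASE_SCENARIO_TESTS")
```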
Use the helper script, which enables both scenario tiers and runs them in a predictable order:

```powershell
pwsh -File scripts/run-release-gate-tests.ps1
```

Common variants:

```powershell
# Fast path (assumes you already built)
pwsh -File scripts/run-release-gate-tests.ps1 -NoRestore -NoBuild

# Skip server.json validation
pwsh -File scripts/run-release-gate-tests.ps1 -SkipServerJsonValidation
```

The workflow `.github/workflows/release-scenarios.yml` is designed to be run via `workflow_dispatch`. It runs release scenario tests on the selected OS.
The repository includes conformance tests that validate the server's compliance with the Model Context Protocol (MCP) specification.
To run only the conformance tests:

```shell
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release -- --filter-class "*McpConformanceTests"
```

The conformance tests verify:
- Server Initialization: Handshake protocol, server info, capabilities, and protocol version negotiation
- Tool Discovery: Tool listing with proper metadata (names, descriptions, input schemas)
- Tool Invocation: Successful tool execution and response format
- Error Handling: Proper MCP error responses with error codes and messages
- Resource Listing: Resource discovery API (if resources are provided)
The conformance tests use an in-process stdio approach:
- Tests start the actual DotNetMcp server binary as a child process
- Communication happens via stdin/stdout using the MCP SDK's `StdioClientTransport`
- Tests are deterministic and require no external services
- The same server binary used in production is tested for conformance
This approach ensures that:
- Tests validate the actual deployed server behavior
- No mocking or test doubles are used for the core server
- Protocol conformance is verified end-to-end
Conformance tests run automatically in GitHub Actions as part of the build.yml workflow. They run before the full test suite to provide early feedback on protocol compliance issues.
To collect coverage with Microsoft Testing Platform, run:

```shell
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release -- --coverage --coverage-output-format cobertura
```

The Cobertura XML will be written under the test output folder, typically `DotNetMcp.Tests/bin/Release/net10.0/TestResults/*.cobertura.xml`.
CI uploads a Cobertura coverage artifact named `coverage-cobertura` on each run of the `build.yml` workflow.
To download the latest successful coverage artifact for main and print a quick hotspot summary:
pwsh -File scripts/download-coverage-artifact.ps1To download coverage from a specific workflow run (e.g. a run URL like .../actions/runs/20865330584):
pwsh -File scripts/download-coverage-artifact.ps1 -RunId 20865330584To download coverage for a specific pull request:
pwsh -File scripts/download-coverage-artifact.ps1 -PullRequest 285When using -PullRequest, the script also downloads the latest successful run for the base branch (defaults to -Branch main) and prints a PR-vs-base delta.
To disable the base branch comparison:
pwsh -File scripts/download-coverage-artifact.ps1 -PullRequest 285 -NoBaseCompareNotes:
- Requires GitHub CLI (`gh`) and auth (`gh auth login`).
- Output is saved under `artifacts/coverage/run-<runId>/`.
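Once a Cobertura file is on disk, a quick overall number can be read straight from the root element. A minimal sketch (`line-rate` is the standard Cobertura attribute; the helper name is hypothetical):

```python
import xml.etree.ElementTree as ET

def line_rate(cobertura_xml: str) -> float:
    # The Cobertura root <coverage> element carries an overall line-rate (0.0-1.0)
    root = ET.fromstring(cobertura_xml)
    return float(root.attrib["line-rate"])

# Usage: line_rate(Path("artifacts/coverage/.../report.cobertura.xml").read_text())
```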
This project follows a clear policy on what code is included in coverage measurements and what is intentionally excluded.
Coverage metrics focus on production code that we author and maintain:
- All source files in `DotNetMcp/` (excluding generated code)
- Tool implementations, helpers, and business logic
- Error handling and validation code
- SDK integration code
The following categories are excluded from coverage reports:
- Build Artifacts (`obj/`, `bin/`)
  - Intermediate build outputs
  - Compiled assemblies
  - These are build products, not source code
- Auto-Generated Files
  - `*.g.cs` - Generated C# files (e.g., Regex source generators)
  - `*.GlobalUsings.g.cs` - SDK-generated global usings
  - `*.AssemblyInfo.cs` - SDK-generated assembly metadata
  - `*.AssemblyAttributes.cs` - SDK-generated assembly attributes
- Source Generator Outputs
  - `generated/` directories
  - Files in generator-specific directories (e.g., `System.Text.RegularExpressions.Generator/`)
  - These are generated at build time by source generators
- Test Projects
  - `**/*.Tests/**` - Test code itself is not measured
  - `**/Tests/**` - Directories named `Tests` that contain test-only projects and assets
  - We measure how well tests cover production code, not the tests themselves
Exclusions are implemented in multiple locations to ensure consistency:
- Codecov Configuration (`codecov.yml`)
  - Primary exclusion mechanism for Codecov reporting
  - Defines patterns for files to ignore
  - See the `ignore:` section in `codecov.yml` for the complete list
- CI Workflow (`.github/workflows/build.yml`)
  - Coverage is collected using Microsoft Testing Platform
  - Raw coverage data includes all files; exclusions happen at report processing
  - The coverage artifact is uploaded for diagnostics
- Coverage Collector (Microsoft Testing Platform)
  - Collects coverage from all compiled code by default
  - Exclusions are applied post-collection by Codecov
- Generated code should not affect coverage metrics because:
  - We don't author or maintain it
  - It's auto-generated at build time
  - Changes to it are not under our control
- Build artifacts (`obj/`, `bin/`) should be excluded because:
  - They are compiler outputs, not source code
  - Including them would duplicate coverage of the same code
- Test code should be excluded because:
  - Coverage measures how well tests cover production code
  - Measuring test coverage of tests is not useful
The exclusion patterns are validated by:
- Codecov reporting (visible in PR comments and coverage reports)
- CI uploads coverage artifacts that can be inspected locally
- The `download-coverage-artifact.ps1` script can be used to examine raw coverage data
- The `validate-coverage-exclusions.ps1` script validates exclusion patterns locally
To validate exclusion patterns against the latest coverage data:
```powershell
pwsh -File scripts/validate-coverage-exclusions.ps1
```

This script will:
- Parse the `codecov.yml` configuration
- Analyze the latest coverage report
- Show which files will be excluded/included
- Warn if expected patterns are not working
If you notice coverage anomalies (e.g., generated files appearing in reports), check:
- The `codecov.yml` configuration has the correct patterns
- The file paths match the exclusion patterns
- Codecov is processing the exclusions correctly:
  - For uploads from `main`, review the latest report in the Codecov UI
  - For pull requests, use the `coverage-cobertura` artifact and/or `scripts/download-coverage-artifact.ps1` to inspect coverage locally and confirm exclusions
`DotNetMcp.Tests/` contains the test suite.
- Most tests are "pure" unit tests (no network, no machine state changes).
- `McpConformanceTests.cs` - MCP protocol conformance validation (handshake, tool listing/invocation, error handling)
- `TemplateToolsTests.cs` - Template-related tools (list, search, info, cache)
- `PackageToolsTests.cs` - Package management tools (add, remove, update, search, pack)
- `ProjectToolsTests.cs` - Project operations (restore, clean)
- `ReferenceToolsTests.cs` - Project reference management
- `SolutionToolsTests.cs` - Solution file operations
- `MiscellaneousToolsTests.cs` - Watch, format, NuGet, help, and framework tools
- `SdkAndServerInfoToolsTests.cs` - SDK info and server capability tools
- `EdgeCaseAndIntegrationTests.cs` - Comprehensive edge cases and parameter combinations
- `DotNetCliToolsTests.cs` - Core CLI tools (build, test, run, publish, certificates, secrets, tools)
- `EntityFrameworkCoreToolsTests.cs` - EF Core migration and database tools
- Plus helper/infrastructure tests for caching, concurrency, error handling, etc.
Some tests are intentionally skipped by default because they require external state (for example, an actual dotnet CLI invocation with a valid project on disk).
This includes:
- Scenario tests (real server over stdio)
- Release-gate scenarios (NuGet network access, tool install, EF migrations, concurrency stress)
If you are working on command execution behavior, you may want to:
- Run tests locally (not in a restricted CI sandbox).
- Ensure the .NET SDK is installed and available on `PATH`.
Certain operations (notably development certificate trust/clean) can trigger OS prompts (UAC dialogs on Windows, trust prompts on macOS). Tests that can produce modal dialogs are opt-in.
To enable interactive tests, set:
```shell
DOTNET_MCP_INTERACTIVE_TESTS=1
```

Examples:

PowerShell:

```powershell
$env:DOTNET_MCP_INTERACTIVE_TESTS = "1"
dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release
```

bash:

```shell
DOTNET_MCP_INTERACTIVE_TESTS=1 dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release
```

If interactive tests are disabled, they will appear as skipped with a message explaining how to enable them.
When adding a new action to a consolidated tool:
- Add an action routing test in the corresponding `Consolidated*ToolTests.cs` file
- Add parameter validation tests (both machine-readable and plain-text)
- Add command-building assertion tests using `MachineReadableCommandAssertions.AssertExecutedDotnetCommand`
- If the action has required parameters, add validation error tests
Example:

```csharp
[Fact]
public async Task DotnetProject_NewAction_RoutesCorrectly()
{
    var result = await _tools.DotnetProject(
        action: DotnetProjectAction.NewAction,
        requiredParam: "value",
        machineReadable: true);

    Assert.NotNull(result);
    MachineReadableCommandAssertions.AssertExecutedDotnetCommand(
        result, "dotnet new-command \"value\"");
}
```

When testing `machineReadable: true` behavior:
- Use `MachineReadableCommandAssertions.AssertExecutedDotnetCommand` to verify the command was executed
- Verify the response contains `"success": true` or `"success": false` as appropriate
- For validation errors, verify the response contains the expected error code and parameter name

Avoid creating redundant tests that differ only by the `machineReadable` flag unless they verify different contract behavior.
- Prefer `-c Release` for CI parity.
- If you're iterating on a single area, use Microsoft Testing Platform filters after `--`, for example: `dotnet test --project DotNetMcp.Tests/DotNetMcp.Tests.csproj -c Release -- --filter-class DotNetMcp.Tests.CacheMetricsTests`.
When writing tests (including scenario/release-scenario tests), prefer `Path.Join(...)` over `Path.Combine(...)` whenever possible.
- `Path.Combine` has "rooted path reset" behavior: if a later segment is rooted (or begins with a directory separator), the earlier segments are silently discarded, which can produce surprising results.
- Several analyzers/security checks recommend `Path.Join` to avoid this class of issue.
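The same "rooted path reset" exists in Python's `os.path.join`, which makes the hazard easy to demonstrate (`Path.Join`, by contrast, simply concatenates segments with a separator instead of discarding earlier ones):

```python
import os.path

# os.path.join mirrors .NET's Path.Combine here: a later segment that
# starts with a separator discards everything before it.
print(os.path.join("work", "output"))   # work/output (work\output on Windows)
print(os.path.join("work", "/output"))  # /output — "work" is silently dropped
```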