This MCP server project includes a comprehensive test suite with unit tests and integration tests. The test suite uses pytest as the testing framework and includes coverage reporting, fixtures, and custom markers for organizing tests.
```
tests/
├── __init__.py
├── conftest.py                          # Shared pytest configuration (imports fixtures)
├── fixtures/                            # Test fixtures and mocks
│   ├── __init__.py
│   ├── context.py                       # Mock MCP Context fixture
│   └── weather.py                       # Weather sample data fixtures
├── unit/                                # Unit tests (90 tests)
│   ├── __init__.py
│   ├── test_auth_additional_tools.py    # Auth-related MCP tool handlers
│   ├── test_auth_provider.py            # Auth provider selection logic
│   ├── test_elicitation.py              # Trip extension elicitation flows
│   ├── test_helpers.py                  # Shared helper utilities
│   ├── test_itinerary_service_extra.py  # Itinerary service activity suggestions
│   ├── test_itinerary_tool_handler.py   # Itinerary tool handler delegation
│   ├── test_models.py                   # Data model validation
│   ├── test_server.py                   # Server main() wiring and transports
│   ├── test_travel_prompts.py           # Travel prompt template generation
│   ├── test_travel_prompts_handler.py   # Prompt handler delegation
│   ├── test_weather_forecast.py         # Weather forecast utilities
│   └── test_weather_resource.py         # Weather MCP resource handler
└── integration/                         # Integration tests (13 tests)
    ├── __init__.py
    ├── test_itinerary_tool.py           # End-to-end itinerary generation
    └── test_weather_api.py              # Open-Meteo weather API client
```
Total Tests: 103 tests
- Unit Tests: 90 tests across 12 files (fast, isolated component testing)
- Integration Tests: 13 tests across 2 files (component interaction testing)
Ensure you have the dev dependencies installed:
```bash
# Using uv (recommended)
uv sync --dev
```

```bash
# Using the test script (recommended)
./scripts/test.sh

# Or directly with uv
uv run pytest

# Or with verbose output
uv run pytest tests/ -v
```

The project includes a ready-to-use test script that runs the full suite with coverage:
```bash
#!/bin/bash
set -e
set -x

uv run pytest \
    --cov=src/mcp_server \
    --cov-report=term-missing \
    --cov-report=html \
    --cov-report=xml \
    --cov-fail-under=80 \
    --cov-config=.coveragerc \
    -v \
    tests/ \
    "${@}"
```

This script:
- Runs all tests with verbose output
- Generates coverage reports (terminal with missing lines, HTML, and XML)
- Enforces a minimum 80% coverage threshold
- Uses `.coveragerc` for coverage configuration
- Passes any extra arguments through (e.g., `./scripts/test.sh -k "weather"`)
Tests are organized using pytest markers defined in pytest.ini:
- `@pytest.mark.unit` - Unit tests for isolated component testing
- `@pytest.mark.integration` - Integration tests for component interactions
- `@pytest.mark.slow` - Tests that take longer to execute
- `@pytest.mark.skip_ci` - Tests to skip in CI environments
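A minimal sketch of applying these markers in test code (the test bodies here are placeholders, not the project's actual tests):

```python
import pytest

# Markers registered in pytest.ini can be stacked on one test;
# --strict-markers rejects any marker that is not registered there.
@pytest.mark.unit
def test_fast_isolated():
    # Placeholder body: a real unit test exercises one component in isolation.
    assert 1 + 1 == 2

@pytest.mark.integration
@pytest.mark.slow
def test_end_to_end():
    # Placeholder body: a real integration test crosses component boundaries.
    assert "paris".title() == "Paris"
```

Markers compose in `-m` expressions, so `uv run pytest -m "integration and not slow"` selects integration tests while skipping those also marked slow.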
| File | Tests | Description |
|---|---|---|
| `test_helpers.py` | 27 | Shared helpers: date formatting, day validation, temperature labels, auth permission/premium checks, request/context accessors, Auth0 userinfo HTTP client |
| `test_weather_forecast.py` | 12 | Weather description from codes, time-of-day activity suggestions, fallback forecast structure |
| `test_elicitation.py` | 9 | Trip-extension elicitation paths: accept, reject, cancel, unsupported, errors, ValueError propagation |
| `test_travel_prompts.py` | 8 | Itinerary and weather-based activity prompt templates: structure, content, edge cases |
| `test_server.py` | 7 | Server main() wiring: transports (stdio, HTTP, streamable HTTP, SSE, default), provider registration, rate limiting |
| `test_models.py` | 6 | ItineraryPreferences model validation, defaults, day bounds, dict round-trip |
| `test_auth_additional_tools.py` | 5 | Auth-related MCP tools: GitHub/Auth0 user info, custom auth message, request/session info |
| `test_auth_provider.py` | 5 | get_auth_provider behavior: GitHub, Auth0, Clerk, case-insensitivity, unsupported provider errors |
| `test_itinerary_service_extra.py` | 4 | Itinerary service activity suggestions by time of day and defaults |
| `test_weather_resource.py` | 3 | Weather MCP resource handler: success, "today" shorthand, varying day counts |
| `test_itinerary_tool_handler.py` | 2 | Itinerary tool handlers call service and return expected output shapes |
| `test_travel_prompts_handler.py` | 2 | Prompt handler delegates to prompt template with correct parameters |
| File | Tests | Description |
|---|---|---|
| `test_itinerary_tool.py` | 7 | End-to-end itinerary generation with elicitation, invalid dates, weather formatting, and activity suggestions |
| `test_weather_api.py` | 6 | Open-Meteo weather client with mocked HTTP: success, errors, date parsing, range validation |
```bash
# Run all tests
uv run pytest

# Run with verbose output
uv run pytest -v

# Run with extra verbose output
uv run pytest -vv

# Run quietly (minimal output)
uv run pytest -q
```

```bash
# Unit tests only (fast, isolated)
uv run pytest tests/unit/ -v
uv run pytest -m unit

# Integration tests only (component interactions)
uv run pytest tests/integration/ -v
uv run pytest -m integration

# Exclude slow tests
uv run pytest -m "not slow"
```

```bash
# Single test file
uv run pytest tests/unit/test_models.py -v

# Multiple test files
uv run pytest tests/unit/test_models.py tests/unit/test_helpers.py -v

# Specific test class
uv run pytest tests/unit/test_models.py::TestItineraryPreferences -v

# Specific test method
uv run pytest tests/unit/test_helpers.py::TestFormatDate::test_format_date_standard -v

# Pattern matching by keyword
uv run pytest -k "elicitation" -v
uv run pytest -k "weather and not api" -v
uv run pytest -k "auth" -v
```

```bash
# Glob patterns for file selection
uv run pytest tests/unit/test_*.py
uv run pytest tests/**/test_weather*.py
```

The project uses a .coveragerc file to configure coverage collection:
```ini
[run]
omit =
    */tests/*
    */__init__.py
    */lib/*
    */prompt_templates/*
```

The simplest way to run tests with coverage is via the test script:
```bash
# Full suite with coverage (enforces 80% minimum)
./scripts/test.sh

# Pass extra args to pytest
./scripts/test.sh -k "weather"
./scripts/test.sh tests/unit/ --maxfail=1
```

```bash
# Run tests with coverage
uv run pytest --cov=src/mcp_server

# With terminal report showing missing lines
uv run pytest --cov=src/mcp_server --cov-report=term-missing

# With HTML report (opens in browser)
uv run pytest --cov=src/mcp_server --cov-report=html
open htmlcov/index.html

# Enforce minimum coverage threshold
uv run pytest --cov=src/mcp_server --cov-fail-under=80
```

```bash
# Coverage for specific module
uv run pytest --cov=src/mcp_server.utils tests/unit/test_weather_forecast.py

# Generate multiple report formats
uv run pytest --cov=src/mcp_server --cov-report=html --cov-report=xml --cov-report=term-missing

# Unit tests only with coverage
uv run pytest tests/unit/ --cov=src/mcp_server --cov-report=term-missing -v
```

```bash
# Show test names
uv run pytest -v

# Show more details
uv run pytest -vv

# Show print statements from tests
uv run pytest -s

# Show local variables on failure
uv run pytest -l

# Show captured log messages
uv run pytest --log-cli-level=INFO
```

The project's pytest.ini configures logging:
- Log level: INFO
- Format: `%(asctime)s [%(levelname)8s] %(message)s`
- Date format: `%Y-%m-%d %H:%M:%S`
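The corresponding `pytest.ini` entries would look roughly like this (a sketch reconstructed from the settings above; `log_cli` being enabled is an assumption, not confirmed by the source):

```ini
log_cli = true
log_cli_level = INFO
log_format = %(asctime)s [%(levelname)8s] %(message)s
log_date_format = %Y-%m-%d %H:%M:%S
```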
```bash
# Stop on first failure
uv run pytest -x
uv run pytest --maxfail=1

# Stop after N failures
uv run pytest --maxfail=3
```

```bash
# Run only last failed tests
uv run pytest --lf
uv run pytest --last-failed

# Run failed tests first, then continue with others
uv run pytest --ff
uv run pytest --failed-first
```

```bash
# Drop into debugger on failure
uv run pytest --pdb

# Drop into debugger at start of each test
uv run pytest --trace

# Debug specific test with all output
uv run pytest tests/unit/test_helpers.py::TestFormatDate::test_format_date_standard -vv -s --pdb
```

```bash
# Auto-detect CPU count (requires pytest-xdist)
uv run pytest -n auto

# Specific number of workers
uv run pytest -n 4

# Parallel execution with coverage
uv run pytest -n auto --cov=src/mcp_server --cov-report=html
```

```bash
# Show slowest 10 tests
uv run pytest --durations=10

# Show all test durations
uv run pytest --durations=0
```

```bash
# JUnit XML report (for CI systems)
uv run pytest --junitxml=report.xml

# JSON report (requires pytest-json-report)
uv run pytest --json-report --json-report-file=report.json

# Multiple report formats
uv run pytest \
    --cov=src/mcp_server \
    --cov-report=xml \
    --cov-report=term \
    --junitxml=test-results.xml \
    -v
```

```bash
# List all available markers
uv run pytest --markers

# Run with strict marker checking (enabled by default in pytest.ini)
uv run pytest --strict-markers
```

```bash
# Quick feedback: run related unit tests
uv run pytest tests/unit/test_helpers.py -v

# Debug with print statements
uv run pytest tests/unit/test_models.py -v -s

# Debug failing test with pdb
uv run pytest tests/unit/test_elicitation.py::TestElicitTripExtension::test_accept -x --pdb
```

```bash
# Run all tests with coverage, fail-fast, and minimum threshold
./scripts/test.sh --maxfail=5
```

```bash
# Fast unit tests only, fail fast
uv run pytest tests/unit/ -v --maxfail=1
```

```bash
# Comprehensive test run with coverage and strict checks
uv run pytest \
    --cov=src/mcp_server \
    --cov-report=html \
    --cov-report=term-missing \
    --cov-fail-under=80 \
    --strict-markers \
    --maxfail=1 \
    -vv
```

```bash
# Full test suite optimized for CI
uv run pytest \
    --cov=src/mcp_server \
    --cov-report=xml \
    --cov-report=term \
    --cov-fail-under=80 \
    --junitxml=test-results.xml \
    --maxfail=1 \
    -v
```

For continuous testing during development:

```bash
# Install pytest-watch
uv pip install pytest-watch

# Run in watch mode
ptw

# Watch mode with options
ptw -- -v --maxfail=1
```

The project includes a comprehensive pytest.ini configuration:
- Test Discovery: Automatically finds `test_*.py` files, `Test*` classes, and `test_*` functions
- Markers: Custom markers for organizing tests (`unit`, `integration`, `slow`, `skip_ci`)
- Strict Markers: Enabled by default; unknown markers cause failures
- Asyncio: Auto mode for async test support
- Warnings: Treated as errors (except deprecation and pending-deprecation warnings)
- Minimum Python: 3.10
- Default Options: `--strict-markers --verbose --tb=short --maxfail=10 -ra`
- Console Output: Progress style
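Put together, those settings correspond to a `pytest.ini` along these lines (a sketch reconstructed from the bullets above, not the file's verbatim contents):

```ini
[pytest]
python_files = test_*.py
python_classes = Test*
python_functions = test_*
markers =
    unit: unit tests for isolated component testing
    integration: integration tests for component interactions
    slow: tests that take longer to execute
    skip_ci: tests to skip in CI environments
asyncio_mode = auto
addopts = --strict-markers --verbose --tb=short --maxfail=10 -ra
console_output_style = progress
filterwarnings =
    error
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
```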
Coverage collection is configured to omit:
- Test files (`*/tests/*`)
- Init files (`*/__init__.py`)
- Library files (`*/lib/*`)
- Prompt templates (`*/prompt_templates/*`)
```bash
# Use different config file
uv run pytest -c custom_pytest.ini

# Override config options
uv run pytest -o markers="custom: custom marker description"

# Ensure module is importable
export PYTHONPATH="${PYTHONPATH}:$(pwd)/src"
uv run pytest
```

Tests follow the AAA pattern:
- Arrange: Set up test data and preconditions
- Act: Execute the code being tested
- Assert: Verify the results
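As a sketch of the pattern, with a hypothetical `format_date` helper standing in for real project code:

```python
from datetime import date

def format_date(d: date) -> str:
    """Hypothetical helper standing in for the project's code under test."""
    return d.strftime("%B %d, %Y")

def test_format_date_standard():
    # Arrange: set up test data and preconditions
    d = date(2024, 6, 1)

    # Act: execute the code being tested
    result = format_date(d)

    # Assert: verify the results
    assert result == "June 01, 2024"
```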
Shared fixtures are available in:
- `tests/conftest.py` - Imports and re-exports all project fixtures
- `tests/fixtures/context.py` - Mock MCP `Context` fixture
- `tests/fixtures/weather.py` - Weather sample data and Open-Meteo-style response fixtures
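A sketch of how the mock context fixture might be shaped (the stubbed attribute names are assumptions about the MCP `Context` surface, not the project's actual fixture):

```python
import pytest
from unittest.mock import AsyncMock, MagicMock

def make_mock_context():
    """Build a mock MCP Context; the attributes stubbed here are assumptions."""
    ctx = MagicMock()
    ctx.info = AsyncMock()     # awaited logging calls succeed silently
    ctx.error = AsyncMock()
    ctx.elicit = AsyncMock()   # tests set .return_value to simulate user replies
    return ctx

@pytest.fixture
def mock_context():
    return make_mock_context()
```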
```python
import pytest

from mcp_server.models import ItineraryPreferences


@pytest.mark.unit
class TestItineraryPreferences:
    def test_valid_preferences(self):
        """Test creating valid itinerary preferences."""
        prefs = ItineraryPreferences(
            destination="Paris",
            start_date="2024-06-01",
            end_date="2024-06-07"
        )
        assert prefs.destination == "Paris"
        assert prefs.days == 7
```

```python
import pytest
from unittest.mock import AsyncMock

# get_weather is the weather resource handler; import it from its module
# alongside the imports above.


@pytest.mark.unit
class TestWeatherResource:
    @pytest.mark.asyncio
    async def test_get_weather_success(self, mock_context):
        """Test weather resource returns forecast data."""
        result = await get_weather(uri="weather://London/3", ctx=mock_context)
        assert "London" in result
```

- Unit Tests: Test individual components in isolation
  - Mock external dependencies
  - Fast execution
  - High coverage of edge cases
- Integration Tests: Test component interactions
  - Use real dependencies when possible
  - Test realistic scenarios
  - Verify end-to-end workflows
- Fast Feedback: Start with `uv run pytest tests/unit/ -x` for quick failures
- Debugging: Use `uv run pytest --pdb -x` to debug the first failure immediately
- Coverage: Use `./scripts/test.sh` for full coverage with 80% enforcement
- CI: Use `uv run pytest --maxfail=1 -v` to fail fast in continuous integration
- Development: Use `ptw` (pytest-watch) for continuous testing during coding
- Isolation: Keep unit tests isolated and fast
- Realistic: Make integration tests reflect real-world usage
- Fixtures: Reuse test fixtures to reduce duplication
- Markers: Use markers to organize and filter tests effectively
- Documentation: Add docstrings to explain what each test validates
```bash
# Fast unit tests with coverage
uv run pytest tests/unit/ --cov=src/mcp_server --cov-report=term-missing -v

# Integration tests with detailed output
uv run pytest tests/integration/ -vv -s

# All tests, stop on first failure, show coverage
uv run pytest --cov=src/mcp_server -x -v

# Parallel execution with coverage (faster)
uv run pytest -n auto --cov=src/mcp_server --cov-report=html

# Debug specific test with full context
uv run pytest tests/unit/test_helpers.py::TestFormatDate -vv -s --pdb

# Pre-commit: full suite with coverage threshold
./scripts/test.sh --maxfail=5

# CI-ready: comprehensive test with reports
uv run pytest --cov=src/mcp_server --cov-report=xml --junitxml=results.xml --cov-fail-under=80 -v --maxfail=1
```

**Import Errors**
```bash
# Ensure src is in PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)/src"
uv run pytest
```

**Async Tests Not Running**
- Ensure `pytest-asyncio` is installed
- Check `asyncio_mode = auto` in `pytest.ini`

**Coverage Not Working**
- Install `pytest-cov`: `uv pip install pytest-cov`
- Verify the source path: `--cov=src/mcp_server`
- Check that `.coveragerc` omit patterns aren't excluding your code

**Coverage Below 80%**
- The test script enforces `--cov-fail-under=80`
- Run `./scripts/test.sh` to see which lines are missing coverage
- Check the HTML report: `open htmlcov/index.html`

**Tests Running Slowly**
- Run unit tests only: `uv run pytest tests/unit/`
- Use parallel execution: `uv run pytest -n auto`
- Profile slow tests: `uv run pytest --durations=10`
Current test suite statistics:
- Total Tests: 103
- Unit Tests: 90 (12 test files)
- Integration Tests: 13 (2 test files)
- Fixture Modules: 2 (`context.py`, `weather.py`)
- Test Markers: 4 custom markers (`unit`, `integration`, `slow`, `skip_ci`)
- Coverage Threshold: 80% minimum
For CI/CD pipelines, use the test script or configure directly:
```yaml
# Example GitHub Actions configuration
- name: Run Tests
  run: |
    ./scripts/test.sh
```

Or with explicit options:

```yaml
- name: Run Tests
  run: |
    uv run pytest \
      --cov=src/mcp_server \
      --cov-report=xml \
      --cov-report=term \
      --cov-fail-under=80 \
      --cov-config=.coveragerc \
      --junitxml=test-results.xml \
      --maxfail=1 \
      -v
```

This will:
- Run all 103 tests with verbose output
- Generate coverage reports (XML for CI integration, terminal for logs)
- Enforce the 80% minimum coverage threshold
- Use `.coveragerc` to omit non-source files from coverage
- Create JUnit XML for test result reporting
- Fail fast on the first error