This document describes the testing framework for the MCP LogSeq server project.

The framework is built on pytest and provides comprehensive coverage for all components of the MCP LogSeq server. It includes unit tests, integration tests, and shared fixtures to ensure code quality and reliability.
The test suite is organized as follows:

```
tests/
├── conftest.py                # Shared fixtures and test configuration
├── unit/                      # Unit tests for individual components
│   ├── test_logseq_api.py     # Tests for LogSeq API client
│   └── test_tool_handlers.py  # Tests for MCP tool handlers
└── integration/               # Integration tests for system components
    └── test_mcp_server.py     # Tests for MCP server integration
```
The testing framework uses the following dependencies:
- pytest - Main testing framework
- pytest-mock - Mocking utilities for pytest
- responses - HTTP request mocking library
- pytest-asyncio - Async testing support
Run the tests with `uv`:

```bash
# Run all tests
uv run pytest

# Run with verbose output
uv run pytest -v

# Run with short traceback format
uv run pytest --tb=short
```

```bash
# Run only unit tests
uv run pytest tests/unit/

# Run only integration tests
uv run pytest tests/integration/

# Run specific test file
uv run pytest tests/unit/test_logseq_api.py

# Run specific test class
uv run pytest tests/unit/test_logseq_api.py::TestLogSeqAPI

# Run specific test method
uv run pytest tests/unit/test_logseq_api.py::TestLogSeqAPI::test_create_page_success
```

```bash
# Show detailed output for failed tests
uv run pytest -v --tb=long

# Show only test names and results
uv run pytest -q

# Stop after first failure
uv run pytest -x

# Show local variables in tracebacks
uv run pytest -l
```

Tests for the `LogSeq` class cover:
- Initialization: Default and custom parameters
- URL Generation: Base URL construction
- Authentication: Header generation
- Page Operations: Create, read, update, delete
- Search Functionality: Content search with options
- Error Handling: Network errors, API failures
- Edge Cases: Non-existent pages, empty responses
Key Test Methods:

- `test_create_page_success()` - Successful page creation
- `test_get_page_content_success()` - Page content retrieval
- `test_delete_page_not_found()` - Error handling for missing pages
- `test_search_content_with_options()` - Search with custom parameters
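As an illustration, a test like `test_create_page_success()` plausibly follows the HTTP-mocking pattern shown later in this document. The sketch below assumes a `LogSeq` client importable from `mcp_logseq.logseq` with a `create_page()` method and an `api_key` constructor argument — the import path and signatures are assumptions, not confirmed project APIs:

```python
import responses
from mcp_logseq.logseq import LogSeq  # import path is an assumption

class TestLogSeqAPI:
    @responses.activate
    def test_create_page_success(self):
        # Intercept the LogSeq HTTP API endpoint used throughout these tests.
        responses.add(
            responses.POST,
            "http://127.0.0.1:12315/api",
            json={"success": True},
            status=200,
        )
        client = LogSeq(api_key="test_key")       # constructor args assumed
        result = client.create_page("Test Page")  # method signature assumed
        assert result["success"] is True
```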
Tests for all MCP tool handler classes:
- Schema Validation: Tool description and input schema
- Successful Operations: Normal execution paths
- Error Scenarios: Missing arguments, API failures
- Input Validation: Required parameters, type checking
- Output Formatting: Text and JSON response formats
Covered Tool Handlers:

- `CreatePageToolHandler`
- `ListPagesToolHandler`
- `GetPageContentToolHandler`
- `DeletePageToolHandler`
- `UpdatePageToolHandler`
- `SearchToolHandler`
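A schema-validation test for one of these handlers might look like the following. The `get_tool_description()` accessor, the `create_page` tool name, and the import path are assumptions based on common MCP handler patterns, not confirmed project APIs:

```python
from mcp_logseq.tool_handlers import CreatePageToolHandler  # path assumed

class TestCreatePageToolHandler:
    def test_tool_description(self):
        handler = CreatePageToolHandler()
        tool = handler.get_tool_description()  # accessor name assumed
        # The tool must advertise a name and a JSON schema for its inputs.
        assert tool.name == "create_page"      # tool name assumed
        assert "properties" in tool.inputSchema
```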
Tests for the complete MCP server system:
- Tool Registration: Handler registration and discovery
- Tool Interface: Schema compliance and method availability
- End-to-End Execution: Complete tool execution flows
- Error Propagation: Exception handling across layers
- Custom Handlers: Dynamic tool handler addition
Key Integration Areas:
- Handler registration system
- Tool discovery and validation
- Cross-component communication
- Error handling consistency
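To make the registration tests concrete, here is a minimal stand-in model of a handler registry. This is an illustrative sketch only; the server's actual registration API may differ:

```python
class ToolRegistry:
    """Stand-in registry model; the real server's API may differ."""

    def __init__(self):
        self._handlers = {}

    def add_tool_handler(self, handler):
        # Registering a handler under an existing name replaces it,
        # which is what enables dynamic/custom handler addition.
        self._handlers[handler.name] = handler

    def get_tool_handler(self, name):
        return self._handlers.get(name)

def test_custom_handler_registration():
    class DummyHandler:
        name = "dummy_tool"

    registry = ToolRegistry()
    handler = DummyHandler()
    registry.add_tool_handler(handler)
    assert registry.get_tool_handler("dummy_tool") is handler
```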
The testing framework provides several shared fixtures:
- `mock_api_key` - Provides a test API key
- `logseq_client` - Pre-configured LogSeq client instance
- `tool_handlers` - Dictionary of all tool handler instances
- `mock_env_api_key` - Mocked environment variable
- `mock_logseq_responses` - Comprehensive mock API responses, including:
  - Successful page creation
  - Page listing with journal filtering
  - Page content with metadata and blocks
  - Search results with various content types
  - Error scenarios and edge cases
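A `conftest.py` along the following lines could provide these fixtures. The `LogSeq` import path, constructor arguments, and payload shapes are illustrative assumptions, not the project's actual mock data:

```python
import pytest
from mcp_logseq.logseq import LogSeq  # import path is an assumption

@pytest.fixture
def mock_api_key():
    """Provide a throwaway API key for tests."""
    return "test_api_key"

@pytest.fixture
def logseq_client(mock_api_key):
    """Pre-configured client instance; constructor args are assumed."""
    return LogSeq(api_key=mock_api_key)

@pytest.fixture
def mock_logseq_responses():
    """Illustrative mock payloads; the real fixture's shapes may differ."""
    return {
        "create_page_success": {"success": True, "page": {"name": "Test Page"}},
        "list_pages": [{"name": "Page A", "journal?": False}],
        "search_results": {"blocks": [{"content": "matching block"}]},
        "page_not_found": {"success": False, "error": "Page not found"},
    }
```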
The framework uses the `responses` library to mock HTTP requests:

```python
@responses.activate
def test_api_call(self, logseq_client):
    responses.add(
        responses.POST,
        "http://127.0.0.1:12315/api",
        json={"success": True},
        status=200
    )
    # Test implementation
```
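The `responses` library also records every intercepted request, which is useful when a test needs to verify what was actually sent. For example (using `requests` directly as a stand-in for the client under test):

```python
import requests
import responses

@responses.activate
def test_request_is_recorded():
    responses.add(
        responses.POST,
        "http://127.0.0.1:12315/api",
        json={"success": True},
        status=200,
    )
    requests.post("http://127.0.0.1:12315/api", json={"method": "example"})
    # Each intercepted call is recorded and can be inspected afterwards.
    assert len(responses.calls) == 1
    assert responses.calls[0].request.url == "http://127.0.0.1:12315/api"
```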
Environment variables are mocked using `patch.dict`:
```python
@patch.dict('os.environ', {'LOGSEQ_API_TOKEN': 'test_token'})
def test_with_env_var(self):
    # Test implementation
```

- Place unit tests in `tests/unit/`
- Place integration tests in `tests/integration/`
- Use descriptive test class and method names
- Group related tests in the same class:
```python
class TestComponentName:
    def test_method_name_success(self):
        """Test successful operation."""

    def test_method_name_failure_scenario(self):
        """Test specific failure case."""

    def test_method_name_edge_case(self):
        """Test edge case handling."""
```

- Isolation: Each test should be independent
- Mocking: Mock external dependencies (HTTP calls, file system)
- Assertions: Use specific, meaningful assertions
- Documentation: Include clear docstrings
- Coverage: Test both success and failure paths
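As a compact illustration of these practices, the sketch below uses pytest-mock's `mocker` fixture with a plain mock standing in for the real client; the `delete_page` name is borrowed from the test methods listed earlier, and the mock is not the project's actual client:

```python
import pytest

class TestDeletePageExamples:
    def test_delete_page_success(self, mocker):
        # Mock the external dependency so the test stays isolated.
        client = mocker.Mock()
        client.delete_page.return_value = {"success": True}
        assert client.delete_page("Old Page")["success"] is True
        client.delete_page.assert_called_once_with("Old Page")

    def test_delete_page_failure(self, mocker):
        """Failure path: the client raises on an API error."""
        client = mocker.Mock()
        client.delete_page.side_effect = RuntimeError("API failure")
        with pytest.raises(RuntimeError):
            client.delete_page("Old Page")
```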
Add new fixtures to `conftest.py`:

```python
@pytest.fixture
def new_fixture():
    """Description of what this fixture provides."""
    return fixture_data
```

The testing framework is designed to work well in CI environments:
- All tests are isolated and don't require external services
- HTTP requests are mocked to avoid network dependencies
- Tests run quickly (< 1 second for full suite)
- Clear error messages for debugging failures
- Import Errors: Ensure all dependencies are installed with `uv sync --dev`
- Mock Failures: Verify mock setup matches actual API calls
- Async Issues: Use `pytest.mark.asyncio` for async tests, as in the sketch below
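A minimal self-contained example of the marker; the coroutine here is a stand-in for real async server code:

```python
import asyncio
import pytest

async def fetch_value():
    # Stand-in for an async operation in the code under test.
    await asyncio.sleep(0)
    return 42

@pytest.mark.asyncio
async def test_fetch_value():
    # pytest-asyncio runs the coroutine on an event loop for us.
    assert await fetch_value() == 42
```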
For debugging test failures:

```bash
# Run with Python debugger on failure
uv run pytest --pdb

# Show local variables in tracebacks
uv run pytest -l

# Increase verbosity for debugging
uv run pytest -vvv
```

Breakpoints can also be set directly in test code:

```python
def test_debug_example(self):
    import pdb; pdb.set_trace()  # Breakpoint
    # Test code here
```

On the performance side:

- Fast Execution: Full test suite runs in < 1 second
- Parallel Execution: Tests can run in parallel with `pytest-xdist` (e.g. `uv run pytest -n auto`)
- Resource Usage: Minimal memory footprint with proper mocking
When modifying the codebase:
- Update corresponding tests
- Add tests for new functionality
- Ensure all tests pass before committing
- Update mock data if API responses change
Keep test dependencies up to date:

```bash
# Update development dependencies
uv sync --dev --upgrade
```

Current test coverage:
- Total Tests: 50
- Unit Tests: 35
- Integration Tests: 15
- Success Rate: 100%
- Execution Time: ~0.3 seconds