Thank you for your interest in contributing to AI SDK Python! This document provides guidelines and information for contributors.
- Getting Started
- Development Setup
- Project Structure
- Coding Standards
- Testing
- Documentation
- Submitting Changes
- Code Review Process
- Release Process
## Getting Started

### Prerequisites

- Python 3.12 or higher
- uv for package management
- ty for type checking
- Ruff for linting and formatting

### Quick Start

1. Fork and clone the repository:

   ```bash
   git clone https://github.com/your-username/ai-sdk.git
   cd ai-sdk
   ```

2. Set up the development environment:

   ```bash
   uv sync --dev
   ```

3. Install the pre-commit hooks:

   ```bash
   uv run pre-commit install
   ```

4. Run the tests to verify your setup:

   ```bash
   uv run pytest
   ```

## Development Setup

If you prefer to set up the environment step by step:

1. Create a virtual environment:

   ```bash
   uv venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

2. Install the dependencies:

   ```bash
   uv sync --dev
   ```

3. Set up your environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```
You'll need API keys for testing different providers:

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

## Project Structure

```
ai-sdk/
├── src/ai_sdk/                       # Main package source
│   ├── __init__.py                   # Package initialization
│   ├── embed.py                      # Embedding functionality
│   ├── generate_object.py            # Structured output generation
│   ├── generate_text.py              # Text generation
│   ├── providers/                    # Provider implementations
│   │   ├── anthropic.py              # Anthropic provider
│   │   ├── openai.py                 # OpenAI provider
│   │   ├── language_model.py         # Base language model
│   │   └── embedding_model.py        # Base embedding model
│   ├── tool.py                       # Tool calling functionality
│   └── types.py                      # Type definitions
├── tests/                            # Test suite
│   ├── test_ai_sdk.py                # Core functionality tests
│   ├── test_embed.py                 # Embedding tests
│   ├── test_generate_object_dummy.py # Object generation tests
│   └── test_tool_calling.py          # Tool calling tests
├── examples/                         # Usage examples
├── docs/                             # Documentation
└── pyproject.toml                    # Project configuration
```
## Coding Standards

We follow PEP 8 with some modifications:

- Line length: 88 characters (the Black default)
- Import sorting: use the `isort` configuration
- Type hints: required for all public functions
- Docstrings: use Google-style docstrings
We use Ruff for formatting and linting:

```bash
# Format code
uv run ruff format .

# Lint code
uv run ruff check .

# Fix auto-fixable issues
uv run ruff check --fix .
```

We use ty for type checking:

```bash
# Run the type checker
uv run ty check src/
```

We use pre-commit hooks to ensure code quality:

```bash
# Install the pre-commit hooks
uv run pre-commit install

# Run all hooks manually
uv run pre-commit run --all-files
```

## Testing

```bash
# Run all tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=src/ai_sdk

# Run a specific test file
uv run pytest tests/test_generate_text.py

# Run tests with verbose output
uv run pytest -v
```
When writing tests:

1. **Test structure**

   - Place tests in the `tests/` directory
   - Use descriptive test function names
   - Group related tests in classes

2. **Test examples**

   ```python
   import pytest

   from ai_sdk import generate_text, openai, stream_text


   class TestGenerateText:
       def test_basic_text_generation(self):
           model = openai("gpt-4o-mini")
           result = generate_text(model=model, prompt="Hello")
           assert result.text is not None
           assert len(result.text) > 0

       @pytest.mark.asyncio
       async def test_streaming_text(self):
           model = openai("gpt-4o-mini")
           stream = stream_text(model=model, prompt="Hello")
           async for chunk in stream.text_stream:
               assert chunk is not None
   ```

3. **Mocking external APIs**

   ```python
   from unittest.mock import patch


   @patch("ai_sdk.providers.openai.OpenAIClient")
   def test_openai_integration(mock_client):
       # Mock the OpenAI client (MockResponse is your test double)
       mock_client.return_value.chat.completions.create.return_value = MockResponse()
       # Exercise your code against the mocked client
   ```
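Where the guidelines call for covering edge cases and invalid inputs, `pytest.mark.parametrize` keeps them concise. A generic sketch (the `normalize` helper here is a hypothetical function under test, not part of the SDK):

```python
import pytest


def normalize(text: str) -> str:
    # Hypothetical helper under test: collapse whitespace and lowercase
    return " ".join(text.split()).lower()


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello  World", "hello world"),  # internal whitespace collapsed
        ("  spaced  ", "spaced"),         # leading/trailing whitespace stripped
        ("", ""),                         # empty input is a valid edge case
    ],
)
def test_normalize(raw: str, expected: str) -> None:
    assert normalize(raw) == expected
```

Each tuple becomes its own test case in the report, so a single failing edge case is easy to spot.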
We aim for high test coverage:

```bash
# Generate a coverage report
uv run pytest --cov=src/ai_sdk --cov-report=html

# View the coverage report
open htmlcov/index.html
```
## Documentation

1. **Docstrings**

   - Use Google-style docstrings
   - Include type hints
   - Provide usage examples

   ```python
   def generate_text(
       model: LanguageModel,
       prompt: str,
       system: Optional[str] = None,
       **kwargs,
   ) -> TextGenerationResult:
       """Generate text using the specified language model.

       Args:
           model: The language model to use for generation.
           prompt: The input prompt for text generation.
           system: Optional system message to set context.
           **kwargs: Additional parameters to pass to the model.

       Returns:
           TextGenerationResult: The generated text and metadata.

       Raises:
           ValueError: If the model is not properly configured.
           APIError: If the API request fails.

       Example:
           >>> from ai_sdk import generate_text, openai
           >>> model = openai("gpt-4o-mini")
           >>> result = generate_text(model=model, prompt="Hello, world!")
           >>> print(result.text)
           Hello, world!
       """
   ```

2. **Type hints**

   - Use type hints for all public functions
   - Import types from the `typing` module
   - Use `Optional` for nullable parameters
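As a minimal sketch of `Optional` for a nullable parameter (the `summarize` helper is hypothetical, not part of the SDK):

```python
from typing import Optional


def summarize(text: str, max_words: Optional[int] = None) -> str:
    """Return ``text`` truncated to ``max_words`` words.

    Args:
        text: The text to summarize.
        max_words: Optional word limit; ``None`` means no truncation.

    Returns:
        The (possibly truncated) text.
    """
    words = text.split()
    if max_words is not None:
        words = words[:max_words]
    return " ".join(words)
```

Annotating `max_words` as `Optional[int]` tells both readers and the type checker that `None` is an accepted value, not an accident.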
The documentation is built with Mintlify:

```
docs/
├── index.mdx                 # Homepage
├── sdk/                      # SDK documentation
│   ├── introduction.mdx      # Getting started
│   ├── concepts.mdx          # Core concepts
│   ├── generate_text.mdx     # Text generation
│   ├── generate_object.mdx   # Object generation
│   ├── embed.mdx             # Embeddings
│   ├── tool.mdx              # Tool calling
│   └── providers/            # Provider-specific docs
│       ├── openai.mdx        # OpenAI provider
│       └── anthropic.mdx     # Anthropic provider
└── examples/                 # Example documentation
    ├── basic-text.mdx        # Basic text generation
    ├── streaming.mdx         # Streaming examples
    └── structured-output.mdx # Structured output
```
When writing documentation:

1. **Structure**

   - Use clear, descriptive titles
   - Include code examples
   - Provide step-by-step guides

2. **Code examples**

   ```python
   from ai_sdk import generate_text, openai

   model = openai("gpt-4o-mini")
   result = generate_text(model=model, prompt="Hello, world!")
   print(result.text)
   ```

3. **Components**

   - Use Mintlify components for better UX
   - Include tips, warnings, and notes
   - Add interactive code snippets
When adding new tools to the SDK, follow these guidelines:

1. **Use Pydantic models (recommended)**

   - Define parameter schemas using Pydantic models
   - Include field descriptions and validation constraints
   - Provide clear, descriptive field names

2. **Tool structure**

   ```python
   from pydantic import BaseModel, Field

   from ai_sdk import tool


   class MyToolParams(BaseModel):
       input: str = Field(description="Input parameter")
       option: bool = Field(default=False, description="Optional flag")


   @tool(
       name="my_tool",
       description="Clear description of what the tool does",
       parameters=MyToolParams,
   )
   def my_tool(input: str, option: bool = False) -> str:
       # Tool implementation
       return f"Processed: {input}"
   ```

3. **Validation and error handling**

   - Use Pydantic validation constraints (e.g., `ge`, `le`, `min_length`)
   - Handle errors gracefully with meaningful messages
   - Test edge cases and invalid inputs

4. **Testing tools**

   - Test both valid and invalid inputs
   - Verify Pydantic model validation
   - Test tool execution and return values
   - Mock external dependencies

5. **Documentation**

   - Update the tool documentation with examples
   - Include parameter descriptions
   - Show both the Pydantic and JSON schema approaches
- Clear Descriptions: Provide descriptive field and tool descriptions
- Type Safety: Use Pydantic models for automatic validation
- Error Handling: Gracefully handle validation and runtime errors
- Testing: Comprehensive test coverage for all tool functionality
- Documentation: Clear examples and usage patterns
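The validation guidance above can be sketched with a small parameter model (the `SearchParams` model is hypothetical, for illustration only):

```python
from pydantic import BaseModel, Field, ValidationError


class SearchParams(BaseModel):
    """Hypothetical tool parameter schema with validation constraints."""

    query: str = Field(description="Search query", min_length=1)
    limit: int = Field(default=10, ge=1, le=100, description="Maximum results")


try:
    # Empty query violates min_length; 500 violates the le=100 bound
    SearchParams(query="", limit=500)
except ValidationError as exc:
    # Pydantic reports every violated constraint with its field name
    for error in exc.errors():
        print(error["loc"], error["msg"])
```

Because the constraints live on the schema, invalid tool calls are rejected with field-level messages before your tool function ever runs.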
## Submitting Changes

1. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes:

   - Follow the coding standards
   - Add tests for new functionality
   - Update the documentation

3. Run the quality checks:

   ```bash
   uv run ruff format .
   uv run ruff check .
   uv run ty check src/
   uv run pytest
   ```

4. Commit your changes:

   ```bash
   git add .
   git commit -m "feat: add new feature description"
   ```

5. Push and create a pull request:

   ```bash
   git push origin feature/your-feature-name
   ```
We follow Conventional Commits:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

Types:

- `feat`: new feature
- `fix`: bug fix
- `docs`: documentation changes
- `style`: code style changes
- `refactor`: code refactoring
- `test`: test changes
- `chore`: maintenance tasks

Examples:

```
feat: add support for Claude 3.5 Sonnet
fix: resolve token counting issue in streaming
docs: update OpenAI provider documentation
test: add comprehensive embedding tests
```
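The commit-header format above can be checked with a small regex. This is an illustrative sketch, not a hook the project ships:

```python
import re

# Matches: type, optional (scope), optional "!" breaking-change marker, description
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"
    r"(\([\w-]+\))?!?: .+"
)

assert COMMIT_RE.match("feat: add support for Claude 3.5 Sonnet")
assert COMMIT_RE.match("fix(streaming): resolve token counting issue")
assert not COMMIT_RE.match("added stuff")  # missing type prefix
```

A check like this could run in CI or a `commit-msg` hook to catch malformed headers early.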
- Title: Clear, descriptive title
- Description: Explain what and why, not how
- Checklist: Include a checklist of completed tasks
- Tests: Ensure all tests pass
- Documentation: Update relevant documentation
Example PR description:

```markdown
## Description

Adds support for the Claude 3.5 Sonnet model in the Anthropic provider.

## Changes

- Add `claude-3.5-sonnet` model identifier
- Update the model documentation with pricing info
- Add tests for the new model

## Checklist

- [x] Code follows project style guidelines
- [x] Tests added for new functionality
- [x] Documentation updated
- [x] All tests pass
- [x] Type checking passes

## Related Issues

Closes #123
```
## Code Review Process

Reviewers look at:

1. **Code quality**

   - Follows project standards
   - Proper error handling
   - Good test coverage

2. **Documentation**

   - Clear docstrings
   - Updated README/docs
   - Good commit messages

3. **Testing**

   - Tests for new functionality
   - No breaking changes
   - All tests pass
- Code follows style guidelines
- Tests are comprehensive
- Documentation is updated
- No breaking changes
- Performance impact considered
- Security implications reviewed
## Release Process

We use Semantic Versioning:
- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes (backward compatible)
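The three bump rules above can be expressed as a tiny helper (a hypothetical sketch, not part of the release tooling):

```python
def bump(version: str, change: str) -> str:
    """Return the next SemVer version string for a given change type."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":   # breaking changes
        return f"{major + 1}.0.0"
    if change == "minor":   # new, backward-compatible features
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # backward-compatible bug fixes
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"Unknown change type: {change}")
```

Note that minor bumps reset the patch number and major bumps reset both, e.g. `bump("1.2.3", "minor")` yields `1.3.0`.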
1. Update the version:

   - Bump the version in `pyproject.toml`
   - Update `CHANGELOG.md`

2. Create a release branch:

   ```bash
   git checkout -b release/v1.2.0
   ```

3. Run the release checks:

   ```bash
   uv run pytest
   uv run ruff check .
   uv run ty check src/
   ```

4. Build and publish:

   ```bash
   uv build
   uv publish
   ```

5. Create a GitHub release:

   - Tag the release
   - Add release notes
   - Attach the built artifacts
- Issues: Use GitHub issues for bugs and feature requests
- Discussions: Use GitHub discussions for questions
- Discord: Join our community Discord server
We are committed to providing a welcoming and inclusive environment for all contributors. Please read our Code of Conduct for details.
By contributing to AI SDK Python, you agree that your contributions will be licensed under the MIT License.
Thank you for contributing to AI SDK Python! 🚀