# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/), and this project adheres to [Semantic Versioning](https://semver.org/).

## [Unreleased]

### Added
- Response caching functionality to reduce API costs and improve performance
- Rate limiting using token bucket algorithm
- Configurable timeout support for all LLM clients
- Custom prompt template support via `PromptTemplate` class
- `CachedGPTClient` wrapper for adding caching to any client
- `RateLimitedGPTClient` wrapper for rate limiting
- `ResponseCache` class for managing cached responses
- `RateLimiter` class for token bucket rate limiting
- Version info (`__version__`) in the main package
- Comprehensive README.md with examples for all providers
- CONTRIBUTING.md with development guidelines
- SECURITY.md with API key handling best practices
- CHANGELOG.md for tracking project changes
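The token bucket algorithm behind the rate limiting above can be sketched as follows. This is an illustrative standalone example; `TokenBucket` here is a hypothetical class, and the library's actual `RateLimiter` API may differ:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch (illustrative only;
    not the library's RateLimiter, whose API may differ)."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity        # max tokens the bucket holds
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity          # start full
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        # Add tokens proportional to elapsed time, capped at capacity.
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now

    def try_acquire(self, tokens: float = 1.0) -> bool:
        """Take tokens if available; return False instead of blocking."""
        self._refill()
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False
```

A burst drains the bucket quickly, after which requests are admitted only at the steady refill rate — which is why the technique suits API cost control.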
### Changed

- Updated default OpenAI model from `gpt-4` to `gpt-4o`
- Updated default Anthropic model to `claude-3-5-sonnet-20241022`
- Updated default Ollama model to `llama3.2`
- Fixed return type bugs in `clear_gpt_rules()` and `add_gpt_rule()` methods
- Fixed lambda closure bug in quorum.py that could cause issues during retries
- Improved quorum logic to require majority (>50%) instead of just count > 1
- Enhanced error messages throughout the codebase for better debugging
- Improved docstrings for all client classes with parameter descriptions
- Temperature parameter type changed from `int` to `float` for consistency
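The strict-majority quorum rule above (winner needs >50% of votes, not merely more than one vote) can be sketched as follows; `quorum_winner` is a hypothetical helper for illustration, not the library's actual quorum implementation:

```python
from collections import Counter
from typing import List, Optional

def quorum_winner(responses: List[str]) -> Optional[str]:
    """Return the response holding a strict majority (>50% of votes),
    or None when no majority exists. Illustrative sketch only."""
    if not responses:
        return None
    votes = Counter(responses)
    candidate, count = votes.most_common(1)[0]
    # Strict majority: more than half of all votes,
    # not just "count > 1" (which a 2-2 tie would satisfy).
    if count > len(responses) / 2:
        return candidate
    return None
```

Under the old `count > 1` rule, two agreeing providers out of five would have won; the majority rule rejects that as inconclusive.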
### Fixed

- Typo in openai.py: "Not content available" → "No content available"
- Typo in clients/models.py: "recieve" → "receive"
- Duplicate imports in ollama.py
- Improper error handling in OllamaGPTClient.generate_plan()
- Lambda closure issues in MultiProviderGPTClient
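The lambda closure issues noted above are the classic Python late-binding pitfall: a lambda created in a loop captures the loop *variable*, not its value at creation time. A minimal standalone illustration (not the actual quorum.py or MultiProviderGPTClient code):

```python
clients = ["openai", "anthropic", "ollama"]

# Buggy: every lambda closes over the same variable `client`, so once
# the loop finishes they all see its final value ("ollama").
buggy = [lambda: client for client in clients]

# Fixed: a default argument binds the *current* value at creation time,
# so each lambda keeps the client it was built for, even when called
# later (e.g. during retries).
fixed = [lambda client=client: client for client in clients]
```

The same fix applies wherever callables are built in a loop and invoked after the loop has advanced, such as deferred retry callbacks.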
### Improved

- Error messages now include more context and actionable information
- Quorum mode now provides detailed vote distribution in error messages
- Metadata consistency across all provider clients
- Logging throughout the codebase with appropriate log levels
- pyproject.toml with proper classifiers, keywords, and [all] extras group
- Dependency version constraints for better stability
## [0.1.0] - 2024-XX-XX

### Added
- Initial release of hier-config-gpt
- Support for OpenAI GPT models via `ChatGPTClient`
- Support for Anthropic Claude models via `ClaudeGPTClient`
- Support for Ollama self-hosted models via `OllamaGPTClient`
- Multi-provider quorum mode via `MultiProviderGPTClient`
- `GPTWorkflowRemediation` class for LLM-based remediation
- `GPTRemediationRule` for defining custom remediation rules
- `GPTRemediationContext` for passing context to LLMs
- Retry logic with exponential backoff
- Comprehensive test suite using pytest
- Documentation with mkdocs
- Apache 2.0 license
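The retry logic with exponential backoff listed above can be sketched as follows. This is an illustrative helper under assumed names (`retry_with_backoff`, `base_delay`), not the package's actual API:

```python
import time

def retry_with_backoff(fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on exception with exponentially growing
    delays: base_delay, 2*base_delay, 4*base_delay, ...
    Illustrative sketch only; the library's retry helper may differ."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

Doubling the delay between attempts gives a transient provider outage time to clear instead of hammering the API at a fixed interval.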