feat: get plan bulk with cache #10

tomerqodo wants to merge 4 commits into copilot_full_base_feat_get_plan_bulk_with_cache_pr10 from
Conversation
Pull request overview
This PR introduces a Redis-backed caching layer for bulk subscription plan retrieval in BillingService, along with improved error handling and expanded test coverage. It adds both unit and container-based integration tests to validate the new behavior and edge cases.
Changes:
- Enhance `BillingService.get_plan_bulk` to add per-tenant validation error logging and adjust batch error log messages.
- Add `BillingService.get_plan_bulk_with_cache`, using Redis (mget/pipeline) to cache per-tenant subscription plans with a configurable TTL.
- Add comprehensive unit and integration tests for invalid plan handling and the new caching behavior (cache hit/miss, invalid cache entries, Redis failures, TTL expiration).
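The cache-aside flow described above (bulk `mget`, batch-fetch the misses, then write back with a pipeline and per-key TTL) can be sketched roughly as follows. This is an illustrative standalone sketch, not the PR's actual code: the key format, the TTL constant, and the in-memory `FakeRedis` stub are assumptions made so the example runs without a Redis server.

```python
import json

PLAN_CACHE_TTL = 300  # seconds; the PR uses a configurable _PLAN_CACHE_TTL


class FakeRedis:
    """Minimal in-memory stand-in for the redis client (illustrative only)."""

    def __init__(self):
        self.store = {}

    def mget(self, keys):
        return [self.store.get(k) for k in keys]

    def pipeline(self):
        return self

    def setex(self, key, ttl, value):
        self.store[key] = value  # TTL is ignored in this stub

    def execute(self):
        pass


redis_client = FakeRedis()


def make_plan_cache_key(tenant_id: str) -> str:
    # Hypothetical key format; the PR's _make_plan_cache_key may differ.
    return f"billing:plan:{tenant_id}"


def get_plan_bulk_with_cache(tenant_ids, fetch_bulk):
    # Step 1: bulk read from the cache with a single MGET
    keys = [make_plan_cache_key(t) for t in tenant_ids]
    cached = redis_client.mget(keys)
    plans, misses = {}, []
    for tenant_id, raw in zip(tenant_ids, cached):
        if raw is None:
            misses.append(tenant_id)
            continue
        try:
            plans[tenant_id] = json.loads(raw)
        except (TypeError, ValueError):
            misses.append(tenant_id)  # treat unparseable cache entries as misses

    # Step 2: fetch all misses from the billing API in one batch call
    fetched = fetch_bulk(misses) if misses else {}
    plans.update(fetched)

    # Step 3: write newly fetched plans back using a pipeline with a TTL
    if fetched:
        pipe = redis_client.pipeline()
        for tenant_id, plan in fetched.items():
            pipe.setex(make_plan_cache_key(tenant_id), PLAN_CACHE_TTL, json.dumps(plan))
        pipe.execute()
    return plans
```

A second call for the same tenants is then served entirely from the cache, which is what the integration tests' cache-hit cases exercise.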
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| api/services/billing_service.py | Adds Redis-backed caching for bulk plan retrieval, per-tenant validation error handling, and small SubscriptionPlan/type utilities. |
| api/tests/unit_tests/services/test_billing_service.py | Extends unit tests for get_plan_bulk to assert invalid tenant plans are logged and skipped. |
| api/tests/test_containers_integration_tests/services/test_billing_service.py | Introduces container-based integration tests exercising get_plan_bulk_with_cache across cache hit/miss, invalid cache data, Redis failures, and TTL expiry. |
```python
logger.exception(
    "get_plan_bulk: failed to validate subscription plan for tenant(%s)", tenant_id
)
results[tenant_id] = None
```
When subscription_adapter.validate_python(plan) fails, this block currently adds results[tenant_id] = None, which contradicts the return type dict[str, SubscriptionPlan] and the new test expectation that invalid tenant plans are skipped entirely. This means callers (and get_plan_bulk_with_cache) can receive entries with None values and potentially cache them, instead of omitting invalid tenants. To align with the documented return shape and tests, the invalid tenant ID should be skipped (not added to results) when validation fails, and only successfully validated plans should be included.
Suggested change:

```diff
-results[tenant_id] = None
+# Skip adding this tenant to results if validation fails
```
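The skip-on-failure pattern the comment asks for can be sketched as below. This is a hedged, self-contained illustration: `validate_plan` stands in for `subscription_adapter.validate_python`, and the surrounding method body is simplified.

```python
import logging

logger = logging.getLogger(__name__)


def validate_plan(plan):
    # Stand-in for subscription_adapter.validate_python (illustrative only)
    if not isinstance(plan, dict) or "plan" not in plan:
        raise ValueError("invalid plan payload")
    return plan


def collect_valid_plans(raw_plans):
    """Return only successfully validated plans, omitting invalid tenants."""
    results = {}
    for tenant_id, plan in raw_plans.items():
        try:
            results[tenant_id] = validate_plan(plan)
        except ValueError:
            logger.exception(
                "get_plan_bulk: failed to validate subscription plan for tenant(%s)",
                tenant_id,
            )
            # Skip this tenant entirely so results stays dict[str, SubscriptionPlan]
            continue
    return results
```

With this shape, callers (including `get_plan_bulk_with_cache`) never see or cache `None` entries.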
```python
plans_to_cache: dict[str, SubscriptionPlan] = {}

for tenant_id, subscription_plan in bulk_plans.items():
    tenant_plans[tenant_id] = subscription_plan
    plans_to_cache[tenant_id] = subscription_plan

# Step 3: Batch update Redis cache using pipeline
if plans_to_cache:
    try:
        pipe = redis_client.pipeline()
        for tenant_id, subscription_plan in bulk_plans.items():
            redis_key = cls._make_plan_cache_key(tenant_id)
            # Serialize dict to JSON string
            json_str = json.dumps(subscription_plan)
            pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
        pipe.execute()

        logger.info(
            "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
            len(bulk_plans),
        )
    except Exception:
        logger.exception("get_plan_bulk_with_cache: redis pipeline failed")
```
plans_to_cache is populated but never actually used when writing to Redis; the pipeline loop iterates over bulk_plans.items() instead. This extra dict and assignment add noise and can easily drift from the actual cached set if the logic evolves. Consider either removing plans_to_cache entirely and using bulk_plans everywhere, or switching the pipeline loop to iterate over plans_to_cache.items() so the variable has a clear purpose.
Suggested change:

```diff
-plans_to_cache: dict[str, SubscriptionPlan] = {}
 for tenant_id, subscription_plan in bulk_plans.items():
     tenant_plans[tenant_id] = subscription_plan
-    plans_to_cache[tenant_id] = subscription_plan
 # Step 3: Batch update Redis cache using pipeline
-if plans_to_cache:
-    try:
-        pipe = redis_client.pipeline()
-        for tenant_id, subscription_plan in bulk_plans.items():
-            redis_key = cls._make_plan_cache_key(tenant_id)
-            # Serialize dict to JSON string
-            json_str = json.dumps(subscription_plan)
-            pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
-        pipe.execute()
-        logger.info(
-            "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
-            len(bulk_plans),
-        )
-    except Exception:
-        logger.exception("get_plan_bulk_with_cache: redis pipeline failed")
+try:
+    pipe = redis_client.pipeline()
+    for tenant_id, subscription_plan in bulk_plans.items():
+        redis_key = cls._make_plan_cache_key(tenant_id)
+        # Serialize dict to JSON string
+        json_str = json.dumps(subscription_plan)
+        pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
+    pipe.execute()
+    logger.info(
+        "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
+        len(bulk_plans),
+    )
+except Exception:
+    logger.exception("get_plan_bulk_with_cache: redis pipeline failed")
```
| """ | ||
| Bulk fetch billing subscription plan with cache to reduce billing API loads in batch job scenarios. | ||
|
|
||
| NOTE: if you want to high data consistency, use get_plan_bulk instead. |
The note in this docstring reads `if you want to high data consistency`, which is ungrammatical and slightly confusing. Consider rephrasing to something like `if you want high data consistency, use get_plan_bulk instead` to make the intent clearer.
Suggested change:

```diff
-NOTE: if you want to high data consistency, use get_plan_bulk instead.
+NOTE: if you want high data consistency, use get_plan_bulk instead.
```
Benchmark PR from agentic-review-benchmarks#10