feat: get plan bulk with cache#10

Open
tomerqodo wants to merge 4 commits into copilot_full_base_feat_get_plan_bulk_with_cache_pr10 from copilot_full_head_feat_get_plan_bulk_with_cache_pr10

Conversation

@tomerqodo

Benchmark PR from agentic-review-benchmarks#10

Copilot AI review requested due to automatic review settings (January 25, 2026 22:46)

Copilot AI left a comment

Pull request overview

This PR introduces a Redis-backed caching layer for bulk subscription plan retrieval in BillingService, along with improved error handling and expanded test coverage. It adds both unit and container-based integration tests to validate the new behavior and edge cases.

Changes:

  • Enhance BillingService.get_plan_bulk with per-tenant logging of validation errors and adjusted batch error log messages.
  • Add BillingService.get_plan_bulk_with_cache, using Redis (mget/pipeline) to cache per-tenant subscription plans with a configurable TTL.
  • Add comprehensive unit and integration tests for invalid plan handling and the new caching behavior (cache hit/miss, invalid cache entries, Redis failures, TTL expiration).

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

Files reviewed:

  • api/services/billing_service.py: Adds Redis-backed caching for bulk plan retrieval, per-tenant validation error handling, and small SubscriptionPlan/type utilities.
  • api/tests/unit_tests/services/test_billing_service.py: Extends unit tests for get_plan_bulk to assert invalid tenant plans are logged and skipped.
  • api/tests/test_containers_integration_tests/services/test_billing_service.py: Introduces container-based integration tests exercising get_plan_bulk_with_cache across cache hit/miss, invalid cache data, Redis failures, and TTL expiry.
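The TTL-expiry case those integration tests cover can be illustrated without a container. The sketch below is not the PR's actual test code; it uses a hypothetical in-memory Redis stand-in with real TTL semantics and a manually advanced clock, just to show the behavior being asserted (an entry written with setex is readable before its TTL and gone after it).

```python
import json
import time

class TTLFakeRedis:
    """In-memory Redis stand-in with TTL semantics, so TTL expiry can be
    exercised without a Redis container or real sleeps."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl, value):
        self.store[key] = (value, self.clock() + ttl)

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if self.clock() >= expires_at:
            del self.store[key]  # lazily expire, like Redis would
            return None
        return value

# A fake clock we advance manually instead of sleeping in the test.
now = [0.0]
r = TTLFakeRedis(clock=lambda: now[0])

r.setex("billing:plan:t1", 300, json.dumps({"plan": "pro"}))
assert r.get("billing:plan:t1") is not None  # cache hit before the TTL
now[0] += 301
assert r.get("billing:plan:t1") is None      # entry expired after the TTL
```

In the PR itself the same assertions run against a real Redis container, which also validates serialization and connection-failure handling that a stand-in cannot.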


logger.exception(
    "get_plan_bulk: failed to validate subscription plan for tenant(%s)", tenant_id
)
results[tenant_id] = None

Copilot AI Jan 25, 2026


When subscription_adapter.validate_python(plan) fails, this block currently adds results[tenant_id] = None, which contradicts the return type dict[str, SubscriptionPlan] and the new test expectation that invalid tenant plans are skipped entirely. This means callers (and get_plan_bulk_with_cache) can receive entries with None values and potentially cache them, instead of omitting invalid tenants. To align with the documented return shape and tests, the invalid tenant ID should be skipped (not added to results) when validation fails, and only successfully validated plans should be included.

Suggested change
results[tenant_id] = None
# Skip adding this tenant to results if validation fails
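The fix this comment suggests, omitting the tenant rather than storing None, can be sketched as below. The `validate_plan` helper is a hypothetical stand-in for `subscription_adapter.validate_python`, and the payload shapes are invented for illustration; only the skip-on-failure pattern reflects the suggestion.

```python
import logging

logger = logging.getLogger(__name__)

def validate_plan(plan):
    # Hypothetical stand-in for subscription_adapter.validate_python.
    if not isinstance(plan, dict) or "plan" not in plan:
        raise ValueError("invalid subscription plan payload")
    return plan

def collect_valid_plans(raw_plans):
    results = {}
    for tenant_id, plan in raw_plans.items():
        try:
            results[tenant_id] = validate_plan(plan)
        except ValueError:
            logger.exception(
                "get_plan_bulk: failed to validate subscription plan for tenant(%s)",
                tenant_id,
            )
            # Skip the tenant entirely: storing None would violate the
            # dict[str, SubscriptionPlan] return type and could let an
            # invalid entry be cached downstream.
            continue
    return results

valid = collect_valid_plans({"t1": {"plan": "pro"}, "t2": {}})
```

With this shape, callers and get_plan_bulk_with_cache only ever see successfully validated plans, matching the new unit-test expectation.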

Comment on lines +363 to +385
plans_to_cache: dict[str, SubscriptionPlan] = {}

for tenant_id, subscription_plan in bulk_plans.items():
    tenant_plans[tenant_id] = subscription_plan
    plans_to_cache[tenant_id] = subscription_plan

# Step 3: Batch update Redis cache using pipeline
if plans_to_cache:
    try:
        pipe = redis_client.pipeline()
        for tenant_id, subscription_plan in bulk_plans.items():
            redis_key = cls._make_plan_cache_key(tenant_id)
            # Serialize dict to JSON string
            json_str = json.dumps(subscription_plan)
            pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
        pipe.execute()

        logger.info(
            "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
            len(bulk_plans),
        )
    except Exception:
        logger.exception("get_plan_bulk_with_cache: redis pipeline failed")

Copilot AI Jan 25, 2026


plans_to_cache is populated but never actually used when writing to Redis; the pipeline loop iterates over bulk_plans.items() instead. This extra dict and assignment add noise and can easily drift from the actual cached set if the logic evolves. Consider either removing plans_to_cache entirely and using bulk_plans everywhere, or switching the pipeline loop to iterate over plans_to_cache.items() so the variable has a clear purpose.

Suggested change

Removed:

plans_to_cache: dict[str, SubscriptionPlan] = {}

for tenant_id, subscription_plan in bulk_plans.items():
    tenant_plans[tenant_id] = subscription_plan
    plans_to_cache[tenant_id] = subscription_plan

# Step 3: Batch update Redis cache using pipeline
if plans_to_cache:
    try:
        pipe = redis_client.pipeline()
        for tenant_id, subscription_plan in bulk_plans.items():
            redis_key = cls._make_plan_cache_key(tenant_id)
            # Serialize dict to JSON string
            json_str = json.dumps(subscription_plan)
            pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
        pipe.execute()

        logger.info(
            "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
            len(bulk_plans),
        )
    except Exception:
        logger.exception("get_plan_bulk_with_cache: redis pipeline failed")

Replacement:

for tenant_id, subscription_plan in bulk_plans.items():
    tenant_plans[tenant_id] = subscription_plan

# Step 3: Batch update Redis cache using pipeline
try:
    pipe = redis_client.pipeline()
    for tenant_id, subscription_plan in bulk_plans.items():
        redis_key = cls._make_plan_cache_key(tenant_id)
        # Serialize dict to JSON string
        json_str = json.dumps(subscription_plan)
        pipe.setex(redis_key, cls._PLAN_CACHE_TTL, json_str)
    pipe.execute()

    logger.info(
        "get_plan_bulk_with_cache: cached %s new tenant plans to Redis",
        len(bulk_plans),
    )
except Exception:
    logger.exception("get_plan_bulk_with_cache: redis pipeline failed")

"""
Bulk fetch billing subscription plan with cache to reduce billing API loads in batch job scenarios.

NOTE: if you want to high data consistency, use get_plan_bulk instead.

Copilot AI Jan 25, 2026


The note in this docstring reads if you want to high data consistency, which is ungrammatical and slightly confusing. Consider rephrasing to something like if you want high data consistency, use get_plan_bulk instead to make the intent clearer.

Suggested change
NOTE: if you want to high data consistency, use get_plan_bulk instead.
NOTE: if you want high data consistency, use get_plan_bulk instead.
