perf: replace per-object async delete with SQL cascade walker#14566

Draft
valentijnscholten wants to merge 16 commits into DefectDojo:bugfix from valentijnscholten:optimize-delete-auto-cascade

Conversation


@valentijnscholten valentijnscholten commented Mar 21, 2026

Summary

Replaces the per-object chunked deletion strategy with a single-pass SQL cascade walker (cascade_delete), eliminating the need to fan out deletion work across multiple Celery workers. The walker auto-detects relationships at runtime and respects the SET_NULL, DO_NOTHING, or CASCADE_DELETE setting of each relationship.

The old delete code had many inefficiencies, especially around duplicates:

  • When a product was deleted, the deletion was split up per engagement. A finding B in engagement B that was a duplicate of finding A in engagement A would be recalculated to be a duplicate of finding C in engagement C. But then engagement C would also be deleted, so finding B had to be recalculated again, and so on.
  • Django's ORM Collector was used for most operations related to finding models.
  • Every finding was handled on its own, resulting in N+1 query problems.
  • Updates were done using finding.save() where this was not needed.

Key architectural changes:

  • SQL cascade walker replaces Django's ORM Collector for bulk deletion. Walks _meta.related_objects and issues bottom-up DELETE statements directly, bypassing per-object signals and Python-level overhead.
  • Findings deleted separately from the parent object tree. M2M through tables and self-referential FKs (duplicate_finding) are handled explicitly before cascade, then the parent object's remaining children (Tests, Engagements, Endpoints) are cleaned up in a single cascade_delete call.
  • Duplicate cluster preparation now scoped to the full deletion target (product/product_type) instead of per-engagement iteration, avoiding unnecessary reconfiguration of findings that are about to be deleted.
  • Outside-scope duplicates deleted before main scope to prevent FK violations during chunked finding deletion.
  • reconfigure_duplicate_cluster uses .update() instead of Finding.save(), avoiding Django signal storms (prioritization, dedup, audit) during deletion.
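The bottom-up walk over `_meta.related_objects` can be illustrated with a small self-contained sketch. Plain Python stands in for Django here; the `CHILDREN` map and function name are hypothetical stand-ins, not the PR's actual code:

```python
# Illustrative sketch: the walker discovers reverse FK relations
# (Django exposes these via model._meta.related_objects) and deletes
# bottom-up. A toy adjacency map stands in for the model graph.
CHILDREN = {
    "Product": ["Engagement", "Endpoint"],
    "Engagement": ["Test"],
    "Test": ["Finding"],
    "Endpoint": [],
    "Finding": [],
}

def cascade_delete_order(root, children=CHILDREN):
    """Return models in deletion order: deepest children first, root last."""
    order, seen = [], set()

    def walk(model):
        if model in seen:
            return
        seen.add(model)
        for child in children[model]:   # stands in for _meta.related_objects
            walk(child)
        order.append(model)             # post-order => bottom-up DELETEs

    walk(root)
    return order

print(cascade_delete_order("Product"))
```

Because children are always emitted before their parent, issuing one bulk `DELETE` per model in this order never hits an FK violation, and no per-object signal handling is needed.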

Performance (3 × 1373 findings, 3 engagements, many duplicates, 1 product, JFrog Xray Api Scan)

| Scenario | Duration | Celery workers |
|----------|----------|----------------|
| Old code | ~42s     | 6              |
| New code | ~3.4s    | 1              |

So it's roughly 12 times faster and uses one Celery worker instead of six.

Performance (3 × 1050 findings, 3 engagements, many duplicates, 1 product, Acunetix 360 Scan)

| Scenario | Duration | Celery workers |
|----------|----------|----------------|
| Old code | ~50s     | 8              |
| New code | ~3.5s    | 1              |

So it's roughly 14 times faster and uses one Celery worker instead of eight. If you add Notes and Files, the gains become even larger.

This should have a significant effect on instances that perform a lot of deletions.

Inspired by: https://dev.to/redhap/efficient-django-delete-cascade-43i5

Replace per-original O(n×m) loop with a single bulk UPDATE for
inside-scope duplicate reset. Outside-scope reconfiguration still
runs per-original but now uses .iterator() and .exists() to avoid
loading full querysets into memory.

Also adds WARN-level logging to fix_loop_duplicates for visibility
into how often duplicate loops occur in production, and a comment on
removeLoop explaining the optimization opportunity.
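The single bulk UPDATE for the inside-scope duplicate reset can be sketched with stdlib `sqlite3` standing in for the ORM. The table and column names below are illustrative, not DefectDojo's actual schema:

```python
import sqlite3

# Sketch: unlink every duplicate whose original is inside the deletion
# scope with ONE statement, instead of looping per original (O(n*m)).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE finding (
    id INTEGER PRIMARY KEY,
    engagement_id INTEGER,
    duplicate INTEGER DEFAULT 0,
    duplicate_finding_id INTEGER REFERENCES finding(id))""")
# finding 1 (engagement 10, in scope) is an original; 2 and 3 point at
# it; finding 4 is unrelated
con.executemany("INSERT INTO finding VALUES (?,?,?,?)", [
    (1, 10, 0, None), (2, 10, 1, 1), (3, 20, 1, 1), (4, 20, 0, None)])

# one bulk UPDATE replaces the per-original loop
con.execute("""UPDATE finding
               SET duplicate = 0, duplicate_finding_id = NULL
               WHERE duplicate_finding_id IN
                     (SELECT id FROM finding WHERE engagement_id = 10)""")

rows = con.execute(
    "SELECT id, duplicate, duplicate_finding_id FROM finding ORDER BY id"
).fetchall()
print(rows)
```

In Django terms this corresponds to a single `QuerySet.update(...)` with a subquery filter, so no finding is loaded into Python and no save signals fire.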
Remove redundant .exclude() and .exists() calls by leveraging the
bulk UPDATE that already unlinks inside-scope duplicates. Add
prefetch_related to fetch all reverse relations in a single query.
Replace the per-object obj.delete() approach in async_delete_crawl_task
with a recursive SQL cascade walker that compiles QuerySets to raw SQL
and walks model._meta.related_objects bottom-up. This auto-discovers
all FK relations at runtime, including those added by plugins.

Key changes:
- New dojo/utils_cascade_delete.py: cascade_delete() utility
- New dojo/signals.py: pre_bulk_delete_findings signal for extensibility
- New bulk_clear_finding_m2m() in finding/helper.py for M2M cleanup
  with FileUpload disk cleanup and orphaned Notes deletion
- Rewritten async_delete_crawl_task with chunked cascade deletion
- Removed async_delete_chunk_task (no longer needed)
- Product grading recalculated once at end instead of per-object
…plicate_cluster

Use QuerySet.update() instead of mass_model_updater to re-point
duplicates to the new original. Single SQL query instead of loading
all findings into Python and calling bulk_update.
Remove reset_duplicate_before_delete, reset_duplicates_before_delete,
and set_new_original — all replaced by bulk UPDATE in
prepare_duplicates_for_delete and .update() in
reconfigure_duplicate_cluster. Remove unused mass_model_updater import.
…olations

When bulk-deleting findings in chunks, an original in an earlier chunk
could fail to delete because its duplicate (higher ID) in a later chunk
still references it via duplicate_finding FK. Fix by deleting outside-scope
duplicates first, then the main scope.

Also moves pre_bulk_delete_findings signal into bulk_delete_findings so it
fires automatically.
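The FK-ordering problem this commit fixes can be reproduced with stdlib `sqlite3` standing in for the database (schema is illustrative; a minimal sketch, not the PR's code):

```python
import sqlite3

# Sketch: a duplicate outside the deletion scope still references an
# original inside it via the self-referential duplicate_finding FK.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE finding (
    id INTEGER PRIMARY KEY,
    engagement_id INTEGER,
    duplicate_finding_id INTEGER REFERENCES finding(id))""")
# engagement 10 is being deleted; finding 3 lives in engagement 20 but
# references original 1 inside the scope
con.executemany("INSERT INTO finding VALUES (?,?,?)",
                [(1, 10, None), (2, 10, 1), (3, 20, 1)])

err = None
try:
    # naive order: delete the scope first -> FK violation from finding 3
    con.execute("DELETE FROM finding WHERE engagement_id = 10")
except sqlite3.IntegrityError as exc:
    err = exc

# fix: delete outside-scope duplicates first, clear the self-referential
# FK inside the scope, then delete the main scope
con.execute("""DELETE FROM finding WHERE engagement_id != 10
               AND duplicate_finding_id IN
                   (SELECT id FROM finding WHERE engagement_id = 10)""")
con.execute("""UPDATE finding SET duplicate_finding_id = NULL
               WHERE engagement_id = 10""")
con.execute("DELETE FROM finding WHERE engagement_id = 10")
```

Deleting the outside-scope duplicates up front means no later chunk can still reference an original that an earlier chunk already removed.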
…cluster

Avoids triggering Finding.save() signals (pre_save_changed,
execute_prioritization_calculations) when reconfiguring duplicate
clusters during deletion. Adds tests for cross-engagement duplicate
reconfiguration and product deletion with duplicates.
…-engagement

Adds product= and product_type= parameters so the entire deletion scope
is handled in one call, avoiding unnecessary reconfiguration of findings
that are about to be deleted anyway. Uses subqueries instead of
materializing ID sets, and chunks the originals loop with prefetch to
bound memory. Reverts finding_delete to use ORM .delete() for single
finding cascade deletes.
Replace the model_list-based mapping with a simple scope filter dict.
prepare_duplicates_for_delete now accepts a single object and derives
the scope via FINDING_SCOPE_FILTERS. Removes the redundant non-Finding
model deletion loop — cascade_delete on the top-level object handles
all remaining children. Cleans up async_delete class.
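The scope-filter idea can be sketched as follows. The name `FINDING_SCOPE_FILTERS` mirrors the PR, but the dict contents and helper below are hypothetical illustrations of the pattern, not the actual mapping:

```python
# Illustrative sketch: map the type of the object being deleted to the
# ORM lookup that selects every finding inside the deletion scope.
# The lookup strings here are assumptions about the model graph.
FINDING_SCOPE_FILTERS = {
    "Engagement": "test__engagement",
    "Product": "test__engagement__product",
    "Product_Type": "test__engagement__product__prod_type",
}

def finding_scope_filter(obj_type, obj_id):
    """Derive Finding filter kwargs for a deletion target,
    e.g. Finding.objects.filter(**finding_scope_filter(...))."""
    lookup = FINDING_SCOPE_FILTERS[obj_type]
    return {lookup: obj_id}

print(finding_scope_filter("Product", 42))
```

A single dict lookup replaces the old model_list-based mapping, and the same code path then serves products, product types, and engagements.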
@github-actions github-actions bot added settings_changes Needs changes to settings.py based on changes in settings.dist.py included in this PR unittests labels Mar 21, 2026
@valentijnscholten valentijnscholten added this to the 2.57.0 milestone Mar 21, 2026
@valentijnscholten valentijnscholten marked this pull request as draft March 21, 2026 20:44
- Add bulk_delete_findings() wrapper: M2M cleanup + chunked cascade_delete
- reconfigure_duplicate_cluster: return early when CASCADE_DELETE=True
  instead of calling Django .delete() which fires signals per finding
- finding_delete: use bulk_delete_findings when CASCADE_DELETE=True
- async_delete_crawl_task: expand scope to include outside-scope duplicates,
  use bulk_delete_findings instead of manual M2M + cascade_delete calls
- Fix test to use async_delete class instead of direct task import
Adds generic M2M through-table cleanup to cascade_delete so tags and
other M2M relations are cleared before row deletion. Introduces
bulk_remove_all_tags in tag_utils to properly decrement tagulous tag
counts during bulk deletion. Adds test for product deletion with tagged
objects.
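The through-table cleanup with tag-count decrements can be sketched with stdlib `sqlite3` (tagulous keeps a usage count per tag; the schema below is an illustrative stand-in, not tagulous's real tables):

```python
import sqlite3

# Sketch: before bulk-deleting rows, clear the M2M through table and
# decrement each tag's usage count by the number of doomed rows using it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT, count INTEGER);
CREATE TABLE finding (id INTEGER PRIMARY KEY);
CREATE TABLE finding_tags (finding_id INTEGER, tag_id INTEGER);
INSERT INTO tag VALUES (1, 'prod', 2), (2, 'infra', 1);
INSERT INTO finding VALUES (1), (2);
INSERT INTO finding_tags VALUES (1, 1), (2, 1), (2, 2);
""")

# finding 1 is about to be bulk-deleted; decrement tag counts first
con.execute("""UPDATE tag SET count = count -
    (SELECT COUNT(*) FROM finding_tags
     WHERE tag_id = tag.id AND finding_id IN (1))""")
# clear the through table, then the rows themselves
con.execute("DELETE FROM finding_tags WHERE finding_id IN (1)")
con.execute("DELETE FROM finding WHERE id IN (1)")
```

Handling the through table generically in cascade_delete means tags (and any other M2M relation) are cleared in bulk SQL instead of per-object, while the counts stay consistent.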