perf: replace per-object async delete with SQL cascade walker #14566
Draft
valentijnscholten wants to merge 16 commits into DefectDojo:bugfix from
Conversation
Replace per-original O(n×m) loop with a single bulk UPDATE for inside-scope duplicate reset. Outside-scope reconfiguration still runs per-original but now uses .iterator() and .exists() to avoid loading full querysets into memory. Also adds WARN-level logging to fix_loop_duplicates for visibility into how often duplicate loops occur in production, and a comment on removeLoop explaining the optimization opportunity.
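A minimal sketch of that inside-scope reset as one bulk UPDATE, assuming Finding's duplicate/duplicate_finding fields; the helper name and the exact scope filter are illustrative, not the PR's code:

```python
from dojo.models import Finding

def reset_inside_scope_duplicates(scope_filter):
    """Unlink every duplicate whose original is inside the deletion scope
    with a single UPDATE instead of one save() per finding."""
    return Finding.objects.filter(
        duplicate=True,
        duplicate_finding__isnull=False,
        # e.g. {"duplicate_finding__test__engagement__product": product}
        **scope_filter,
    ).update(duplicate=False, duplicate_finding=None)
```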
Remove redundant .exclude() and .exists() calls by leveraging the bulk UPDATE that already unlinks inside-scope duplicates. Add prefetch_related to fetch all reverse relations in a single query.
Replace the per-object obj.delete() approach in async_delete_crawl_task with a recursive SQL cascade walker that compiles QuerySets to raw SQL and walks model._meta.related_objects bottom-up. This auto-discovers all FK relations at runtime, including those added by plugins. Key changes (a condensed sketch of the walker follows this list):
- New dojo/utils_cascade_delete.py: cascade_delete() utility
- New dojo/signals.py: pre_bulk_delete_findings signal for extensibility
- New bulk_clear_finding_m2m() in finding/helper.py for M2M cleanup with FileUpload disk cleanup and orphaned Notes deletion
- Rewritten async_delete_crawl_task with chunked cascade deletion
- Removed async_delete_chunk_task (no longer needed)
- Product grading recalculated once at end instead of per-object
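A condensed sketch of the walker idea, not the PR's exact implementation: recurse over model._meta.related_objects, delete children bottom-up, and compile the scope QuerySet to raw SQL so each table needs only one DELETE. The real code additionally handles SET_NULL/DO_NOTHING relations, self-references such as duplicate_finding, M2M through tables, and chunking, all omitted here:

```python
from django.db import connection

def cascade_delete(queryset):
    """Delete `queryset` and everything that FK-cascades from it,
    children first, one DELETE statement per table."""
    model = queryset.model
    pk_qs = queryset.values("pk")
    # Auto-discover reverse FK/O2O relations at runtime (plugins included).
    for rel in model._meta.related_objects:
        if rel.one_to_many or rel.one_to_one:
            child_qs = rel.related_model.objects.filter(
                **{f"{rel.field.name}__in": pk_qs})
            cascade_delete(child_qs)  # recurse bottom-up
    # Compile the scope QuerySet to raw SQL and delete in one statement.
    sql, params = pk_qs.query.sql_with_params()
    table = connection.ops.quote_name(model._meta.db_table)
    pk_col = connection.ops.quote_name(model._meta.pk.column)
    with connection.cursor() as cursor:
        cursor.execute(f"DELETE FROM {table} WHERE {pk_col} IN ({sql})", params)
```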
…plicate_cluster Use QuerySet.update() instead of mass_model_updater to re-point duplicates to the new original. Single SQL query instead of loading all findings into Python and calling bulk_update.
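A sketch of that single-query re-point; repoint_cluster is an illustrative name, not the PR's:

```python
from dojo.models import Finding

def repoint_cluster(old_original, new_original):
    # One UPDATE re-points the whole cluster; nothing is loaded into
    # Python and no bulk_update batching is needed.
    Finding.objects.filter(duplicate_finding=old_original).exclude(
        pk=new_original.pk,
    ).update(duplicate_finding=new_original)
    # Promote the new original with a second targeted UPDATE.
    Finding.objects.filter(pk=new_original.pk).update(
        duplicate=False, duplicate_finding=None,
    )
```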
Remove reset_duplicate_before_delete, reset_duplicates_before_delete, and set_new_original — all replaced by bulk UPDATE in prepare_duplicates_for_delete and .update() in reconfigure_duplicate_cluster. Remove unused mass_model_updater import.
…olations When bulk-deleting findings in chunks, an original in an earlier chunk could fail to delete because its duplicate (higher ID) in a later chunk still references it via duplicate_finding FK. Fix by deleting outside-scope duplicates first, then the main scope. Also moves pre_bulk_delete_findings signal into bulk_delete_findings so it fires automatically.
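A sketch of the resulting ordering, reusing the cascade_delete and signal pieces from the earlier commits; the helper body is illustrative:

```python
from dojo.models import Finding
from dojo.signals import pre_bulk_delete_findings
from dojo.utils_cascade_delete import cascade_delete

def bulk_delete_findings(scope_qs):
    # Fires automatically so every caller gets the extensibility hook.
    pre_bulk_delete_findings.send(sender=Finding, findings=scope_qs)
    # 1) Delete duplicates pointing INTO the scope first, so a later
    #    chunk can never violate the duplicate_finding FK of an earlier one.
    outside = Finding.objects.filter(
        duplicate_finding__in=scope_qs.values("pk"),
    ).exclude(pk__in=scope_qs.values("pk"))
    cascade_delete(outside)
    # 2) Then delete the main scope itself.
    cascade_delete(scope_qs)
```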
…cluster Avoids triggering Finding.save() signals (pre_save_changed, execute_prioritization_calculations) when reconfiguring duplicate clusters during deletion. Adds tests for cross-engagement duplicate reconfiguration and product deletion with duplicates.
…-engagement Adds product= and product_type= parameters so the entire deletion scope is handled in one call, avoiding unnecessary reconfiguration of findings that are about to be deleted anyway. Uses subqueries instead of materializing ID sets, and chunks the originals loop with prefetch to bound memory. Reverts finding_delete to use ORM .delete() for single finding cascade deletes.
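A sketch of the subquery form next to the materialized-ID form it replaces, with the scope path assumed from DefectDojo's Test → Engagement → Product hierarchy:

```python
from dojo.models import Finding

def scoped_duplicates(product):
    scope_qs = Finding.objects.filter(test__engagement__product=product)
    # Materialized form (what the PR avoids): IDs round-trip through Python.
    #   ids = list(scope_qs.values_list("pk", flat=True))
    #   return Finding.objects.filter(duplicate_finding_id__in=ids)
    # Subquery form: the database evaluates the scope inline, no ID set
    # is ever built in Python.
    return Finding.objects.filter(duplicate_finding__in=scope_qs.values("pk"))
```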
Replace the model_list-based mapping with a simple scope filter dict. prepare_duplicates_for_delete now accepts a single object and derives the scope via FINDING_SCOPE_FILTERS. Removes the redundant non-Finding model deletion loop — cascade_delete on the top-level object handles all remaining children. Cleans up async_delete class.
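A hypothetical shape for FINDING_SCOPE_FILTERS; the exact keys and lookup paths in the PR may differ:

```python
from dojo.models import Engagement, Finding, Product, Product_Type

# Maps a deleted object's model to the Finding lookup path for its scope.
FINDING_SCOPE_FILTERS = {
    Engagement: "test__engagement",
    Product: "test__engagement__product",
    Product_Type: "test__engagement__product__prod_type",
}

def findings_in_scope(obj):
    path = FINDING_SCOPE_FILTERS[type(obj)]
    return Finding.objects.filter(**{path: obj})
```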
- Add bulk_delete_findings() wrapper: M2M cleanup + chunked cascade_delete
- reconfigure_duplicate_cluster: return early when CASCADE_DELETE=True instead of calling Django .delete(), which fires signals per finding
- finding_delete: use bulk_delete_findings when CASCADE_DELETE=True (see the dispatch sketch after this list)
- async_delete_crawl_task: expand scope to include outside-scope duplicates, use bulk_delete_findings instead of manual M2M + cascade_delete calls
- Fix test to use async_delete class instead of direct task import
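An illustrative dispatch for the finding_delete change, assuming CASCADE_DELETE is a boolean Django setting; its exact name, location, and the helper's module are assumptions:

```python
from django.conf import settings

from dojo.finding.helper import bulk_delete_findings  # location assumed

def delete_findings(findings_qs):
    if getattr(settings, "CASCADE_DELETE", False):
        # Fast path: M2M cleanup + chunked cascade_delete, no per-finding signals.
        bulk_delete_findings(findings_qs)
    else:
        # Legacy path: ORM cascade, fires signals once per finding.
        for finding in findings_qs.iterator(chunk_size=500):
            finding.delete()
```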
Adds generic M2M through-table cleanup to cascade_delete so tags and other M2M relations are cleared before row deletion. Introduces bulk_remove_all_tags in tag_utils to properly decrement tagulous tag counts during bulk deletion. Adds test for product deletion with tagged objects.
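A sketch of the generic through-table cleanup described above; the tagulous count handling inside bulk_remove_all_tags is summarized in a comment rather than reproduced:

```python
def clear_m2m_through_rows(queryset):
    """Delete M2M through-table rows (tags, endpoints, ...) for every
    object in `queryset` before the cascade walker drops the rows."""
    model = queryset.model
    pk_qs = queryset.values("pk")
    for m2m in model._meta.many_to_many:
        through = m2m.remote_field.through
        fk = m2m.m2m_field_name()  # FK on the through model back to `model`
        through.objects.filter(**{f"{fk}__in": pk_qs}).delete()
    # tagulous also tracks a per-tag usage count; the PR's
    # bulk_remove_all_tags decrements those counts so tag rows are not
    # left stale after a bulk delete.
```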
Summary
Replaces the per-object chunked deletion strategy with a single-pass SQL cascade walker (cascade_delete), eliminating the need to fan out deletion work across multiple Celery workers. The walker auto-detects relationships at runtime and respects each relationship's SET_NULL, DO_NOTHING, or CASCADE_DELETE behavior. The old delete code had many inefficiencies, especially around duplicates: it triggered finding.save() (and its signal handlers) where this was not needed.

Key architectural changes:
- cascade_delete() replaces Django's Collector for bulk deletion. It walks _meta.related_objects and issues bottom-up DELETE statements directly, bypassing per-object signals and Python-level overhead; all remaining children are handled by the top-level cascade_delete call.
- reconfigure_duplicate_cluster uses .update() instead of Finding.save(), avoiding Django signal storms (prioritization, dedup, audit) during deletion.

Performance (3 × 1373 findings, 3 engagements, many duplicates, 1 product, JFrog Xray Api Scan)
So it is 13 times faster and uses 6 times fewer Celery workers.
Performance (3 × 1050 findings, 3 engagements, many duplicates, 1 product, Acunetix 360 Scan)
So it is 14 times faster and uses 8 times fewer Celery workers. If you add Notes and Files, the gains become even larger.
This should have a huge effect on instances that perform a lot of deletions.
Inspired by: https://dev.to/redhap/efficient-django-delete-cascade-43i5