Sorted Merge: Eliminate coordinator Sort node for multi-shard ORDER BY queries #8529
neildsh wants to merge 11 commits into citusdata:main
Conversation
Codecov Report ❌ Your patch check has failed because the patch coverage (12.29%) is below the target coverage (75.00%). You can increase the patch coverage or adjust the target coverage.
Additional details and impacted files

@@ Coverage Diff @@
## main #8529 +/- ##
==========================================
- Coverage 88.91% 79.02% -9.89%
==========================================
Files 286 287 +1
Lines 63198 61754 -1444
Branches 7933 7596 -337
==========================================
- Hits 56190 48801 -7389
- Misses 4734 10092 +5358
- Partials 2274 2861 +587
Phase 1 of the sorted-merge feature. This commit adds the data structures and GUC needed by later phases, with zero behavioral changes:

- SortedMergeKey typedef in multi_physical_planner.h describing one sort key for the coordinator k-way merge
- useSortedMerge, sortedMergeKeys[], sortedMergeKeyCount fields on DistributedPlan (plan-time decision, never checked at runtime via GUC)
- sortedMergeEligible field on MultiExtendedOp (logical optimizer tag read by the physical planner)
- Hidden GUC citus.enable_sorted_merge (PGC_SUSET, default off, GUC_NO_SHOW_ALL) consulted only during planning
- Serialization in citus_outfuncs.c and deep-copy in citus_copyfuncs.c for all new fields

All new fields default to false/0/NULL. Existing regression tests are unaffected.

Co-authored-by: Copilot
Phase 2 of the sorted-merge feature. Workers now sort their results when citus.enable_sorted_merge is enabled at planning time, even for queries without LIMIT. The plan metadata is populated so later phases can execute the merge and set pathkeys.

Logical optimizer changes (multi_logical_optimizer.c):
- WorkerSortClauseList() gains an early-return path that pushes the sort clause to workers when the GUC is on and the sort is safe (no aggregates in ORDER BY, no non-pushable window functions, and either no GROUP BY or GROUP BY on the partition column).
- WorkerExtendedOpNode() sets sortedMergeEligible = true when the worker sort clause semantically matches the original sort clause, using the new SortClauseListsMatch() helper.
- SortClauseListsMatch() compares tleSortGroupRef, sortop, nulls_first, and eqop for each pair.

Physical planner changes (multi_physical_planner.c):
- CreatePhysicalDistributedPlan() finds the worker MultiExtendedOp with sortedMergeEligible = true, builds SortedMergeKey metadata from the worker job query, and sets useSortedMerge on the plan.
- BuildSortedMergeKeys() constructs the key array from the worker query's SortGroupClause list and target list.

The coordinator Sort node is still present above the CustomScan (pathkeys not set yet — that is Phase 4). Results are correct because the redundant Sort re-sorts already-sorted data.

Co-authored-by: Copilot
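The field-by-field comparison that SortClauseListsMatch() performs can be sketched in isolation. The struct below is a simplified, hypothetical stand-in for PostgreSQL's SortGroupClause (the operator OIDs in the test are arbitrary), not the actual Citus code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for PostgreSQL's SortGroupClause; only the fields
 * the PR says SortClauseListsMatch() compares are modeled here. */
typedef struct SortClause
{
	unsigned int tleSortGroupRef; /* which target-list entry is sorted */
	unsigned int sortop;          /* ordering operator OID */
	unsigned int eqop;            /* equality operator OID */
	bool nulls_first;             /* NULLS FIRST vs NULLS LAST */
} SortClause;

/* Return true when two sort-clause arrays match pairwise on every field
 * that affects ordering semantics. */
static bool
SortClauseListsMatch(const SortClause *a, size_t aCount,
					 const SortClause *b, size_t bCount)
{
	if (aCount != bCount)
	{
		return false;
	}
	for (size_t i = 0; i < aCount; i++)
	{
		if (a[i].tleSortGroupRef != b[i].tleSortGroupRef ||
			a[i].sortop != b[i].sortop ||
			a[i].eqop != b[i].eqop ||
			a[i].nulls_first != b[i].nulls_first)
		{
			return false;
		}
	}
	return true;
}
```

A mismatch in any single field (for example nulls_first) makes the lists non-equivalent, which is why the worker sort can only be reused when every field matches.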
Phase 3 of the sorted-merge feature. When distributedPlan->useSortedMerge is true (set at planning time by Phase 2), the adaptive executor now:

1. Routes worker results into per-task tuple stores via a new PerTaskDispatchTupleDest that dispatches by task->taskId hash lookup. No Task fields are mutated — all state lives on DistributedExecution.
2. After all tasks complete, performs a k-way merge of the per-task stores into the final scanState->tuplestorestate using PostgreSQL's public binaryheap and SortSupport APIs.
3. Frees per-task stores after the merge.

The existing CitusExecScan/ReturnTupleFromTuplestore/CitusEndScan/CitusReScan code paths are completely unchanged — they read from the final tuplestore exactly as before.

New files:
- sorted_merge.h: CreatePerTaskDispatchDest, MergePerTaskStoresIntoFinalStore
- sorted_merge.c: PerTaskDispatchTupleDest with taskId->index hash routing, MergePerTaskStoresIntoFinalStore with binaryheap merge, MergeHeapComparator modeled after PG's heap_compare_slots in nodeMergeAppend.c

Modified:
- adaptive_executor.c: DistributedExecution gains useSortedMerge/perTaskStores/perTaskStoreCount fields. AdaptiveExecutor() branches on useSortedMerge to create per-task stores, then merges post-execution. EXPLAIN ANALYZE falls back to the existing single-tuplestore path.

Safety:
- Shared TupleDestinationStats preserves citus.max_intermediate_result_size
- Per-task stores allocated in AdaptiveExecutor local memory context (auto-cleanup on error via PG memory context teardown)
- task->totalReceivedTupleData tracking preserved

The coordinator Sort node is still present above the CustomScan (pathkeys not set until Phase 4). Results are correct because the redundant Sort re-sorts already-sorted data.

Co-authored-by: Copilot
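The k-way merge in step 2 can be illustrated without PostgreSQL's binaryheap/SortSupport machinery. This is a minimal self-contained sketch of the algorithm (integer keys, ascending order, stream count capped at 64), not the PR's implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Each task contributes one pre-sorted stream (array + read cursor). */
typedef struct TaskStream
{
	const int *values;
	size_t count;
	size_t pos;
} TaskStream;

static int
HeadValue(const TaskStream *streams, size_t streamIndex)
{
	return streams[streamIndex].values[streams[streamIndex].pos];
}

/* Restore the min-heap property at position i of a heap of stream
 * indexes, ordered by each stream's current head value. */
static void
SiftDown(size_t *heap, size_t heapSize, size_t i, const TaskStream *streams)
{
	for (;;)
	{
		size_t smallest = i;
		size_t left = 2 * i + 1;
		size_t right = 2 * i + 2;

		if (left < heapSize &&
			HeadValue(streams, heap[left]) < HeadValue(streams, heap[smallest]))
			smallest = left;
		if (right < heapSize &&
			HeadValue(streams, heap[right]) < HeadValue(streams, heap[smallest]))
			smallest = right;
		if (smallest == i)
			break;

		size_t tmp = heap[i];
		heap[i] = heap[smallest];
		heap[smallest] = tmp;
		i = smallest;
	}
}

/* k-way merge of pre-sorted streams into out[]; returns values written. */
static size_t
MergeStreams(TaskStream *streams, size_t streamCount, int *out)
{
	size_t heap[64]; /* this sketch assumes at most 64 streams */
	size_t heapSize = 0;
	size_t written = 0;

	for (size_t i = 0; i < streamCount; i++)
		if (streams[i].count > 0)
			heap[heapSize++] = i;

	for (size_t i = heapSize; i-- > 0;) /* heapify */
		SiftDown(heap, heapSize, i, streams);

	while (heapSize > 0)
	{
		TaskStream *top = &streams[heap[0]];
		out[written++] = top->values[top->pos++]; /* emit smallest head */
		if (top->pos == top->count)
			heap[0] = heap[--heapSize];           /* stream exhausted */
		SiftDown(heap, heapSize, 0, streams);     /* re-establish heap */
	}
	return written;
}
```

Each pop-and-reinsert costs O(log k) for k streams, so merging n total tuples is O(n log k) instead of the O(n log n) of a full coordinator sort.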
…cases to impose a deterministic order
test: custom_aggregate_support aggregate_support tdigest_aggregate_support
test: multi_average_expression multi_working_columns multi_having_pushdown having_subquery
test: multi_array_agg multi_limit_clause multi_orderby_limit_pushdown
test: multi_orderby_pushdown
Can you try the test in multi_mx_schedule? That exercises query-from-any-node (any node can act as the coordinator for SELECT and DML queries). It doesn't need to be committed; just verify that the test passes there.
-- Cleanup
-- =================================================================

SET citus.enable_sorted_merge TO off;
I'm slightly concerned that the GUC is off by default. The commit message states that when it is enabled, the only failing regression tests fail because of an expected plan change. Why not enable it by default?
SELECT id, val, num FROM sorted_merge_test ORDER BY id LIMIT 20
)
SELECT * FROM cte WHERE num > 10 ORDER BY id LIMIT 5');
Can there be a category of tests for distributed transactions? To verify correctness in scenarios like:
BEGIN;
INSERT INTO t ..
UPDATE t ..
SELECT ... FROM t ORDER BY c1, c2, c3;
END;
*/
if (EnableSortedMerge && sortClauseList != NIL &&
	orderByLimitReference.onlyPushableWindowFunctions &&
	!orderByLimitReference.hasOrderByAggregate)
We may also need to check that the clauses do not contain expressions that must be evaluated on the coordinator?
Group Key: sorted_merge_test.id
->  Seq Scan on public.sorted_merge_test_960000 sorted_merge_test (actual rows=N loops=N)
      Output: id, val, num, ts
(19 rows)
It would be useful to include examples that avoid a sort on the workers, for example by taking advantage of an index.
I'm also curious whether you have tested the performance of this and what the observed gains are?
Output: remote_scan.id, remote_scan.val
Task Count: N
Tuple data received from nodes: N bytes
Tasks Shown: One of N
Some indication of the N-way merge would be helpful here.
PREPARE merge_off_stmt AS SELECT id, val FROM sorted_merge_test ORDER BY id LIMIT 10;
EXECUTE merge_off_stmt;
SET citus.enable_sorted_merge TO on;
EXECUTE merge_off_stmt;
Include an EXPLAIN to verify line 322? Also, there may need to be 6+ EXECUTE statements before the plan is cached.
Summary
When a multi-shard `SELECT ... ORDER BY` query goes through the logical planner, Citus currently collects all worker results into a single tuplestore and relies on a PostgreSQL `Sort` node above the `Custom Scan` to produce the final ordering. This PR adds an alternative path: push `ORDER BY` to workers, k-way merge the pre-sorted worker results on the coordinator using a binary heap, and declare `pathkeys` on the `CustomPath` so PostgreSQL eliminates the coordinator `Sort` node entirely.

The feature is gated behind a hidden GUC `citus.enable_sorted_merge` (default `off`, `PGC_SUSET`, `GUC_NO_SHOW_ALL`). All eligibility decisions are made at planning time and baked into the serialized `DistributedPlan`, so cached/prepared plans remain correct regardless of later GUC changes. The sorted merge implementation is based on the Postgres MergeAppend node.
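The cached-plan safety argument (decide once at planning time, never consult the live GUC at execution time) can be sketched with hypothetical stand-ins for `citus.enable_sorted_merge` and `DistributedPlan`; this is an illustration of the design, not the real code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the session GUC; its value can change at any time. */
static bool EnableSortedMerge = false;

/* Stand-in for DistributedPlan: the decision is stored on the plan. */
typedef struct DistPlan
{
	bool useSortedMerge; /* decided once, when the plan is built */
} DistPlan;

static DistPlan
PlanQuery(void)
{
	DistPlan plan;
	plan.useSortedMerge = EnableSortedMerge; /* snapshot, baked into plan */
	return plan;
}

static const char *
ExecutePlan(const DistPlan *plan)
{
	/* The executor reads only the plan field, never the live GUC, so a
	 * cached/prepared plan behaves the same even if the GUC flips later. */
	return plan->useSortedMerge ? "sorted-merge" : "coordinator-sort";
}
```

Because the executor never re-reads the GUC, flipping `citus.enable_sorted_merge` mid-session cannot change the behavior of an already-prepared plan.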
How it works
Worker sort pushdown (`multi_logical_optimizer.c`): `WorkerSortClauseList()` gains an early-return path that pushes ORDER BY to workers even without LIMIT, when the GUC is on and the sort is safe (no aggregates in ORDER BY, no non-pushable window functions, GROUP BY on the distribution column or absent).

Planner eligibility (`multi_logical_optimizer.c`, `multi_physical_planner.c`): `WorkerExtendedOpNode()` tags the worker `MultiExtendedOp` with `sortedMergeEligible = true` when the worker sort clause semantically matches the original. `SetSortedMergeFields()` in the physical planner builds `SortedMergeKey` metadata (attno, sortop, collation, nullsFirst) and sets `useSortedMerge` on the `DistributedPlan`.

PathKeys (`combine_query_planner.c`): `CreateCitusCustomScanPath()` sets `path->pathkeys = root->sort_pathkeys` when the plan has `useSortedMerge = true`, causing PostgreSQL's `create_ordered_paths()` to skip adding a Sort node.

Per-task stores + k-way merge (`sorted_merge.c`, `adaptive_executor.c`): A new `PerTaskDispatchTupleDest` routes worker tuples to per-task tuplestores by `taskId` hash lookup (no Task fields mutated). After all tasks complete, `MergePerTaskStoresIntoFinalStore()` performs a k-way merge using PostgreSQL's public `binaryheap` and `SortSupport` APIs, writing sorted output into the existing `scanState->tuplestorestate`. The existing `CitusExecScan()` / `ReturnTupleFromTuplestore()` path is completely unchanged.

Follow up changes
Because we continue to use the default tuplestore for the final result set in addition to the per-task stores, memory consumption increases when this change is enabled. Follow-up work is to stop using the default tuplestore to reduce memory consumption.
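The per-task stores are selected by `taskId` lookup, as `PerTaskDispatchTupleDest` does during execution. A minimal sketch of such a dispatch table, using a small open-addressed hash with linear probing (illustrative names and sizes, not the PR's code):

```c
#include <assert.h>
#include <stddef.h>

#define TABLE_SIZE 64 /* power of two; this sketch assumes < 64 tasks */

/* Hypothetical stand-in for taskId -> per-task-store routing; all state
 * is execution-local, mirroring the PR's "no Task fields mutated" rule. */
typedef struct Dispatch
{
	long taskIds[TABLE_SIZE];   /* open-addressed hash table keys */
	int storeIndex[TABLE_SIZE]; /* value: index into per-task stores */
	int used[TABLE_SIZE];
} Dispatch;

static size_t
Slot(const Dispatch *d, long taskId)
{
	size_t i = (size_t) taskId & (TABLE_SIZE - 1);
	while (d->used[i] && d->taskIds[i] != taskId)
		i = (i + 1) & (TABLE_SIZE - 1); /* linear probing on collision */
	return i;
}

static void
Register(Dispatch *d, long taskId, int index)
{
	size_t i = Slot(d, taskId);
	d->used[i] = 1;
	d->taskIds[i] = taskId;
	d->storeIndex[i] = index;
}

static int
Lookup(const Dispatch *d, long taskId)
{
	size_t i = Slot(d, taskId);
	return d->used[i] ? d->storeIndex[i] : -1; /* -1: unknown taskId */
}
```

Registering every task up front and routing each incoming tuple with one `Lookup()` keeps the hot path O(1) without touching the reusable `Task` nodes themselves.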
Safety properties
- All eligibility decisions are made at plan time and recorded in `distributedPlan->useSortedMerge`. Cached plans are safe.
- Per-task merge state lives on `DistributedExecution` (execution-local), not on reusable `Task` nodes. Only `task->totalReceivedTupleData` is updated (execution-time reporting field, reset each execution).
- `CitusExecScan`, `CitusEndScan`, `CitusReScan`, `ReturnTupleFromTuplestore` are all unchanged. Quals, projection, backward scan, and cursor support work exactly as before.
- The per-task destinations share a single `TupleDestinationStats` object, preserving `citus.max_intermediate_result_size` enforcement semantics. `EnsureIntermediateSizeLimitNotExceeded()` is now exported from `tuple_destination.c` for use by the dispatch destination.
- ORDER BY clauses containing aggregates are rejected via `HasOrderByAggregate()`.

Files changed
- `sorted_merge.h` / `sorted_merge.c`: `CreatePerTaskDispatchDest`, `MergePerTaskStoresIntoFinalStore`, `MergeHeapComparator`
- `multi_logical_optimizer.c`: `SortClauseListsMatch()`
- `multi_physical_planner.c`: `SetSortedMergeFields()` + `BuildSortedMergeKeys()`
- `combine_query_planner.c`: `pathkeys` on `CustomPath`
- `adaptive_executor.c`
- `multi_physical_planner.h`: `SortedMergeKey` struct + fields on `DistributedPlan` and `MultiExtendedOp`
- `tuple_destination.h` / `.c`: `EnsureIntermediateSizeLimitNotExceeded()`
- `shared_library_init.c` / `multi_executor.c` / `.h`
- `citus_outfuncs.c` / `citus_copyfuncs.c`
- `multi_orderby_pushdown.sql` / `.out`

Test coverage
The new `multi_orderby_pushdown` regression test covers, among other scenarios, `max_intermediate_result_size` with a CTE subplan.

Validated with `citus.enable_sorted_merge` globally enabled: 0 crashes across check-multi (192 tests) and check-multi-1 (210 tests). All failures are expected plan-shape diffs (Sort node elision in EXPLAIN output).