
Conversation

@Centril (Contributor) commented Dec 5, 2025

Description of Changes

See tin.

API and ABI breaking changes

None

Expected complexity level and risk

2

Testing

TODO

        Err(_) => {}
    }

    row_ptrs.clear();
Contributor

`BTreeMap::clear` appears not to save and re-use the allocation(s) in the map, per https://doc.rust-lang.org/src/alloc/collections/btree/map.rs.html#662-671 :

    pub fn clear(&mut self) {
        // avoid moving the allocator
        drop(BTreeMap {
            root: mem::replace(&mut self.root, None),
            length: mem::replace(&mut self.length, 0),
            alloc: self.alloc.clone(),
            _marker: PhantomData,
        });
    }

(`alloc` is the `Allocator`, which defaults to `Global`.)

As such, saving the `delete_tables` here is not useful.
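
As a side note, a minimal standalone illustration of that difference (standard-library types only, nothing from this PR): `Vec::clear` keeps its backing buffer, while `BTreeMap::clear`, per the source above, drops all of its nodes, so holding onto a cleared `BTreeMap` buys nothing allocation-wise.

    use std::collections::BTreeMap;

    fn main() {
        let mut v: Vec<u64> = (0..1024).collect();
        v.clear();
        // The buffer is retained; pushes up to the old length won't reallocate.
        assert!(v.capacity() >= 1024);

        let mut m: BTreeMap<u64, u64> = (0..1024u64).map(|i| (i, i)).collect();
        m.clear();
        // Every tree node was dropped; the next inserts allocate fresh nodes.
        assert!(m.is_empty());
    }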

@Centril (Contributor Author) commented Dec 10, 2025

This shouldn't apply here, as we're calling `.clear()` on `DeleteTable`, not the outer `BTreeMap`.
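
To make that concrete, a purely hypothetical sketch (the `DeleteTable` below is a stand-in, not the actual type) of how a hand-written `clear` can retain inner allocations even when clearing the outer `BTreeMap` would not:

    use std::collections::BTreeMap;

    // Stand-in only; the real `DeleteTable` is a different data structure.
    struct DeleteTable {
        // Deleted row pointers per page; the Vec buffers survive `clear`.
        rows_per_page: BTreeMap<u64, Vec<u64>>,
    }

    impl DeleteTable {
        fn clear(&mut self) {
            for rows in self.rows_per_page.values_mut() {
                // `Vec::clear` keeps capacity, so the next tx reuses it.
                rows.clear();
            }
        }
    }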

Contributor

Ah, I was remembering an earlier implementation of `DeleteTable` which was backed by a `BTreeSet`. I see that it's now a more complex data structure.

    // The order here does not matter as once a `table_id` has been dropped
    // it will never be re-created.
-   for change in pending_schema_changes {
+   for change in mem::take(&mut tx_state.pending_schema_changes) {
Contributor

Seems like `drain` here would be better than `mem::take`, as we could potentially re-use this allocation. We may need to be careful not to let it grow unbounded, though.
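
A rough sketch of that, assuming `pending_schema_changes` is a `Vec` (the cap constant below is made up purely to illustrate bounding growth):

    // Hypothetical cap; the real bound would need tuning.
    const MAX_RETAINED_CAPACITY: usize = 1024;

    fn apply_pending<T>(pending_schema_changes: &mut Vec<T>, mut apply: impl FnMut(T)) {
        // `drain(..)` hands out the elements but keeps the Vec's buffer for
        // reuse, unlike `mem::take`, which swaps in a brand-new empty Vec.
        for change in pending_schema_changes.drain(..) {
            apply(change);
        }
        // Don't let one huge transaction pin a huge buffer forever.
        if pending_schema_changes.capacity() > MAX_RETAINED_CAPACITY {
            pending_schema_changes.shrink_to(MAX_RETAINED_CAPACITY);
        }
    }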

Comment on lines +849 to +885
    // Clean up `tx_table` and steal its pages.
    let (schema, pages) = tx_table.drain_for_merge();
Contributor

Stealing pages out of the tx state seems counterproductive here. If we leave the pages in the tx tables and just empty them, we avoid having to reset their schema- and row-size-related metadata, including the rows-present bitvec. Again, we have to be careful not to let each tx table grow without bound.
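
A hypothetical sketch of that alternative (types and method names here are illustrative, not the actual table/page API): clear each page in place so the table keeps its schema and row-size metadata, and bound how many pages a tx table may retain:

    // Illustrative stand-ins only; not the real types.
    struct Page {
        present: Vec<bool>, // stand-in for the rows-present bitvec
        bytes: Vec<u8>,     // stand-in for the row storage
    }

    impl Page {
        // Mark every slot free without releasing the page's backing memory.
        fn clear_rows(&mut self) {
            self.present.clear();
            self.bytes.clear();
        }
    }

    struct TxTable {
        pages: Vec<Page>,
    }

    // Hypothetical bound so one large tx can't pin memory indefinitely.
    const MAX_RETAINED_PAGES: usize = 64;

    impl TxTable {
        // Empty the table for reuse; schema/row-size metadata stays untouched.
        fn reset_for_reuse(&mut self) {
            self.pages.truncate(MAX_RETAINED_PAGES);
            for page in &mut self.pages {
                page.clear_rows();
            }
        }
    }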

        self.page_pool.put_many(pages);
    }

    tx_state.blob_store.clear();
Contributor

We could do away with the tx-state blob store entirely and just use the committed-state one, at the cost of adding code to rollbacks that removes the new blobs. I'm not sure this is a good idea, since it makes rollbacks potentially much worse, unless we store the set of new blob hashes in the tx state.
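
A sketch of the "single blob store plus rollback bookkeeping" variant, with `BlobHash` and `BlobStore` as stand-ins for the real types (real blobs may also be refcounted, which this ignores): track the hashes inserted during the tx so rollback only touches those.

    use std::collections::{BTreeMap, BTreeSet};
    use std::mem;

    // Stand-ins for the real blob-store types.
    type BlobHash = [u8; 32];

    #[derive(Default)]
    struct BlobStore {
        blobs: BTreeMap<BlobHash, Vec<u8>>,
    }

    #[derive(Default)]
    struct TxBlobLog {
        inserted_this_tx: BTreeSet<BlobHash>,
    }

    impl TxBlobLog {
        fn insert(&mut self, store: &mut BlobStore, hash: BlobHash, bytes: Vec<u8>) {
            // Only record hashes that were genuinely new to the store.
            if store.blobs.insert(hash, bytes).is_none() {
                self.inserted_this_tx.insert(hash);
            }
        }

        fn rollback(&mut self, store: &mut BlobStore) {
            // Undo exactly the blobs this transaction added: O(new blobs).
            for hash in mem::take(&mut self.inserted_this_tx) {
                store.blobs.remove(&hash);
            }
        }

        fn commit(&mut self) {
            // On commit the new blobs simply stay; drop the bookkeeping.
            self.inserted_this_tx.clear();
        }
    }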

Contributor Author

In a different PR, iirc, I had some code to merge blob stores.


    // Clear indices.
    for index in self.indexes.values_mut() {
        index.clear();
Contributor

Another place where `BTreeMap::clear` doesn't re-use its allocation.

@Centril force-pushed the centril/wasmtime-pooling-stack-allocator branch from 3c24662 to c5b41ae on December 10, 2025 at 13:43
@Centril force-pushed the centril/reuse-tx-state branch from 656ea06 to 0b303c0 on December 10, 2025 at 13:47
@Centril force-pushed the centril/wasmtime-pooling-stack-allocator branch 2 times, most recently from dd2be6b to 5edaedc, on December 10, 2025 at 23:37
Base automatically changed from centril/wasmtime-pooling-stack-allocator to master on December 11, 2025 at 08:33
@Centril force-pushed the centril/reuse-tx-state branch from 0b303c0 to 5afc6c6 on December 11, 2025 at 09:14
@Centril changed the base branch from master to centril/subs-less-alloc on December 11, 2025 at 09:14
@Centril force-pushed the centril/reuse-tx-state branch from 5afc6c6 to 5e619ca on December 11, 2025 at 09:49
@Centril force-pushed the centril/subs-less-alloc branch from d23f3ff to 72a3dab on December 18, 2025 at 08:36
Base automatically changed from centril/subs-less-alloc to master on December 19, 2025 at 04:46