## 🐞 Describe the Bug

While upgrading Discourse from Ember 6.6 to 6.10, our QUnit test suite (~5,000 tests) started running out of memory in Chrome on CI. By bisecting `ember-source` versions we traced the regression to Ember 6.8, where the renderer refactoring added multiple new `associateDestroyableChild()` calls, creating deeper destroyable ownership trees.

The underlying issue is in `@glimmer/destroyable`'s `destroy()` function: after destruction completes, the metadata entries in `DESTROYABLE_META` retain stale object references, preventing the garbage collector from reclaiming the destroyed object tree.
Two issues combine to cause this:

1. The `scheduleDestroyed` callback sets `meta.state = DESTROYED_STATE` but never clears the `parents`, `children`, `eagerDestructors`, or `destructors` properties.
2. `removeChildFromParent` has a `parentMeta.state === LIVE_STATE` guard that checks the parent's state. During cascaded destruction (`destroy(parent)` → `destroy(child)`), the child's `scheduleDestroyed` callback is queued before the parent's. When it runs and calls `removeChildFromParent(child, parent)`, the parent's state is already `DESTROYING_STATE` (set synchronously at the start of `destroy(parent)`), so the guard skips the removal.
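The interaction of these two issues can be sketched with a hypothetical, simplified model of the destroy flow (a plain `Map` stands in for the `WeakMap`, and a manual flush stands in for the real `scheduleDestroyed` scheduling; names mirror the report, not the actual implementation):

```javascript
// Hypothetical, simplified model of @glimmer/destroyable's destroy() flow.
const LIVE_STATE = 0;
const DESTROYING_STATE = 1;
const DESTROYED_STATE = 2;

const META = new Map(); // stand-in for the DESTROYABLE_META WeakMap
const scheduled = []; // stand-in for the scheduleDestroyed queue

function meta(obj) {
  let m = META.get(obj);
  if (m === undefined) {
    m = { state: LIVE_STATE, parents: new Set(), children: new Set() };
    META.set(obj, m);
  }
  return m;
}

function associateDestroyableChild(parent, child) {
  meta(parent).children.add(child);
  meta(child).parents.add(parent);
}

function removeChildFromParent(child, parent) {
  const parentMeta = meta(parent);
  // The guard from the report: only a still-live parent is mutated.
  if (parentMeta.state === LIVE_STATE) {
    parentMeta.children.delete(child);
  }
}

function destroy(obj) {
  const m = meta(obj);
  if (m.state >= DESTROYING_STATE) return;
  m.state = DESTROYING_STATE; // issue 2: set before any callback runs
  for (const child of m.children) destroy(child);
  scheduled.push(() => {
    for (const parent of m.parents) removeChildFromParent(obj, parent);
    m.state = DESTROYED_STATE; // issue 1: references are never cleared
  });
}

const parent = {};
const child = {};
associateDestroyableChild(parent, child);
destroy(parent);
scheduled.forEach((cb) => cb()); // flush the scheduleDestroyed queue

// The child's callback ran while the parent was already in
// DESTROYING_STATE, so the guard skipped the removal:
console.log(meta(parent).children.has(child)); // true
console.log(meta(child).parents.has(parent)); // true
```

Both objects end up in `DESTROYED_STATE`, yet the cross-references between their metadata entries are still intact.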
After `destroy(parent)` completes:

- The parent's metadata still holds strong references to all children via `meta.children`
- Each child's metadata still holds a strong reference back to the parent via `meta.parents`
- These cross-references create cycles through the `WeakMap` values: each entry's value (the metadata) holds a strong reference to another entry's key (the related destroyable), keeping the entire graph alive
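To illustrate why these cross-references matter, here is a hypothetical sketch of the reference shape (a plain `Map` stands in for the `WeakMap` so the references are easy to traverse): as long as any single destroyable in a destroyed tree remains reachable, the stale metadata makes every other destroyable in that tree reachable from it.

```javascript
// Hypothetical sketch of the metadata reference shape after destruction.
const META = new Map(); // stand-in for DESTROYABLE_META

function link(parent, child) {
  const p = META.get(parent) ?? { parents: [], children: [] };
  const c = META.get(child) ?? { parents: [], children: [] };
  p.children.push(child);
  c.parents.push(parent);
  META.set(parent, p);
  META.set(child, c);
}

// Build root -> a -> leaf, plus a sibling under root.
const root = { name: 'root' };
const a = { name: 'a' };
const sibling = { name: 'sibling' };
const leaf = { name: 'leaf' };
link(root, a);
link(root, sibling);
link(a, leaf);

// Destruction never cleared these entries, so from one retained leaf
// every other destroyable in the tree is still reachable:
function reachable(from) {
  const seen = new Set([from]);
  const stack = [from];
  while (stack.length > 0) {
    const m = META.get(stack.pop());
    for (const next of [...m.parents, ...m.children]) {
      if (!seen.has(next)) {
        seen.add(next);
        stack.push(next);
      }
    }
  }
  return seen;
}

console.log(reachable(leaf).size); // 4: leaf, a, root, sibling
```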
In workloads that rapidly create and destroy large destroyable trees (such as test suites that spin up an Application per test), these stale references put significant pressure on the garbage collector, causing memory to accumulate and eventually leading to OOM.
## 🔬 Minimal Reproduction
The core issue can be illustrated with this pattern:
```js
import { associateDestroyableChild, destroy } from '@ember/destroyable';

// Create a parent-child tree
const parent = {};
const child = {};
associateDestroyableChild(parent, child);

destroy(parent);

// After the scheduleDestroyed callbacks run:
// - DESTROYABLE_META.get(parent).children still references child
// - DESTROYABLE_META.get(child).parents still references parent
// - Both entries are in DESTROYED_STATE but hold strong cross-references
// - These cross-references through the WeakMap keep the entire tree alive
```
In Discourse's QUnit test suite (~5,000 tests), each test creates and destroys an Application instance. Even with an explicit `app.destroy()` and releasing the app reference, Chrome runs out of memory on CI. Adding the metadata cleanup in the `scheduleDestroyed` callback (see the proposed fix below) resolves the OOM completely.
## 😕 Actual Behavior

After `destroy()` completes and `meta.state` reaches `DESTROYED_STATE`, the metadata retains all `parents`, `children`, `eagerDestructors`, and `destructors` references. Combined with the `LIVE_STATE` guard in `removeChildFromParent`, this means:

- The parent's `meta.children` still references the child: `removeChildFromParent` in the child's `scheduleDestroyed` callback checks `parentMeta.state === LIVE_STATE`, but the parent is already in `DESTROYING_STATE` by that point, so the removal is skipped
- The child's `meta.parents` still references the parent: no code path clears this after destruction
- The `DESTROYABLE_META` WeakMap entries form reference cycles through their values, keeping the entire destroyed tree alive
## 🤔 Expected Behavior

After destruction completes (`meta.state = DESTROYED_STATE`), the metadata should eagerly release all object references so the garbage collector can reclaim the destroyed tree sooner.

Proposed fix in `@glimmer/destroyable`:
```js
scheduleDestroyed(() => {
  iterate(parents, (parent) => {
    removeChildFromParent(destroyable, parent);
  });
  meta.state = DESTROYED_STATE;

  // Release references so GC can reclaim the destroyed tree
  meta.parents = null;
  meta.children = null;
  meta.eagerDestructors = null;
  meta.destructors = null;
});
```
This is safe because the null-outs happen after `meta.state = DESTROYED_STATE` is set, which means:

- `destroy()` will return early via the `meta.state >= DESTROYING_STATE` guard, so these fields are never read again
- `registerDestructor` / `unregisterDestructor` / `associateDestroyableChild` all throw if the object is already destroying
- `removeChildFromParent` has already iterated over `parents` earlier in the same callback
- `_hasDestroyableChildren` would correctly return `false` for a destroyed object
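The safety argument can be sketched on a hypothetical simplified meta object; only the state guard of `destroy()` is modeled here, since the real implementation also walks children and schedules destructors before the guard becomes relevant again:

```javascript
// Hypothetical sketch: once DESTROYED_STATE is set, the nulled fields
// are unobservable, because destroy() bails out on the state check.
const DESTROYING_STATE = 1;
const DESTROYED_STATE = 2;

function finishDestruction(meta) {
  // Mirrors the proposed scheduleDestroyed callback body.
  meta.state = DESTROYED_STATE;
  meta.parents = null;
  meta.children = null;
  meta.eagerDestructors = null;
  meta.destructors = null;
}

function destroy(meta) {
  // The existing re-entrancy guard: at or past DESTROYING_STATE,
  // none of the nulled fields are ever read.
  if (meta.state >= DESTROYING_STATE) return;
  throw new Error('unreachable for a destroyed object in this sketch');
}

const meta = {
  state: DESTROYING_STATE,
  parents: [{}],
  children: [{}],
  eagerDestructors: [],
  destructors: [],
};

finishDestruction(meta);
destroy(meta); // returns early via the state guard; no field access

console.log(meta.state === DESTROYED_STATE); // true
console.log(meta.children); // null
```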
## 🌍 Environment

- Ember: 6.10.1 (the destroyable bug has existed since the beginning, but became critical after the renderer changes in 6.8)
- Node.js: 22.x
- OS: Linux (CI), macOS (local)
- Browser: Chrome 145
## ➕ Additional Context

We are currently patching `ember-source` with the above fix as a workaround (discourse/discourse#38220). With the patch applied, our CI passes with no OOM.