412 recovery and transactional snapshots #418

Draft
nphias wants to merge 5 commits into main from
412-crash-resilient-structural-undoredo-for-open-transactions

Conversation

@nphias
Collaborator

@nphias nphias commented Mar 10, 2026

### Please note: this is branched off 421, not main. Merge 422 first, then 418.

  • Cleaned up storageConfig: removed hardcoded defaults; startup now fails if no config is found
  • Removed an unused function in the Holochain network config
  • Moved provider-specific structs/impls into files under providers (ipfs, local)
  • Abstracted a ProviderConfig trait for common fields
  • Added snapshot_recovery_store to the BaseReceptor for all providers that have it enabled
  • Created a common setup file for shared functions
  • Added code to create a snapshot database for each BaseReceptor (local/Holochain setup)
  • Improved async performance for multi-provider setups
  • Added Debug to BaseReceptor (manual fmt for the relevant fields)
  • Created a transaction store with an embedded schema in the recovery crate
  • Added transaction snapshot logic
  • Created a unit test covering the full TransactionStore snapshot API
  • Created a ClientSession wrapper around client_context, with functions for using the recovery store
  • Receptors now use ClientSession to access the store and context (stores are pre-created and available via BaseReceptor)
  • The conductora crate uses HolonError, so started upgrading error handling in setup to use HolonError

TODO in follow-up issue:

  • create MapRequest commands for undo/redo/disable persist/disable undo and wire them up in the receptors (see unit tests for API examples)
  • final touches to fulfill any outstanding requirements

A lot of new code was added and 50+ existing files were updated.
Most requirements are met; I would like to merge part 1 and create a refinements issue
for the remaining final touches.

Definition of Done (Section 7)

| Criterion | Status |
| --- | --- |
| Structural undo/redo stacks per open transaction | ✅ Done |
| Checkpoint after successful command completion | ✅ Done |
| Redo stack clears on new undoable command | ✅ Code done, ❌ not tested |
| disable_undo metadata behavior | ✅ Done |
| snapshot_after policy hook (mock acceptable) | ❌ Not implemented |
| SQLite schema (recovery_session, recovery_checkpoint, indexes) | ✅ Done |
| Snapshot blobs from wire serializer path | ✅ Done (export_staged/transient_holons → SerializableHolonPool) |
| Crash/restart restores consistent state + stacks | ✅ Code done, ⚠️ test doesn't verify content |
| Commit/rollback destroys history + deletes snapshot | ✅ Done (CASCADE) |
| Tests cover all areas | ⚠️ Partial — see gaps below |

Gaps to Close

| # | Gap | Priority | Suggested Action |
| --- | --- | --- | --- |
| 1 | snapshot_after policy hook | Medium | Add a stub trait / config flag; "mocked trigger acceptable until Phase 3" per spec |
| 2 | Redo-clearing test | Low | Add: persist A → undo A → persist B → assert can_redo() == false |
| 3 | Blob roundtrip assertion | Medium | After undo/recover, assert snapshot.staged_holons / transient_holons match original |
| 4 | Crash recovery simulation | Medium | Persist to temp file → drop store → reopen → recover_latest() → assert content matches |
| 5 | Partial-write atomicity | Low | Arguably covered by SQLite guarantees; optional stress test |
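The redo-clearing check in gap 2 is small enough to sketch directly. The following is a simplified, hypothetical model of the per-transaction undo/redo stacks (the names `UndoRedoStacks`, `persist`, and `can_redo` are illustrative, not the actual TransactionStore API):

```rust
// Hypothetical simplified model of per-transaction undo/redo stacks.
#[derive(Clone, Debug, PartialEq)]
struct Snapshot(String);

#[derive(Default)]
struct UndoRedoStacks {
    undo: Vec<Snapshot>,
    redo: Vec<Snapshot>,
}

impl UndoRedoStacks {
    /// Checkpoint after a successful undoable command.
    fn persist(&mut self, snap: Snapshot) {
        self.undo.push(snap);
        // Any new undoable command invalidates the redo history.
        self.redo.clear();
    }

    /// Move the current checkpoint onto the redo stack and
    /// expose the previous checkpoint for restoration.
    fn undo(&mut self) -> Option<&Snapshot> {
        let current = self.undo.pop()?;
        self.redo.push(current);
        self.undo.last()
    }

    fn can_redo(&self) -> bool {
        !self.redo.is_empty()
    }
}
```

Against a model like this, the gap-2 test reduces to: persist A, undo, persist B, and assert `can_redo()` is false.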

@nphias nphias linked an issue Mar 10, 2026 that may be closed by this pull request
@evomimic evomimic self-requested a review March 13, 2026 00:15
@evomimic
Owner

This review is now based on the post-419 architecture (per MAP Commands Spec), where MAP command ingress is moving to the Runtime layer and holochain_receptor is no longer expected to front TS-client IPC.

Review Findings

A lot of work went into PR 418. It delivers a substantial amount of new recovery/persistence infrastructure, including a recovery store implementation and SQLite-backed persistence built from scratch, and it arrived faster than I was expecting. The bulk of the code structure and supporting logic seems good to me.

It's also pretty cool to see this level of resilience being added to Holons Core at this early stage!

There are a few concerns I wanted to note and some changes I’m requesting.

Dev-Mode Enhancements

The introduction of dev-mode changes initially caught me off guard, since they are outside the core scope of Issue 412. In general, I would prefer not to expand a feature PR beyond the specific issue/spec it is meant to implement, especially when the additional work is not tied to a separately defined issue or spec.

That said, at this point I do not think the dev-mode changes are a strong reason to block or split the PR. They are outside the core Issue 412 scope, but they still appear useful under the new architecture (despite the narrowed scope of the holochain receptor), and they are not a major source of the 418/419 merge-conflict surface.

Architectural Concerns

Coupling of Recovery Persistence to "Provider"

Recovery persistence appears to be coupled to provider setup, especially the Holochain provider path. That seems architecturally inverted. The need for snapshot persistence and crash recovery arises from IntegrationHub’s own transaction-staging responsibilities, not from any particular external provider. It is not a Holochain-specific concern.

Recovery should therefore be owned at the transaction/runtime layer and backed by a persistence boundary, rather than being provisioned through provider-specific setup.

SQLite Not Modeled as a Receptor

I expected recovery persistence to be introduced behind a receptor abstraction, consistent with the deployment architecture’s treatment of IntegrationHub/environment touch points.

PR 418 instead embeds SQLite directly inside the host recovery subsystem. I can see the rationale for treating the recovery store as an internal persistence mechanism inside holons_recovery rather than as an environment-facing service boundary. However, that is in tension with the broader architectural story that receptors mediate IntegrationHub’s touch points with its environment.

Since this introduces a new persistent local storage dependency (SQLite) and writes to the host filesystem, I think there is a strong argument that it belongs behind a receptor-style abstraction rather than being embedded directly in the recovery subsystem.

At minimum, this highlights the need for clearer architectural guidance on what does and does not count as a receptor within the IntegrationHub.

Recovery / Persistence Store Review

Two logic issues need to be fixed before merge.

  1. host/crates/holons_recovery/src/transaction_store.rs
    undo() currently pops the top undo snapshot and returns/restores that same snapshot. With the current snapshot model, that restores the current state rather than the previous state, so structural undo semantics are incorrect. The implementation needs to restore the prior checkpoint after the pop (or otherwise maintain an initial baseline/current-vs-previous distinction).

  2. host/conductora/src/setup/app_builder.rs
    create_window() checks for any enabled Holochain provider type, then looks up provider config by the literal key "holochain". The checked-in config now uses named provider entries like "holochain_dev" / "holochain_production", so this lookup can fail even when Holochain is enabled. The window/provider resolution needs to follow the actual enabled provider entry rather than hardcoding "holochain".
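The undo fix in item 1 can be illustrated with a minimal sketch (hypothetical types, not the real transaction_store.rs code): the popped snapshot is the state being left, so it goes onto the redo stack, and what gets restored is the snapshot now on top of the undo stack, falling back to an initial baseline when the stack is empty.

```rust
// Sketch of pop-then-restore-previous undo semantics.
// Checkpoint and TransactionHistory are hypothetical stand-ins.
#[derive(Clone, Debug, PartialEq)]
struct Checkpoint(&'static str);

struct TransactionHistory {
    baseline: Checkpoint, // state before the first checkpoint
    undo: Vec<Checkpoint>,
    redo: Vec<Checkpoint>,
}

impl TransactionHistory {
    fn new(baseline: Checkpoint) -> Self {
        Self { baseline, undo: Vec::new(), redo: Vec::new() }
    }

    fn checkpoint(&mut self, c: Checkpoint) {
        self.undo.push(c);
        self.redo.clear();
    }

    /// Returns the state to restore. The buggy version returned the
    /// popped checkpoint itself, i.e. the *current* state.
    fn undo(&mut self) -> Option<Checkpoint> {
        let current = self.undo.pop()?;
        self.redo.push(current);
        Some(self.undo.last().cloned().unwrap_or_else(|| self.baseline.clone()))
    }
}
```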

Recommendation

Before resolving the remaining 418/419 merge conflicts, I think the recovery/persistence store should first be decoupled from provider-specific setup and elevated to its own receptor/provider.

As noted above, the need for recovery arises from IntegrationHub’s own transaction-staging responsibilities. It is not a Holochain concern, and it is not logically owned by any particular provider. At the same time, recovery persistence introduces its own external boundary: a SQLite-backed store on the host filesystem. Given the way providers/receptors are used elsewhere in the architecture, that makes recovery persistence a better fit for its own receptor/provider abstraction than as a configurable option attached to other providers.

My recommendation would be:

  1. keep the dev-mode changes
  2. refactor recovery persistence so it is modeled as its own receptor/provider rather than being provisioned through Holochain setup or other provider-specific setup
  3. fix the two correctness issues in transaction_store.rs and app_builder.rs
  4. then resolve the remaining 418/419 conflicts in favor of the new Runtime / MAP Commands direction
  5. defer full command-path integration until the post-merge Runtime follow-up

That seems like the cleanest path to reduce conflict surface, avoid locking in the wrong ownership model, and merge 418 without regression while preserving the architectural direction introduced by 419.

@nphias
Collaborator Author

nphias commented Mar 15, 2026

Without looking at the changes and merge conflicts yet, I'm just going to register my first reaction before diving deeper into the perspective given and rewriting a response.

Receptors, by my initial design (with the exception of the local receptor, which we haven't built yet), are holonic network storage options, Holochain being one of them.

The recovery snapshot option is a core feature that is available to every provider, a provider being the storage configuration that is later encoded into a BaseReceptor to instantiate a real receptor (builder pattern).

The recovery host crate encapsulates the storage implementation logic and vendor; these can be swapped out at any time independently. For now we have chosen SQLite.

SQLite is a storage crate; it is not a receptor. There is no conceptual match there.

The architecture is both UI and receptor driven. Configuration, database creation, and other startup prerequisites are performed by the app-builder setup. The recovery/snapshot database is optional per receptor type: if it is configured, the snapshot database is created on startup for that receptor. This is not a runtime operation; it is a core startup operation, including recovery from crash, which restores all receptors with their respective holon cache of snapshots.

I don't really agree with, or perhaps don't fully understand, this statement:
"Recovery should therefore be owned at the transaction/runtime layer and backed by a persistence boundary, rather than being provisioned through provider-specific setup."

With regard to the other two additional features that came with this PR, namely logging and dev-mode, I agree they should have been separate issues. Out of frustration with needing a better build, test, and log experience, I bundled them in.

I have made a separate branch off main to merge in just dev-mode+logging.
I will revise the recovery-snapshot PR and base it off the dev-mode+logging branch; later it can be rebased to main after dev-mode+logging is merged.

@nphias nphias self-assigned this Mar 15, 2026
@nphias nphias marked this pull request as draft March 15, 2026 13:42
@nphias nphias changed the title from "dev_mode, snapshots, logging" to "recovery and transactional snapshots" Mar 15, 2026
@nphias nphias force-pushed the 412-crash-resilient-structural-undoredo-for-open-transactions branch from 1474ba2 to 33cb383 on March 18, 2026 08:03
@nphias nphias changed the base branch from main to 421-dev-modelogging March 18, 2026 08:16
@nphias
Collaborator Author

nphias commented Mar 18, 2026

  1. Keep the dev-mode changes — done in a separate branch (421) + PR

  2. Fix the two correctness issues in transaction_store.rs and app_builder.rs — done; undo works fine, app_builder made provider-agnostic, no string matching

  3. Resolve the remaining 418/419 conflicts in favor of the new Runtime / MAP Commands direction — done

  4. Defer full command-path integration until the post-merge Runtime follow-up — deferred

  5. Refactor recovery persistence so it is modeled as its own receptor/provider rather than being provisioned through Holochain setup or other provider-specific setup — this needs a greater architectural discussion

### Please note: this is branched off 421, not main. Merge 422 first, then 418.

422 PR merges into main
        ↓
Change PR 418 base to main on GitHub
        ↓
Does GitHub show conflicts?
        ↓
  YES → git rebase origin/main, then git push --force-with-lease
  NO  → ready to merge

@nphias nphias marked this pull request as ready for review March 18, 2026 09:16
@evomimic
Owner

TL;DR:

The core disagreement isn’t about SQLite—it’s about architectural boundaries. I see the IntegrationHub as independent of any storage provider, and Receptors as boundary-layer components between the IntegrationHub and external technologies/services (not between the TS Client and the IntegrationHub). Under that model, recovery persistence is not a capability of the Holochain provider, but an independent IntegrationHub concern that should be backed by its own provider/receptor path.

Architectural Clarifications

I think this exchange highlights that although we’ve integrated the Conductora and MAP implementations, we didn’t fully reconcile their architectural frameworks. It is important that we do that now.

I want to acknowledge that you introduced the Receptor concept into the architecture and, like you, feel Receptors play a critical role. But I think our views currently differ in two important ways:

  1. I see them playing a broader role than may have been initially imagined.
  2. I situate them at a different place in the architecture.

You described Receptors as holonic network storage options, with Holochain being one of them. I think the current architecture now requires a broader conception than that.

  1. We need more integration options than just holonic network storage. Our persistent recovery store is not a holonic network storage alternative to Holochain; it serves a different purpose entirely. Likewise, we may need a large-object store for assets such as video files. IPFS and/or Filecoin might be options for this, but neither is a holonic network store. And beyond storage, I envision external integrations with REST, SMTP, ActivityPub, Signal, BeckN, and other services. Having protocol-specific Receptors could off-load the details of protocol interaction and make these services pluggable.

  2. The choice of whether and when a particular storage operation is needed is not, and should not be, made at ingress from the TS Client to the IntegrationHub. Consider a simple get_property_value command on a HolonReference to a SavedHolon. If the SavedHolon is already in the holons cache, the request can be handled solely within the IntegrationHub itself. It is only on a cache fault that a call on the primary backing store, whether Holochain, Neo4j, or anything else, is required.

  3. The IntegrationHub is not itself a holonic network storage provider. It functions as a transaction manager, dance dispatcher, and an in-memory holons manager for saved, staged, and transient holons. Soon it will also be the Open Cypher query engine and TrustChannel executor. It has no intrinsic Holochain dependency, and in fact it should not have a hard dependency on any storage or external service provider. It has been carefully designed and implemented so that we could swap out Holochain for, say, Neo4j as our primary holonic network storage provider with no change to the IntegrationHub itself, and no change to how calls from the TS Client to the IntegrationHub are routed.

Where I think the Receptor concept is most useful is not at the boundary between the TS Client and the IntegrationHub, but at the boundary between the IntegrationHub and all external integration points. In the biological metaphor, the IntegrationHub is like a cell and Receptors sit on its membrane, handling interactions with the extra-cellular world.

This can be seen in the Deployment Architecture Diagram.


The Holochain receptor does not sit between the TS Client and the IntegrationHub. It sits between the IntegrationHub and the Holochain Conductor. By the same logic, local persistence and other external technologies belong on that same outer boundary.

So what is a “Receptor”? I think the important thing here is to define the concept precisely, rather than getting stuck on the term itself.

My proposed definition of that concept would be something like this:

A Receptor is a boundary-layer integration component that mediates interactions between the IntegrationHub and some external technology, service, or persistence substrate.

That would include cases where the external integration is:

  • a holonic network storage provider such as Holochain or Neo4j
  • a local persistence technology such as SQLite
  • an object store such as IPFS/Filecoin
  • an external service such as REST, SMTP, ActivityPub, Signal, BeckN, etc.

Under that definition, the essential characteristic is not “holonic network storage” specifically. The essential characteristic is that the component sits at the boundary between the IntegrationHub and the external world.

If it feels like too much of a stretch to extend the definition of the term "Receptor" in that way, we can absolutely choose a different name. The key thing for me is that we define the concept precisely and align the architecture around it.


Specific Responses to Earlier Comments

...configuration, database creation and other startup prerequisites are performed by the app-builder setup. The recovery/snapshot database is optional per receptor type: if it is configured, the snapshot database is created on startup for that receptor. This is not a runtime operation; it is a core startup operation, including recovery from crash, which restores all receptors with their respective holon cache of snapshots.

On the point about setup-time provisioning: I think we may be conflating two separate questions:

  1. When should the snapshot database be provisioned?
  2. Which architectural component should own or depend on it?

I agree that provisioning of a snapshot database should be configuration-driven and performed once during app-builder setup, not provisioned dynamically as part of runtime command execution. In that respect, setup-time creation is entirely reasonable.

But that does not imply that the recovery store is “for the Holochain receptor.”

The interface and implementation of holons_recovery are completely independent of the holonic network storage provider. The recovery component has no Holochain dependency, and swapping Neo4j for Holochain would not affect holons_recovery at all.

That is why I do not see the recovery store as something that should be provisioned for the Holochain receptor. Rather, it is a separate persistence capability that serves IntegrationHub transaction/snapshot responsibilities and can be attached alongside whichever primary storage/provider configuration is active.

So my concern is not with setup-time provisioning. My concern is with coupling the recovery store conceptually to the Holochain receptor, when it is in fact independent of the holonic network storage provider and should remain so.

To put it another way: Holochain may be one backing store for holonic network state, while the recovery store is a different backing store for transaction snapshot persistence. They serve different responsibilities and should not be modeled as if one is subordinate to the other.


The recovery snapshot option is a core feature that is available to every provider, a provider being the storage configuration that is later encoded into a BaseReceptor to instantiate a real receptor (builder pattern).
The recovery host crate encapsulates the storage implementation logic and vendor; these can be swapped out at any time independently. For now we have chosen SQLite.
SQLite is a storage crate; it is not a receptor, there is no conceptual match there.

I think we are actually aligned on two important points:

  1. I agree that there is a distinction between a provider and a receptor. If I understand your model correctly, a provider is an integration-component type/configuration, while a receptor is the runtime binding of that provider-type to a concrete implementation technology. The app-builder phase is where this binding happens. I completely agree with that.

  2. I also agree that SQLite itself is just the current storage vendor behind the recovery capability. It is not the architectural concept at stake here.

Where I think we still differ is this: I do not see persistent recovery as a capability offered by the holonic network storage provider. Whether persistent recovery is available for transactions and the holons they are staging is a design decision of the IntegrationHub. Holochain, Neo4j, or any other primary holonic network store is orthogonal to that.

So my view is that persistent recovery should itself be modeled as its own provider-type, which is then bound during app-builder setup to some concrete implementation technology, SQLite being the current one.


Finally, on my earlier statement that:

Recovery should therefore be owned at the transaction/runtime layer and backed by a persistence boundary, rather than being provisioned through provider-specific setup.

I think that sentence compressed too many ideas together, so let me restate it more clearly.

I am not arguing that the recovery store must be provisioned during runtime command execution rather than during app-builder setup. As noted above, I agree that setup-time provisioning is appropriate.

What I mean by “owned at the transaction/runtime layer” is that the reason recovery exists is because the IntegrationHub manages open transactions, staged holons, and transactional snapshots. In that sense, recovery is conceptually owned by the IntegrationHub’s transaction model, not by Holochain or any other primary storage provider.

What I mean by “backed by a persistence boundary” is that the actual storage substrate used to persist those snapshots should be treated as its own integration concern, with its own provider/receptor path and its own implementation technology choice.

So the distinction I’m making is:

  • the need for recovery comes from the IntegrationHub transaction model
  • the provisioning of the recovery store can still happen during app-builder setup
  • but that provisioning should not be modeled as if recovery were a capability of the Holochain provider, because it is orthogonal to the choice of primary holonic network storage provider

@nphias
Collaborator Author

nphias commented Mar 23, 2026

Yeah, a first reaction, to rewrite later.

As I said before, the recovery snapshots are available to all receptors, not just Holochain, and are bootstrapped at startup.

I have a notion of these receptors as network storage for actually writing holon data, all with git-style versioning ability, whereas other storage options in the hub design are more read-only and don't need transaction recovery.

I also see an architecture where the local receptor is the main holon storage point, and Holochain, GitHub, Radicle, and other remote provider options are secondary sync points.

If everything goes through the local receptor, it might be the case that we only need one transaction snapshot recovery database and can, as you suggest, move this out of the receptor configuration options.

With a 20-second startup time using the Holochain provider in normal mode, I don't see that as a practical option for the home/root space.

@evomimic
Owner

@nphias

My goal is to get explicit architectural alignment so this PR can move to merge-ready.

I know your previous note was an initial reaction; this follow-up is to ask for a clear agree/disagree on each point below.

  1. A Receptor is defined as "a boundary-layer integration component that mediates interactions between the IntegrationHub and some external technology, service, or persistence substrate."
  2. The IntegrationHub is not the Holochain Receptor. In fact, it is not a Receptor at all; it orchestrates interactions with Receptors.
  3. The Holochain Receptor’s scope is to adapt IntegrationHub storage/dance operations to Holochain Conductor APIs and protocol semantics. It should not own IntegrationHub transaction policy, command ingress policy, or cross-provider concerns such as recovery persistence.
  4. The Holochain Receptor does not sit between the TS Client and the IntegrationHub. It sits between the IntegrationHub and the Holochain Conductor.
  5. The choice of whether and when a particular storage operation is needed should not be made at ingress from the TS Client to the IntegrationHub. That decision belongs inside IntegrationHub command execution.
  6. The architecture needs integration types beyond holonic network storage, including recovery persistence (this PR), large-object storage (e.g., IPFS/Filecoin), and external service protocols.
  7. Recovery persistence is not a capability of the Holochain Receptor, but an independent IntegrationHub concern that should be backed by its own provider/receptor path.
  8. Recovery persistence should be modeled as its own provider-type, provisioned during app-builder setup, and currently implemented with SQLite-backed storage.

If you disagree with any item, please call out the specific item number and your alternative wording so we can converge quickly.

@evomimic
Owner

Here is what I see as the implementation implications of the architectural points listed in my previous comment.

I'm providing them here:

  1. To make the implications of the architectural statements concrete.
  2. To provide immediate guidance so that, if you agree with the architecture as defined, you can begin work immediately.

Concrete Code Impact (Assuming Architectural Alignment)

1. Model recovery persistence as its own provider-type (not a per-provider capability flag)

Move from “snapshot recovery as an option on unrelated providers” to “recovery persistence as its own provider configuration type.”

Primary files:

  • host/conductora/src/config/storage_config.rs
  • host/conductora/src/config/providers/mod.rs
  • host/conductora/src/config/providers/* (add/adjust recovery provider config shape)
  • host/conductora/src/config/storage.json (reflect new provider entry)

Expected outcome:

  • Recovery persistence is configured independently of Holochain/Neo4j/etc.
  • Provider selection remains explicit and composable.
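As a rough sketch of the configuration shape implied here (all names hypothetical; the real storage_config.rs types will differ), recovery persistence becomes a peer provider variant rather than a flag on other providers:

```rust
// Hypothetical provider-type model: recovery is its own variant,
// selectable independently of the primary storage provider.
#[derive(Debug, PartialEq)]
enum ProviderConfig {
    Holochain { app_id: String },
    Local { path: String },
    Recovery { db_path: String }, // e.g. an SQLite file; not tied to Holochain
}

/// Find the recovery provider entry, if one is configured.
fn recovery_config(providers: &[ProviderConfig]) -> Option<&ProviderConfig> {
    providers
        .iter()
        .find(|p| matches!(p, ProviderConfig::Recovery { .. }))
}
```

With this shape, whether recovery is enabled depends only on whether a Recovery entry is present in the provider list, never on which primary storage provider is configured.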

2. Remove conceptual coupling of recovery store to Holochain receptor setup

Any wiring that makes recovery appear “for Holochain” should be removed.

Primary files:

  • host/conductora/src/setup/providers/holochain/setup.rs
  • host/conductora/src/setup/common_setup.rs (or equivalent setup orchestration location)

Expected outcome:

  • Holochain setup handles Holochain concerns only.
  • Recovery store setup is independent and reusable across primary storage choices.

3. Keep provisioning at app-builder setup (not runtime command path), but via recovery provider path

Provision recovery DB once during startup/setup, driven by recovery provider config.

Primary files:

  • host/conductora/src/setup/app_builder.rs
  • host/conductora/src/setup/common_setup.rs
  • host/conductora/src/setup/provider_registry.rs / provider integration setup path

Expected outcome:

  • Setup-time creation remains intact (agreed behavior).
  • Ownership/modeling follows the new architectural boundary.

4. Stop treating recovery as receptor-local payload on primary storage receptors

If BaseReceptor currently carries recovery store state that is specific to primary storage receptor construction, refactor so recovery is injected through IntegrationHub/session boundary instead.

Primary files:

  • host/crates/holons_client/src/shared_types/base_receptor.rs
  • host/crates/holochain_receptor/src/holochain_receptor.rs
  • host/crates/holons_receptor/src/receptors/local_receptor/local_receptor.rs
  • host/crates/holons_client/src/client_context.rs

Expected outcome:

  • Recovery capability is not subordinate to Holochain receptor instantiation.
  • Session/runtime can use recovery regardless of primary storage provider.
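Injected through the session boundary, that could look roughly like the following (all names hypothetical; the real ClientSession and recovery types differ). The session holds an optional trait-object handle, so any primary receptor can reach recovery without owning it:

```rust
use std::collections::HashMap;

// Hypothetical persistence boundary for snapshot storage.
trait RecoveryStore {
    fn save_snapshot(&mut self, session_id: &str, blob: Vec<u8>);
    fn latest(&self, session_id: &str) -> Option<&[u8]>;
}

// In-memory stand-in; the real implementation would be SQLite-backed.
#[derive(Default)]
struct InMemoryRecovery {
    snapshots: HashMap<String, Vec<u8>>,
}

impl RecoveryStore for InMemoryRecovery {
    fn save_snapshot(&mut self, session_id: &str, blob: Vec<u8>) {
        self.snapshots.insert(session_id.to_string(), blob);
    }
    fn latest(&self, session_id: &str) -> Option<&[u8]> {
        self.snapshots.get(session_id).map(|v| v.as_slice())
    }
}

// Hypothetical session shape: recovery is optional and provider-agnostic.
struct ClientSession {
    recovery: Option<Box<dyn RecoveryStore>>,
}
```

Because the handle is a trait object on the session, neither the Holochain receptor nor any other primary storage receptor needs to know which recovery implementation is bound at setup time.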

5. Keep holons_recovery implementation vendor-agnostic and mostly unchanged

holons_recovery should remain independent of Holochain and other primary providers; most change should be wiring/ownership, not store internals.

Primary files:

  • host/crates/holons_recovery/src/recovery_store.rs
  • host/crates/holons_recovery/src/transaction_store.rs
  • host/crates/holons_recovery/src/transaction_snapshot.rs

Expected outcome:

  • SQLite remains current implementation choice.
  • Future swap to other persistence tech remains straightforward.

@nphias nphias changed the base branch from 421-dev-modelogging to main March 25, 2026 04:09
@nphias nphias force-pushed the 412-crash-resilient-structural-undoredo-for-open-transactions branch from 33cb383 to 58df61c on March 25, 2026 12:01
@nphias
Collaborator Author

nphias commented Mar 25, 2026

After the conversation yesterday, I think we mostly agree on the way forward.
My plan is to create a separate recovery provider, which effectively becomes a local recovery receptor since it reads and writes data to the filesystem.

I have rebased off main; a few things that were broken or added last minute with the 422 merge conflicted with this PR. I have reverted them, as they regress my app_builder cleanup commit. I will comment here more once I finish this.

@nphias nphias marked this pull request as draft March 25, 2026 12:40
@nphias nphias changed the title from "recovery and transactional snapshots" to "412 recovery and transactional snapshots" Mar 26, 2026
Successfully merging this pull request may close these issues.

Crash-Resilient Structural Undo/Redo for Open Transactions
