Auto-splice on-chain funds into the LSP channel #38
Plumb auto-splice manager configuration through the napi surface ahead of the manager itself. New optional `splice` field on MdkNodeOptions with two knobs: `enabled` (default true) and `pollIntervalSecs` (default 30, matching mdkd). Cache the parsed LSP PublicKey on MdkNode rather than re-parsing it per splice tick once the manager lands.
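The defaults-resolution described above can be sketched roughly like this (illustrative only — `SpliceOptions` and `resolve` are assumed names, not the PR's actual types):

```rust
use std::time::Duration;

// Hypothetical napi-side options: every field optional.
pub struct SpliceOptions {
    pub enabled: Option<bool>,           // default true
    pub poll_interval_secs: Option<u32>, // default 30, matching mdkd
}

// Hypothetical fully-resolved internal config.
#[derive(Debug, PartialEq)]
pub struct SpliceConfig {
    pub enabled: bool,
    pub poll_interval: Duration,
}

pub fn resolve(opts: Option<SpliceOptions>) -> SpliceConfig {
    let (enabled, secs) = match opts {
        Some(o) => (o.enabled.unwrap_or(true), o.poll_interval_secs.unwrap_or(30)),
        None => (true, 30), // missing `splice` field means all defaults
    };
    SpliceConfig {
        enabled,
        poll_interval: Duration::from_secs(secs as u64),
    }
}
```

Resolving once up front keeps the manager loop free of `Option` handling, the same way the parsed LSP PublicKey is cached instead of re-parsed per tick.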
Bring over the background splice loop without wiring it into the node
lifecycle yet. Wiring (start_receiving / stop_receiving) lands in the
next commit so the manager change reviews cleanly on its own.
* Bump the ldk-node pin to 5dce44b6 (3 commits ahead of the prior
pin on the same branch). That rev exposes get_max_splice_in_amount,
which the manager needs to dry-run BDK selection at the live
channel funding feerate before calling splice_in.
* Add tokio-util (default-features = false) for CancellationToken.
* New src/splice_manager.rs ported from mdk::splice_manager. The
pure decision logic (decide, advance_in_flight) is identical to
the source; the effectful shell is adapted to lightning-js:
- takes Arc<Node> + LSP PublicKey directly (no MdkClient);
- silent — drops MdkEvent emission, diagnostics go to stderr
via eprintln! with the existing [lightning-js] prefix;
- SpliceError + map_splice_error inlined (no shared MdkError).
* Port all 6 mdkd unit tests on decide() / advance_in_flight() plus
3 trivial variants. 9 tests, all green.
Module is added but not yet spawned — compiles clean (gated under
#[allow(dead_code)] until next commit wires it).
Wires the splice_manager module into the session lifecycle so consumers (mdk-checkout) get auto-splice without changing their nextEvent() loop. Spawn on start_receiving, cancel+join on stop_receiving and destroy.

MdkNode now owns a long-lived single-worker tokio runtime, built once in new() and reused across every start/stop cycle. A current-thread runtime was the first thought, but it only drives tasks while someone is actively calling block_on, so a fire-and-forget spawn would just sit there. multi_thread(1) gives us one self-driving worker thread without us having to babysit a driver thread ourselves.

node is Option<Arc<Node>> so the manager can hold a refcount for the lifetime of its task. The &Node accessor stays the same via Arc deref; node_arc() hands out clones for background tasks. destroy() shuts down the splice task before dropping the Node; otherwise the manager's Arc would keep the inner Node alive past the JS object's lifetime and defeat the point of destroy().

stop_receiving cancels the token and blocks on the JoinHandle before calling node.stop(). The manager loop selects over shutdown.cancelled(), so the join returns promptly instead of waiting out the poll interval. Doing this in the other order would let the loop observe a stopped node mid-tick.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b8279a4af6
    .poll_interval_secs
    .map(|s| Duration::from_secs(s as u64))
    .unwrap_or(default.poll_interval),
Reject zero poll interval for splice manager
Clamp or validate poll_interval_secs to a positive value before building Duration: a user-provided 0 currently becomes Duration::from_secs(0), and the manager loop then repeatedly hits sleep(0) and tick() as fast as possible. In production this can create a hot loop that pegs CPU and repeatedly calls wallet/channel APIs (and potentially splice RPCs) instead of polling.
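One way to apply the suggestion (a sketch; the clamp-vs-reject choice and the `MIN_POLL_INTERVAL` floor are assumptions, not the PR's code):

```rust
use std::time::Duration;

const DEFAULT_POLL_INTERVAL: Duration = Duration::from_secs(30);
// Hypothetical floor: keeps a user-provided 0 from turning the manager
// loop into a busy loop of sleep(0) + tick().
const MIN_POLL_INTERVAL: Duration = Duration::from_secs(1);

fn resolve_poll_interval(poll_interval_secs: Option<u32>) -> Duration {
    poll_interval_secs
        .map(|s| Duration::from_secs(s as u64).max(MIN_POLL_INTERVAL))
        .unwrap_or(DEFAULT_POLL_INTERVAL)
}
```

Silently clamping avoids a new error path on the napi surface; rejecting with an error at construction would be the stricter alternative.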
if self.splice_cfg.enabled {
    // Defensive: if a prior session leaked a task (or start_receiving is
    // double-invoked), cancel + join the previous one before spawning.
    self.shutdown_splice_task();
    let shutdown = CancellationToken::new();
    let join = splice_manager::spawn(
        self.node_arc(),
        self.lsp_pubkey,
        self.splice_cfg,
        shutdown.clone(),
        self.splice_runtime.handle(),
    );
    *self.splice_task.lock().unwrap() = Some(SpliceTask { shutdown, join });
Tie auto-splice task lifecycle to all node stop paths
The splice worker is started from start_receiving but only canceled in stop_receiving/destroy, so callers that use the existing stop() path after start_receiving() leave the background task alive. That task can continue polling and attempting work against a stopped node, which is surprising for a stop operation and can cause unintended background activity until destroy is called.
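The fix the review is asking for amounts to funneling every stop path through one teardown helper. A std-only stand-in (the PR uses tokio and CancellationToken; all names here are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};
use std::time::Duration;

// Stand-in for SpliceTask { shutdown: CancellationToken, join: JoinHandle }.
struct SpliceTask {
    shutdown: Arc<AtomicBool>,
    join: JoinHandle<()>,
}

#[derive(Default)]
struct NodeShell {
    splice_task: Mutex<Option<SpliceTask>>,
}

impl NodeShell {
    fn start_receiving(&self) {
        let shutdown = Arc::new(AtomicBool::new(false));
        let flag = shutdown.clone();
        let join = thread::spawn(move || {
            while !flag.load(Ordering::Relaxed) {
                thread::sleep(Duration::from_millis(5)); // poll-tick stand-in
            }
        });
        *self.splice_task.lock().unwrap() = Some(SpliceTask { shutdown, join });
    }

    // Single teardown helper shared by every stop path.
    fn shutdown_splice_task(&self) {
        let task = self.splice_task.lock().unwrap().take();
        if let Some(t) = task {
            t.shutdown.store(true, Ordering::Relaxed); // cancel first
            let _ = t.join.join();                     // then wait it out
        }
    }

    // Per the review: stop() also funnels through the teardown helper so
    // the worker cannot outlive a stopped node.
    fn stop(&self) {
        self.shutdown_splice_task();
        // ...then stop the inner node...
    }
}
```

Because the helper takes the task out of the Mutex before joining, a double stop (or stop after stop_receiving) is a harmless no-op.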
Auto-splice on-chain funds into the LSP channel
Ports the auto-splice manager from mdkd to lightning-js so JS consumers (mdk-checkout) get liquidity consolidation without any changes on their side. After a JIT channel closes and a fresh one opens for the same wallet, the sweep sits on-chain and is useless for routing until something splices it back into a channel. This manager does that.
How it works
A background task is spawned on startReceiving, polled every pollIntervalSecs (default 10), and torn down on stopReceiving/destroy. Each tick: if there is a spendable on-chain balance and a usable LSP channel, splice everything available into the largest such channel. Skip the tick if a splice is already in flight; never fragment into a smaller channel.
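The per-tick rule can be sketched as a pure function (illustrative — the real decide() signature and types in splice_manager.rs are assumptions here):

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Skip, // in flight, nothing spendable, or no usable channel
    SpliceIn { channel_id: u64, amount_sats: u64 },
}

struct ChannelInfo {
    id: u64,
    capacity_sats: u64,
    usable: bool,
}

fn decide(in_flight: bool, spendable_onchain_sats: u64, channels: &[ChannelInfo]) -> Decision {
    // Never overlap splices; nothing to do with an empty on-chain balance.
    if in_flight || spendable_onchain_sats == 0 {
        return Decision::Skip;
    }
    // Pick the largest usable channel; never fragment into a smaller one.
    match channels.iter().filter(|c| c.usable).max_by_key(|c| c.capacity_sats) {
        Some(best) => Decision::SpliceIn {
            channel_id: best.id,
            amount_sats: spendable_onchain_sats, // everything available
        },
        None => Decision::Skip,
    }
}
```

Keeping the decision pure is what lets the 9 unit tests cover the state transitions without a node or wallet in the loop.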
The manager is silent. No new JS events, no new napi exports. Logs to
stderr through the same channel as the rest of the crate.
Configuration
New optional field on MdkNodeOptions: splice, with enabled and pollIntervalSecs knobs. Existing callers are unaffected — missing field means defaults.
ldk-node bump
Pin moved from 5baa1f8 to 5dce44b (same rev mdkd uses) to pick up get_max_splice_in_amount and splice_in. Both branches are lsp-0.7.0_accept-underpaying-htlcs-based.

Commits

* Add SpliceConfig to MdkNodeOptions — pure plumbing, no behavior.
* Port auto-splice manager from mdkd — bump ldk-node, add splice_manager module with 9 unit tests. Module not yet spawned.
* Spawn auto-splice manager from start_receiving — wires the manager into the session lifecycle on a dedicated tokio runtime.
* Tighten comments — comment-only cleanup pass.

Testing
cargo test --lib splice_manager — 9 pure-logic tests covering decide() and advance_in_flight() state transitions. mdkd has the regtest E2E (test_auto_splice_after_channel_close_and_reopen) which exercises the same algorithm.

mdk-checkout: consolidated funds across multiple channels successfully, then force-closed the remaining channel and watched the funds get spliced back in once a new channel was created after a receive.