
Auto-splice on-chain funds into the LSP channel#38

Open
amackillop wants to merge 4 commits into main from austin_mdk-715_client-splice-in

Conversation

@amackillop (Contributor)

Auto-splice on-chain funds into the LSP channel

Ports the auto-splice manager from mdkd to lightning-js so JS
consumers (mdk-checkout) get liquidity consolidation with no changes
on their side. After a JIT channel closes and a fresh one opens for
the same wallet, the swept funds sit on-chain, useless for routing,
until something splices them back into a channel. This manager does that.

How it works

A background task is spawned on startReceiving, ticks every
pollIntervalSecs (default 10), and is torn down on stopReceiving /
destroy. Each tick: if there is a spendable on-chain balance and a
usable LSP channel, splice everything available into the largest such
channel. Skip the tick if a splice is already in flight; never fragment
funds into a smaller channel.

The manager is silent. No new JS events, no new napi exports. Logs to
stderr through the same channel as the rest of the crate.

Configuration

New optional field on MdkNodeOptions:

splice?: {
  enabled?: boolean;        // default true
  pollIntervalSecs?: number; // default 10
}

Existing callers are unaffected — missing field means defaults.

ldk-node bump

Pin moved from 5baa1f8 to 5dce44b (same rev mdkd uses) to pick up
get_max_splice_in_amount and splice_in. Both branches are
lsp-0.7.0_accept-underpaying-htlcs-based.

Commits

  1. Add SpliceConfig to MdkNodeOptions — pure plumbing, no behavior.
  2. Port auto-splice manager from mdkd — bump ldk-node, add
    splice_manager module with 9 unit tests. Module not yet spawned.
  3. Spawn auto-splice manager from start_receiving — wires the
    manager into the session lifecycle on a dedicated tokio runtime.
  4. Tighten comments — comment-only cleanup pass.

Testing

  • cargo test --lib splice_manager — 9 pure-logic tests covering
    decide() and advance_in_flight() state transitions.
  • No E2E here. mdkd has the regtest E2E
    (test_auto_splice_after_channel_close_and_reopen) which exercises
    the same algorithm.
  • Manually verified on signet via mdk-checkout: consolidated funds
    across multiple channels successfully. Then force-closed the remaining channel.
    Watched the funds get spliced back in once a new channel was created
    after a receive.

Commit 1: Add SpliceConfig to MdkNodeOptions

Plumb auto-splice manager configuration through the napi surface ahead
of the manager itself. New optional `splice` field on MdkNodeOptions
with two knobs: `enabled` (default true) and `pollIntervalSecs`
(default 30, matching mdkd).

Cache the parsed LSP PublicKey on MdkNode rather than re-parsing it
per splice tick once the manager lands.
Commit 2: Port auto-splice manager from mdkd

Bring over the background splice loop without wiring it into the node
lifecycle yet. Wiring (start_receiving / stop_receiving) lands in the
next commit so the manager change reviews cleanly on its own.

  * Bump the ldk-node pin to 5dce44b6 (3 commits ahead of the prior
    pin on the same branch). That rev exposes get_max_splice_in_amount,
    which the manager needs to dry-run BDK selection at the live
    channel funding feerate before calling splice_in.
  * Add tokio-util (default-features = false) for CancellationToken.
  * New src/splice_manager.rs ported from mdk::splice_manager. The
    pure decision logic (decide, advance_in_flight) is identical to
    the source; the effectful shell is adapted to lightning-js:
      - takes Arc<Node> + LSP PublicKey directly (no MdkClient);
      - silent — drops MdkEvent emission, diagnostics go to stderr
        via eprintln! with the existing [lightning-js] prefix;
      - SpliceError + map_splice_error inlined (no shared MdkError).
  * Port all 6 mdkd unit tests on decide() / advance_in_flight() plus
    3 trivial variants. 9 tests, all green.

Module is added but not yet spawned — compiles clean (gated under
#[allow(dead_code)] until next commit wires it).
Commit 3: Spawn auto-splice manager from start_receiving

Wires the splice_manager module into the session lifecycle so
consumers (mdk-checkout) get auto-splice without changing their
nextEvent() loop. Spawn on start_receiving, cancel + join on
stop_receiving and destroy.

MdkNode now owns a long-lived single-worker tokio runtime built
once in new() and reused across every start/stop cycle. A
current-thread runtime was the first thought, but it only drives
tasks while someone is actively calling block_on, so a
fire-and-forget spawn would just sit there. multi_thread(1) gives
us one self-driving worker thread without us having to babysit a
driver thread ourselves.

node is Option<Arc<Node>> so the manager can hold a refcount for
the lifetime of its task. The &Node accessor stays the same via
Arc deref; node_arc() hands out clones for background tasks.
destroy() shuts down the splice task before dropping the Node,
otherwise the manager's Arc would keep the inner Node alive past
the JS object's lifetime and defeat the point of destroy().

stop_receiving cancels the token and blocks on the JoinHandle
before calling node.stop(). The manager loop selects over
shutdown.cancelled(), so the join returns promptly instead of
waiting out the rest of the poll interval. Doing this in the other
order would let the loop observe a stopped node mid-tick.

@chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b8279a4af6


Comment thread: src/lib.rs (lines +335 to +337)
.poll_interval_secs
    .map(|s| Duration::from_secs(s as u64))
    .unwrap_or(default.poll_interval),

P1 Badge Reject zero poll interval for splice manager

Clamp or validate poll_interval_secs to a positive value before building Duration: a user-provided 0 currently becomes Duration::from_secs(0), and the manager loop then repeatedly hits sleep(0) and tick() as fast as possible. In production this can create a hot loop that pegs CPU and repeatedly calls wallet/channel APIs (and potentially splice RPCs) instead of polling.


Comment thread: src/lib.rs (lines +638 to +650)
if self.splice_cfg.enabled {
    // Defensive: if a prior session leaked a task (or start_receiving is
    // double-invoked), cancel + join the previous one before spawning.
    self.shutdown_splice_task();
    let shutdown = CancellationToken::new();
    let join = splice_manager::spawn(
        self.node_arc(),
        self.lsp_pubkey,
        self.splice_cfg,
        shutdown.clone(),
        self.splice_runtime.handle(),
    );
    *self.splice_task.lock().unwrap() = Some(SpliceTask { shutdown, join });

P2 Badge Tie auto-splice task lifecycle to all node stop paths

The splice worker is started from start_receiving but only canceled in stop_receiving/destroy, so callers that use the existing stop() path after start_receiving() leave the background task alive. That task can continue polling and attempting work against a stopped node, which is surprising for a stop operation and can cause unintended background activity until destroy is called.

