feat: Introduce bloom filter sync worker, optimize Bitcoin RPC prevout resolution, and persist block hashes for reorg detection.#43

Open
anhbaysgalan1 wants to merge 1 commit into fystack:main from anhbaysgalan1:main

Conversation

@anhbaysgalan1

Summary

  • Fix critical BTC indexer N+1 prevout problem — replaced per-transaction sequential RPC calls with batch parallel resolution using dedup cache. A block with 2000 txs previously made ~5000+ sequential RPC calls, now resolved in a single parallel pass.
  • Fix race condition in mempool worker — added mutex protection around seenTxs map read/write/cleanup operations.
  • Recreate bloom filter sync worker — incremental DB-to-bloom-filter sync with burst-aware catch-up loop, per-network type tracking, and configurable interval/batch size.
  • Add graceful shutdown timeout — manager now stops all workers concurrently with a 30s timeout instead of sequential blocking stops.
  • Persist block hashes for reorg detection — block hashes now survive restarts via KV store, increased window from 20 to 50 blocks.
  • Add missing composite (type, created_at) DB index for bloom sync queries that were doing full table scans at scale.
  • Improve HTTP connection pooling — configured MaxIdleConnsPerHost=20 (was Go default of 2), eliminating connection bottleneck under concurrency.
  • Fix shutdown order — workers now stop before health server so health endpoint remains available during drain.
  • Fix go.mod — corrected invalid go 1.24.5 directive to go 1.24.

Files changed

  • internal/rpc/bitcoin/client.go — added ResolvePrevouts() batch method
  • internal/rpc/bitcoin/api.go — added ResolvePrevouts to interface
  • internal/indexer/bitcoin.go — rewrote block conversion + mempool to use batch resolution, pre-allocated slices
  • internal/worker/mempool.go — added sync.Mutex for seenTxs
  • internal/worker/bloom_sync.go — new file, bloom filter sync worker
  • internal/worker/manager.go — concurrent shutdown with timeout
  • internal/worker/regular.go — persisted block hashes, increased reorg window
  • internal/worker/factory.go — wired bloom sync worker
  • pkg/store/blockstore/store.go — added block hash persistence methods
  • pkg/common/config/services.go — added BloomSyncConfig
  • internal/rpc/client.go — configured HTTP transport pool
  • cmd/indexer/main.go — fixed shutdown order, wired bloom sync config
  • sql/wallet_address.sql — added composite index
  • go.mod — fixed go version directive

Test plan

  • go build ./... passes
  • go vet ./... passes
  • go test ./... all tests pass
  • Run BTC indexer and verify block processing time drops from minutes to seconds
  • Run with -race flag to confirm no data races
  • Verify graceful shutdown completes within 30s timeout
  • Verify block hashes persist across indexer restart (check reorg detection works)
  • Deploy with bloom sync enabled and confirm new addresses sync within configured interval

Collaborator

@Azzurriii left a comment


Hi @anhbaysgalan1, I have some concerns about the Bitcoin optimization part. Could you please help explain them to me? Thanks.

Comment on lines +190 to +192
if concurrency <= 0 {
concurrency = 8
}
Collaborator


I think this should be configurable, or defined as a const, instead of using a magic number.

Author


I'll extract this to a const DefaultPrevoutConcurrency = 8 and also fix the hardcoded 8 in GetMempoolTransactions. The value itself comes from the caller (convertBlockWithPrevoutResolution already passes config.Throttle.Concurrency), so the default here is only a safety fallback.

return nil
}

// Fetch all needed transactions in parallel with bounded concurrency
Collaborator

@Azzurriii Feb 10, 2026


If we only need to monitor transactions whose "to" addresses are on the monitored list, why does ResolvePrevouts need to fetch the entire previous transaction?

Author


In the Bitcoin UTXO model, inputs carry only a (txid, vout) reference, with no address or amount. We need to fetch the full previous transaction to get the from-address and to calculate fees. Unfortunately, getrawtransaction is the only RPC available for this. Longer term, I think we should track full UTXO records for scaling.

Collaborator


I agree with you on this point, and I have a pending PR for indexing UTXOs: #42

FYI, this indexer only tracks transactions where the "to" address is on the monitored list, but for UTXOs, would it be better to track both directions? Every transaction can have UTXOs flowing in multiple directions. I'd appreciate it if you could take a look at this and share your opinion.

newTxCount := 0
networkType := mw.chain.GetNetworkType()

mw.mu.Lock()
Collaborator


Why do we need a mutex lock here?

Collaborator


For example, if we have 1000 mempool transactions and each NATS emit takes 50ms, we're holding the lock for 50 seconds?

Author


If it's a single goroutine, you can drop it. I can remove it in the fix.
