feat: Introduce bloom filter sync worker, optimize Bitcoin RPC prevout resolution, and persist block hashes for reorg detection #43
Conversation
Azzurriii left a comment
Hi @anhbaysgalan1, I have some concerns about the Bitcoin optimization part. Could you please clarify them for me? Thanks.
```go
if concurrency <= 0 {
	concurrency = 8
}
```
I think this should be configurable, or defined as a const instead of using a magic number.
I'll extract this to a const DefaultPrevoutConcurrency = 8 and also fix the hardcoded 8 in GetMempoolTransactions. The value itself comes from the caller: convertBlockWithPrevoutResolution already passes config.Throttle.Concurrency, so the default here is only a safety fallback.
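A minimal sketch of what the proposed change could look like; `resolveConcurrency` is a hypothetical helper name used here for illustration, not the PR's actual code:

```go
package main

import "fmt"

// DefaultPrevoutConcurrency is the proposed named constant replacing the
// magic number 8. It is only a safety fallback; callers normally pass
// config.Throttle.Concurrency.
const DefaultPrevoutConcurrency = 8

// resolveConcurrency falls back to the default only when the caller
// passes an invalid (non-positive) value.
func resolveConcurrency(requested int) int {
	if requested <= 0 {
		return DefaultPrevoutConcurrency
	}
	return requested
}

func main() {
	fmt.Println(resolveConcurrency(0), resolveConcurrency(16)) // 8 16
}
```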
```go
	return nil
}

// Fetch all needed transactions in parallel with bounded concurrency
```
If we only need to monitor txs whose "to" addresses are monitored, why does ResolvePrevouts need to fetch the entire previous transaction?
In Bitcoin's UTXO model, inputs only carry a (txid, vout) reference, with no address or amount. We need to fetch the full previous transaction to get the from-address and calculate fees. Unfortunately, getrawtransaction is the only RPC available for this. I think it would be better to track full UTXO records for future scaling.
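The shape of such a batch resolver can be sketched as follows. This is an illustration only, under assumed types: `TxInput`, `RawTx`, and `fetchRawTx` (a stand-in for the getrawtransaction RPC call) are not the PR's actual names.

```go
package main

import (
	"fmt"
	"sync"
)

// TxInput mirrors a Bitcoin input: only a (txid, vout) reference,
// no address or amount (hypothetical type for this sketch).
type TxInput struct {
	TxID string
	Vout int
}

// RawTx stands in for a fully decoded previous transaction.
type RawTx struct {
	TxID string
}

// fetchRawTx is a placeholder for a getrawtransaction RPC call.
func fetchRawTx(txid string) RawTx {
	return RawTx{TxID: txid}
}

// resolvePrevouts fetches each referenced previous transaction once,
// with a semaphore channel bounding in-flight RPC calls.
func resolvePrevouts(inputs []TxInput, concurrency int) map[string]RawTx {
	if concurrency <= 0 {
		concurrency = 8
	}
	// Deduplicate txids so each prev tx is fetched only once.
	unique := make(map[string]struct{}, len(inputs))
	for _, in := range inputs {
		unique[in.TxID] = struct{}{}
	}

	sem := make(chan struct{}, concurrency) // bounds concurrent fetches
	var (
		mu  sync.Mutex
		wg  sync.WaitGroup
		out = make(map[string]RawTx, len(unique))
	)
	for txid := range unique {
		wg.Add(1)
		go func(txid string) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			tx := fetchRawTx(txid)
			mu.Lock()
			out[txid] = tx
			mu.Unlock()
		}(txid)
	}
	wg.Wait()
	return out
}

func main() {
	inputs := []TxInput{{"a", 0}, {"a", 1}, {"b", 0}}
	res := resolvePrevouts(inputs, 4)
	fmt.Println(len(res)) // 2: "a" is fetched once despite two inputs
}
```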
I agree with you on this point, and I have a pending PR for indexing UTXOs: #42
FYI, this indexer only tracks txs where the "to" address is on the monitored list, but for UTXOs, would it be good to track both directions, since every tx can have UTXOs flowing in multiple directions? I'd appreciate it if you could take a look at this and give me your opinion.
```go
newTxCount := 0
networkType := mw.chain.GetNetworkType()

mw.mu.Lock()
```
Why do we need a mutex lock here?
For example, if we have 1000 mempool transactions and each NATS emit takes 50ms, we're holding the lock for 50 seconds?
If it's a single goroutine, you can drop it. I can remove it in the fix.
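If the mutex is kept, the lock-duration concern above can be addressed by keeping the critical section tight: update seenTxs under the lock, then emit outside it. A hypothetical sketch, with assumed names (`worker`, `seenTxs`, `emit` are illustrative, not the PR's code):

```go
package main

import (
	"fmt"
	"sync"
)

type worker struct {
	mu      sync.Mutex
	seenTxs map[string]bool
}

// process marks unseen txids while holding the lock, but performs the
// slow per-tx emit (e.g. a NATS publish) only after releasing it, so
// lock hold time does not scale with emit latency.
func (w *worker) process(txids []string, emit func(string)) int {
	var fresh []string
	w.mu.Lock()
	for _, id := range txids {
		if !w.seenTxs[id] {
			w.seenTxs[id] = true
			fresh = append(fresh, id)
		}
	}
	w.mu.Unlock() // release before slow I/O

	for _, id := range fresh {
		emit(id)
	}
	return len(fresh)
}

func main() {
	w := &worker{seenTxs: make(map[string]bool)}
	n := w.process([]string{"a", "b", "a"}, func(string) {})
	fmt.Println(n) // 2: duplicate "a" is skipped
}
```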
Summary
- Added a sync.Mutex to guard seenTxs map read/write/cleanup operations.
- Added a (type, created_at) index for bloom sync queries that were doing full table scans at scale.
- Set MaxIdleConnsPerHost=20 (was the Go default of 2), eliminating a connection bottleneck under concurrency.
- Fixed the go 1.24.5 directive to go 1.24.

Files changed
- internal/rpc/bitcoin/client.go — added ResolvePrevouts() batch method
- internal/rpc/bitcoin/api.go — added ResolvePrevouts to interface
- internal/indexer/bitcoin.go — rewrote block conversion + mempool to use batch resolution, pre-allocated slices
- internal/worker/mempool.go — added sync.Mutex for seenTxs
- internal/worker/bloom_sync.go — new file, bloom filter sync worker
- internal/worker/manager.go — concurrent shutdown with timeout
- internal/worker/regular.go — persisted block hashes, increased reorg window
- internal/worker/factory.go — wired bloom sync worker
- pkg/store/blockstore/store.go — added block hash persistence methods
- pkg/common/config/services.go — added BloomSyncConfig
- internal/rpc/client.go — configured HTTP transport pool
- cmd/indexer/main.go — fixed shutdown order, wired bloom sync config
- sql/wallet_address.sql — added composite index
- go.mod — fixed go version directive

Test plan
- go build ./... passes
- go vet ./... passes
- go test ./... all tests pass
- -race flag to confirm no data races