NOTE: This issue is now outdated as it doesn't contain the updates we came up with at the Buenos Aires offsite - I didn't bother updating it here. The new algorithm is described in the body of the loadPrivateLogsForSenderRecipientPair function in this PR.
Old Description
In this PR I implemented a "more robust" approach for tagging index sync when sending a log.
The approach was originally described by Nico in this post.
The updated sender sync gives us a guarantee that the newly chosen index won't be more than WINDOW_LEN away from the last finalized index (we throw an error if it is). This is important, as it ensures that we won't fail to load some logs during the recipient sync.
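To make the guarantee concrete, here is a minimal TypeScript sketch of the check, assuming hypothetical names (WINDOW_LEN, assertIndexWithinWindow) that are not the identifiers actually used in the PR:

```ts
// Hypothetical sketch, not the code from the PR; WINDOW_LEN, chosenIndex and
// lastFinalizedIndex are illustrative names.
const WINDOW_LEN = 10; // assumed value - the real constant lives in the codebase

function assertIndexWithinWindow(chosenIndex: number, lastFinalizedIndex: number): void {
  if (chosenIndex - lastFinalizedIndex > WINDOW_LEN) {
    // If the new index drifts more than WINDOW_LEN past the last finalized index,
    // the recipient sync (which only scans a bounded window) could miss logs.
    throw new Error(
      `Chosen tagging index ${chosenIndex} is more than WINDOW_LEN (${WINDOW_LEN}) ` +
        `ahead of the last finalized index ${lastFinalizedIndex}`,
    );
  }
}
```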
Now we need to implement a similar approach for the recipient log sync (i.e. replace the whole implementation of syncTaggedLogs). In the sender sync we considered as pending any logs not yet included in a finalized block, which means that logs not included in any block are treated the same as logs from a block that has not yet been finalized. In the recipient sync we cannot obtain logs that are not yet included in any block, so the pending logs are those already included in non-finalized blocks. The recipient sync then works as follows:
- Load a window of logs centered on the index of the latest finalized log we have found (this differs from the sender sync, where we don't look back from the index, so the recipient sync window is 2x the sender sync length),
- process all the logs,
- if we find new finalized logs, run another iteration of the log sync for the indexes we have not yet tried (and keep repeating as long as new finalized logs are found); see the sketch after this list.
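Below is a hedged TypeScript sketch of this loop; every identifier (WINDOW_LEN, Log, fetchLogsForIndexes, processLogs, syncTaggedLogsSketch) is a placeholder for illustration, not the real API that will replace syncTaggedLogs:

```ts
// Hypothetical sketch of the recipient-sync loop described above.
const WINDOW_LEN = 10; // assumed value - the real constant lives in the codebase

interface Log {
  index: number;
  isFinalized: boolean;
}

async function syncTaggedLogsSketch(
  initialFinalizedIndex: number,
  fetchLogsForIndexes: (indexes: number[]) => Promise<Log[]>,
  processLogs: (logs: Log[]) => Promise<void>,
): Promise<void> {
  let latestFinalizedIndex = initialFinalizedIndex;
  // Indexes already queried in this run, so later iterations only fetch the
  // part of the window we haven't tried yet.
  const triedIndexes = new Set<number>();

  while (true) {
    // The window is centered on the latest finalized index, so it is 2x the
    // sender-sync window length (we look both backwards and forwards).
    const windowStart = Math.max(0, latestFinalizedIndex - WINDOW_LEN);
    const windowEnd = latestFinalizedIndex + WINDOW_LEN;

    const indexesToTry: number[] = [];
    for (let i = windowStart; i <= windowEnd; i++) {
      if (!triedIndexes.has(i)) {
        indexesToTry.push(i);
        triedIndexes.add(i);
      }
    }
    if (indexesToTry.length === 0) {
      break;
    }

    const logs = await fetchLogsForIndexes(indexesToTry);
    await processLogs(logs);

    // If we found a new finalized log, re-center the window on it and run
    // another iteration; otherwise we're done.
    const finalizedIndexes = logs.filter(log => log.isFinalized).map(log => log.index);
    const newFinalizedIndex = Math.max(latestFinalizedIndex, ...finalizedIndexes);
    if (newFinalizedIndex === latestFinalizedIndex) {
      break;
    }
    latestFinalizedIndex = newFinalizedIndex;
  }
}
```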
Look for "TODO(#17775)" in the codebase to find the relevant places that need updating.
(Note that on the first iteration of the sync we query again for the window centered on the latest finalized index, which means we will obtain the same logs again. In the future it might make sense to optimize this by storing which logs we've already processed.)