Add createConnectedBufferedChannels with simple delay model #1141
Conversation
Crude approximation of GSV style.
The consensus tests are using a single channel per mini-protocol, so it should work.
    -> g
    -> [b]
    -> [((VTime, b), (VTime, b))]
    expectedDelayChannelTimes maxsize (g, s, v) prng0 xs0 =
It's actually surprising to me that one can dequeue only when the channel is full to correctly model channels with delay. I think that's possible because the time (the now argument of go) is decoupled and moves on as we dequeue.
Maybe you have a simple model of how this works; it would be good to put it in the haddock.
My "ah ha" moment along these lines was realizing that the scheduled "arrival times" of the elements in the buffer might not be monotonic -- so they're actually the "earliest possible arrival time" per message, not necessarily the actual arrival time. Hence arrive = max maxarrive ((g + vsample) `addTime` depart).
That contention/overlap between the queue's semantics and the scheduled arrival times' values was a bit surprising. Would it be worthwhile to maintain the max while generating the arrival times? Or perhaps just a comment/renaming some variables would suffice. (Possibly include whichever in the implementation too, so that debugging the queue's contents might be less confusing?)
More directly to Marcin's comment: to is called once per element, as expected. The queue maintained here is not analogous to the original queue of elements. A renaming might help: perhaps now -> senderTime and maxarrive -> receiverTime? Or senderClock and receiverClock, etc.
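To make the non-monotonicity concrete, here is a minimal, self-contained sketch of the calculation being discussed. This is not the PR's code: plain Doubles stand in for VTime, and the names follow the renaming suggested above.

```haskell
-- Sketch of the arrival-time calculation discussed in this thread.
-- Plain Doubles stand in for VTime; all names are illustrative only.

-- | Each message's scheduled time @depart + g + s + vsample@ is only its
-- /earliest possible/ arrival; FIFO delivery forces the running maximum
-- (the thread's @arrive = max maxarrive ...@).
arrivalTimes :: Double    -- ^ G: one-way propagation delay
             -> Double    -- ^ S: serialisation delay per message
             -> [Double]  -- ^ one V sample per message
             -> [Double]  -- ^ departure times, assumed monotone
             -> [Double]  -- ^ actual (order-preserving) arrival times
arrivalTimes g s vsamples departs =
    tail (scanl step 0 (zip vsamples departs))
  where
    -- receiverClock plays the role of maxarrive in the discussion above
    step receiverClock (vsample, depart) =
      max receiverClock (depart + g + s + vsample)

-- A large V sample on the first message delays the second past its own
-- scheduled arrival time:
--   arrivalTimes 2 0 [3, 0] [0, 1] == [5, 5]
```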
    time
    time,
    random
Preserve alphabetic order?
    , time
    , random
Preserve alphabetic order?
    ----------------
    -- Queue
    --
This implementation is simple and good, but would reusing Data.Sequence be even simpler (e.g. less maintenance burden)?
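For comparison, a hedged sketch of what a Data.Sequence-backed FIFO could look like. The enqueue/dequeue API here is assumed for illustration and is not necessarily the PR's actual queue interface.

```haskell
-- Sketch of a FIFO backed by Data.Sequence (from containers), as an
-- alternative to a hand-rolled queue; the API shown is assumed.
import           Data.Sequence (Seq, ViewL (..), (|>))
import qualified Data.Sequence as Seq

newtype Queue a = Queue (Seq a)

emptyQueue :: Queue a
emptyQueue = Queue Seq.empty

-- | O(1): append at the back.
enqueue :: a -> Queue a -> Queue a
enqueue x (Queue s) = Queue (s |> x)

-- | O(1): take from the front, with the rebalancing maintenance
-- pushed onto containers rather than kept in this module.
dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue s) = case Seq.viewl s of
    EmptyL  -> Nothing
    x :< s' -> Just (x, Queue s')
```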
    prop_createConnectedDelayChannels
Would it be worthwhile to also test the other half of the Channel?
    -- * The V is the variance in latency, which is assumed to be uniform in the
    --   range @0..v@.
    --
    -- The sender is delayed by S. The receiver is delayed until the arrival time
    -- which is G + S + V.
I'm not certain, but those three sentences together feel somewhat incorrect. Something like: "the receiver is delayed until either G + S + V, or until some later time, because the channel had to preserve order instead of respecting that particular scheduled arrival time."
This is the same thing I mentioned in the test code.
@nfrisby I presume we don't need this now, right? Or would it still be useful to tidy this up?
tl;dr: I'm not currently planning on relying on this. I used a different implementation for my PR that was doing similar things. But it's been so long that I can't really explain the difference -- I think we wrote them in parallel, if I recall correctly. They might have slightly different invariants too, with a particular focus on my intention to make sure the accumulated latencies didn't breach over into the next slot, since that is in some ways a "network partition" at that point.
Test included to check that the delays we get are what we expect.