content/sharing-io-resources.md
We can instead use `try_send`, which isn't blocking, and manage our own buffering.
This solves most of the problems; at some point back-pressure needs to be handled, and having the task that sends messages to the `client_dispatcher` block seems good enough. However, one might want to stop reading packets as channels fill up; that way the TCP queues get full, new packets get dropped, and clients automatically retransmit them. Note that this would require a channel back from the anonymous tasks that send the bytes to the `client_dispatcher`, and then back from the `central_dispatcher` to `handle_connection`.
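To make the `try_send` idea concrete, here is a minimal, std-only sketch of the pattern; the names (`send_or_buffer`, the backlog) are hypothetical, and `std::sync::mpsc` stands in for whatever async channel the real code uses:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

/// Try to hand a packet to the dispatcher; if its bounded channel is
/// full, keep the packet in our own backlog instead of blocking.
/// (A real version would also handle the Disconnected error.)
fn send_or_buffer(tx: &SyncSender<Vec<u8>>, backlog: &mut Vec<Vec<u8>>, packet: Vec<u8>) {
    if let Err(TrySendError::Full(p)) = tx.try_send(packet) {
        backlog.push(p);
    }
}

fn main() {
    // A bounded channel of capacity 2 standing in for the dispatcher queue.
    let (tx, rx) = sync_channel::<Vec<u8>>(2);
    let mut backlog = Vec::new();

    for pkt in [b"a".to_vec(), b"b".to_vec(), b"c".to_vec()] {
        send_or_buffer(&tx, &mut backlog, pkt);
    }

    // Two packets fit in the channel; the third overflowed into the backlog.
    assert_eq!(rx.try_iter().count(), 2);
    assert_eq!(backlog.len(), 1);
}
```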
Another way we will need more channels is if we want to start handling errors when writing to the socket. Instead of handling errors idiomatically in place in `central_dispatcher`, `client_dispatcher` would need a way to send the error back if that error would kill the connection.
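A hedged sketch of what such an error back-channel could look like; the `Feedback` type and `is_fatal` helper are hypothetical, and std channels stand in for the async ones:

```rust
use std::io;
use std::sync::mpsc::channel;

/// Hypothetical feedback a client_dispatcher could send back when a
/// socket write fails badly enough to kill the connection.
#[derive(Debug)]
pub enum Feedback {
    Written(usize),
    Fatal(io::ErrorKind),
}

/// central_dispatcher side: should this client be dropped?
pub fn is_fatal(f: &Feedback) -> bool {
    matches!(f, Feedback::Fatal(_))
}

fn main() {
    let (err_tx, err_rx) = channel::<Feedback>();
    // Pretend a write failed with a broken pipe.
    err_tx.send(Feedback::Fatal(io::ErrorKind::BrokenPipe)).unwrap();
    // The dispatcher now learns about the failure only via this extra channel.
    assert!(is_fatal(&err_rx.recv().unwrap()));
}
```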
Furthermore, each additional IO resource requires its own task, and each piece of state requires either its own task or to be added to the `central_dispatcher`. The first approach would also require more channels for cross-communication between state owners.
All these channels and tasks add complexity. For one, tasks need to be managed and eventually dropped[^3]. But there's another complication: by introducing these channels we lose the co-location of IO and state. Channel endpoints can be moved around freely, so when you look at the place where state is managed, you have no idea where the events are produced.
There's no way to tell how a `Message` was created. If, for example, we wanted to know whether the bytes correspond to a valid string, we would need to hunt down every place where `Message::Send` is constructed, and that could be anywhere. Considering that producers can be cloned and sent back and forth between tasks, this can become very complicated.
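The article's actual `Message` definition is not visible in this excerpt, but a hypothetical shape illustrates the problem: the consumer can inspect the bytes, yet nothing at this point tells it which of the many cloned senders produced them.

```rust
// Hypothetical shape of the dispatcher's message type; the real
// definition from the article is not shown in this excerpt.
#[derive(Debug)]
pub enum Message {
    Send { client: u64, bytes: Vec<u8> },
    Disconnect { client: u64 },
}

/// The consumer can validate the bytes, but it cannot tell where they
/// came from: any clone of the sender may have built this value.
pub fn payload_as_str(msg: &Message) -> Option<&str> {
    match msg {
        Message::Send { bytes, .. } => std::str::from_utf8(bytes).ok(),
        _ => None,
    }
}

fn main() {
    let ok = Message::Send { client: 1, bytes: b"hello".to_vec() };
    let bad = Message::Send { client: 1, bytes: vec![0xff, 0xfe] };
    assert_eq!(payload_as_str(&ok), Some("hello"));
    assert_eq!(payload_as_str(&bad), None); // invalid UTF-8
}
```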
There's no way to know where `rx` comes from other than looking for the place where the channel is created, which can be very far up the stack. Then, to know where `Message` can be produced, you need to look at all the places where the corresponding `tx` moves to, which again can be very far down the stack.
Not having mutexes does simplify things a lot, but there's still complexity in having multiple tasks. There is, however, a clue as to how we could simplify things further. Remember I said that if we had additional state, we could manage it in the `central_dispatcher` and have it act as a multiplexer between IO events. For example, if we had a new source of events, such as another socket, it could also use `tx` to inform the `central_dispatcher` of events with a new `Message` variant. If that also required new state, `central_dispatcher` could manage that too.
Let's say, for example, we add a new administration socket that can subscribe multiple clients to a "topic"; it could look something like this.
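The original code block was collapsed out of this excerpt, so what follows is only a std-only sketch of the topic state the `central_dispatcher` could own; the `Topics` type and its methods are assumptions, not the article's code:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical topic table the central_dispatcher could manage
/// alongside its other state.
#[derive(Default)]
pub struct Topics {
    subs: HashMap<String, HashSet<u64>>, // topic name -> client ids
}

impl Topics {
    pub fn subscribe(&mut self, topic: &str, client: u64) {
        self.subs.entry(topic.to_string()).or_default().insert(client);
    }

    /// Clients that should receive a message published to `topic`,
    /// sorted for deterministic iteration.
    pub fn recipients(&self, topic: &str) -> Vec<u64> {
        let mut ids: Vec<u64> = self
            .subs
            .get(topic)
            .map(|s| s.iter().copied().collect())
            .unwrap_or_default();
        ids.sort();
        ids
    }
}

fn main() {
    let mut topics = Topics::default();
    topics.subscribe("news", 1);
    topics.subscribe("news", 2);
    topics.subscribe("logs", 2);
    assert_eq!(topics.recipients("news"), vec![1, 2]);
    assert!(topics.recipients("weather").is_empty());
}
```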
All of this is to say: while it's very nice to no longer require mutexes, by decoupling IO and state it becomes harder to communicate errors back to the state and to correlate the data that alters the state with the IO that generated it. I mentioned that there is another way to handle additional state, other than spawning new tasks for it: we can move it into `central_dispatcher` and have it multiplex messages from multiple channels. This doesn't solve the lack of co-location between data and IO, but it tells us that we could potentially handle all IO events in a single task, concurrently. So let's look into that.
{{ note(body="This code is merely illustrative; it's not written to be compiled.") }}
Well, we still have multiple incoming and outgoing channels, but this shows that we can have a single task managing state while listening concurrently for multiple IO events. So perhaps we can move the IO inside that same task and simplify things further.
### A single task to rule them all
Now we're modeling I/O as a stream of events. We suspend execution while awaiting any of those events, and each time one of them happens we're signaled about it.
In response, we handle the event synchronously, in a context where we have mutable access to the state.
#### Hand-rolled future
There's another option that's far less discussed. If you think about it, what we want to do is:
What I hope you take away from this article is that, particularly in Rust, you