Conversation
Force-pushed from 90fdf20 to 70746b4
I don’t know if it fixes that one; I was investigating something else. I’m really not sure it’s related. 🤔
Oh, sorry; someone pointed me to that ticket as the reason for maintaining a fork, so I thought this PR was related to that.
@mat007 can you explain what the deadlock is that you're seeing?
Sure! The added test never terminates without the fix. Breaking under a debugger, we see that the goroutine that called ListenPipe (line 578 in bdc6c11) is waiting on lines 462 to 463 in bdc6c11. The problem is that we have two readers for the channel at line 447 in bdc6c11. This PR fixes this by closing the channel instead of writing to it.
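For illustration, here is a minimal standalone sketch (not the actual go-winio code; the names are made up) of why closing a channel is the right way to signal several readers, while a single send wakes only one of them and leaves the other blocked:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	done := make(chan struct{})
	var wg sync.WaitGroup

	// Two goroutines block on the same channel, mirroring the two readers
	// of the listener's channel described above.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-done // a closed channel unblocks every receiver
			fmt.Printf("reader %d woke up\n", id)
		}(i)
	}

	time.Sleep(100 * time.Millisecond)

	// close(done) broadcasts to both readers; sending a single value
	// (done <- struct{}{}) would wake only one and leave the other
	// blocked forever, which is the kind of deadlock described above.
	close(done)
	wg.Wait()
}
```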
@kevpar what do you think of my explanation? Do you need more info?
Sorry, I just got back from holiday. I will take a look today and see if we can get this in!
I've been doing some testing/thinking on this. Overall I think it looks good; I just have a few pieces of feedback.
Thanks!
Also, looks like there are some CI failures.
Had a quick peek; the linting errors look unrelated to the PR. The go-generate one is more interesting, as there's a panic. I see 070c828 changed CI to not use fixed versions of Go.
Opened a PR for those linting issues. (Not sure about the panic; I'm not on Windows, so maybe it was just an incident; we'll see on the other PR.)
Thanks! I’ll look into addressing these.
@kevpar can you please take another look?
Changes look good, thanks! One thing that just occurred to me: there is a chance of a panic if two goroutines try to close a listener at the same time (such that they both try to close the channel).
Good catch! This makes the implementation of Close much simpler as well. |
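For reference, a minimal sketch of one way to make Close safe against that double-close panic, assuming a sync.Once guard (the names here are hypothetical and the actual go-winio change may use a different mechanism):

```go
package main

import "sync"

type listener struct {
	doneCh    chan struct{} // hypothetical name for the channel the fix closes
	closeOnce sync.Once
}

func (l *listener) Close() error {
	// sync.Once guarantees close runs exactly once, even when several
	// goroutines race to call Close, so a second close cannot panic.
	l.closeOnce.Do(func() {
		close(l.doneCh)
	})
	return nil
}

func main() {
	l := &listener{doneCh: make(chan struct{})}
	var wg sync.WaitGroup
	// Two concurrent Close calls: without the sync.Once guard one of them
	// could panic with "close of closed channel".
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = l.Close()
		}()
	}
	wg.Wait()
}
```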
kevpar left a comment:
LGTM. Thanks for the change :)
@kiashok sorry for pinging you again. |
200 was a little low to consistently trigger the issue.
Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>
This should help tell from the test logs when we are deadlocking, if we hit a problem in the future.
Signed-off-by: Mathieu Champlon <mathieu.champlon@docker.com>
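As an illustration of the idea (a sketch only; the helper name and timeout value are assumptions, not the actual test code), a test can wait on the blocking operation with a timeout so a hang shows up as a clear failure message in the logs instead of a silent test timeout:

```go
package example

import (
	"testing"
	"time"
)

// waitOrFail fails the test with a descriptive message if done is not
// signalled within the timeout, making a deadlock visible in the logs.
func waitOrFail(t *testing.T, done <-chan struct{}, timeout time.Duration, msg string) {
	t.Helper()
	select {
	case <-done:
		// Completed normally.
	case <-time.After(timeout):
		t.Fatalf("timed out after %v: %s", timeout, msg)
	}
}
```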
Hi!
This PR fixes a deadlock that we face every once in a while.
Thanks!