5 changes: 3 additions & 2 deletions device/pools_test.go
@@ -17,7 +17,8 @@ import (
func TestWaitPool(t *testing.T) {
var wg sync.WaitGroup
var trials atomic.Int32
-	startTrials := int32(100000)
+	n := runtime.NumCPU()
+	startTrials := int32(125 * n * n)
Member commented on lines +20 to +21:
WaitPool had a bug around sync.Cond usage that was fixed in 1e08883.

Prior to fixing that bug, the test was simply skipped, which I imagine was due to its inherent flakiness.

That's all to say, I'd bet startTrials was set to a "large number" while debugging the flakiness.

So, scaling by NumCPU seems reasonable to me, but the factor of 125 is another arbitrary constant. Perhaps you can add some history and/or justification for it based on what I've put above.

if raceEnabled {
// This test can be very slow with -race.
startTrials /= 10
@@ -63,7 +64,7 @@ func TestWaitPool(t *testing.T) {
}
wg.Wait()
if max.Load() != p.max {
-		t.Errorf("Actual maximum count (%d) != ideal maximum count (%d)", max, p.max)
+		t.Errorf("Actual maximum count (%d) != ideal maximum count (%d)", max.Load(), p.max)
}
}
