Bagger and MultiTaskBagger both train their individual models in parallel. Because the order in which models are trained is uncontrolled, Lolo random forests are inherently non-reproducible, even when the bagging procedure and the RNGs for the base learners are identical.
There are ways of guaranteeing reproducibility across multiple threads, and we should make use of them. Relevant references:

- SplittableRandom in Java
- A discussion in the context of numpy
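A minimal sketch of the `SplittableRandom` approach, assuming a hypothetical `trainSeeds` helper standing in for per-model training: all per-model RNGs are derived deterministically from one master RNG *before* parallel work begins, so the result no longer depends on thread scheduling.

```java
import java.util.Arrays;
import java.util.SplittableRandom;

public class ReproducibleBagging {
    // Hypothetical sketch: derive one independent RNG per bagged model
    // up front, then let each (possibly parallel) training task use
    // only its own RNG.
    public static long[] trainSeeds(long masterSeed, int nModels) {
        SplittableRandom master = new SplittableRandom(masterSeed);
        SplittableRandom[] rngs = new SplittableRandom[nModels];
        // split() is deterministic given the master seed, so this
        // assignment does not depend on execution order.
        for (int i = 0; i < nModels; i++) {
            rngs[i] = master.split();
        }
        long[] firstDraws = new long[nModels];
        // Parallel "training": each task touches only rngs[i], so the
        // draws are identical no matter which thread runs which index.
        Arrays.parallelSetAll(firstDraws, i -> rngs[i].nextLong());
        return firstDraws;
    }

    public static void main(String[] args) {
        long[] a = trainSeeds(42L, 8);
        long[] b = trainSeeds(42L, 8);
        // Same master seed -> identical per-model streams.
        System.out.println(Arrays.equals(a, b)); // prints true
    }
}
```

The key design choice is that splitting happens sequentially on one thread; parallelism only begins once every model already owns its RNG.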