See e.g. #195 or #196.
Given the API assertion that "If `N_or_isdone` is an `Integer`, exactly `N_or_isdone` samples are returned.", it follows that if `thinning != 1`, more than `N` steps have to be taken. Fine, I already dislike this; IMO the keyword should then be called `n_substeps` instead of `thinning`.
But how is `thinning` then supposed to interact with `num_warmup` and `discard_initial`? Are there also supposed to be `thinning * num_warmup` warm-up steps? Apparently not, since the API states "`num_warmup` (default: 0): number of "warm-up" steps to take before the first "regular" step, i.e. number of times to call `AbstractMCMC.step_warmup` before the first call to `AbstractMCMC.step`.".
And if, on top of this, `discard_initial < num_warmup`, things become even weirder (as seen in #196). What is supposed to happen then? Obviously, IMO, not what happens right now.
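To make the ambiguity concrete, here is the step count I would expect under one plausible reading of the docs: warm-up steps are taken once each (not thinned), then `discard_initial` regular draws are dropped, then every `thinning`-th draw is kept. This is a sketch in Python purely for illustration; `total_steps` is a hypothetical helper, not part of AbstractMCMC, and the actual implementation may well differ (which is the point of this issue).

```python
def total_steps(N, thinning=1, num_warmup=0, discard_initial=0):
    """Sampler steps needed to return exactly N samples, assuming:
    warm-up steps are not thinned, discard_initial regular draws are
    dropped, and every `thinning`-th remaining draw is kept."""
    # num_warmup calls to step_warmup, taken once each per the docstring
    warmup = num_warmup
    # regular steps: discard_initial dropped, then 1 step per kept sample
    # plus (thinning - 1) skipped steps between consecutive kept samples
    regular = discard_initial + 1 + (N - 1) * thinning
    return warmup + regular

print(total_steps(100))                           # 100: one step per sample
print(total_steps(100, thinning=2))               # 199: "more than N" steps
print(total_steps(100, thinning=2, num_warmup=10))  # 209: warm-up not thinned
```

Under this reading the answer to the first question would be "no, warm-up is never multiplied by `thinning`"; but whether `discard_initial` counts warm-up steps, thinned steps, or raw steps is exactly what the API leaves unspecified.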