@winterqt discovered that all the FOD branding assets failed to build on aarch64-darwin due to hash mismatches. After comparing their output with my own builds on x86_64-linux, we found that it was due to floating point differences.
The easiest fix might be to add a simple function that can round to significant figures. This implementation should work.
from math import floor, log10

def round_to_sigfig(x, s):
    # Round x to s significant figures.
    return round(x, s - 1 - int(floor(log10(abs(x)))))
After some basic testing, it appears to work. Not shown here: it also works for negative numbers.
>>> from math import floor, log10
>>> def round_to_sigfig(x, s):
...     return round(x, s - 1 - int(floor(log10(abs(x)))))
...
>>> round_to_sigfig(1111.1111, 8)
1111.1111
>>> round_to_sigfig(1111.1111, 7)
1111.111
>>> round_to_sigfig(1111.1111, 4)
1111.0
>>> round_to_sigfig(1111.1111, 2)
1100.0
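The negative-number case mentioned above is easy to verify as well: `abs()` keeps the `log10` argument positive, so the exponent calculation is unaffected by sign. A quick sketch (the specific input values here are just illustrative):

```python
from math import floor, log10

def round_to_sigfig(x, s):
    # Round x to s significant figures.
    return round(x, s - 1 - int(floor(log10(abs(x)))))

print(round_to_sigfig(-1111.1111, 2))   # -1100.0
print(round_to_sigfig(-0.0012345, 3))   # -0.00123
```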
Double precision in IEEE 754 gives between 15 and 17 significant decimal digits of precision. Rounding to 12 significant figures should therefore be sufficient: it will eliminate the platform-dependent rounding differences, leave a few digits of padding in case anything weird happens, and still retain enough precision that we never have to worry about a lack of collinear lines.
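To illustrate why 12 significant figures is enough, here is a small sketch in which two values differ only far below the 12th significant digit, simulating the kind of last-bit drift seen between x86_64 and aarch64 builds (the values themselves are hypothetical):

```python
from math import floor, log10

def round_to_sigfig(x, s):
    # Round x to s significant figures.
    return round(x, s - 1 - int(floor(log10(abs(x)))))

a = 123.45678901234567
b = a + 1e-12  # simulated cross-platform floating point drift

# Both values collapse to the same 12-significant-figure result,
# so the build output (and hence the FOD hash) becomes reproducible.
print(round_to_sigfig(a, 12) == round_to_sigfig(b, 12))  # True
```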