
Conversation

@JacobHass8

Implementation of the log incomplete gamma function, using asymptotic approximations where the incomplete gamma function underflows. See #1173 and #1338.
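
For reference, the kind of expansion involved - a sketch of the standard large-x asymptotic for the upper incomplete gamma, not necessarily the exact series or switch-over point used in this PR:

$$\log Q(a,x) = \log\frac{\Gamma(a,x)}{\Gamma(a)} \approx (a-1)\log x - x - \log\Gamma(a) + \log\!\left(1 + \frac{a-1}{x} + \frac{(a-1)(a-2)}{x^2} + \cdots\right)$$

The right-hand side remains representable long after Q(a, x) itself has underflowed, which is the regime this PR targets.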

@JacobHass8 (Author)

@jzmaddock I've tried to add error checking and the promotion boilerplate based on your comment #1338 (comment). Does this look okay for a start?
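
For context, a minimal sketch of the usual Boost.Math promotion and error-handling boilerplate being discussed - the lgamma_q and detail::lgamma_incomplete_imp names are taken from this thread, but the exact domain checks and policy plumbing shown here are assumptions, not the PR's code:

#include <boost/math/tools/promotion.hpp>
#include <boost/math/policies/policy.hpp>
#include <boost/math/policies/error_handling.hpp>

namespace boost { namespace math {

namespace detail {
// Provided by the PR; declared here only so the sketch is self-contained.
template <class T, class Policy>
T lgamma_incomplete_imp(T a, T x, const Policy& pol);
}

// Public interface: promote mixed arguments, evaluate at the promoted precision,
// then narrow back with the policy's error handling applied.
template <class T1, class T2, class Policy>
inline typename tools::promote_args<T1, T2>::type
   lgamma_q(T1 a, T2 x, const Policy&)
{
   typedef typename tools::promote_args<T1, T2>::type result_type;
   typedef typename policies::evaluation<result_type, Policy>::type value_type;

   static const char* function = "boost::math::lgamma_q<%1%>(%1%, %1%)";

   // Domain checks routed through the policy's error handlers.
   if (!(a > 0) || !(x >= 0))
      return policies::raise_domain_error<result_type>(
         function, "Arguments must satisfy a > 0 and x >= 0, but got %1%.",
         static_cast<result_type>(a > 0 ? x : a), Policy());

   return policies::checked_narrowing_cast<result_type, Policy>(
      detail::lgamma_incomplete_imp(static_cast<value_type>(a), static_cast<value_type>(x), Policy()),
      function);
}

template <class T1, class T2>
inline typename tools::promote_args<T1, T2>::type
   lgamma_q(T1 a, T2 x)
{
   return lgamma_q(a, x, policies::policy<>());
}

}} // namespace boost::math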

@jzmaddock (Collaborator)

Thanks @JacobHass8, that looks good to me. BTW, the separation between lgamma_incomplete_imp_final and lgamma_incomplete_imp shouldn't be needed in this case: that was a hack Matt introduced for some functions to work around the lack of recursion support in some GPU contexts, but there's no recursion here, so we should be good :)

Some tests and docs and hopefully this should be good to go! Thanks for this.

@JacobHass8 (Author) commented Dec 31, 2025

> Some tests and docs and hopefully this should be good to go! Thanks for this.

What file should I put the tests in, math/tests/test_igamma.hpp? Are the spot checks I've implemented so far sufficient?

Comment on lines 259 to 264
//
// Check that lgamma_q returns correct values
//
BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(5), static_cast<T>(100)), static_cast<T>(log(1.6139305336977304790405739225035685228527400976549e-37L)), tolerance);
BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(22.5), static_cast<T>(2000)), static_cast<T>(-1883.4897732037716195918619632721L), tolerance * 10); // calculated via mpmath
BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(501.2), static_cast<T>(2000)), static_cast<T>(-810.31461624182202285737730562687L), tolerance * 10); // calculated via mpmath
@JacobHass8 (Author)


I used one test case that was previously checked for gamma_q. I also added two more examples that I calculated using mpmath in Python, which supports arbitrary precision. Are there any other tests that I should implement?

@jzmaddock (Collaborator)

Drone tests were failing on 128-bit long double platforms, so I've updated the test data with values from WolframAlpha in case the data was truncated to double precision somewhere along the way.

Also, not sure why no GitHub Actions are being run - anyone have any ideas? @mborland?

With regard to further tests, the area most likely to fail would be where the implementation first switches to the log asymptotic expansion, and at higher precision (we should support 128-bit long doubles at least; I'm less concerned about full arbitrary precision just yet).
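
One way to exercise that switch-over without hard-coding more reference values is a consistency check against the existing gamma_q just before it underflows - a sketch only, with illustrative argument ranges rather than the implementation's actual crossover point, and assuming the usual test_igamma.hpp includes:

// Consistency spot-check (sketch, not part of the PR): wherever gamma_q(a, x) is
// still safely representable, lgamma_q(a, x) should agree with log(gamma_q(a, x)).
// Walking x towards the underflow region makes the check straddle the point where
// the implementation switches to the asymptotic expansion.
template <class T>
void test_lgamma_q_consistency(T tolerance)
{
   using std::log;
   const T a = static_cast<T>(5);
   for (T x = static_cast<T>(50); x < static_cast<T>(150); x += static_cast<T>(10))
   {
      T q = boost::math::gamma_q(a, x);
      if (q > boost::math::tools::min_value<T>() * 1000)  // skip once q approaches underflow
         BOOST_CHECK_CLOSE(boost::math::lgamma_q(a, x), static_cast<T>(log(q)), tolerance);
   }
}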

@jzmaddock (Collaborator)

Ah... I needed to approve the GitHub Actions run: now done!

@codecov (bot) commented Jan 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.29%. Comparing base (f6be8e8) to head (40a0c54).

Additional details and impacted files


@@           Coverage Diff            @@
##           develop    #1346   +/-   ##
========================================
  Coverage    95.28%   95.29%           
========================================
  Files          814      814           
  Lines        67364    67396   +32     
========================================
+ Hits         64191    64223   +32     
  Misses        3173     3173           
Files with missing lines                            Coverage Δ
include/boost/math/special_functions/gamma.hpp      99.85% <100.00%> (+<0.01%) ⬆️
test/test_igamma.hpp                                100.00% <100.00%> (ø)


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data


