
Conversation

@mnoergaard
Collaborator

This PR addresses issue #191 by capping the PET head motion correction mri_robust_template node at four threads and setting scheduler resource limits that bound each instance to 16 GB, enabling multiple parallel runs.

@mnoergaard mnoergaard requested a review from effigies December 8, 2025 12:15
@codecov

codecov bot commented Dec 8, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 80.64%. Comparing base (133526d) to head (fdef702).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #192   +/-   ##
=======================================
  Coverage   80.63%   80.64%           
=======================================
  Files          84       84           
  Lines        6512     6514    +2     
  Branches      657      657           
=======================================
+ Hits         5251     5253    +2     
  Misses       1097     1097           
  Partials      164      164           


outputnode = pe.Node(niu.IdentityInterface(fields=['xforms', 'petref']), name='outputnode')

robust_template_threads = min(omp_nthreads, 4)
robust_template_mem_gb = min(mem_gb, 16)
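The capping logic in the snippet above can be sketched as a small helper (the function name is hypothetical, not part of this PR; in the workflow the two values would be set as the node's `n_procs` and `mem_gb`):

```python
# Hypothetical sketch of the resource caps introduced in this PR:
# bound the thread count at 4 and the scheduler memory claim at 16 GB,
# regardless of what the caller has available.

def cap_robust_template_resources(omp_nthreads, mem_gb,
                                  max_threads=4, max_mem_gb=16):
    """Return (n_procs, mem_gb) limits for the mri_robust_template node."""
    return min(omp_nthreads, max_threads), min(mem_gb, max_mem_gb)

# On a 32-core, 128 GB machine the node is scheduled with 4 threads / 16 GB:
print(cap_robust_template_resources(32, 128))  # -> (4, 16)
```

With the claim capped, nipype's MultiProc scheduler can pack several instances onto one machine instead of reserving the full host for a single run.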
Member

What is the actual memory usage of mri_robust_template? Does it keep usage below 16GB? Note that what you're making here is a claim that the process will consume <=16GB of memory, and nipype will schedule accordingly. There is nothing in nipype that will stop the process from consuming more memory.

Collaborator Author

I see - then this will not work as intended and will depend on the input data. We will need a smarter way of estimating the memory use of mri_robust_template given the input data, and then ideally simplify the mri_robust_template call, e.g. by fixing the reference frame.
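A data-driven estimate could replace the fixed 16 GB claim. A rough sketch of the idea (the helper name, overhead factor, and floor are assumptions that would need empirical calibration against real mri_robust_template runs, not anything from this codebase):

```python
# Hypothetical estimator: derive a per-run memory claim from the input
# 4D PET series instead of claiming a fixed 16 GB. Assumes peak usage
# scales with (frames in RAM) times an empirical overhead factor.

BYTES_PER_GB = 1024 ** 3

def estimate_robust_template_mem_gb(n_frames, frame_bytes,
                                    overhead=3.0, floor_gb=2.0):
    """Rough memory estimate in GB; overhead and floor need tuning."""
    est = n_frames * frame_bytes * overhead / BYTES_PER_GB
    return max(est, floor_gb)

# 40 frames of 16 MiB each stays under the floor, so the floor applies:
print(estimate_robust_template_mem_gb(40, 16 * 1024**2))   # -> 2.0
# 200 frames of 64 MiB each scales up with the data:
print(estimate_robust_template_mem_gb(200, 64 * 1024**2))  # -> 37.5
```

An estimate like this would at least make the scheduling claim track the input size, though as noted above nipype still treats it as a claim rather than a hard limit.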

Collaborator Author

Putting the memory aside, what do you think of the changes regarding the processors? I have found that this workflow often does not run in parallel despite plenty of resources being available.
