
Conversation


Ramdam17 (Collaborator) commented Feb 8, 2026

Summary

  • Modular sync architecture: Each connectivity metric (PLV, CCorr, ACCorr, Coh, ImCoh, PLI, wPLI, EnvCorr, PowCorr) is now a class inheriting from BaseMetric, in its own file under hypyp/sync/
  • ACCorr optimizations: Integrates numba JIT and PyTorch GPU backends for Adjusted Circular Correlation, originally developed by @m2march as part of BrainHack Montreal 2026 (PR #246, "Optimization of accorr via pytorch")
  • Unified optimization API: a single parameter (None, 'auto', 'numba', 'torch') flows through the entire chain, compute_sync() → get_metric() → ACCorr(), which auto-detects the GPU with graceful fallback and warnings
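The summary above describes the new layout only in prose; purely as an illustration, a metric slots in roughly like this (a sketch under assumptions: the import path hypyp.sync.base and the method name compute are not taken from the PR's code):

# Illustrative skeleton only; the real abstract interface in hypyp/sync/base.py may differ.
from hypyp.sync.base import BaseMetric

class MyMetric(BaseMetric):
    """Toy metric that would live in its own file under hypyp/sync/, next to plv.py, accorr.py, etc."""

    def compute(self, complex_signal):  # hypothetical abstract method
        ...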

Optimization behavior

| Value | Behavior | Fallback |
| --- | --- | --- |
| None | Standard numpy (default) | — |
| 'auto' | Best available: torch → numba → numpy cascade | warnings at each fallback step |
| 'numba' | Numba JIT compilation | warn + numpy if numba is unavailable |
| 'torch' | PyTorch with auto-detected GPU (MPS/CUDA) | warn + torch CPU if no GPU; warn + numpy if torch is unavailable |
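As a rough illustration of the cascade in the table above (the function name, structure, and warning messages here are assumptions; the actual logic lives in BaseMetric._resolve_optimization()):

import warnings

def resolve_optimization(requested):
    # Illustrative sketch of the fallback cascade described in the table above.
    try:
        import torch  # noqa: F401
        has_torch = True
    except ImportError:
        has_torch = False
    try:
        import numba  # noqa: F401
        has_numba = True
    except ImportError:
        has_numba = False

    if requested is None:
        return 'numpy'  # default: plain numpy
    if requested == 'auto':
        # best available backend, in order of preference
        return 'torch' if has_torch else 'numba' if has_numba else 'numpy'
    if requested == 'numba' and not has_numba:
        warnings.warn("numba is not installed, falling back to numpy")
        return 'numpy'
    if requested == 'torch' and not has_torch:
        warnings.warn("torch is not installed, falling back to numpy")
        return 'numpy'
    # Per the table, 'torch' without a GPU keeps the torch backend on CPU (with a warning).
    return requested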

Usage

from hypyp.analyses import compute_sync
result = compute_sync(complex_signal, 'accorr', optimization='torch')

# Or directly
from hypyp.sync import ACCorr
metric = ACCorr(optimization='auto')
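Presumably the same value can also be passed when looking a metric up by name; the exact signature of get_metric() below is assumed from the PR description, not quoted from the code:

# Or by name (signature assumed)
from hypyp.sync import get_metric
metric = get_metric('accorr', optimization='numba')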

Optional dependencies

poetry install --with optim_torch    # PyTorch
poetry install --with optim_numba    # Numba

Test plan

  • 172 tests pass, 0 failures
  • ACCorr numpy vs reference implementation
  • ACCorr numba vs reference implementation
  • ACCorr torch (MPS) vs reference implementation
  • Fallback warnings tested via mocking
  • All 9 metrics work through compute_sync()
  • Existing test suite (test_stats, test_fnirs, etc.) unaffected
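For illustration, the backend-agreement checks listed above could be expressed roughly like this, assuming complex_signal is the analytic-signal array that compute_sync already expects; the helper name and tolerances are placeholders:

import numpy as np
from hypyp.analyses import compute_sync

def check_accorr_backends_agree(complex_signal):
    # Compare each optimized backend against the plain numpy result.
    reference = compute_sync(complex_signal, 'accorr', optimization=None)
    for backend in ('numba', 'torch'):
        result = compute_sync(complex_signal, 'accorr', optimization=backend)
        np.testing.assert_allclose(result, reference, rtol=1e-6, atol=1e-8)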

🤖 Generated with Claude Code

Co-Authored-By: Martín A. Miguel m2march@users.noreply.github.com

Ramdam17 and others added 3 commits January 30, 2026 14:23
- Create hypyp/sync/ module with individual metric files:
  - plv.py, ccorr.py, accorr.py, coh.py, imaginary_coh.py
  - pli.py, wpli.py, envelope_corr.py, pow_corr.py
- Add base.py with BaseMetric abstract class and helper functions
- Add backend support (numpy default, numba/torch for future optimization)
- Add get_metric() function for retrieving metrics by name
- Update compute_sync() to delegate to sync module
- Add deprecation warnings to old helper functions
- All implementations verified identical to original code
- Simplify packages config in pyproject.toml (Poetry auto-includes subpackages)
- Re-execute tutorial notebooks to refresh outputs
… API

Integrate accorr optimizations (numba JIT, PyTorch GPU) from PR #246
into the modular sync architecture. Unify the API around a single
`optimization` parameter (None, 'auto', 'numba', 'torch') with
graceful fallback and warnings when backends are unavailable.

- BaseMetric: add _resolve_optimization() with fallback cascade
- ACCorr: numpy/numba/torch backends with precompute optimization
- All metrics: remove dead dispatch code for numpy-only metrics
- compute_sync: pass optimization directly to get_metric()
- Tests: reference-based validation for all backends, mocked fallbacks
- Add optional dependency groups (optim_torch, optim_numba)

Co-Authored-By: Martín A. Miguel <m2march@users.noreply.github.com>

Ramdam17 (Collaborator, Author) commented Feb 8, 2026

Hi @m2march,

This PR integrates your accorr optimizations (numba/torch) from PR #246 into the new modular sync architecture. Your work is credited via co-authorship on the commit and in the accorr.py docstring.

Main changes from your original PR:

  • Unified API: single optimization parameter instead of backend + device
  • Graceful fallback with warnings when backends are unavailable
  • Optional dependency groups in pyproject.toml (poetry install --with optim_torch)
  • Reference-based tests validating all 3 backends against the original implementation

Could you review the changes before we merge?


m2march commented Feb 9, 2026

I've got one comment and then some less relevant notes:

Comment

  • In testsync.py:89, the test is supposed to cover the call through compute_sync, but the function is never actually called.

Notes

  • There is no way to configure the device. I like that it keeps the optimization interface simpler, but I am afraid this will bite back if there are reasons to choose between CUDA and MPS.
  • Also, nowhere in the documentation is it stated that MPS takes precedence over CUDA when it is available. I'm not sure whether the two are mutually exclusive.
  • In analysis.py:compute_sync, the math for the different sync methods has been removed. I see that it now lives in each subclass of BaseMetric. I fear that finding it there (along with the reference papers) might be too obscure for non-programmers. It is annoying to keep documentation far from the implementation, but the duplication might be worth it for clarity. Alternatively, a wiki entry or a README in the sync folder could be referred to in the docstring of compute_sync.
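For reference, the availability checks that typically drive this kind of auto-detection in PyTorch look like the following; whether the PR checks MPS before CUDA is exactly the point raised above:

import torch

# Standard PyTorch device availability checks (illustrative only).
if torch.backends.mps.is_available():
    device = torch.device('mps')   # Apple Silicon GPU
elif torch.cuda.is_available():
    device = torch.device('cuda')  # NVIDIA GPU
else:
    device = torch.device('cpu')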
