
[essreduce] Compute wavelength LUT from chopper cascade#557

Open

nvaytet wants to merge 23 commits into main from lut-from-chopper-cascade

Conversation

@nvaytet
Member

@nvaytet nvaytet commented May 7, 2026

This is the first step towards computing the LUT on-the-fly from chopper settings read from the NeXus files.

Move the old LookupTableWorkflow to LookupTableFromTofWorkflow.
Then use scippneutron.chopper_cascade in LookupTableWorkflow to compute the LUT.

To approximate the polygons as single lines, we produce a set of vertical lines that represent the bins in event_time_offset.
For each line, we find the intersections with the polygon edges, and then compute the midpoint between the min and max y coordinate of the intersections.
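The line-approximation step described above can be sketched as follows. This is a hypothetical illustration (not the actual essreduce implementation, and `vertical_line_midpoints` is an invented name): for each vertical line, intersect it with every polygon edge and take the midpoint of the min and max intersection y-coordinates.

```python
import numpy as np

def vertical_line_midpoints(polygon: np.ndarray, xs: np.ndarray) -> np.ndarray:
    """Approximate a polygon by one point per vertical line.

    polygon: (N, 2) array of vertices, implicitly closed.
    xs: x-positions of the vertical lines (event_time_offset bin edges).
    """
    p0 = polygon
    p1 = np.roll(polygon, -1, axis=0)  # edge endpoints, wrapping around
    mids = np.full(xs.shape, np.nan)
    for i, x in enumerate(xs):
        # Select edges whose x-range straddles the vertical line at x
        mask = (np.minimum(p0[:, 0], p1[:, 0]) <= x) & (
            x <= np.maximum(p0[:, 0], p1[:, 0])
        )
        a, b = p0[mask], p1[mask]
        dx = b[:, 0] - a[:, 0]
        valid = dx != 0
        # Linear interpolation along each non-vertical edge
        t = (x - a[valid, 0]) / dx[valid]
        ys = a[valid, 1] + t * (b[valid, 1] - a[valid, 1])
        # Vertical edges lying exactly on the line contribute both endpoints
        ys = np.concatenate([ys, a[~valid, 1], b[~valid, 1]])
        if ys.size:
            mids[i] = 0.5 * (ys.min() + ys.max())
    return mids
```

For a unit square and lines at x = 0.25, 0.5, 0.75, each line crosses the bottom (y=0) and top (y=1) edges, so the midpoints are all 0.5.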

The table below is computed in ~0.7s compared to ~16s before (using 5M neutrons).

To use:

import scipp as sc
from ess.reduce import unwrap
from ess.reduce.nexus.types import AnyRun
from ess.odin.beamline import choppers

source_position = sc.vector([0, 0, 0], unit='m')
disk_choppers = choppers(source_position)

wf = unwrap.LookupTableWorkflow()
wf[unwrap.DiskChoppers[AnyRun]] = disk_choppers
wf[unwrap.SourcePosition] = source_position
wf[unwrap.PulseStride] = 2
wf[unwrap.LtotalRange] = sc.scalar(5.0, unit="m"), sc.scalar(65.0, unit="m")
wf[unwrap.DistanceResolution] = sc.scalar(0.1, unit="m")
wf[unwrap.TimeResolution] = sc.scalar(250.0, unit='us')

table = wf.compute(unwrap.LookupTable)
table.plot(vmax=10.0)
[Screenshot: plot of the resulting lookup table]

Needs scipp/scippneutron#700

@nvaytet nvaytet added the essreduce Issues for essreduce. label May 7, 2026
Member Author

Note that the diff here is not very useful. The old file was moved to lut_from_simulation.py with basically no modifications.
It is better to read this file as a newly added file.

)
if simulation.choppers is not None
else None,
# TODO: Do we still want to store the chopper information in the lookup table?
Member Author

What do we do about storing the chopper info in the table?
If choppers are always read from the file, and the LUT is computed on-the-fly as part of the same workflow (not implemented in this PR but planned for a follow-up), then it's maybe not so useful to store anymore?

However, if the user loads a table from file, we may want to validate the chopper settings before proceeding with the calculation, so maybe we should keep the ability to do so?
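The validation idea above could look something like the following hypothetical sketch. The function name and dict layout are illustrative only, not part of essreduce: compare the chopper settings stored with a loaded LUT against those read from the NeXus file before trusting the table.

```python
def validate_choppers(table_choppers: dict, file_choppers: dict,
                      rtol: float = 1e-6) -> None:
    """Raise if chopper settings stored with the LUT disagree with the file.

    Both arguments map chopper name -> {'frequency': ..., 'phase': ...}
    (an assumed layout for illustration).
    """
    for name, stored in table_choppers.items():
        current = file_choppers.get(name)
        if current is None:
            raise ValueError(f"chopper {name!r} missing from file")
        for key in ("frequency", "phase"):
            # Relative comparison, guarded against zero-valued settings
            if abs(stored[key] - current[key]) > rtol * max(abs(stored[key]), 1e-30):
                raise ValueError(
                    f"{name}.{key} mismatch: table={stored[key]}, "
                    f"file={current[key]}"
                )
```

A mismatch in e.g. the phase of one chopper would then abort the reduction with a clear message instead of silently producing a wrong wavelength conversion.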

Member

What would be the motivation for breaking the file format? Is storing the info harmful?

@nvaytet nvaytet marked this pull request as ready for review May 8, 2026 13:27
@nvaytet nvaytet requested a review from SimonHeybrock May 8, 2026 13:27
)
assert np.nanpercentile(diff.values, 99) < 0.02
if engine == "tof":
assert np.nanpercentile(diff.values, 99) < 0.02
Member Author

tof performs better in this case because it has a concept of intensity and it knows that the neutrons that pollute neighbouring frames are few, so the uncertainty is lower than with chopper_cascade.
This is a special case here, as the V20 beamline was badly configured for the full ESS pulse (the pulse that they simulated using the counter-rotating choppers had a shorter tail).

Maybe instead we should use a shorter pulse in the SourcePulse when computing the FrameSequence?
I don't know if it's better than keeping the same source for all tests...

@nvaytet nvaytet requested a review from jokasimr May 9, 2026 16:38
@nvaytet
Member Author

nvaytet commented May 9, 2026

I do not know what is going on with the CI and the pixi.lock file...
I tried asking copilot but it did not help.

The CI keeps failing with the error

thread 'main2' (2436) panicked at crates/pixi_core/src/lock_file/resolve/build_dispatch.rs:452:17:
could not initialize build dispatch correctly
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Error:   × failed to solve the pypi requirements of environment 'default' for platform 'linux-64'
  ╰─▶ build dispatch initialization failed: failed to link python-3.11.15-hd63d673_0_cpython.conda

Note that it's not always platform 'linux-64', it has also failed with osx-64, osx-arm64...

Restarting the job works most of the time, but it just continues failing for every other job.

@SimonHeybrock
Member

Move the old LookupTableWorkflow to LookupTableFromTofWorkflow.
Then use scippneutron.chopper_cascade in LookupTableWorkflow to compute the LUT.

Can we avoid this breaking change?

@SimonHeybrock
Member

Could we have avoided refactoring all instruments in the same PR as the initial addition of the new workflow? If the old workflow still existed, then having the new thing built alongside it would have been easier to review, test, and roll out.

@nvaytet
Member Author

nvaytet commented May 11, 2026

So you're saying:

  • leave the old workflow as it was (no renaming class and file)
  • make a new workflow for the chopper cascade
  • do not update the docs yet?
  • then as a second step use the providers from the new workflow to compute the LUT on the fly in the generic workflow in another PR?

I don't mind. Feel free to go ahead this way.

I think the changes in the tests should be kept, though. Parametrizing them by engine is a good thing, and there was a lot of flakiness sensitive to small changes, which I hope has been improved.

@SimonHeybrock
Member

So you're saying:

* leave the old workflow as it was (no renaming class and file)

* make a new workflow for the chopper cascade

Or add a keyword arg that has a default.

* do not update the docs yet?

* then as a second step use the providers from the new workflow to compute lut on the fly in the generic workflow in another PR?

Now it is done already, but I am just concerned about these sweeping breaking changes (instead of building the new in parallel, then rolling out, then removing the old) without trying it more in practice before a rollout. You can decide.
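The keyword-argument alternative mentioned above could be sketched like this. This is hypothetical (the parameter name `lut_source` and the helper functions are placeholders, not essreduce API): keep the existing entry point and select the LUT source via a defaulted parameter, so existing callers are unaffected.

```python
def _lut_from_simulation_workflow():
    # Placeholder standing in for the existing tof-simulation-based workflow
    return "simulation"

def _lut_from_chopper_cascade_workflow():
    # Placeholder standing in for the new chopper-cascade-based workflow
    return "chopper-cascade"

def LookupTableWorkflow(*, lut_source: str = "simulation"):
    """Defaulting to 'simulation' preserves the old behavior;
    passing 'chopper_cascade' opts in to the new one."""
    if lut_source == "simulation":
        return _lut_from_simulation_workflow()
    if lut_source == "chopper_cascade":
        return _lut_from_chopper_cascade_workflow()
    raise ValueError(f"unknown lut_source: {lut_source!r}")
```

Old code calling `LookupTableWorkflow()` keeps working unchanged, while new code opts in explicitly; the rename and file move could then happen in a later, separate deprecation step.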

